hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a70fa092ee7f331e87d92b7787bb5d1e947646ef | 145 | py | Python | tests/year_2018/test_day5a.py | vanillaSlice/advent-of-code | 3f31be38c598040ec6032bc9b24856005e070c21 | [
"MIT"
] | null | null | null | tests/year_2018/test_day5a.py | vanillaSlice/advent-of-code | 3f31be38c598040ec6032bc9b24856005e070c21 | [
"MIT"
] | null | null | null | tests/year_2018/test_day5a.py | vanillaSlice/advent-of-code | 3f31be38c598040ec6032bc9b24856005e070c21 | [
"MIT"
] | null | null | null | from src.year_2018.day5a import alchemical_reduction
def test_alchemical_reduction():
    assert alchemical_reduction('dabAcCaCBAcCcaDA') == 10
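The file under test imports `alchemical_reduction` from `src.year_2018.day5a`, which is not part of this record. A minimal stack-based sketch that would satisfy the assertion above (the Advent of Code 2018 day 5 "polymer reaction" puzzle) — this is an assumption about the implementation, not the repository's actual code:

```python
# Hypothetical sketch of src.year_2018.day5a.alchemical_reduction.
def alchemical_reduction(polymer: str) -> int:
    """Return the length of the polymer after all reactions complete.

    Adjacent units of the same type but opposite polarity (e.g. 'c' and 'C')
    annihilate each other; a stack handles the cascading reactions in O(n).
    """
    stack = []
    for unit in polymer:
        # React: same letter, different case -> both units are destroyed.
        if stack and stack[-1] != unit and stack[-1].lower() == unit.lower():
            stack.pop()
        else:
            stack.append(unit)
    return len(stack)
```

This matches the test's expected value: 'dabAcCaCBAcCcaDA' reduces to 'dabCBAcaDA', which is 10 units long.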
| 29 | 57 | 0.82069 | 17 | 145 | 6.705882 | 0.764706 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053846 | 0.103448 | 145 | 4 | 58 | 36.25 | 0.823077 | 0 | 0 | 0 | 0 | 0 | 0.110345 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
597fcecd294c73cd5b956ee515c7e723d3f6b313 | 108 | py | Python | app/back/mongo/data/collect/railroads/__init__.py | jgphilpott/polyplot | c46861174ee5881dadffbfb2278d555462523547 | [
"MIT"
] | 5 | 2021-05-17T14:17:14.000Z | 2021-12-14T12:54:32.000Z | app/back/mongo/data/collect/railroads/__init__.py | jgphilpott/iGraph | 2a91ba57e4950856a83d3a109753f8f2badee829 | [
"MIT"
] | 8 | 2020-02-09T02:48:41.000Z | 2021-05-16T04:57:02.000Z | app/back/mongo/data/collect/railroads/__init__.py | jgphilpott/iGraph | 2a91ba57e4950856a83d3a109753f8f2badee829 | [
"MIT"
] | 2 | 2016-09-12T03:48:16.000Z | 2019-05-04T14:15:19.000Z | from back.mongo.data.collect.railroads.model import *
from back.mongo.data.collect.railroads.mongo import *
| 36 | 53 | 0.814815 | 16 | 108 | 5.5 | 0.5 | 0.181818 | 0.295455 | 0.386364 | 0.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 108 | 2 | 54 | 54 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
59d788748f42ce83db0db23e063efdc2b259c905 | 47,712 | py | Python | tests/test_service.py | rscohn2/irrigation_unlimited | d6e5ff56325628e203624346bc5264ac92743b5e | [
"MIT"
] | null | null | null | tests/test_service.py | rscohn2/irrigation_unlimited | d6e5ff56325628e203624346bc5264ac92743b5e | [
"MIT"
] | null | null | null | tests/test_service.py | rscohn2/irrigation_unlimited | d6e5ff56325628e203624346bc5264ac92743b5e | [
"MIT"
] | null | null | null | """Test integration_blueprint setup process."""
from unittest.mock import patch
import pytest
from datetime import datetime, timedelta
import homeassistant.core as ha
from homeassistant.config import load_yaml_config_file
from homeassistant.setup import async_setup_component
from homeassistant.const import SERVICE_RELOAD
from tests.const import MOCK_CONFIG
from tests.iu_test_support import (
    quiet_mode,
    begin_test,
    run_for,
    run_for_1_tick,
    run_until,
    finish_test,
    test_config_dir,
    check_summary,
)
from custom_components.irrigation_unlimited.irrigation_unlimited import (
    IUCoordinator,
)
from custom_components.irrigation_unlimited.const import (
    DOMAIN,
    COORDINATOR,
    SERVICE_CANCEL,
    SERVICE_DISABLE,
    SERVICE_ENABLE,
    SERVICE_MANUAL_RUN,
    SERVICE_TIME_ADJUST,
    SERVICE_TOGGLE,
)
from custom_components.irrigation_unlimited.__init__ import CONFIG_SCHEMA

quiet_mode()

async def test_service_adjust_time(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test adjust_time service call."""
    full_path = test_config_dir + "service_adjust_time.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    start_time = await begin_test(1, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "percentage": 50},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%50.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(2, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "percentage": 200},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(3, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "percentage": 0},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%0.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(4, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "actual": "00:30"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "=0:30:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(5, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "increase": "00:05"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "+0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(6, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "decrease": "00:05"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "-0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(7, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "reset": None},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(8, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_z1",
            "percentage": 100,
            "minimum": "00:20",
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(9, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_z1",
            "percentage": 100,
            "maximum": "00:05",
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(10, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "percentage": 50},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%50.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%50.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(11, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "percentage": 200},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%200.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(12, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "percentage": 0},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%0.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%0.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(13, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "actual": "00:30"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "=0:30:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "=0:30:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(14, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "increase": "00:05"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "+0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "+0:05:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(15, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "decrease": "00:05"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "-0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "-0:05:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(16, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "reset": None},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(17, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "percentage": 100,
            "minimum": "00:20",
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%100.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(18, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "percentage": 100,
            "maximum": "00:05",
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%100.0"
    await finish_test(hass, coordinator, start_time, True)

    check_summary(full_path, coordinator)

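The `adjustment` attribute asserted throughout this test encodes the operation that was applied: `%` scales the run time, `=` replaces it, `+`/`-` shift it, and `"None"` means no adjustment is active. A hypothetical helper showing how such a string could be applied to a base duration — this is illustrative only, not the integration's actual code, and `minimum`/`maximum` clamping is omitted:

```python
from datetime import timedelta


def apply_adjustment(base: timedelta, adjustment: str) -> timedelta:
    """Apply an adjustment string (e.g. "%50.0", "=0:30:00") to a base duration."""

    def parse(ts: str) -> timedelta:
        # "H:MM:SS" -> timedelta
        hours, minutes, seconds = (int(part) for part in ts.split(":"))
        return timedelta(hours=hours, minutes=minutes, seconds=seconds)

    if adjustment == "None":
        return base  # no adjustment active
    op, value = adjustment[0], adjustment[1:]
    if op == "%":
        return base * (float(value) / 100.0)  # scale
    if op == "=":
        return parse(value)  # replace outright
    if op == "+":
        return base + parse(value)  # increase
    if op == "-":
        return base - parse(value)  # decrease
    raise ValueError(f"unknown adjustment: {adjustment!r}")
```

For example, a zone with a 10-minute schedule and adjustment "%50.0" would run for 5 minutes.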
async def test_service_enable_disable(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test enable/disable service call."""
    full_path = test_config_dir + "service_enable_disable.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    # Zone 1 off
    start_time = await begin_test(1, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_DISABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    # Zone 1 on
    start_time = await begin_test(2, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_ENABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    # Zone 1 off, zone 2 on
    start_time = await begin_test(3, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    # Double toggle: zone 1 on, zone 2 off
    start_time = await begin_test(4, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z2"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    await finish_test(hass, coordinator, start_time, True)

    # All off
    start_time = await begin_test(5, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    await finish_test(hass, coordinator, start_time, True)

    # All back on
    start_time = await begin_test(6, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z2"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    # Controller 1 off
    start_time = await begin_test(7, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_DISABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    # Controller 1 off, zone 1 on, zone 2 off
    start_time = await begin_test(8, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_ENABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_DISABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z2"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == False
    await finish_test(hass, coordinator, start_time, True)

    # Controller 1 on, zone 1 still on, zone 2 still off
    start_time = await begin_test(9, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_ENABLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    await finish_test(hass, coordinator, start_time, True)

    # Toggle controller 1
    start_time = await begin_test(10, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "disabled"
    assert s.attributes["enabled"] == False
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "blocked"
    assert s.attributes["enabled"] == False
    await finish_test(hass, coordinator, start_time, True)

    # Toggle controller 1 & zone 2 (All back on)
    start_time = await begin_test(11, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m"},
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TOGGLE,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z2"},
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_m")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["status"] == "off"
    assert s.attributes["enabled"] == True
    await finish_test(hass, coordinator, start_time, True)

    check_summary(full_path, coordinator)

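The status/enabled pairs asserted above imply a simple precedence: a disabled controller blocks its zones regardless of their own `enabled` flag, while under an enabled controller each zone reports "disabled" or "off" according to its flag. A tiny illustrative function capturing that logic (inferred from the assertions; not the integration's actual code):

```python
def zone_status(controller_enabled: bool, zone_enabled: bool, running: bool = False) -> str:
    """Status a zone would report, per the precedence the assertions imply."""
    if not controller_enabled:
        return "blocked"  # controller disabled overrides the zone's own flag
    if not zone_enabled:
        return "disabled"
    return "on" if running else "off"
```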
async def test_service_manual_run(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test manual_run service call."""
    full_path = test_config_dir + "service_manual_run.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    start_time = await begin_test(1, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_MANUAL_RUN,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1", "time": "00:10"},
        True,
    )
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(2, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_MANUAL_RUN,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "time": "00:10"},
        True,
    )
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(3, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_MANUAL_RUN,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c2_m",
            "time": "00:20",
            "sequence_id": 1,
        },
        True,
    )
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(4, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_MANUAL_RUN,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c2_m",
            "time": "00:01",
            "sequence_id": 1,
        },
        True,
    )
    await finish_test(hass, coordinator, start_time, True)

    check_summary(full_path, coordinator)

async def test_service_cancel(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test cancel service call."""
    full_path = test_config_dir + "service_cancel.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    start_time = await begin_test(1, coordinator)
    next_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:11:45+00:00"),
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_CANCEL,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z1"},
        True,
    )
    await finish_test(hass, coordinator, next_time, True)

    start_time = await begin_test(2, coordinator)
    next_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:11:45+00:00"),
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_CANCEL,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m"},
        True,
    )
    await finish_test(hass, coordinator, next_time, True)

    check_summary(full_path, coordinator)

async def test_service_reload(
    hass: ha.HomeAssistant,
    skip_start,
    skip_dependencies,
    skip_history,
):
    """Test reload service call."""
    full_path = test_config_dir + "service_reload.yaml"
    await async_setup_component(hass, DOMAIN, CONFIG_SCHEMA(MOCK_CONFIG))
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    with patch(
        "homeassistant.core.Config.path",
        return_value=full_path,
    ):
        await hass.services.async_call(
            DOMAIN,
            SERVICE_RELOAD,
            None,
            True,
        )
    start_time = await begin_test(1, coordinator)
    await finish_test(hass, coordinator, start_time, True)
    check_summary(full_path, coordinator)

async def test_service_reload_error(
    hass: ha.HomeAssistant,
    skip_start,
    skip_dependencies,
    skip_history,
):
    """Test reload service call on a bad config file."""
    full_path = test_config_dir + "service_reload_error.yaml"
    await async_setup_component(hass, DOMAIN, CONFIG_SCHEMA(MOCK_CONFIG))
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    with patch(
        "homeassistant.core.Config.path",
        return_value=full_path,
    ):
        with pytest.raises(KeyError, match="controllers"):
            await hass.services.async_call(
                DOMAIN,
                SERVICE_RELOAD,
                None,
                True,
            )

async def test_service_adjust_time_while_running(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test adjust_time service call while sequence is running."""
    full_path = test_config_dir + "service_adjust_time_while_running.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    # Start a sequence
    start_time = await begin_test(1, coordinator)
    next_time = await run_for(
        hass, coordinator, start_time, timedelta(minutes=28), True
    )
    # Hit zone 4 with adjustment midway through sequence
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_z4", "percentage": 200},
        True,
    )
    await finish_test(hass, coordinator, next_time, True)

    # Run next test which should be 200%
    start_time = await begin_test(2, coordinator)
    await finish_test(hass, coordinator, start_time, True)

    # Reset adjustments
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "reset": None},
        True,
    )

    # Start a sequence
    start_time = await begin_test(3, coordinator)
    next_time = await run_for(
        hass, coordinator, start_time, timedelta(minutes=28), True
    )
    # Hit controller with adjustment halfway through sequence
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {"entity_id": "binary_sensor.irrigation_unlimited_c1_m", "percentage": 200},
        True,
    )
    await finish_test(hass, coordinator, next_time, True)

    # Run next test which should be 200%
    start_time = await begin_test(4, coordinator)
    await finish_test(hass, coordinator, start_time, True)
    check_summary(full_path, coordinator)



async def test_service_adjust_time_sequence(
    hass: ha.HomeAssistant, skip_start, skip_dependencies, skip_history
):
    """Test adjust_time service call on a sequence."""
    full_path = test_config_dir + "service_adjust_time_sequence.yaml"
    config = CONFIG_SCHEMA(load_yaml_config_file(full_path))
    await async_setup_component(hass, DOMAIN, config)
    await hass.async_block_till_done()
    coordinator: IUCoordinator = hass.data[DOMAIN][COORDINATOR]

    start_time = await begin_test(1, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 50,
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:06:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%50.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:12:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%50.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%50.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:20:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%50.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(2, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 200,
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:11:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:30:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 07:01:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%200.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(3, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 0,
        },
        True,
    )
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(4, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "actual": "00:30",
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:07:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "=0:30:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:16:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "=0:30:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "=0:30:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:29:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "=0:30:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(5, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "increase": "00:05",
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:08:20+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "+0:05:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:19:40+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "+0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "+0:05:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:35:45+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "+0:05:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(6, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "decrease": "00:05",
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:07:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "-0:05:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:16:20+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "-0:05:00"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "-0:05:00"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:29:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "-0:05:00"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(7, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "reset": None,
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:10:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "None"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:21:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "None"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "None"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:32:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "None"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(8, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 100,
            "minimum": "00:50",
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:09:10+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:22:40+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:49:50+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(9, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 100,
            "maximum": "00:20",
        },
        True,
    )
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:07:40+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:12:40+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:22:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "None"
    assert s.attributes["current_adjustment"] == "%100.0"
    await finish_test(hass, coordinator, start_time, True)

    start_time = await begin_test(10, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "percentage": 50,
        },
        True,
    )
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "percentage": 200,
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:06:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%50.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:12:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%50.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%50.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:20:30+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%50.0"
    await finish_test(hass, coordinator, start_time, True)

    # Test follows on from above. Timing should be 200% after sequence reset
    start_time = await begin_test(11, coordinator)
    await hass.services.async_call(
        DOMAIN,
        SERVICE_TIME_ADJUST,
        {
            "entity_id": "binary_sensor.irrigation_unlimited_c1_m",
            "sequence_id": 1,
            "reset": None,
        },
        True,
    )
    start_time = await run_for_1_tick(hass, coordinator, start_time, True)
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:11:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z1")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 06:30:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z2")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%200.0"
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z3")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%200.0"
    start_time = await run_until(
        hass,
        coordinator,
        start_time,
        datetime.fromisoformat("2021-01-04 07:01:00+00:00"),
        True,
    )
    s = hass.states.get("binary_sensor.irrigation_unlimited_c1_z4")
    assert s.attributes["adjustment"] == "%200.0"
    assert s.attributes["current_adjustment"] == "%200.0"
    await finish_test(hass, coordinator, start_time, True)

    check_summary(full_path, coordinator)

# ---------------------------------------------------------------------------
# make_morse_dict.py (huwns/classical_cryptography, BSD-3-Clause)
# ---------------------------------------------------------------------------
from bs4 import BeautifulSoup as bs
# Reference(https://www.google.co.jp/ime/-.-.html)
GOOGLE_MORSE = '''
<div class="morse-table">
<h3>和文モールス符号表</h3>
<table>
<tbody><tr>
<td>ア<span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
<td>イ<span><span class="dit"></span><span class="dah"></span></span></td>
<td>ウ<span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>エ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dah"></span></span></td>
<td>オ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dit"></span></span></td>
</tr>
<tr>
<td>カ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>キ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>ク<span><span class="dit"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>ケ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
<td>コ<span><span class="dah"></span><span class="dah"></span><span class="dah"></span><span class="dah"></span></span></td>
</tr>
<tr>
<td>サ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>シ<span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ス<span><span class="dah"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>セ<span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ソ<span><span class="dah"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
</tr>
<tr>
<td>タ<span><span class="dah"></span><span class="dit"></span></span></td>
<td>チ<span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ツ<span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>テ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
<td>ト<span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
</tr>
<tr>
<td>ナ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ニ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ヌ<span><span class="dit"></span><span class="dit"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>ネ<span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>ノ<span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
</tr>
<tr>
<td>ハ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>ヒ<span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>フ<span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>ヘ<span><span class="dit"></span></span></td>
<td>ホ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
</tr>
<tr>
<td>マ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>ミ<span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>ム<span><span class="dah"></span></span></td>
<td>メ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>モ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
</tr>
<tr>
<td>ヤ<span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
<td></td>
<td>ユ<span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span></span></td>
<td></td>
<td>ヨ<span><span class="dah"></span><span class="dah"></span></span></td>
</tr>
<tr>
<td>ラ<span><span class="dit"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>リ<span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>ル<span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>レ<span><span class="dah"></span><span class="dah"></span><span class="dah"></span></span></td>
<td>ロ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
</tr>
<tr>
<td>ワ<span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>ヰ<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span><span class="dah"></span></span></td>
<td></td>
<td>ヱ<span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dit"></span></span></td>
<td>ヲ<span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dah"></span></span></td>
</tr>
<tr>
<td>ン<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>長音 (ー)<span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
<td>濁点 (゛)<span><span class="dit"></span><span class="dit"></span></span></td>
<td>半濁点 (゜)<span><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span></td>
<td>区切り点 (、)<span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span><span class="dit"></span><span class="dah"></span></span></td>
</tr><tr>
</tr></tbody></table>
<p class="morse-table-note">
「っ」や「ゃゅょ」のような促音・拗音は、従来の和文モールス符号ではサポートされていませんが、Google 日本語入力モールスバージョンでは、大きい「つ」や「やゆよ」の後に半濁点符号<span class="signals"><span class="dit"></span><span class="dit"></span><span class="dah"></span><span class="dah"></span><span class="dit"></span></span>を打つことで、促音・拗音を入力することができます。
</p>
</div>
'''
def google_morse_parser(msg):
    parsed_dict = {}
    soup = bs(msg, 'html.parser')
    soup_td = soup.find('table').find_all('td')
    for i in soup_td:
        # Skip the empty <td></td> placeholder cells in the table
        if i.text:
            if '(' in i.text:
                key = i.text.split('(')[1].split(')')[0]
            else:
                key = i.text
            soup_span = i.find_all('span')
            value = ''
            for span in soup_span:
                dahdit = span.get('class')
                if dahdit == ['dah']:
                    value += '-'
                elif dahdit == ['dit']:
                    value += '.'
            parsed_dict[key] = value
    return parsed_dict
if __name__ == '__main__':
    print(google_morse_parser(GOOGLE_MORSE))


# ---------------------------------------------------------------------------
# NodeDefender/mail/icpe.py (CTSNE/NodeDefender, MIT)
# ---------------------------------------------------------------------------
from flask_mail import Message
from flask import render_template, url_for
import NodeDefender
import smtplib


@NodeDefender.decorators.mail_enabled
@NodeDefender.decorators.celery_task
def new_icpe(icpe, host, port):
    icpe = NodeDefender.db.icpe.get(icpe)
    if icpe is None:
        return False
    mqtt = NodeDefender.db.mqtt.get_sql(host, port)
    if mqtt is None:
        return False
    msg = Message('iCPE {} found on MQTT {}'.format(icpe.mac_address, mqtt.host),
                  sender='noreply@nodedefender.com',
                  recipients=[group.email for group in mqtt.groups])
    url = url_for('node_view.nodes_list')
    msg.body = render_template('mail/icpe/new_icpe.txt', icpe=icpe, mqtt=mqtt,
                               url=url)
    try:
        NodeDefender.mail.mail.send(msg)
    except smtplib.SMTPRecipientsRefused:
        NodeDefender.mail.logger.error("Unable to send email for: {}".format(
            icpe.mac_address))
    except smtplib.SMTPAuthenticationError:
        NodeDefender.mail.logger.error("Authentication Error when sending email")
    return True


@NodeDefender.decorators.mail_enabled
@NodeDefender.decorators.celery_task
def icpe_enabled(icpe, host, port):
    icpe = NodeDefender.db.icpe.get(icpe)
    if icpe is None:
        return False
    mqtt = NodeDefender.db.mqtt.get_sql(host, port)
    if mqtt is None:
        return False
    msg = Message('iCPE {} Enabled from MQTT {}'.format(icpe.mac_address, mqtt.host),
                  sender='noreply@nodedefender.com',
                  recipients=[group.email for group in mqtt.groups])
    url = url_for('node_view.nodes_list')
    msg.body = render_template('mail/icpe/icpe_enabled.txt', icpe=icpe, mqtt=mqtt,
                               url=url)
    try:
        NodeDefender.mail.mail.send(msg)
    except smtplib.SMTPRecipientsRefused:
        NodeDefender.mail.logger.error("Unable to send email for: {}".format(
            icpe.mac_address))
    except smtplib.SMTPAuthenticationError:
        NodeDefender.mail.logger.error("Authentication Error when sending email")
    return True

# ---------------------------------------------------------------------------
# boa3_test/test_sc/interop_test/policy/ImportPolicy.py
# (hal0x2328/neo3-boa, Apache-2.0)
# ---------------------------------------------------------------------------
from boa3.builtin import public
from boa3.builtin.interop import policy
@public
def main() -> int:
    return policy.get_exec_fee_factor()


# ---------------------------------------------------------------------------
# delicatessen/estimating_equations/regression.py (pzivich/Deli, MIT)
# ---------------------------------------------------------------------------
import warnings
import numpy as np
from delicatessen.utilities import logit, inverse_logit, identity
#################################################################
# Basic Regression Estimating Equations
def ee_regression(theta, X, y, model, weights=None):
r"""Default stacked estimating equation for regression, with available options including: linear, logistic, and
Poisson regression. The general estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - g(X_i^T \theta)) X_i = 0
where :math:`g` indicates a general transformation function. For linear regression, :math:`g` is the identity
function. Logistic regression uses the expit, or the inverse-logit function, :math:`expit(u) = 1 / (1 + exp(u))`.
Finally, Poisson regression is :math:`g(u) = \exp(u)`.
    Here, theta is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example,
    if X is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary
    number of X's (as long as there is enough support in the data).
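    For intuition, the linear-regression case of the estimating equation above can be written directly in NumPy.
    The helper ``psi_linear`` below is purely illustrative and is not delicatessen's internal implementation:

```python
import numpy as np


def psi_linear(theta, X, y):
    # Contribution of each observation: (y_i - x_i^T theta) * x_i,
    # stacked into a b-by-n array (one row per parameter).
    resid = y - X @ theta  # residuals under the identity link
    return (resid[:, None] * X).T


X = np.array([[1., 0.], [1., 1.], [1., 2.]])  # intercept plus one covariate
y = np.array([1., 3., 5.])                    # generated exactly as y = 1 + 2x
contributions = psi_linear(np.array([1., 2.]), X, y)
# At the true parameters, the contributions sum to zero over the n observations.
```

    At the solution, ``contributions.sum(axis=1)`` is the zero vector, which is exactly the root the M-estimator
    solves for.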
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Here, :math:`\theta` corresponds to the coefficients in the corresponding regression model
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options are ``'linear'`` (linear regression), ``'logistic'`` (logistic
regression), and ``'poisson'`` (Poisson regression).
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
Construction of estimating equations with ``ee_regression`` should be done similar to the following
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.stats import logistic
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_regression
Some generic data to estimate the regression models
>>> n = 500
>>> data = pd.DataFrame()
>>> data['X'] = np.random.normal(size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y1'] = 0.5 + 2*data['X'] - 1*data['Z'] + np.random.normal(loc=0, size=n)
>>> data['Y2'] = np.random.binomial(n=1, p=logistic.cdf(0.5 + 2*data['X'] - 1*data['Z']), size=n)
>>> data['Y3'] = np.random.poisson(lam=np.exp(0.5 + 2*data['X'] - 1*data['Z']), size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
To start, we will demonstrate linear regression for the outcome ``Y1``. Defining psi, or the stacked estimating
equations
>>> def psi(theta):
>>> return ee_regression(theta=theta, X=data[['C', 'X', 'Z']], y=data['Y1'], model='linear')
Calling the M-estimation procedure (note that ``init`` requires 3 values, since ``X.shape[1] = 3``).
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0.,])
>>> estr.estimate()
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
Next, we can estimate the parameters for a logistic regression model as follows
>>> def psi(theta):
>>> return ee_regression(theta=theta, X=data[['C', 'X', 'Z']], y=data['Y2'], model='logistic')
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0.,])
>>> estr.estimate()
Finally, we can estimate the parameters for a Poisson regression model as follows
>>> def psi(theta):
>>> return ee_regression(theta=theta, X=data[['C', 'X', 'Z']], y=data['Y3'], model='poisson')
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0.,])
>>> estr.estimate()
Additionally, weighted versions of all the previous models can be estimated by specifying the optional ``weights``
argument.
References
----------
Boos DD, & Stefanski LA. (2013). M-estimation (estimating equations). In Essential Statistical Inference
(pp. 297-337). Springer, New York, NY.
"""
# Preparation of input shapes and object types
X, y, beta = _prep_inputs_(X=X, y=y, theta=theta, penalty=None)
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model) # Looking up corresponding transformation
pred_y = transform(np.dot(X, beta)) # Generating predicted values via speedy matrix calculation
# Allowing for a weighted linear model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
# Output b-by-n matrix
return w*((y - pred_y) * X).T # Return weighted regression score function
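The shape and behavior of the returned score function can be checked with a small self-contained sketch. This is a plain-NumPy illustration with hypothetical data, not the package's internal helper functions; ``inverse_logit`` here stands in for the imported utility of the same name:

```python
import numpy as np

def inverse_logit(u):
    # expit transformation used by the logistic model
    return 1 / (1 + np.exp(-u))

# Hypothetical data: n=3 observations, b=2 covariates (intercept + one X)
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
y = np.array([0., 1., 1.])
theta = np.array([0., 0.])

pred_y = inverse_logit(np.dot(X, theta))    # predicted probabilities
score = ((y - pred_y)[:, None] * X).T       # b-by-n score matrix, as returned above
```

At ``theta = 0`` every predicted probability is 0.5, so the residuals are (-0.5, 0.5, 0.5); summing each row of ``score`` over observations gives the estimating-function values the M-estimator drives to zero.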
def ee_linear_regression(theta, X, y, weights=None):
r"""Default stacked estimating equation for linear regression without the homoscedastic assumption.
Note
----
The function ``ee_linear_regression`` is deprecated. Please use ``ee_regression`` instead.
The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - X_i^T \theta) X_i = 0
Here, theta is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if X
is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of X's (as
long as there is enough support in the data).
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Here, theta corresponds to the coefficients in a linear regression model
Note
----
For complex regression problems, the optimizer behind the scenes is not particularly robust (unlike functions
specializing in solely OLS). Therefore, OLS can first be optimized via separate functionality, with those
estimated parameters then fed forward as the initial values (which should result in a more stable optimization).
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
References
----------
Boos DD, & Stefanski LA. (2013). M-estimation (estimating equations). In Essential Statistical Inference
(pp. 297-337). Springer, New York, NY.
"""
warnings.warn("Regression estimating equations should be implemented using `ee_regression`. The specific type of "
"regression estimating equations will be removed in v1.0", DeprecationWarning)
return ee_regression(theta=theta, X=X, y=y, model='linear', weights=weights)
def ee_logistic_regression(theta, X, y, weights=None):
r"""Default stacked estimating equation for logistic regression.
Note
----
The function ``ee_logistic_regression`` is deprecated. Please use ``ee_regression`` instead.
The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - expit(X_i^T \theta)) X_i = 0
where expit, or the inverse logit, is
.. math::
expit(u) = 1 / (1 + \exp(-u))
Here, theta is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if X
is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of X's (as
long as there is enough support in the data).
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Here, theta corresponds to the coefficients in a logistic regression model; the coefficients are therefore on the log-odds scale.
Note
----
For complex regression problems, the optimizer behind the scenes is not particularly robust (unlike functions
specializing in solely logistic regression). Therefore, logistic regression can first be optimized via separate
functionality, with those estimated parameters then fed forward as the initial values (which should result in a more
stable optimization).
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. The Y values should all be 0 or 1. No missing data should be
included (missing data may cause unexpected behavior).
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
References
----------
Boos DD, & Stefanski LA. (2013). M-estimation (estimating equations). In Essential Statistical Inference
(pp. 297-337). Springer, New York, NY.
"""
warnings.warn("Regression estimating equations should be implemented using `ee_regression`. The specific type of "
"regression estimating equations will be removed in v1.0", DeprecationWarning)
return ee_regression(theta=theta, X=X, y=y, model='logistic', weights=weights)
def ee_poisson_regression(theta, X, y, weights=None):
r"""Default stacked estimating equation for Poisson regression.
Note
----
The function ``ee_poisson_regression`` is deprecated. Please use ``ee_regression`` instead.
The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - \exp(X_i^T \theta)) X_i = 0
Here, theta is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if X
is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of X's (as
long as there is enough support in the data).
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Note
----
For complex regression problems, the optimizer behind the scenes is not particularly robust (unlike functions
specializing in solely Poisson regression). Therefore, Poisson regression can first be optimized via separate
functionality, with those estimated parameters then fed forward as the initial values (which should result in a more
stable optimization).
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. The Y values should all be non-negative integer counts. No missing data
should be included (missing data may cause unexpected behavior).
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
"""
warnings.warn("Regression estimating equations should be implemented using `ee_regression`. The specific type of "
"regression estimating equations will be removed in v1.0", DeprecationWarning)
return ee_regression(theta=theta, X=X, y=y, model='poisson', weights=weights)
#################################################################
# Robust Regression Estimating Equations
def ee_robust_regression(theta, X, y, model, k, weights=None):
r"""Default stacked estimating equation for robust regression. Specifically, robust linear regression is
robust to outlying observations of the outcome variable (``y``). Currently, only linear regression is supported by
``ee_robust_regression``. The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n \psi_k(Y_i - X_i^T \theta) X_i = 0
where k indicates the upper and lower bounds. Here, theta is a 1-by-b array, where b is the number of distinct
covariates included as part of X. For example, if X is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is
general to allow for an arbitrary number of X's (as long as there is enough support in the data).
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options only include ``'linear'`` (linear regression).
k : int, float
Value to set the symmetric maximum upper and lower bounds on the difference between the observations and
predicted values
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
Construction of estimating equations with ``ee_robust_regression`` should be done similar to the following
>>> import numpy as np
>>> import pandas as pd
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_robust_regression
Some generic data to estimate a robust linear regression model
>>> n = 100
>>> data = pd.DataFrame()
>>> data['X'] = np.random.normal(size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y'] = 0.5 + 2*data['X'] - 1*data['Z'] + np.random.normal(loc=0, scale=3, size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
Defining psi, or the stacked estimating equations
>>> def psi(theta):
>>> return ee_robust_regression(theta=theta, X=data[['C', 'X', 'Z']], y=data['Y'], model='linear', k=3)
Calling the M-estimation procedure (note that ``init`` has 3 values now, since ``X.shape[1] = 3``).
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0.,])
>>> estr.estimate()
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
References
----------
Boos DD, & Stefanski LA. (2013). M-estimation (estimating equations). In Essential Statistical Inference
(pp. 297-337). Springer, New York, NY.
"""
# Preparation of input shapes and object types
X, y, beta = _prep_inputs_(X=X, y=y, theta=theta, penalty=None)
# Allowing for a weighted linear model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model, # Looking up corresponding transformation
assert_linear_model=True) # ... and make sure it is a linear model
pred_y = transform(np.dot(X, beta)) # Generating predicted values
# Generating prediction errors and applying the Huber function for robustness
pred_error = np.clip(y - pred_y, -k, k)
# Output b-by-n matrix
return w*(pred_error * X).T # Score function
def ee_robust_linear_regression(theta, X, y, k, weights=None):
r"""Default stacked estimating equation for robust linear regression.
Note
----
The function ``ee_robust_linear_regression`` is deprecated. Please use ``ee_robust_regression`` instead.
The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n \psi_k(Y_i - X_i^T \theta) X_i = 0
where k indicates the upper and lower bounds. Here, theta is a 1-by-b array, where b is the number of distinct
covariates included as part of X. For example, if X is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is
general to allow for an arbitrary number of X's (as long as there is enough support in the data).
Note
----
All provided estimating equations are meant to be wrapped inside a user-specified function. Throughout, these
user-defined functions are defined as ``psi``.
Here, theta corresponds to the coefficients in a robust linear regression model
Note
----
For complex regression problems, the optimizer behind the scenes is not particularly robust (unlike functions
specializing in solely OLS). Therefore, OLS can first be optimized via separate functionality, with those
estimated parameters then fed forward as the initial values (which should result in a more stable optimization).
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
k : int, float
Value to set the symmetric maximum upper and lower bounds on the difference between the observations and
predicted values
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
References
----------
Boos DD, & Stefanski LA. (2013). M-estimation (estimating equations). In Essential Statistical Inference
(pp. 297-337). Springer, New York, NY.
"""
warnings.warn("Robust regression estimating equations should be implemented using `ee_robust_regression`. The "
"specific type of regression estimating equations will be removed in v1.0", DeprecationWarning)
return ee_robust_regression(theta=theta, X=X, y=y, model='linear', k=k, weights=weights)
#################################################################
# Penalized Regression Estimating Equations
def ee_ridge_regression(theta, y, X, model, penalty, weights=None, center=0.):
r"""Default stacked estimating equation for ridge linear regression. Ridge regression applies an L2-regularization
through a squared magnitude penalty. The estimating equation is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - X_i^T \theta) X_i - 2 \lambda \theta = 0
Here, theta is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if X
is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of X's (as
long as there is enough support in the data).
Note
----
The 'strength' of the penalty term is indicated by :math:`\lambda`, which is the ``penalty`` argument scaled by
(i.e., divided by) the number of observations.
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options are ``'linear'`` (linear regression), ``'logistic'`` (logistic
regression), and ``'poisson'`` (Poisson regression).
penalty : int, float, ndarray, list, vector
Penalty term to apply to all coefficients (if only an integer or float is provided) or to each corresponding
coefficient (if a list or vector of integers or floats is provided). Note that the penalty term should either
consist of a single value or b values (to match the length of ``theta``).
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
center : int, float, ndarray, list, vector, optional
Center or reference value to penalize estimated coefficients towards. Default is zero, which penalizes
coefficients towards the null. Other center values can be specified for all coefficients (by providing an
integer or float) or covariate-specific centering values (by providing a vector of values of the same length as
X).
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
Construction of estimating equations with ``ee_ridge_regression`` should be done similar to the following
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.stats import logistic
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_ridge_regression
Some generic data to estimate a linear regression model
>>> n = 500
>>> data = pd.DataFrame()
>>> data['V'] = np.random.normal(size=n)
>>> data['W'] = np.random.normal(size=n)
>>> data['X'] = data['W'] + np.random.normal(scale=0.25, size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y1'] = 0.5 + 2*data['W'] - 1*data['Z'] + np.random.normal(loc=0, size=n)
>>> data['Y2'] = np.random.binomial(n=1, p=logistic.cdf(0.5 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['Y3'] = np.random.poisson(lam=np.exp(1 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
Defining psi, or the stacked estimating equations. Note that the penalty is a list of values. Here, we are *not*
penalizing the intercept (which is generally recommended when the intercept is unlikely to be zero). The remainder
of covariates have a penalty of 10 applied.
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y1']
>>> return ee_ridge_regression(theta=theta, X=x, y=y, model='linear', penalty=penalty_vals)
Calling the M-estimation procedure (note that ``init`` has 5 values now, since ``X.shape[1] = 5``).
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0., 0., 0.])
>>> estr.estimate(solver='lm')
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
Next, we can estimate the parameters for a logistic regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y2']
>>> return ee_ridge_regression(theta=theta, X=x, y=y, model='logistic', penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0., 0., 0.])
>>> estr.estimate(solver='lm')
Finally, we can estimate the parameters for a Poisson regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y3']
>>> return ee_ridge_regression(theta=theta, X=x, y=y, model='poisson', penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0., 0., 0.])
>>> estr.estimate(solver='lm')
Additionally, weighted versions of all the previous models can be estimated by specifying the optional ``weights``
argument.
References
----------
Fu WJ. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical
Statistics, 7(3), 397-416.
Fu WJ. (2003). Penalized estimating equations. Biometrics, 59(1), 126-132.
"""
# Preparation of input shapes and object types
X, y, beta, penalty, center = _prep_inputs_(X=X, y=y, theta=theta, penalty=penalty, center=center)
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model) # Looking up corresponding transformation
pred_y = transform(np.dot(X, beta)) # Generating predicted values
# Allowing for a weighted penalized regression model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
# Creating penalty term for ridge regression (bridge with gamma=2 is the special case of ridge)
penalty_terms = _bridge_penalty_(theta=theta, n_obs=X.shape[0], penalty=penalty, gamma=2, center=center)
# Output b-by-n matrix
return w*(((y - pred_y) * X).T - penalty_terms[:, None]) # Score function with penalty term subtracted off
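To see roughly how the subtracted penalty behaves, here is a plain-NumPy sketch of an L2 penalty gradient. It assumes, per the docstring note, that :math:`\lambda` is the ``penalty`` argument divided by the number of observations; the values and the factor of 2 (the derivative of :math:`\theta^2`) are illustrative, not the package's internal ``_bridge_penalty_`` implementation:

```python
import numpy as np

theta = np.array([0.5, -2.0, 1.0])     # hypothetical coefficient values
penalty = np.array([0., 10., 10.])     # no penalty on the intercept
n_obs = 100
lam = penalty / n_obs                  # penalty scaled by the sample size
grad_l2 = 2 * lam * theta              # derivative of lam * theta**2
```

Note the zero entry for the intercept: supplying a per-coefficient penalty vector is how the examples above leave the intercept unpenalized.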
def ee_lasso_regression(theta, y, X, model, penalty, epsilon=3.e-3, weights=None, center=0.):
r"""Default stacked estimating equation for an approximate LASSO (least absolute shrinkage and selection operator)
regressor. LASSO regression applies an L1-regularization through a magnitude penalty. The estimating equation for
the approximate LASSO is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - X_i^T \theta) X_i - \lambda (1 + \epsilon) | \theta |^{\epsilon}
sgn(\theta) = 0
Here, we are using an approximation based on the bridge penalty. For the bridge penalty, LASSO is the special case
where :math:`\epsilon = 0`. By making :math:`\epsilon > 0`, we can approximate the LASSO. See the rest of the
documentation for further details.
Note
----
LASSO is not strictly convex. Therefore, root-finding may be difficult. To get around this issue,
``ee_lasso_regression`` uses an approximation to LASSO. Additionally, the approximate LASSO will not result in
coefficients of exactly zero, but coefficients will be shrunk to near zero.
Here, :math:`\theta` is a 1-by-b array, where b is the number of distinct covariates included as part of X. For
example, if X is an n-by-3 matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary
number of X's (as long as there is enough support in the data).
Note
----
The 'strength' of the penalty term is indicated by :math:`\lambda`, which is the ``penalty`` argument scaled by
(i.e., divided by) the number of observations.
Note
----
Root-finding for ``ee_lasso_regression`` can be difficult. In general, it is recommended to use the
Levenberg-Marquardt algorithm (``MEstimator.estimate(solver='lm')``), increase the number of iterations, and
possibly run some pre-washing for starting values.
Parameters
----------
theta : ndarray, list, vector
Theta in this case consists of b values. Therefore, initial values should consist of the same number as the
number of columns present. This can easily be accomplished generally by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options are ``'linear'`` (linear regression), ``'logistic'`` (logistic
regression), and ``'poisson'`` (Poisson regression).
penalty : int, float, ndarray, list, vector
Penalty term to apply to all coefficients (if only an integer or float is provided) or to each corresponding
coefficient (if a list or vector of integers or floats is provided). Note that the penalty term should either
consist of a single value or b values (to match the length of ``theta``).
epsilon : float, optional
Approximation error to use for the LASSO approximation. LASSO is the case where ``epsilon=0``. However, the
lack of strict convexity of the penalty may cause issues for root-finding, so the approximation described by
Fu (2003) is used instead, with the bridge exponent set to be slightly larger than 1. Notice that ``epsilon``
must be > 0. Default argument is 0.003, which results in a bridge penalty of 1.003.
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
center : int, float, ndarray, list, vector, optional
Center or reference value to penalize estimated coefficients towards. Default is zero, which penalizes
coefficients towards the null. Other center values can be specified for all coefficients (by providing an
integer or float) or covariate-specific centering values (by providing a vector of values of the same length as
X).
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
Construction of estimating equations with ``ee_lasso_regression`` should be done similar to the following
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.stats import logistic
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_lasso_regression
Some generic data to estimate a linear regression model
>>> n = 500
>>> data = pd.DataFrame()
>>> data['V'] = np.random.normal(size=n)
>>> data['W'] = np.random.normal(size=n)
>>> data['X'] = data['W'] + np.random.normal(scale=0.25, size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y1'] = 0.5 + 2*data['W'] - 1*data['Z'] + np.random.normal(loc=0, size=n)
>>> data['Y2'] = np.random.binomial(n=1, p=logistic.cdf(0.5 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['Y3'] = np.random.poisson(lam=np.exp(1 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
Defining psi, or the stacked estimating equations. Note that the penalty is a list of values. Here, we are *not*
penalizing the intercept (which is generally recommended when the intercept is unlikely to be zero). The remainder
of covariates have a penalty of 10 applied.
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y1']
>>> return ee_lasso_regression(theta=theta, X=x, y=y, model='linear', penalty=penalty_vals)
Calling the M-estimation procedure (note that ``init`` has 5 values now, since ``X.shape[1] = 5``). Additionally,
we set the maximum number of iterations to be much larger.
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=20000)
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
Next, we can estimate the parameters for a logistic regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y2']
>>> return ee_lasso_regression(theta=theta, X=x, y=y, model='logistic', penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=20000)
Finally, we can estimate the parameters for a Poisson regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y3']
>>> return ee_lasso_regression(theta=theta, X=x, y=y, model='poisson', penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=20000)
Additionally, weighted versions of all the previous models can be estimated by specifying the optional ``weights``
argument.
References
----------
Fu WJ. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical
Statistics, 7(3), 397-416.
Fu WJ. (2003). Penalized estimating equations. Biometrics, 59(1), 126-132.
"""
# Preparation of input shapes and object types
X, y, beta, penalty, center = _prep_inputs_(X=X, y=y, theta=theta, penalty=penalty, center=center)
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model) # Looking up corresponding transformation
pred_y = transform(np.dot(X, beta)) # Generating predicted values
# Allowing for a weighted penalized regression model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
# Creating penalty term for the approximate LASSO (bridge penalty with gamma=1+epsilon)
if epsilon < 0:
raise ValueError("epsilon must be non-negative for the approximate LASSO")
penalty_terms = _bridge_penalty_(theta=theta, n_obs=X.shape[0], penalty=penalty, gamma=1+epsilon, center=center)
# Output b-by-n matrix
return w*(((y - pred_y) * X).T - penalty_terms[:, None]) # Score function with penalty term subtracted off
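The bridge-based approximation used above can be illustrated on its own. Below is a minimal plain-Python sketch (function name and values are illustrative, not part of delicatessen) showing how close the gamma = 1 + epsilon derivative is to the exact LASSO subgradient for a single coefficient:

```python
def bridge_derivative(theta, lam, gamma):
    # d/d(theta) of lam * |theta|**gamma: lam * gamma * |theta|**(gamma - 1) * sgn(theta)
    sign = 1.0 if theta > 0 else (-1.0 if theta < 0 else 0.0)
    return lam * gamma * abs(theta) ** (gamma - 1) * sign

exact = bridge_derivative(0.5, lam=1.0, gamma=1.0)      # exact LASSO subgradient: sgn(0.5) = 1.0
approx = bridge_derivative(0.5, lam=1.0, gamma=1.003)   # approximation with epsilon = 0.003
assert abs(approx - exact) < 0.01
```

The approximation error shrinks as epsilon goes to zero, at the cost of recovering the non-smooth penalty in the limit.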
def ee_elasticnet_regression(theta, y, X, model, penalty, ratio, epsilon=3.e-3, weights=None, center=0.):
r"""Default stacked estimating equation for Elastic-net regression. Elastic-net applies both L1- and
L2-regularization at a pre-specified ratio. Notice that the L1 penalty is based on an approximation. See
``ee_lasso_regression`` for further details on the approximation for the L1 penalty. The estimating equation for
Elastic-net with the approximate L1 penalty is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - X_i^T \theta) X_i - r (1 + \epsilon) | \theta |^{\epsilon}
sgn(\theta) - (1-r) 2 | \theta |^{1} sgn(\theta) = 0
where :math:`r` is the ratio for the L1 vs L2 penalty. Here, we are using an approximation based on the bridge
penalty. For the bridge penalty, LASSO is the special case where :math:`\gamma = 1`. By making :math:`\epsilon > 0`,
we can approximate the LASSO. The ridge penalty is the bridge penalty where :math:`\gamma = 2`, which can be
evaluated directly.
Note
----
LASSO is not strictly convex. Therefore, root-finding may be difficult. To get around this issue,
``ee_elasticnet_regression`` uses an approximation to LASSO.
    Here, :math:`\theta` is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if
X is a 3-by-n matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of
X's (as long as there is enough support in the data).
Note
----
    The 'strength' of the penalty term is indicated by :math:`\lambda`, which is the ``penalty`` argument scaled by
    (i.e., divided by) the number of observations.
Note
----
    Root-finding for ``ee_elasticnet_regression`` can be difficult when the L1 penalty is set to be stronger. In
    general, it is recommended to use the Levenberg-Marquardt algorithm (``MEstimator.estimate(solver='lm')``).
Parameters
----------
theta : ndarray, list, vector
        Theta in this case consists of b values. Therefore, initial values should consist of the same number of
        values as the number of columns present in ``X``. This can easily be accomplished by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options are ``'linear'`` (linear regression), ``'logistic'`` (logistic
regression), and ``'poisson'`` (Poisson regression).
penalty : int, float, ndarray, list, vector
        Penalty term to apply to all coefficients (if only an integer or float is provided) or the corresponding
        coefficient (if a list or vector of integers or floats is provided). Note that the penalty term should either
        consist of a single value or b values (to match the length of ``theta``).
ratio : float
        Ratio for the L1 vs L2 penalty in Elastic-net. The ratio must be :math:`0 \le r \le 1`. Setting ``ratio=1``
results in LASSO and ``ratio=0`` results in ridge regression.
epsilon : float, optional
        Approximation error to use for the LASSO approximation. LASSO is the case where ``epsilon=0``. However, the
        lack of strict convexity of the penalty may cause issues for root-finding, so the approximation described
        by Fu (2003) is used instead, with the bridge exponent set to be slightly larger than 1. Notice that
        ``epsilon`` must be > 0. The default argument is 0.003, which results in a bridge penalty of 1.003.
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
center : int, float, ndarray, list, vector, optional
        Center or reference value to penalize estimated coefficients towards. Default is zero, which penalizes
coefficients towards the null. Other center values can be specified for all coefficients (by providing an
integer or float) or covariate-specific centering values (by providing a vector of values of the same length as
X).
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
    Construction of estimating equations with ``ee_elasticnet_regression`` should be done similarly to the
    following
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.stats import logistic
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_elasticnet_regression
    Some generic data to estimate a linear regression model
>>> n = 500
>>> data = pd.DataFrame()
>>> data['V'] = np.random.normal(size=n)
>>> data['W'] = np.random.normal(size=n)
>>> data['X'] = data['W'] + np.random.normal(scale=0.25, size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y1'] = 0.5 + 2*data['W'] - 1*data['Z'] + np.random.normal(loc=0, size=n)
>>> data['Y2'] = np.random.binomial(n=1, p=logistic.cdf(0.5 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['Y3'] = np.random.poisson(lam=np.exp(1 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
Defining psi, or the stacked estimating equations. Note that the penalty is a list of values. Here, we are *not*
penalizing the intercept (which is generally recommended when the intercept is unlikely to be zero). The remainder
of covariates have a penalty of 10 applied.
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y1']
>>> return ee_elasticnet_regression(theta=theta, X=x, y=y, model='linear', ratio=0.5, penalty=penalty_vals)
Calling the M-estimation procedure (note that ``init`` has 5 values now, since ``X.shape[1] = 5``).
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate()
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
Next, we can estimate the parameters for a logistic regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y2']
>>> return ee_elasticnet_regression(theta=theta, X=x, y=y, model='logistic', ratio=0.5, penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=20000)
Finally, we can estimate the parameters for a Poisson regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y3']
>>> return ee_elasticnet_regression(theta=theta, X=x, y=y, model='poisson', ratio=0.5, penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=20000)
Additionally, weighted versions of all the previous models can be estimated by specifying the optional ``weights``
argument.
References
----------
Fu WJ. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical
Statistics, 7(3), 397-416.
Fu WJ. (2003). Penalized estimating equations. Biometrics, 59(1), 126-132.
"""
# Preparation of input shapes and object types
X, y, beta, penalty, center = _prep_inputs_(X=X, y=y, theta=theta, penalty=penalty, center=center)
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model) # Looking up corresponding transformation
pred_y = transform(np.dot(X, beta)) # Generating predicted values
# Allowing for a weighted penalized regression model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
    # Creating penalty terms for the elastic-net (bridge penalties with gamma = 1 + epsilon and gamma = 2)
    if epsilon < 0:
        raise ValueError("epsilon must be non-negative for the approximate LASSO")
if not 0 <= ratio <= 1:
raise ValueError("The elastic-net penalty is only defined for 0 <= ratio <= 1. The input L1:L2 ratio was "
+ str(ratio))
penalty_l1 = _bridge_penalty_(theta=theta, n_obs=X.shape[0], penalty=penalty, gamma=1+epsilon, center=center)
penalty_l2 = _bridge_penalty_(theta=theta, n_obs=X.shape[0], penalty=penalty, gamma=2, center=center)
penalty_terms = ratio*penalty_l1 + (1-ratio)*penalty_l2
# Output b-by-n matrix
return w * (((y - pred_y) * X).T - penalty_terms[:, None]) # Score function with penalty term subtracted off
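The ratio-weighted combination computed above (``ratio*penalty_l1 + (1-ratio)*penalty_l2``) can be sketched for a single coefficient in plain Python (illustrative names; not delicatessen's API):

```python
def bridge_derivative(theta, lam, gamma):
    sign = 1.0 if theta > 0 else (-1.0 if theta < 0 else 0.0)
    return lam * gamma * abs(theta) ** (gamma - 1) * sign

def elasticnet_derivative(theta, lam, ratio, epsilon=3e-3):
    # Ratio-weighted mix of the approximate-L1 and L2 bridge penalty derivatives
    l1_part = bridge_derivative(theta, lam, gamma=1 + epsilon)
    l2_part = bridge_derivative(theta, lam, gamma=2)
    return ratio * l1_part + (1 - ratio) * l2_part

# ratio=0 recovers ridge; ratio=1 recovers the approximate LASSO
assert elasticnet_derivative(0.5, lam=1.0, ratio=0.0) == bridge_derivative(0.5, 1.0, gamma=2)
assert elasticnet_derivative(0.5, lam=1.0, ratio=1.0) == bridge_derivative(0.5, 1.0, gamma=1 + 3e-3)
```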
def ee_bridge_regression(theta, y, X, model, penalty, gamma, weights=None, center=0.):
r"""Default stacked estimating equation for bridge penalized regression. The bridge penalty is a generalization
    of penalized regression that includes L1- and L2-regularization as special cases. The estimating equation for
bridge penalized regression is
.. math::
\sum_i^n \psi(Y_i, X_i, \theta) = \sum_i^n (Y_i - X_i^T \theta) X_i - \gamma | \theta |^{\gamma - 1}
sgn(\theta) = 0
For the bridge penalty, LASSO is the special case where :math:`\gamma = 1` and ridge regression is
:math:`\gamma = 2`. While the bridge penalty is defined for :math:`\gamma > 0`, the provided estimating equation
only supports :math:`\gamma \ge 1`. Additionally, LASSO is not strictly convex, so :math:`\gamma = 1` is not
generally recommended. Instead, an approximate LASSO can be accomplished by setting :math:`\gamma` to be slightly
    larger than 1 (as done in ``ee_lasso_regression``).
    Here, :math:`\theta` is a 1-by-b array, where b is the number of distinct covariates included as part of X. For example, if
X is a 3-by-n matrix, then theta will be a 1-by-3 array. The code is general to allow for an arbitrary number of
X's (as long as there is enough support in the data).
Note
----
    The 'strength' of the penalty term is indicated by :math:`\lambda`, which is the ``penalty`` argument scaled by
    (i.e., divided by) the number of observations.
Note
----
    Root-finding for ``ee_bridge_regression`` can be difficult when :math:`2 > \gamma > 1`. In general, it is
    recommended to use the Levenberg-Marquardt algorithm (``MEstimator.estimate(solver='lm')``).
Parameters
----------
theta : ndarray, list, vector
        Theta in this case consists of b values. Therefore, initial values should consist of the same number of
        values as the number of columns present in ``X``. This can easily be accomplished by ``[0, ] * X.shape[1]``.
X : ndarray, list, vector
2-dimensional vector of n observed values for b variables. No missing data should be included (missing data
may cause unexpected behavior).
y : ndarray, list, vector
1-dimensional vector of n observed values. No missing data should be included (missing data may cause unexpected
behavior).
model : str
Type of regression model to estimate. Options are ``'linear'`` (linear regression), ``'logistic'`` (logistic
regression), and ``'poisson'`` (Poisson regression).
penalty : int, float, ndarray, list, vector
        Penalty term to apply to all coefficients (if only an integer or float is provided) or the corresponding
        coefficient (if a list or vector of integers or floats is provided). Note that the penalty term should either
        consist of a single value or b values (to match the length of ``theta``).
gamma : float
Hyperparameter for the bridge penalty, defined for :math:`\gamma > 0`. However, only :math:`\gamma \ge 1` are
supported. If :math:`\gamma = 1`, then the bridge penalty is LASSO. If :math:`\gamma = 2`, then the bridge
penalty is ridge.
weights : ndarray, list, vector, None, optional
1-dimensional vector of n weights. No missing weights should be included. Default is None, which assigns a
weight of 1 to all observations.
center : int, float, ndarray, list, vector, optional
        Center or reference value to penalize estimated coefficients towards. Default is zero, which penalizes
coefficients towards the null. Other center values can be specified for all coefficients (by providing an
integer or float) or covariate-specific centering values (by providing a vector of values of the same length as
X).
Returns
-------
array :
Returns a b-by-n NumPy array evaluated for the input theta and y
Examples
--------
    Construction of estimating equations with ``ee_bridge_regression`` should be done similarly to the
    following
>>> import numpy as np
>>> import pandas as pd
>>> from scipy.stats import logistic
>>> from delicatessen import MEstimator
>>> from delicatessen.estimating_equations import ee_bridge_regression
    Some generic data to estimate a linear bridge regression model
>>> n = 500
>>> data = pd.DataFrame()
>>> data['V'] = np.random.normal(size=n)
>>> data['W'] = np.random.normal(size=n)
>>> data['X'] = data['W'] + np.random.normal(scale=0.25, size=n)
>>> data['Z'] = np.random.normal(size=n)
>>> data['Y1'] = 0.5 + 2*data['W'] - 1*data['Z'] + np.random.normal(loc=0, size=n)
>>> data['Y2'] = np.random.binomial(n=1, p=logistic.cdf(0.5 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['Y3'] = np.random.poisson(lam=np.exp(1 + 2*data['W'] - 1*data['Z']), size=n)
>>> data['C'] = 1
Note that ``C`` here is set to all 1's. This will be the intercept in the regression.
Defining psi, or the stacked estimating equations. Note that the penalty is a list of values. Here, we are *not*
penalizing the intercept (which is generally recommended when the intercept is unlikely to be zero). The remainder
of covariates have a penalty of 10 applied.
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
    >>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y1']
>>> return ee_bridge_regression(theta=theta, X=x, y=y, model='linear', gamma=2.3, penalty=penalty_vals)
Calling the M-estimation procedure (note that ``init`` has 5 values now, since ``X.shape[1] = 5``).
>>> estr = MEstimator(stacked_equations=psi, init=[0., 0., 0., 0., 0.])
>>> estr.estimate()
Inspecting the parameter estimates, variance, and confidence intervals
>>> estr.theta
>>> estr.variance
>>> estr.confidence_intervals()
Next, we can estimate the parameters for a logistic regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y2']
>>> return ee_bridge_regression(theta=theta, X=x, y=y, model='logistic', gamma=2.3, penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=5000)
Finally, we can estimate the parameters for a Poisson regression model as follows
>>> penalty_vals = [0., 10., 10., 10., 10.]
>>> def psi(theta):
>>> x, y = data[['C', 'V', 'W', 'X', 'Z']], data['Y3']
>>> return ee_bridge_regression(theta=theta, X=x, y=y, model='poisson', gamma=2.3, penalty=penalty_vals)
>>> estr = MEstimator(stacked_equations=psi, init=[0.01, 0.01, 0.01, 0.01, 0.01])
>>> estr.estimate(solver='lm', maxiter=5000)
Additionally, weighted versions of all the previous models can be estimated by specifying the optional ``weights``
argument.
References
----------
Fu WJ. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical
Statistics, 7(3), 397-416.
Fu WJ. (2003). Penalized estimating equations. Biometrics, 59(1), 126-132.
"""
# Preparation of input shapes and object types
X, y, beta, penalty, center = _prep_inputs_(X=X, y=y, theta=theta, penalty=penalty, center=center)
# Determining transformation function to use for the regression model
transform = _model_transform_(model=model) # Looking up corresponding transformation
pred_y = transform(np.dot(X, beta)) # Generating predicted values
# Allowing for a weighted penalized regression model
w = _generate_weights_(weights=weights, n_obs=X.shape[0])
    # Creating penalty term for bridge penalized regression (LASSO is gamma=1, ridge is gamma=2)
penalty_terms = _bridge_penalty_(theta=theta, n_obs=X.shape[0], penalty=penalty, gamma=gamma, center=center)
# Output b-by-n matrix
return w * (((y - pred_y) * X).T - penalty_terms[:, None]) # Score function with penalty term subtracted off
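As the docstring notes state, the penalty 'strength' lambda is the ``penalty`` argument divided by the number of observations. A tiny numeric sketch of that scaling (illustrative only):

```python
def per_observation_penalty(penalty, n_obs):
    # lambda in the docstrings: the supplied penalty scaled by the sample size
    return penalty / n_obs

assert per_observation_penalty(10.0, 100) == 0.1
assert per_observation_penalty(10.0, 200) == 0.05   # doubling n halves the effective penalty
```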
#################################################################
# Utility functions for regression equations
def _prep_inputs_(X, y, theta, penalty=None, center=None):
"""Internal use function to simplify variable transformations for regression. This function is used on the inputs
to ensure they are the proper shapes
Parameters
----------
X : ndarray
y : ndarray
theta : ndarray
    penalty : ndarray, None, optional
    center : ndarray, None, optional
Returns
-------
transformed parameters
"""
X = np.asarray(X) # Convert to NumPy array
y = np.asarray(y)[:, None] # Convert to NumPy array and ensure correct shape for matrix algebra
beta = np.asarray(theta)[:, None] # Convert to NumPy array and ensure correct shape for matrix algebra
if penalty is None: # Return the transformed objects
return X, y, beta
else: # Convert penalty term then return all
penalty = np.asarray(penalty) # Convert to NumPy array
center = np.asarray(center) # Convert to NumPy array
return X, y, beta, penalty, center
def _model_transform_(model, assert_linear_model=False):
"""Internal use function to simplify the checking procedure for the model form to use. Takes the input string and
returns the corresponding function for the variable transformation.
Parameters
----------
model : str
Model identifier to calculate the transformation for
Returns
-------
function
"""
# Checking object type (and convert to lower-case)
if isinstance(model, str): # If string, convert to lower-case for internal handling
model = model.lower()
else:
raise ValueError("The model argument must be a str object.")
# forcing model to be 'linear' (used by ee_robust_regression)
if assert_linear_model and model != 'linear':
raise ValueError("The selected estimating equation only supports linear regression.")
# Process the model transformations
if model == 'linear': # If linear regression
transform = identity # ... no transformation needed
elif model == 'logistic': # If logistic regression
transform = inverse_logit # ... expit (inverse_logit) transformation
elif model == 'poisson': # If Poisson regression
transform = np.exp # ... exponential transformation
else: # Else results in error
        raise ValueError("Invalid input: " + str(model) + ". Please select: 'linear', 'logistic', or 'poisson'.")
return transform
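The three transformations returned above are the inverse link functions of the supported models. A self-contained scalar sketch (the real delicatessen utilities are vectorized NumPy functions; these stand-ins are illustrative):

```python
import math

def identity(x):
    return x

def inverse_logit(x):
    # expit: maps the linear predictor to a probability
    return 1.0 / (1.0 + math.exp(-x))

transforms = {'linear': identity, 'logistic': inverse_logit, 'poisson': math.exp}
assert transforms['linear'](1.7) == 1.7
assert transforms['logistic'](0.0) == 0.5
assert transforms['poisson'](0.0) == 1.0
```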
def _generate_weights_(weights, n_obs):
"""Internal use function to return the weights assigned to each observation. Returns a vector of 1's when no
weights are provided. Otherwise, converts provided vector into a numpy array.
Parameters
----------
weights : None, ndarray, list
Vector of weights, or None if no weights are provided
n_obs : int
Number of observations in the data
Returns
-------
ndarray
"""
if weights is None: # If weights is unspecified
w = np.ones(n_obs) # ... assign weight of 1 to all observations
else: # Otherwise
w = np.asarray(weights) # ... set weights as input vector
return w
def _bridge_penalty_(theta, gamma, penalty, n_obs, center):
r"""Internal use function to calculate the corresponding penalty term. The penalty term formula is based on the
bridge penalty, where LASSO is :math:`\gamma = 1` and ridge is :math:`\gamma = 2`. The penalty term is defined for
:math:`\gamma > 0` but :math:`\gamma < 1` requires special optimization.
Note
----
All penalties are scaled by the number of observations.
The penalty term for the score function (first derivative) is:
.. math::
\lambda \gamma | \theta |^{\gamma - 1} sgn(\theta)
where :math:`\lambda` is the (scaled) penalty, :math:`\gamma` is the hyperparameter for the bridge penalty, and
:math:`\theta` are the regression coefficients.
Parameters
----------
theta : ndarray, list, vector
Regression coefficients to penalize. ``theta`` in this case consists of b values.
gamma : float, int
Hyperparameter for the bridge penalty, defined for :math:`\gamma > 0`. Notice that :math:`\gamma = 1`
corresponds to LASSO, and :math:`\gamma = 2` corresponds to ridge.
penalty : int, float, ndarray, list, vector
Penalty term to apply to all coefficients (if only a integer or float is provided) or the corresponding
coefficient (if a list or vector of integers or floats is provided). Note that the penalty term should either
consists of a single value or b values (to match the length of ``theta``).
n_obs : int
Number of observations. Used to rescale the penalty terms
Returns
-------
ndarray
"""
    # Checking that penalty and center are either single values or the same length as theta
if penalty.size != 1:
if penalty.shape[0] != len(theta):
raise ValueError("The penalty term must be either a single number or the same length as theta.")
if center.size != 1:
if center.shape[0] != len(theta):
raise ValueError("The center term must be either a single number or the same length as theta.")
# Checking a valid hyperparameter is being provided
# if gamma <= 0:
    # raise ValueError("L_{gamma} is not defined for `gamma` <= 0")
if gamma < 1:
        raise ValueError("L_{gamma} with `gamma` < 1 is not currently supported, since the estimating equations "
                         "are evaluated using numerical methods.")
# Calculating the penalties
penalty_scaled = penalty / (gamma * n_obs)
penalty_terms = penalty_scaled * gamma * (np.abs(theta - center)**(gamma-1)) * np.sign(theta - center)
return penalty_terms
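For a single coefficient, the penalty term computed above can be sketched as follows (an illustrative stand-in mirroring the scaling and centering of ``_bridge_penalty_``, not the function itself):

```python
def centered_penalty_term(theta, lam, n_obs, gamma, center=0.0):
    # Shrinkage is toward `center`, so the sign term flips at theta == center
    diff = theta - center
    sign = 1.0 if diff > 0 else (-1.0 if diff < 0 else 0.0)
    lam_scaled = lam / (gamma * n_obs)
    return lam_scaled * gamma * abs(diff) ** (gamma - 1) * sign

# gamma=2 (ridge): the term is linear in (theta - center)
assert abs(centered_penalty_term(1.5, lam=10.0, n_obs=100, gamma=2, center=0.5) - 0.1) < 1e-12
# A coefficient sitting exactly at the center is not penalized
assert centered_penalty_term(0.5, lam=10.0, n_obs=100, gamma=2, center=0.5) == 0.0
```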
# === tests/test_testproblems.py (don-alejandrino/rt_opt, MIT) ===
import unittest
import numpy as np
from rt_opt import testproblems as tp
from rt_opt import testproblems_shifted as tps
def run_test(cls, testProb):
if isinstance(testProb.min.x, tuple):
if isinstance(testProb.min.f, tuple):
for val in testProb.min.x:
cls.assertGreater(testProb.f(val), testProb.min.f[0])
cls.assertLess(testProb.f(val), testProb.min.f[1])
else:
for val in testProb.min.x:
cls.assertAlmostEqual(testProb.f(val), testProb.min.f,
delta=np.finfo(float).eps)
else:
if isinstance(testProb.min.f, tuple):
cls.assertGreater(testProb.f(testProb.min.x), testProb.min.f[0])
cls.assertLess(testProb.f(testProb.min.x), testProb.min.f[1])
else:
cls.assertAlmostEqual(testProb.f(testProb.min.x), testProb.min.f,
delta=np.finfo(float).eps)
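``run_test`` compares the objective evaluated at each catalogued minimizer against the catalogued minimum, either exactly (to machine epsilon) or within an interval when the minimum is only known approximately. A self-contained sketch of that pattern with a hypothetical Sphere-style benchmark:

```python
def sphere(x):
    return sum(v * v for v in x)

known_min_x = (0.0, 0.0, 0.0)   # catalogued minimizer
known_min_f = 0.0               # catalogued minimum value

# Exact-style check
assert abs(sphere(known_min_x) - known_min_f) < 1e-12
# Interval-style check, as used when min.f is a (lower, upper) tuple
lower, upper = -1e-9, 1e-9
assert lower < sphere(known_min_x) < upper
```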
class Test_testproblems_2D(unittest.TestCase):
def test_Ackley(self):
testProb = tp.Ackley()
run_test(self, testProb)
def test_Beale(self):
testProb = tp.Beale()
run_test(self, testProb)
def test_GoldsteinPrice(self):
testProb = tp.GoldsteinPrice()
run_test(self, testProb)
def test_Booth(self):
testProb = tp.Booth()
run_test(self, testProb)
def test_Bukin6(self):
testProb = tp.Bukin6()
run_test(self, testProb)
def test_Matyas(self):
testProb = tp.Matyas()
run_test(self, testProb)
def test_Levi13(self):
testProb = tp.Levi13()
run_test(self, testProb)
def test_Himmelblau(self):
testProb = tp.Himmelblau()
run_test(self, testProb)
def test_ThreeHumpCamel(self):
testProb = tp.ThreeHumpCamel()
run_test(self, testProb)
def test_Easom(self):
testProb = tp.Easom()
run_test(self, testProb)
def test_CrossInTray(self):
testProb = tp.CrossInTray()
run_test(self, testProb)
def test_Eggholder(self):
testProb = tp.Eggholder()
run_test(self, testProb)
def test_Hoelder(self):
testProb = tp.Hoelder()
run_test(self, testProb)
def test_McCormick(self):
testProb = tp.McCormick()
run_test(self, testProb)
def test_Schaffer2(self):
testProb = tp.Schaffer2()
run_test(self, testProb)
def test_Schaffer4(self):
testProb = tp.Schaffer4()
run_test(self, testProb)
class Test_testproblems_shifted_2D(unittest.TestCase):
def test_Ackley(self):
testProb = tps.Ackley()
run_test(self, testProb)
def test_Beale(self):
testProb = tps.Beale()
run_test(self, testProb)
def test_GoldsteinPrice(self):
testProb = tps.GoldsteinPrice()
run_test(self, testProb)
def test_Booth(self):
testProb = tps.Booth()
run_test(self, testProb)
def test_Bukin6(self):
testProb = tps.Bukin6()
run_test(self, testProb)
def test_Matyas(self):
testProb = tps.Matyas()
run_test(self, testProb)
def test_Levi13(self):
testProb = tps.Levi13()
run_test(self, testProb)
def test_Himmelblau(self):
testProb = tps.Himmelblau()
run_test(self, testProb)
def test_ThreeHumpCamel(self):
testProb = tps.ThreeHumpCamel()
run_test(self, testProb)
def test_Easom(self):
testProb = tps.Easom()
run_test(self, testProb)
def test_CrossInTray(self):
testProb = tps.CrossInTray()
run_test(self, testProb)
def test_Eggholder(self):
testProb = tps.Eggholder()
run_test(self, testProb)
def test_Hoelder(self):
testProb = tps.Hoelder()
run_test(self, testProb)
def test_McCormick(self):
testProb = tps.McCormick()
run_test(self, testProb)
def test_Schaffer2(self):
testProb = tps.Schaffer2()
run_test(self, testProb)
def test_Schaffer4(self):
testProb = tps.Schaffer4()
run_test(self, testProb)
class Test_testproblems_nD(unittest.TestCase):
def setUp(self):
self.n_dims = 100
def test_Rastrigin(self):
testProb = tp.Rastrigin(self.n_dims)
run_test(self, testProb)
def test_Sphere(self):
testProb = tp.Sphere(self.n_dims)
run_test(self, testProb)
def test_Rosenbrock(self):
testProb = tp.Rosenbrock(self.n_dims)
run_test(self, testProb)
def test_StyblinskiTang(self):
testProb = tp.StyblinskiTang(self.n_dims)
run_test(self, testProb)
class Test_testproblems_shifted_nD(unittest.TestCase):
def setUp(self):
self.n_dims = 100
def test_Rastrigin(self):
testProb = tps.Rastrigin(self.n_dims)
run_test(self, testProb)
def test_Sphere(self):
testProb = tps.Sphere(self.n_dims)
run_test(self, testProb)
def test_Rosenbrock(self):
testProb = tps.Rosenbrock(self.n_dims)
run_test(self, testProb)
def test_StyblinskiTang(self):
testProb = tps.StyblinskiTang(self.n_dims)
run_test(self, testProb)
if __name__ == '__main__':
unittest.main()
# === enhance_me/mirnet/models/__init__.py (soumik12345/enhance-me, MIT) ===
from .mirnet_model import build_mirnet_model
# === physical_multiagent_env/utils/maps.py (fxnnxc/physical_multiagent_env, Apache-2.0) ===
class GridMap1:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [8,8,0]
self.agent_position = [1,1,0]
self.width = 10
self.height = 10
self.map1 = [[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1], # 5
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
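The ``num_obstacles`` computation above simply counts the 1-cells (walls) in the grid. A tiny check of the same idiom on a hypothetical 3x3 map:

```python
# Border cells are walls (1), the centre is free (0)
mini_map = [[1, 1, 1],
            [1, 0, 1],
            [1, 1, 1]]
num_obstacles = sum(sum(row) for row in mini_map)
assert num_obstacles == 8
```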
class GridMap2:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [8,4,0]
self.agent_position = [1,4,0]
self.width = 10
self.height = 10
self.map1 = [[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,1, 1,0,0,0,1], # 5
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap3:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [4,4,0]
self.agent_position = [2,4,0]
self.width = 10
self.height = 10
self.map1 = [[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,1, 1,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1], # 5
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap4:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [1,1,0]
self.agent_position = [1,2,0]
self.width = 10
self.height = 10
self.map1 = [[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1], # 5
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap5:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [1,1,0]
self.agent_position = [1,2,0]
self.width = 10
self.height = 10
self.map1 = [[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,1,1, 1,1,0,0,1],
[1,0,0,1,1, 1,1,0,0,1], # 5
[1,0,0,1,1, 1,1,0,0,1],
[1,0,0,1,1, 1,1,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap6:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [1,1,0]
self.agent_position = [1,2,0]
self.width = 10
self.height = 10
self.map1 = [
[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap7:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [4.5,1,0]
self.agent_position = [4.5,8,0]
self.width = 10
self.height = 10
self.map1 = [
[1,1,1,1,1, 1,1,1,1,1], # 1
[1,1,1,1,1, 1,1,1,1,1],
[1,1,1,1,1, 1,1,1,1,1],
[1,1,1,1,1, 1,1,1,1,1],
[1,0,0,0,0, 0,0,0,0,1], # 5
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1],
[1,1,1,1,1, 1,1,1,1,1],
[1,1,1,1,1, 1,1,1,1,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap8:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [1,1,0]
self.agent_position = [2,1,0]
self.width = 10
self.height = 10
self.map1 = [
[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 0,0,0,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,1,1,0, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
class GridMap9:
def __init__(self):
self.init_position = [5,5,0]
self.target_position = [1,1,0]
self.agent_position = [1,2,0]
self.width = 10
self.height = 10
self.map1 = [
[1,1,1,1,1, 1,1,1,1,1], # 1
[1,0,0,0,0, 1,1,1,1,1],
[1,1,1,1,0, 1,1,1,1,1],
[1,1,0,1,0, 0,0,0,0,1],
[1,0,0,1,1, 1,1,0,0,1],
[1,0,0,1,1, 0,0,0,0,1],
[1,0,1,1,1, 0,1,1,0,1],
[1,0,1,1,1, 0,1,1,0,1],
[1,0,0,0,0, 0,0,0,0,1],
[1,1,1,1,1, 1,1,1,1,1], # 10
]
self.num_obstacles = sum([sum(b) for b in self.map1])
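The GridMap classes above all follow the same pattern: a 10x10 occupancy grid where `1` marks a wall/obstacle cell, `0` a free cell, and `num_obstacles` is simply the total count of 1-cells. A minimal sketch of that counting logic with a rectangularity check (the helper name and the small demo grid are illustrative, not from the original file):

```python
def count_obstacles(grid):
    """Count occupied (1) cells in a rectangular occupancy grid."""
    assert all(len(row) == len(grid[0]) for row in grid), "grid must be rectangular"
    return sum(sum(row) for row in grid)

# A bordered 4x4 grid: 12 wall cells on the perimeter, 4 free cells inside.
demo = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print(count_obstacles(demo))  # 12
```

This matches the `sum([sum(b) for b in self.map1])` expression each class uses, with an added sanity check that every row has the same length.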
# File: inac8hr/loaders/__init__.py (repo: th-bunratta/8hr.insomniac, license: BSD-3-Clause)
from inac8hr.loaders.image_loader import *
from inac8hr.loaders.dirs import GameDirectories
# File: src/PatchMatch/__init__.py (repo: Emmanuel-Ezenwere/Deep-Image-Analogy-PyTorch, license: MIT)
from .PatchMatchSimple import PatchMatch as PatchMatchSimple
from .PatchMatchOrig import PatchMatch as PatchMatchOrig
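Both PatchMatch modules export a class with the same name, `PatchMatch`, so a plain `from ... import PatchMatch` twice would let the second import shadow the first; the `as` aliases keep the two implementations available side by side under the package namespace. A self-contained analog of that pattern (dummy stand-in classes, not the real modules):

```python
# Two modules each defining a class named PatchMatch would collide on a
# plain import; aliasing gives each implementation a distinct name.
class _SimpleImpl:  # stand-in for PatchMatchSimple.PatchMatch
    kind = "simple"

class _OrigImpl:    # stand-in for PatchMatchOrig.PatchMatch
    kind = "orig"

# Equivalent of: from .X import PatchMatch as PatchMatchSimple / PatchMatchOrig
PatchMatchSimple = _SimpleImpl
PatchMatchOrig = _OrigImpl

print(PatchMatchSimple.kind, PatchMatchOrig.kind)  # simple orig
```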
# File: sdk/python/pulumi_azure/containerservice/registry_task.py (repo: henriktao/pulumi-azure, licenses: ECL-2.0, Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['RegistryTaskArgs', 'RegistryTask']
@pulumi.input_type
class RegistryTaskArgs:
def __init__(__self__, *,
container_registry_id: pulumi.Input[str],
agent_pool_name: Optional[pulumi.Input[str]] = None,
agent_setting: Optional[pulumi.Input['RegistryTaskAgentSettingArgs']] = None,
base_image_trigger: Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']] = None,
docker_step: Optional[pulumi.Input['RegistryTaskDockerStepArgs']] = None,
enabled: Optional[pulumi.Input[bool]] = None,
encoded_step: Optional[pulumi.Input['RegistryTaskEncodedStepArgs']] = None,
file_step: Optional[pulumi.Input['RegistryTaskFileStepArgs']] = None,
identity: Optional[pulumi.Input['RegistryTaskIdentityArgs']] = None,
is_system_task: Optional[pulumi.Input[bool]] = None,
log_template: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
platform: Optional[pulumi.Input['RegistryTaskPlatformArgs']] = None,
registry_credential: Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']] = None,
source_triggers: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
timer_triggers: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]] = None):
"""
The set of arguments for constructing a RegistryTask resource.
:param pulumi.Input[str] container_registry_id: The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
:param pulumi.Input[str] agent_pool_name: The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
:param pulumi.Input['RegistryTaskAgentSettingArgs'] agent_setting: A `agent_setting` block as defined below.
:param pulumi.Input['RegistryTaskBaseImageTriggerArgs'] base_image_trigger: A `base_image_trigger` block as defined below.
:param pulumi.Input['RegistryTaskDockerStepArgs'] docker_step: A `docker_step` block as defined below.
:param pulumi.Input[bool] enabled: Should this Container Registry Task be enabled? Defaults to `true`.
:param pulumi.Input['RegistryTaskEncodedStepArgs'] encoded_step: A `encoded_step` block as defined below.
:param pulumi.Input['RegistryTaskFileStepArgs'] file_step: A `file_step` block as defined below.
:param pulumi.Input['RegistryTaskIdentityArgs'] identity: A `identity` block as defined below.
:param pulumi.Input[bool] is_system_task: Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
:param pulumi.Input[str] log_template: The template that describes the run log artifact.
:param pulumi.Input[str] name: The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
:param pulumi.Input['RegistryTaskPlatformArgs'] platform: A `platform` block as defined below.
:param pulumi.Input['RegistryTaskRegistryCredentialArgs'] registry_credential: One `registry_credential` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]] source_triggers: One or more `source_trigger` blocks as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to the Container Registry Task.
:param pulumi.Input[int] timeout_in_seconds: The timeout of this Container Registry Task in seconds. The valid range lies from 300 to 28800. Defaults to 3600.
:param pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]] timer_triggers: One or more `timer_trigger` blocks as defined below.
"""
pulumi.set(__self__, "container_registry_id", container_registry_id)
if agent_pool_name is not None:
pulumi.set(__self__, "agent_pool_name", agent_pool_name)
if agent_setting is not None:
pulumi.set(__self__, "agent_setting", agent_setting)
if base_image_trigger is not None:
pulumi.set(__self__, "base_image_trigger", base_image_trigger)
if docker_step is not None:
pulumi.set(__self__, "docker_step", docker_step)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if encoded_step is not None:
pulumi.set(__self__, "encoded_step", encoded_step)
if file_step is not None:
pulumi.set(__self__, "file_step", file_step)
if identity is not None:
pulumi.set(__self__, "identity", identity)
if is_system_task is not None:
pulumi.set(__self__, "is_system_task", is_system_task)
if log_template is not None:
pulumi.set(__self__, "log_template", log_template)
if name is not None:
pulumi.set(__self__, "name", name)
if platform is not None:
pulumi.set(__self__, "platform", platform)
if registry_credential is not None:
pulumi.set(__self__, "registry_credential", registry_credential)
if source_triggers is not None:
pulumi.set(__self__, "source_triggers", source_triggers)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if timeout_in_seconds is not None:
pulumi.set(__self__, "timeout_in_seconds", timeout_in_seconds)
if timer_triggers is not None:
pulumi.set(__self__, "timer_triggers", timer_triggers)
@property
@pulumi.getter(name="containerRegistryId")
def container_registry_id(self) -> pulumi.Input[str]:
"""
The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
"""
return pulumi.get(self, "container_registry_id")
@container_registry_id.setter
def container_registry_id(self, value: pulumi.Input[str]):
pulumi.set(self, "container_registry_id", value)
@property
@pulumi.getter(name="agentPoolName")
def agent_pool_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
"""
return pulumi.get(self, "agent_pool_name")
@agent_pool_name.setter
def agent_pool_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "agent_pool_name", value)
@property
@pulumi.getter(name="agentSetting")
def agent_setting(self) -> Optional[pulumi.Input['RegistryTaskAgentSettingArgs']]:
"""
A `agent_setting` block as defined below.
"""
return pulumi.get(self, "agent_setting")
@agent_setting.setter
def agent_setting(self, value: Optional[pulumi.Input['RegistryTaskAgentSettingArgs']]):
pulumi.set(self, "agent_setting", value)
@property
@pulumi.getter(name="baseImageTrigger")
def base_image_trigger(self) -> Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']]:
"""
A `base_image_trigger` block as defined below.
"""
return pulumi.get(self, "base_image_trigger")
@base_image_trigger.setter
def base_image_trigger(self, value: Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']]):
pulumi.set(self, "base_image_trigger", value)
@property
@pulumi.getter(name="dockerStep")
def docker_step(self) -> Optional[pulumi.Input['RegistryTaskDockerStepArgs']]:
"""
A `docker_step` block as defined below.
"""
return pulumi.get(self, "docker_step")
@docker_step.setter
def docker_step(self, value: Optional[pulumi.Input['RegistryTaskDockerStepArgs']]):
pulumi.set(self, "docker_step", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should this Container Registry Task be enabled? Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter(name="encodedStep")
def encoded_step(self) -> Optional[pulumi.Input['RegistryTaskEncodedStepArgs']]:
"""
A `encoded_step` block as defined below.
"""
return pulumi.get(self, "encoded_step")
@encoded_step.setter
def encoded_step(self, value: Optional[pulumi.Input['RegistryTaskEncodedStepArgs']]):
pulumi.set(self, "encoded_step", value)
@property
@pulumi.getter(name="fileStep")
def file_step(self) -> Optional[pulumi.Input['RegistryTaskFileStepArgs']]:
"""
A `file_step` block as defined below.
"""
return pulumi.get(self, "file_step")
@file_step.setter
def file_step(self, value: Optional[pulumi.Input['RegistryTaskFileStepArgs']]):
pulumi.set(self, "file_step", value)
@property
@pulumi.getter
def identity(self) -> Optional[pulumi.Input['RegistryTaskIdentityArgs']]:
"""
A `identity` block as defined below.
"""
return pulumi.get(self, "identity")
@identity.setter
def identity(self, value: Optional[pulumi.Input['RegistryTaskIdentityArgs']]):
pulumi.set(self, "identity", value)
@property
@pulumi.getter(name="isSystemTask")
def is_system_task(self) -> Optional[pulumi.Input[bool]]:
"""
Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
"""
return pulumi.get(self, "is_system_task")
@is_system_task.setter
def is_system_task(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_system_task", value)
@property
@pulumi.getter(name="logTemplate")
def log_template(self) -> Optional[pulumi.Input[str]]:
"""
The template that describes the run log artifact.
"""
return pulumi.get(self, "log_template")
@log_template.setter
def log_template(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "log_template", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def platform(self) -> Optional[pulumi.Input['RegistryTaskPlatformArgs']]:
"""
A `platform` block as defined below.
"""
return pulumi.get(self, "platform")
@platform.setter
def platform(self, value: Optional[pulumi.Input['RegistryTaskPlatformArgs']]):
pulumi.set(self, "platform", value)
@property
@pulumi.getter(name="registryCredential")
def registry_credential(self) -> Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']]:
"""
One `registry_credential` block as defined below.
"""
return pulumi.get(self, "registry_credential")
@registry_credential.setter
def registry_credential(self, value: Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']]):
pulumi.set(self, "registry_credential", value)
@property
@pulumi.getter(name="sourceTriggers")
def source_triggers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]]:
"""
One or more `source_trigger` blocks as defined below.
"""
return pulumi.get(self, "source_triggers")
@source_triggers.setter
def source_triggers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]]):
pulumi.set(self, "source_triggers", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags which should be assigned to the Container Registry Task.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="timeoutInSeconds")
def timeout_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
The timeout of this Container Registry Task in seconds. The valid range lies from 300 to 28800. Defaults to 3600.
"""
return pulumi.get(self, "timeout_in_seconds")
@timeout_in_seconds.setter
def timeout_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "timeout_in_seconds", value)
@property
@pulumi.getter(name="timerTriggers")
def timer_triggers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]]:
"""
One or more `timer_trigger` blocks as defined below.
"""
return pulumi.get(self, "timer_triggers")
@timer_triggers.setter
def timer_triggers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]]):
pulumi.set(self, "timer_triggers", value)
@pulumi.input_type
class _RegistryTaskState:
def __init__(__self__, *,
agent_pool_name: Optional[pulumi.Input[str]] = None,
agent_setting: Optional[pulumi.Input['RegistryTaskAgentSettingArgs']] = None,
base_image_trigger: Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']] = None,
container_registry_id: Optional[pulumi.Input[str]] = None,
docker_step: Optional[pulumi.Input['RegistryTaskDockerStepArgs']] = None,
enabled: Optional[pulumi.Input[bool]] = None,
encoded_step: Optional[pulumi.Input['RegistryTaskEncodedStepArgs']] = None,
file_step: Optional[pulumi.Input['RegistryTaskFileStepArgs']] = None,
identity: Optional[pulumi.Input['RegistryTaskIdentityArgs']] = None,
is_system_task: Optional[pulumi.Input[bool]] = None,
log_template: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
platform: Optional[pulumi.Input['RegistryTaskPlatformArgs']] = None,
registry_credential: Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']] = None,
source_triggers: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
timer_triggers: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]] = None):
"""
Input properties used for looking up and filtering RegistryTask resources.
:param pulumi.Input[str] agent_pool_name: The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
:param pulumi.Input['RegistryTaskAgentSettingArgs'] agent_setting: A `agent_setting` block as defined below.
:param pulumi.Input['RegistryTaskBaseImageTriggerArgs'] base_image_trigger: A `base_image_trigger` block as defined below.
:param pulumi.Input[str] container_registry_id: The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
:param pulumi.Input['RegistryTaskDockerStepArgs'] docker_step: A `docker_step` block as defined below.
:param pulumi.Input[bool] enabled: Should this Container Registry Task be enabled? Defaults to `true`.
:param pulumi.Input['RegistryTaskEncodedStepArgs'] encoded_step: A `encoded_step` block as defined below.
:param pulumi.Input['RegistryTaskFileStepArgs'] file_step: A `file_step` block as defined below.
:param pulumi.Input['RegistryTaskIdentityArgs'] identity: A `identity` block as defined below.
:param pulumi.Input[bool] is_system_task: Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
:param pulumi.Input[str] log_template: The template that describes the run log artifact.
:param pulumi.Input[str] name: The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
:param pulumi.Input['RegistryTaskPlatformArgs'] platform: A `platform` block as defined below.
:param pulumi.Input['RegistryTaskRegistryCredentialArgs'] registry_credential: One `registry_credential` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]] source_triggers: One or more `source_trigger` blocks as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to the Container Registry Task.
:param pulumi.Input[int] timeout_in_seconds: The timeout of this Container Registry Task in seconds. The valid range lies from 300 to 28800. Defaults to 3600.
:param pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]] timer_triggers: One or more `timer_trigger` blocks as defined below.
"""
if agent_pool_name is not None:
pulumi.set(__self__, "agent_pool_name", agent_pool_name)
if agent_setting is not None:
pulumi.set(__self__, "agent_setting", agent_setting)
if base_image_trigger is not None:
pulumi.set(__self__, "base_image_trigger", base_image_trigger)
if container_registry_id is not None:
pulumi.set(__self__, "container_registry_id", container_registry_id)
if docker_step is not None:
pulumi.set(__self__, "docker_step", docker_step)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if encoded_step is not None:
pulumi.set(__self__, "encoded_step", encoded_step)
if file_step is not None:
pulumi.set(__self__, "file_step", file_step)
if identity is not None:
pulumi.set(__self__, "identity", identity)
if is_system_task is not None:
pulumi.set(__self__, "is_system_task", is_system_task)
if log_template is not None:
pulumi.set(__self__, "log_template", log_template)
if name is not None:
pulumi.set(__self__, "name", name)
if platform is not None:
pulumi.set(__self__, "platform", platform)
if registry_credential is not None:
pulumi.set(__self__, "registry_credential", registry_credential)
if source_triggers is not None:
pulumi.set(__self__, "source_triggers", source_triggers)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if timeout_in_seconds is not None:
pulumi.set(__self__, "timeout_in_seconds", timeout_in_seconds)
if timer_triggers is not None:
pulumi.set(__self__, "timer_triggers", timer_triggers)
@property
@pulumi.getter(name="agentPoolName")
def agent_pool_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
"""
return pulumi.get(self, "agent_pool_name")
@agent_pool_name.setter
def agent_pool_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "agent_pool_name", value)
@property
@pulumi.getter(name="agentSetting")
def agent_setting(self) -> Optional[pulumi.Input['RegistryTaskAgentSettingArgs']]:
"""
A `agent_setting` block as defined below.
"""
return pulumi.get(self, "agent_setting")
@agent_setting.setter
def agent_setting(self, value: Optional[pulumi.Input['RegistryTaskAgentSettingArgs']]):
pulumi.set(self, "agent_setting", value)
@property
@pulumi.getter(name="baseImageTrigger")
def base_image_trigger(self) -> Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']]:
"""
A `base_image_trigger` block as defined below.
"""
return pulumi.get(self, "base_image_trigger")
@base_image_trigger.setter
def base_image_trigger(self, value: Optional[pulumi.Input['RegistryTaskBaseImageTriggerArgs']]):
pulumi.set(self, "base_image_trigger", value)
@property
@pulumi.getter(name="containerRegistryId")
def container_registry_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
"""
return pulumi.get(self, "container_registry_id")
@container_registry_id.setter
def container_registry_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_registry_id", value)
@property
@pulumi.getter(name="dockerStep")
def docker_step(self) -> Optional[pulumi.Input['RegistryTaskDockerStepArgs']]:
"""
A `docker_step` block as defined below.
"""
return pulumi.get(self, "docker_step")
@docker_step.setter
def docker_step(self, value: Optional[pulumi.Input['RegistryTaskDockerStepArgs']]):
pulumi.set(self, "docker_step", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should this Container Registry Task be enabled? Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter(name="encodedStep")
def encoded_step(self) -> Optional[pulumi.Input['RegistryTaskEncodedStepArgs']]:
"""
A `encoded_step` block as defined below.
"""
return pulumi.get(self, "encoded_step")
@encoded_step.setter
def encoded_step(self, value: Optional[pulumi.Input['RegistryTaskEncodedStepArgs']]):
pulumi.set(self, "encoded_step", value)
@property
@pulumi.getter(name="fileStep")
def file_step(self) -> Optional[pulumi.Input['RegistryTaskFileStepArgs']]:
"""
A `file_step` block as defined below.
"""
return pulumi.get(self, "file_step")
@file_step.setter
def file_step(self, value: Optional[pulumi.Input['RegistryTaskFileStepArgs']]):
pulumi.set(self, "file_step", value)
@property
@pulumi.getter
def identity(self) -> Optional[pulumi.Input['RegistryTaskIdentityArgs']]:
"""
A `identity` block as defined below.
"""
return pulumi.get(self, "identity")
@identity.setter
def identity(self, value: Optional[pulumi.Input['RegistryTaskIdentityArgs']]):
pulumi.set(self, "identity", value)
@property
@pulumi.getter(name="isSystemTask")
def is_system_task(self) -> Optional[pulumi.Input[bool]]:
"""
Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
"""
return pulumi.get(self, "is_system_task")
@is_system_task.setter
def is_system_task(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_system_task", value)
@property
@pulumi.getter(name="logTemplate")
def log_template(self) -> Optional[pulumi.Input[str]]:
"""
The template that describes the run log artifact.
"""
return pulumi.get(self, "log_template")
@log_template.setter
def log_template(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "log_template", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def platform(self) -> Optional[pulumi.Input['RegistryTaskPlatformArgs']]:
"""
A `platform` block as defined below.
"""
return pulumi.get(self, "platform")
@platform.setter
def platform(self, value: Optional[pulumi.Input['RegistryTaskPlatformArgs']]):
pulumi.set(self, "platform", value)
@property
@pulumi.getter(name="registryCredential")
def registry_credential(self) -> Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']]:
"""
One `registry_credential` block as defined below.
"""
return pulumi.get(self, "registry_credential")
@registry_credential.setter
def registry_credential(self, value: Optional[pulumi.Input['RegistryTaskRegistryCredentialArgs']]):
pulumi.set(self, "registry_credential", value)
@property
@pulumi.getter(name="sourceTriggers")
def source_triggers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]]:
"""
One or more `source_trigger` blocks as defined below.
"""
return pulumi.get(self, "source_triggers")
@source_triggers.setter
def source_triggers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskSourceTriggerArgs']]]]):
pulumi.set(self, "source_triggers", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags which should be assigned to the Container Registry Task.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="timeoutInSeconds")
def timeout_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
The timeout of this Container Registry Task in seconds. The valid range lies from 300 to 28800. Defaults to 3600.
"""
return pulumi.get(self, "timeout_in_seconds")
@timeout_in_seconds.setter
def timeout_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "timeout_in_seconds", value)
@property
@pulumi.getter(name="timerTriggers")
def timer_triggers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]]:
"""
One or more `timer_trigger` blocks as defined below.
"""
return pulumi.get(self, "timer_triggers")
@timer_triggers.setter
def timer_triggers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RegistryTaskTimerTriggerArgs']]]]):
pulumi.set(self, "timer_triggers", value)
class RegistryTask(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
agent_pool_name: Optional[pulumi.Input[str]] = None,
agent_setting: Optional[pulumi.Input[pulumi.InputType['RegistryTaskAgentSettingArgs']]] = None,
base_image_trigger: Optional[pulumi.Input[pulumi.InputType['RegistryTaskBaseImageTriggerArgs']]] = None,
container_registry_id: Optional[pulumi.Input[str]] = None,
docker_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskDockerStepArgs']]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
encoded_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskEncodedStepArgs']]] = None,
file_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskFileStepArgs']]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['RegistryTaskIdentityArgs']]] = None,
is_system_task: Optional[pulumi.Input[bool]] = None,
log_template: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
platform: Optional[pulumi.Input[pulumi.InputType['RegistryTaskPlatformArgs']]] = None,
registry_credential: Optional[pulumi.Input[pulumi.InputType['RegistryTaskRegistryCredentialArgs']]] = None,
source_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskSourceTriggerArgs']]]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
timer_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskTimerTriggerArgs']]]]] = None,
__props__=None):
"""
Manages a Container Registry Task.
## Example Usage
```python
import pulumi
        import pulumi_azure as azure

        example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
        example_registry = azure.containerservice.Registry("exampleRegistry",
            resource_group_name=example_resource_group.name,
            location=example_resource_group.location,
            sku="Basic")
        example_registry_task = azure.containerservice.RegistryTask("exampleRegistryTask",
            container_registry_id=example_registry.id,
            platform=azure.containerservice.RegistryTaskPlatformArgs(
                os="Linux",
            ),
            docker_step=azure.containerservice.RegistryTaskDockerStepArgs(
                dockerfile_path="Dockerfile",
                context_path="https://github.com/<user name>/acr-build-helloworld-node#main",
                context_access_token="<github personal access token>",
                image_names=["helloworld:{{.Run.ID}}"],
            ))
        ```

        ## Import

        Container Registry Tasks can be imported using the `resource id`, e.g.

        ```sh
         $ pulumi import azure:containerservice/registryTask:RegistryTask example /subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.ContainerRegistry/registries/registry1/tasks/task1
        ```

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] agent_pool_name: The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
        :param pulumi.Input[pulumi.InputType['RegistryTaskAgentSettingArgs']] agent_setting: An `agent_setting` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskBaseImageTriggerArgs']] base_image_trigger: A `base_image_trigger` block as defined below.
        :param pulumi.Input[str] container_registry_id: The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
        :param pulumi.Input[pulumi.InputType['RegistryTaskDockerStepArgs']] docker_step: A `docker_step` block as defined below.
        :param pulumi.Input[bool] enabled: Should this Container Registry Task be enabled? Defaults to `true`.
        :param pulumi.Input[pulumi.InputType['RegistryTaskEncodedStepArgs']] encoded_step: An `encoded_step` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskFileStepArgs']] file_step: A `file_step` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskIdentityArgs']] identity: An `identity` block as defined below.
        :param pulumi.Input[bool] is_system_task: Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
        :param pulumi.Input[str] log_template: The template that describes the run log artifact.
        :param pulumi.Input[str] name: The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
        :param pulumi.Input[pulumi.InputType['RegistryTaskPlatformArgs']] platform: A `platform` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskRegistryCredentialArgs']] registry_credential: One `registry_credential` block as defined below.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskSourceTriggerArgs']]]] source_triggers: One or more `source_trigger` blocks as defined below.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to the Container Registry Task.
        :param pulumi.Input[int] timeout_in_seconds: The timeout of this Container Registry Task in seconds. Valid values range from 300 to 28800. Defaults to 3600.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskTimerTriggerArgs']]]] timer_triggers: One or more `timer_trigger` blocks as defined below.
        """
        ...
    @overload
    def __init__(__self__,
                 resource_name: str,
                 args: RegistryTaskArgs,
                 opts: Optional[pulumi.ResourceOptions] = None):
        """
        Manages a Container Registry Task.

        ## Example Usage

        ```python
        import pulumi
        import pulumi_azure as azure

        example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
        example_registry = azure.containerservice.Registry("exampleRegistry",
            resource_group_name=example_resource_group.name,
            location=example_resource_group.location,
            sku="Basic")
        example_registry_task = azure.containerservice.RegistryTask("exampleRegistryTask",
            container_registry_id=example_registry.id,
            platform=azure.containerservice.RegistryTaskPlatformArgs(
                os="Linux",
            ),
            docker_step=azure.containerservice.RegistryTaskDockerStepArgs(
                dockerfile_path="Dockerfile",
                context_path="https://github.com/<user name>/acr-build-helloworld-node#main",
                context_access_token="<github personal access token>",
                image_names=["helloworld:{{.Run.ID}}"],
            ))
        ```

        ## Import

        Container Registry Tasks can be imported using the `resource id`, e.g.

        ```sh
         $ pulumi import azure:containerservice/registryTask:RegistryTask example /subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.ContainerRegistry/registries/registry1/tasks/task1
        ```

        :param str resource_name: The name of the resource.
        :param RegistryTaskArgs args: The arguments to use to populate this resource's properties.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        ...
    def __init__(__self__, resource_name: str, *args, **kwargs):
        resource_args, opts = _utilities.get_resource_args_opts(RegistryTaskArgs, pulumi.ResourceOptions, *args, **kwargs)
        if resource_args is not None:
            __self__._internal_init(resource_name, opts, **resource_args.__dict__)
        else:
            __self__._internal_init(resource_name, *args, **kwargs)
    def _internal_init(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
                 agent_pool_name: Optional[pulumi.Input[str]] = None,
                 agent_setting: Optional[pulumi.Input[pulumi.InputType['RegistryTaskAgentSettingArgs']]] = None,
                 base_image_trigger: Optional[pulumi.Input[pulumi.InputType['RegistryTaskBaseImageTriggerArgs']]] = None,
                 container_registry_id: Optional[pulumi.Input[str]] = None,
                 docker_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskDockerStepArgs']]] = None,
                 enabled: Optional[pulumi.Input[bool]] = None,
                 encoded_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskEncodedStepArgs']]] = None,
                 file_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskFileStepArgs']]] = None,
                 identity: Optional[pulumi.Input[pulumi.InputType['RegistryTaskIdentityArgs']]] = None,
                 is_system_task: Optional[pulumi.Input[bool]] = None,
                 log_template: Optional[pulumi.Input[str]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 platform: Optional[pulumi.Input[pulumi.InputType['RegistryTaskPlatformArgs']]] = None,
                 registry_credential: Optional[pulumi.Input[pulumi.InputType['RegistryTaskRegistryCredentialArgs']]] = None,
                 source_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskSourceTriggerArgs']]]]] = None,
                 tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 timeout_in_seconds: Optional[pulumi.Input[int]] = None,
                 timer_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskTimerTriggerArgs']]]]] = None,
                 __props__=None):
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = _utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = RegistryTaskArgs.__new__(RegistryTaskArgs)

            __props__.__dict__["agent_pool_name"] = agent_pool_name
            __props__.__dict__["agent_setting"] = agent_setting
            __props__.__dict__["base_image_trigger"] = base_image_trigger
            if container_registry_id is None and not opts.urn:
                raise TypeError("Missing required property 'container_registry_id'")
            __props__.__dict__["container_registry_id"] = container_registry_id
            __props__.__dict__["docker_step"] = docker_step
            __props__.__dict__["enabled"] = enabled
            __props__.__dict__["encoded_step"] = encoded_step
            __props__.__dict__["file_step"] = file_step
            __props__.__dict__["identity"] = identity
            __props__.__dict__["is_system_task"] = is_system_task
            __props__.__dict__["log_template"] = log_template
            __props__.__dict__["name"] = name
            __props__.__dict__["platform"] = platform
            __props__.__dict__["registry_credential"] = registry_credential
            __props__.__dict__["source_triggers"] = source_triggers
            __props__.__dict__["tags"] = tags
            __props__.__dict__["timeout_in_seconds"] = timeout_in_seconds
            __props__.__dict__["timer_triggers"] = timer_triggers
        super(RegistryTask, __self__).__init__(
            'azure:containerservice/registryTask:RegistryTask',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None,
            agent_pool_name: Optional[pulumi.Input[str]] = None,
            agent_setting: Optional[pulumi.Input[pulumi.InputType['RegistryTaskAgentSettingArgs']]] = None,
            base_image_trigger: Optional[pulumi.Input[pulumi.InputType['RegistryTaskBaseImageTriggerArgs']]] = None,
            container_registry_id: Optional[pulumi.Input[str]] = None,
            docker_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskDockerStepArgs']]] = None,
            enabled: Optional[pulumi.Input[bool]] = None,
            encoded_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskEncodedStepArgs']]] = None,
            file_step: Optional[pulumi.Input[pulumi.InputType['RegistryTaskFileStepArgs']]] = None,
            identity: Optional[pulumi.Input[pulumi.InputType['RegistryTaskIdentityArgs']]] = None,
            is_system_task: Optional[pulumi.Input[bool]] = None,
            log_template: Optional[pulumi.Input[str]] = None,
            name: Optional[pulumi.Input[str]] = None,
            platform: Optional[pulumi.Input[pulumi.InputType['RegistryTaskPlatformArgs']]] = None,
            registry_credential: Optional[pulumi.Input[pulumi.InputType['RegistryTaskRegistryCredentialArgs']]] = None,
            source_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskSourceTriggerArgs']]]]] = None,
            tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
            timeout_in_seconds: Optional[pulumi.Input[int]] = None,
            timer_triggers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskTimerTriggerArgs']]]]] = None) -> 'RegistryTask':
        """
        Get an existing RegistryTask resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] agent_pool_name: The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
        :param pulumi.Input[pulumi.InputType['RegistryTaskAgentSettingArgs']] agent_setting: An `agent_setting` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskBaseImageTriggerArgs']] base_image_trigger: A `base_image_trigger` block as defined below.
        :param pulumi.Input[str] container_registry_id: The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
        :param pulumi.Input[pulumi.InputType['RegistryTaskDockerStepArgs']] docker_step: A `docker_step` block as defined below.
        :param pulumi.Input[bool] enabled: Should this Container Registry Task be enabled? Defaults to `true`.
        :param pulumi.Input[pulumi.InputType['RegistryTaskEncodedStepArgs']] encoded_step: An `encoded_step` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskFileStepArgs']] file_step: A `file_step` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskIdentityArgs']] identity: An `identity` block as defined below.
        :param pulumi.Input[bool] is_system_task: Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
        :param pulumi.Input[str] log_template: The template that describes the run log artifact.
        :param pulumi.Input[str] name: The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
        :param pulumi.Input[pulumi.InputType['RegistryTaskPlatformArgs']] platform: A `platform` block as defined below.
        :param pulumi.Input[pulumi.InputType['RegistryTaskRegistryCredentialArgs']] registry_credential: One `registry_credential` block as defined below.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskSourceTriggerArgs']]]] source_triggers: One or more `source_trigger` blocks as defined below.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to the Container Registry Task.
        :param pulumi.Input[int] timeout_in_seconds: The timeout of this Container Registry Task in seconds. Valid values range from 300 to 28800. Defaults to 3600.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RegistryTaskTimerTriggerArgs']]]] timer_triggers: One or more `timer_trigger` blocks as defined below.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = _RegistryTaskState.__new__(_RegistryTaskState)

        __props__.__dict__["agent_pool_name"] = agent_pool_name
        __props__.__dict__["agent_setting"] = agent_setting
        __props__.__dict__["base_image_trigger"] = base_image_trigger
        __props__.__dict__["container_registry_id"] = container_registry_id
        __props__.__dict__["docker_step"] = docker_step
        __props__.__dict__["enabled"] = enabled
        __props__.__dict__["encoded_step"] = encoded_step
        __props__.__dict__["file_step"] = file_step
        __props__.__dict__["identity"] = identity
        __props__.__dict__["is_system_task"] = is_system_task
        __props__.__dict__["log_template"] = log_template
        __props__.__dict__["name"] = name
        __props__.__dict__["platform"] = platform
        __props__.__dict__["registry_credential"] = registry_credential
        __props__.__dict__["source_triggers"] = source_triggers
        __props__.__dict__["tags"] = tags
        __props__.__dict__["timeout_in_seconds"] = timeout_in_seconds
        __props__.__dict__["timer_triggers"] = timer_triggers
        return RegistryTask(resource_name, opts=opts, __props__=__props__)
    @property
    @pulumi.getter(name="agentPoolName")
    def agent_pool_name(self) -> pulumi.Output[Optional[str]]:
        """
        The name of the dedicated Container Registry Agent Pool for this Container Registry Task.
        """
        return pulumi.get(self, "agent_pool_name")

    @property
    @pulumi.getter(name="agentSetting")
    def agent_setting(self) -> pulumi.Output[Optional['outputs.RegistryTaskAgentSetting']]:
        """
        An `agent_setting` block as defined below.
        """
        return pulumi.get(self, "agent_setting")

    @property
    @pulumi.getter(name="baseImageTrigger")
    def base_image_trigger(self) -> pulumi.Output[Optional['outputs.RegistryTaskBaseImageTrigger']]:
        """
        A `base_image_trigger` block as defined below.
        """
        return pulumi.get(self, "base_image_trigger")

    @property
    @pulumi.getter(name="containerRegistryId")
    def container_registry_id(self) -> pulumi.Output[str]:
        """
        The ID of the Container Registry that this Container Registry Task resides in. Changing this forces a new Container Registry Task to be created.
        """
        return pulumi.get(self, "container_registry_id")

    @property
    @pulumi.getter(name="dockerStep")
    def docker_step(self) -> pulumi.Output[Optional['outputs.RegistryTaskDockerStep']]:
        """
        A `docker_step` block as defined below.
        """
        return pulumi.get(self, "docker_step")

    @property
    @pulumi.getter
    def enabled(self) -> pulumi.Output[Optional[bool]]:
        """
        Should this Container Registry Task be enabled? Defaults to `true`.
        """
        return pulumi.get(self, "enabled")

    @property
    @pulumi.getter(name="encodedStep")
    def encoded_step(self) -> pulumi.Output[Optional['outputs.RegistryTaskEncodedStep']]:
        """
        An `encoded_step` block as defined below.
        """
        return pulumi.get(self, "encoded_step")

    @property
    @pulumi.getter(name="fileStep")
    def file_step(self) -> pulumi.Output[Optional['outputs.RegistryTaskFileStep']]:
        """
        A `file_step` block as defined below.
        """
        return pulumi.get(self, "file_step")

    @property
    @pulumi.getter
    def identity(self) -> pulumi.Output[Optional['outputs.RegistryTaskIdentity']]:
        """
        An `identity` block as defined below.
        """
        return pulumi.get(self, "identity")

    @property
    @pulumi.getter(name="isSystemTask")
    def is_system_task(self) -> pulumi.Output[Optional[bool]]:
        """
        Whether this Container Registry Task is a system task. Changing this forces a new Container Registry Task to be created. Defaults to `false`.
        """
        return pulumi.get(self, "is_system_task")

    @property
    @pulumi.getter(name="logTemplate")
    def log_template(self) -> pulumi.Output[Optional[str]]:
        """
        The template that describes the run log artifact.
        """
        return pulumi.get(self, "log_template")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        The name which should be used for this Container Registry Task. Changing this forces a new Container Registry Task to be created.
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def platform(self) -> pulumi.Output[Optional['outputs.RegistryTaskPlatform']]:
        """
        A `platform` block as defined below.
        """
        return pulumi.get(self, "platform")

    @property
    @pulumi.getter(name="registryCredential")
    def registry_credential(self) -> pulumi.Output[Optional['outputs.RegistryTaskRegistryCredential']]:
        """
        One `registry_credential` block as defined below.
        """
        return pulumi.get(self, "registry_credential")

    @property
    @pulumi.getter(name="sourceTriggers")
    def source_triggers(self) -> pulumi.Output[Optional[Sequence['outputs.RegistryTaskSourceTrigger']]]:
        """
        One or more `source_trigger` blocks as defined below.
        """
        return pulumi.get(self, "source_triggers")

    @property
    @pulumi.getter
    def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
        """
        A mapping of tags which should be assigned to the Container Registry Task.
        """
        return pulumi.get(self, "tags")

    @property
    @pulumi.getter(name="timeoutInSeconds")
    def timeout_in_seconds(self) -> pulumi.Output[Optional[int]]:
        """
        The timeout of this Container Registry Task in seconds. Valid values range from 300 to 28800. Defaults to 3600.
        """
        return pulumi.get(self, "timeout_in_seconds")

    @property
    @pulumi.getter(name="timerTriggers")
    def timer_triggers(self) -> pulumi.Output[Optional[Sequence['outputs.RegistryTaskTimerTrigger']]]:
        """
        One or more `timer_trigger` blocks as defined below.
        """
        return pulumi.get(self, "timer_triggers")
75ac9c4cea19054b561deab166bfc74098c43741 | 3,652 | py | Python | Examples/Cell_Free_System/Code/functions_learning.py | amirpandi/METIS | 8fc39fdc598c7e4564d45cd431df4dc964c8e77f | [
"MIT"
] | 1 | 2022-01-08T08:37:09.000Z | 2022-01-08T08:37:09.000Z | Examples/Cell_Free_System/Code/functions_learning.py | amirpandi/METIS | 8fc39fdc598c7e4564d45cd431df4dc964c8e77f | [
"MIT"
] | 3 | 2022-01-08T08:51:19.000Z | 2022-03-23T12:54:16.000Z | Examples/Cell_Free_System/Code/functions_learning.py | amirpandi/METIS | 8fc39fdc598c7e4564d45cd431df4dc964c8e77f | [
"MIT"
] | 1 | 2022-01-05T20:13:03.000Z | 2022-01-05T20:13:03.000Z | def active_learning(regressor, gold_regressor, allowed_conc, test_size = 100, steps = 10, verbose=0):
    ## first step
    if verbose:
        print('step: 1')
    # make first dataset
    X_train_1 = random_input(allowed_conc, test_size)
    # first fit
    regressor.fit(X_train_1, gold_regressor.predict(X_train_1))
    # save results
    result = pd.DataFrame(X_train_1)
    result['gold_yield'] = gold_regressor.predict(X_train_1)
    result['pred_yield'] = 0.0  # not available yet; use 0.0 to avoid downstream errors
    result['step'] = 'step_1'

    ## next steps loop
    for step in range(steps - 1):
        if verbose >= 2:
            print('step: ', step + 2)
        # make i-th dataset
        X_train_1_1 = random_input(allowed_conc, 100000)
        df_1 = pd.DataFrame(X_train_1_1)
        df_1['pred_yield'] = regressor.predict(X_train_1_1)
        df_1 = df_1.sort_values(['pred_yield'], ascending=False)
        X_train_2 = df_1.iloc[0:test_size, 0:11].values
        # save and add results
        temp_result = pd.DataFrame(X_train_2)
        temp_result['gold_yield'] = gold_regressor.predict(X_train_2)
        temp_result['pred_yield'] = df_1.iloc[0:test_size, 11:12].values
        temp_result['step'] = 'step_{}'.format(step + 2)
        result = pd.concat([result, temp_result], ignore_index=True)
        # update and refit regressor on all data gathered so far
        regressor.fit(result.iloc[:, 0:11].values, result.iloc[:, 11].values)
    return result, regressor
def bayesian_optimization(regressors_list,
                          gold_regressor,
                          allowed_conc,
                          exploitation=1, exploration=1, test_size=100, steps=10, verbose=0):
    ## first step
    if verbose:
        print('step: 1')
    # make first dataset
    X_train_1 = random_input(allowed_conc, test_size)
    # first fit
    for regressor in regressors_list:
        regressor.fit(X_train_1, gold_regressor.predict(X_train_1))
    # save results
    result = pd.DataFrame(X_train_1)
    result['gold_yield'] = gold_regressor.predict(X_train_1)
    result['pred_yield'] = 0.0  # not available yet; use 0.0 to avoid downstream errors
    result['step'] = 'step_1'

    ## next steps loop
    for step in range(steps - 1):
        if verbose >= 2:
            print('step: ', step + 2)
        # make i-th dataset
        X_train_1_1 = random_input(allowed_conc, 100000)
        df_1 = pd.DataFrame(X_train_1_1)
        # Upper Confidence Bound
        for index, regressor in enumerate(regressors_list):
            df_1['pred_yield_{}'.format(index)] = regressor.predict(X_train_1_1)
        df_1['regressors_std'] = df_1[[str(i) for i in df_1.columns if 'pred_yield' in str(i)]].std(axis=1)
        df_1['mean_vote'] = df_1[[str(i) for i in df_1.columns if 'pred_yield' in str(i)]].mean(axis=1)
        df_1['UCB'] = exploitation * df_1['mean_vote'] + exploration * df_1['regressors_std']
        df_1 = df_1.sort_values(['UCB'], ascending=False)
        X_train_2 = df_1.iloc[0:test_size, 0:11].values
        # save and add results
        temp_result = pd.DataFrame(X_train_2)
        temp_result['gold_yield'] = gold_regressor.predict(X_train_2)
        #temp_result['pred_yield'] = df_1.iloc[0:test_size, 11:12].values
        temp_result['pred_yield'] = df_1.mean_vote[0:test_size].values
        temp_result['step'] = 'step_{}'.format(step + 2)
        result = pd.concat([result, temp_result], ignore_index=True)
        # update and refit every regressor in the ensemble on all data gathered so far,
        # so the disagreement (std) term stays meaningful
        for regressor in regressors_list:
            regressor.fit(result.iloc[:, 0:11].values, result.iloc[:, 11].values)
    return result, regressors_list
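The acquisition rule above (ensemble mean plus a disagreement bonus) can be checked on its own with plain numpy. Note that pandas' `.std()` defaults to the sample standard deviation (`ddof=1`), which this sketch reproduces:

```python
import numpy as np

# rows = ensemble members, columns = candidate formulations
preds = np.array([[0.8, 0.2],
                  [1.0, 0.4],
                  [0.6, 0.6]])
mean_vote = preds.mean(axis=0)        # exploitation term ("mean_vote")
spread = preds.std(axis=0, ddof=1)    # exploration term, sample std like pandas
ucb = 1.0 * mean_vote + 1.0 * spread  # exploitation = exploration = 1
best = int(np.argmax(ucb))            # candidate picked next
print(ucb, best)
```

Here both candidates have the same spread (0.2), so the candidate with the higher mean prediction wins.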
# torchir/networks/__init__.py (repo: BDdeVos/TorchIR, MIT license)
from torchir.networks.dirnet import DIRNet
from torchir.networks.globalnet import AIRNet, RigidIRNet
from torchir.networks.unet import UNet
# malaya_speech/config/fastspeech2.py (repo: ishine/malaya-speech, MIT license)
config = {
    'n_speakers': 1,
    'encoder_hidden_size': 384,
    'encoder_num_hidden_layers': 4,
    'encoder_num_attention_heads': 2,
    'encoder_attention_head_size': 192,
    'encoder_intermediate_size': 1024,
    'encoder_intermediate_kernel_size': 3,
    'encoder_hidden_act': 'mish',
    'decoder_hidden_size': 384,
    'decoder_num_hidden_layers': 4,
    'decoder_num_attention_heads': 2,
    'decoder_attention_head_size': 192,
    'decoder_intermediate_size': 1024,
    'decoder_intermediate_kernel_size': 3,
    'decoder_hidden_act': 'mish',
    'variant_prediction_num_conv_layers': 2,
    'variant_predictor_filter': 256,
    'variant_predictor_kernel_size': 3,
    'variant_predictor_dropout_rate': 0.5,
    'num_mels': 80,
    'hidden_dropout_prob': 0.2,
    'attention_probs_dropout_prob': 0.1,
    'max_position_embeddings': 2048,
    'initializer_range': 0.02,
    'output_attentions': False,
    'output_hidden_states': False,
}

config_v2 = {
    'n_speakers': 1,
    'encoder_hidden_size': 256,
    'encoder_num_hidden_layers': 3,
    'encoder_num_attention_heads': 2,
    'encoder_attention_head_size': 16,
    'encoder_intermediate_size': 1024,
    'encoder_intermediate_kernel_size': 3,
    'encoder_hidden_act': 'mish',
    'decoder_hidden_size': 256,
    'decoder_num_hidden_layers': 3,
    'decoder_num_attention_heads': 2,
    'decoder_attention_head_size': 16,
    'decoder_intermediate_size': 1024,
    'decoder_intermediate_kernel_size': 3,
    'decoder_hidden_act': 'mish',
    'variant_prediction_num_conv_layers': 2,
    'variant_predictor_filter': 256,
    'variant_predictor_kernel_size': 3,
    'variant_predictor_dropout_rate': 0.5,
    'num_mels': 80,
    'hidden_dropout_prob': 0.2,
    'attention_probs_dropout_prob': 0.1,
    'max_position_embeddings': 2048,
    'initializer_range': 0.02,
    'output_attentions': False,
    'output_hidden_states': False,
}
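The two configs share every key; only a handful of encoder/decoder sizes shrink in `config_v2`. A quick dict comprehension makes the difference explicit. This sketch inlines just a subset of the keys from the dictionaries above, enough to show the pattern:

```python
# Subset of the two config dicts above (encoder side only).
config = {"encoder_hidden_size": 384, "encoder_num_hidden_layers": 4,
          "encoder_attention_head_size": 192, "encoder_intermediate_size": 1024}
config_v2 = {"encoder_hidden_size": 256, "encoder_num_hidden_layers": 3,
             "encoder_attention_head_size": 16, "encoder_intermediate_size": 1024}

# keys whose values differ, mapped to (v1, v2) pairs
changed = {k: (config[k], config_v2[k]) for k in config if config[k] != config_v2[k]}
print(changed)
```

On this subset, three of the four encoder settings change; the intermediate size stays at 1024.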
# Python/Tests/TestData/Grammar/ImportStmt.py (repo: nanshuiyu/pytools, Apache-2.0 license)
import sys
import sys, fob
import sys as oar
import sys as oar, fob as baz
import sys.fob
import sys.fob as oar
# bevodevo/policies/mlps.py (repo: riveSunder/bevodevo, MIT license)
from abc import ABC, abstractmethod
from collections import OrderedDict
from functools import reduce

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

import gym
import matplotlib.pyplot as plt

from bevodevo.policies.base import Policy


class MLPPolicy(nn.Module):
    def __init__(self, args, discrete=False, use_grad=False):
        super(MLPPolicy, self).__init__()

        self.use_grad = use_grad

        # architecture params
        self.input_dim = args["dim_x"]
        self.action_dim = args["dim_y"]
        self.hid_dims = args["dim_h"]
        self.hid_dims = [self.hid_dims] if type(self.hid_dims) is not list else self.hid_dims
        self.activations = nn.ReLU  # args["activations"]
        self.discrete = discrete
        self.use_bias = False
        self.var = 1.e-2

        if type(self.activations) == list:
            if len(self.activations) <= (len(self.hid_dims) + 1):
                # use no activation after the list of act fns is used up
                for ii in range(len(self.activations), len(self.hid_dims) + 1):
                    # identity function for layers missing activations
                    self.activations.append(lambda x: x)
            elif len(self.activations) >= (len(self.hid_dims) + 1):
                print("warning: activation list has {} functions but MLP has only {} layers"\
                        .format(len(self.activations), len(self.hid_dims) + 1))
                print("... truncating activation function list")
                self.activations = self.activations[:len(self.hid_dims)]
        else:
            self.activations = [self.activations] * len(self.hid_dims)

        if self.discrete:
            pass
            #self.activations.append(lambda x: x)
        else:
            self.activations.append(nn.Tanh)

        self.init_params()

        if args["params"] is not None:
            self.set_params(args["params"])
    def init_params(self):
        self.layers = nn.Sequential(OrderedDict([\
                ("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
                ("activation_0", self.activations[0]())\
                ]))

        # connect each hidden layer to the next
        for jj in range(1, len(self.hid_dims)):
            self.layers.add_module("layer{}".format(jj),\
                    nn.Linear(self.hid_dims[jj-1], self.hid_dims[jj], bias=self.use_bias))
            self.layers.add_module("activation{}".format(jj), self.activations[jj]())

        self.layers.add_module("output_layer",\
                nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))

        if self.discrete:
            pass
        else:
            self.layers.add_module("output_activation",\
                    self.activations[-1]())

        for param in self.layers.parameters():
            param.requires_grad = self.use_grad

        self.num_params = self.get_params().shape[0]
    def forward(self, x):
        if type(x) is not torch.Tensor:
            x = torch.tensor(x)
        x = x.to(torch.float32)
        #if True in [p.is_cuda for p in self.parameters()]:
        #    x = x.to(torch.device("cuda"))
        if len(x.shape) == 1:
            x = x.unsqueeze(0)

        x = self.layers(x)
        return x

    def get_action(self, x):
        y = self.forward(x)

        if self.discrete:
            act = torch.argmax(y, dim=-1)
        else:
            act = y

        return act.detach().cpu().numpy()

    def get_params(self):
        params = np.array([])

        for param in self.layers.named_parameters():
            params = np.append(params, param[1].detach().numpy().ravel())

        return params

    def set_params(self, my_params):
        param_start = 0

        for name, param in self.named_parameters():
            param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
            param[:] = torch.nn.Parameter(torch.tensor(\
                    my_params[param_start:param_stop].reshape(param.shape), requires_grad=self.use_grad), \
                    requires_grad=self.use_grad)
            # advance to the next slice of the flat parameter vector
            param_start = param_stop

    def reset(self):
        pass
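The `get_params`/`set_params` pair treats the whole network as one flat genome vector, which is what the evolutionary outer loop mutates. The round-trip logic can be shown in plain numpy; the layer shapes here are hypothetical (a bias-free 4-32-2 MLP), not taken from the code above:

```python
import numpy as np

# Hypothetical bias-free MLP: 4 inputs -> 32 hidden -> 2 outputs.
shapes = [(32, 4), (2, 32)]

# "get_params": flatten every weight matrix into one genome vector
weights = [np.random.randn(*s) for s in shapes]
flat = np.concatenate([w.ravel() for w in weights])

# "set_params": walk the flat vector, advancing the offset after each layer
start, restored = 0, []
for s in shapes:
    stop = start + int(np.prod(s))
    restored.append(flat[start:stop].reshape(s))
    start = stop
print(flat.size)
```

Advancing `start` to `stop` after each layer is what keeps consecutive layers from reading the same slice of the genome.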
class HebbianMLP(MLPPolicy):

    def __init__(self, args, discrete=False, use_grad=False, plastic=True):
        self.plastic = plastic
        self.lr_layers = None
        self.e_min = -1.
        self.e_max = 1.

        super(HebbianMLP, self).__init__(args, discrete, use_grad)

        self.set_traces()

    def set_traces(self):
        self.dim_list = [self.input_dim]
        self.dim_list.extend(self.hid_dims)
        self.dim_list.append(self.action_dim)

        if self.plastic:
            self.init_traces()
        else:
            self.clear_nodes()
    def init_traces(self):
        # clear node activations, start at 0 everywhere
        self.clear_nodes()

        # initialize learning rate values
        self.lr_layers = nn.Sequential(OrderedDict([\
                ("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
                ("activation_0", self.activations[0]())\
                ]))
        self.eligibility_layers = [torch.zeros(self.input_dim, self.hid_dims[0])]

        # one learning-rate matrix and one eligibility trace per hidden-to-hidden weight
        for jj in range(1, len(self.hid_dims)):
            self.lr_layers.add_module("layer{}".format(jj),\
                    nn.Linear(self.hid_dims[jj-1], self.hid_dims[jj], bias=self.use_bias))
            self.lr_layers.add_module("activation{}".format(jj), self.activations[jj]())
            self.eligibility_layers.append(torch.zeros(self.hid_dims[jj-1], self.hid_dims[jj]))

        self.lr_layers.add_module("output_layer",\
                nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))
        self.eligibility_layers.append(torch.zeros(self.hid_dims[-1], self.action_dim))

        if self.discrete:
            pass
        else:
            self.lr_layers.add_module("output_activation",\
                    self.activations[-1]())

        for param in self.lr_layers.parameters():
            param.requires_grad = self.use_grad

        self.num_params = self.get_params().shape[0]
    def clear_nodes(self):
        self.nodes = [torch.zeros(1, elem) for elem in self.dim_list]

    def clear_traces(self):
        if self.plastic:
            self.eligibility_layers = [0.0 * elem for elem in self.eligibility_layers]

    def forward(self, x):
        if type(x) is not torch.Tensor:
            x = torch.tensor(x)
        x = x.to(torch.float32)
        if len(x.shape) == 1:
            x = x.unsqueeze(0)

        trace_count = 0
        self.nodes[trace_count] = x.clone()#.squeeze()
        for name, module in self.layers.named_modules():
            if "layer" in name:
                trace_count += 1
                x = module(x)
                self.nodes[trace_count] = x.clone()
            elif "activation" in name:
                x = module(x)

        if self.plastic:
            self.update()

        return x
    def update(self):
        num_layers = len(list(self.layers.named_parameters()))
        layer_count = 0
        for lr_param, param in zip(list(self.lr_layers.named_parameters()), list(self.layers.named_parameters())):
            layer_dim_x, layer_dim_y = param[1].shape[1], param[1].shape[0]
            self.eligibility_layers[layer_count] += torch.matmul(self.nodes[layer_count].T, self.nodes[layer_count+1])
            self.eligibility_layers[layer_count] = torch.clamp(self.eligibility_layers[layer_count], min=self.e_min, max=self.e_max)

            for ii in range(layer_dim_x):
                for jj in range(layer_dim_y):
                    param[1][jj, ii] = param[1][jj, ii] + lr_param[1][jj, ii] * self.eligibility_layers[layer_count][ii, jj]
            layer_count += 1
def get_params(self):
params = np.array([])
for param in self.layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
if self.lr_layers is not None and self.plastic:
for param in self.lr_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
    def set_params(self, my_params):
        param_start = 0
        for name, param in self.layers.named_parameters():
            param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
            param[:] = torch.nn.Parameter(torch.tensor(
                    my_params[param_start:param_stop].reshape(param.shape),
                    requires_grad=self.use_grad),
                    requires_grad=self.use_grad)
            # advance the offset so the next tensor reads its own slice
            param_start = param_stop
        if self.plastic:
            for name, param in self.lr_layers.named_parameters():
                param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
                param[:] = torch.nn.Parameter(torch.tensor(
                        my_params[param_start:param_stop].reshape(param.shape),
                        requires_grad=self.use_grad),
                        requires_grad=self.use_grad)
                param_start = param_stop
def reset(self):
self.clear_nodes()
self.clear_traces()
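The element-wise double loop in `update()` above is equivalent to a single outer-product step per layer. A minimal standalone numpy sketch of that equivalence (the array names and values here are illustrative, not part of this class):

```python
import numpy as np

# pre- and post-synaptic activations for one layer (batch size 1)
pre = np.array([[0.5, -1.0, 0.25]])   # shape (1, 3)
post = np.array([[1.0, 2.0]])         # shape (1, 2)

# the eligibility trace accumulates Hebbian co-activity, clamped to a range
eligibility = np.zeros((3, 2))
eligibility += pre.T @ post
eligibility = np.clip(eligibility, -2.0, 2.0)

# per-weight learning rates scale the trace; the trace is stored as
# (pre, post) while the weight matrix is (post, pre), hence the transpose
lr = np.full((2, 3), 0.1)
W = np.zeros((2, 3))
W = W + lr * eligibility.T
```

This performs the same arithmetic as the `for ii ... for jj ...` loops, one layer at a time, without Python-level iteration.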
class HebbianMetaMLP(HebbianMLP):
def __init__(self, args, discrete=False, use_grad=False):
super(HebbianMetaMLP, self).__init__(args, discrete, use_grad)
self.plastic = True
self.reset()
def get_params(self):
params = np.array([])
if self.lr_layers is not None and self.plastic:
for param in self.lr_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
    def set_params(self, my_params):
        param_start = 0
        for name, param in self.lr_layers.named_parameters():
            param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
            param[:] = torch.nn.Parameter(torch.tensor(
                    my_params[param_start:param_stop].reshape(param.shape),
                    requires_grad=self.use_grad),
                    requires_grad=self.use_grad)
            # advance the offset so the next tensor reads its own slice
            param_start = param_stop
def reset(self):
self.init_params()
self.clear_nodes()
self.clear_traces()
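The `get_params`/`set_params` pair above flattens every tensor into one vector and reads it back with a running offset. A small self-contained sketch of that round trip (shapes are illustrative only):

```python
import numpy as np
from functools import reduce

shapes = [(4, 3), (2, 4)]                       # illustrative layer shapes
tensors = [np.random.randn(*s) for s in shapes]

# pack: concatenate the raveled tensors into one flat vector
flat = np.concatenate([t.ravel() for t in tensors])

# unpack: advance the offset past each tensor's element count
restored = []
start = 0
for s in shapes:
    stop = start + reduce(lambda x, y: x * y, s)
    restored.append(flat[start:stop].reshape(s))
    start = stop                                # move to the next slice

assert all(np.array_equal(a, b) for a, b in zip(tensors, restored))
```

Forgetting the `start = stop` step makes every tensor read the same leading slice of the flat vector, which silently corrupts the unpacked parameters.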
class ABCHebbianMLP(HebbianMLP):
def __init__(self, args, discrete=False, use_grad=False, plastic=True):
super(ABCHebbianMLP, self).__init__(args, discrete, use_grad, plastic)
def init_traces(self):
# clear node activations, start at 0 everywhere
self.clear_nodes()
# learning rules are encoded by lr, A, B, C.
# \delta W_{ij} = lr * (A * o_i*o_j + B * o_i + C * o_j)
# initialize learning rate values
self.lr_layers = nn.Sequential(OrderedDict([\
("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
("activation_0", self.activations[0]())\
]))
# Hebbian coefficient A
self.a_layers = nn.Sequential(OrderedDict([\
("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
("activation_0", self.activations[0]())\
]))
# pre-synaptic coefficient B
self.b_layers = nn.Sequential(OrderedDict([\
("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
("activation_0", self.activations[0]())\
]))
# post-synaptic coefficient C
self.c_layers = nn.Sequential(OrderedDict([\
("layer0", nn.Linear(self.input_dim, self.hid_dims[0], bias=self.use_bias)),\
("activation_0", self.activations[0]())\
]))
self.eligibility_layers = [torch.zeros(self.input_dim, self.hid_dims[0])]
for jj in range(1, len(self.hid_dims)-1):
self.lr_layers.add_module("layer{}".format(jj),\
nn.Linear(self.hid_dims[jj], self.hid_dims[jj+1], bias=self.use_bias))
self.a_layers.add_module("layer{}".format(jj),\
nn.Linear(self.hid_dims[jj], self.hid_dims[jj+1], bias=self.use_bias))
self.b_layers.add_module("layer{}".format(jj),\
nn.Linear(self.hid_dims[jj], self.hid_dims[jj+1], bias=self.use_bias))
self.c_layers.add_module("layer{}".format(jj),\
nn.Linear(self.hid_dims[jj], self.hid_dims[jj+1], bias=self.use_bias))
self.eligibility_layers.append(torch.zeros(self.hid_dims[jj], self.hid_dims[jj+1]))
self.lr_layers.add_module("output_layer",\
nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))
self.a_layers.add_module("output_layer",\
nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))
self.b_layers.add_module("output_layer",\
nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))
self.c_layers.add_module("output_layer",\
nn.Linear(self.hid_dims[-1], self.action_dim, bias=self.use_bias))
self.eligibility_layers.append(torch.zeros(self.hid_dims[-1], self.action_dim))
        for params in [self.lr_layers.parameters(), self.a_layers.parameters(),
                self.b_layers.parameters(), self.c_layers.parameters()]:
            for param in params:
                param.requires_grad = self.use_grad
self.num_params = self.get_params().shape[0]
    def update(self):
        # ABC Hebbian update:
        # delta W_ij = lr_ij * (A_ij * o_i * o_j + B_ij * o_i + C_ij * o_j)
        layer_count = 0
for lr_param, param, A, B, C in zip(\
list(self.lr_layers.named_parameters()),\
list(self.layers.named_parameters()),\
list(self.a_layers.named_parameters()),\
list(self.b_layers.named_parameters()),\
list(self.c_layers.named_parameters())):
layer_dim_x, layer_dim_y = param[1].shape[1], param[1].shape[0]
self.eligibility_layers[layer_count] += torch.matmul(self.nodes[layer_count].T, self.nodes[layer_count+1])
self.eligibility_layers[layer_count] = torch.clamp(self.eligibility_layers[layer_count], min=self.e_min, max=self.e_max)
for ii in range(layer_dim_x):
for jj in range(layer_dim_y):
param[1][jj,ii] = torch.clamp(param[1][jj,ii] + lr_param[1][jj,ii] \
* (\
A[1][jj,ii] * self.eligibility_layers[layer_count][ii,jj] \
+B[1][jj,ii] * self.nodes[layer_count][:,ii] \
+C[1][jj,ii] * self.nodes[layer_count+1][:,jj] \
), min=-10, max=10)
layer_count += 1
def get_params(self):
params = np.array([])
for param in self.layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
if self.lr_layers is not None and self.plastic:
for param in self.lr_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.a_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.b_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.c_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
    def set_params(self, my_params):
        # fill each parameter tensor from consecutive slices of the flat vector
        param_start = [0]
        def fill(module):
            for name, param in module.named_parameters():
                param_stop = param_start[0] + reduce(lambda x, y: x * y, param.shape)
                param[:] = torch.nn.Parameter(torch.tensor(
                        my_params[param_start[0]:param_stop].reshape(param.shape),
                        requires_grad=self.use_grad),
                        requires_grad=self.use_grad)
                # advance the offset so the next tensor reads its own slice
                param_start[0] = param_stop
        fill(self.layers)
        if self.plastic:
            for module in (self.lr_layers, self.a_layers, self.b_layers, self.c_layers):
                fill(module)
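The generalized ABC rule applied in `update()` above, delta W_ij = lr * (A * o_i * o_j + B * o_i + C * o_j), can be written without the nested index loops. A hedged numpy sketch with hypothetical coefficient arrays:

```python
import numpy as np

pre = np.array([1.0, 0.5])         # o_i, pre-synaptic activations
post = np.array([2.0])             # o_j, post-synaptic activations

# per-weight coefficient matrices, shaped (post, pre) like the weight matrix
lr = np.full((1, 2), 0.1)
A = np.ones((1, 2))
B = np.full((1, 2), 0.5)
C = np.full((1, 2), -0.5)

hebb = np.outer(post, pre)                       # the o_i * o_j Hebbian term
delta_w = lr * (A * hebb + B * pre[None, :] + C * post[:, None])
W = np.clip(np.zeros((1, 2)) + delta_w, -10, 10)
```

The `B` and `C` terms broadcast the pre- and post-synaptic activities across rows and columns respectively, matching the per-element arithmetic of the loop version.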
class ABCHebbianMetaMLP(ABCHebbianMLP):
def __init__(self, args, discrete=False, use_grad=False):
super(ABCHebbianMetaMLP, self).__init__(args, discrete, use_grad)
self.plastic = True
self.set_traces()
def get_params(self):
params = np.array([])
if self.lr_layers is not None and self.plastic:
for param in self.lr_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.a_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.b_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
for param in self.c_layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
    def set_params(self, my_params):
        # fill each plasticity tensor from consecutive slices of the flat vector
        param_start = [0]
        def fill(module):
            for name, param in module.named_parameters():
                param_stop = param_start[0] + reduce(lambda x, y: x * y, param.shape)
                param[:] = torch.nn.Parameter(torch.tensor(
                        my_params[param_start[0]:param_stop].reshape(param.shape),
                        requires_grad=self.use_grad),
                        requires_grad=self.use_grad)
                # advance the offset so the next tensor reads its own slice
                param_start[0] = param_stop
        if self.plastic:
            for module in (self.lr_layers, self.a_layers, self.b_layers, self.c_layers):
                fill(module)
def reset(self):
self.init_params()
self.clear_nodes()
self.clear_traces()
class CPPNHebbianMLP(HebbianMLP):
def __init__(self, args, discrete=False, use_grad=False):
super(CPPNHebbianMLP, self).__init__(args, discrete, use_grad)
self.plastic = False
self.set_traces()
self.init_cppn()
self.set_params(self.get_cppn_params())
    def init_cppn(self):
        self.cppn_in = 6
        self.cppn_out = 2
        self.cppn_h = [32]
        self.cppn_act = [nn.LeakyReLU, lambda x: x]
        self.cppn_out_act = [nn.Tanh, nn.Sigmoid]
        # CPPN hidden sizes come from cppn_h (not the policy's hid_dims)
        self.cppn = nn.Sequential(OrderedDict([
                ("layer0", nn.Linear(self.cppn_in, self.cppn_h[0], bias=self.use_bias)),
                ("activation_0", self.cppn_act[0]())
                ]))
        for jj in range(1, len(self.cppn_h)):
            self.cppn.add_module("layer{}".format(jj),
                    nn.Linear(self.cppn_h[jj - 1], self.cppn_h[jj], bias=self.use_bias))
            self.cppn.add_module("activation{}".format(jj), self.cppn_act[0]())
        self.cppn.add_module("output_layer",
                nn.Linear(self.cppn_h[-1], self.cppn_out, bias=self.use_bias))
        for param in self.cppn.parameters():
            param.requires_grad = self.use_grad
        self.num_params = self.get_cppn_params().shape[0]
def build_mlp(self):
num_layers = len(list(self.layers.named_parameters()))
trace_count = 0
for layer_num, param in enumerate(list(self.layers.named_parameters())):
layer_dim_x, layer_dim_y = param[1].shape[1], param[1].shape[0]
for ii in range(layer_dim_x):
for jj in range(layer_dim_y):
cppn_input = torch.Tensor([layer_num/num_layers - 0.5,\
ii/layer_dim_x - 0.5, \
jj/ layer_dim_y - 0.5,\
self.nodes[trace_count][0,ii],\
self.nodes[trace_count+1][0,jj],\
param[1][jj,ii]\
])\
.unsqueeze(0)
weight = self.cppn.forward(cppn_input)
param[1][jj,ii] = torch.tanh(weight[:,0]) * torch.sigmoid(weight[:,1])
trace_count += 1
def get_action(self, x):
self.build_mlp()
y = self.forward(x)
if self.discrete:
act = torch.argmax(y, dim=-1)
else:
act = y
return act.detach().cpu().numpy()
    def set_params(self, my_params):
        # set the cppn params, which are then used to set the mlp params
        param_start = 0
        for name, param in self.cppn.named_parameters():
            param.requires_grad = False
            param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
            param[:] = torch.nn.Parameter(torch.tensor(
                    my_params[param_start:param_stop].reshape(param.shape),
                    requires_grad=self.use_grad),
                    requires_grad=self.use_grad)
            # advance the offset so the next tensor reads its own slice
            param_start = param_stop
        self.build_mlp()
def get_cppn_params(self):
params = np.array([])
for param in self.cppn.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
def get_params(self):
params = np.array([])
for param in self.layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
class CPPNMLPPolicy(MLPPolicy):
def __init__(self, args, discrete=False, use_grad=False):
super(CPPNMLPPolicy, self).__init__(args, discrete, use_grad)
"""
CPPN
input (3) by dim_h (32) by output (2)
input is the layer + weight coordinate for each weight
output is a gating function and weight strength, the product of these ouputs defines weight values
The CPPN defines the MLP, which is the actually policy
dim_x () by dim_hp (64) by dim_y
dim_x and dim_y are the observation space and action space dimensions.
"""
self.use_bias = False
self.init_cppn()
self.set_params(self.get_cppn_params())
    def init_cppn(self):
        self.cppn_in = 3
        self.cppn_out = 2
        self.cppn_h = [32]
        self.cppn_act = [nn.LeakyReLU, lambda x: x]
        self.cppn_out_act = [nn.Tanh, nn.Sigmoid]
        # CPPN hidden sizes come from cppn_h (not the policy's hid_dims)
        self.cppn = nn.Sequential(OrderedDict([
                ("layer0", nn.Linear(self.cppn_in, self.cppn_h[0], bias=self.use_bias)),
                ("activation_0", self.cppn_act[0]())
                ]))
        for jj in range(1, len(self.cppn_h)):
            self.cppn.add_module("layer{}".format(jj),
                    nn.Linear(self.cppn_h[jj - 1], self.cppn_h[jj], bias=self.use_bias))
            self.cppn.add_module("activation{}".format(jj), self.cppn_act[0]())
        self.cppn.add_module("output_layer",
                nn.Linear(self.cppn_h[-1], self.cppn_out, bias=self.use_bias))
        for param in self.cppn.parameters():
            param.requires_grad = self.use_grad
        self.num_params = self.get_cppn_params().shape[0]
def build_mlp(self):
num_layers = len(list(self.layers.named_parameters()))
for layer_num, param in enumerate(list(self.layers.named_parameters())):
layer_dim_x, layer_dim_y = param[1].shape[1], param[1].shape[0]
for ii in range(layer_dim_x):
for jj in range(layer_dim_y):
cppn_input = torch.Tensor([layer_num/num_layers - 0.5,\
ii/layer_dim_x - 0.5, \
jj/ layer_dim_y - 0.5])\
.unsqueeze(0)
weight = self.cppn.forward(cppn_input)
param[1][jj,ii] = torch.tanh(weight[:,0]) * torch.sigmoid(weight[:,1])
    def set_params(self, my_params):
        # set the cppn params, which are then used to set the mlp params
        param_start = 0
        for name, param in self.cppn.named_parameters():
            param_stop = param_start + reduce(lambda x, y: x * y, param.shape)
            param[:] = torch.nn.Parameter(torch.tensor(
                    my_params[param_start:param_stop].reshape(param.shape),
                    requires_grad=self.use_grad),
                    requires_grad=self.use_grad)
            # advance the offset so the next tensor reads its own slice
            param_start = param_stop
        self.build_mlp()
def get_cppn_params(self):
params = np.array([])
for param in self.cppn.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
def get_params(self):
params = np.array([])
for param in self.layers.named_parameters():
params = np.append(params, param[1].detach().numpy().ravel())
return params
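`build_mlp()` above queries the CPPN once per weight with normalized (layer, i, j) coordinates and gates its two outputs with tanh * sigmoid. A small numpy sketch of just that coordinate-and-gating scheme, where the two-column `out` array stands in for the CPPN forward pass (constant values chosen for illustration):

```python
import numpy as np

num_layers, dim_x, dim_y = 2, 3, 4

# normalized coordinates for every weight of one layer, centred on zero
layer_num = 0
coords = np.array([[layer_num / num_layers - 0.5,
                    ii / dim_x - 0.5,
                    jj / dim_y - 0.5]
                   for ii in range(dim_x) for jj in range(dim_y)])

# stand-in for the CPPN output: column 0 = weight strength, column 1 = gate
out = np.tile([0.5, 0.0], (coords.shape[0], 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.tanh(out[:, 0]) * sigmoid(out[:, 1])
W = weights.reshape(dim_x, dim_y).T        # (dim_y, dim_x), like nn.Linear
```

Because the tanh term is bounded in (-1, 1) and the sigmoid gate in (0, 1), every generated weight lies in (-1, 1) regardless of the CPPN's raw output scale.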
if __name__ == "__main__":
# run tests
args = {}
args["dim_x"] = 6
args["dim_y"] = 1
args["dim_h"] = 16
args["params"] = None
temp = MLPPolicy(args)
temp = HebbianMLP(args)
temp = ABCHebbianMLP(args)
temp = CPPNHebbianMLP(args)
temp = CPPNMLPPolicy(args)
print("OK")
# directory_components/forms/__init__.py
# (MichaelWalker/directory-components, MIT license)
from directory_components.forms.fields import *  # NOQA
from directory_components.forms.forms import * # NOQA
from directory_components.forms.widgets import * # NOQA
# tests/test_networks.py
# (brunocroh/wifidog-auth-flask, MIT license)
from tests import TestCase
class TestNetworks(TestCase):
def test_networks_index_as_anonymous(self):
self.assertLogin('/networks')
def test_networks_index_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
self.assertRedirect('/networks')
def test_networks_index_as_network(self):
self.login('main-network@example.com', 'admin')
self.assertRedirect('/networks')
def test_networks_index_as_super(self):
self.login('super-admin@example.com', 'admin')
html = self.assertOk('/networks')
networks = html.findall('//table[@id="networks"]/tbody/tr')
self.assertEqual(2, len(networks))
self.assertEqual('main-network', networks[0].get('data-id'))
def test_networks_new_as_anonymous(self):
self.assertLogin('/networks/new')
def test_networks_new_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
self.assertForbidden('/networks/new')
def test_networks_new_as_network(self):
self.login('main-network@example.com', 'admin')
self.assertForbidden('/networks/new')
def test_networks_new_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.get('/networks/new')
self.assertEqual(200, response.status_code)
def test_networks_store_as_anonymous(self):
self.assertLoginPost('/networks/new', data={'id': 'network', 'title': 'Network'})
def test_networks_store_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
self.assertForbiddenPost('/networks/new', data={'id': 'network', 'title': 'Network'})
def test_networks_store_as_network(self):
self.login('main-network@example.com', 'admin')
self.assertForbiddenPost('/networks/new', data={'id': 'network', 'title': 'Network'})
def test_networks_store_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.post('/networks/new', data={'id': 'network', 'title': 'Network'}, follow_redirects=True)
self.assertEqual(200, response.status_code)
def test_networks_edit_as_anonymous(self):
self.assertLogin('/networks/main-network')
def test_networks_edit_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
self.assertForbidden('/networks/main-network')
def test_networks_edit_as_network(self):
self.login('main-network@example.com', 'admin')
self.assertForbidden('/networks/main-network')
def test_networks_edit_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.get('/networks/main-network')
self.assertEqual(200, response.status_code)
def test_networks_update_as_anonymous(self):
self.assertLoginPost('/networks/main-network', {'id': 'network', 'title': 'Network'})
def test_networks_update_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
self.assertForbiddenPost('/networks/main-network')
def test_networks_update_as_network(self):
self.login('main-network@example.com', 'admin')
self.assertForbiddenPost('/networks/main-network')
def test_networks_update_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.post('/networks/main-network', data={'id': 'network', 'title': 'Network'}, follow_redirects=True)
self.assertEqual(200, response.status_code)
# Account/app/tests/test_sdk.py
# (TamSzaGot/mydata-sdk, MIT license)
# -*- coding: utf-8 -*-
"""
Test Cases for Internal API
__author__ = "Jani Yli-Kantola"
__copyright__ = ""
__credits__ = ["Harri Hirvonsalo", "Aleksi Palomäki"]
__license__ = "MIT"
__version__ = "1.3.0"
__maintainer__ = "Jani Yli-Kantola"
__contact__ = "https://github.com/HIIT/mydata-stack"
__status__ = "Development"
"""
import unittest
from base64 import b64encode
from random import randint
from flask import json
from app import create_app
from app.tests.controller import is_json, validate_json, account_create, default_headers, \
generate_sl_init_sink, generate_sl_init_source, gen_jwk_key, generate_sl_payload, \
generate_sl_store_payload, generate_sls_store_payload, generate_signed_ssr_store_payload, generate_consent_payload, \
generate_consent_status_payload, generate_consent_status_payload_signed
from app.tests.schemas.schema_account import schema_account_create, schema_account_auth, schema_account_get, schema_account_sdk_info
from app.tests.schemas.schema_authorisation import schema_give_consent, schema_consent_status_change, \
schema_consent_listing, schema_consent_status_listing, schema_consent_status, schema_consent
from app.tests.schemas.schema_data_connection import schema_authorisation_token_data
from app.tests.schemas.schema_error import schema_request_error_detail_as_str, schema_request_error_detail_as_dict
from app.tests.schemas.schema_service_linking import schema_slr_init, schema_slr_sign, \
schema_slr_store, schema_slr_listing, schema_slr, schema_slr_status_listing, schema_slr_status, schema_surrogate
from app.tests.schemas.schema_system import schema_db_clear, system_running, schema_sdk_auth, schema_system_status
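These tests repeatedly build HTTP Basic auth headers with `b64encode("{0}:{1}".format(...))`, which relies on Python 2's str-as-bytes behavior. A Python 3 compatible sketch of the same header construction (the helper name is illustrative, not part of this suite):

```python
from base64 import b64encode

def basic_auth_header(username, password):
    # encode to bytes before base64, then decode back for the header value
    token = b64encode("{0}:{1}".format(username, password).encode("utf-8"))
    return {"Authorization": "Basic " + token.decode("ascii")}

header = basic_auth_header("user", "pw")
# header["Authorization"] == "Basic dXNlcjpwdw=="
```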
class SdkTestCase(unittest.TestCase):
API_PREFIX_INTERNAL = "/account/api/v1.3/internal"
API_PREFIX_EXTERNAL = "/account/api/v1.3/external"
SDK_USERNAME = "test_sdk"
SDK_PASSWORD = "test_sdk_pw"
# Operator info
OPERATOR_ID = str(randint(100, 1000))
OPERATOR_KEY_OBJECT, OPERATOR_KEY_PRIVATE_JSON, OPERATOR_KEY_PUBLIC_JSON, OPERATOR_KID = gen_jwk_key(prefix="operator")
OPERATOR_KEY_PUBLIC = json.loads(OPERATOR_KEY_PUBLIC_JSON)
OPERATOR_KEY_PRIVATE = json.loads(OPERATOR_KEY_PRIVATE_JSON)
# Sink Service
SINK_SERVICE_ID = "srv_sink-" + str(randint(100, 1000))
SINK_SURROGATE_ID = "sink-surrogate-" + str(randint(100, 1000))
SINK_KEY_OBJECT, SINK_KEY_PRIVATE_JSON, SINK_KEY_PUBLIC_JSON, SINK_KID = gen_jwk_key(prefix="srv_sink")
SINK_KEY_PRIVATE = json.loads(SINK_KEY_PRIVATE_JSON)
SINK_KEY_PUBLIC = json.loads(SINK_KEY_PUBLIC_JSON)
# Source Service
SOURCE_SERVICE_ID = "srv_source-" + str(randint(100, 1000))
SOURCE_SURROGATE_ID = "source-surrogate-" + str(randint(100, 1000))
SOURCE_KEY_OBJECT, SOURCE_KEY_PRIVATE_JSON, SOURCE_KEY_PUBLIC_JSON, SOURCE_KID = gen_jwk_key(prefix="srv_sink")
SOURCE_KEY_PRIVATE = json.loads(SOURCE_KEY_PRIVATE_JSON)
SOURCE_KEY_PUBLIC = json.loads(SOURCE_KEY_PUBLIC_JSON)
def setUp(self):
"""
TestCase Set Up
:return:
"""
app = create_app()
app.config['TESTING'] = True
app = app.test_client()
self.app = app
def tearDown(self):
"""
TestCase Tear Down
:return:
"""
pass
##########
##########
def test_system_running(self):
"""
Test system running
:return:
"""
url = '/'
response = self.app.get(url)
unittest.TestCase.assertEqual(self, response.status_code, 200)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, system_running))
##########
##########
def test_system_status(self):
"""
Test system running
:return:
"""
url = '/system/status/'
response = self.app.get(url)
unittest.TestCase.assertEqual(self, response.status_code, 200)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_system_status))
##########
##########
def test_system_routes(self):
"""
Test system running
:return:
"""
url = '/system/routes/'
response = self.app.get(url)
unittest.TestCase.assertEqual(self, response.status_code, 200)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
##########
##########
def test_sdk_auth(self):
"""
SDK authentication
:return:
"""
request_headers = default_headers
request_headers['Authorization'] = 'Basic ' + b64encode("{0}:{1}".format(self.SDK_USERNAME, self.SDK_PASSWORD))
url = self.API_PREFIX_INTERNAL + '/auth/sdk/'
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_sdk_auth))
response_json = json.loads(response.data)
api_key = response_json["Api-Key-Sdk"]
return api_key
##########
##########
def test_clear_db_positive(self):
"""
Test database clearing
:return:
"""
response = self.app.get('/system/db/clear/')
unittest.TestCase.assertEqual(self, response.status_code, 200)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_db_clear))
##########
##########
def test_account_create_positive(self):
"""
Test Account creation. Positive case
:return:
"""
account_json, account_username, account_password = account_create()
response = self.app.post(self.API_PREFIX_EXTERNAL + '/accounts/', data=account_json, headers=default_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_account_create))
return account_username, account_password
##########
##########
def test_account_authentication(self):
"""
Test user authentication
:return:
"""
account_username, account_password = self.test_account_create_positive()
request_headers = default_headers
request_headers['Authorization'] = 'Basic ' + b64encode("{0}:{1}".format(account_username, account_password))
url = self.API_PREFIX_EXTERNAL + '/auth/user/'
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_account_auth))
response_json = json.loads(response.data)
api_key = response_json["Api-Key-User"]
account_id = response_json["account_id"]
return api_key, account_id
##########
##########
def test_account_fetch(self):
"""
Fetch Account entry
:return:
"""
account_api_key, account_id = self.test_account_authentication()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
url = self.API_PREFIX_EXTERNAL + "/accounts/" + str(account_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_account_get))
##########
##########
def test_account_delete(self):
"""
Test user deletion
:return:
"""
account_api_key, account_id = self.test_account_authentication()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
url = self.API_PREFIX_EXTERNAL + "/accounts/" + str(account_id) + "/"
response = self.app.delete(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 204, msg=response.data)
##########
##########
def test_sdk_account_info(self):
"""
Verify User-API-Key belongs to specified user
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_api_key, account_id = self.test_account_authentication()
sdk_api_key = self.test_sdk_auth()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/auth/sdk/account/" + str(account_id) + "/info/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_account_sdk_info))
return account_id, account_api_key, sdk_api_key
##########
##########
def test_slr_init_sink(self):
"""
Test Sink SLR init
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_api_key, account_id = self.test_account_authentication()
sdk_api_key = self.test_sdk_auth()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/sink/"
payload, code, slr_id, pop_key = generate_sl_init_sink()
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_init))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_init_sink_misformatted(self):
"""
Test Sink SLR init with misformatted pop_key
:return:
"""
account_api_key, account_id = self.test_account_authentication()
sdk_api_key = self.test_sdk_auth()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/sink/"
payload, code, slr_id, pop_key = generate_sl_init_sink(misformatted_payload=True)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
##########
##########
def test_slr_init_sink_duplicate(self):
"""
Test Sink SLR init duplicate
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id_original = self.test_slr_init_sink()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/sink/"
payload, code, slr_id, pop_key = generate_sl_init_sink(slr_id=slr_id_original)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 409, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_init_source(self):
"""
Test Source SLR init
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_api_key, account_id = self.test_account_authentication()
sdk_api_key = self.test_sdk_auth()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/source/"
payload, code, slr_id = generate_sl_init_source()
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_init))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_init_source_misformatted(self):
"""
Test Source SLR init with misformatted pop_key
:return:
"""
account_api_key, account_id = self.test_account_authentication()
sdk_api_key = self.test_sdk_auth()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/source/"
payload, code, slr_id = generate_sl_init_source(misformatted_payload=True)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
##########
##########
def test_slr_init_source_duplicate(self):
"""
Test Source SLR init duplicate
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id_original = self.test_slr_init_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/source/"
payload, code, slr_id = generate_sl_init_source(slr_id=slr_id_original)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 409, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_sign_sink(self):
"""
Test Sink SLR signing
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_sink()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SINK_SERVICE_ID,
surrogate_id=self.SINK_SURROGATE_ID
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_sign))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_sign_sink_malformed(self):
"""
Test Sink malformed SLR signing
:return:
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_sink()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SINK_SERVICE_ID,
surrogate_id=self.SINK_SURROGATE_ID,
misformatted_payload=True
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
##########
##########
def test_slr_sign_sink_wrong_id(self):
"""
Test Sink SLR signing with wrong SLR id
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_sink()
slr_id = "wrong-" + slr_id
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SINK_SERVICE_ID,
surrogate_id=self.SINK_SURROGATE_ID
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_store_sink(self):
"""
Test Sink SLR storing
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_sink()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SINK_SURROGATE_ID,
service_key=self.SINK_KEY_OBJECT,
service_kid=self.SINK_KID
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_store))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_store_sink_malformed(self):
"""
Test Sink SLR storing - Malformed
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_sink()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SINK_SURROGATE_ID,
service_key=self.SINK_KEY_OBJECT,
service_kid=self.SINK_KID,
misformatted_payload=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_store_sink_malformed_signature(self):
"""
Test Sink SLR storing - Signature verification fails
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_sink()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SINK_SURROGATE_ID,
service_key=self.SINK_KEY_OBJECT,
service_kid=self.SINK_KID,
misformatted_signature=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_sign_source(self):
"""
Test Source SLR signing
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SOURCE_SERVICE_ID,
surrogate_id=self.SOURCE_SURROGATE_ID
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_sign))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_sign_source_malformed(self):
"""
Test Source malformed SLR signing
:return:
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SOURCE_SERVICE_ID,
surrogate_id=self.SOURCE_SURROGATE_ID,
misformatted_payload=True
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
##########
##########
def test_slr_sign_source_wrong_id(self):
"""
Test Source SLR signing with wrong SLR id
:return: account_id, account_api_key, sdk_api_key, slr_id, response.data
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_slr_init_source()
slr_id = "wrong-" + slr_id
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/"
payload = generate_sl_payload(
slr_id=slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SOURCE_SERVICE_ID,
surrogate_id=self.SOURCE_SURROGATE_ID
)
response = self.app.patch(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id, response.data
##########
##########
def test_slr_store_source(self):
"""
Test Source SLR storing
:return: account_id, account_api_key, sdk_api_key, slr_id, ssr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_source()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SOURCE_SURROGATE_ID,
service_key=self.SOURCE_KEY_OBJECT,
service_kid=self.SOURCE_KID
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_store))
return account_id, account_api_key, sdk_api_key, slr_id, ssr_id
##########
##########
def test_slr_store_source_malformed(self):
"""
Test Source SLR storing - Malformed
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_source()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SOURCE_SURROGATE_ID,
service_key=self.SOURCE_KEY_OBJECT,
service_kid=self.SOURCE_KID,
misformatted_payload=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_store_source_malformed_signature(self):
"""
Test Source SLR storing - Signature verification fails
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_source()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SOURCE_SURROGATE_ID,
service_key=self.SOURCE_KEY_OBJECT,
service_kid=self.SOURCE_KID,
misformatted_signature=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_slr_store_wrong_id(self):
"""
Test SLR storing with wrong Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slr_data = self.test_slr_sign_source()
slr_data = json.loads(slr_data)
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/store/"
payload, ssr_id = generate_sl_store_payload(
slr_id=slr_id,
slr_signed=slr_data['data'],
surrogate_id=self.SINK_SURROGATE_ID,
service_key=self.SOURCE_KEY_OBJECT,
service_kid=self.SOURCE_KID
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing(self):
"""
Test Fetch SLR listing
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_listing))
# ID verification
verification_id_array = [slr_id]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr(self):
"""
Test Fetch SLR
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id = self.test_fetch_slr_listing()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr))
# ID verification
verification_id_array = [slr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_wrong_id(self):
"""
Test Fetch SLR with wrong slr id
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
slr_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id_wrong) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_status_listing(self):
"""
Test Fetch SLR status listing
:return: account_id, account_api_key, sdk_api_key, slr_id, slsr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_status_listing))
response_data_dict = json.loads(response.data)
slsr_id = response_data_dict['data'][0]['id']
# ID verification
verification_id_array = [ssr_id]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id, slsr_id
##########
##########
def test_fetch_slr_status_listing_wrong_id(self):
"""
Test Fetch SLR status listing with wrong slr_id
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
slr_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id_wrong) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_status(self):
"""
Test Fetch SLR status by ID
:return: account_id, account_api_key, sdk_api_key, slr_id, slsr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_status_listing()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id) + "/statuses/" + str(slsr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_status))
# ID verification
verification_id_array = [slsr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id, slsr_id
##########
##########
def test_fetch_slr_status_wrong_id(self):
"""
Test Fetch SLR status by wrong ID
:return: account_id, account_api_key, sdk_api_key, slr_id, slsr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_status_listing()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
slsr_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id) + "/statuses/" + str(slsr_id_wrong) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id, slsr_id
##########
##########
def test_fetch_slr_last_status(self):
"""
Test Fetch SLR last status
:return: account_id, account_api_key, sdk_api_key, slr_id, slsr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_status_listing()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_status))
response_data_dict = json.loads(response.data)
slsr_id_from_response = response_data_dict['data']['id']
# ID verification
verification_id_array = [slsr_id]
id_to_verify = str(slsr_id_from_response)
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id, slsr_id_from_response
##########
##########
def test_fetch_slr_last_status_wrong_id(self):
"""
Test Fetch SLR last status with wrong slr id
:return: account_id, account_api_key, sdk_api_key, slr_id, slsr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_status_listing()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
slr_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(slr_id_wrong) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id, slsr_id
##########
##########
def test_fetch_slr_listing_for_service(self):
"""
Test Fetch SLR listing for Service
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_listing))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_surrogate_id(self):
"""
Test Fetch SLR listing for Service with Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?surrogate_id=" + str(self.SOURCE_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_listing))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_account_id(self):
"""
Test Fetch SLR listing for Service with Account ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?account_id=" + str(account_id)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_listing))
# ID verification
verification_id_array = [slr_id]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_account_id_and_surrogate_id(self):
"""
Test Fetch SLR listing for Service with Account ID and Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?account_id=" + str(account_id) + "&surrogate_id=" + str(self.SOURCE_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_listing))
# ID verification
verification_id_array = [slr_id]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_account_id_and_surrogate_id_wrong_surrogate_id(self):
"""
Test Fetch SLR listing for Service with Account ID and wrong Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?account_id=" + str(account_id) + "&surrogate_id=" + str(self.SINK_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_account_id_and_surrogate_id_wrong_account_id(self):
"""
Test Fetch SLR listing for Service with wrong Account ID and Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?account_id=" + str(slr_id) + "&surrogate_id=" + str(self.SOURCE_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_account_id_and_surrogate_id_wrong_account_id_and_surrogate_id(self):
"""
Test Fetch SLR listing for Service with wrong Account ID and wrong Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?account_id=" + str(slr_id) + "&surrogate_id=" + str(self.SINK_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_with_wrong_surrogate_id(self):
"""
Test Fetch SLR listing for Service with wrong Surrogate ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/?surrogate_id=" + str(self.SINK_SURROGATE_ID)
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_listing_for_service_wrong_service_id(self):
"""
Test Fetch SLR listing for Service with wrong Service ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
service_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/services/" + service_id_wrong + "/servicelinks/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_for_service(self):
"""
Test Fetch SLR for Service
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(slr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr))
# ID verification
verification_id_array = [slr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_for_service_wrong_service_id(self):
"""
Test Fetch SLR for Service with wrong Service ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
service_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/services/" + service_id_wrong + "/servicelinks/" + str(slr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_slr_for_service_wrong_link_id(self):
"""
Test Fetch SLR for Service with wrong Link ID
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
slr_id_wrong = str(randint(100, 10000))
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(slr_id_wrong) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_ssr_store_source(self):
"""
Test Source SSR storing
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_last_status()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/statuses/"
payload = generate_sls_store_payload(
slr_id=slr_id,
surrogate_id=self.SOURCE_SURROGATE_ID,
prev_record_id=slsr_id,
status="Removed"
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_status))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_ssr_store_source_malformed(self):
"""
Test Source SSR storing with malformed payload
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_last_status()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/statuses/"
payload = generate_sls_store_payload(
slr_id=slr_id,
surrogate_id=self.SOURCE_SURROGATE_ID,
prev_record_id=slsr_id,
status="Removed",
misformatted_payload=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_ssr_store_source_signed(self):
"""
Test Source SSR storing with signed SSR
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_last_status()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/statuses/signed/"
payload = generate_signed_ssr_store_payload(
slr_id=slr_id,
surrogate_id=self.SOURCE_SURROGATE_ID,
prev_record_id=slsr_id,
status="Removed",
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 201, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_slr_status))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_ssr_store_source_signed_malformed(self):
"""
Test Source SSR storing with signed SSR with malformed payload
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, slsr_id = self.test_fetch_slr_last_status()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + slr_id + "/statuses/signed/"
payload = generate_signed_ssr_store_payload(
slr_id=slr_id,
surrogate_id=self.SOURCE_SURROGATE_ID,
prev_record_id=slsr_id,
status="Removed",
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT,
misformatted_payload=True
)
response = self.app.post(url, data=payload, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 400, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, slr_id
##########
##########
def test_fetch_surrogate_object(self):
"""
Test Fetch Surrogate object
:return: account_id, account_api_key, sdk_api_key, slr_id
"""
account_id, account_api_key, sdk_api_key, slr_id, ssr_id = self.test_slr_store_source()
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/surrogates/" + str(self.SOURCE_SURROGATE_ID) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_surrogate))
return account_id, account_api_key, sdk_api_key, slr_id
#################################################################################
# #
# Complete flow testing with Service Linking, Authorisation and Data Connection #
# #
#################################################################################
##########
##########
def test_for_account_link_services(self):
"""
Link two Services to the same Account
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
"""
# Create and Authenticate Account
account_api_key, account_id = self.test_account_authentication()
# Authenticate Operator-SDK
sdk_api_key = self.test_sdk_auth()
# Authentication for following requests
request_headers = default_headers.copy()
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
request_headers['Api-Key-User'] = str(account_api_key)
# Service Link Init for Source Service
source_slr_init_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/source/"
source_slr_init_payload, source_slr_code, source_slr_id = generate_sl_init_source()
source_slr_init_response = self.app.post(source_slr_init_url, data=source_slr_init_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, source_slr_init_response.status_code, 201, msg=source_slr_init_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=source_slr_init_response.data), msg=source_slr_init_response.data)
unittest.TestCase.assertTrue(self, validate_json(source_slr_init_response.data, schema_slr_init))
# Service Link Init for Sink Service
sink_slr_init_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/init/sink/"
sink_slr_init_payload, sink_slr_code, sink_slr_id, sink_slr_pop_key = generate_sl_init_sink()
sink_slr_init_response = self.app.post(sink_slr_init_url, data=sink_slr_init_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, sink_slr_init_response.status_code, 201, msg=sink_slr_init_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=sink_slr_init_response.data), msg=sink_slr_init_response.data)
unittest.TestCase.assertTrue(self, validate_json(sink_slr_init_response.data, schema_slr_init))
# Account Owner's signature for Service Link of Source Service
source_slr_sign_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/"
source_slr_sign_payload = generate_sl_payload(
slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SOURCE_SERVICE_ID,
surrogate_id=self.SOURCE_SURROGATE_ID
)
source_slr_sign_response = self.app.patch(source_slr_sign_url, data=source_slr_sign_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, source_slr_sign_response.status_code, 201, msg=source_slr_sign_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=source_slr_sign_response.data), msg=source_slr_sign_response.data)
unittest.TestCase.assertTrue(self, validate_json(source_slr_sign_response.data, schema_slr_sign))
# Account Owner's signature for Service Link of Sink Service
sink_slr_sign_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(sink_slr_id) + "/"
sink_slr_sign_payload = generate_sl_payload(
slr_id=sink_slr_id,
operator_id=self.OPERATOR_ID,
operator_key=self.OPERATOR_KEY_PUBLIC,
service_id=self.SINK_SERVICE_ID,
surrogate_id=self.SINK_SURROGATE_ID
)
sink_slr_sign_response = self.app.patch(sink_slr_sign_url, data=sink_slr_sign_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, sink_slr_sign_response.status_code, 201, msg=sink_slr_sign_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=sink_slr_sign_response.data), msg=sink_slr_sign_response.data)
unittest.TestCase.assertTrue(self, validate_json(sink_slr_sign_response.data, schema_slr_sign))
# Store Service Link of Source Service
source_slr_store_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/store/"
source_slr_store_payload, ssr_id = generate_sl_store_payload(
slr_id=source_slr_id,
slr_signed=json.loads(source_slr_sign_response.data)['data'],
surrogate_id=self.SOURCE_SURROGATE_ID,
service_key=self.SOURCE_KEY_OBJECT,
service_kid=self.SOURCE_KID
)
source_slr_store_response = self.app.post(source_slr_store_url, data=source_slr_store_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, source_slr_store_response.status_code, 201, msg=source_slr_store_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=source_slr_store_response.data), msg=source_slr_store_response.data)
unittest.TestCase.assertTrue(self, validate_json(source_slr_store_response.data, schema_slr_store))
source_ssr_id = json.loads(source_slr_store_response.data)['data']['ssr']['id']
# Store Service Link of Sink Service
sink_slr_store_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + sink_slr_id + "/store/"
sink_slr_store_payload, ssr_id = generate_sl_store_payload(
slr_id=sink_slr_id,
slr_signed=json.loads(sink_slr_sign_response.data)['data'],
surrogate_id=self.SINK_SURROGATE_ID,
service_key=self.SINK_KEY_OBJECT,
service_kid=self.SINK_KID
)
sink_slr_store_response = self.app.post(sink_slr_store_url, data=sink_slr_store_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, sink_slr_store_response.status_code, 201, msg=sink_slr_store_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=sink_slr_store_response.data), msg=sink_slr_store_response.data)
unittest.TestCase.assertTrue(self, validate_json(sink_slr_store_response.data, schema_slr_store))
sink_ssr_id = json.loads(sink_slr_store_response.data)['data']['ssr']['id']
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
##########
##########
def test_for_account_give_consent(self):
"""
Give Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
# Link Services for the Account
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 201, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_give_consent))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_multiple(self):
"""
Give Consent multiple times
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
# Give Consent
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
count = 0
source_cr_id_array = []
source_csr_id_array = []
sink_cr_id_array = []
sink_csr_id_array = []
for i in range(0, 3):
count += 1
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 201, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_give_consent))
source_cr_id_array.append(source_cr_id)
source_csr_id_array.append(source_csr_id)
sink_cr_id_array.append(sink_cr_id)
sink_csr_id_array.append(sink_csr_id)
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_give_consent_malformed(self):
"""
Give Consent - With malformed payload
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=True
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_wrong_source_surrogate_id(self):
"""
Give Consent - With wrong Source Surrogate ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id="wrong-id",
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_unknown_sink_surrogate_id(self):
"""
Give Consent - With unknown Sink Surrogate ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id="unknown",
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_unknown_source_slr_id(self):
"""
Give Consent - With unknown Source SLR ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id="unknown",
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_wrong_sink_slr_id(self):
"""
Give Consent - With wrong Sink SLR ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id="wrong_id",
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_wrong_source_cr_id_pair(self):
"""
Give Consent - With mismatching Source CR ID pair
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers.copy()
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False,
source_cr_id_fault=True,
sink_cr_id_fault=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_wrong_sink_cr_id_pair(self):
"""
Give Consent - With mismatching Sink Consent Record ID pair
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False,
source_cr_id_fault=False,
sink_cr_id_fault=True
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_surrogate_id_mismatch_source(self):
"""
Give Consent - With mismatching Source Surrogate ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(
account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False,
source_cr_id_fault=False,
sink_cr_id_fault=False,
source_surrogate_id_fault=True,
sink_surrogate_id_fault=False
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_give_consent_surrogate_id_mismatch_sink(self):
"""
Give Consent - With mismatching Sink Surrogate ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id = self.test_for_account_link_services()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
give_consent_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(
account_id) + "/servicelinks/" + source_slr_id + "/" + sink_slr_id + "/consents/"
give_consent_payload, source_cr_id, source_csr_id, sink_cr_id, sink_csr_id = generate_consent_payload(
source_surrogate_id=self.SOURCE_SURROGATE_ID,
source_slr_id=source_slr_id,
operator_id=self.OPERATOR_ID,
source_subject_id=self.SOURCE_SERVICE_ID,
sink_pop_key=self.SINK_KEY_PUBLIC,
operator_pub_key=self.OPERATOR_KEY_PUBLIC,
sink_surrogate_id=self.SINK_SURROGATE_ID,
sink_slr_id=sink_slr_id,
sink_subject_id=self.SINK_SERVICE_ID,
misformatted_payload=False,
source_cr_id_fault=False,
sink_cr_id_fault=False,
source_surrogate_id_fault=False,
sink_surrogate_id_fault=True
)
give_consent_response = self.app.post(give_consent_url, data=give_consent_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, give_consent_response.status_code, 400, msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=give_consent_response.data), msg=give_consent_response.data)
unittest.TestCase.assertTrue(self, validate_json(give_consent_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_source(self):
"""
Change Consent Status - Source Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=False
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 201, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_consent_status_change))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_sink(self):
"""
Change Consent Status - Sink Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id, sink_csr_id_new
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Sink Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + sink_cr_id + "/statuses/"
consent_status_change_payload, sink_csr_id_new = generate_consent_status_payload(
surrogate_id=self.SINK_SURROGATE_ID,
cr_id=sink_cr_id,
consent_status="Paused",
prev_record_id=sink_csr_id,
misformatted_payload=False,
cr_id_fault=False
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 201, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_consent_status_change))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id, sink_csr_id_new
##########
##########
def test_for_account_change_consent_status_incorrect_cr_id(self):
"""
Change Consent Status - With faulty Consent Record ID in payload
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=True
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_incorrect_payload(self):
"""
Change Consent Status - With misformatted payload
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=True,
cr_id_fault=False
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_unknown_consent(self):
"""
Change Consent Status - With unknown Consent Record ID in URL
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
source_cr_id = "unknown-" + source_cr_id
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=False
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_signed_source(self):
"""
Change Consent Status by Operator - Source Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/signed/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload_signed(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=False,
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 201, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_consent_status_change))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_signed_sink(self):
"""
Change Consent Status by Operator - Sink Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id, sink_csr_id_new
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_signed_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Sink Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + sink_cr_id + "/statuses/signed/"
consent_status_change_payload, sink_csr_id_new = generate_consent_status_payload_signed(
surrogate_id=self.SINK_SURROGATE_ID,
cr_id=sink_cr_id,
consent_status="Paused",
prev_record_id=sink_csr_id,
misformatted_payload=False,
cr_id_fault=False,
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 201, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_consent_status_change))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id, sink_csr_id_new
##########
##########
def test_for_account_change_consent_status_signed_incorrect_payload(self):
"""
Change Consent Status by Operator - With misformatted payload
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/signed/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload_signed(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=True,
cr_id_fault=False,
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_dict))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_signed_incorrect_cr_id(self):
"""
Change Consent Status by Operator - With faulty Consent Record ID in payload
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/signed/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload_signed(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=True,
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_change_consent_status_signed_unknown_cr_id(self):
"""
Change Consent Status by Operator - With unknown Consent Record ID in URL
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_give_consent()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
source_cr_id = "unknown-" + source_cr_id
# Change Consent Status of Source Service
consent_status_change_url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + source_cr_id + "/statuses/signed/"
consent_status_change_payload, source_csr_id_new = generate_consent_status_payload_signed(
surrogate_id=self.SOURCE_SURROGATE_ID,
cr_id=source_cr_id,
consent_status="Paused",
prev_record_id=source_csr_id,
misformatted_payload=False,
cr_id_fault=False,
operator_kid=self.OPERATOR_KID,
operator_key=self.OPERATOR_KEY_OBJECT
)
consent_status_change_response = self.app.post(consent_status_change_url, data=consent_status_change_payload, headers=request_headers)
unittest.TestCase.assertEqual(self, consent_status_change_response.status_code, 400, msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=consent_status_change_response.data), msg=consent_status_change_response.data)
unittest.TestCase.assertTrue(self, validate_json(consent_status_change_response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consents_by_link(self):
"""
Fetch Consents of Service Link
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = source_cr_id_array
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
##########
##########
def test_for_account_fetch_consents_by_link_with_consent_pairs(self):
"""
Fetch Consents of Service Link including Consent pairs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/?get_consent_pair=True"
count *= 2
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = source_cr_id_array + sink_cr_id_array
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consents_by_link_wrong_slr_id(self):
"""
Fetch Consents of Service Link - With invalid Service Link ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consents_by_link_with_consent_pairs_wrong_slr_id(self):
"""
Fetch Consents of Service Link including Consent pairs - With invalid Service Link ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/?get_consent_pair=True"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consents_by_link_wrong_account_id(self):
"""
Fetch Consents of Service Link - With invalid Account ID
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(sink_slr_id) + "/servicelinks/" + str(source_slr_id) + "/consents/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
##########
##########
def test_for_account_fetch_consent_by_link(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id_array[0]) + "?get_consent_pair=False"
count = 1
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[0]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_link_with_consent_pair(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id_array[0]) + "?get_consent_pair=True"
count = 2
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[0], sink_cr_id_array[0]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_link_wrong_slr_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/" + str(source_cr_id_array[0]) + "/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_link_with_consent_pairs_wrong_slr_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/" + str(source_cr_id_array[0]) + "/?get_consent_pair=True"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_link_wrong_account_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(sink_slr_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id_array[0]) + "/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_last_consent_by_link(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/last/?get_consent_pair=False"
count = 1
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[-1]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_last_consent_by_link_with_consent_pair(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/last/?get_consent_pair=True"
count = 2
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[-1], sink_cr_id_array[-1]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_last_consent_by_link_wrong_slr_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/last/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_last_consent_by_link_with_consent_pairs_wrong_slr_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_ssr_id) + "/consents/last/?get_consent_pair=True"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_last_consent_by_link_wrong_account_id(self):
"""
Test Consent - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(sink_slr_id) + "/servicelinks/" + str(source_slr_id) + "/consents/last/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consents(self):
"""
Test Consents
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/"
count *= 2
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = source_cr_id_array + sink_cr_id_array
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consents_wrong_account_id(self):
"""
Test Consents - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(sink_slr_id) + "/consents/?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id
##########
##########
def test_for_account_fetch_consent_by_account(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id_array[0]) + "/?get_consent_pair=False"
count = 1
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[0]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_account_with_consent_pair(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id_array[0]) + "?get_consent_pair=True"
count = 2
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_cr_id_array[0], sink_cr_id_array[0]]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_account_with_wrong_consent_id(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(sink_slr_id) + "?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_by_account_with_wrong_account_id(self):
"""
Test Consent
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(source_slr_id) + "/consents/" + str(source_cr_id_array[0]) + "?get_consent_pair=False"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_consent(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status_listing))
# ID verification
verification_id_array = [source_csr_id, source_csr_id_new]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_consent_with_status_filter(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/?status_id=" + str(source_csr_id)
count = 1
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status_listing))
unittest.TestCase.assertEqual(self, len(json.loads(response.data)['data']), count, msg="Response array contains {} objects instead of {} expected objects".format(len(json.loads(response.data)['data']), count))
# ID verification
verification_id_array = [source_csr_id_new]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_consent_with_wrong_consent_id(self):
"""
Test Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(sink_slr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_consent_with_status_filter_faulty(self):
"""
Test Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/?status_id=faulty_id"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_consent(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_consent_wrong_consent_status_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(sink_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_link_and_consent(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status_listing))
# ID verification
verification_id_array = [source_csr_id, source_csr_id_new]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_link_and_consent_wrong_link_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_statuses_by_account_and_link_and_consent_wrong_consent_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_link_and_consent(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id_new) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id_new]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_link_and_consent_wrong_link_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id_new) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_link_and_consent_wrong_consent_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/statuses/" + str(source_csr_id_new) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_status_by_account_and_link_and_consent_wrong_status_id(self):
"""
Test Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(sink_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_consent(self):
"""
Test Last Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id_new]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_consent_with_wrong_consent_id(self):
"""
Test Last Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/consents/" + str(source_ssr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_consent_with_wrong_account_id(self):
"""
Test Last Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_link_and_consent(self):
"""
Test Last Consent Status
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id_new]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_link_and_consent_with_wrong_consent_id(self):
"""
Test Last Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(account_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_ssr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_account_fetch_consent_last_status_by_account_and_link_and_consent_with_wrong_account_id(self):
"""
Test Last Consent Status - Invalid IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-User'] = str(account_api_key)
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/accounts/" + str(source_slr_id) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 403, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##
# TEST CASES FOR SERVICES
##########
##########
def test_for_service_fetch_consents(self):
"""
Consent listing for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_listing))
# ID verification
verification_id_array = [source_cr_id]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consents_wrong_service_id(self):
"""
Consent listing for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SINK_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consents_wrong_link_id(self):
"""
Consent listing for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(sink_slr_id) + "/consents/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent(self):
"""
Consent for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent))
# ID verification
verification_id_array = [source_cr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_wrong_service_id(self):
"""
Consent for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SINK_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_wrong_link_id(self):
"""
Consent for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_wrong_consent_id(self):
"""
Consent for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_statuses(self):
"""
Consent Statuses for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status_listing))
# ID verification
verification_id_array = [source_csr_id, source_csr_id_new]
for record_object in json.loads(response.data)['data']:
id_to_verify = str(record_object['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_statuses_wrong_service_id(self):
"""
Consent Statuses for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SINK_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_statuses_wrong_link_id(self):
"""
Consent Statuses for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_statuses_wrong_consent_id(self):
"""
Consent Statuses for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/statuses/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_status(self):
"""
Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_status_wrong_service_id(self):
"""
Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SINK_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_status_wrong_link_id(self):
"""
Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(source_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_status_wrong_consent_id(self):
"""
Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/statuses/" + str(source_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_consent_status_wrong_consent_status_id(self):
"""
Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/" + str(sink_csr_id) + "/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_last_consent_status(self):
"""
Last Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_consent_status))
# ID verification
verification_id_array = [source_csr_id_new]
id_to_verify = str(json.loads(response.data)['data']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_last_consent_status_wrong_service_id(self):
"""
Last Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SINK_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_last_consent_status_wrong_link_id(self):
"""
Last Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(sink_slr_id) + "/consents/" + str(source_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_for_service_fetch_last_consent_status_wrong_consent_id(self):
"""
Last Consent Status for Service
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id = self.test_for_account_change_consent_status_source()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/services/" + str(self.SOURCE_SERVICE_ID) + "/servicelinks/" + str(source_slr_id) + "/consents/" + str(sink_cr_id) + "/statuses/last/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, source_cr_id, source_csr_id, source_csr_id_new, sink_slr_id, sink_ssr_id, sink_cr_id, sink_csr_id
##########
##########
def test_authorisation_token_data(self):
"""
Authorisation token data
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/consents/" + str(sink_cr_id_array[0]) + "/authorisationtoken/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 200, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_authorisation_token_data))
# ID verification
## Source's Consent Record
verification_id_array = [source_cr_id_array[0]]
id_to_verify = str(json.loads(response.data)['data']['consent_record']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="Source's Consent Record ID {} not one of {}".format(id_to_verify, verification_id_array))
## Sink's Service Link Record
verification_id_array = [sink_slr_id]
id_to_verify = str(json.loads(response.data)['data']['service_link_record']['id'])
unittest.TestCase.assertIn(self, id_to_verify, verification_id_array, msg="Sink's Service Link Record ID {} not one of {}".format(id_to_verify, verification_id_array))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
##########
##########
def test_authorisation_token_data_wrong_consent_id(self):
"""
Authorisation token data - Faulty IDs
:return: account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
"""
account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count = self.test_for_account_give_consent_multiple()
request_headers = default_headers
request_headers['Api-Key-Sdk'] = str(sdk_api_key)
url = self.API_PREFIX_INTERNAL + "/consents/" + str(source_cr_id_array[0]) + "/authorisationtoken/"
response = self.app.get(url, headers=request_headers)
unittest.TestCase.assertEqual(self, response.status_code, 404, msg=response.data)
unittest.TestCase.assertTrue(self, is_json(json_object=response.data), msg=response.data)
unittest.TestCase.assertTrue(self, validate_json(response.data, schema_request_error_detail_as_str))
return account_id, account_api_key, sdk_api_key, source_slr_id, source_ssr_id, sink_slr_id, sink_ssr_id, source_cr_id_array, source_csr_id_array, sink_cr_id_array, sink_csr_id_array, count
if __name__ == '__main__':
unittest.main()
| 55.250064 | 239 | 0.718956 | 30,307 | 214,536 | 4.626654 | 0.007787 | 0.052503 | 0.032863 | 0.040436 | 0.97424 | 0.964755 | 0.958472 | 0.95184 | 0.947461 | 0.94473 | 0 | 0.003263 | 0.175742 | 214,536 | 3,882 | 240 | 55.264297 | 0.789689 | 0.113454 | 0 | 0.790952 | 0 | 0 | 0.054604 | 0.005913 | 0 | 0 | 0 | 0 | 0.22381 | 1 | 0.067143 | false | 0.003333 | 0.005714 | 0 | 0.141905 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b5ce103b1e939ac31cca4fcb1f9f75aa952842cb | 150 | py | Python | rman_operators/__init__.py | ian-hsieh/RenderManForBlender | c827f029f4cbbd1fcc71ed8d3694fc5ac58cc468 | [
"MIT"
] | 12 | 2019-05-03T21:58:15.000Z | 2022-02-24T07:02:21.000Z | rman_operators/__init__.py | ian-hsieh/RenderManForBlender | c827f029f4cbbd1fcc71ed8d3694fc5ac58cc468 | [
"MIT"
] | 4 | 2019-03-07T18:20:16.000Z | 2020-09-24T21:53:15.000Z | rman_operators/__init__.py | ian-hsieh/RenderManForBlender | c827f029f4cbbd1fcc71ed8d3694fc5ac58cc468 | [
"MIT"
] | 3 | 2019-05-25T01:17:09.000Z | 2019-09-13T14:43:12.000Z | from . import rman_operators_printer
def register():
rman_operators_printer.register()
def unregister():
rman_operators_printer.unregister() | 21.428571 | 39 | 0.786667 | 17 | 150 | 6.588235 | 0.470588 | 0.348214 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126667 | 150 | 7 | 39 | 21.428571 | 0.854962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0.6 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 8 |
b5e38c7b2f37f77f94a6733875de87bb7dd90a18 | 43 | py | Python | utils/__init__.py | 12doge-LEO/imiTGBot | 2be263e77bac61232b7a55c0d0159a9fdd0e7b47 | [
"MIT"
] | null | null | null | utils/__init__.py | 12doge-LEO/imiTGBot | 2be263e77bac61232b7a55c0d0159a9fdd0e7b47 | [
"MIT"
] | null | null | null | utils/__init__.py | 12doge-LEO/imiTGBot | 2be263e77bac61232b7a55c0d0159a9fdd0e7b47 | [
"MIT"
] | null | null | null | from .audio_cache import update_audio_cache | 43 | 43 | 0.906977 | 7 | 43 | 5.142857 | 0.714286 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
bd41592bf06539b32b241df692280394e2bfa7d7 | 956 | py | Python | jade_utils/dask_tools/__init__.py | tam203/jade_utils | f717229444bd2f18c94e78c9cc659a9ac1650fa4 | [
"BSD-3-Clause"
] | null | null | null | jade_utils/dask_tools/__init__.py | tam203/jade_utils | f717229444bd2f18c94e78c9cc659a9ac1650fa4 | [
"BSD-3-Clause"
] | 2 | 2018-09-25T08:59:09.000Z | 2018-09-25T08:59:09.000Z | jade_utils/dask_tools/__init__.py | tam203/jade_utils | f717229444bd2f18c94e78c9cc659a9ac1650fa4 | [
"BSD-3-Clause"
] | 1 | 2021-04-10T23:57:42.000Z | 2021-04-10T23:57:42.000Z | """Tools for working with dask."""
def update_worker_memory(cluster, new_limit):
cluster.pod_template.spec.containers[0].resources.limits["memory"] = new_limit
cluster.pod_template.spec.containers[0].resources.requests["memory"] = new_limit
if '--memory-limit' in cluster.pod_template.spec.containers[0].args:
index = cluster.pod_template.spec.containers[0].args.index('--memory-limit')
cluster.pod_template.spec.containers[0].args[index + 1] = new_limit
return cluster
def update_worker_cpu(cluster, new_limit):
cluster.pod_template.spec.containers[0].resources.limits["cpu"] = new_limit
cluster.pod_template.spec.containers[0].resources.requests["cpu"] = new_limit
if '--nthreads' in cluster.pod_template.spec.containers[0].args:
index = cluster.pod_template.spec.containers[0].args.index('--nthreads')
cluster.pod_template.spec.containers[0].args[index + 1] = new_limit
return cluster
| 47.8 | 84 | 0.73431 | 131 | 956 | 5.19084 | 0.21374 | 0.147059 | 0.264706 | 0.323529 | 0.804412 | 0.804412 | 0.804412 | 0.797059 | 0.797059 | 0.797059 | 0 | 0.014337 | 0.124477 | 956 | 19 | 85 | 50.315789 | 0.798088 | 0.029289 | 0 | 0.285714 | 0 | 0 | 0.071584 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1f9a80523da6daa5d13eb1c6a821d016c111480e | 2,305 | py | Python | cdisp/helpers.py | felippebarbosa/cdisp | d9a612c252495ab017bffccdd7e82bbb555e07dd | [
"BSL-1.0"
] | null | null | null | cdisp/helpers.py | felippebarbosa/cdisp | d9a612c252495ab017bffccdd7e82bbb555e07dd | [
"BSL-1.0"
] | null | null | null | cdisp/helpers.py | felippebarbosa/cdisp | d9a612c252495ab017bffccdd7e82bbb555e07dd | [
"BSL-1.0"
] | null | null | null | #-*- coding: utf-8 -*-
"""
Helper functions
"""
import numpy # module for array manipulation
def Df_fine(x, t):
"""Function for computing first order numerical derivatives using 3rd order derivative calculation (more precise and slower)
Usage
------
The function returns y = dx/dt.
Parameters
------
x: 1-dimensional array
t: 1-dimensional array preferably with the same length as x
"""
#######
if not isinstance(x, numpy.ndarray): x = numpy.array(x) # convert to numpy array
N = x.shape[0] # length of the original array
df = [] # initial derivative empty list
for k in range(N): # loop for calculation
if k == 0: # first point case
dx = x[k + 1] - x[k]
dt = t[k + 1] - t[k]
elif k == N - 1: # last point case
dx = x[k] - x[k - 1]
dt = t[k] - t[k - 1]
elif k == 1 or k == N - 2: # second and second-to-last cases
dx = x[k + 1] - x[k - 1]
dt = t[k + 1] - t[k - 1]
else: # remaining cases
dx = -x[k + 2] + 8*x[k + 1] - 8*x[k - 1] + x[k - 2]
dt = 3*(t[k + 2] - t[k - 2])
df.append(dx/dt) # add point to the list
return numpy.array(df)
def Df(x, t):
"""Function for computing first order numerical derivatives
Usage
------
The function returns y = dx/dt.
Parameters
------
x: 1-dimensional array
t: 1-dimensional array preferably with the same length as x
"""
#######
if not isinstance(x, numpy.ndarray): x = numpy.array(x) # convert to numpy array
N = x.shape[0] # length of the original array
df = [] # initial derivative empty list
for k in range(N): # loop for calculation
if k == 0: # first point case
dx = x[k + 1] - x[k]
dt = t[k + 1] - t[k]
elif k == N - 1: # last point case
dx = x[k] - x[k - 1]
dt = t[k] - t[k - 1]
else: # remaining cases
dx = x[k + 1] - x[k - 1]
dt = t[k + 1] - t[k - 1]
df.append(dx/dt) # add point to the list
return numpy.array(df)
| 30.733333 | 128 | 0.470282 | 329 | 2,305 | 3.291793 | 0.234043 | 0.035088 | 0.027701 | 0.018467 | 0.808864 | 0.804247 | 0.804247 | 0.804247 | 0.804247 | 0.611265 | 0 | 0.028037 | 0.396529 | 2,305 | 74 | 129 | 31.148649 | 0.750539 | 0.18872 | 0 | 0.777778 | 0 | 0 | 0.021399 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027778 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1fbb87a2f29fc734a30131b212b93ebc41ef8a20 | 185 | py | Python | platform/hwconf_data/efm32hg/modules/PIN/PIN_Snippets.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | null | null | null | platform/hwconf_data/efm32hg/modules/PIN/PIN_Snippets.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T02:36:22.000Z | 2020-08-25T02:36:22.000Z | platform/hwconf_data/efm32hg/modules/PIN/PIN_Snippets.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T01:56:04.000Z | 2020-08-25T01:56:04.000Z | """
Generated from a template
"""
import efm32hg.PythonSnippet.RuntimeModel as RuntimeModel
from efm32hg.modules.PIN.PIN_Defs import PORT_PINS
def activate_runtime():
pass
| 10.882353 | 57 | 0.762162 | 23 | 185 | 6 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025806 | 0.162162 | 185 | 16 | 58 | 11.5625 | 0.864516 | 0.135135 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
1fc035e5c5221f5032b0f3463b27b45a2fc128a5 | 193 | py | Python | bookworm/platform_services/speech_engines.py | xingkong0113/bookworm | 7214067f48e7a951198806a1f9170e3fd8fc0cce | [
"MIT"
] | 36 | 2020-11-15T03:21:39.000Z | 2022-03-05T01:11:26.000Z | bookworm/platform_services/speech_engines.py | xingkong0113/bookworm | 7214067f48e7a951198806a1f9170e3fd8fc0cce | [
"MIT"
] | 90 | 2020-10-06T14:46:07.000Z | 2022-03-31T03:03:34.000Z | bookworm/platform_services/speech_engines.py | xingkong0113/bookworm | 7214067f48e7a951198806a1f9170e3fd8fc0cce | [
"MIT"
] | 20 | 2020-09-30T17:40:44.000Z | 2022-03-17T19:59:53.000Z | # coding: utf-8
from . import PLATFORM
if PLATFORM == "win32":
from ._win32.speech_engines import TTS_ENGINES
elif PLATFORM == "linux":
from ._linux.speech_engines import TTS_ENGINES
| 21.444444 | 50 | 0.735751 | 26 | 193 | 5.230769 | 0.5 | 0.191176 | 0.279412 | 0.323529 | 0.426471 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.170984 | 193 | 8 | 51 | 24.125 | 0.81875 | 0.067358 | 0 | 0 | 0 | 0 | 0.05618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1fc7aa9c9db8139983023c3a38b1b4c3e4c9e8ad | 8,042 | py | Python | main.py | SoloGuardiaN/Project-Zork | 63f617f87710fe9246be6314f8ff3ddbe5699199 | [
"MIT"
] | null | null | null | main.py | SoloGuardiaN/Project-Zork | 63f617f87710fe9246be6314f8ff3ddbe5699199 | [
"MIT"
] | null | null | null | main.py | SoloGuardiaN/Project-Zork | 63f617f87710fe9246be6314f8ff3ddbe5699199 | [
"MIT"
] | null | null | null | import random
# Floor 1 - 3
floor3 = ['sword', 'stairs down', 'nothing', 'boss monster', 'prize']
floor2 = ['magic stone', 'stairs up', 'monster', 'stairs down', 'nothing']
floor1 = ['nothing', 'sword', 'monster', 'stairs up', 'sword']
# inventory, user room, and user floor variables
inventory = [0, 0, 0]
user_room = 0
user_floor = floor1
game_over = 0
last = ''
print("Welcome unfortunate victim, this is a test of your skills in combat and how fast you can think on your feet. Currently you are locked in a warehouse; there are 3 normal 'guests' and one very very special 'guest'. Can you defeat the 'guests' and retrieve the key to escape? Which room on this floor would you like to go to? Or maybe you'd like to go to a different floor? Type 'help' for the commands.\n")
# if statements for game function
while game_over == 0:
if user_floor[user_room] == 'nothing':
print("This room has nothing in it.")
elif user_floor[user_room] == 'sword':
print("This room has a sword in it!")
elif user_floor[user_room] == 'magic stone':
print("This room has magic stones in it!")
elif user_floor[user_room] == 'stairs up':
print("This room has stairs going up.")
elif user_floor[user_room] == 'stairs down':
print("This room has stairs going down.")
elif user_floor[user_room] == 'monster':
print("There is a monster in the room with you.")
elif user_floor[user_room] == 'boss monster':
print("The boss looks at you.")
elif user_floor[user_room] == 'prize':
print("Congrats, you obtained the prize of finishing the game")
game_over = 1
x = input("What do you do?")
if x == 'help':
print("left, right, up, down, grab, fight, help, end")
elif x == 'left':
if user_floor[user_room] == 'monster' and (last == 'left' or last == 'grab' or last == 'up' or last == 'down'):
print("You can pass the monster without a fight.")
elif user_room > 0:
user_room -= 1
else:
print("You run straight into a wall.")
elif x == 'right':
if (user_floor[user_room] == 'monster' or user_floor[user_room] == 'boss monster') and (last == 'right' or last == 'grab' or last == 'up' or last == 'down'):
print("You can pass the monster without a fight.")
elif user_room < 5:
user_room += 1
else :
print("You run straight into a wall")
elif x == 'up':
if user_floor[user_room] == 'stairs up':
if user_floor == floor1:
user_floor = floor2
print("You went up the stairs")
elif user_floor == floor2:
user_floor = floor3
print("You went up the stairs")
elif x == 'down':
if user_floor[user_room] == 'stairs down':
if user_floor == floor2:
user_floor = floor1
print("You went down the stairs")
elif user_floor == floor3:
user_floor = floor2
print("You went down the stairs")
elif x == 'grab':
if user_floor[user_room] == 'sword':
slots = 0
var = 0
while
var == 0:
if inventory[slots] == 'sword'
or inventory[slots] == 'magic stone':
slots += 1
else :
inventory[slots] = "sword"
print("You picked up a sword")
var = 1
user_floor[user_room] = "nothing"
if user_floor[user_room] == 'magic stone':
slots = 0
var = 0
while
var == 0:
if inventory[slots] == 'sword'
or inventory[slots] == 'magic stone':
slots += 1
else :
inventory[slots] = "magic stone"
print("You picked up a magic stone")
var = 1
user_floor[user_room] = "nothing"
elif x == 'fight':
if user_floor[user_room] == 'monster':
slots = 2
var = 0
while
var == 0:
if inventory[slots] == '0'
or inventory[slots] == 'magic stone':
slots -= 1
else :
user_floor[user_room] = 'nothing'
inventory[slots] = 0
print("You killed the monster, but broke your sword in the process")
var = 1
elif user_floor[user_room] == 'boss monster':
slots = 2
var = 0
while
var == 0:
if inventory[slots] == '0'
or inventory[slots] == 'sword':
slots -= 1
else :
user_floor[user_room] = 'nothing'
inventory[slots] = 0
print("You killed the boss, but broke your magic stone in the process")
var = 1
elif x == 'end':
print("You killed yourself. How pityful.")
game_over = 1
else :
print("That is not a command.")
last = x
and(last == 'right'
or last == 'grab'
or last == 'up'
or last == 'down'):
print("You can pass the monster without a fight.")
elif user_room < 5:
user_room += 1
else :
print("You run straight into a wall")
elif x == 'up':
if user_floor[user_room] == 'stairs up':
if user_floor == floor1:
user_floor = floor2
print("You went up the stairs")
elif user_floor == floor2:
user_floor = floor3
print("You went up the stairs")
elif x == 'down':
if user_floor[user_room] == 'stairs down':
if user_floor == floor2:
user_floor = floor1
print("You went down the stairs")
elif user_floor == floor3:
user_floor = floor2
print("You went down the stairs")
elif x == 'grab':
if user_floor[user_room] == 'sword':
slots = 0
var = 0
while
var == 0:
if inventory[slots] == 'sword'
or inventory[slots] == 'magic stone':
slots += 1
else :
inventory[slots] = "sword"
print("You picked up a sword")
var = 1
user_floor[user_room] = "nothing"
if user_floor[user_room] == 'magic stone':
slots = 0
var = 0
while
var == 0:
if inventory[slots] == 'sword'
or inventory[slots] == 'magic stone':
slots += 1
else :
inventory[slots] = "magic stone"
print("You picked up a magic stone")
var = 1
user_floor[user_room] = "nothing"
elif x == 'fight':
if user_floor[user_room] == 'monster':
slots = 2
var = 0
while
var == 0:
if inventory[slots] == '0'
or inventory[slots] == 'magic stone':
slots -= 1
else :
user_floor[user_room] = 'nothing'
inventory[slots] = 0
print("You killed the monster, but broke your sword in the process")
var = 1
elif user_floor[user_room] == 'boss monster':
slots = 2
var = 0
while
var == 0:
if inventory[slots] == '0'
or inventory[slots] == 'sword':
slots -= 1
else :
user_floor[user_room] = 'nothing'
inventory[slots] = 0
print("You killed the boss, but broke your magic stone in the process")
var = 1
elif x == 'end':
print("You killed yourself. How pityful.")
game_over = 1
else :
print("That is not a command.")
last = x
# bank_bot/tests/test_hacking_module.py (repo: Tengro/larp_bankbot, MIT license)
import pytest
from bank_bot.banking_system.client_factory import BankingClientFactory
from bank_bot.banking_system.banking_system_class_based import BankingClient
from bank_bot.banking_system.user_class import User
from bank_bot.banking_system import UserError, TransactionError, HackerError
from bank_bot.settings import NO_USER_DATA, NO_TRANSACTIONS_FOUND, DEFAULT_FINANCES, ATTRIBUTE_UPDATE_MESSAGE, NO_MESSAGES_FOUND
from bank_bot.banking_system.transaction_class import Transaction
from bank_bot.banking_system.message_class import Message


def test_hacker_validation(database, mock_message):
    character_hash = User.create_user(2, 2, "Test user", database)
    client = BankingClientFactory(database).create_client(mock_message)
    with pytest.raises(HackerError):
        client.hacker_validation(0)
    User.update_db_value(character_hash, "hacker_level", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    with pytest.raises(HackerError):
        client.hacker_validation(2)
    client.hacker_validation(0)


def test_hack_inspect_user(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash, "hacker_level", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_4 = User.create_user(4, 4, "Test user 4", database)
    User.update_db_value(character_hash_4, "hacker_defence", 1, database)
    user2 = client.get_user_by_user_hash(character_hash_2)
    user4 = client.get_user_by_user_hash(character_hash_4)
    with pytest.raises(UserError):
        client.hack_inspect_user("/hack 1234567890")
    resulting_data, chat_id, show_hack = client.hack_inspect_user(f'/hack {character_hash_2}')
    assert resulting_data == user2.hack_result
    assert chat_id == user2.chat_id
    assert not show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_user(f'/hack 0000000000')
    resulting_data, chat_id, show_hack = client.hack_inspect_user(f'/hack {character_hash_4}')
    assert resulting_data == user4.hack_result
    assert chat_id == user4.chat_id
    assert show_hack


def test_hack_inspect_transactions(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_sent {character_hash_2}", True)
    assert results == NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_recieved {character_hash_2}", False)
    assert results == NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_recieved {character_hash_3}", False)
    assert results == NO_TRANSACTIONS_FOUND
    assert show_hack
    Transaction.create_transaction(character_hash_2, character_hash_4, 100, database)
    Transaction.create_transaction(character_hash_2, character_hash_3, 100, database)
    Transaction.create_transaction(character_hash_2, "0000000000", 100, database)
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_sent {character_hash_2}", True)
    assert results != NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_recieved {character_hash_2}", False)
    assert results == NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_transactions(f"/hack_history_recieved {character_hash_3}", False)
    assert results != NO_TRANSACTIONS_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_transactions(f"/hack_history_recieved 0000000000", False)


def test_hack_inspect_all_transactions(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, show_hack = client.hack_inspect_all_transactions(f"/hack_history_all {character_hash_2}")
    assert results == NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_all_transactions(f"/hack_history_all {character_hash_3}")
    assert results == NO_TRANSACTIONS_FOUND
    assert show_hack
    Transaction.create_transaction(character_hash_2, character_hash_4, 100, database)
    Transaction.create_transaction(character_hash_2, character_hash_3, 100, database)
    Transaction.create_transaction(character_hash_2, "0000000000", 100, database)
    results, chat_id, show_hack = client.hack_inspect_all_transactions(f"/hack_history_all {character_hash_2}")
    assert results != NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_all_transactions(f"/hack_history_all {character_hash_3}")
    assert results != NO_TRANSACTIONS_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_all_transactions(f"/hack_history_all 0000000000")


def test_hack_inspect_pair_transactions(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    User.update_db_value(character_hash_4, "hacker_defence", 2, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_2} {character_hash_3}")
    assert results == NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_4} {character_hash_3}")
    assert results == NO_TRANSACTIONS_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_4} 0000000000")
    Transaction.create_transaction(character_hash_2, character_hash_4, 100, database)
    Transaction.create_transaction(character_hash_2, character_hash_3, 100, database)
    Transaction.create_transaction(character_hash_2, "0000000000", 100, database)
    Transaction.create_transaction(character_hash_3, character_hash_4, 100, database)
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_2} {character_hash_3}")
    assert results != NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_2} 0000000000")
    assert results != NO_TRANSACTIONS_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history(f"/hack_history_pair {character_hash_4} {character_hash_3}")
    assert results != NO_TRANSACTIONS_FOUND
    assert show_hack


def test_hack_inspect_messages(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_sent {character_hash_2}", True)
    assert results == NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_recieved {character_hash_2}", False)
    assert results == NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_recieved {character_hash_3}", False)
    assert results == NO_MESSAGES_FOUND
    assert show_hack
    Message.create_message(character_hash_2, character_hash_4, "100", database)
    Message.create_message(character_hash_2, character_hash_3, "100", database)
    Message.create_message(character_hash_2, "0000000000", "100", database)
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_sent {character_hash_2}", True)
    assert results != NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_recieved {character_hash_2}", False)
    assert results == NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_messages(f"/hack_messages_history_recieved {character_hash_3}", False)
    assert results != NO_MESSAGES_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_messages(f"/hack_messages_history_recieved 0000000000", False)


def test_hack_inspect_all_messages(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, show_hack = client.hack_inspect_all_messages(f"/hack_messages_history_all {character_hash_2}")
    assert results == NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_all_messages(f"/hack_messages_history_all {character_hash_3}")
    assert results == NO_MESSAGES_FOUND
    assert show_hack
    Message.create_message(character_hash_2, character_hash_4, "100", database)
    Message.create_message(character_hash_2, character_hash_3, "100", database)
    Message.create_message(character_hash_2, "0000000000", "100", database)
    results, chat_id, show_hack = client.hack_inspect_all_messages(f"/hack_messages_history_all {character_hash_2}")
    assert results != NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, show_hack = client.hack_inspect_all_messages(f"/hack_messages_history_all {character_hash_3}")
    assert results != NO_MESSAGES_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_all_messages(f"/hack_messages_history_all 0000000000")


def test_hack_inspect_pair_messages(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash_1 = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash_1, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_3 = User.create_user(4, 4, "Test user 3", database)
    character_hash_4 = User.create_user(5, 5, "Test user 4", database)
    User.update_db_value(character_hash_3, "hacker_defence", 1, database)
    User.update_db_value(character_hash_4, "hacker_defence", 2, database)
    client = BankingClientFactory(database).create_client(mock_message)
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_2} {character_hash_3}")
    assert results == NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_4} {character_hash_3}")
    assert results == NO_MESSAGES_FOUND
    assert show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_4} 0000000000")
    Message.create_message(character_hash_2, character_hash_4, "100", database)
    Message.create_message(character_hash_2, character_hash_3, "100", database)
    Message.create_message(character_hash_2, "0000000000", "100", database)
    Message.create_message(character_hash_3, character_hash_4, "100", database)
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_2} {character_hash_3}")
    assert results != NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_2} 0000000000")
    assert results != NO_MESSAGES_FOUND
    assert not show_hack
    results, chat_id, hash_1, chat_2_id, hash_2, show_hack = client.hack_inspect_pair_history_messages(f"/hack_messages_history_pair {character_hash_4} {character_hash_3}")
    assert results != NO_MESSAGES_FOUND
    assert show_hack


def test_hack_send_hacked_message(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash, "hacker_level", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_4 = User.create_user(4, 4, "Test user 4", database)
    User.update_db_value(character_hash_4, "hacker_defence", 1, database)
    user2 = client.get_user_by_user_hash(character_hash_2)
    user4 = client.get_user_by_user_hash(character_hash_4)
    with pytest.raises(UserError):
        client.prepare_hacker_message("/h@ck_message 1234567890 OLOLO")
    chat_id, sent_message, show_hack = client.prepare_hacker_message(f'/h@ck_message {character_hash_2} OLOLO')
    assert sent_message == "OLOLO"
    assert chat_id == user2.chat_id
    assert not show_hack
    with pytest.raises(HackerError):
        client.hack_inspect_user(f'/h@ck_message 0000000000 OLOLO')
    chat_id, sent_message, show_hack = client.prepare_hacker_message(f'/h@ck_message {character_hash_4} OLOLO')
    assert sent_message == "OLOLO"
    assert chat_id == user4.chat_id
    assert show_hack


def test_create_hacked_transaction(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_4 = User.create_user(4, 4, "Test user 4", database)
    User.update_db_value(character_hash_4, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    double_amount = DEFAULT_FINANCES * 2
    half_amount = DEFAULT_FINANCES / 2
    user2 = client.get_user_by_user_hash(character_hash_2)
    user1 = client.get_user_by_user_hash(character_hash)
    assert user2.finances == DEFAULT_FINANCES
    assert user1.finances == DEFAULT_FINANCES
    with pytest.raises(TransactionError):
        client.create_hacker_transaction(f"/h@ck_theft {character_hash_2} {double_amount}")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction(f"/h@ck_theft {character_hash} {half_amount}")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction(f"/h@ck_theft {character_hash_2} notanumber")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction(f"/h@ck_theft {character_hash_2} 0")
    with pytest.raises(UserError):
        client.create_hacker_transaction(f"/h@ck_theft 1234567890 {half_amount}")
    with pytest.raises(HackerError):
        client.create_hacker_transaction(f"/h@ck_theft 0000000000 {half_amount}")
    hacker_chat_id, hacker_hash, victim_chat_id, transaction_message, show_hack = client.create_hacker_transaction(f"/h@ck_theft {character_hash_2} {half_amount}")
    user2 = client.get_user_by_user_hash(character_hash_2)
    user1 = client.get_user_by_user_hash(character_hash)
    assert user2.finances == DEFAULT_FINANCES - half_amount
    assert user1.finances == DEFAULT_FINANCES + half_amount
    assert hacker_chat_id == user1.chat_id
    assert victim_chat_id == user2.chat_id
    assert not show_hack
    hacker_chat_id, hacker_hash, victim_chat_id, transaction_message, show_hack = client.create_hacker_transaction(f"/h@ck_theft {character_hash_4} {half_amount}")
    user2 = client.get_user_by_user_hash(character_hash_4)
    user = client.get_user_by_user_hash(character_hash)
    assert hacker_chat_id == user1.chat_id
    assert victim_chat_id == user2.chat_id
    assert user2.finances == DEFAULT_FINANCES - half_amount
    assert user.finances == DEFAULT_FINANCES + half_amount + half_amount
    assert show_hack


def test_create_hacked_transaction_other(database, mock_message):
    User.create_admin(1, 1, database)
    character_hash = User.create_user(2, 2, "Test user", database)
    User.update_db_value(character_hash, "hacker_level", 1, database)
    character_hash_2 = User.create_user(3, 3, "Test user 2", database)
    character_hash_4 = User.create_user(4, 4, "Test user 4", database)
    character_hash_5 = User.create_user(5, 5, "Test user 5", database)
    User.update_db_value(character_hash_4, "hacker_defence", 1, database)
    client = BankingClientFactory(database).create_client(mock_message)
    double_amount = DEFAULT_FINANCES * 2
    half_amount = DEFAULT_FINANCES / 2
    user2 = client.get_user_by_user_hash(character_hash_2)
    user1 = client.get_user_by_user_hash(character_hash)
    assert user2.finances == DEFAULT_FINANCES
    assert user1.finances == DEFAULT_FINANCES
    with pytest.raises(TransactionError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash_2} {character_hash_5} {double_amount}")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash} {character_hash_5} {half_amount}")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash_2} {character_hash_5} notanumber")
    with pytest.raises(TransactionError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash_2} {character_hash_5} 0")
    with pytest.raises(UserError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other 1234567890 {character_hash_5} {half_amount}")
    with pytest.raises(HackerError):
        client.create_hacker_transaction_other(f"/h@ck_theft_other 0000000000 {character_hash_5} {half_amount}")
    hacker_hash, victim_chat_id, profiteer_chat_id, transaction_message, show_hack = client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash_2} {character_hash_5} {half_amount}")
    user2 = client.get_user_by_user_hash(character_hash_2)
    user5 = client.get_user_by_user_hash(character_hash_5)
    assert user2.finances == DEFAULT_FINANCES - half_amount
    assert user5.finances == DEFAULT_FINANCES + half_amount
    assert hacker_hash == client.user.character_hash
    assert victim_chat_id == user2.chat_id
    assert not show_hack
    hacker_hash, victim_chat_id, profiteer_chat_id, transaction_message, show_hack = client.create_hacker_transaction_other(f"/h@ck_theft_other {character_hash_4} {character_hash_5} {half_amount}")
    user4 = client.get_user_by_user_hash(character_hash_4)
    user5 = client.get_user_by_user_hash(character_hash_5)
    assert hacker_hash == client.user.character_hash
    assert victim_chat_id == user4.chat_id
    assert user4.finances == DEFAULT_FINANCES - half_amount
    assert user5.finances == DEFAULT_FINANCES + half_amount + half_amount
    assert show_hack
# flask_opentracing/__init__.py (repo: aaronluoq/python-flask, BSD-3-Clause license)
from .tracing import FlaskTracing  # noqa
from .tracing import FlaskTracing as FlaskTracer  # noqa, deprecated
# go/vumitools/tests/test_utils.py (repo: lynnUg/vumi-go, BSD-3-Clause license)
from twisted.internet.defer import inlineCallbacks
from vumi.tests.helpers import VumiTestCase, MessageHelper
from go.vumitools.utils import MessageMetadataDictHelper, MessageMetadataHelper
from go.vumitools.tests.helpers import VumiApiHelper
class TestMessageMetadataDictHelper(VumiTestCase):
def setUp(self):
self.msg_helper = self.add_helper(MessageHelper())
def mk_msg(self, go_metadata=None, optout_metadata=None):
helper_metadata = {}
if go_metadata is not None:
helper_metadata['go'] = go_metadata
if optout_metadata is not None:
helper_metadata['optout'] = optout_metadata
return self.msg_helper.make_inbound(
"hi", helper_metadata=helper_metadata)
def mk_md(self, message=None, go_metadata=None, optout_metadata=None):
if message is None:
message = self.mk_msg(go_metadata, optout_metadata)
return MessageMetadataDictHelper(message['helper_metadata'])
def test_is_sensitive(self):
md = self.mk_md()
self.assertFalse(md.is_sensitive())
md = self.mk_md(go_metadata={'sensitive': True})
self.assertTrue(md.is_sensitive())
def test_has_user_account(self):
md = self.mk_md()
self.assertFalse(md.has_user_account())
md = self.mk_md(go_metadata={'user_account': 'user-1'})
self.assertTrue(md.has_user_account())
def test_get_account_key(self):
md = self.mk_md()
self.assertRaises(KeyError, md.get_account_key)
md = self.mk_md(go_metadata={'user_account': 'user-1'})
self.assertEqual(md.get_account_key(), 'user-1')
def test_get_conversation_key(self):
md = self.mk_md()
self.assertRaises(KeyError, md.get_conversation_key)
md = self.mk_md(go_metadata={'conversation_key': 'conv-1'})
self.assertEqual(md.get_conversation_key(), 'conv-1')
def test_get_conversation_info(self):
md = self.mk_md()
self.assertEqual(md.get_conversation_info(), None)
md = self.mk_md(go_metadata={'user_account': 'user-1'})
self.assertEqual(md.get_conversation_info(), None)
md = self.mk_md(go_metadata={
'user_account': 'user-1',
'conversation_type': 'dummy',
})
self.assertEqual(md.get_conversation_info(), None)
md = self.mk_md(go_metadata={
'user_account': 'user-1',
'conversation_type': 'dummy',
'conversation_key': 'conv-1',
})
self.assertEqual(md.get_conversation_info(), {
'user_account': 'user-1',
'conversation_type': 'dummy',
'conversation_key': 'conv-1',
})

    def test_set_conversation_info(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_conversation_info('dummy', 'conv-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'conversation_type': 'dummy',
            'conversation_key': 'conv-1',
        })

    def test_set_user_account(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_user_account('user-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'user_account': 'user-1',
        })

    def test_get_router_key(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_router_key)
        md = self.mk_md(go_metadata={'router_key': 'router-1'})
        self.assertEqual(md.get_router_key(), 'router-1')

    def test_get_router_info(self):
        md = self.mk_md()
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'router_type': 'dummy',
        })
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'router_type': 'dummy',
            'router_key': 'router-1',
        })
        self.assertEqual(md.get_router_info(), {
            'user_account': 'user-1',
            'router_type': 'dummy',
            'router_key': 'router-1',
        })

    def test_set_router_info(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_router_info('dummy', 'router-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'router_type': 'dummy',
            'router_key': 'router-1',
        })

    @inlineCallbacks
    def test_add_conversation_metadata(self):
        md = self.mk_md()
        self.assertEqual(md._go_metadata, {})
        vumi_helper = yield self.add_helper(VumiApiHelper())
        user_helper = yield vumi_helper.make_user(u'user')
        conv = yield user_helper.create_conversation(u'bulk_message')
        md.add_conversation_metadata(conv)
        self.assertEqual(md._go_metadata, {
            'user_account': user_helper.account_key,
            'conversation_type': conv.conversation_type,
            'conversation_key': conv.key,
        })

    @inlineCallbacks
    def test_add_router_metadata(self):
        md = self.mk_md()
        self.assertEqual(md._go_metadata, {})
        vumi_helper = yield self.add_helper(VumiApiHelper())
        user_helper = yield vumi_helper.make_user(u'user')
        router = yield user_helper.create_router(u'keyword')
        md.add_router_metadata(router)
        self.assertEqual(md._go_metadata, {
            'user_account': user_helper.account_key,
            'router_type': router.router_type,
            'router_key': router.key,
        })


class TestMessageMetadataHelper(VumiTestCase):

    @inlineCallbacks
    def setUp(self):
        self.vumi_helper = yield self.add_helper(VumiApiHelper())
        self.user_helper = yield self.vumi_helper.make_user(u'user')
        self.msg_helper = self.add_helper(MessageHelper())

    def mk_msg(self, go_metadata=None, optout_metadata=None):
        helper_metadata = {}
        if go_metadata is not None:
            helper_metadata['go'] = go_metadata
        if optout_metadata is not None:
            helper_metadata['optout'] = optout_metadata
        return self.msg_helper.make_inbound(
            "hi", helper_metadata=helper_metadata)

    def mk_md(self, message=None, go_metadata=None, optout_metadata=None):
        if message is None:
            message = self.mk_msg(go_metadata, optout_metadata)
        return MessageMetadataHelper(self.vumi_helper.get_vumi_api(), message)

    def test_is_sensitive(self):
        md = self.mk_md()
        self.assertFalse(md.is_sensitive())
        md = self.mk_md(go_metadata={'sensitive': True})
        self.assertTrue(md.is_sensitive())

    def test_has_user_account(self):
        md = self.mk_md()
        self.assertFalse(md.has_user_account())
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertTrue(md.has_user_account())

    def test_get_account_key(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_account_key)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertEqual(md.get_account_key(), 'user-1')

    def test_get_user_api(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_user_api)
        md = self.mk_md(
            go_metadata={'user_account': self.user_helper.account_key})
        user_api = md.get_user_api()
        self.assertEqual(
            user_api.user_account_key, self.user_helper.account_key)

    def test_get_conversation_key(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_conversation_key)
        md = self.mk_md(go_metadata={'conversation_key': 'conv-1'})
        self.assertEqual(md.get_conversation_key(), 'conv-1')

    @inlineCallbacks
    def test_get_conversation(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_conversation)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertRaises(KeyError, md.get_conversation)
        conversation = yield self.user_helper.create_conversation(
            u'bulk_message')
        md = self.mk_md(go_metadata={
            'user_account': self.user_helper.account_key,
            'conversation_key': conversation.key,
        })
        md_conv = yield md.get_conversation()
        self.assertEqual(md_conv.key, conversation.key)

    @inlineCallbacks
    def test_clear_object_cache(self):
        conversation = yield self.user_helper.create_conversation(
            u'bulk_message')
        md = self.mk_md(go_metadata={
            'user_account': conversation.user_account.key,
            'conversation_key': conversation.key,
        })
        md.set_tag(["pool", "tag"])
        self.assertEqual(md._store_objects, {})
        md_conv = yield md.get_conversation()
        tag_info = yield md.get_tag_info()
        self.assertEqual(md._store_objects, {
            'conversation': md_conv,
            'tag_info': tag_info,
        })
        md.clear_object_cache()
        self.assertEqual(md._store_objects, {})

    @inlineCallbacks
    def test_conversation_caching(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_conversation)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertRaises(KeyError, md.get_conversation)
        conversation = yield self.user_helper.create_conversation(
            u'bulk_message')
        md = self.mk_md(go_metadata={
            'user_account': conversation.user_account.key,
            'conversation_key': conversation.key,
        })
        md_conv = yield md.get_conversation()
        self.assertEqual(md_conv.key, conversation.key)
        self.assertEqual(md_conv.status, conversation.status)
        # Modify the conversation and get it from md again, making sure we
        # still have cached data.
        conversation.set_status_starting()
        yield conversation.save()
        md_conv2 = yield md.get_conversation()
        self.assertIdentical(md_conv, md_conv2)
        self.assertNotEqual(md_conv2.status, conversation.status)
        # Clear the stored object cache and get the conversation from md
        # again, making sure we have new data now.
        md.clear_object_cache()
        md_conv3 = yield md.get_conversation()
        self.assertEqual(md_conv3.key, conversation.key)
        self.assertEqual(md_conv3.status, conversation.status)
        self.assertNotIdentical(md_conv, md_conv3)

    def test_get_conversation_info(self):
        md = self.mk_md()
        self.assertEqual(md.get_conversation_info(), None)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertEqual(md.get_conversation_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'conversation_type': 'dummy',
        })
        self.assertEqual(md.get_conversation_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'conversation_type': 'dummy',
            'conversation_key': 'conv-1',
        })
        self.assertEqual(md.get_conversation_info(), {
            'user_account': 'user-1',
            'conversation_type': 'dummy',
            'conversation_key': 'conv-1',
        })

    def test_set_conversation_info(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_conversation_info('dummy', 'conv-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'conversation_type': 'dummy',
            'conversation_key': 'conv-1',
        })

    def test_set_user_account(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_user_account('user-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'user_account': 'user-1',
        })

    def test_is_optout_message(self):
        md = self.mk_md()
        self.assertFalse(md.is_optout_message())
        md = self.mk_md(optout_metadata={"optout": True})
        self.assertTrue(md.is_optout_message())

    def test_get_router_key(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_router_key)
        md = self.mk_md(go_metadata={'router_key': 'router-1'})
        self.assertEqual(md.get_router_key(), 'router-1')

    @inlineCallbacks
    def test_get_router(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_router)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertRaises(KeyError, md.get_router)
        router = yield self.user_helper.create_router(u'keyword')
        md = self.mk_md(go_metadata={
            'user_account': self.user_helper.account_key,
            'router_key': router.key,
        })
        md_router = yield md.get_router()
        self.assertEqual(md_router.key, router.key)

    @inlineCallbacks
    def test_router_caching(self):
        md = self.mk_md()
        self.assertRaises(KeyError, md.get_router)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertRaises(KeyError, md.get_router)
        router = yield self.user_helper.create_router(u'keyword')
        md = self.mk_md(go_metadata={
            'user_account': router.user_account.key,
            'router_key': router.key,
        })
        md_router = yield md.get_router()
        self.assertEqual(md_router.key, router.key)
        self.assertEqual(md_router.status, router.status)
        # Modify the router and get it from md again, making sure we
        # still have cached data.
        router.set_status_starting()
        yield router.save()
        md_router2 = yield md.get_router()
        self.assertIdentical(md_router, md_router2)
        self.assertNotEqual(md_router2.status, router.status)
        # Clear the stored object cache and get the router from md again,
        # making sure we have new data now.
        md.clear_object_cache()
        md_router3 = yield md.get_router()
        self.assertEqual(md_router3.key, router.key)
        self.assertEqual(md_router3.status, router.status)
        self.assertNotIdentical(md_router, md_router3)

    def test_get_router_info(self):
        md = self.mk_md()
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={'user_account': 'user-1'})
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'router_type': 'dummy',
        })
        self.assertEqual(md.get_router_info(), None)
        md = self.mk_md(go_metadata={
            'user_account': 'user-1',
            'router_type': 'dummy',
            'router_key': 'router-1',
        })
        self.assertEqual(md.get_router_info(), {
            'user_account': 'user-1',
            'router_type': 'dummy',
            'router_key': 'router-1',
        })

    def test_set_router_info(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_router_info('dummy', 'router-1')
        self.assertEqual(msg['helper_metadata']['go'], {
            'router_type': 'dummy',
            'router_key': 'router-1',
        })

    def test_set_tag(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        md.set_tag(["pool", "tagname"])
        self.assertEqual(msg['helper_metadata']['tag'], {
            'tag': ["pool", "tagname"],
        })

    def test_rewrap(self):
        msg = self.mk_msg()
        md = self.mk_md(msg)
        # The metadata wrapper creates the 'go' metadata.
        self.assertEqual(msg['helper_metadata']['go'], {})
        # We create a new wrapper around the same message object and make sure
        # the cached message store objects are still there in the new one.
        new_md = self.mk_md(msg)
        self.assertNotEqual(md, new_md)
        self.assertIdentical(md._store_objects, new_md._store_objects)
        self.assertIdentical(md._go_metadata, new_md._go_metadata)
        # We create a new wrapper around a copy of the message object and
        # make sure the message store object cache is empty, but the metadata
        # remains.
        other_md = self.mk_md(msg.copy())
        self.assertNotIdentical(md, other_md)
        self.assertEqual({}, other_md._store_objects)
        self.assertEqual(md._go_metadata, other_md._go_metadata)

    def test_get_tag_info_no_tag(self):
        md = self.mk_md()
        self.assertEqual(None, md.tag)
        self.assertRaises(ValueError, md.get_tag_info)

    def test_get_tagpool_metadata_no_tag(self):
        md = self.mk_md()
        self.assertEqual(None, md.tag)
        self.assertRaises(ValueError, md.get_tagpool_metadata)

    @inlineCallbacks
    def test_get_tag_info(self):
        md = self.mk_md()
        md.set_tag(["pool", "tagname"])
        tag_info = yield md.get_tag_info()
        self.assertEqual(("pool", "tagname"), tag_info.tag)

    @inlineCallbacks
    def test_tag_info_caching(self):
        md = self.mk_md()
        md.set_tag(["pool", "tagname"])
        self.assertEqual({}, md._store_objects)
        tag_info = yield md.get_tag_info()
        self.assertEqual(("pool", "tagname"), tag_info.tag)
        self.assertEqual({'tag_info': tag_info}, md._store_objects)
        # Stash a fake thing in the cache to make sure that what we get is
        # actually the thing in the cache.
        md._store_objects['tag_info'] = "I am the cached tag_info"
        cached_tag_info = yield md.get_tag_info()
        self.assertEqual(cached_tag_info, "I am the cached tag_info")

    @inlineCallbacks
    def test_get_tagpool_metadata(self):
        yield self.vumi_helper.setup_tagpool("pool", ["tagname"], metadata={
            "foo": "bar",
        })
        md = self.mk_md()
        md.set_tag(["pool", "tagname"])
        tagpool_metadata = yield md.get_tagpool_metadata()
        self.assertEqual({"foo": "bar"}, tagpool_metadata)

    @inlineCallbacks
    def test_tagpool_metadata_caching(self):
        yield self.vumi_helper.setup_tagpool("pool", ["tagname"], metadata={
            "foo": "bar",
        })
        md = self.mk_md()
        md.set_tag(["pool", "tagname"])
        self.assertEqual({}, md._store_objects)
        tagpool_metadata = yield md.get_tagpool_metadata()
        self.assertEqual({"foo": "bar"}, tagpool_metadata)
        self.assertEqual(
            {'tagpool_metadata': tagpool_metadata}, md._store_objects)
        # Stash a fake thing in the cache to make sure that what we get is
        # actually the thing in the cache.
        md._store_objects['tagpool_metadata'] = "I am the cached metadata"
        cached_tagpool_metadata = yield md.get_tagpool_metadata()
        self.assertEqual(cached_tagpool_metadata, "I am the cached metadata")
# ==== File: tests/benchmarks/unpack_sequence.py | repo: jacebrowning/voc | license: BSD-3-Clause ====
#!/usr/bin/env python
"""Microbenchmark for Python's sequence unpacking."""
import time


def do_unpacking(iterations, to_unpack):
    times = []
    for _ in range(iterations):
        t0 = time.time()
        # Should be 400 unpackings, but MethodCodeTooLarge
        # TODO Look into why this code is so big
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack

        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
        a, b, c, d, e, f, g, h, i, j = to_unpack
"""
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
a, b, c, d, e, f, g, h, i, j = to_unpack
"""
        t1 = time.time()
        times.append(t1 - t0)
    return times
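
The TODO above asks why the generated code is so big. One way to avoid writing the repeated statement out by hand is to build and pre-compile it once; this is a sketch only (not what the benchmark actually does), and routing through `exec` adds its own dispatch cost, so treat the numbers with care:

```python
import time


def do_unpacking_generated(iterations, to_unpack, repeats=140):
    # Build the repeated unpacking statement once and pre-compile it so the
    # timed region executes the same bytecode pattern on every iteration.
    code = compile("a, b, c, d, e, f, g, h, i, j = to_unpack\n" * repeats,
                   "<unpack-sketch>", "exec")
    times = []
    for _ in range(iterations):
        namespace = {"to_unpack": to_unpack}
        t0 = time.time()
        exec(code, namespace)
        times.append(time.time() - t0)
    return times
```

The default of 140 repeats matches the hand-written block above; the full 400 would be a single `repeats=400` change here rather than 260 more source lines.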


def test_tuple_unpacking(iterations):
    x = tuple(range(10))
    return do_unpacking(iterations, x)


def test_list_unpacking(iterations):
    # range() is lazy in Python 3; materialise it so this really measures
    # list unpacking rather than range unpacking.
    x = list(range(10))
    return do_unpacking(iterations, x)


def test_all(iterations):
    tuple_data = test_tuple_unpacking(iterations)
    list_data = test_list_unpacking(iterations)
    return [x + y for (x, y) in zip(tuple_data, list_data)]


if __name__ == "__main__":
    import sys

    loops = int(sys.argv[1])
    times = test_all(loops)
    print("Time elapsed: " + str(sum(times)) + " sec")
# ==== File: MiddleKit/Design/MySQLPythonGenerator.py | repo: WebwareForPython/w4py | license: MIT ====
from SQLPythonGenerator import SQLPythonGenerator


class MySQLPythonGenerator(SQLPythonGenerator):
    pass
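
An empty subclass like this usually exists so a framework can select a generator class per database by naming convention. The following is a hypothetical illustration of that lookup pattern; none of these helper names are from the real MiddleKit API:

```python
class SQLPythonGenerator:
    """Stand-in for the real base class imported above (hypothetical)."""


class MySQLPythonGenerator(SQLPythonGenerator):
    pass


def generator_class_for(database, namespace):
    # Resolve "<Database>PythonGenerator" by naming convention, falling
    # back to the generic SQL generator when no specialisation exists.
    return namespace.get(database + "PythonGenerator", SQLPythonGenerator)
```

With this convention, adding support for a new database only requires defining the suitably named subclass; no registry code changes.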
# ==== File: tests/parser/true_negation.1.test.py | repo: veltri/DLV2 | license: Apache-2.0 ====
input = """
true | -a :- not b.
true | b :- not -a.
-true.
"""
output = """
true | -a :- not b.
true | b :- not -a.
-true.
"""
# ==== File: examples/simple.py | repo: nitred/imdb-wiki-dataset | license: MIT ====
"""Basic functionality."""
import imdb_wiki_dataset
print(imdb_wiki_dataset.__version__)
| 18 | 36 | 0.822222 | 11 | 90 | 6 | 0.727273 | 0.242424 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 90 | 4 | 37 | 22.5 | 0.785714 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
2f57ae488f95d3c86d7881d335bfe2557e0359d9 | 1,148 | py | Python | src/models/chainlink.py | Dragonfly-Capital/oracles.club.server | 092dc1e6d205ceb475cd65f9b1c3e4aa6ef588dd | [
"MIT"
] | 7 | 2020-04-28T02:17:51.000Z | 2020-09-23T17:39:38.000Z | src/models/chainlink.py | Dragonfly-Capital/oracles.club.server | 092dc1e6d205ceb475cd65f9b1c3e4aa6ef588dd | [
"MIT"
] | 1 | 2020-08-10T19:39:12.000Z | 2020-08-10T19:39:12.000Z | src/models/chainlink.py | Dragonfly-Capital/oracles.club.server | 092dc1e6d205ceb475cd65f9b1c3e4aa6ef588dd | [
"MIT"
] | 2 | 2020-05-10T09:39:47.000Z | 2020-07-27T18:12:23.000Z | from .create_db import db
class ChainlinkETH(db.Model):
__tablename__ = 'chainlink'
id = db.Column('id', db.Integer, primary_key=True)
blocknumber = db.Column('blocknumber', db.Integer)
timestamp = db.Column('timestamp', db.Integer)
price = db.Column('price', db.Float)
def __repr__(self):
return '{}, {}, {}'.format(self.blocknumber, self.timestamp, self.price)
class ChainlinkBTC(db.Model):
__tablename__ = 'chainlinkbtc'
id = db.Column('id', db.Integer, primary_key=True)
blocknumber = db.Column('blocknumber', db.Integer)
timestamp = db.Column('timestamp', db.Integer)
price = db.Column('price', db.Float)
def __repr__(self):
return '{}, {}, {}'.format(self.blocknumber, self.timestamp, self.price)
class ChainlinkBAT(db.Model):
__tablename__ = 'chainlinkbat'
id = db.Column('id', db.Integer, primary_key=True)
blocknumber = db.Column('blocknumber', db.Integer)
timestamp = db.Column('timestamp', db.Integer)
price = db.Column('price', db.Float)
def __repr__(self):
return '{}, {}, {}'.format(self.blocknumber, self.timestamp, self.price)
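The three model classes above differ only in `__tablename__`; the shared `__repr__` could live on a small base class. A dependency-free sketch of that idea (plain Python classes with illustrative names, since `db.Model` requires a Flask-SQLAlchemy app context):

```python
class PriceRecordBase(object):
    """Shared __repr__ for records carrying blocknumber/timestamp/price."""

    def __repr__(self):
        return '{}, {}, {}'.format(self.blocknumber, self.timestamp, self.price)


class EthRecord(PriceRecordBase):
    def __init__(self, blocknumber, timestamp, price):
        self.blocknumber = blocknumber
        self.timestamp = timestamp
        self.price = price


print(repr(EthRecord(123, 456, 7.8)))  # -> 123, 456, 7.8
```

With SQLAlchemy, the same effect is usually achieved with a declarative mixin, but the repetition of the `__repr__` body is the part this sketch removes.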
| 32.8 | 80 | 0.663763 | 137 | 1,148 | 5.357664 | 0.189781 | 0.13079 | 0.065395 | 0.049046 | 0.80654 | 0.80654 | 0.80654 | 0.80654 | 0.80654 | 0.80654 | 0 | 0 | 0.173345 | 1,148 | 34 | 81 | 33.764706 | 0.773446 | 0 | 0 | 0.72 | 0 | 0 | 0.125436 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.04 | 0.12 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
2f828a8122993bb35a5603c99b290b1d88a36bbe | 198 | py | Python | books/admin.py | adilmohak/django_book_sharing | 6d47cb131524dc761becb7d432b7cc75064c4f58 | [
"MIT"
] | 13 | 2021-03-26T05:39:58.000Z | 2021-10-13T22:03:46.000Z | books/admin.py | adilmohak/django_book_sharing | 6d47cb131524dc761becb7d432b7cc75064c4f58 | [
"MIT"
] | 1 | 2021-03-26T05:42:47.000Z | 2021-04-24T17:33:26.000Z | books/admin.py | adilmohak/django_book_sharing | 6d47cb131524dc761becb7d432b7cc75064c4f58 | [
"MIT"
] | 2 | 2021-03-26T05:54:59.000Z | 2021-03-26T09:03:46.000Z | from django.contrib import admin
from .models import Book, Genres, Review, Booklist
admin.site.register(Book)
admin.site.register(Booklist)
admin.site.register(Genres)
admin.site.register(Review)
| 22 | 50 | 0.808081 | 28 | 198 | 5.714286 | 0.428571 | 0.225 | 0.425 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085859 | 198 | 8 | 51 | 24.75 | 0.883978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
85f13528fb29dd37cf4ff04ccd41d5c434f96cd4 | 181 | py | Python | app/utils.py | arezi/invest-monitor | d6dda558be0f13b731a3127ead60695f04fcdf16 | [
"MIT"
] | null | null | null | app/utils.py | arezi/invest-monitor | d6dda558be0f13b731a3127ead60695f04fcdf16 | [
"MIT"
] | null | null | null | app/utils.py | arezi/invest-monitor | d6dda558be0f13b731a3127ead60695f04fcdf16 | [
"MIT"
] | null | null | null | import os
def get_app_base_path():
return os.path.dirname(os.path.realpath(__file__))
#def get_static_folder_path():
# return os.path.join(get_app_base_path(), "static")
| 18.1 | 55 | 0.734807 | 29 | 181 | 4.137931 | 0.482759 | 0.15 | 0.166667 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127072 | 181 | 9 | 56 | 20.111111 | 0.759494 | 0.458564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
c807b5f4d75139d7e7769488aa2248c239b523b9 | 15,687 | py | Python | manila/tests/share/drivers/infortrend/fake_infortrend_manila_data.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 159 | 2015-01-02T09:35:15.000Z | 2022-01-04T11:51:34.000Z | manila/tests/share/drivers/infortrend/fake_infortrend_manila_data.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 6 | 2021-02-11T16:09:43.000Z | 2022-03-15T09:56:25.000Z | manila/tests/share/drivers/infortrend/fake_infortrend_manila_data.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 128 | 2015-01-05T22:52:28.000Z | 2021-12-29T14:00:58.000Z | # Copyright (c) 2019 Infortrend Technology, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class InfortrendManilaTestData(object):
fake_share_id = ['4d6984fd-8572-4467-964f-24936a8c4ea2', # NFS
'a7b933e6-bb77-4823-a86f-f2c3ab41a8a5'] # CIFS
fake_id = ['iftt8862-2226-0126-7610-chengweichou',
'987c8763-3333-4444-5555-666666666666']
fake_share_nfs = {
'share_id': fake_share_id[0],
'availability_zone': 'nova',
'terminated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)',
'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3',
'updated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)',
'share_network_id': None,
'export_locations': [],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': '5a0aa06e-1c57-4996-be46-b81e360e8866',
'size': 30,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': '172.27.112.223:/share-pool-01/LV-1/' +
fake_share_id[0],
'display_description': None,
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': 'datetime.datetime(2017, 5, 8, 8, 23, 33)',
'scheduled_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)',
'status': 'deleting',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'nfs-01',
'name': 'share-5a0aa06e-1c57-4996-be46-b81e360e8866',
'created_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)',
'share_proto': 'NFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
fake_share_cifs = {
'share_id': fake_share_id[1],
'availability_zone': 'nova',
'terminated_at': None,
'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3',
'updated_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'share_network_id': None,
'export_locations': [],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': 'aac4fe64-7a9c-472a-b156-9adbb50b4d29',
'size': 50,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': None,
'display_description': None,
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': None,
'scheduled_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'status': 'creating',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'cifs-01',
'name': 'share-aac4fe64-7a9c-472a-b156-9adbb50b4d29',
'created_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'share_proto': 'CIFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
fake_share_cifs_no_host = {
'share_id': fake_share_id[1],
'availability_zone': 'nova',
'terminated_at': None,
'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3',
'updated_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'share_network_id': None,
'export_locations': [],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': 'aac4fe64-7a9c-472a-b156-9adbb50b4d29',
'size': 50,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': None,
'display_description': None,
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': None,
'scheduled_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'status': 'creating',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': '',
'access_rules_status': 'active',
'display_name': 'cifs-01',
'name': 'share-aac4fe64-7a9c-472a-b156-9adbb50b4d29',
'created_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)',
'share_proto': 'CIFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
fake_non_exist_share = {
'share_id': fake_id[0],
'availability_zone': 'nova',
'terminated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)',
'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3',
'updated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)',
'share_network_id': None,
'export_locations': [],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': fake_id[1],
'size': 30,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': '172.27.112.223:/share-pool-01/LV-1/' +
fake_id[0],
'display_description': None,
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': 'datetime.datetime(2017, 5, 8, 8, 23, 33)',
'scheduled_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)',
'status': 'available',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'nfs-01',
'name': 'share-5a0aa06e-1c57-4996-be46-b81e360e8866',
'created_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)',
'share_proto': 'NFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
fake_access_rules_nfs = [{
'share_id': fake_share_id[0],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 8, 41, 21)',
'updated_at': None,
'access_type': 'ip',
'access_to': '172.27.1.1',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': 'fa60b50f-1428-44a2-9931-7e31f0c5b033'}, {
'share_id': fake_share_id[0],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 8, 45, 37)',
'updated_at': None,
'access_type': 'ip',
'access_to': '172.27.1.2',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': '9bcdd5e6-11c7-4f8f-939c-84fa2f3334bc'
}]
fake_rule_ip_1 = [{
'share_id': fake_share_id[0],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 8, 41, 21)',
'updated_at': None,
'access_type': 'ip',
'access_to': '172.27.1.1',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': 'fa60b50f-1428-44a2-9931-7e31f0c5b033'
}]
fake_rule_ip_2 = [{
'share_id': fake_share_id[0],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 8, 45, 37)',
'updated_at': None,
'access_type': 'ip',
'access_to': '172.27.1.2',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': '9bcdd5e6-11c7-4f8f-939c-84fa2f3334bc'
}]
fake_access_rules_cifs = [{
'share_id': fake_share_id[1],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)',
'updated_at': None,
'access_type': 'user',
'access_to': 'user02',
'access_level': 'ro',
'instance_mappings': [],
'deleted_at': None,
'id': '6e8bc969-51c9-4bbb-8e8b-020dc5fec81e'}, {
'share_id': fake_share_id[1],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 9, 38, 59)',
'updated_at': None,
'access_type': 'user',
'access_to': 'user01',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': '0cd9926d-fac4-4122-a523-538e98752e78'
}]
fake_rule_user01 = [{
'share_id': fake_share_id[1],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 9, 38, 59)',
'updated_at': None,
'access_type': 'user',
'access_to': 'user01',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': '0cd9926d-fac4-4122-a523-538e98752e78'
}]
fake_rule_user02 = [{
'share_id': fake_share_id[1],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)',
'updated_at': None,
'access_type': 'user',
'access_to': 'user02',
'access_level': 'ro',
'instance_mappings': [],
'deleted_at': None,
'id': '6e8bc969-51c9-4bbb-8e8b-020dc5fec81e'
}]
fake_rule_user03 = [{
'share_id': fake_id[0],
'deleted': 'False',
'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)',
'updated_at': None,
'access_type': 'user',
'access_to': 'user03',
'access_level': 'rw',
'instance_mappings': [],
'deleted_at': None,
'id': fake_id[1]
}]
fake_share_for_manage_nfs = {
'share_id': '419ab73c-c0fc-4e73-b56a-70756e0b6d27',
'availability_zone': None,
'terminated_at': None,
'availability_zone_id': None,
'updated_at': None,
'share_network_id': None,
'export_locations': [{
'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0be',
'deleted': 0,
'created_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)',
'updated_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)',
'is_admin_only': False,
'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192',
'path': '172.27.112.223:/share-pool-01/LV-1/test-folder',
'el_metadata': {},
'deleted_at': None,
'id': 83
}],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': '615ac1ed-e808-40b5-8d7b-87018c6f66eb',
'size': None,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': '172.27.112.223:/share-pool-01/LV-1/test-folder',
'display_description': '',
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': None,
'scheduled_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)',
'status': 'manage_starting',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'test-manage',
'name': 'share-615ac1ed-e808-40b5-8d7b-87018c6f66eb',
'created_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)',
'share_proto': 'NFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
def _get_fake_share_for_manage(self, location=''):
return {
'share_id': '419ab73c-c0fc-4e73-b56a-70756e0b6d27',
'availability_zone': None,
'terminated_at': None,
'availability_zone_id': None,
'updated_at': None,
'share_network_id': None,
'export_locations': [{
'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0be',
'deleted': 0,
'created_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)',
'updated_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)',
'is_admin_only': False,
'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192',
'path': location,
'el_metadata': {},
'deleted_at': None,
'id': 83
}],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': '615ac1ed-e808-40b5-8d7b-87018c6f66eb',
'size': None,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': location,
'display_description': '',
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': None,
'scheduled_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)',
'status': 'manage_starting',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'test-manage',
'name': 'share-615ac1ed-e808-40b5-8d7b-87018c6f66eb',
'created_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)',
'share_proto': 'NFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
fake_share_for_manage_cifs = {
'share_id': '3a1222d3-c981-490a-9390-4d560ced68eb',
'availability_zone': None,
'terminated_at': None,
'availability_zone_id': None,
'updated_at': None,
'share_network_id': None,
'export_locations': [{
'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0de',
'deleted': 0,
'created_at': 'datetime.datetime(2017, 5, 11, 10, 10, 3)',
'updated_at': 'datetime.datetime(2017, 5, 11, 10, 10, 3)',
'is_admin_only': False,
'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192',
'path': '\\\\172.27.113.209\\test-folder-02',
'el_metadata': {},
'deleted_at': None,
'id': 87
}],
'share_server_id': None,
'snapshot_id': None,
'deleted_at': None,
'id': 'd156baf7-5422-4c9b-8c78-ee7943d000ec',
'size': None,
'replica_state': None,
'user_id': '4944594433f0405588928a4212964658',
'export_location': '\\\\172.27.113.209\\test-folder-02',
'display_description': '',
'consistency_group_id': None,
'project_id': '0e63326c50a246ac81fa1a0c8e003d5b',
'launched_at': None,
'scheduled_at': 'datetime.datetime(2017, 5, 11, 3, 7, 59)',
'status': 'manage_starting',
'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f',
'deleted': 'False',
'host': 'compute@ift-manila#share-pool-01',
'access_rules_status': 'active',
'display_name': 'test-manage-02',
'name': 'share-d156baf7-5422-4c9b-8c78-ee7943d000ec',
'created_at': 'datetime.datetime(2017, 5, 11, 3, 7, 59)',
'share_proto': 'CIFS',
'is_public': False,
'source_cgsnapshot_member_id': None
}
| 38.354523 | 78 | 0.56601 | 1,741 | 15,687 | 4.867892 | 0.160253 | 0.029027 | 0.078584 | 0.096047 | 0.889676 | 0.875752 | 0.872802 | 0.862183 | 0.860413 | 0.84413 | 0 | 0.162787 | 0.274367 | 15,687 | 408 | 79 | 38.448529 | 0.581745 | 0.039906 | 0 | 0.851064 | 0 | 0.005319 | 0.54221 | 0.237437 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00266 | false | 0 | 0 | 0.00266 | 0.047872 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c80fef0ac54b4f7eaf4ce042cb6f476835195b2f | 193 | py | Python | test/convert_meshes.py | mskim99/Pix2Vox_modify | 0cc28e2c9a4a86c25e570317d2dd296bb8565ff7 | [
"MIT"
] | null | null | null | test/convert_meshes.py | mskim99/Pix2Vox_modify | 0cc28e2c9a4a86c25e570317d2dd296bb8565ff7 | [
"MIT"
] | null | null | null | test/convert_meshes.py | mskim99/Pix2Vox_modify | 0cc28e2c9a4a86c25e570317d2dd296bb8565ff7 | [
"MIT"
] | null | null | null | import meshio
mesh = meshio.read('I:/Program/Pix2Vox-master/voxel_log/voxel_process/gv_mha_000000_up.vtu')
mesh.write('I:/Program/Pix2Vox-master/voxel_log/voxel_process/gv_mha_000000_up.obj')
c81b046a72fff213eb2d2b705f4e5dbc330d712f | 5,909 | py | Python | hacker_earth/python_problems/tictactoe.py | Faraaz54/python_training_problems | 24c7b42daaf54366759e1d7c4b42f9936316e94b | [
"MIT"
] | null | null | null | hacker_earth/python_problems/tictactoe.py | Faraaz54/python_training_problems | 24c7b42daaf54366759e1d7c4b42f9936316e94b | [
"MIT"
] | null | null | null | hacker_earth/python_problems/tictactoe.py | Faraaz54/python_training_problems | 24c7b42daaf54366759e1d7c4b42f9936316e94b | [
"MIT"
] | null | null | null |
for _ in xrange(int(raw_input())):
board = []
count = 0
pos = []
winnable = 'NO'
for i in range(4):
        rows = [ch for ch in raw_input()]  # use a distinct name so the outer loop variable i is not shadowed
board.append(rows)
for row in board:
print " ".join(row)
for row in board:
for place in row:
if place == 'x' or place == 'o':
count += 1
if count == 0:
print winnable
if count % 2 == 0:
bat_piece = 'x'
else:
bat_piece = 'o'
for row in range(0,4):
for col in range(0,4):
if board[row][col] == bat_piece:
pos.append((row, col))
print pos
for positions in pos:
row_1, col_1 = positions
if winnable == 'YES':
break
#right_row
if col_1 + 1 in range(0,4) and col_1 + 2 in range(0,4):
if board[row_1][col_1 + 1] == bat_piece and board[row_1][col_1 + 2] == '.':
winnable = 'YES'
elif board[row_1][col_1 + 1] == '.' and board[row_1][col_1 + 2] == bat_piece:
winnable = 'YES'
#left_row
        if col_1 - 1 in range(0,4) and col_1 - 2 in range(0,4):
if board[row_1][col_1 - 1] == bat_piece and board[row_1][col_1 - 2] == '.':
winnable = 'YES'
elif board[row_1][col_1 - 1] == '.' and board[row_1][col_1 - 2] == bat_piece:
winnable = 'YES'
#below
        if row_1 + 1 in range(0,4) and row_1 + 2 in range(0,4):
if board[row_1 + 1][col_1] == bat_piece and board[row_1 + 2][col_1] == '.':
winnable = 'YES'
elif board[row_1 + 1][col_1] == '.' and board[row_1 + 2][col_1] == bat_piece:
winnable = 'YES'
#above
        if row_1 - 1 in range(0,4) and row_1 - 2 in range(0,4):
if board[row_1 - 1][col_1] == bat_piece and board[row_1 - 2][col_1] == '.':
winnable = 'YES'
if board[row_1 - 1][col_1] == '.' and board[row_1 - 2][col_1] == bat_piece:
winnable = 'YES'
#left_diagonal
        if row_1 - 1 in range(0,4) and col_1 - 1 in range(0,4) and row_1 + 1 in range(0,4) and col_1 + 1 in range(0,4):
if board[row_1 - 1][col_1 - 1] == bat_piece and board[row_1 + 1][col_1 + 1] == '.':
winnable = 'YES'
elif board[row_1 - 1][col_1 - 1] == '.' and board[row_1 + 1][col_1 + 1] == bat_piece:
winnable = 'YES'
#left_diagonal
        if row_1 - 1 in range(0, 4) and col_1 - 1 in range(0, 4):
if row_1 - 2 in range(0, 4) and col_1 - 2 in range(0, 4):
if board[row_1 - 1][col_1 - 1] == bat_piece and board[row_1 - 2][col_1 -2] == '.':
winnable = 'YES'
elif board[row_1 - 1][col_1 - 1] == '.' and board[row_1 - 2][col_1 - 2] == bat_piece:
winnable = 'YES'
#right_diagonal
        if row_1 - 1 in range(0,4) and col_1 + 1 in range(0,4) and row_1 + 1 in range(0,4) and col_1 - 1 in range(0,4):
if board[row_1 - 1][col_1 + 1] == bat_piece and board[row_1 + 1][col_1 - 1] == '.':
winnable = 'YES'
elif board[row_1 - 1][col_1 + 1] == '.' and board[row_1 + 1][col_1 - 1] == bat_piece:
winnable = 'YES'
#right_diagonal
        if row_1 - 1 in range(0, 4) and col_1 + 1 in range(0, 4):
if row_1 - 2 in range(0, 4) and col_1 + 2 in range(0, 4):
if board[row_1 - 1][col_1 + 1] == bat_piece and board[row_1 - 2][col_1 + 2] == '.':
winnable = 'YES'
elif board[row_1 - 1][col_1 + 1] == '.' and board[row_1 - 2][col_1 + 2] == bat_piece:
winnable = 'YES'
#left_below_diagonal
        if row_1 + 1 in range(0,4) and col_1 - 1 in range(0,4) and row_1 - 1 in range(0,4) and col_1 + 1 in range(0,4):
if board[row_1 + 1][col_1 - 1] == bat_piece and board[row_1 - 1][col_1 + 1] == '.':
winnable = 'YES'
elif board[row_1 + 1][col_1 - 1] == '.' and board[row_1 - 1][col_1 + 1] == bat_piece:
winnable = 'YES'
#left_below_diagonal
        if row_1 + 1 in range(0, 4) and col_1 - 1 in range(0, 4):
if row_1 + 2 in range(0, 4) and col_1 - 2 in range(0, 4):
if board[row_1 + 1][col_1 - 1] == bat_piece and board[row_1 + 2][col_1 - 2] == '.':
winnable = 'YES'
elif board[row_1 + 1][col_1 - 1] == '.' and board[row_1 + 2][col_1 - 2] == bat_piece:
winnable = 'YES'
#right_below_diagonal
        if row_1 + 1 in range(0,4) and col_1 + 1 in range(0,4) and row_1 - 1 in range(0,4) and col_1 - 1 in range(0,4):
if board[row_1 + 1][col_1 + 1] == bat_piece and board[row_1 - 1][col_1 - 1] == '.':
winnable = 'YES'
elif board[row_1 + 1][col_1 + 1] == '.' and board[row_1 - 1][col_1 - 1] == bat_piece:
winnable = 'YES'
#right_below_diagonal
        if row_1 + 1 in range(0, 4) and col_1 + 1 in range(0, 4) and row_1 + 2 in range(0, 4) and col_1 + 2 in range(0, 4):
            if board[row_1 + 1][col_1 + 1] == bat_piece and board[row_1 + 2][col_1 + 2] == '.':
                winnable = 'YES'
            elif board[row_1 + 1][col_1 + 1] == '.' and board[row_1 + 2][col_1 + 2] == bat_piece:
                winnable = 'YES'
    print winnable
'''if row_1 + 2 in range(0, 4) and col_1 + 2 in range(0, 4):'''
| 36.701863 | 126 | 0.43273 | 867 | 5,909 | 2.746251 | 0.04729 | 0.063839 | 0.181436 | 0.136077 | 0.841663 | 0.841663 | 0.841663 | 0.832843 | 0.831163 | 0.831163 | 0 | 0.096659 | 0.43273 | 5,909 | 160 | 127 | 36.93125 | 0.613663 | 0.029277 | 0 | 0.347826 | 0 | 0 | 0.019318 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.054348 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c82a1d72cb68d521be5c5e33dcb4438b218cac18 | 40,046 | py | Python | stat_analysis/pouring/utils.py | asaran/gaze-LfD | 964635d9bf7b208abe35d40b2bf791b05b8a0c3b | [
"MIT"
] | 1 | 2022-02-16T15:35:58.000Z | 2022-02-16T15:35:58.000Z | stat_analysis/pouring/utils.py | asaran/gaze-LfD | 964635d9bf7b208abe35d40b2bf791b05b8a0c3b | [
"MIT"
] | null | null | null | stat_analysis/pouring/utils.py | asaran/gaze-LfD | 964635d9bf7b208abe35d40b2bf791b05b8a0c3b | [
"MIT"
] | null | null | null | import cv2
import ast
from bisect import bisect_left
from numpy import ones,vstack
from numpy.linalg import lstsq
import numpy as np
import math
import rosbag
import math
import os
import gzip
def takeClosest(myList, myNumber):
"""
Assumes myList is sorted. Returns closest value to myNumber.
If two numbers are equally close, return the smallest number.
"""
pos = bisect_left(myList, myNumber)
if pos == 0:
return myList[0], 0
if pos == len(myList):
return myList[-1], len(myList)-1
before = myList[pos - 1]
after = myList[pos]
if after - myNumber < myNumber - before:
return after, pos
else:
return before, pos-1
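A quick standalone check of the bisection-based nearest lookup above (same logic, copied here so it runs without the rest of the module):

```python
from bisect import bisect_left

def take_closest(sorted_list, number):
    # Return (value, index) of the element closest to `number`;
    # on a tie, prefer the smaller element, as in takeClosest above.
    pos = bisect_left(sorted_list, number)
    if pos == 0:
        return sorted_list[0], 0
    if pos == len(sorted_list):
        return sorted_list[-1], len(sorted_list) - 1
    before, after = sorted_list[pos - 1], sorted_list[pos]
    if after - number < number - before:
        return after, pos
    return before, pos - 1

print(take_closest([10, 20, 30], 24))  # -> (20, 1)
```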
def read_json(data_dir):
data = []
files = os.listdir(data_dir)
for file in files:
if (file.endswith("json.gz")):
with gzip.open(data_dir+'/'+file, "rb") as f:
data=f.readlines()
for r in range(len(data)):
row = data[r]
data[r] = ast.literal_eval(row.strip('\n'))
vid2ts = {} # dictionary mapping video time to time stamps in json
right_eye_pd, left_eye_pd, gp = {}, {}, {} # dicts mapping ts to pupil diameter and gaze points (2D) for both eyes
for d in data:
        if 'vts' in d and d['s'] == 0:
            vid2ts[d['vts']] = d['ts']
if 'pd' in d and d['s']==0 and d['eye']=='right':
right_eye_pd[d['ts']] = d['pd']
if 'pd' in d and d['s']==0 and d['eye']=='left':
left_eye_pd[d['ts']] = d['pd']
if 'gp' in d and d['s']==0 :
gp[d['ts']] = d['gp'] #list of 2 coordinates
print('read json')
# map vts to ts
all_vts = sorted(vid2ts.keys())
a = all_vts[0]
model = []
    for i in range(1, len(all_vts)):
        points = [(a, vid2ts[a]), (all_vts[i], vid2ts[all_vts[i]])]
        x_coords, y_coords = zip(*points)
        A = vstack([x_coords, ones(len(x_coords))]).T
        m, c = lstsq(A, y_coords)[0]
        model.append((m, c))
        a = all_vts[i]  # advance the segment start, matching get_color_timeline
return data, gp, model, all_vts
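For two points, the `lstsq` fit above reduces to the exact line through them; a numpy-free sketch of one segment of that vts-to-ts mapping (toy values):

```python
def fit_segment(p1, p2):
    # Exact slope/intercept of the line through two (vts, ts) samples.
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / float(x2 - x1)
    c = y1 - m * x1
    return m, c

m, c = fit_segment((0, 100), (10, 200))
print(m * 5 + c)  # -> 150.0
```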
def color_dist(color1, color2):
r1,g1,b1 = color1
r2,g2,b2 = color2
color_d = pow(r1-r2,2) + pow(g1-g2,2) + pow(b1-b2,2)
mean_rgb = ((r1+r2)/2, (g1+g2)/2, (b1+b2)/2)
return color_d, mean_rgb
def pixel_dist(p1,p2):
x1, y1 = p1
x2, y2 = p2
d = pow(x1-x2,2) + pow(y1-y2,2)
return math.sqrt(d)
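A standalone copy of the Euclidean pixel-distance computation above, with a classic 3-4-5 check:

```python
import math

def euclidean_px(p1, p2):
    # Euclidean distance between two pixel coordinates.
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

print(euclidean_px((0, 0), (3, 4)))  # -> 5.0
```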
def is_known_color(color):
    # HSV ranges for each known color: [[h_min, h_max], [s_min, s_max], [v_min, v_max]]
    known_colors = {
        'red': [[170, 190], [65, 180], [60, 90]],
        'green': [[95, 105], [85, 115], [65, 95]],
        'yellow': [[10, 25], [130, 160], [110, 150]]
    }
    h, s, v = color
    for name, (h_rng, s_rng, v_rng) in known_colors.items():
        if h_rng[0] < h < h_rng[1] and s_rng[0] < s < s_rng[1] and v_rng[0] < v < v_rng[1]:
            return name
    return None
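A standalone sketch of the intended HSV range lookup above (plain dict iteration; the sample HSV value is illustrative):

```python
# HSV ranges per color name: [[h_min, h_max], [s_min, s_max], [v_min, v_max]]
KNOWN_COLORS = {
    'red': [[170, 190], [65, 180], [60, 90]],
    'green': [[95, 105], [85, 115], [65, 95]],
    'yellow': [[10, 25], [130, 160], [110, 150]]
}

def classify_hsv(color):
    h, s, v = color
    for name, (h_rng, s_rng, v_rng) in KNOWN_COLORS.items():
        if h_rng[0] < h < h_rng[1] and s_rng[0] < s < s_rng[1] and v_rng[0] < v < v_rng[1]:
            return name
    return None

print(classify_hsv((180, 100, 75)))  # -> red
```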
# returns a list of frame indices corresponding to the annotated KF for video demonstrations
def get_video_keyframes(user_id, video_file, video_kf_file):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
length = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
print('read video file')
vidcap.release()
cv2.destroyAllWindows()
# read video files
with open(video_kf_file) as f:
content = f.readlines()
# you may also want to remove whitespace characters like `\n` at the end of each line
content = [x.strip() for x in content]
# print(content)
print('read text file')
# find segmentation points in video file
keyframes = {
'Start': [],
'Reaching': [],
'Grasping': [],
'Close': [],
'Open': [],
'Transport': [],
'Pouring': [],
'Return': [],
'Release': [],
'Stop': []
}
kf_type = {
1: 'Start',
2: 'Reaching',
3: 'Grasping',
4: 'Transport',
5: 'Pouring',
6: 'Return',
7: 'Release',
8: 'Reaching',
9: 'Grasping',
10: 'Transport',
11: 'Pouring',
12: 'Return',
13: 'Release',
14: 'Stop'
}
for kf in content:
data = kf.split(' ')
# print(data)
user = data[0]
if(user == user_id):
for i in range(1,len(data)):
d = data[i]
# print(d)
if(d=='end'):
frame_idx = length
else:
kf_time = float(d)
frame_idx = math.floor(kf_time*fps)
k = kf_type[i]
keyframes[k].append(frame_idx)
print('Found start and stop keyframe indices')
return keyframes
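The annotated keyframe times above are mapped to frame indices via `math.floor(kf_time * fps)`; a minimal check with an assumed frame rate:

```python
import math

fps = 25.0      # assumed frame rate of the demo video
kf_time = 2.5   # annotated keyframe time in seconds
frame_idx = int(math.floor(kf_time * fps))
print(frame_idx)  # -> 62
```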
def get_video_keyframe_labels(user_id, video_file, video_kf_file):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
length = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
print('read video file')
vidcap.release()
cv2.destroyAllWindows()
# read video files
with open(video_kf_file) as f:
content = f.readlines()
# you may also want to remove whitespace characters like `\n` at the end of each line
content = [x.strip() for x in content]
# print(content)
print('read text file')
all_keyframe_indices = []
keyframes = {}
kf_type = {
1: 'Start',
2: 'Reaching',
3: 'Grasping',
4: 'Transport',
5: 'Pouring',
6: 'Return',
7: 'Release',
8: 'Reaching',
9: 'Grasping',
10: 'Transport',
11: 'Pouring',
12: 'Return',
13: 'Release',
14: 'Stop'
}
for kf in content:
data = kf.split(' ')
# print(data)
user = data[0]
if(user == user_id):
for i in range(1,len(data)):
d = data[i]
# print(d)
if(d=='end'):
frame_idx = length
else:
kf_time = float(d)
frame_idx = math.floor(kf_time*fps)
k = kf_type[i]
# The same frame_idx can have multiple kf_types
if(frame_idx not in keyframes or k!='Stop'):
keyframes[frame_idx] = k
all_keyframe_indices.append(frame_idx)
print('Found start and stop keyframe indices')
return keyframes, all_keyframe_indices
# returns a list of rgb color values for gaze point for each video frame
def get_color_timeline(data, video_file, keep_saccades):
timeline = []
vid2ts = {} # dictionary mapping video time to time stamps in json
right_eye_pd, left_eye_pd, gp = {}, {}, {} # dicts mapping ts to pupil diameter and gaze points (2D) for both eyes
for d in data:
        if 'vts' in d and d['s'] == 0:
            vid2ts[d['vts']] = d['ts']
if 'pd' in d and d['s']==0 and d['eye']=='right':
right_eye_pd[d['ts']] = d['pd']
if 'pd' in d and d['s']==0 and d['eye']=='left':
left_eye_pd[d['ts']] = d['pd']
if 'gp' in d and d['s']==0 :
gp[d['ts']] = d['gp'] #list of 2 coordinates
print('read json')
# map vts to ts
all_vts = sorted(vid2ts.keys())
a = all_vts[0]
model = []
for i in range(1,len(all_vts)):
points = [(a,vid2ts[a]),(all_vts[i],vid2ts[all_vts[i]])]
x_coords, y_coords = zip(*points)
A = vstack([x_coords,ones(len(x_coords))]).T
m, c = lstsq(A, y_coords)[0]
model.append((m,c))
a = all_vts[i]
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
print('reading video file')
last_fixation_color =(0,0,0)
all_ts = sorted(gp.keys())
count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
gaze_pts = []
while success:
frame_ts = int((count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts, _ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
gaze_pts.append(gaze_coords)
b, g, r = img[gaze_coords[1]-1][gaze_coords[0]-1]
instant_color = [r/255.0,g/255.0,b/255.0]
timeline.append(instant_color)
count += 1
success, img = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
saccade_indices = []
if not keep_saccades:
timeline, saccade_indices = remove_saccades(gaze_pts, timeline, fps)
return timeline, saccade_indices
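The `model` list built above holds one `(m, c)` line per pair of consecutive sync points, fitted with least squares so that `ts ≈ m*vts + c` within each segment. A self-contained sketch of fitting one such segment (the sync-point values are made up for illustration):

```python
import numpy as np

def fit_segment(p0, p1):
    """Fit ts = m*vts + c through two (vts, ts) sync points, as the loop above does."""
    x = np.array([p0[0], p1[0]], dtype=float)
    y = np.array([p0[1], p1[1]], dtype=float)
    A = np.vstack([x, np.ones(len(x))]).T
    m, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return m, c

m, c = fit_segment((0, 100), (1000, 1100))  # line ts = vts + 100
```

With exactly two points the least-squares fit is just the line through them; `lstsq` is used here only to mirror the code above.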
def get_kt_keyframes_labels(all_vts, model, gp, video_file, bag_file):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
print('reading video file')
keyframes = {}
last_fixation_color =(0,0,0)
all_ts = sorted(gp.keys())
count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
videoframe2trackerts = []
gaze_pts = []
while success:
frame_ts = int((count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts, _ = takeClosest(all_ts,ts)
videoframe2trackerts.append(tracker_ts)
count += 1
success, img = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
# find segmentation points on bagfile
all_keyframe_indices = []
gripper = {}
record_k = False
bag = rosbag.Bag(bag_file)
print(bag_file)
# get the start time for KT recording
start= False
frame_idx = None
if bag.get_message_count('/gaze_tracker')!=0: # gaze_tracker topic was recorded
for topic, msg, t in bag.read_messages(topics=['/gaze_tracker','/log_KTframe','/joint_states','/vector/right_gripper/stat']):
if (topic=='/log_KTframe'):
if("Recorded keyframe" in msg.data):
record_k = True
if 'Reaching' in msg.data:
kf_type = 'Reaching'
elif 'Grasping' in msg.data:
kf_type = 'Grasping'
elif 'Transport' in msg.data:
kf_type = 'Transport'
elif 'Pouring' in msg.data:
kf_type = 'Pouring'
elif 'Return' in msg.data:
kf_type = 'Return'
elif 'Release' in msg.data:
kf_type = 'Release'
else:
kf_type = 'Other'
if("Open" in msg.data):
record_k = True
kf_type = 'Open'
if("Close" in msg.data):
record_k = True
kf_type = 'Close'
if (topic == '/gaze_tracker'):
if('gp' in msg.data):
gaze_msg = msg.data
s = gaze_msg.find('"ts":')
e = gaze_msg.find(',')
gaze_ts = gaze_msg[s+5:e]
tracker_ts, frame_idx = takeClosest(videoframe2trackerts,int(gaze_ts))
if(record_k == True):
all_keyframe_indices.append(frame_idx)
keyframes[frame_idx] = kf_type
record_k = False
if (topic == '/joint_states') and not start and frame_idx!=None:
start = True
keyframes[frame_idx] = 'Start'
bag.close()
return keyframes, all_keyframe_indices
def get_kt_keyframes(all_vts, model, gp, video_file, bag_file):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
print('reading video file')
last_fixation_color =(0,0,0)
all_ts = sorted(gp.keys())
count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
videoframe2trackerts = []
gaze_pts = []
while success:
frame_ts = int((count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts, _ = takeClosest(all_ts,ts)
videoframe2trackerts.append(tracker_ts)
count += 1
success, img = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
# find segmentation points on bagfile
all_keyframe_indices = []
record_k = False
bag = rosbag.Bag(bag_file)
print(bag_file)
if bag.get_message_count('/gaze_tracker')!=0: # gaze_tracker topic was recorded
for topic, msg, t in bag.read_messages(topics=['/gaze_tracker','/log_KTframe']):
if (topic=='/log_KTframe'):
if("Recorded keyframe" in msg.data):
record_k = True
if 'Reaching' in msg.data:
kf_type = 'Reaching'
elif 'Grasping' in msg.data:
kf_type = 'Grasping'
elif 'Transport' in msg.data:
kf_type = 'Transport'
elif 'Pouring' in msg.data:
kf_type = 'Pouring'
elif 'Return' in msg.data:
kf_type = 'Return'
elif 'Release' in msg.data:
kf_type = 'Release'
else:
kf_type = 'Other'
if("Open" in msg.data):
record_k = True
kf_type = 'Open'
if("Close" in msg.data):
record_k = True
kf_type = 'Close'
if (topic == '/gaze_tracker'):
if(record_k == True):
if('gp' in msg.data):
gaze_msg = msg.data
s = gaze_msg.find('"ts":')
e = gaze_msg.find(',')
gaze_ts = gaze_msg[s+5:e]
tracker_ts, frame_idx = takeClosest(videoframe2trackerts,int(gaze_ts))
all_keyframe_indices.append(frame_idx)
record_k = False
bag.close()
return all_keyframe_indices
def find_saccades(gaze_pts, fps):
saccade_indices = []
dt = 1.0/fps
for i in range(1,len(gaze_pts)):
g = gaze_pts[i]
prev_g = gaze_pts[i-1]
# gaze speed in pixels/second; above 800 px/s the frame is treated as a saccade
s = math.hypot(g[0]-prev_g[0], g[1]-prev_g[1])/dt
if s>800:
saccade_indices.append(i)
return saccade_indices
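The detector above flags a frame as a saccade whenever the gaze point moves faster than a pixel-speed threshold between consecutive frames. A standalone sketch with synthetic gaze points (the threshold mirrors the 800 px/s used above; the points are invented):

```python
import math

def detect_saccades(gaze_pts, fps, threshold=800.0):
    """Flag frame indices where inter-frame gaze speed (px/s) exceeds a threshold."""
    dt = 1.0 / fps
    out = []
    for i in range(1, len(gaze_pts)):
        (x0, y0), (x1, y1) = gaze_pts[i - 1], gaze_pts[i]
        if math.hypot(x1 - x0, y1 - y0) / dt > threshold:
            out.append(i)
    return out

pts = [(0, 0), (1, 0), (100, 0), (101, 0)]
print(detect_saccades(pts, fps=30))  # [2]: the 99 px jump at 30 fps is 2970 px/s
```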
def remove_saccades(gaze_pts, color_timeline, fps):
saccade_indices = []
dt = 1.0/fps
for i in range(1,len(gaze_pts)):
g = gaze_pts[i]
prev_g = gaze_pts[i-1]
# gaze speed in pixels/second; above 200 px/s the frame is whitened out as a saccade
s = math.hypot(g[0]-prev_g[0], g[1]-prev_g[1])/dt
if s>200:
color_timeline[i] = [1.0, 1.0, 1.0]
saccade_indices.append(i)
return color_timeline, saccade_indices
def get_cumulative_gaze_dist(data, video_file):
vid2ts = {} # dictionary mapping video time to time stamps in json
right_eye_pd, left_eye_pd, gp = {}, {}, {} # dicts mapping ts to pupil diameter and gaze points (2D) for both eyes
for d in data:
if 'vts' in d and d['s']==0:
vid2ts[d['vts']] = d['ts'] # the vts==0 and vts>0 branches were identical
if 'pd' in d and d['s']==0 and d['eye']=='right':
right_eye_pd[d['ts']] = d['pd']
if 'pd' in d and d['s']==0 and d['eye']=='left':
left_eye_pd[d['ts']] = d['pd']
if 'gp' in d and d['s']==0 :
gp[d['ts']] = d['gp'] #list of 2 coordinates
print('read json')
# map vts to ts
all_vts = sorted(vid2ts.keys())
a = all_vts[0]
model = []
for i in range(1,len(all_vts)):
points = [(a,vid2ts[a]),(all_vts[i],vid2ts[all_vts[i]])]
x_coords, y_coords = zip(*points)
A = vstack([x_coords,ones(len(x_coords))]).T
m, c = lstsq(A, y_coords)[0]
model.append((m,c))
a = all_vts[i]
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
print('reading video file')
last_fixation_color =(0,0,0)
all_ts = sorted(gp.keys())
count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
gaze_pts = []
current_dist = 0
cumulative_dist = [0]
tracker_ts, _ = takeClosest(all_ts,all_vts[0])
gx_p, gy_p = gp[tracker_ts]
while success:
frame_ts = int((count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts, _ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
gaze_pts.append(gaze_coords)
gx, gy = gaze_coords
d = math.sqrt(math.pow(gx-gx_p,2)+math.pow(gy-gy_p,2))
current_dist = current_dist + d
cumulative_dist.append(current_dist)
gx_p, gy_p = gx, gy
count += 1
success, img = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
return cumulative_dist
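The cumulative distance above is a running sum of Euclidean steps between consecutive gaze points. A minimal sketch of that accumulation on invented points:

```python
import math

def cumulative_distance(points):
    """Running Euclidean path length over a sequence of gaze points."""
    total, out = 0.0, [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
        out.append(total)
    return out

print(cumulative_distance([(0, 0), (3, 4), (3, 4)]))  # [0.0, 5.0, 5.0]
```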
def get_color_name(hsv):
color_ranges = {
'red': [[161,140,70],[184,255,255]],
'green': [[36,64,28],[110,155,220]],
'yellow': [[0,90,100],[32,180,180]],
'blue': [[94,111,34],[118,165,136]],
'black': [[0,0,0],[180,255,40]],
'white': [[0,0,170],[180,255,255]]
}
color_val = {
'black': (0,0,0),
'white': (255,255,255),
'red': (0,0,255),
'green': (0,255,0),
'yellow': (0,255,255),
'blue': (255,0,0),
'pasta': (0,215,225)
}
h,s,v = hsv
color = ''
value = None
for i, (n,r) in enumerate(color_ranges.items()):
if h>=r[0][0] and h<=r[1][0]:
if s>=r[0][1] and s<=r[1][1]:
if v>=r[0][2] and v<=r[1][2]:
color = n
value = color_val[n]
pasta_color_range = [[0,30,0],[40,130,100]]
p = pasta_color_range
if color=='':
if h>=p[0][0] and h<=p[1][0]:
if s>=p[0][1] and s<=p[1][1]:
if v>=p[0][2] and v<=p[1][2]:
color = 'pasta'
value = color_val['pasta']
return color, value
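`get_color_name` tests the H, S, and V channels independently against per-color ranges. A compact sketch of the same range check (here the first matching range wins, whereas the loop above lets later ranges overwrite earlier matches; the two ranges shown are copied from the table above):

```python
def classify_hsv(hsv, ranges):
    """Return the first color name whose [low, high] HSV box contains the pixel."""
    h, s, v = hsv
    for name, (lo, hi) in ranges.items():
        if lo[0] <= h <= hi[0] and lo[1] <= s <= hi[1] and lo[2] <= v <= hi[2]:
            return name
    return ''

ranges = {
    'red': ([161, 140, 70], [184, 255, 255]),
    'white': ([0, 0, 170], [180, 255, 255]),
}
print(classify_hsv((170, 200, 200), ranges))  # 'red'
```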
def get_color_name_from_hist(gaze_coords, img_hsv, radius):
color_hist ={
'blue': 0,
'yellow': 0,
'red': 0,
'green': 0,
'black': 0,
'pasta': 0,
'other': 0
}
color_val = {
'black': (0,0,0),
'red': (0,0,255),
'green': (0,255,0),
'yellow': (0,255,255),
'pasta': (0,255,255),
'blue': (255,0,0),
'other': (192,192,192)
}
x, y = gaze_coords
hsv = img_hsv[y-1][x-1]
h,s,v = hsv
color = ''
value = None
# pixels in the image which lie inside a circle of given radius
min_x, max_x = max(0,x-radius), min(1920, x+radius)
min_y, max_y = max(0,y-radius), min(1080, y+radius)
for i, j in itertools.product(range(min_x,max_x), range(min_y,max_y)): # zip() only walked the diagonal; product() covers the full box (requires `import itertools`)
d = math.pow((i-x),2)+ math.pow((j-y),2)
if d<= math.pow(radius,2):
curr_hsv= img_hsv[j][i]
current_color, _ = get_color_name(curr_hsv)
if current_color in color_hist.keys():
color_hist[current_color] += 1
else:
color_hist['other'] += 1
max_val = 0
max_color = ''
for key,val in color_hist.items():
if val>max_val:
max_val = val
max_color = key
# do not assign other color if relevant colors are present
second_max_val = 0
second_max_color = ''
if max_color=='other':
# print('***other***')
for key,val in color_hist.items():
if key=='other':
continue
else:
if val>second_max_val:
second_max_val = val
second_max_color = key
if second_max_val>5:
max_color = second_max_color
max_val = second_max_val
# print(max_color, second_max_val)
value = color_val[max_color]
return max_color, value
def get_color_name_from_hist_ignore_black(gaze_coords, img_hsv, radius):
color_hist ={
'blue': 0,
'yellow': 0,
'red': 0,
'green': 0,
'black': 0,
'pasta': 0,
'other': 0
}
color_val = {
'black': (0,0,0),
'red': (0,0,255),
'green': (0,255,0),
'yellow': (0,255,255),
'pasta': (0,255,255),
'blue': (255,0,0),
'other': (192,192,192)
}
x, y = gaze_coords
hsv = img_hsv[y-1][x-1]
h,s,v = hsv
color = ''
value = None
# pixels in the image which lie inside a circle of given radius
min_x, max_x = max(0,x-radius), min(1920, x+radius)
min_y, max_y = max(0,y-radius), min(1080, y+radius)
for i, j in itertools.product(range(min_x,max_x), range(min_y,max_y)): # zip() only walked the diagonal; product() covers the full box (requires `import itertools`)
d = math.pow((i-x),2)+ math.pow((j-y),2)
if d<= math.pow(radius,2):
curr_hsv= img_hsv[j][i]
current_color, _ = get_color_name(curr_hsv)
if current_color in color_hist.keys():
color_hist[current_color] += 1
else:
color_hist['other'] += 1
max_val = 0
max_color = ''
for key,val in color_hist.items():
# print(val)
if val>max_val:
max_val = val
max_color = key
# tie break black with other colors
second_max_val = 0
second_max_color = ''
if max_color=='black':
# print('***other***')
for key,val in color_hist.items():
if key=='black':
continue
else:
if val>second_max_val:
second_max_val = val
second_max_color = key
if second_max_val>50:
max_color = second_max_color
max_val = second_max_val
# do not assign other color if relevant colors are present
second_max_val = 0
second_max_color = ''
if max_color=='other':
# print('***other***')
for key,val in color_hist.items():
if key=='other':
continue
else:
if val>second_max_val:
second_max_val = val
second_max_color = key
if second_max_val>5:
max_color = second_max_color
max_val = second_max_val
# print(max_color, second_max_val)
value = color_val[max_color]
return max_color, value
# returns a list of rgb color values for gaze point for each video frame
def get_hsv_color_timeline(data, video_file):
timeline = []
vid2ts = {} # dictionary mapping video time to time stamps in json
right_eye_pd, left_eye_pd, gp = {}, {}, {} # dicts mapping ts to pupil diameter and gaze points (2D) for both eyes
for d in data:
if 'vts' in d and d['s']==0:
vid2ts[d['vts']] = d['ts'] # the vts==0 and vts>0 branches were identical
if 'pd' in d and d['s']==0 and d['eye']=='right':
right_eye_pd[d['ts']] = d['pd']
if 'pd' in d and d['s']==0 and d['eye']=='left':
left_eye_pd[d['ts']] = d['pd']
if 'gp' in d and d['s']==0 :
gp[d['ts']] = d['gp'] #list of 2 coordinates
print('read json')
# map vts to ts
all_vts = sorted(vid2ts.keys())
a = all_vts[0]
model = []
for i in range(1,len(all_vts)):
points = [(a,vid2ts[a]),(all_vts[i],vid2ts[all_vts[i]])]
x_coords, y_coords = zip(*points)
A = vstack([x_coords,ones(len(x_coords))]).T
m, c = lstsq(A, y_coords)[0]
model.append((m,c))
a = all_vts[i]
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
print('reading video file')
last_fixation_color =(0,0,0)
all_ts = sorted(gp.keys())
count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
gaze_pts = []
while success:
img_hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
frame_ts = int((count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts, _ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
gaze_pts.append(gaze_coords)
h,s,v = img_hsv[gaze_coords[1]-1][gaze_coords[0]-1]
instant_color = [h, s, v]
timeline.append(instant_color)
count += 1
success, img = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
saccade_indices = []
saccade_indices = find_saccades(gaze_pts, fps)
return timeline, saccade_indices, fps
def filter_fixations(video_file, model, gp, all_vts, demo_type, saccade_indices, start_idx, end_idx):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
fourcc = cv2.VideoWriter_fourcc(*'XVID')
KT_fixation_count = {
'red': 0,
'yellow': 0,
'blue': 0,
'green': 0,
'black': 0,
'other': 0,
'pasta': 0
}
fixation_count = KT_fixation_count
all_ts = sorted(gp.keys())
total_count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
window = []
win_size = 3
radius = 100
valid_count = 0
while success:
if total_count<start_idx or total_count>end_idx:
total_count += 1
success, img = vidcap.read()
continue
frame_ts = int((total_count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts,_ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
img_hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
color_name, color_value = get_color_name_from_hist(gaze_coords, img_hsv, radius)
window.append(color_name)
if(len(window)>win_size):
del window[0]
font = cv2.FONT_HERSHEY_SIMPLEX
if total_count not in saccade_indices:
# might be a fixation
fixation = True
for det_c in window:
if det_c!=color_name:
fixation=False
if(fixation):
fixation_count[color_name] += 1
valid_count += 1
total_count += 1
success, img = vidcap.read()
cv2.destroyAllWindows()
for f in fixation_count:
if(valid_count!=0):
fixation_count[f] = fixation_count[f]*100.0/valid_count
else:
fixation_count[f] = -1
return fixation_count
def filter_fixation_counts(video_file, model, gp, all_vts, demo_type, saccade_indices, start_idx, end_idx):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
fourcc = cv2.VideoWriter_fourcc(*'XVID')
KT_fixation_count = {
'red': 0,
'yellow': 0,
'blue': 0,
'green': 0,
'black': 0,
'other': 0,
'pasta': 0
}
fixation_count = KT_fixation_count
all_ts = sorted(gp.keys())
total_count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
window = []
win_size = 3
radius = 100
valid_count = 0
while success:
if total_count<start_idx or total_count>end_idx:
total_count += 1
success, img = vidcap.read()
continue
frame_ts = int((total_count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts,_ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
img_hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
color_name, color_value = get_color_name_from_hist(gaze_coords, img_hsv, radius)
window.append(color_name)
if(len(window)>win_size):
del window[0]
font = cv2.FONT_HERSHEY_SIMPLEX
if total_count not in saccade_indices:
# might be a fixation
fixation = True
for det_c in window:
if det_c!=color_name:
fixation=False
if(fixation):
fixation_count[color_name] += 1
valid_count += 1
total_count += 1
success, img = vidcap.read()
cv2.destroyAllWindows()
for f in fixation_count:
if(valid_count==0):
fixation_count[f] = -1
return fixation_count
def filter_fixations_ignore_black(video_file, model, gp, all_vts, demo_type, saccade_indices, keyframe_indices, keyframes):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
fourcc = cv2.VideoWriter_fourcc(*'XVID')
KT_fixation_count = {
'red': 0,
'yellow': 0,
'blue': 0,
'green': 0,
'black': 0,
'other': 0,
'pasta': 0
}
fixation_count = KT_fixation_count
target_objects = {
'Reaching': ['green','yellow'],
'Grasping': ['green', 'yellow'],
'Open': ['green', 'yellow'],
'Close': ['green', 'yellow'],
'Transport': ['red', 'blue'],
'Pouring': ['red', 'blue'],
'Return': ['red','blue'],
'Release': ['red','blue']
}
start_idx, end_idx = keyframe_indices[0], keyframe_indices[-1]
all_ts = sorted(gp.keys())
total_count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
window = []
win_size = 3
radius = 100
valid_count = 0
current_action = 'Reaching'
while success:
if total_count<start_idx or total_count>end_idx:
total_count += 1
success, img = vidcap.read()
continue
first_act = True
# get current KF segment
if demo_type=='k':
for i in range(0,len(keyframe_indices)-1):
if total_count>keyframe_indices[i] and\
total_count<=keyframe_indices[i+1]:
if keyframes[keyframe_indices[i+1]]!= 'Other':
current_action = keyframes[keyframe_indices[i+1]]
elif demo_type == 'v':
for i in range(1,len(keyframe_indices)-1):
if total_count>=keyframe_indices[i] and\
total_count<keyframe_indices[i+1]:
current_action = keyframes[keyframe_indices[i]]
if current_action == 'Open' and first_act==True and demo_type=='k':
first_act = False
if current_action == 'Release' and first_act==True and demo_type=='v':
first_act = False
frame_ts = int((total_count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts,_ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
img_hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
color_name, color_value = get_color_name_from_hist_ignore_black(gaze_coords, img_hsv, radius)
if color_name == 'pasta':
if first_act:
color_name = target_objects[current_action][0]
else:
color_name = target_objects[current_action][1]
window.append(color_name)
if(len(window)>win_size):
del window[0]
font = cv2.FONT_HERSHEY_SIMPLEX
if total_count not in saccade_indices:
# might be a fixation
fixation = True
for det_c in window:
if det_c!=color_name:
fixation=False
if(fixation):
fixation_count[color_name] += 1
valid_count += 1
total_count += 1
success, img = vidcap.read()
cv2.destroyAllWindows()
for f in fixation_count:
if(valid_count!=0):
fixation_count[f] = fixation_count[f]*100.0/valid_count
else:
fixation_count[f] = -1
return fixation_count
def filter_fixations_with_timeline(video_file, model, gp, all_vts, demo_type, saccade_indices, start_idx, end_idx):
vidcap = cv2.VideoCapture(video_file)
fps = vidcap.get(cv2.CAP_PROP_FPS)
success, img = vidcap.read()
fourcc = cv2.VideoWriter_fourcc(*'XVID')
fixation_list, fixation_idx_list = [], []
all_ts = sorted(gp.keys())
total_count = 0
imgs = [] # list of image frames
frame2ts = [] # corresponding list of video time stamp values in microseconds
window = []
win_size = 3
radius = 100
valid_count = 0
while success:
if total_count<start_idx or total_count>end_idx:
total_count += 1
success, img = vidcap.read()
continue
frame_ts = int((total_count/fps)*1000000)
frame2ts.append(frame_ts)
less = [a for a in all_vts if a<=frame_ts]
idx = len(less)-1
if idx<len(model):
m,c = model[idx]
else:
m,c = model[len(model)-1]
ts = m*frame_ts + c
tracker_ts,_ = takeClosest(all_ts,ts)
gaze = gp[tracker_ts]
gaze_coords = (int(gaze[0]*1920), int(gaze[1]*1080))
img_hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
color_name, color_value = get_color_name_from_hist(gaze_coords, img_hsv, radius)
window.append(color_name)
if(len(window)>win_size):
del window[0]
font = cv2.FONT_HERSHEY_SIMPLEX
if total_count not in saccade_indices:
fixation = True
for det_c in window:
if det_c!=color_name:
fixation=False
if(fixation):
b,g,r = color_value
c_val = [r/255.0, g/255.0, b/255.0]
if(color_name != 'other'):
fixation_list.append(c_val)
fixation_idx_list.append(valid_count)
valid_count += 1
total_count += 1
success, img = vidcap.read()
cv2.destroyAllWindows()
return fixation_list, fixation_idx_list
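The fixation filters above share one idea: a frame only counts as a fixation when every color in a small sliding window agrees with the current detection. A standalone sketch of that window logic on an invented label sequence:

```python
def is_fixation(window, color_name):
    """True only if every recent detection matches the current color."""
    return all(c == color_name for c in window)

window, win_size = [], 3
labels = ['red', 'red', 'blue', 'blue', 'blue']
result = []
for c in labels:
    window.append(c)
    if len(window) > win_size:
        del window[0]
    result.append(is_fixation(window, c))
print(result)  # [True, True, False, False, True]
```

The two `False` entries show the window suppressing the transition frames around the red-to-blue change.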
def get_step_kf_indices(keyframes, keyframe_indices):
valid_types = ['Reaching', 'Grasping', 'Transport', 'Pouring', 'Return', 'Release']
step_kf_indices = []
kf_type = keyframes[keyframe_indices[0]]
if(kf_type!= keyframes[keyframe_indices[1]] and kf_type in valid_types):
step_kf_indices.append(keyframe_indices[0])
last_kf_type = ''
for i in range(1,len(keyframe_indices)-1):
kf = keyframe_indices[i]
kf_type = keyframes[kf]
prev_kf_type = keyframes[keyframe_indices[i-1]]
next_kf_type = keyframes[keyframe_indices[i+1]]
if kf_type=='Close':
kf_type = 'Grasping'
if kf_type=='Open':
kf_type = 'Release'
if prev_kf_type=='Open':
prev_kf_type = 'Release'
if prev_kf_type=='Close':
prev_kf_type = 'Grasping'
# the last KF in a run of identical labels is a segmentation KF,
# i.e. any valid KF whose successor carries a different label
if next_kf_type!=kf_type and kf_type in valid_types:
step_kf_indices.append(kf)
kf_type = keyframes[keyframe_indices[-1]]
if(kf_type!= keyframes[keyframe_indices[-2]] and kf_type in valid_types):
step_kf_indices.append(keyframe_indices[-1])
return step_kf_indices
| 17 | 33 | 0.764706 | 4 | 34 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
from . import api, iotku
#------------------DEVICE-------------------------
@api.route('/api/device/name', methods=['GET'])
def device_name():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not content.get('device_id'):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
return jsonify({'result':device.get('device_name')})
@api.route('/api/device/time_added', methods=['GET'])
def device_time_added():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not content.get('device_id'):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
return jsonify({'result':device.get('time_added')})
@api.route('/api/device/total_sensor', methods=['GET'])
def device_total_sensor():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not content.get('device_id'):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
return jsonify({'result':device.get('total_sensor')})
@api.route('/api/device/sensor_list', methods=['GET'])
def device_sensor_list():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not content.get('device_id'):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensors = device.get_sensor_list()
sensor_id = [x.get('sensor_id') for x in sensors]
sensor_name = [x.get('sensor_name') for x in sensors]
sensor_list = [{'sensor_id':x,'sensor_name':y} for x,y in zip(sensor_id, sensor_name)]
return jsonify({'result':sensor_list})
@api.route('/api/device/add_sensor', methods=['POST'])
def device_add_sensor():
content = request.get_json(silent=True)
if not content:
return jsonify({'result': False, 'reason': 'Invalid format'})
elif not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id","sensor_name"]):
return jsonify({'result': False, 'reason': 'Invalid format'})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result': False, 'reason': "Device ID not found"})
else:
sensor_id, sensor_name = content["sensor_id"],content["sensor_name"]
if device.find_sensor(sensor_id):
return jsonify({'result': False, 'reason': "Sensor ID exists"})
else:
device.add_sensor(sensor_id,sensor_name)
return jsonify({'result': True})
@api.route('/api/device/remove_sensor', methods=['POST'])
def device_remove_sensor():
content = request.get_json(silent=True)
if not content:
return jsonify({'result': False, 'reason': 'Invalid format'})
elif not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result': False, 'reason': 'Invalid format'})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result': False, 'reason': "Device ID not found"})
else:
sensor_id = content["sensor_id"]
if device.find_sensor(sensor_id):
device.remove_sensor(sensor_id)
return jsonify({'result': True})
else:
return jsonify({'result': False, 'reason': "Sensor not found"})
@api.route('/api/device/command', methods=['GET'])
def device_command():
content = request.args
if not 'device_id' in content.keys():
if not all(x in session.keys() for x in ["logged_in","device_id"]):
return jsonify({'result':False,'reason':'Not logged in / Invalid login type / Invalid format'})
else:
device_id = session['device_id']
user = iotku.find_user(api_key=session["api_key"])
device = user.find_device(device_id)
if not device:
return jsonify({'result': False, 'reason': "Invalid Device ID. Please relogin"})
else:
command = device.get('command')
return jsonify({'result': command})
else:
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
else:
device_id = content['device_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result': False, 'reason': "Invalid Device ID"})
else:
command = device.get('command')
return jsonify({'result': command})
@api.route('/api/device/command_history', methods=['GET'])
def device_command_history():
    content = request.args
    if 'device_id' not in content:
        # No device_id in the query string: treat this as a device session,
        # authenticated with an API key rather than an email.
        if not all(x in session for x in ["logged_in", "device_id", "api_key"]):
            return jsonify({'result': False, 'reason': 'Not logged in / Invalid login type / Invalid format'})
        device_id = session['device_id']
        user = iotku.find_user(api_key=session["api_key"])
        device = user.find_device(device_id)
        if not device:
            return jsonify({'result': False, 'reason': "Invalid Device ID. Please relogin"})
        return jsonify({'result': device.get('command_history')})
    # device_id supplied: a user session (email login) asking about one of
    # their own devices.
    if not all(x in session for x in ["logged_in", "email"]):
        return jsonify({'result': False, 'reason': 'Not logged in / Unauthorized'})
    device_id = content['device_id']
    user = iotku.find_user(email=session["email"])
    device = user.find_device(device_id)
    if not device:
        return jsonify({'result': False, 'reason': "Invalid Device ID"})
    return jsonify({'result': device.get('command_history')})
#------------------/DEVICE-------------------------
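Every endpoint in the /DEVICE section above returns the same JSON envelope: `{'result': ...}` on success and `{'result': False, 'reason': ...}` on failure. A minimal client-side helper that unwraps that envelope might look like this sketch (the `unwrap` function and `ApiError` class are hypothetical, not part of this codebase):

```python
class ApiError(Exception):
    """Raised when the API envelope reports failure."""


def unwrap(payload: dict):
    """Return payload['result'], raising ApiError on a failure envelope.

    Failure envelopes carry result=False plus a human-readable 'reason';
    success envelopes carry the actual result under 'result'.
    """
    if payload.get("result") is False:
        raise ApiError(payload.get("reason", "unknown error"))
    return payload["result"]
```

For example, `unwrap({'result': 'reboot'})` yields the command string, while `unwrap({'result': False, 'reason': 'Sensor not found'})` raises `ApiError('Sensor not found')`.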
# great_international/migrations/0048_capitalinvestrelatedsubsectors_internationalsubsectorpage.py
# From uktrade/directory-cms (MIT licence)
# Generated by Django 2.2.2 on 2019-07-16 13:48
import core.model_fields
import core.validators
from django.db import migrations, models
import django.db.models.deletion
import great_international.panels.great_international
import modelcluster.fields


class Migration(migrations.Migration):

    dependencies = [
('export_readiness', '0051_auto_20190627_1424'),
('wagtailcore', '0041_group_collection_permissions_verbose_name_plural'),
('wagtailimages', '0001_squashed_0021'),
('great_international', '0047_investregionlandingpage_investsectorpage'),
    ]

    operations = [
        migrations.CreateModel(
            name='InternationalSubSectorPage',
            fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('service_name', models.CharField(choices=[('FIND_A_SUPPLIER', 'Find a Supplier'), ('EXPORT_READINESS', 'Export Readiness'), ('INVEST', 'Invest'), ('COMPONENTS', 'Components'), ('GREAT_INTERNATIONAL', 'Great International')], db_index=True, max_length=100, null=True)),
('uses_tree_based_routing', models.BooleanField(default=False, help_text="Allow this page's URL to be determined by its slug, and the slugs of its ancestors in the page tree.", verbose_name='tree-based routing enabled')),
('heading', models.CharField(max_length=255, verbose_name='Sector name')),
('heading_en_gb', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_de', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_ja', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_zh_hans', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_fr', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_es', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_pt', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('heading_ar', models.CharField(max_length=255, null=True, verbose_name='Sector name')),
('sub_heading', models.TextField(blank=True)),
('sub_heading_en_gb', models.TextField(blank=True, null=True)),
('sub_heading_de', models.TextField(blank=True, null=True)),
('sub_heading_ja', models.TextField(blank=True, null=True)),
('sub_heading_zh_hans', models.TextField(blank=True, null=True)),
('sub_heading_fr', models.TextField(blank=True, null=True)),
('sub_heading_es', models.TextField(blank=True, null=True)),
('sub_heading_pt', models.TextField(blank=True, null=True)),
('sub_heading_ar', models.TextField(blank=True, null=True)),
('heading_teaser', models.TextField(blank=True, verbose_name='Introduction')),
('heading_teaser_en_gb', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_de', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_ja', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_zh_hans', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_fr', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_es', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_pt', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('heading_teaser_ar', models.TextField(blank=True, null=True, verbose_name='Introduction')),
('section_one_body', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_en_gb', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_de', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_ja', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_zh_hans', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_fr', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_es', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_pt', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_body_ar', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='3 unique selling points markdown')),
('section_one_image_caption', models.CharField(blank=True, max_length=255, verbose_name='Image caption')),
('section_one_image_caption_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption')),
('section_one_image_caption_company', models.CharField(blank=True, max_length=255, verbose_name='Image caption attribution')),
('section_one_image_caption_company_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('section_one_image_caption_company_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Image caption attribution')),
('statistic_1_number', models.CharField(blank=True, max_length=255)),
('statistic_1_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading', models.CharField(blank=True, max_length=255)),
('statistic_1_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_1_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_1_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number', models.CharField(blank=True, max_length=255)),
('statistic_2_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading', models.CharField(blank=True, max_length=255)),
('statistic_2_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_2_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_2_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number', models.CharField(blank=True, max_length=255)),
('statistic_3_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading', models.CharField(blank=True, max_length=255)),
('statistic_3_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_3_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_3_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number', models.CharField(blank=True, max_length=255)),
('statistic_4_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading', models.CharField(blank=True, max_length=255)),
('statistic_4_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_4_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_4_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number', models.CharField(blank=True, max_length=255)),
('statistic_5_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading', models.CharField(blank=True, max_length=255)),
('statistic_5_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_5_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_5_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number', models.CharField(blank=True, max_length=255)),
('statistic_6_number_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_number_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading', models.CharField(blank=True, max_length=255)),
('statistic_6_heading_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_heading_ar', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint', models.CharField(blank=True, max_length=255)),
('statistic_6_smallprint_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_de', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_ja', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_fr', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_es', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_pt', models.CharField(blank=True, max_length=255, null=True)),
('statistic_6_smallprint_ar', models.CharField(blank=True, max_length=255, null=True)),
('section_two_heading', models.CharField(blank=True, max_length=255, verbose_name='Spotlight')),
('section_two_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight')),
('section_two_teaser', models.TextField(blank=True, verbose_name='Spotlight summary')),
('section_two_teaser_en_gb', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_de', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_ja', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_zh_hans', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_fr', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_es', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_pt', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_teaser_ar', models.TextField(blank=True, null=True, verbose_name='Spotlight summary')),
('section_two_subsection_one_heading', models.CharField(blank=True, max_length=255, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 1 heading')),
('section_two_subsection_one_body', models.TextField(blank=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_en_gb', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_de', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_ja', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_zh_hans', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_fr', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_es', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_pt', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_one_body_ar', models.TextField(blank=True, null=True, verbose_name='Spotlight 1 body')),
('section_two_subsection_two_heading', models.CharField(blank=True, max_length=255, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 2 heading')),
('section_two_subsection_two_body', models.TextField(blank=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_en_gb', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_de', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_ja', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_zh_hans', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_fr', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_es', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_pt', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_two_body_ar', models.TextField(blank=True, null=True, verbose_name='Spotlight 2 body')),
('section_two_subsection_three_heading', models.CharField(blank=True, max_length=255, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Spotlight 3 heading')),
('section_two_subsection_three_body', models.TextField(blank=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_en_gb', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_de', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_ja', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_zh_hans', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_fr', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_es', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_pt', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('section_two_subsection_three_body_ar', models.TextField(blank=True, null=True, verbose_name='Spotlight 3 body')),
('case_study_title', models.CharField(blank=True, max_length=255)),
('case_study_title_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_de', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_ja', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_fr', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_es', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_pt', models.CharField(blank=True, max_length=255, null=True)),
('case_study_title_ar', models.CharField(blank=True, max_length=255, null=True)),
('case_study_description', models.TextField(blank=True)),
('case_study_description_en_gb', models.TextField(blank=True, null=True)),
('case_study_description_de', models.TextField(blank=True, null=True)),
('case_study_description_ja', models.TextField(blank=True, null=True)),
('case_study_description_zh_hans', models.TextField(blank=True, null=True)),
('case_study_description_fr', models.TextField(blank=True, null=True)),
('case_study_description_es', models.TextField(blank=True, null=True)),
('case_study_description_pt', models.TextField(blank=True, null=True)),
('case_study_description_ar', models.TextField(blank=True, null=True)),
('case_study_cta_text', models.TextField(blank=True, verbose_name='Case study link text')),
('case_study_cta_text_en_gb', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_de', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_ja', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_zh_hans', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_fr', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_es', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_pt', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('case_study_cta_text_ar', models.TextField(blank=True, null=True, verbose_name='Case study link text')),
('section_three_heading', models.CharField(blank=True, max_length=255, verbose_name='Fact sheets heading')),
('section_three_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheets heading')),
('section_three_teaser', models.TextField(blank=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_en_gb', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_de', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_ja', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_zh_hans', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_fr', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_es', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_pt', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_teaser_ar', models.TextField(blank=True, null=True, verbose_name='Fact sheets teaser')),
('section_three_subsection_one_heading', models.CharField(blank=True, max_length=255, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 1 heading')),
('section_three_subsection_one_teaser', models.TextField(blank=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_en_gb', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_de', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_ja', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_zh_hans', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_fr', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_es', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_pt', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_teaser_ar', models.TextField(blank=True, null=True, verbose_name='Fact sheet 1 teaser')),
('section_three_subsection_one_body', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_en_gb', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_de', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_ja', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_zh_hans', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_fr', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_es', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_pt', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_one_body_ar', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 1 body')),
('section_three_subsection_two_heading', models.CharField(blank=True, max_length=255, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_en_gb', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_de', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_ja', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_zh_hans', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_fr', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_es', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_pt', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_heading_ar', models.CharField(blank=True, max_length=255, null=True, verbose_name='Fact sheet 2 heading')),
('section_three_subsection_two_teaser', models.TextField(blank=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_en_gb', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_de', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_ja', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_zh_hans', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_fr', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_es', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_pt', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_teaser_ar', models.TextField(blank=True, null=True, verbose_name='Fact sheet 2 teaser')),
('section_three_subsection_two_body', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_en_gb', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_de', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_ja', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_zh_hans', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_fr', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_es', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_pt', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('section_three_subsection_two_body_ar', core.model_fields.MarkdownField(blank=True, null=True, validators=[core.validators.slug_hyperlinks], verbose_name='Fact sheet 2 body')),
('project_opportunities_title', models.CharField(blank=True, max_length=255)),
('project_opportunities_title_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_de', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_ja', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_fr', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_es', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_pt', models.CharField(blank=True, max_length=255, null=True)),
('project_opportunities_title_ar', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text', models.CharField(blank=True, max_length=255)),
('related_opportunities_cta_text_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_de', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_ja', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_fr', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_es', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_pt', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_text_ar', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link', models.CharField(blank=True, max_length=255)),
('related_opportunities_cta_link_en_gb', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_de', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_ja', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_zh_hans', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_fr', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_es', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_pt', models.CharField(blank=True, max_length=255, null=True)),
('related_opportunities_cta_link_ar', models.CharField(blank=True, max_length=255, null=True)),
('case_study_cta_page', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_cta_page_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page', verbose_name='Case study link URL')),
('case_study_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('case_study_image_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_ar', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_de', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_en_gb', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_es', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_fr', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_ja', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_pt', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('hero_image_zh_hans', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('related_page_one', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_one_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_three_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('related_page_two_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page')),
('section_one_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_one_image_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Image for unique selling points')),
('section_two_subsection_one_icon', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_one_icon_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 1 icon')),
('section_two_subsection_three_icon', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_three_icon_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 3 icon')),
('section_two_subsection_two_icon', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_ar', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_de', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_en_gb', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_es', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_fr', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_ja', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_pt', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('section_two_subsection_two_icon_zh_hans', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image', verbose_name='Spotlight 2 icon')),
('tags', modelcluster.fields.ParentalManyToManyField(blank=True, to='export_readiness.Tag')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page', great_international.panels.great_international.BaseInternationalSectorPagePanels),
),
migrations.CreateModel(
name='CapitalInvestRelatedSubSectors',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('page', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_ar', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_de', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_en_gb', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_es', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_fr', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_ja', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_pt', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('page_zh_hans', modelcluster.fields.ParentalKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='related_sub_sectors', to='great_international.CapitalInvestOpportunityPage')),
('related_sub_sector', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='great_international.InternationalSubSectorPage')),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
]
| 128.241071 | 285 | 0.704324 | 9,217 | 71,815 | 5.18238 | 0.019421 | 0.093832 | 0.072353 | 0.140183 | 0.967006 | 0.962756 | 0.960286 | 0.958255 | 0.943977 | 0.916405 | 0 | 0.020254 | 0.161248 | 71,815 | 559 | 286 | 128.470483 | 0.77274 | 0.000627 | 0 | 0.0217 | 1 | 0.001808 | 0.29208 | 0.174562 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.01085 | 0 | 0.016275 | 0.097649 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7b0ffc813113ffbeeca98ab0b5134d769a1c489f | 98 | py | Python | neuroBN/inference/map_exact/__init__.py | Centiment-io/neuroBN | 0863efd03f5cc79a2084efcc592d34969c16d4a4 | [
"Apache-2.0"
] | 1 | 2018-09-04T09:32:07.000Z | 2018-09-04T09:32:07.000Z | neuroBN/inference/map_exact/__init__.py | Centiment-io/neuroBN | 0863efd03f5cc79a2084efcc592d34969c16d4a4 | [
"Apache-2.0"
] | null | null | null | neuroBN/inference/map_exact/__init__.py | Centiment-io/neuroBN | 0863efd03f5cc79a2084efcc592d34969c16d4a4 | [
"Apache-2.0"
] | 2 | 2019-10-03T21:23:09.000Z | 2020-03-21T11:12:56.000Z | from neuroBN.inference.map_exact.ilp_map import *
from neuroBN.inference.map_exact.ve_map import * | 49 | 49 | 0.846939 | 16 | 98 | 4.9375 | 0.5 | 0.278481 | 0.506329 | 0.582278 | 0.708861 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 98 | 2 | 50 | 49 | 0.868132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
7b2fc71f61ed14ab76058fa7d5ea0cd6905d3377 | 96 | py | Python | naplib/io/__init__.py | gavinmischler/naplib-python | 8cd7a0fc700f1c07243169ec42fc087955885adc | [
"MIT"
] | 1 | 2022-03-02T20:54:23.000Z | 2022-03-02T20:54:23.000Z | naplib/io/__init__.py | gavinmischler/gavlib | cacf9180b1442e4aed98b6182d586747a6d6ef90 | [
"MIT"
] | null | null | null | naplib/io/__init__.py | gavinmischler/gavlib | cacf9180b1442e4aed98b6182d586747a6d6ef90 | [
"MIT"
] | null | null | null | from .fileio import load, save, import_outstruct
__all__ = ['load','save','import_outstruct']
| 19.2 | 48 | 0.739583 | 12 | 96 | 5.416667 | 0.583333 | 0.246154 | 0.430769 | 0.707692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 96 | 4 | 49 | 24 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.252632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9e5bc86bff1888faa8a491e73ffebbdbc46f2de0 | 6,139 | py | Python | drsa/functions.py | collinprather/DRSA-PyTorch | 071148fa81188dd02793ccd90c7812a3f53bbf8b | [
"Apache-2.0"
] | 9 | 2020-06-18T22:06:20.000Z | 2022-03-07T12:02:19.000Z | drsa/functions.py | collinprather/DRSA-PyTorch | 071148fa81188dd02793ccd90c7812a3f53bbf8b | [
"Apache-2.0"
] | 3 | 2020-05-25T18:30:07.000Z | 2021-09-28T02:52:46.000Z | drsa/functions.py | collinprather/DRSA-PyTorch | 071148fa81188dd02793ccd90c7812a3f53bbf8b | [
"Apache-2.0"
] | 2 | 2020-06-06T08:16:36.000Z | 2020-09-11T07:33:46.000Z | # AUTOGENERATED! DO NOT EDIT! File to edit: notebooks/00_functions.ipynb (unless otherwise specified).
__all__ = ['survival_rate', 'event_rate', 'event_time', 'log_survival_rate', 'log_event_rate', 'log_event_time',
'event_time_loss', 'event_rate_loss']
# Cell
import torch
# Internal Cell
def assert_correct_input_shape(h):
if len(h.shape) != 3:
raise ValueError(f"h is of shape {h.shape}. It is expected that h is of shape (batch size, sequence_length, 1), as this is most amenable to use in training neural nets with pytorch.")
def assert_correct_output_shape(q, batch_size):
if q.shape != torch.Size([batch_size, 1]):
raise ValueError(f"q is of shape {q.shape}. It is expected that q is of shape (batch_size, 1)")
# Cell
def survival_rate(h):
"""
Given the predicted conditional hazard rate, this function estimates
the survival rate.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `s`:
- type: `torch.tensor`
- estimated survival rate at time t.
- note: `s.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
s = (1-h).prod(dim=1)
return s
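As a quick sanity check (illustrative, not part of the original module), the survival rate is the running product of one minus the conditional hazard over time steps; a minimal pure-Python mirror of the torch expression `(1-h).prod(dim=1)` for a single sequence:

```python
# Pure-Python sketch mirroring survival_rate for one sequence of hazards.
# Assumes `hazards` is a list of conditional hazard rates h_t in [0, 1];
# the torch version computes the same product along dim=1.
def survival_rate_scalar(hazards):
    s = 1.0
    for h_t in hazards:
        s *= (1.0 - h_t)
    return s

# With hazards [0.1, 0.2], the survival rate is 0.9 * 0.8 = 0.72.
print(survival_rate_scalar([0.1, 0.2]))
```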
# Cell
def event_rate(h):
"""
Given the predicted conditional hazard rate, this function estimates
the event rate.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `w`:
- type: `torch.tensor`
- estimated event rate at time t.
- note: `w.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
w = 1-survival_rate(h)
return w
# Cell
def event_time(h):
"""
Given the predicted conditional hazard rate, this function estimates
the probability that the event occurs at time t.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `p`:
- type: `torch.tensor`
- estimated probability of event at time t.
- note: `p.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
p = h[:, -1, :] * survival_rate(h[:, :-1, :])
return p
# Cell
def log_survival_rate(h):
"""
Given the predicted conditional hazard rate, this function estimates
the log survival rate.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `s`:
- type: `torch.tensor`
- estimated log survival rate at time t.
- note: `s.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
s = (1-h).log().sum(dim=1)
return s
# Cell
def log_event_rate(h):
"""
Given the predicted conditional hazard rate, this function estimates
the log event rate.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `w`:
- type: `torch.tensor`
- estimated log event rate at time t.
- note: `w.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
# w = event_rate(h).log() # numerically unstable, darn probabilities
w = (1 - log_survival_rate(h).exp()).log() # numerically stable
return w
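The stable branch above avoids taking the log of a probability that may underflow; when survival is close to one the event rate is tiny and `event_rate(h).log()` loses precision. A small illustration of the same idea using the log survival rate directly (a slightly different but equivalent formulation via `log1p`; not part of the original module):

```python
import math

# Given the log survival rate log_s, the log event rate is
# log(1 - exp(log_s)); log1p keeps this accurate for small arguments.
def log_event_rate_from_log_survival(log_s):
    return math.log1p(-math.exp(log_s))

# With hazards [0.1, 0.2]: survival = 0.9 * 0.8 = 0.72, event rate = 0.28.
log_s = math.log(0.9) + math.log(0.8)
print(log_event_rate_from_log_survival(log_s))  # ~ log(0.28)
```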
# Cell
def log_event_time(h):
"""
Given the predicted conditional hazard rate, this function estimates
the log probability that the event occurs at time t.
*input*:
* `h`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch size, 1, 1)`, as this is most amenable to use in training neural nets with pytorch.
_output_:
* `p`:
- type: `torch.tensor`
- estimated log probability of event at time t.
- note: `p.shape == (batch_size, 1)`
"""
assert_correct_input_shape(h)
p = torch.log(h[:, -1, :]) + log_survival_rate(h[:, :-1, :])
return p
# Cell
def event_time_loss(input, target=None):
"""
Loss function applied to uncensored data in order
to optimize the PDF of the true event time, z
input:
* `input`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`
* `target`:
- unused, only present to mimic pytorch loss functions
output:
* `evt_loss`:
- type: `torch.tensor`
- Loss associated with how wrong each predicted probability was at each time step
"""
assert_correct_input_shape(input)
evt_loss = -log_event_time(input).mean(dim=0).squeeze()
return evt_loss
# Cell
def event_rate_loss(input, target=None):
"""
Loss function applied to uncensored data in order
to optimize the CDF of the true event time, z
input:
* `input`:
- type: `torch.tensor`,
- predicted conditional hazard rate, at each observed time step.
- note: `h.shape == (batch_size, sequence_length, 1)`
* `target`:
- unused, only present to mimic pytorch loss functions
output:
* `evr_loss`:
- type: `torch.tensor`
- Loss associated with how cumulative predicted probabilities differ from the ground truth labels.
"""
assert_correct_input_shape(input)
evr_loss = -log_event_rate(input).mean(dim=0).squeeze()
return evr_loss | 30.093137 | 191 | 0.621437 | 836 | 6,139 | 4.44378 | 0.143541 | 0.043607 | 0.060296 | 0.060565 | 0.810229 | 0.761238 | 0.73755 | 0.73755 | 0.700942 | 0.700942 | 0 | 0.008863 | 0.264864 | 6,139 | 204 | 192 | 30.093137 | 0.814314 | 0.646034 | 0 | 0.341463 | 1 | 0.04878 | 0.204762 | 0 | 0 | 0 | 0 | 0 | 0.243902 | 1 | 0.243902 | false | 0 | 0.02439 | 0 | 0.463415 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9e61a896c0cce22507cb60f8634873dd1d870de3 | 154 | py | Python | whitenoise/generators/__init__.py | James1345/white-noise | 25bad49b69950a59660e14292a1b1819a4e26bd2 | [
"MIT"
] | 4 | 2019-01-11T17:05:09.000Z | 2022-03-05T19:57:22.000Z | whitenoise/generators/__init__.py | James1345/white-noise | 25bad49b69950a59660e14292a1b1819a4e26bd2 | [
"MIT"
] | 2 | 2019-07-02T23:14:46.000Z | 2019-07-12T00:30:41.000Z | whitenoise/generators/__init__.py | James1345/white-noise | 25bad49b69950a59660e14292a1b1819a4e26bd2 | [
"MIT"
] | 1 | 2019-07-02T21:57:06.000Z | 2019-07-02T21:57:06.000Z | import inspect
from whitenoise.generators.simple import *
from whitenoise.generators.list import *
from whitenoise.generators.generator import generator
| 25.666667 | 53 | 0.850649 | 18 | 154 | 7.277778 | 0.444444 | 0.320611 | 0.549618 | 0.458015 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097403 | 154 | 5 | 54 | 30.8 | 0.942446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
9ea475e2ccfc0b24631364f9707359ed84b7cb3d | 14,009 | py | Python | swagger_client/api/project_api.py | radon-h2020/radon-ctt-cli | 3120b748c73e99d81d0cac5037e393229577d640 | [
"Apache-2.0"
] | null | null | null | swagger_client/api/project_api.py | radon-h2020/radon-ctt-cli | 3120b748c73e99d81d0cac5037e393229577d640 | [
"Apache-2.0"
] | null | null | null | swagger_client/api/project_api.py | radon-h2020/radon-ctt-cli | 3120b748c73e99d81d0cac5037e393229577d640 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
RADON CTT Server API
This is the API of the RADON Continuous Testing Tool (CTT) Server: <a href=\"https://github.com/radon-h2020/radon-ctt\">https://github.com/radon-h2020/radon-ctt<a/> # noqa: E501
OpenAPI spec version: 1.0.0-oas3
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class ProjectApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_project(self, **kwargs): # noqa: E501
"""Creates a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_project(async_req=True)
>>> result = thread.get()
:param async_req bool
:param POSTProject body:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_project_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.create_project_with_http_info(**kwargs) # noqa: E501
return data
def create_project_with_http_info(self, **kwargs): # noqa: E501
"""Creates a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_project_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param POSTProject body:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_project" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/project', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_project(self, project_uuid, **kwargs): # noqa: E501
"""Delete a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_project(project_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str project_uuid: UUID of the project to delete (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_project_with_http_info(project_uuid, **kwargs) # noqa: E501
else:
(data) = self.delete_project_with_http_info(project_uuid, **kwargs) # noqa: E501
return data
def delete_project_with_http_info(self, project_uuid, **kwargs): # noqa: E501
"""Delete a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_project_with_http_info(project_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str project_uuid: UUID of the project to delete (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_uuid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_uuid' is set
if ('project_uuid' not in params or
params['project_uuid'] is None):
raise ValueError("Missing the required parameter `project_uuid` when calling `delete_project`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_uuid' in params:
path_params['project_uuid'] = params['project_uuid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/project/{project_uuid}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_project_by_uuid(self, project_uuid, **kwargs): # noqa: E501
"""Retrieve a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_project_by_uuid(project_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str project_uuid: UUID of the project to return (required)
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_project_by_uuid_with_http_info(project_uuid, **kwargs) # noqa: E501
else:
(data) = self.get_project_by_uuid_with_http_info(project_uuid, **kwargs) # noqa: E501
return data
def get_project_by_uuid_with_http_info(self, project_uuid, **kwargs): # noqa: E501
"""Retrieve a project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_project_by_uuid_with_http_info(project_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str project_uuid: UUID of the project to return (required)
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_uuid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_project_by_uuid" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_uuid' is set
if ('project_uuid' not in params or
params['project_uuid'] is None):
raise ValueError("Missing the required parameter `project_uuid` when calling `get_project_by_uuid`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_uuid' in params:
path_params['project_uuid'] = params['project_uuid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/project/{project_uuid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_projects(self, **kwargs): # noqa: E501
"""Get a list of all projects # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_projects(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[Project]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_projects_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_projects_with_http_info(**kwargs) # noqa: E501
return data
def get_projects_with_http_info(self, **kwargs): # noqa: E501
"""Get a list of all projects # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_projects_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[Project]
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_projects" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/project', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Project]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 35.920513 | 178 | 0.602113 | 1,616 | 14,009 | 4.952351 | 0.097153 | 0.044983 | 0.02799 | 0.035987 | 0.916781 | 0.907785 | 0.907785 | 0.884918 | 0.883169 | 0.855304 | 0 | 0.015786 | 0.308159 | 14,009 | 389 | 179 | 36.012853 | 0.809946 | 0.319652 | 0 | 0.777778 | 0 | 0 | 0.16157 | 0.037768 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.019324 | 0 | 0.125604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7b6132d882f953bc7ad7edac798ce0e0d987d7b7 | 11,696 | py | Python | src/models/kernels.py | mfkiwl/precise_gps | e30c6355447424cb69549feb85c9393b10eae7aa | [
"MIT"
] | null | null | null | src/models/kernels.py | mfkiwl/precise_gps | e30c6355447424cb69549feb85c9393b10eae7aa | [
"MIT"
] | null | null | null | src/models/kernels.py | mfkiwl/precise_gps | e30c6355447424cb69549feb85c9393b10eae7aa | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
import gpflow
import tensorflow_probability as tfp
from src.models.initialization import *
from src.models.base_kernel import BaseKernel
class ARD(BaseKernel, gpflow.kernels.Kernel):
"""
Own implementation of the squared exponential kernel with ard
property. Should work the same way as
gpflow.kernels.SquaredExponential(ARD = True). Lengthscales and
variance can be randomized. This should be handled when initializing
the kernel.
Args:
variance (float) : kernel variance which scales the whole kernel
lengthscales (numpy array) : list of lengthscales
(should match the dimension of the input)
"""
def __init__(self, **kwargs):
super().__init__()
randomized = kwargs["randomized"]
dim = kwargs["dim"]
if not randomized:
lengthscales = np.ones(dim)
variance = 1.0
else:
lengthscales = np.random.uniform(0.5,2,dim)
variance = 1.0
self.variance = gpflow.Parameter(
variance, transform = gpflow.utilities.positive())
self.lengthscales = gpflow.Parameter(
lengthscales, transform = gpflow.utilities.positive())
self.dim = dim
def K_diag(self, X) -> tf.Tensor:
"""
Returns the diagonal vector when X1 == X2
(used in the background of gpflow)
"""
return self.variance * tf.ones_like(X[:,0])
def K(self, X1, X2=None) -> tf.Tensor:
"""
Returns the squared exponential ard kernel.
Args:
X1 (numpy array) : shaped N x D
X2 (numpy array) : shaped M x D
(D denotes the number of dimensions of the input)
"""
if X2 is None:
X2 = X1
# Precision matrix (note: the code squares the parameters directly,
# so self.lengthscales effectively act as inverse lengthscales here)
P = tf.linalg.diag(self.lengthscales**2)
X11 = tf.squeeze(
tf.expand_dims(X1,axis = 1) @ P @ tf.expand_dims(X1,axis = -1),-1)
X22 = tf.transpose(
tf.squeeze(
tf.expand_dims(X2,axis = 1) @ P @ tf.expand_dims(X2,axis = -1),
-1))
X12 = X1 @ P @ tf.transpose(X2)
K = self.variance * tf.exp(-0.5 * (X11 - 2*X12 + X22))
return K
def precision(self) -> tf.Tensor:
return tf.linalg.diag(self.lengthscales**(2))
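The `K` method above computes the kernel via the expansion (x1 - x2)^T P (x1 - x2) = x1^T P x1 - 2 x1^T P x2 + x2^T P x2 rather than forming pairwise differences. A minimal scalar check of that identity (illustrative only; `p` plays the role of a single diagonal precision entry):

```python
import math

# Direct form: variance * exp(-0.5 * p * (x1 - x2)^2).
def sq_exp_direct(x1, x2, p, variance=1.0):
    return variance * math.exp(-0.5 * p * (x1 - x2) ** 2)

# Expanded form matching ARD.K: x11 - 2*x12 + x22 with x11 = p*x1^2, etc.
def sq_exp_expanded(x1, x2, p, variance=1.0):
    quad = p * x1 * x1 - 2 * p * x1 * x2 + p * x2 * x2
    return variance * math.exp(-0.5 * quad)

print(sq_exp_direct(1.0, 3.0, 0.25), sq_exp_expanded(1.0, 3.0, 0.25))
```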
class ARD_gpflow(BaseKernel, gpflow.kernels.SquaredExponential):
def __init__(self, **kwargs):
randomized = kwargs["randomized"]
dim = kwargs["dim"]
if not randomized:
lengthscales = np.ones(dim)
variance = 1.0
else:
lengthscales = np.random.uniform(0.5,3,dim)
variance = 1.0
super().__init__(variance, lengthscales)
def precision(self) -> tf.Tensor:
return tf.linalg.diag(self.lengthscales**(-2))
class FullGaussianKernel(BaseKernel, gpflow.kernels.Kernel):
"""
Implementation of the full Gaussian kernel which introduces also the
off-diagonal covariates of the precision matrix. Randomizing the
initialization should be handled outside of this class.
Args:
variance (float) : signal variance which scales the whole kernel
L (numpy array) : vector representation of L, where LL^T = P :
precision
"""
def __init__(self, **kwargs):
super().__init__()
randomized = kwargs["randomized"]
dim = kwargs["dim"]
if not randomized:
L = np.ones((dim*(dim+1))//2)
variance = 1.0
else:
L = init_precision(dim)
variance = 1.0
self.variance = gpflow.Parameter(
variance, transform = gpflow.utilities.positive())
self.L = gpflow.Parameter(L)
self.dim = dim
def K_diag(self, X) -> tf.Tensor:
"""
Returns the diagonal vector when X1 == X2
(used in the background of gpflow)
"""
return self.variance * tf.ones_like(X[:,0])
def K(self, X1, X2=None) -> tf.Tensor:
"""
Returns the full Gaussian kernel.
Args:
X1 (numpy array) : shaped N x D
X2 (numpy array) : shaped M x D
(D denotes the number of dimensions of the input)
"""
if X2 is None:
X2 = X1
#L = tfp.math.fill_triangular(self.L) # matrix representation of L
#A = X1 @ L
#B = X2 @ L
P = self.precision()
X11 = tf.squeeze(
tf.expand_dims(X1,axis = 1) @ P @ tf.expand_dims(X1,axis = -1),-1)
X22 = tf.transpose(
tf.squeeze(
tf.expand_dims(X2,axis = 1) @ P @ tf.expand_dims(X2,axis = -1),
-1))
X12 = X1 @ P @ tf.transpose(X2)
# kernel (N,1) - (N,M) + (1,M)
K = self.variance*tf.exp(-0.5 * (X11 - 2*X12 + X22))
return K
def precision(self) -> tf.Tensor:
L = tfp.math.fill_triangular(self.L)
return L@tf.transpose(L)
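The precision here is parameterized through a packed triangular vector, which guarantees that P = L L^T is symmetric positive semi-definite for any parameter values. A small dependency-free sketch of the idea (note: this simple row-major fill is only an illustration and does not reproduce the exact packing order used by `tfp.math.fill_triangular`):

```python
# Unpack a length-d*(d+1)//2 vector into a lower-triangular matrix
# (row-major order; tfp.math.fill_triangular uses its own ordering),
# then form P = L @ L.T, which is symmetric by construction.
def fill_lower_triangular(vec, d):
    L, k = [[0.0] * d for _ in range(d)], 0
    for i in range(d):
        for j in range(i + 1):
            L[i][j] = vec[k]
            k += 1
    return L

def matmul_lt(L):
    d = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(d)) for j in range(d)]
            for i in range(d)]

L = fill_lower_triangular([1.0, 0.5, 2.0], 2)  # [[1.0, 0.0], [0.5, 2.0]]
P = matmul_lt(L)                                # [[1.0, 0.5], [0.5, 4.25]]
```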
class LowRankFullGaussianKernel(BaseKernel, gpflow.kernels.Kernel):
"""
Implementation of a low-rank variant of the full Gaussian kernel, which
also introduces off-diagonal entries of the precision matrix through a
rank-restricted factor. Randomizing the
initialization should be handled outside of this class.
Args:
variance (float) : signal variance which scales the whole kernel
L (numpy array) : vector representation of L, where LL^T = P :
precision
"""
def __init__(self, **kwargs):
super().__init__()
randomized = kwargs["randomized"]
dim = kwargs["dim"]
rank = kwargs["rank"]
if not randomized:
L = np.ones((dim*(dim+1))//2)
variance = 1.0
else:
L = init_lowrank_precision(dim, rank)
variance = 1.0
self.length = L.shape[0]
self.variance = gpflow.Parameter(
variance, transform = gpflow.utilities.positive())
self.L = gpflow.Parameter(L)
self.rank = rank
def K_diag(self, X) -> tf.Tensor:
"""
Returns the diagonal vector when X1 == X2
(used in the background of gpflow)
"""
return self.variance * tf.ones_like(X[:,0])
def K(self, X1, X2=None) -> tf.Tensor:
"""
Returns the full Gaussian kernel.
Args:
X1 (numpy array) : shaped N x D
X2 (numpy array) : shaped M x D
(D denotes the number of dimensions of the input)
"""
if X2 is None:
X2 = X1
P = self.precision()
X11 = tf.squeeze(
tf.expand_dims(X1,axis = 1) @ P @ tf.expand_dims(X1,axis = -1),-1)
X22 = tf.transpose(
tf.squeeze(
tf.expand_dims(X2,axis = 1) @ P @ tf.expand_dims(X2,axis = -1),
-1))
X12 = X1 @ P @ tf.transpose(X2)
K = self.variance * tf.exp(-0.5 * (X11 - 2*X12 + X22))
return K
def precision(self) -> tf.Tensor:
L = fill_lowrank_triangular(self.L, self.rank, self.length)
return tf.transpose(L)@L
class SGHMC_Full(BaseKernel, gpflow.kernels.Kernel):
"""
Implementation of the full Gaussian kernel which introduces also the
off-diagonal entries of the precision matrix. Randomizing the
initialization should be handled outside of this class.
Args:
variance (float) : signal variance which scales the whole kernel
L (numpy array) : vector representation of L, where LL^T = P :
precision
"""
def __init__(self, **kwargs):
super().__init__()
randomized = kwargs["randomized"]
dim = kwargs["dim"]
if not randomized:
L = np.ones((dim*(dim+1))//2)
variance = 0.0
else:
L = init_precision(dim, "wishart")
variance = np.random.randn()
self.variance = tf.Variable(variance, dtype = tf.float64,
trainable = True)
self.L = tf.Variable(L, dtype = tf.float64, trainable = False)
self.dim = dim
def K_diag(self, X) -> tf.Tensor:
"""
Returns the diagonal vector when X1 == X2 (used in the background of gpflow)
"""
return tf.exp(self.variance) * tf.ones_like(X[:,0])
def K(self, X1, X2=None) -> tf.Tensor:
"""
Returns the full Gaussian kernel.
Args:
X1 (numpy array) : shaped N x D
X2 (numpy array) : shaped M x D (D denotes the number of dimensions of the input)
"""
if X2 is None:
X2 = X1
L = tfp.math.fill_triangular(self.L) # matrix representation of L
A = X1 @ L
B = X2 @ L
X11 = tf.squeeze(
tf.expand_dims(A, axis = 1) @ tf.expand_dims(A, axis = -1),
axis = -1) # (N, 1)
X22 = tf.transpose(
tf.squeeze(
tf.expand_dims(B, axis = 1) @ tf.expand_dims(B, axis = -1),
axis = -1)) # (1,M)
X12 = A @ tf.transpose(B) # (N,M)
K = tf.exp(self.variance)*tf.exp(-0.5 * (X11 - 2*X12 + X22))
return K
def precision(self) -> tf.Tensor:
L = tfp.math.fill_triangular(self.L)
return L@tf.transpose(L)
class SGHMC_ARD(BaseKernel, gpflow.kernels.Kernel):
    """
    Own implementation of the squared exponential kernel with ARD
    property. Should work the same way as
    gpflow.kernels.SquaredExponential(ARD=True). Lengthscales and
    variance can be randomized. This should be handled when initializing
    the kernel.

    Args:
        variance (float) : kernel variance which scales the whole kernel
        lengthscales (numpy array) : list of lengthscales
            (should match the dimension of the input)
    """
    def __init__(self, **kwargs):
        super().__init__()
        randomized = kwargs["randomized"]
        dim = kwargs["dim"]
        if not randomized:
            L = np.ones(dim)
            variance = 0.0
        else:
            L = np.random.randn(dim)
            variance = np.random.randn()
        self.variance = tf.Variable(variance, dtype=tf.float64,
                                    trainable=True)
        self.L = tf.Variable(L, dtype=tf.float64, trainable=False)
        self.dim = dim

    def K_diag(self, X) -> tf.Tensor:
        """
        Returns the diagonal vector when X1 == X2
        (used in the background of gpflow).
        """
        return tf.exp(self.variance) * tf.ones_like(X[:, 0])

    def K(self, X1, X2=None) -> tf.Tensor:
        """
        Returns the squared exponential ARD kernel.

        Args:
            X1 (numpy array) : shaped N x D
            X2 (numpy array) : shaped M x D
                (D denotes the number of dimensions of the input)
        """
        if X2 is None:
            X2 = X1
        # Precision is the inverse squared of the lengthscales
        P = tf.linalg.diag(self.L ** 2)
        X11 = tf.squeeze(
            tf.expand_dims(X1, axis=1) @ P @ tf.expand_dims(X1, axis=-1), -1)
        X22 = tf.transpose(
            tf.squeeze(
                tf.expand_dims(X2, axis=1) @ P @ tf.expand_dims(X2, axis=-1),
                -1))  # (1, M)
        X12 = X1 @ P @ tf.transpose(X2)  # (N, M)
        # kernel (N, 1) - (N, M) + (1, M)
        K = tf.exp(self.variance) * tf.exp(-0.5 * (X11 - 2 * X12 + X22))
        return K

    def precision(self) -> tf.Tensor:
        return tf.linalg.diag(self.L ** 2)
| 32.579387 | 93 | 0.549162 | 1,465 | 11,696 | 4.320819 | 0.098976 | 0.017378 | 0.037915 | 0.028436 | 0.909479 | 0.901896 | 0.880253 | 0.880253 | 0.876619 | 0.870932 | 0 | 0.030606 | 0.340715 | 11,696 | 359 | 94 | 32.579387 | 0.7903 | 0.2954 | 0 | 0.792553 | 0 | 0 | 0.011706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117021 | false | 0 | 0.031915 | 0.015957 | 0.265957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
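The quadratic-form trick used by both `K` methods above — expanding the pairwise squared Mahalanobis distance as `X11 - 2*X12 + X22` — can be checked against the direct definition k(x, y) = e^v · exp(-0.5 (x-y)ᵀ P (x-y)) with P = L Lᵀ. A minimal NumPy sketch (standalone; the function name and test values are illustrative, not part of the original code):

```python
import numpy as np

def full_gaussian_kernel(X1, X2, L, variance):
    # Vectorized Gaussian kernel with dense precision P = L @ L.T,
    # mirroring the X11 - 2*X12 + X22 expansion used above.
    A = X1 @ L
    B = X2 @ L
    X11 = np.sum(A * A, axis=1, keepdims=True)    # (N, 1)
    X22 = np.sum(B * B, axis=1, keepdims=True).T  # (1, M)
    X12 = A @ B.T                                 # (N, M)
    return np.exp(variance) * np.exp(-0.5 * (X11 - 2 * X12 + X22))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
L = np.tril(rng.normal(size=(3, 3)))  # lower-triangular precision factor
K = full_gaussian_kernel(X, X, L, variance=0.1)

# Pairwise check against the quadratic-form definition
P = L @ L.T
d = X[1] - X[3]
assert np.allclose(K[1, 3], np.exp(0.1) * np.exp(-0.5 * d @ P @ d))
# On the diagonal the distance is zero, so k(x, x) = e^v — exactly what K_diag returns
assert np.allclose(np.diag(K), np.exp(0.1))
```

The same identity explains why `K_diag` can skip the matrix products entirely and just return `exp(variance)` per row.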
7bc05932cc891fe4e995171e9505ee992063abe8 | 9,315 | py | Python | src/tf_transformers/utils/convert/convert_mt5.py | s4sarath/tf-transformers | 361f7b01c7816034ddfc8661f8b6a967835bc1de | [
"Apache-2.0"
] | 2 | 2021-03-31T17:48:16.000Z | 2021-08-22T11:52:19.000Z | src/tf_transformers/utils/convert/convert_mt5.py | Vibha111094/tf-transformers | f26d440a4de0557e0e481279bfd70a732aaa8825 | [
"Apache-2.0"
] | null | null | null | src/tf_transformers/utils/convert/convert_mt5.py | Vibha111094/tf-transformers | f26d440a4de0557e0e481279bfd70a732aaa8825 | [
"Apache-2.0"
] | null | null | null |
import tensorflow as tf
from absl import logging

logging.set_verbosity("INFO")


def convert_mt5_hf_to_tf_transformers(model_hf, model_tf_transformers, config):
    # Encoder Side
    # From vars (Transformer variables)
    from_model_vars = [
        "tfm_t5model/encoder/block_._{}/layer_._0/SelfAttention/q/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._0/SelfAttention/k/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._0/SelfAttention/v/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._0/SelfAttention/o/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._0/layer_norm/weight:0",
        "tfm_t5model/encoder/block_._{}/layer_._1/DenseReluDense/wi_0/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._1/DenseReluDense/wo/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._1/DenseReluDense/wi_1/kernel:0",
        "tfm_t5model/encoder/block_._{}/layer_._1/layer_norm/weight:0",
    ]
    to_model_vars = [
        "tf_transformers/mt5_encoder/transformer/layer_{}/self_attention/query/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/self_attention/key/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/self_attention/value/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/self_attention_output/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/pre_attention_norm/weight:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/intermediate/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/output/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/intermediate2/kernel:0",
        "tf_transformers/mt5_encoder/transformer/layer_{}/self_attention_layer_norm/weight:0",
    ]
    # Simple Assertion
    # assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)
    # Only Layer 0
    mapping_dict[
        "tfm_t5model/encoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0"
    ] = "tf_transformers/mt5_encoder/transformer/layer_0/self_attention/relative_attention_bias/embeddings:0"
    # Word Embedding
    mapping_dict["shared/shared/weight:0"] = "tf_transformers/mt5_encoder/word_embeddings/embeddings:0"
    # Final Layer Norm weight
    mapping_dict[
        "tfm_t5model/encoder/final_layer_norm/weight:0"
    ] = "tf_transformers/mt5_encoder/last_layer_norm/weight:0"

    from_to_variable_dict = {var.name: var for var in model_hf.variables}
    # del model_hf
    logging.info("Deleting huggingface model for saving memory")

    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model_tf_transformers.variables):
        tf_transformers_model_index_dict[var.name] = index

    # legacy_ai <-- hub
    assigned_map = []
    assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        # If not in mapping_dict, then mostly it is from attention layer
        if "query/kernel:0" in legacy_var or "key/kernel:0" in legacy_var or "value/kernel:0" in legacy_var:
            # hub (2D) to tf_transformers (3D)
            model_tf_transformers.variables[index].assign(
                tf.reshape(
                    from_to_variable_dict.get(original_var),
                    (
                        config["embedding_size"],
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            continue
        model_tf_transformers.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))
    logging.info("Done assigning ENCODER variables weights {}".format(len(assigned_map)))

    # Decoder Side
    # From vars (Transformer variables)
    from_model_vars = [
        "tfm_t5model/decoder/block_._{}/layer_._0/SelfAttention/q/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._0/SelfAttention/k/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._0/SelfAttention/v/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._0/SelfAttention/o/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._0/layer_norm/weight:0",
        "tfm_t5model/decoder/block_._{}/layer_._1/EncDecAttention/q/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._1/EncDecAttention/k/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._1/EncDecAttention/v/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._1/EncDecAttention/o/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._1/layer_norm/weight:0",
        "tfm_t5model/decoder/block_._{}/layer_._2/DenseReluDense/wi_0/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._2/DenseReluDense/wo/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._2/DenseReluDense/wi_1/kernel:0",
        "tfm_t5model/decoder/block_._{}/layer_._2/layer_norm/weight:0",
    ]
    to_model_vars = [
        "tf_transformers/mt5_decoder/transformer/layer_{}/self_attention/query/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/self_attention/key/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/self_attention/value/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/self_attention_output/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/pre_attention_norm/weight:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/cross_attention/query/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/cross_attention/key/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/cross_attention/value/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/cross_attention_output/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/pre_cross_attention_norm/weight:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/intermediate/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/output/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/intermediate2/kernel:0",
        "tf_transformers/mt5_decoder/transformer/layer_{}/self_attention_layer_norm/weight:0",
    ]
    # Simple Assertion
    assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)
    # Only Layer 0
    mapping_dict[
        "tfm_t5model/decoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0"
    ] = "tf_transformers/mt5_decoder/transformer/layer_0/self_attention/relative_attention_bias/embeddings:0"
    mapping_dict[
        "tfm_t5model/decoder/block_._0/layer_._1/EncDecAttention/relative_attention_bias/embeddings:0"
    ] = "tf_transformers/mt5_decoder/transformer/layer_0/cross_attention/relative_attention_bias/embeddings:0"
    # Final Layer Norm weight
    mapping_dict[
        "tfm_t5model/decoder/final_layer_norm/weight:0"
    ] = "tf_transformers/mt5_decoder/last_layer_norm/weight:0"

    from_to_variable_dict = {var.name: var for var in model_hf.variables}
    # del model_hf
    logging.info("Deleting huggingface model for saving memory")

    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model_tf_transformers.variables):
        tf_transformers_model_index_dict[var.name] = index

    # legacy_ai <-- hub
    assigned_map = []
    assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        # If not in mapping_dict, then mostly it is from attention layer
        if "query/kernel:0" in legacy_var or "key/kernel:0" in legacy_var or "value/kernel:0" in legacy_var:
            # hub (2D) to tf_transformers (3D)
            model_tf_transformers.variables[index].assign(
                tf.reshape(
                    from_to_variable_dict.get(original_var),
                    (
                        config["embedding_size"],
                        config["num_attention_heads"],
                        config["attention_head_size"],
                    ),
                )
            )
            assigned_map.append((original_var, legacy_var))
            continue
        if (
            original_var
            == "tfm_t5model/decoder/block_._0/layer_._1/EncDecAttention/relative_attention_bias/embeddings:0"
        ):
            if original_var not in from_to_variable_dict:
                model_tf_transformers.variables[index].assign(tf.zeros_like(model_tf_transformers.variables[index]))
                assigned_map.append((original_var, legacy_var))
                continue
        model_tf_transformers.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))
    logging.info("Done assigning DECODER variables weights {}".format(len(assigned_map)))
| 50.901639 | 116 | 0.697692 | 1,165 | 9,315 | 5.183691 | 0.099571 | 0.108958 | 0.081636 | 0.080477 | 0.954794 | 0.945852 | 0.923166 | 0.90876 | 0.844345 | 0.670641 | 0 | 0.022416 | 0.185829 | 9,315 | 182 | 117 | 51.181319 | 0.773866 | 0.055824 | 0 | 0.432624 | 0 | 0 | 0.521655 | 0.476294 | 0 | 0 | 0 | 0 | 0.007092 | 1 | 0.007092 | false | 0 | 0.014184 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
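The only shape change in the conversion above is the 2D-to-3D reshape of the query/key/value kernels: Hugging Face stores each projection as a flat `(embedding_size, num_heads * head_size)` matrix, while tf-transformers expects the head axis split out. Because a reshape is a pure relabelling of the same contiguous weights, the operation is lossless, as this small NumPy sketch shows (the toy dimensions are illustrative):

```python
import numpy as np

embedding_size, num_heads, head_size = 8, 2, 4

# Flat 2-D attention kernel as stored on the Hugging Face side
hf_kernel = np.arange(embedding_size * num_heads * head_size, dtype=np.float64).reshape(
    embedding_size, num_heads * head_size)

# 3-D layout expected on the tf-transformers side: (D, H, d_head)
tft_kernel = hf_kernel.reshape(embedding_size, num_heads, head_size)

assert tft_kernel.shape == (8, 2, 4)
# Head h, column c of the 3-D kernel is column h * head_size + c of the flat one
assert tft_kernel[0, 1, 0] == hf_kernel[0, 4]
# Flattening back recovers the original weights exactly
assert np.array_equal(tft_kernel.reshape(embedding_size, -1), hf_kernel)
```

This is why the script can call `tf.reshape` directly on the fetched variable instead of doing any transposition.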
c8fbdf24c52d2db22f56373e69c8956c66b3dc26 | 5,338 | py | Python | tests/functional/conftest.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 61 | 2020-05-17T19:57:25.000Z | 2022-03-30T01:10:32.000Z | tests/functional/conftest.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 79 | 2020-05-17T20:22:05.000Z | 2022-03-02T14:37:28.000Z | tests/functional/conftest.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 6 | 2021-01-07T16:45:28.000Z | 2022-02-11T19:31:49.000Z |
import time
import pytest

from scrapli_netconf.driver.async_driver import AsyncNetconfDriver
from scrapli_netconf.driver.sync_driver import NetconfDriver

NETCONF_1_0_DEVICE_TYPES = ["cisco_iosxe_1_0", "juniper_junos_1_0"]
NETCONF_1_1_DEVICE_TYPES = ["cisco_iosxe_1_1", "cisco_iosxr_1_1"]
NETCONF_ALL_VERSIONS_DEVICE_TYPES = NETCONF_1_0_DEVICE_TYPES + NETCONF_1_1_DEVICE_TYPES


@pytest.fixture(scope="session")
def real_valid_ssh_key_path(test_data_path):
    return f"{test_data_path}/files/vrnetlab_key"


@pytest.fixture(scope="session", params=(True, False), ids=("compressed", "uncompressed"))
def use_compressed_parser(request):
    yield request.param


@pytest.fixture(
    scope="session",
    params=NETCONF_1_0_DEVICE_TYPES,
)
def device_type_1_0(request):
    yield request.param


@pytest.fixture(
    scope="session",
    params=NETCONF_1_1_DEVICE_TYPES,
)
def device_type_1_1(request):
    yield request.param


@pytest.fixture(
    scope="session",
    params=NETCONF_ALL_VERSIONS_DEVICE_TYPES,
)
def device_type(request):
    yield request.param


@pytest.fixture(scope="class", params=["system", "ssh2", "paramiko"])
def transport(request):
    yield request.param


@pytest.fixture(scope="session", params=["password"])
def auth_type(request):
    yield request.param


@pytest.fixture(scope="function")
def sync_conn_1_0(
    test_devices_dict, real_valid_ssh_key_path, device_type_1_0, auth_type, transport
):
    device = test_devices_dict[device_type_1_0].copy()
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = NetconfDriver(**device, transport=transport)
    yield conn, device_type_1_0
    if conn.isalive():
        conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type_1_0:
        # doubly true for xr vm!
        time.sleep(2)


@pytest.fixture(scope="function")
async def async_conn_1_0(test_devices_dict, real_valid_ssh_key_path, device_type_1_0, auth_type):
    device = test_devices_dict[device_type_1_0].copy()
    device["transport"] = "asyncssh"
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = AsyncNetconfDriver(**device)
    yield conn, device_type_1_0
    if conn.isalive():
        await conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type_1_0:
        # doubly true for xr vm!
        time.sleep(2)


@pytest.fixture(scope="function")
def sync_conn_1_1(
    test_devices_dict, real_valid_ssh_key_path, device_type_1_1, auth_type, transport
):
    device = test_devices_dict[device_type_1_1].copy()
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = NetconfDriver(**device, transport=transport)
    yield conn, device_type_1_1
    if conn.isalive():
        conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type_1_1:
        # doubly true for xr vm!
        time.sleep(2)


@pytest.fixture(scope="function")
async def async_conn_1_1(test_devices_dict, real_valid_ssh_key_path, device_type_1_1, auth_type):
    device = test_devices_dict[device_type_1_1].copy()
    device["transport"] = "asyncssh"
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = AsyncNetconfDriver(**device)
    yield conn, device_type_1_1
    if conn.isalive():
        await conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type_1_1:
        # doubly true for xr vm!
        time.sleep(2)


@pytest.fixture(scope="function")
def sync_conn(
    test_devices_dict,
    real_valid_ssh_key_path,
    device_type,
    auth_type,
    transport,
    use_compressed_parser,
):
    device = test_devices_dict[device_type].copy()
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = NetconfDriver(**device, transport=transport, use_compressed_parser=use_compressed_parser)
    yield conn, device_type
    if conn.isalive():
        conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type:
        # doubly true for xr vm!
        time.sleep(2)


@pytest.fixture(scope="function")
async def async_conn(test_devices_dict, real_valid_ssh_key_path, device_type, auth_type):
    device = test_devices_dict[device_type].copy()
    device["transport"] = "asyncssh"
    if auth_type == "key":
        device.pop("auth_password")
        device["auth_private_key"] = real_valid_ssh_key_path
    conn = AsyncNetconfDriver(**device)
    yield conn, device_type
    if conn.isalive():
        await conn.close()
    # slow down connections since the lab vms can be slow sometimes
    time.sleep(1)
    if "cisco_iosxr" in device_type:
        # doubly true for xr vm!
        time.sleep(2)
| 30.158192 | 100 | 0.696141 | 753 | 5,338 | 4.601594 | 0.112882 | 0.077922 | 0.057143 | 0.056277 | 0.879654 | 0.817893 | 0.803463 | 0.789033 | 0.749206 | 0.712554 | 0 | 0.018122 | 0.204009 | 5,338 | 176 | 101 | 30.329545 | 0.797364 | 0.095354 | 0 | 0.689922 | 0 | 0 | 0.113995 | 0.007267 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077519 | false | 0.054264 | 0.031008 | 0.007752 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
c8ffda9081edf28454df980d743689c5f0530743 | 22,246 | py | Python | padqc/steps/patterns.py | qis-unipr/padqc | 94599db20711dc755b53425951fa3cb15b749f64 | [
"Apache-2.0"
] | null | null | null | padqc/steps/patterns.py | qis-unipr/padqc | 94599db20711dc755b53425951fa3cb15b749f64 | [
"Apache-2.0"
] | null | null | null | padqc/steps/patterns.py | qis-unipr/padqc | 94599db20711dc755b53425951fa3cb15b749f64 | [
"Apache-2.0"
] | 1 | 2021-02-18T22:11:18.000Z | 2021-02-18T22:11:18.000Z |
from padqc.gates import Cx, Hadamard
from padqc.q_graph import Graph, Node
from padqc.steps import TransformationStep


class Patterns(TransformationStep):
    """
    Transformation step for specific two-qubit gate patterns.
    """
    def __init__(self):
        super().__init__()
        self._num_qubits = None
        self._wires_to_id = {}
        self._id_to_wires = {}
        self._layers = None
        self._extra_layers = None
        self._skip = []
        self.patterns = 0

    def run(self, q_circuit):
        """Executes the transformation step.

        Args:
            q_circuit (q_circuit.QCircuit): the circuit on which to run the step
        """
        self._num_qubits = q_circuit.q_graph.n_qubits
        i = 0
        for q_reg in q_circuit.q_graph.q_registers.values():
            for q in range(q_reg['dim']):
                self._wires_to_id[(q_reg['id'], q)] = i
                self._id_to_wires[i] = (q_reg['id'], q)
                i += 1
        self.find_pattern(q_circuit)
        q_circuit.patterns = self.patterns
    def find_pattern(self, q_circuit):
        """Finds specific two-qubit gate patterns in *q_circuit*.

        Args:
            q_circuit (q_circuit.QCircuit): the circuit into which to search for patterns
        """
        q_graph = q_circuit.q_graph
        new_graph = Graph()
        for register in q_graph.q_registers:
            new_graph._add_q_register(register, q_circuit.q_regs[register][1])
        for register in q_graph.c_registers:
            new_graph._add_c_register(register, q_circuit.c_regs[register][1])
        # get dag layers
        self._layers = [layer for layer in q_circuit.q_graph.layers()]
        # this is the list of new layers for the nearest-neighbor CNOT sequences
        self._extra_layers = {l: [] for l in range(len(self._layers))}
        # loop through all layers
        for i, layer in enumerate(self._layers):
            if i != 0:
                # add nearest-neighbor CNOT sequences in the right layer
                for node in self._extra_layers[i - 1]:
                    new_graph._append_node(node.type, node.gate)
            # check all gates in the layer
            for node in layer:
                temp = None
                # do not add gates that have been used in the transformation process
                if node in self._skip:
                    continue
                # every cnot could be the starting point for a CNOT cascade
                elif node.name == 'cx':
                    # check for a CNOT cascade
                    # print('Checking Cascade')
                    temp = self.check_cascade(node, i)
                    if temp is not None:
                        self._skip.extend(temp)
                        # print('Found Cascade')
                        self.patterns += 1
                    else:
                        # check for an inverted CNOT cascade
                        # print('Checking Inverse Cascade')
                        temp = self.check_inverse_cascade(node, i)
                        if temp is not None:
                            self._skip.extend(temp)
                            # print('Found Inverse Cascade')
                            self.patterns += 1
                        else:
                            # apply the CNOT if no cascade was found
                            self._skip.append(node)
                            new_graph._append_node(node.type, node.gate)
                else:
                    if node.type == 'gate':
                        self._skip.append(node)
                        new_graph._append_node(node.type, node.gate)
        q_circuit.q_graph = new_graph
    def check_cascade(self, node, layer_id):
        """Starting from *q_node*, searches for CNOT cascades
        and transforms them into nearest-neighbor CNOT sequences.

        Args:
            layer_id (int): the layer index
            node (q_graph.Node): the node from which to start searching for a CNOT cascade

        Returns:
            list: a list of nodes to be skipped as they are part of an already transformed CNOT cascade
        """
        target = self._wires_to_id[node.q_args[1]]
        control = self._wires_to_id[node.q_args[0]]
        controls = [control]
        skip = [node]
        # qubits already added to the CNOT sequence
        used = set()
        used.add(target)
        used.add(control)
        # qubits that cannot be used anymore
        off_limits = set()
        before = {}
        after = []
        # flag to identify the direction of the cascade
        descending = False
        if control > target:
            descending = True
        count = 1
        last_layer = layer_id
        double_break = False
        # loop through layers until a max limit is reached
        while count < min([2 * self._num_qubits, len(self._layers) - layer_id]):
            for node in self._layers[layer_id + count]:
                if node in self._skip:
                    for qarg in node.q_args:
                        if self._wires_to_id[qarg] == target:
                            double_break = True
                            break
                else:
                    if node.name == 'cx':
                        # print('CX: ', node.q_args)
                        g_control = self._wires_to_id[node.q_args[0]]
                        g_target = self._wires_to_id[node.q_args[1]]
                        if g_control == target:
                            double_break = True
                            break
                        if g_control in off_limits or g_target in off_limits:
                            off_limits.add(g_control)
                            off_limits.add(g_target)
                            if g_control not in used:
                                used.add(g_control)
                            if g_target not in used:
                                used.add(g_target)
                            continue
                        # check that the CNOT is part of the cascade
                        a = (g_target == target and g_control not in controls and g_control not in used)
                        b = (descending is True and g_control > target) \
                            or (descending is False and g_control < target)
                        if a and b:
                            # print('Adding to Cascade')
                            controls.append(g_control)
                            used.add(g_control)
                            skip.append(node)
                        # check if the CNOT interrupts the cascade
                        elif g_target != target and g_control != target:
                            # remember to put the CNOT after the transformation
                            if g_target not in used and g_control not in used:
                                if last_layer < layer_id + count:
                                    last_layer = layer_id + count
                            # updates used and off limits qubits when necessary
                            else:
                                off_limits.add(g_control)
                                off_limits.add(g_target)
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                if g_control not in used:
                                    used.add(g_control)
                                if g_target not in used:
                                    used.add(g_target)
                        else:
                            # break the loop if the CNOT interrupts the cascade
                            double_break = True
                            break
                    else:
                        # ignore gates acting on off limits qubits
                        double_continue = False
                        for qarg in node.q_args:
                            if self._wires_to_id[qarg] in off_limits:
                                double_continue = True
                                continue
                        if double_continue is True:
                            continue
                        # for special multi-qubits gates, update used and off limits qubits properly,
                        # break the loop if necessary
                        if node.name in ["barrier", "snapshot", "save", "load", "noise"]:
                            qargs = [self._wires_to_id[qarg] for qarg in node.q_args]
                            if target in qargs:
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                double_break = True
                                break
                            u = []
                            not_u = []
                            for qarg in qargs:
                                if qarg in used:
                                    off_limits.add(qarg)
                                    u.append(qarg)
                                else:
                                    not_u.append(qarg)
                            if len(u) == len(qargs):
                                # the transformation must be applied before this gate
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                            elif len(u) == 0:
                                # the transformation must be applied after this gate
                                if last_layer < layer_id + count:
                                    last_layer = layer_id + count
                            else:
                                # the transformation must be applied before this gate
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                for qarg in not_u + u:
                                    used.add(qarg)
                                    off_limits.add(qarg)
                        else:
                            # print(node.name, node.q_args)
                            # check if one-qubits gates either interrupt the cascade,
                            # can be applied after or before
                            qarg = self._wires_to_id[node.q_args[0]]
                            if qarg == target:
                                after.append(node)
                                skip.append(node)
                                double_break = True
                                break
                            if qarg not in used:
                                # print('Before')
                                if qarg not in before:
                                    before[qarg] = []
                                before[qarg].append(node)
                            else:
                                # print('After')
                                after.append(node)
                                skip.append(node)
            count += 1
            if double_break is True:
                break
        # if a cascade was found
        if len(controls) > 1:
            if descending is True:
                controls = sorted(controls)
            else:
                controls = sorted(controls, reverse=True)
            # apply all gates that were encountered before the cascade
            for u in before:
                for node in before[u]:
                    self._extra_layers[last_layer].append(node)
            # apply the transformation
            for i in range(len(controls) - 1, 0, -1):
                self._extra_layers[last_layer].append(Node(type='gate',
                                                           gate=Cx(self._id_to_wires[controls[i]],
                                                                   self._id_to_wires[controls[i - 1]])))
            self._extra_layers[last_layer].append(
                Node(type='gate', gate=Cx(self._id_to_wires[controls[0]], self._id_to_wires[target])))
            for i in range(len(controls) - 1):
                self._extra_layers[last_layer].append(
                    Node(type='gate', gate=Cx(self._id_to_wires[controls[i + 1]],
                                              self._id_to_wires[controls[i]])))
            # apply all gates that were encountered after the cascade
            for node in after:
                self._extra_layers[last_layer].append(node)
        else:
            skip = None
        return skip
    def check_inverse_cascade(self, node, layer_id):
        """Starting from *q_node*, searches for inverted CNOT cascades
        and transforms them into nearest-neighbor CNOT sequences.

        Args:
            layer_id (int): the layer index
            node (q_graph.Node): the node from which to start searching for an inverted CNOT cascade

        Returns:
            list: a list of nodes to be skipped as they are part of
                an already transformed inverted CNOT cascade
        """
        target = self._wires_to_id[node.q_args[1]]
        control = self._wires_to_id[node.q_args[0]]
        targets = [target]
        skip = [node]
        # qubits already added to the CNOT sequence
        used = set()
        used.add(target)
        used.add(control)
        # qubits that cannot be used anymore
        off_limits = set()
        before = {}
        after = []
        # flag to identify the direction of the cascade
        descending = False
        if target > control:
            descending = True
        count = 1
        last_layer = layer_id
        double_break = False
        # loop through layers until a max limit is reached
        while count < min([2 * self._num_qubits, len(self._layers) - layer_id]):
            for node in self._layers[layer_id + count]:
                if node in self._skip:
                    for qarg in node.q_args:
                        if self._wires_to_id[qarg] == control:
                            double_break = True
                            break
                else:
                    if node.name == 'cx':
                        g_control = self._wires_to_id[node.q_args[0]]
                        g_target = self._wires_to_id[node.q_args[1]]
                        if g_target == control:
                            double_break = True
                            break
                        if g_control in off_limits or g_target in off_limits:
                            if last_layer > layer_id + count - 1:
                                last_layer = layer_id + count - 1
                            off_limits.add(g_control)
                            off_limits.add(g_target)
                            if g_control not in used:
                                used.add(g_control)
                            if g_target not in used:
                                used.add(g_target)
                            continue
                        # check that the CNOT is part of the cascade
                        a = (g_control == control and g_target not in targets and g_target not in used)
                        b = (descending is True and g_target > control) or (
                            descending is False and g_target < control)
                        if a and b:
                            targets.append(g_target)
                            used.add(g_target)
                            skip.append(node)
                        # check if the CNOT interrupts the cascade
                        elif g_control != control and g_target != control:
                            # remember to put the CNOT after the transformation
                            if g_control not in used and g_target not in used:
                                if last_layer < layer_id + count:
                                    last_layer = layer_id + count
                            # updates used and off limits qubits when necessary
                            else:
                                off_limits.add(g_control)
                                off_limits.add(g_target)
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                if g_control not in used:
                                    used.add(g_control)
                                if g_target not in used:
                                    used.add(g_target)
                        else:
                            # break the loop if the CNOT interrupts the cascade
                            double_break = True
                            break
                    else:
                        # ignore gates acting on off limits qubits
                        double_continue = False
                        for qarg in node.q_args:
                            if self._wires_to_id[qarg] in off_limits:
                                double_continue = True
                                continue
                        if double_continue is True:
                            continue
                        # for special multi-qubits gates, update used and off limits qubits properly,
                        # break the loop if necessary
                        if node.name in ["barrier", "snapshot", "save", "load", "noise"]:
                            qargs = [self._wires_to_id[qarg] for qarg in node.q_args]
                            if control in qargs:
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                double_break = True
                                break
                            u = []
                            not_u = []
                            for qarg in qargs:
                                if qarg in used:
                                    off_limits.add(qarg)
                                    u.append(qarg)
                                else:
                                    not_u.append(qarg)
                            if len(u) == len(qargs):
                                # the transformation must be applied before this gate
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                            elif len(u) == 0:
                                # the transformation must be applied after this gate
                                if last_layer < layer_id + count:
                                    last_layer = layer_id + count
                            else:
                                # the transformation must be applied before this gate
                                if last_layer > layer_id + count - 1:
                                    last_layer = layer_id + count - 1
                                for qarg in not_u + u:
                                    used.add(qarg)
                                    off_limits.add(qarg)
                        else:
                            # check if one-qubits gates either interrupt the cascade,
                            # can be applied after or before
                            qarg = self._wires_to_id[node.q_args[0]]
                            if qarg == control:
                                after.append(node)
                                skip.append(node)
                                double_break = True
                                break
                            if qarg not in used:
                                if qarg not in before:
                                    before[qarg] = []
                                before[qarg].append(node)
                                skip.append(node)
                            else:
                                after.append(node)
                                skip.append(node)
            count += 1
            if double_break is True:
                break
        # if an inverse cascade was found
        if len(targets) > 1:
            if descending is True:
                targets = sorted(targets)
            else:
                targets = sorted(targets, reverse=True)
            # apply all gates that were encountered before the cascade
            for u in before:
                for node in before[u]:
                    self._extra_layers[last_layer].append(node)
            # apply the transformation
            self._extra_layers[last_layer].append(
                Node(type='gate', gate=Hadamard(self._id_to_wires[control])))
            for t in targets:
                self._extra_layers[last_layer].append(Node(type='gate', gate=Hadamard(self._id_to_wires[t])))
            for i in range(len(targets) - 1, 0, -1):
                self._extra_layers[last_layer].append(Node(type='gate', gate=Cx(self._id_to_wires[targets[i]],
                                                                                self._id_to_wires[targets[i - 1]])))
            self._extra_layers[last_layer].append(
                Node(type='gate', gate=Cx(self._id_to_wires[targets[0]], self._id_to_wires[control])))
            for i in range(len(targets) - 1):
                self._extra_layers[last_layer].append(
                    Node(type='gate', gate=Cx(self._id_to_wires[targets[i + 1]],
                                              self._id_to_wires[targets[i]])))
            self._extra_layers[last_layer].append(
                Node(type='gate', gate=Hadamard(self._id_to_wires[control])))
            for t in targets:
                self._extra_layers[last_layer].append(Node(type='gate', gate=Hadamard(self._id_to_wires[t])))
            # apply all gates that were encountered after the cascade
            for node in after:
                self._extra_layers[last_layer].append(node)
        else:
            skip = None
        return skip
| 47.635974 | 110 | 0.443451 | 2,283 | 22,246 | 4.129216 | 0.087166 | 0.040098 | 0.041583 | 0.047523 | 0.804922 | 0.762915 | 0.731622 | 0.720696 | 0.703617 | 0.689615 | 0 | 0.005421 | 0.494201 | 22,246 | 466 | 111 | 47.738197 | 0.832385 | 0.161782 | 0 | 0.737609 | 0 | 0 | 0.006148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014577 | false | 0 | 0.008746 | 0 | 0.03207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
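The identity behind `check_cascade` — several CNOTs sharing one target can be replaced by a nearest-neighbour chain that conjugates a single CNOT — can be verified classically, since CNOTs act as reversible bit operations on basis states. A minimal standalone sketch (the qubit indices are illustrative and not tied to padqc's wire objects):

```python
from itertools import product

def cx(state, control, target):
    # Classical action of a CNOT: flip the target bit iff the control bit is 1.
    bits = list(state)
    bits[target] ^= bits[control]
    return tuple(bits)

def apply(seq, state):
    for c, t in seq:
        state = cx(state, c, t)
    return state

# Cascade: two CNOTs both targeting qubit 0, with controls 1 and 2.
cascade = [(1, 0), (2, 0)]
# Nearest-neighbour replacement in the spirit of the pass:
# fold control 2 into control 1, fire one CNOT onto the target, then unfold.
nn_seq = [(2, 1), (1, 0), (2, 1)]

# The two sequences agree on every 3-bit basis state.
for bits in product((0, 1), repeat=3):
    assert apply(cascade, bits) == apply(nn_seq, bits)
```

The inverted cascade handled by `check_inverse_cascade` reduces to the same check after the Hadamard conjugation, which swaps the control/target roles of each CNOT.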
cd9e19ca52f3066816b62a13c3caaa394eb62fbb | 2,549 | py | Python | AllMotorTest.py | vvzen/Kitronik-Pico-Robotics-Board-MicroPython | fba59af843929e319164949d52ff7f293d2ef499 | [
"MIT"
] | 8 | 2021-05-08T14:34:05.000Z | 2022-03-01T23:43:41.000Z | AllMotorTest.py | vvzen/Kitronik-Pico-Robotics-Board-MicroPython | fba59af843929e319164949d52ff7f293d2ef499 | [
"MIT"
] | 1 | 2021-05-31T21:17:27.000Z | 2021-06-07T13:09:09.000Z | AllMotorTest.py | vvzen/Kitronik-Pico-Robotics-Board-MicroPython | fba59af843929e319164949d52ff7f293d2ef499 | [
"MIT"
] | 2 | 2021-05-14T08:56:59.000Z | 2021-05-14T16:33:21.000Z |
# AllMotorTest.py
# test code that ramps each motor 0-100-0 then changes direction and does it again.
#all motors run at once, but with staggered timings
import PicoRobotics
import utime
board = PicoRobotics.KitronikPicoRobotics()
directions = ["f","r"]
while True:
for direction in directions:
for speed in range(0,25):
board.motorOn(1, direction, speed)
board.motorOn(2, direction, 25-speed)
board.motorOn(3, direction, 50-speed)
board.motorOn(4, direction, 75-speed)
utime.sleep_ms(100) #ramp speed over 25x100ms => approx 2.5 second.
for speed in range(0,25):
board.motorOn(1, direction, 25+speed)
board.motorOn(2, direction, speed)
board.motorOn(3, direction, 25-speed)
board.motorOn(4, direction, 50-speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 50+speed)
board.motorOn(2, direction, 25+speed)
board.motorOn(3, direction, speed)
board.motorOn(4, direction, 25-speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 75+speed)
board.motorOn(2, direction, 50+speed)
board.motorOn(3, direction, 25+speed)
board.motorOn(4, direction, speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 100-speed)
board.motorOn(2, direction, 75+speed)
board.motorOn(3, direction, 50+speed)
board.motorOn(4, direction, 25+speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 75-speed)
board.motorOn(2, direction, 100-speed)
board.motorOn(3, direction, 75+speed)
board.motorOn(4, direction, 50+speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 50-speed)
board.motorOn(2, direction, 75-speed)
board.motorOn(3, direction, 100-speed)
board.motorOn(4, direction, 75+speed)
utime.sleep_ms(100)
for speed in range(0,25):
board.motorOn(1, direction, 25-speed)
board.motorOn(2, direction, 50-speed)
board.motorOn(3, direction, 75-speed)
board.motorOn(4, direction, 100-speed)
utime.sleep_ms(100)
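The eight staggered loops above all follow one pattern: each motor traces a 0→100→0 triangle wave over 200 ticks (one tick = 100 ms), with each motor lagging the previous one by 25 ticks. A minimal sketch of that equivalence — pure Python, no board required; `triangle` and `staggered_speed` are illustrative names, not part of the Kitronik API:

```python
def triangle(t, period=200, peak=100):
    """Triangle wave: rises 0 -> peak over the first half period, falls back over the second."""
    t = t % period
    return t if t <= peak else period - t

def staggered_speed(motor, t, lag=25):
    """Speed of motor 1..4 at tick t, each motor lagging the previous one by `lag` ticks."""
    return triangle(t - lag * (motor - 1))
```

At tick 0 this reproduces the first pass of the loops above: motors 1-4 sit at 0, 25, 50 and 75 respectively.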
| 39.215385 | 84 | 0.573558 | 319 | 2,549 | 4.557994 | 0.175549 | 0.264099 | 0.280605 | 0.082531 | 0.814993 | 0.795048 | 0.795048 | 0.795048 | 0.795048 | 0.795048 | 0 | 0.087709 | 0.320126 | 2,549 | 64 | 85 | 39.828125 | 0.751298 | 0.075716 | 0 | 0.296296 | 0 | 0 | 0.000874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cdf739aca2929bd13a7b52cdc1525c92ad3ebb45 | 7,935 | py | Python | england/Databricks/CCU002_03-D11-outcomes_dose1.py | BHFDSC/CCU002_03 | e525441cf5c8de20e28ce51e12ddf7737109dfce | [
"Apache-2.0"
] | null | null | null | england/Databricks/CCU002_03-D11-outcomes_dose1.py | BHFDSC/CCU002_03 | e525441cf5c8de20e28ce51e12ddf7737109dfce | [
"Apache-2.0"
] | null | null | null | england/Databricks/CCU002_03-D11-outcomes_dose1.py | BHFDSC/CCU002_03 | e525441cf5c8de20e28ce51e12ddf7737109dfce | [
"Apache-2.0"
] | null | null | null | # Databricks notebook source
# MAGIC %md # CCU002_03-D11-outcomes_dose1
# MAGIC
# MAGIC **Description** This notebook determines outcomes for the analysis.
# MAGIC
# MAGIC **Author(s)** Venexia Walker
# COMMAND ----------
# MAGIC %md ## Clear cache
# COMMAND ----------
# MAGIC %sql
# MAGIC CLEAR CACHE
# COMMAND ----------
# MAGIC %md ## Define functions
# COMMAND ----------
# Define create table function by Sam Hollings
# Source: Workspaces/dars_nic_391419_j3w9t_collab/DATA_CURATION_wrang000_functions
def create_table(table_name:str, database_name:str='dars_nic_391419_j3w9t_collab', select_sql_script:str=None) -> None:
"""Will save to table from a global_temp view of the same name as the supplied table name (if no SQL script is supplied)
Otherwise, can supply a SQL script and this will be used to make the table with the specificed name, in the specifcied database."""
spark.conf.set("spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation","true")
if select_sql_script is None:
select_sql_script = f"SELECT * FROM global_temp.{table_name}"
spark.sql(f"""CREATE TABLE {database_name}.{table_name} AS
{select_sql_script}
""")
spark.sql(f"ALTER TABLE {database_name}.{table_name} OWNER TO {database_name}")
def drop_table(table_name:str, database_name:str='dars_nic_391419_j3w9t_collab', if_exists=True):
if if_exists:
IF_EXISTS = 'IF EXISTS'
else:
IF_EXISTS = ''
spark.sql(f"DROP TABLE {IF_EXISTS} {database_name}.{table_name}")
# COMMAND ----------
# MAGIC %md ## Specify outcomes
# COMMAND ----------
outcomes = ['myocarditis','pericarditis']
# COMMAND ----------
index_date = '2020-12-08'
# COMMAND ----------
# MAGIC %md ## GDPPR
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_gdppr_" + codelist + " AS SELECT NHS_NUMBER_DEID, min(DATE) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, DATE FROM dars_nic_391419_j3w9t_collab.ccu002_03_gdppr_dars_nic_391419_j3w9t WHERE CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' AND terminology=='SNOMED')) WHERE DATE>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# MAGIC %md ## HES APC
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_first_hesapc_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(EPISTART) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, EPISTART FROM dars_nic_391419_j3w9t_collab.ccu002_03_hes_apc_longformat WHERE CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' AND TERMINOLOGY='ICD10') AND (SOURCE='DIAG_3_01' OR SOURCE='DIAG_4_01')) WHERE EPISTART>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_any_hesapc_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(EPISTART) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, EPISTART FROM dars_nic_391419_j3w9t_collab.ccu002_03_hes_apc_longformat WHERE CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' AND TERMINOLOGY='ICD10')) WHERE EPISTART>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# MAGIC %md ## SUS
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_first_sus_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(EPISODE_START_DATE) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, EPISODE_START_DATE FROM dars_nic_391419_j3w9t_collab.ccu002_03_sus_longformat WHERE ((CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')) OR (LEFT(CODE,3) IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10'))) AND SOURCE='PRIMARY_DIAGNOSIS_CODE') WHERE EPISODE_START_DATE>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_any_sus_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(EPISODE_START_DATE) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, EPISODE_START_DATE FROM dars_nic_391419_j3w9t_collab.ccu002_03_sus_longformat WHERE ((CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')) OR (LEFT(CODE,3) IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')))) WHERE EPISODE_START_DATE>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# MAGIC %md ## Deaths
# COMMAND ----------
# DEATHS
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_first_deaths_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(DATE) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, DATE FROM dars_nic_391419_j3w9t_collab.ccu002_03_deaths_longformat WHERE ((CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')) OR (LEFT(CODE,3) IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10'))) AND SOURCE='S_UNDERLYING_COD_ICD10') WHERE DATE>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# DEATHS
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_any_deaths_" + codelist + " AS SELECT NHS_NUMBER_DEID, MIN(DATE) AS out_dose1_" + codelist + " FROM (SELECT NHS_NUMBER_DEID, DATE FROM dars_nic_391419_j3w9t_collab.ccu002_03_deaths_longformat WHERE ((CODE IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')) OR (LEFT(CODE,3) IN (SELECT code FROM dars_nic_391419_j3w9t_collab.ccu002_03_codelists WHERE name = '" + codelist + "' and terminology=='ICD10')))) WHERE DATE>='" + index_date + "' GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# MAGIC %md ## Combine data sources
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_first_" + codelist + " AS SELECT NHS_NUMBER_DEID, min(out_dose1_" + codelist + ") AS out_dose1_first_" + codelist + " FROM (SELECT * FROM global_temp.ccu002_03_out_dose1_gdppr_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_first_hesapc_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_first_deaths_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_first_sus_" + codelist + ") GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
for codelist in outcomes:
sql("CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_any_" + codelist + " AS SELECT NHS_NUMBER_DEID, min(out_dose1_" + codelist + ") AS out_dose1_any_" + codelist + " FROM (SELECT * FROM global_temp.ccu002_03_out_dose1_gdppr_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_any_hesapc_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_any_deaths_" + codelist + " UNION ALL SELECT * FROM global_temp.ccu002_03_out_dose1_any_sus_" + codelist + ") GROUP BY NHS_NUMBER_DEID")
# COMMAND ----------
# MAGIC %md ## Save as tables
# COMMAND ----------
for codelist in outcomes:
drop_table('ccu002_03_out_dose1_first_'+codelist)
# COMMAND ----------
for codelist in outcomes:
create_table('ccu002_03_out_dose1_first_'+codelist)
# COMMAND ----------
for codelist in outcomes:
drop_table('ccu002_03_out_dose1_any_'+codelist)
# COMMAND ----------
for codelist in outcomes:
create_table('ccu002_03_out_dose1_any_'+codelist)
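Each loop above assembles one long SQL statement per codelist by string concatenation. A sketch of the same assembly done with an f-string — pure string building, no Spark session needed; `build_first_view_sql` is an illustrative helper, not part of this notebook:

```python
def build_first_view_sql(codelist, index_date, database="dars_nic_391419_j3w9t_collab"):
    """Assemble the GDPPR 'first event' CREATE VIEW statement for one outcome codelist."""
    return (
        f"CREATE OR REPLACE GLOBAL TEMP VIEW ccu002_03_out_dose1_gdppr_{codelist} AS "
        f"SELECT NHS_NUMBER_DEID, min(DATE) AS out_dose1_{codelist} "
        f"FROM (SELECT NHS_NUMBER_DEID, DATE "
        f"FROM {database}.ccu002_03_gdppr_dars_nic_391419_j3w9t "
        f"WHERE CODE IN (SELECT code FROM {database}.ccu002_03_codelists "
        f"WHERE name = '{codelist}' AND terminology=='SNOMED')) "
        f"WHERE DATE>='{index_date}' GROUP BY NHS_NUMBER_DEID"
    )
```

The f-string form keeps the view name, output column and date filter visibly in one place, which makes the per-codelist substitution easier to audit than chained `+` concatenation.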
| 53.255034 | 678 | 0.739635 | 1,138 | 7,935 | 4.807557 | 0.118629 | 0.05849 | 0.059404 | 0.072382 | 0.796381 | 0.782489 | 0.781576 | 0.779017 | 0.768781 | 0.768781 | 0 | 0.066248 | 0.136358 | 7,935 | 148 | 679 | 53.614865 | 0.732088 | 0.164965 | 0 | 0.309524 | 0 | 0.261905 | 0.723943 | 0.339844 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a834267d3d1320be877a6e6edf376c222b8cb0bc | 5,204 | py | Python | staging/staging/validate_input.py | lexis-project/ddi-service-apis | 9e96c4159154d70613b1977a8ea28374c038b463 | [
"Apache-2.0"
] | null | null | null | staging/staging/validate_input.py | lexis-project/ddi-service-apis | 9e96c4159154d70613b1977a8ea28374c038b463 | [
"Apache-2.0"
] | null | null | null | staging/staging/validate_input.py | lexis-project/ddi-service-apis | 9e96c4159154d70613b1977a8ea28374c038b463 | [
"Apache-2.0"
] | null | null | null | from . import generate_config
from . import verifyMetadata
import logging
import yaml
with open("/etc/staging_api/system.yml") as file:
systems = yaml.load(file, Loader=yaml.FullLoader)
def validate_staging_input_body(input_data):
try:
logging.info(input_data["source_system"])
logging.info(input_data["target_system"])
logging.info(input_data["source_path"])
logging.info(input_data["target_path"])
logging.info(input_data["encryption"])
logging.info(input_data["compression"])
source_system = input_data["source_system"]
target_system = input_data["target_system"]
source_type = generate_config.get_type(source_system)
target_type = generate_config.get_type(target_system)
except KeyError:
raise KeyError(
"Input is not valid. Please check source and target information")
# Encryption and compression flags validation
if "encryption" not in input_data or "compression" not in input_data:
raise KeyError(
"Input is not valid. Check encryption and compression values")
if input_data["encryption"] != "yes" and input_data["encryption"] != "no":
raise KeyError(
"Input is not valid. Check encryption values. Allowed values are yes or no")
if input_data["compression"] != "yes" and input_data["compression"] != "no":
raise KeyError(
"Input is not valid. Check compression values. Allowed values are yes or no")
if source_type == "HPC" or target_type == "HPC":
try:
logging.info(input_data["task_id"])
logging.info(input_data["job_id"])
logging.info(input_data["heappe_url"])
except KeyError:
raise KeyError(
"Input is not valid. HEAppE job and task id are required")
elif target_type == "iRODS":
try:
logging.info(input_data["metadata"])
e = verifyMetadata.verifyMetadataForUpload(input_data["metadata"])
if e is not None:
raise Exception("Metadata is not valid: " + e)
except KeyError:
raise KeyError("Input is not valid. Metadata are required")
try:
logging.info(systems["systems"][source_system])
logging.info(systems["systems"][target_system])
except KeyError:
        raise KeyError("Source or target doesn't exist!")
def validate_deletion_input_body(input_data):
try:
logging.info(input_data["target_system"])
logging.info(input_data["target_path"])
target_system = input_data["target_system"]
target_type = generate_config.get_type(target_system)
except KeyError:
raise KeyError("Input is not valid. Please check target information")
if target_type == "HPC":
try:
logging.info(input_data["task_id"])
logging.info(input_data["job_id"])
logging.info(input_data["heappe_url"])
except KeyError:
raise KeyError(
"Input is not valid. HEAppE job and task id are required")
try:
logging.info(systems["systems"][target_system])
except KeyError:
raise KeyError("Target doesn't exist!")
def validate_replication_input_body(input_data):
try:
logging.info(input_data["source_system"])
logging.info(input_data["source_path"])
logging.info(input_data["target_system"])
source_system = input_data["source_system"]
target_system = input_data["target_system"]
except KeyError:
raise KeyError(
"Input is not valid. Please check source or target information")
try:
logging.info(systems["systems"][source_system])
logging.info(systems["systems"][target_system])
except KeyError:
raise (Exception("Source or Target doesn't exist!"))
def validate_pid_assignment_input_body(input_data):
try:
logging.info(input_data["source_system"])
logging.info(input_data["source_path"])
source_system = input_data["source_system"]
except KeyError:
raise KeyError("Input is not valid. Please check target information")
try:
logging.info(systems["systems"][source_system])
except KeyError:
raise (Exception("Source doesn't exist!"))
def validate_replication_status_input_body(input_data):
try:
logging.info(input_data["target_system"])
logging.info(input_data["target_path"])
target_system = input_data["target_system"]
except KeyError:
raise KeyError("Input is not valid. Please check target information")
try:
logging.info(systems["systems"][target_system])
except KeyError:
raise (Exception("Target doesn't exist!"))
def validate_data_size_input_body(input_data):
try:
logging.info(input_data["target_system"])
logging.info(input_data["target_path"])
target_system = input_data["target_system"]
except KeyError:
raise KeyError("Input is not valid. Please check target information")
try:
logging.info(systems["systems"][target_system])
except KeyError:
raise KeyError("Target doesn't exist!")
| 38.548148 | 89 | 0.662375 | 629 | 5,204 | 5.27504 | 0.122417 | 0.122061 | 0.115732 | 0.144665 | 0.80862 | 0.786317 | 0.739904 | 0.736588 | 0.646474 | 0.637734 | 0 | 0 | 0.234243 | 5,204 | 134 | 90 | 38.835821 | 0.832622 | 0.008263 | 0 | 0.705882 | 1 | 0 | 0.269626 | 0.005234 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05042 | false | 0 | 0.033613 | 0 | 0.084034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b548dedf9e64e6e024ddb5ee1cb820bdb5ed3145 | 24,050 | py | Python | test/test_GraceHandler.py | josiah-wolf-oberholtzer/consort | 6c7d511835d5ad883ad1ad52ae9cd48c4a7b5571 | [
"MIT"
] | 9 | 2015-02-11T09:35:40.000Z | 2019-04-29T23:57:49.000Z | test/test_GraceHandler.py | josiah-wolf-oberholtzer/consort | 6c7d511835d5ad883ad1ad52ae9cd48c4a7b5571 | [
"MIT"
] | 2 | 2016-02-07T18:54:47.000Z | 2017-08-10T01:38:01.000Z | test/test_GraceHandler.py | josiah-wolf-oberholtzer/consort | 6c7d511835d5ad883ad1ad52ae9cd48c4a7b5571 | [
"MIT"
] | 1 | 2019-05-13T12:37:15.000Z | 2019-05-13T12:37:15.000Z | import abjad
import collections
import consort
from abjad.tools import rhythmmakertools
from abjad.tools import systemtools
from abjad.tools import templatetools
segment_metadata = collections.OrderedDict(
segment_count=2,
segment_number=1,
)
def test_GraceHandler_01():
music_specifier = consort.MusicSpecifier(
grace_handler=consort.GraceHandler(
counts=(1,),
),
rhythm_maker=rhythmmakertools.NoteRhythmMaker(
tie_specifier=rhythmmakertools.TieSpecifier(
tie_across_divisions=False,
),
),
)
segment_maker = consort.SegmentMaker(
discard_final_silence=True,
desired_duration_in_seconds=4,
omit_stylesheets=True,
score_template=templatetools.GroupedRhythmicStavesScoreTemplate(
staff_count=2,
),
settings=(
consort.MusicSetting(
timespan_maker=consort.TaleaTimespanMaker(
initial_silence_talea=rhythmmakertools.Talea(
counts=(0, 1),
denominator=4,
),
playing_groupings=(2,),
),
v1=music_specifier,
v2=music_specifier,
),
),
tempo=abjad.MetronomeMark((1, 4), 60),
permitted_time_signatures=((4, 4),),
)
lilypond_file, metadata = segment_maker(segment_metadata=segment_metadata)
assert format(lilypond_file) == abjad.String.normalize(
r'''
\version "2.19.65"
\language "english"
#(ly:set-option 'relative-includes #t)
\score {
\context Score = "Grouped Rhythmic Staves Score" <<
\tag #'time
\context TimeSignatureContext = "Time Signature Context" {
{
\tempo 4=60
\time 4/4
s1 * 1
}
}
\context StaffGroup = "Grouped Rhythmic Staves Staff Group" <<
\context RhythmicStaff = "Staff 1" {
\context Voice = "Voice 1" {
{
% [Voice 1] Measure 1
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
{
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
c'4
}
}
}
}
\context RhythmicStaff = "Staff 2" {
\context Voice = "Voice 2" {
{
% [Voice 2] Measure 1
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
{
{
r4
}
}
}
}
>>
>>
}
''')
def test_GraceHandler_02():
music_specifier = consort.MusicSpecifier(
grace_handler=consort.GraceHandler(
counts=(1, 2, 3),
),
rhythm_maker=rhythmmakertools.NoteRhythmMaker(
tie_specifier=rhythmmakertools.TieSpecifier(
tie_across_divisions=False,
),
),
)
segment_maker = consort.SegmentMaker(
discard_final_silence=True,
desired_duration_in_seconds=4,
omit_stylesheets=True,
score_template=templatetools.GroupedRhythmicStavesScoreTemplate(
staff_count=2,
),
settings=(
consort.MusicSetting(
timespan_maker=consort.TaleaTimespanMaker(
initial_silence_talea=rhythmmakertools.Talea(
counts=(1,),
denominator=4,
),
playing_groupings=(3,),
),
v1=music_specifier,
v2=music_specifier,
),
),
tempo=abjad.MetronomeMark((1, 4), 60),
permitted_time_signatures=((4, 4),),
)
lilypond_file, metadata = segment_maker(segment_metadata=segment_metadata)
assert format(lilypond_file) == abjad.String.normalize(
r'''
\version "2.19.65"
\language "english"
#(ly:set-option 'relative-includes #t)
\score {
\context Score = "Grouped Rhythmic Staves Score" <<
\tag #'time
\context TimeSignatureContext = "Time Signature Context" {
{
\tempo 4=60
\time 4/4
s1 * 1
}
}
\context StaffGroup = "Grouped Rhythmic Staves Staff Group" <<
\context RhythmicStaff = "Staff 1" {
\context Voice = "Voice 1" {
{
% [Voice 1] Measure 1
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
}
}
\context RhythmicStaff = "Staff 2" {
\context Voice = "Voice 2" {
{
% [Voice 2] Measure 1
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
}
}
>>
>>
}
''')
def test_GraceHandler_03():
music_specifier = consort.MusicSpecifier(
grace_handler=consort.GraceHandler(
counts=(0, 2, 4),
),
rhythm_maker=rhythmmakertools.NoteRhythmMaker(
tie_specifier=rhythmmakertools.TieSpecifier(
tie_across_divisions=False,
),
),
)
segment_maker = consort.SegmentMaker(
discard_final_silence=True,
desired_duration_in_seconds=4,
omit_stylesheets=True,
score_template=templatetools.GroupedRhythmicStavesScoreTemplate(
staff_count=2,
),
settings=(
consort.MusicSetting(
timespan_maker=consort.TaleaTimespanMaker(
initial_silence_talea=rhythmmakertools.Talea(
counts=(1,),
denominator=4,
),
playing_groupings=(3,),
),
v1=music_specifier,
v2=music_specifier,
),
),
tempo=abjad.MetronomeMark((1, 4), 60),
permitted_time_signatures=((4, 4),),
)
lilypond_file, metadata = segment_maker(segment_metadata=segment_metadata)
assert format(lilypond_file) == abjad.String.normalize(
r'''
\version "2.19.65"
\language "english"
#(ly:set-option 'relative-includes #t)
\score {
\context Score = "Grouped Rhythmic Staves Score" <<
\tag #'time
\context TimeSignatureContext = "Time Signature Context" {
{
\tempo 4=60
\time 4/4
s1 * 1
}
}
\context StaffGroup = "Grouped Rhythmic Staves Staff Group" <<
\context RhythmicStaff = "Staff 1" {
\context Voice = "Voice 1" {
{
% [Voice 1] Measure 1
{
r4
}
}
{
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
}
}
\context RhythmicStaff = "Staff 2" {
\context Voice = "Voice 2" {
{
% [Voice 2] Measure 1
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
c'4
}
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
c'16
c'16
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
}
}
>>
>>
}
''')
def test_GraceHandler_04():
music_specifier = consort.MusicSpecifier(
grace_handler=consort.GraceHandler(
only_if_preceded_by_silence=True,
),
rhythm_maker=rhythmmakertools.NoteRhythmMaker(
tie_specifier=rhythmmakertools.TieSpecifier(
tie_across_divisions=False,
),
),
)
segment_maker = consort.SegmentMaker(
discard_final_silence=True,
desired_duration_in_seconds=4,
omit_stylesheets=True,
score_template=templatetools.GroupedRhythmicStavesScoreTemplate(
staff_count=1,
),
settings=(
consort.MusicSetting(
timespan_maker=consort.TaleaTimespanMaker(
initial_silence_talea=rhythmmakertools.Talea(
counts=(1,),
denominator=4,
),
playing_groupings=(2,),
),
v1=music_specifier,
),
),
tempo=abjad.MetronomeMark((1, 4), 60),
permitted_time_signatures=((4, 4),),
)
lilypond_file, metadata = segment_maker(segment_metadata=segment_metadata)
assert format(lilypond_file) == abjad.String.normalize(
r'''
\version "2.19.65"
\language "english"
#(ly:set-option 'relative-includes #t)
\score {
\context Score = "Grouped Rhythmic Staves Score" <<
\tag #'time
\context TimeSignatureContext = "Time Signature Context" {
{
\tempo 4=60
\time 4/4
s1 * 1
}
}
\context StaffGroup = "Grouped Rhythmic Staves Staff Group" <<
\context RhythmicStaff = "Staff 1" {
\context Voice = "Voice 1" {
{
% [Voice 1] Measure 1
{
\afterGrace
r4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
}
{
{
c'4
}
{
c'4
}
}
{
{
r4
}
}
}
}
>>
>>
}
''')
def test_GraceHandler_05():
music_specifier = consort.MusicSpecifier(
grace_handler=consort.GraceHandler(
only_if_preceded_by_nonsilence=True,
),
rhythm_maker=rhythmmakertools.NoteRhythmMaker(
tie_specifier=rhythmmakertools.TieSpecifier(
tie_across_divisions=False,
),
),
)
segment_maker = consort.SegmentMaker(
discard_final_silence=True,
desired_duration_in_seconds=4,
omit_stylesheets=True,
score_template=templatetools.GroupedRhythmicStavesScoreTemplate(
staff_count=1,
),
settings=(
consort.MusicSetting(
timespan_maker=consort.TaleaTimespanMaker(
initial_silence_talea=rhythmmakertools.Talea(
counts=(1,),
denominator=4,
),
playing_groupings=(2,),
),
v1=music_specifier,
),
),
tempo=abjad.MetronomeMark((1, 4), 60),
permitted_time_signatures=((4, 4),),
)
lilypond_file, metadata = segment_maker(segment_metadata=segment_metadata)
assert format(lilypond_file) == abjad.String.normalize(
r'''
\version "2.19.65"
\language "english"
#(ly:set-option 'relative-includes #t)
\score {
\context Score = "Grouped Rhythmic Staves Score" <<
\tag #'time
\context TimeSignatureContext = "Time Signature Context" {
{
\tempo 4=60
\time 4/4
s1 * 1
}
}
\context StaffGroup = "Grouped Rhythmic Staves Staff Group" <<
\context RhythmicStaff = "Staff 1" {
\context Voice = "Voice 1" {
{
% [Voice 1] Measure 1
{
r4
}
}
{
{
\afterGrace
c'4
{
\override Flag.stroke-style = #"grace"
\override Script.font-size = #0.5
c'16
\revert Flag.stroke-style
\revert Script.font-size
}
}
{
c'4
}
}
{
{
r4
}
}
}
}
>>
>>
}
''')
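Each test above compares `format(lilypond_file)` against `abjad.String.normalize(r'''…''')`, which lets the expected LilyPond be written indented inside the test body. A rough stand-in for that normalization — illustrative only, not the abjad implementation:

```python
import textwrap

def normalize(text):
    """Rough stand-in for abjad.String.normalize: strip blank edge lines, then dedent."""
    lines = text.splitlines()
    # Drop leading and trailing blank lines left by the triple-quoted literal.
    while lines and not lines[0].strip():
        lines.pop(0)
    while lines and not lines[-1].strip():
        lines.pop()
    return textwrap.dedent("\n".join(lines))
```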
| 38.603531 | 78 | 0.300665 | 1,282 | 24,050 | 5.516381 | 0.095164 | 0.045249 | 0.067873 | 0.052036 | 0.96366 | 0.96366 | 0.957721 | 0.957721 | 0.957721 | 0.943298 | 0 | 0.034384 | 0.63842 | 24,050 | 622 | 79 | 38.665595 | 0.778864 | 0 | 0 | 0.84456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025907 | 1 | 0.025907 | false | 0 | 0.031088 | 0 | 0.056995 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b57f017b8a4b52fbfa3a6c5def47add616840bb6 | 219 | py | Python | tests/conftest.py | SurajDonthi/auto-labeling-pipeline | 3c334d973faae0cb5ef66d30fd85d4bcfbac8a6b | [
"MIT"
] | 31 | 2020-11-01T15:10:59.000Z | 2022-03-17T06:27:39.000Z | tests/conftest.py | SurajDonthi/auto-labeling-pipeline | 3c334d973faae0cb5ef66d30fd85d4bcfbac8a6b | [
"MIT"
] | 9 | 2020-12-06T05:03:34.000Z | 2021-12-07T14:06:36.000Z | tests/conftest.py | SurajDonthi/auto-labeling-pipeline | 3c334d973faae0cb5ef66d30fd85d4bcfbac8a6b | [
"MIT"
] | 12 | 2021-02-19T08:49:44.000Z | 2021-10-21T22:46:18.000Z | import pathlib
import pytest
@pytest.fixture
def data_path():
return pathlib.Path(__file__).parent / 'data'
@pytest.fixture
def cassettes_path():
return pathlib.Path(__file__).parent / 'fixtures/cassettes'
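Both fixtures resolve directories relative to this conftest.py via `pathlib`. The same resolution shown standalone — illustrative, not tied to pytest collection:

```python
import pathlib

def fixture_dir(name):
    """Resolve a directory that sits next to this file, as the fixtures above do."""
    return pathlib.Path(__file__).parent / name
```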
| 15.642857 | 63 | 0.744292 | 27 | 219 | 5.666667 | 0.444444 | 0.169935 | 0.20915 | 0.27451 | 0.405229 | 0.405229 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141553 | 219 | 13 | 64 | 16.846154 | 0.81383 | 0 | 0 | 0.25 | 0 | 0 | 0.100457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
a9448445e229c9e0782f946102c093146c6476c2 | 646,125 | py | Python | lauetoolsnn/utils_lauenn.py | ravipurohit1991/lauetoolsnn | 6cc413fb60872297c9ca7a202dd9dd596d4a9a5b | [
"MIT"
] | null | null | null | lauetoolsnn/utils_lauenn.py | ravipurohit1991/lauetoolsnn | 6cc413fb60872297c9ca7a202dd9dd596d4a9a5b | [
"MIT"
] | null | null | null | lauetoolsnn/utils_lauenn.py | ravipurohit1991/lauetoolsnn | 6cc413fb60872297c9ca7a202dd9dd596d4a9a5b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Nov 6 14:47:33 2021
@author: PURUSHOT
Functions for lauetoolsneuralnetwork
"""
__author__ = "Ravi raj purohit PURUSHOTTAM RAJ PUROHIT, CRG-IF BM32 @ ESRF"
import warnings
warnings.filterwarnings('ignore')
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import logging
logger = logging.getLogger()
old_level = logger.level
logger.setLevel(100)
# import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Qt5Agg')
# matplotlib.rcParams.update({'font.size': 14})
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
from random import random as rand1
from math import acos
import time
import enum
import functools
import math
from numpy import pi, dot
import scipy
# from scipy.spatial.transform import Rotation as R
import _pickle as cPickle
import configparser
from skimage.transform import (hough_line, hough_line_peaks)
# =============================================================================
# Additional networkx module
import networkx as nx
# =============================================================================
## LaueTools import
try:
from lauetools import dict_LaueTools as dictLT
from lauetools import IOLaueTools as IOLT
from lauetools import generaltools as GT
from lauetools import CrystalParameters as CP
from lauetools import lauecore as LT
from lauetools import LaueGeometry as Lgeo
from lauetools import readmccd as RMCCD
from lauetools import FitOrient as FitO
from lauetools import findorient as FindO
from lauetools import IOimagefile as IOimage
except:
import lauetoolsnn.lauetools.dict_LaueTools as dictLT
import lauetoolsnn.lauetools.IOLaueTools as IOLT
import lauetoolsnn.lauetools.generaltools as GT
import lauetoolsnn.lauetools.CrystalParameters as CP
import lauetoolsnn.lauetools.lauecore as LT
import lauetoolsnn.lauetools.LaueGeometry as Lgeo
import lauetoolsnn.lauetools.readmccd as RMCCD
import lauetoolsnn.lauetools.FitOrient as FitO
import lauetoolsnn.lauetools.findorient as FindO
import lauetoolsnn.lauetools.IOimagefile as IOimage
from collections import OrderedDict
from math import cos, radians, sin, sqrt
import fractions
import collections
import random, itertools
import re
## Keras import
tensorflow_keras = True
try:
import tensorflow as tf
import keras
from keras.models import Sequential
from tensorflow.keras.callbacks import Callback
from keras.layers import Dense, Activation, Dropout
from tensorflow.keras.utils import to_categorical
from keras.regularizers import l2
# from tf.keras.layers.normalization import BatchNormalization
except:
print("tensorflow not loaded")
tensorflow_keras = False
try:
from wyckpos import wp, eqhkl_default, eqhkl_custom, sgrp_sym, sgrp_name,\
sgrp_params
except:
from lauetoolsnn.wyckpos import wp, eqhkl_default, eqhkl_custom, sgrp_sym, sgrp_name,\
sgrp_params
## for faster binning of histogram
## C version of hist
# from fast_histogram import histogram1d
import h5py
## GPU Nvidia drivers need to be installed! Ughh
## if you wish to use only the CPU, set the value to -1; else set it to 0 for GPU
## CPU training is suggested (as the model requires more RAM)
try:
# Disable all GPUS
tf.config.set_visible_devices([], 'GPU')
visible_devices = tf.config.get_visible_devices()
for device in visible_devices:
assert device.device_type != 'GPU'
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
def resource_path(relative_path, verbose=0):
""" Get absolute path to resource, works for dev and for PyInstaller """
base_path = os.path.dirname(__file__)
if verbose:
print("Base path of the library: ",base_path)
return os.path.join(base_path, relative_path)
metricsNN = [
keras.metrics.FalseNegatives(name="fn"),
keras.metrics.FalsePositives(name="fp"),
keras.metrics.TrueNegatives(name="tn"),
keras.metrics.TruePositives(name="tp"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="accuracy"),  ## NOTE: this metric is Recall, registered under the name "accuracy"; renaming it would change the logged metric keys
]
ACCEPTABLE_FORMATS = [".npz"]
gui_state = np.random.randint(1e6)
DIGITS = int(abs(np.log10(1e-08)))
CST_ENERGYKEV = 12.398
hklcond_group = re.compile(r'([-hkil0-9\(\)]+): ([-+hklnor1-8=\s,]+)(?:, |$)')
DEG = np.pi / 180.0
dist_threshold = 50
# residues_threshold=0.5
# nb_spots_global_threshold=8
# option_global = "v2"
# use_om_user = True
# nb_spots_consider = 100
##v1 same as strains
##v2 ambiguous spots all
##v3 ambiguous spots with uniqueness
# if you wish to plot the training and testing dataset images
plot_images = False
try:
from adjustText import adjust_text
except:
plot_images = False
def call_global():
global residues_threshold, nb_spots_global_threshold, option_global, \
use_om_user, nb_spots_consider, path_user_OM, intensity_threshold, \
FitPixelDev_global123, boxsize, softmax_threshold_global123, cap_matchrate123,\
strain_free_parameters, additional_expression
## read a test config file and update the variables.
config_setting = configparser.ConfigParser()
filepath = resource_path('settings.ini')
config_setting.read(filepath)
residues_threshold = float(config_setting.get('CALLER', 'residues_threshold'))
nb_spots_global_threshold = int(float(config_setting.get('CALLER', 'nb_spots_global_threshold')))
option_global = config_setting.get('CALLER', 'option_global')
use_om_user = config_setting.get('CALLER', 'use_om_user') == "true"
nb_spots_consider = int(float(config_setting.get('CALLER', 'nb_spots_consider')))
path_user_OM = config_setting.get('CALLER', 'path_user_OM')
intensity_threshold = int(float(config_setting.get('CALLER', 'intensity')))
boxsize = int(float(config_setting.get('CALLER', 'boxsize')))
FitPixelDev_global123 = int(float(config_setting.get('CALLER', 'pixdev')))
softmax_threshold_global123 = float(config_setting.get('CALLER', 'cap_softmax'))
cap_matchrate123 = float(config_setting.get('CALLER', 'cap_mr'))
strain_free_parameters = config_setting.get('CALLER', 'strain_free_parameters').split(",")
additional_expression = config_setting.get('CALLER', 'additional_expression').split(",")
if cap_matchrate123 < 1:
cap_matchrate123 = cap_matchrate123 *100.0
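call_global pulls its thresholds from the [CALLER] section of settings.ini via configparser. A minimal sketch of the same pattern, using an in-memory config string so it is self-contained (the keys shown are a subset and the values are illustrative, not the library defaults):

```python
import configparser

config = configparser.ConfigParser()
# in the library these values come from settings.ini next to the module
config.read_string("""
[CALLER]
residues_threshold = 0.5
use_om_user = true
cap_mr = 0.5
""")

residues_threshold = float(config.get('CALLER', 'residues_threshold'))
use_om_user = config.get('CALLER', 'use_om_user') == "true"
cap_matchrate = float(config.get('CALLER', 'cap_mr'))
if cap_matchrate < 1:
    cap_matchrate = cap_matchrate * 100.0  # interpret a fraction as a percentage
print(residues_threshold, use_om_user, cap_matchrate)  # 0.5 True 50.0
```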
def rmv_freq_class(freq_rmv = 0, elements="all", freq_rmv1 = 0, elements1="all",
save_directory="", material_=None, material1_=None, write_to_console=None,
progress=None, qapp=None):
classhkl0 = np.load(save_directory+"//grain_classhkl_angbin.npz")["arr_0"]
if write_to_console != None:
write_to_console("First material index length: " + str(len(classhkl0)))
ind_mat = np.array([ij for ij in range(len(classhkl0))])
if material_ != material1_:
classhkl1 = np.load(save_directory+"//grain_classhkl_angbin1.npz")["arr_0"]
if write_to_console != None:
write_to_console("Second material index length: " + str(len(classhkl1)))
pre_ind = ind_mat[-1] + 1
ind_mat1 = np.array([pre_ind+ij for ij in range(len(classhkl1))])
classhkl = np.vstack((classhkl0, classhkl1))
else:
classhkl = classhkl0
# ind_mat = None
ind_mat1 = None
elements1 = "all"
freq_rmv1 = 0
angbins = np.load(save_directory+"//grain_classhkl_angbin.npz")["arr_1"]
loc = np.array([ij for ij in range(len(classhkl))])
trainy_ = array_generatorV2(save_directory+"//training_data", 0, progress, qapp)
if material_ != material1_:
## split trainy_ for two materials index
trainy_mat0 = []
trainy_mat1 = []
for ijnode in trainy_:
if ijnode in ind_mat:
trainy_mat0.append(ijnode)
elif ijnode in ind_mat1:
trainy_mat1.append(ijnode)
trainy_mat0 = np.array(trainy_mat0)
trainy_mat1 = np.array(trainy_mat1)
else:
trainy_mat0 = trainy_
trainy_mat1 = None
if write_to_console != None:
write_to_console("Class ID and frequency; check for data imbalance and select \
appropriate LOSS function for training the model")
## let's extract the least frequently occurring classes to simplify the training dataset
if elements == "all":
most_common0 = collections.Counter(trainy_mat0).most_common()
else:
most_common0 = collections.Counter(trainy_mat0).most_common()[:elements]
if material_ != material1_:
if elements1 =="all":
most_common1 = collections.Counter(trainy_mat1).most_common()
else:
most_common1 = collections.Counter(trainy_mat1).most_common()[:elements1]
else:
most_common1 = []
most_common = most_common0 + most_common1
print(most_common)
class_present = [most_common[i][0] for i in range(len(most_common))]
rmv_indices = []
count = 0
for i in loc:
if i not in class_present:
rmv_indices.append(i)
elif i in class_present:
ind_ = np.where(np.array(class_present)==i)[0]
ij = most_common[ind_[0]]
if material_ != material1_:
if (ij[0] in ind_mat) and (ij[1] <= freq_rmv):
rmv_indices.append(int(ij[0]))
if (ij[0] in ind_mat1) and (ij[1] <= freq_rmv1):
rmv_indices.append(int(ij[0]))
else:
if (ij[1] <= freq_rmv):
rmv_indices.append(int(ij[0]))
else:
if write_to_console != None:
write_to_console("Something Fishy in Remove Freq Class module")
if material_ != material1_:
for i in rmv_indices:
if i in ind_mat:
indd = np.where(ind_mat == i)[0]
ind_mat = np.delete(ind_mat, indd, axis=0)
elif i in ind_mat1:
indd = np.where(ind_mat1 == i)[0]
ind_mat1 = np.delete(ind_mat1, indd, axis=0)
else:
for i in rmv_indices:
if i in ind_mat:
indd = np.where(ind_mat == i)[0]
ind_mat = np.delete(ind_mat, indd, axis=0)
loc_new = np.delete(loc, rmv_indices)
occurances = [most_common[i][1] for i in range(len(most_common)) if int(most_common[i][0]) in loc_new]
occurances = np.array(occurances)
class_weight = {}
class_weight_temp = {}
count = 0
for i in loc_new:
for ij in most_common:
if int(ij[0]) == i:
class_weight[count] = int(np.max(occurances)/ij[1]) ##+99 a quick hack to influence the weights
class_weight_temp[int(ij[0])] = int(np.max(occurances)/ij[1])
count += 1
for occ in range(len(most_common)):
if int(most_common[occ][0]) in loc_new:
if write_to_console != None:
if int(most_common[occ][0]) == -100:
write_to_console("Unclassified HKL (-100); occurance : "+str(most_common[occ][1])+\
": NN_weights : 0.0")
else:
write_to_console("HKL : " +str(classhkl[int(most_common[occ][0])])+"; occurance : "+\
str(most_common[occ][1])+\
": NN_weights : "+ str(class_weight_temp[int(most_common[occ][0])]))
if write_to_console != None:
write_to_console(str(len(rmv_indices))+ " classes removed from the classHKL object [removal frequency: "+\
str(freq_rmv)+"] (before:"+str(len(classhkl))+", now:"+str(len(classhkl)-len(rmv_indices))+")")
print(str(len(rmv_indices))+ " classes removed from the classHKL object [removal frequency: "+\
str(freq_rmv)+"] (before:"+str(len(classhkl))+", now:"+str(len(classhkl)-len(rmv_indices))+")")
classhkl = np.delete(classhkl, rmv_indices, axis=0)
## save the altered classHKL object
if material_ != material1_:
np.savez_compressed(save_directory+'//MOD_grain_classhkl_angbin.npz', classhkl, angbins, loc_new,
rmv_indices, freq_rmv, len(ind_mat), len(ind_mat1))
else:
np.savez_compressed(save_directory+'//MOD_grain_classhkl_angbin.npz', classhkl, angbins, loc_new,
rmv_indices, freq_rmv)
with open(save_directory + "//class_weights.pickle", "wb") as output_file:
cPickle.dump([class_weight], output_file)
if write_to_console != None:
write_to_console("Saved class weights data")
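The class_weight dictionary built above weights each class by inverse frequency (max class count divided by the class count). A standalone sketch of that weighting scheme using collections.Counter (the labels are illustrative, not taken from any training data):

```python
import collections


def inverse_frequency_weights(labels):
    """Weight each class by max_count / class_count (as rmv_freq_class does)."""
    counts = collections.Counter(labels)
    max_count = max(counts.values())
    # consecutive integer keys, as Keras' class_weight argument expects
    return {idx: int(max_count / counts[cls])
            for idx, cls in enumerate(sorted(counts))}


labels = [0, 0, 0, 0, 1, 1, 2]
weights = inverse_frequency_weights(labels)
print(weights)  # rare classes get larger weights: {0: 1, 1: 2, 2: 4}
```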
def array_generator(path_, batch_size, n_classes, loc_new, write_to_console=None, tocategorical=True):
"""
Assign a new class to data that is removed (to include in the training anyway)
"""
array_pairs = get_path(path_, ver=0)
random.shuffle(array_pairs)
zipped = itertools.cycle(array_pairs)
while True:
temp_var = False
for bs in range(batch_size):
array_path = next(zipped)
obj = np.load(array_path)
trainX = obj["arr_0"]
loc1 = obj["arr_1"]
if len(trainX) == 0 or len(loc1) == 0:
if write_to_console != None:
write_to_console("Skipping File: "+ array_path+"; No data is found")
if bs == 0:
temp_var = True
continue
## remove the non frequent class and rearrange the data
loc1_new = []
loc1_new_rmv = []
for k, i in enumerate(loc1):
temp_loc = np.where(loc_new==i)[0]
if len(temp_loc) == 1:
loc1_new.append(temp_loc)
else:
loc1_new_rmv.append(k)
loc1_new = np.array(loc1_new).ravel()
loc1_new_rmv = np.array(loc1_new_rmv).ravel()
if len(trainX) != len(loc1_new):
if len(loc1_new_rmv) > 0:
trainX = np.delete(trainX, loc1_new_rmv, axis=0)
if bs == 0 or temp_var:
trainX1 = np.copy(trainX)
trainY1 = np.copy(loc1_new)
else:
trainX1 = np.vstack((trainX1, trainX))
trainY1 = np.hstack((trainY1, loc1_new))
## one-hot encode with a fixed width so every batch spans all n_classes columns
if tocategorical:
trainY1 = to_categorical(trainY1, num_classes=n_classes)
yield trainX1, trainY1
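array_generator one-hot encodes its labels with a fixed width of n_classes so that every batch has the same number of columns, even when a batch happens to lack the first or last class. A numpy-only sketch of fixed-width one-hot encoding (equivalent in effect to keras' to_categorical with num_classes set):

```python
import numpy as np


def one_hot_fixed_width(labels, n_classes):
    """One-hot encode integer labels into a fixed n_classes-wide matrix."""
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.size, n_classes), dtype=float)
    out[np.arange(labels.size), labels] = 1.0
    return out


y = one_hot_fixed_width([1, 3], n_classes=5)
print(y.shape)  # (2, 5) even though classes 0, 2 and 4 are absent
```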
def vali_array(path_, batch_size, n_classes, loc_new, write_to_console=None, tocategorical=True):
array_pairs = get_path(path_, ver=0)
random.shuffle(array_pairs)
zipped = itertools.cycle(array_pairs)
temp_var = False
for bs in range(batch_size):
array_path = next(zipped)
obj = np.load(array_path)
trainX = obj["arr_0"]
loc1 = obj["arr_1"]
if len(trainX) == 0 or len(loc1) == 0:
if write_to_console != None:
write_to_console("Skipping File: "+ array_path+"; No data is found")
if bs == 0:
temp_var = True
continue
## remove the non frequent class and rearrange the data
loc1_new = []
loc1_new_rmv = []
for k, i in enumerate(loc1):
temp_loc = np.where(loc_new==i)[0]
if len(temp_loc) == 1:
loc1_new.append(temp_loc)
else:
loc1_new_rmv.append(k)
loc1_new = np.array(loc1_new).ravel()
loc1_new_rmv = np.array(loc1_new_rmv).ravel()
if len(trainX) != len(loc1_new):
if len(loc1_new_rmv) > 0:
trainX = np.delete(trainX, loc1_new_rmv, axis=0)
if bs == 0 or temp_var:
trainX1 = trainX
trainY1 = loc1_new
else:
trainX1 = np.vstack((trainX1, trainX))
trainY1 = np.hstack((trainY1, loc1_new))
## one-hot encode with a fixed width so every batch spans all n_classes columns
if tocategorical:
trainY1 = to_categorical(trainY1, num_classes=n_classes)
return trainX1, trainY1
def get_path(path_, ver=0):
image_files = []
for dir_entry in os.listdir(path_):
if os.path.isfile(os.path.join(path_, dir_entry)) and \
os.path.splitext(dir_entry)[1] in ACCEPTABLE_FORMATS:
file_name, file_extension = os.path.splitext(dir_entry)
image_files.append((file_name, file_extension,
os.path.join(path_, dir_entry)))
return_value = []
for image_file, _, image_full_path in image_files:
if image_file == "grain_classhkl_angbin":
continue
if image_file == "grain_classhkl_angbin1":
continue
if ver == 1 and image_file == "grain_init":
continue
if ver == 1 and image_file == "grain_init1":
continue
return_value.append((image_full_path))
return return_value
def array_generator_verify(path_, batch_size, n_classes, loc_new, write_to_console=None):
array_pairs = get_path(path_, ver=1)
random.shuffle(array_pairs)
zipped = itertools.cycle(array_pairs)
while True:
temp_var = False
for bs in range(batch_size):
array_path = next(zipped)
obj = np.load(array_path)
loc1 = obj["arr_1"]
if len(loc1) == 0:
if write_to_console !=None:
write_to_console("Skipping File: "+ array_path+"; No data is found")
if bs == 0:
temp_var = True
continue
## remove the non frequent class and rearrange the data
loc1_new = []
for k, i in enumerate(loc1):
temp_loc = np.where(loc_new==i)[0]
if len(temp_loc) == 1:
loc1_new.append(temp_loc)
loc1_new = np.array(loc1_new).ravel()
if bs == 0 or temp_var:
trainY1 = np.copy(loc1_new)
else:
trainY1 = np.hstack((trainY1, loc1_new))
return trainY1
def create_additional_data(path_, write_to_console=None, material=None, material1=None):
"""array_generator_verify(self.save_directory+"//training_data", batch_size,
len(self.classhkl), self.loc_new, self.write_to_console)
if generate_additional_data==True"""
array_pairs = get_path(path_, ver=1)
for ijk in array_pairs:
if os.path.basename(ijk).startswith(material+"_grain_"):
obj = np.load(ijk)
loc1 = obj["arr_1"]
for kji in array_pairs:
pass  # TODO: combine the two grain patterns into one
# open each npz file and combine two grains to form another Laue pattern
##save all data then open them and combine into one laue pattern --> better for two phase material
def array_generatorV2(path_, ver=1, progress=None, qapp=None):
array_pairs = get_path(path_, ver=ver)
random.shuffle(array_pairs)
if progress !=None:
progress.setMaximum(len(array_pairs))
for bs in range(len(array_pairs)):
loc1 = np.load(array_pairs[bs])["arr_1"]
if bs == 0:
trainY1 = loc1
else:
trainY1 = np.hstack((trainY1, loc1))
if progress !=None:
progress.setValue(bs+1)
if qapp !=None:
qapp.processEvents()
return trainY1
def printProgressBar(iteration, total, prefix = '', suffix = 'Complete',
decimals = 1, length = 50, fill = '█', printEnd = "\r"):
percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
filledLength = int(length * iteration // total)
bar = fill * filledLength + '-' * (length - filledLength)
print(f'\r{prefix} |{bar}| {percent}% {suffix}', end = printEnd)
# Print New Line on Complete
if iteration == total:
print()
def mse_images(pathA, pathB, ix, iy, ccd_label, progressbar=False, iteration=None, total=None):
# the 'Mean Squared Error' between the two images: the sum of the
# squared pixel differences divided by the number of pixels;
# NOTE: the two images must have the same dimensions
imageA, _, _ = IOimage.readCCDimage(pathA, stackimageindex=-1,
CCDLabel=ccd_label,
dirname=None, verbose=0)
imageB, _, _ = IOimage.readCCDimage(pathB, stackimageindex=-1,
CCDLabel=ccd_label,
dirname=None, verbose=0)
err = np.sum((imageA.astype("int") - imageB.astype("int")) ** 2)
err /= float(imageA.shape[0] * imageA.shape[1])
if progressbar:
printProgressBar(iteration, total-1)
return err, ix, iy
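mse_images computes the pixel-wise mean squared error between two detector images; the core computation without the file I/O looks like this (the arrays below are toy data, not real CCD frames):

```python
import numpy as np


def mse(image_a, image_b):
    """Mean squared pixel difference; both images must share the same shape."""
    a = image_a.astype("int")
    b = image_b.astype("int")
    return np.sum((a - b) ** 2) / float(a.shape[0] * a.shape[1])


a = np.zeros((4, 4))
b = np.ones((4, 4))
print(mse(a, b))  # 1.0: every pixel differs by exactly 1
```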
class LoggingCallback(Callback):
"""Callback that logs message at end of epoch.
"""
def __init__(self, print_fcn, progress_func, qapp, model, fn_model):
Callback.__init__(self)
self.print_fcn = print_fcn
self.progress_func = progress_func
self.batch_count = 0
self.qapp = qapp
self.model = model
self.model_name = fn_model
def on_batch_end(self, batch, logs=None):
self.batch_count += 1
self.progress_func.setValue(self.batch_count)
self.qapp.processEvents()
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
msg = "{Epoch: %i} %s" % (epoch, ", ".join("%s: %f" % (k, v) for k, v in logs.items()))
self.print_fcn(msg)
model_json = self.model.to_json()
with open(self.model_name+".json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
self.model.save_weights(self.model_name+"_"+str(epoch)+".h5")
def model_arch_general(n_bins, n_outputs, kernel_coeff = 0.0005, bias_coeff = 0.0005, lr=None, verbose=1,
write_to_console=None):
"""
Very simple and straight forward Neural Network with few hyperparameters
straighforward RELU activation strategy with cross entropy to identify the HKL
Tried BatchNormalization --> no significant impact
Tried weighted approach --> not better for HCP
Trying Regularaization
l2(0.001) means that every coefficient in the weight matrix of the layer
will add 0.001 * weight_coefficient_value**2 to the total loss of the network
"""
if n_outputs >= n_bins:
param = n_bins
if param*15 < (2*n_outputs): ## quick hack; needs a proper implementation
param = (n_bins + n_outputs)//2
else:
# param = n_outputs ## More reasonable ???
param = n_outputs*2 ## More reasonable ???
# param = n_bins//2
model = Sequential()
model.add(keras.Input(shape=(n_bins,)))
## Hidden layer 1
model.add(Dense(n_bins, kernel_regularizer=l2(kernel_coeff), bias_regularizer=l2(bias_coeff)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.3)) ## Adding dropout as we introduce some uncertain data with noise
## Hidden layer 2
model.add(Dense(((param)*15 + n_bins)//2, kernel_regularizer=l2(kernel_coeff), bias_regularizer=l2(bias_coeff)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.3))
## Hidden layer 3
model.add(Dense((param)*15, kernel_regularizer=l2(kernel_coeff), bias_regularizer=l2(bias_coeff)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.3))
## Output layer
model.add(Dense(n_outputs, activation='softmax'))
## Compile model
if lr != None:
otp = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(loss='categorical_crossentropy', optimizer=otp, metrics=[metricsNN])
else:
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=[metricsNN])
if verbose == 1:
model.summary()
stringlist = []
model.summary(print_fn=lambda x: stringlist.append(x))
short_model_summary = "\n".join(stringlist)
if write_to_console!=None:
write_to_console(short_model_summary)
return model
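The hidden-layer widths in model_arch_general are derived from the input size (n_bins) and the number of output classes (n_outputs). The sizing rule alone, extracted for clarity as a standalone helper (the example values are illustrative):

```python
def hidden_layer_widths(n_bins, n_outputs):
    """Reproduce the layer-width heuristic used by model_arch_general."""
    if n_outputs >= n_bins:
        param = n_bins
        if param * 15 < 2 * n_outputs:  # quick hack kept from the original
            param = (n_bins + n_outputs) // 2
    else:
        param = n_outputs * 2
    # widths of hidden layers 1, 2 and 3
    return [n_bins, (param * 15 + n_bins) // 2, param * 15]


print(hidden_layer_widths(1200, 100))  # [1200, 2100, 3000]
```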
def generate_classHKL(n, rules, lattice_material, symmetry, material_, crystal=None, SG=None, general_diff_cond=False,
save_directory="", write_to_console=None, progress=None, qapp=None, ang_maxx = None, step = None):
temp_ = GT.threeindices_up_to(int(n))
classhkl_ = temp_
if write_to_console !=None:
write_to_console("Generating HKL objects")
# generate HKL object
if progress !=None:
progress.setMaximum(len(classhkl_))
hkl_all = {}
# another_method = False
for i in range(len(classhkl_)):
new_hkl = classhkl_[i,:]
if general_diff_cond:
cond_proceed = crystal.hkl_allowed(new_hkl, returnequivalents=False)
else:
cond_proceed = True
if not cond_proceed:
continue
new_rounded_hkl = _round_indices(new_hkl)
mul_family = crystal.equivalent_hkls(new_rounded_hkl)
family = []
for sym in mul_family:
family.append(sym)
hkl_all[str(new_rounded_hkl)] = {"hkl":new_rounded_hkl,
"family": family}
if progress !=None:
progress.setValue(i+1)
if qapp !=None:
qapp.processEvents()
## FAST IMPLEMENTATION
## make comprehensive list of dictionary
equ_hkl = np.zeros((1,3))
for j in hkl_all.keys():
equ_hkl = np.vstack((equ_hkl, hkl_all[j]["family"]))
equ_hkl = np.delete(equ_hkl, 0, axis =0)
index_hkl = [j for j,k in enumerate(hkl_all.keys()) for i in range(len(hkl_all[k]["family"]))]
if write_to_console !=None:
write_to_console("Removing harmonics and building equivalent HKL objects")
if progress !=None:
progress.setMaximum(len(hkl_all.keys()))
ind_rmv = []
for j1, i1 in enumerate(hkl_all.keys()):
hkl_1 = hkl_all[i1]["hkl"]
temp1_ = np.all(hkl_1 == equ_hkl, axis=1)
if len(np.where(temp1_)[0]) != 0:
ind_ = np.where(temp1_)[0]
for inin in ind_:
if index_hkl[inin] > j1:
ind_rmv.append(i1)
break
if progress !=None:
progress.setValue(j1+1)
if qapp !=None:
qapp.processEvents()
if len(ind_rmv) != 0:
for inrmv in ind_rmv:
_ = hkl_all.pop(inrmv, None)
#Check same class HKL and remove them to avoid conflict
#ADD the removed class as Multiplicity for the non removed class
classhkl = np.zeros((len(hkl_all),3))
keys_rmv = []
for j1, i1 in enumerate(hkl_all.keys()):
hkl_object = hkl_all[i1]["hkl"]
classhkl[j1,:] = hkl_object
keys_rmv.append(i1)
if ang_maxx is None:
ang_maxx = 90
if step is None:
step = 0.1
codebars, angbins = get_material_data(material_ = material_, ang_maxx = ang_maxx, step = step,
hkl_ref=n, classhkl=classhkl)
# if write_to_console !=None:
# write_to_console("Verifying if two different HKL class have same angular distribution (can be very time consuming depending on the symmetry)")
list_appended = []
list_remove = []
for i, j in enumerate(codebars):
for k, l in enumerate(codebars):
# if i in list_appended and k in list_appended:
# continue
if i != k and np.all(j == l):
# string0 = "HKL's "+ str(classhkl[i])+" and "+str(classhkl[k])+" have exactly the same angular distribution."
# if write_to_console !=None:
# write_to_console(string0)
if keys_rmv[i] in list_remove or keys_rmv[k] in list_remove:
## one of the two classes is already scheduled for removal
continue
else:
list_remove.append(keys_rmv[i])
ind_rmv.append(keys_rmv[i])
for ijk in hkl_all[keys_rmv[i]]['family']:
hkl_all[keys_rmv[k]]['family'].append(ijk)
list_appended.append(i)
list_appended.append(k)
if len(list_remove) != 0:
for inrmv in list_remove:
_ = hkl_all.pop(inrmv, None)
if write_to_console !=None:
write_to_console("Finalizing the HKL objects")
hkl_all_class = hkl_all
hkl_millerindices = {}
classhkl = np.zeros((len(hkl_all),3))
for j1, i1 in enumerate(hkl_all.keys()):
hkl_object = hkl_all[i1]["hkl"]
classhkl[j1,:] = hkl_object
family = hkl_all_class[i1]["family"]
hkl_millerindices[i1] = np.array([ii for ii in family])
tempdict = hkl_millerindices
with open(save_directory + "//classhkl_data_"+material_+".pickle", "wb") as output_file:
cPickle.dump([classhkl, classhkl_, ind_rmv, n, temp_, \
hkl_all_class, hkl_all, lattice_material, symmetry], output_file)
with open(save_directory + "//classhkl_data_nonpickled_"+material_+".pickle", "wb") as output_file:
cPickle.dump([tempdict], output_file)
if write_to_console !=None:
write_to_console("Saved class HKL data in : "+save_directory + "//classhkl_data_"+material_+".pickle")
def write_training_testing_dataMTEX(save_directory,material_, material1_, lattice_material, lattice_material1,
material0_lauegroup, material1_lauegroup):
for imh in ["training_data", "testing_data"]:
image_files = []
path_ = save_directory+"//"+imh
for dir_entry in os.listdir(path_):
if os.path.isfile(os.path.join(path_, dir_entry)) and \
os.path.splitext(dir_entry)[1] in ACCEPTABLE_FORMATS:
file_name, file_extension = os.path.splitext(dir_entry)
image_files.append((file_name, file_extension,
os.path.join(path_, dir_entry)))
return_value = []
for image_file, _, image_full_path in image_files:
if image_file == "grain_classhkl_angbin" or image_file == "grain_classhkl_angbin1" or\
image_file == "grain_init" or image_file == "grain_init1":
continue
return_value.append((image_full_path))
ori_array1 = np.zeros((1,3,3))
if material_ != material1_:
ori_array2 = np.zeros((1,3,3))
for bs in return_value:
obj = np.load(bs)
ori1 = obj["arr_2"]
ori2 = obj["arr_3"]
flag = obj["arr_4"]
## flag 0 is random data
## flag 1, 2, 3 are small angle miori data
if flag == 0:
if len(ori1) != 0:
ori_array1 = np.vstack((ori_array1,ori1))
if material_ != material1_:
if len(ori2) != 0:
ori_array2 = np.vstack((ori_array2,ori2))
ori_array1 = np.delete(ori_array1, 0, axis=0)
phase_ori1 = np.ones(len(ori_array1))
ori_array = ori_array1
phase_ori = phase_ori1
if material_ != material1_:
ori_array2 = np.delete(ori_array2, 0, axis=0)
phase_ori2 = np.ones(len(ori_array2))*2
ori_array = np.vstack((ori_array, ori_array2))
phase_ori = np.hstack((phase_ori, phase_ori2))
if material_ == material1_:
lattice = lattice_material
material0_LG = material0_lauegroup
header = [
"Channel Text File",
"Prj lauetoolsnn",
"Author [Ravi raj purohit]",
"JobMode Grid",
"XCells "+str(len(ori_array)),
"YCells "+str(1),
"XStep 1.0",
"YStep 1.0",
"AcqE1 0",
"AcqE2 0",
"AcqE3 0",
"Euler angles refer to Sample Coordinate system (CS0)! Mag 100 Coverage 100 Device 0 KV 15 TiltAngle 40 TiltAxis 0",
"Phases 1",
str(round(lattice._lengths[0]*10,5))+";"+str(round(lattice._lengths[1]*10,5))+";"+\
str(round(lattice._lengths[2]*10,5))+"\t"+str(round(lattice._angles[0],5))+";"+\
str(round(lattice._angles[1],5))+";"+str(round(lattice._angles[2],5))+"\t"+"Material1"+ "\t"+material0_LG+ "\t"+"????"+"\t"+"????",
"Phase X Y Bands Error Euler1 Euler2 Euler3 MAD BC BS"]
else:
lattice = lattice_material
lattice1 = lattice_material1
material0_LG = material0_lauegroup
material1_LG = material1_lauegroup
header = [
"Channel Text File",
"Prj lauetoolsnn",
"Author [Ravi raj purohit]",
"JobMode Grid",
"XCells "+str(len(ori_array)),
"YCells "+str(1),
"XStep 1.0",
"YStep 1.0",
"AcqE1 0",
"AcqE2 0",
"AcqE3 0",
"Euler angles refer to Sample Coordinate system (CS0)! Mag 100 Coverage 100 Device 0 KV 15 TiltAngle 40 TiltAxis 0",
"Phases 2",
str(round(lattice._lengths[0]*10,5))+";"+str(round(lattice._lengths[1]*10,5))+";"+\
str(round(lattice._lengths[2]*10,5))+"\t"+str(round(lattice._angles[0],5))+";"+\
str(round(lattice._angles[1],5))+";"+str(round(lattice._angles[2],5))+"\t"+"Material1"+ "\t"+material0_LG+ "\t"+"????"+"\t"+"????",
str(round(lattice1._lengths[0]*10,5))+";"+str(round(lattice1._lengths[1]*10,5))+";"+\
str(round(lattice1._lengths[2]*10,5))+"\t"+str(round(lattice1._angles[0],5))+";"+\
str(round(lattice1._angles[1],5))+";"+str(round(lattice1._angles[2],5))+"\t"+"Material2"+ "\t"+material1_LG+ "\t"+"????"+"\t"+"????",
"Phase X Y Bands Error Euler1 Euler2 Euler3 MAD BC BS"]
# =================CALCULATION OF POSITION=====================================
euler_angles = np.zeros((len(ori_array),3))
phase_euler_angles = np.zeros(len(ori_array))
for i in range(len(ori_array)):
# euler_angles[i,:] = rot_mat_to_euler(ori_array[i,:,:])
euler_angles[i,:] = OrientationMatrix2Euler(ori_array[i,:,:])
phase_euler_angles[i] = phase_ori[i]
a = euler_angles
if material_ != material1_:
filename125 = save_directory+ "//"+material_+"_"+material1_+"_MTEX_UBmat_"+imh+".ctf"
else:
filename125 = save_directory+ "//"+material_+"_MTEX_UBmat_"+imh+".ctf"
f = open(filename125, "w")
for ij in range(len(header)):
f.write(header[ij]+" \n")
for j123 in range(euler_angles.shape[0]):
y_step = 1
x_step = 1 * j123
phase_id = int(phase_euler_angles[j123])
eul = str(phase_id)+'\t' + "%0.4f" % x_step +'\t'+"%0.4f" % y_step+'\t8\t0\t'+ \
"%0.4f" % a[j123,0]+'\t'+"%0.4f" % a[j123,1]+ \
'\t'+"%0.4f" % a[j123,2]+'\t0.0001\t180\t0\n'
string = eul
f.write(string)
f.close()
def get_material_data(material_="Cu", ang_maxx = 45, step = 0.5, hkl_ref=13, classhkl = None):
a, b, c, alpha, beta, gamma = dictLT.dict_Materials[material_][1]
Gstar = CP.Gstar_from_directlatticeparams(a, b, c, alpha, beta, gamma)
rules = dictLT.dict_Materials[material_][-1]
hkl2 = GT.threeindices_up_to(int(hkl_ref))
hkl2 = CP.ApplyExtinctionrules(hkl2,rules)
hkl2 = hkl2.astype(np.int16)
query_angle = ang_maxx/2.
angle_tol = ang_maxx/2.
metrics = Gstar
hkl1 = classhkl
H1 = hkl1
n1 = hkl1.shape[0]
H2 = hkl2
n2 = hkl2.shape[0]
dstar_square_1 = np.diag(np.inner(np.inner(H1, metrics), H1))
dstar_square_2 = np.diag(np.inner(np.inner(H2, metrics), H2))
scalar_product = np.inner(np.inner(H1, metrics), H2) * 1.0
d1 = np.sqrt(dstar_square_1.reshape((n1, 1))) * 1.0
d2 = np.sqrt(dstar_square_2.reshape((n2, 1))) * 1.0
outy = np.outer(d1, d2)
ratio = scalar_product / outy
ratio = np.round(ratio, decimals=7)
tab_angulardist = np.arccos(ratio) / (np.pi / 180.0)
np.putmask(tab_angulardist, np.abs(tab_angulardist) < 0.001, 400)
# self.write_to_console("Calculating Mutual angular distances")
# self.progress.setMaximum(len(tab_angulardist))
closest_angles_values = []
for ang_ in range(len(tab_angulardist)):
tab_angulardist_ = tab_angulardist[ang_,:]
angles_set = np.ravel(tab_angulardist_) # 1D array
sorted_ind = np.argsort(angles_set)
sorted_angles = angles_set[sorted_ind]
angle_query = angle_tol
if isinstance(query_angle, (list, np.ndarray, tuple)):
angle_query = query_angle[0]
array_angledist = np.abs(sorted_angles - angle_query)
pos_min = np.argmin(array_angledist)
closest_angle = sorted_angles[pos_min]
if np.abs(closest_angle - query_angle) > angle_tol:
if angle_query > 0.5:
pass
## TODO (get_material_data): handle the case where no angle lies within the tolerance
condition = array_angledist <= angle_tol
closest_index_in_sorted_angles_raw = np.where(condition)[0]
closest_angles_values.append(np.take(sorted_angles, closest_index_in_sorted_angles_raw))
# self.progress.setValue(ang_+1)
# QApplication.processEvents()
# self.write_to_console("Constructing histograms")
# self.progress.setMaximum(len(closest_angles_values))
codebars = []
angbins = np.arange(0, ang_maxx+step, step)
for i in range(len(closest_angles_values)):
angles = closest_angles_values[i]
fingerprint = np.histogram(angles, bins=angbins)[0]
# fingerprint = histogram1d(angles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## Normalize the histogram by its maximum: simple way
## Maybe better normalization is possible.. to be seen
max_codebars = np.max(fingerprint)
fingerprint = fingerprint/ max_codebars
codebars.append(fingerprint)
# self.progress.setValue(i+1)
# QApplication.processEvents()
# self.progress.setValue(0)
return codebars, angbins
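get_material_data reduces each HKL's set of mutual angular distances to a histogram "fingerprint", normalized by its largest bin. A minimal sketch of that binning step (the angle list is illustrative):

```python
import numpy as np


def angular_fingerprint(angles, ang_maxx=90.0, step=0.1):
    """Histogram a list of angles into fixed bins, normalized by the max bin."""
    angbins = np.arange(0, ang_maxx + step, step)
    fingerprint = np.histogram(angles, bins=angbins)[0]
    return fingerprint / np.max(fingerprint)


# two angles fall into the same 0.1-degree bin, so that bin normalizes to 1.0
fp = angular_fingerprint([10.03, 10.04, 25.33, 45.55])
print(fp.max())  # 1.0 after normalization
```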
def Euler2OrientationMatrix(euler):
"""Compute the orientation matrix :math:`\mathbf{g}` associated with
the 3 Euler angles :math:`(\phi_1, \Phi, \phi_2)`.
:param euler: The triplet of the Euler angles (in degrees).
:return g: The 3x3 orientation matrix.
"""
(rphi1, rPhi, rphi2) = np.radians(euler)
c1 = np.cos(rphi1)
s1 = np.sin(rphi1)
c = np.cos(rPhi)
s = np.sin(rPhi)
c2 = np.cos(rphi2)
s2 = np.sin(rphi2)
# rotation matrix g
g11 = c1 * c2 - s1 * s2 * c
g12 = s1 * c2 + c1 * s2 * c
g13 = s2 * s
g21 = -c1 * s2 - s1 * c2 * c
g22 = -s1 * s2 + c1 * c2 * c
g23 = c2 * s
g31 = s1 * s
g32 = -c1 * s
g33 = c
g = np.array([[g11, g12, g13], [g21, g22, g23], [g31, g32, g33]])
return g
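Since Euler2OrientationMatrix returns a rotation, its output should be orthonormal with determinant +1. A quick standalone check of the same Bunge-convention construction (the matrix entries are duplicated here so the snippet is self-contained):

```python
import numpy as np


def euler_to_g(euler):
    """Bunge-convention orientation matrix from Euler angles in degrees."""
    p1, P, p2 = np.radians(euler)
    c1, s1 = np.cos(p1), np.sin(p1)
    c, s = np.cos(P), np.sin(P)
    c2, s2 = np.cos(p2), np.sin(p2)
    return np.array([[c1 * c2 - s1 * s2 * c, s1 * c2 + c1 * s2 * c, s2 * s],
                     [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
                     [s1 * s, -c1 * s, c]])


g = euler_to_g([10.0, 20.0, 30.0])
print(np.allclose(g @ g.T, np.eye(3)))   # True: g is orthonormal
print(np.isclose(np.linalg.det(g), 1.0))  # True: a proper rotation
```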
def getpatterns_(nb, nb1, material_=None, material1_=None, emin=5, emax=23, detectorparameters=None, pixelsize=None,
sortintensity = False, ang_maxx = 45, step = 0.5, classhkl = None, classhkl1 = None, noisy_data=False,
remove_peaks=False, seed = None,hkl_all=None, lattice_material=None, family_hkl=None,
normal_hkl=None, index_hkl=None, hkl_all1=None, lattice_material1=None, family_hkl1=None,
normal_hkl1=None, index_hkl1=None, dim1=2048, dim2=2048, removeharmonics=1, flag = 0,
img_i=None, img_j=None, save_directory_=None, odf_data=None, odf_data1=None, modelp=None,
misorientation_angle=None, max_millerindex=0, max_millerindex1=0, general_diff_cond=False, crystal=None, crystal1=None,
phase_always_present=None):
s_tth, s_chi, s_miller_ind, _, _, _, \
ori_mat, ori_mat1 = simulatemultiplepatterns(nb, nb1, seed=seed, key_material=material_,
key_material1=material1_,
emin=emin, emax=emax,
detectorparameters=detectorparameters,
pixelsize=pixelsize,
sortintensity = sortintensity,
dim1=dim1, dim2=dim2,
removeharmonics=removeharmonics,
flag=flag, odf_data=odf_data,
odf_data1=odf_data1, mode=modelp,
misorientation_angle=misorientation_angle,
phase_always_present=phase_always_present)
if noisy_data:
## apply random Gaussian-type noise to the data (tth and chi),
## i.e. add noise to the angular distances
## instead of adding noise to all HKLs, add it to ~30% of them chosen at random
## a more realistic way of introducing strain would be through pixels, not 2theta
noisy_pixel = 0.15
indices_noise = np.random.choice(len(s_tth), int(len(s_tth)*0.3), replace=False)
noise_ = np.random.normal(0,noisy_pixel,len(indices_noise))
s_tth[indices_noise] = s_tth[indices_noise] + noise_
noise_ = np.random.normal(0,noisy_pixel,len(indices_noise))
s_chi[indices_noise] = s_chi[indices_noise] + noise_
if remove_peaks:
len_mi = np.array([iq for iq in range(len(s_miller_ind))])
len_mi = len_mi[int(0.6*len(s_miller_ind)):]
indices_remove = np.random.choice(len_mi, int(len(len_mi)*0.3), replace=False)
        ## delete randomly selected low-intensity peaks to simulate real
        ## peak detection, where some peaks may not be well detected
        ## TODO: an intensity-based approach could delete peaks depending on
        ## their structure factor and position on the detector
if len(indices_remove) !=0:
s_tth = np.delete(s_tth, indices_remove)
s_chi = np.delete(s_chi, indices_remove)
s_miller_ind = np.delete(s_miller_ind, indices_remove, axis=0)
else:
print(nb, nb1, material_, material1_, odf_data, odf_data1)
    ## replace each HKL with its relevant HKL class
    ## skip HKLs that don't follow the general diffraction rules
location = []
skip_hkl = []
delete_spots = []
for j, i in enumerate(s_miller_ind):
new_hkl = _round_indices(i[:3])
if i[3] == 0: ##material 1
if general_diff_cond:
cond_proceed = crystal.hkl_allowed(i[:3], returnequivalents=False)
else:
cond_proceed = True
if not cond_proceed:
delete_spots.append(j)
continue
if np.any(np.abs(new_hkl)>max_millerindex):
skip_hkl.append(j)
continue
temp_ = np.all(new_hkl == normal_hkl, axis=1)
if len(np.where(temp_)[0]) == 1:
ind_ = np.where(temp_)[0][0]
location.append(index_hkl[ind_])
elif len(np.where(temp_)[0]) == 0:
# print("Entering -100 for "+ str(i) + "\n")
skip_hkl.append(j)
elif len(np.where(temp_)[0]) > 1:
## first check if they both are same class or not
class_output = []
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl[np.where(temp_)[0][ij]]
class_output.append(indc)
if len(set(class_output)) <= 1:
location.append(class_output[0])
else:
skip_hkl.append(j)
print(i)
print(np.where(temp_)[0])
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl[np.where(temp_)[0][ij]]
print(classhkl[indc])
print("Entering -500: Skipping HKL as something is not proper with equivalent HKL module")
elif i[3] == 1: ##material 2
if general_diff_cond:
cond_proceed1 = crystal1.hkl_allowed(i[:3], returnequivalents=False)
else:
cond_proceed1 = True
if not cond_proceed1:
delete_spots.append(j)
continue
if np.any(np.abs(new_hkl)>max_millerindex1):
skip_hkl.append(j)
continue
temp_ = np.all(new_hkl == normal_hkl1, axis=1)
if len(np.where(temp_)[0]) == 1:
ind_ = np.where(temp_)[0][0]
location.append(index_hkl1[ind_])
elif len(np.where(temp_)[0]) == 0:
# print("Entering -100 for "+ str(i) + "\n")
skip_hkl.append(j)
elif len(np.where(temp_)[0]) > 1:
## first check if they both are same class or not
class_output = []
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl1[np.where(temp_)[0][ij]]
class_output.append(indc)
if len(set(class_output)) <= 1:
location.append(class_output[0])
else:
skip_hkl.append(j)
print(i)
print(np.where(temp_)[0])
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl1[np.where(temp_)[0][ij]]
print(classhkl[indc])
print("Entering -500: Skipping HKL as something is not proper with equivalent HKL module")
allspots_the_chi = np.transpose(np.array([s_tth/2., s_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(allspots_the_chi, allspots_the_chi))
codebars = []
angbins = np.arange(0,ang_maxx+step,step)
for i in range(len(tabledistancerandom)):
if i in skip_hkl or i in delete_spots: ## not saving skipped HKL
continue
angles = tabledistancerandom[i]
spots_delete = [i]
for del_spts in delete_spots:
spots_delete.append(del_spts)
angles = np.delete(angles, spots_delete)
# angles = np.delete(angles, i)# removing the self distance
fingerprint = np.histogram(angles, bins=angbins)[0]
# fingerprint = histogram1d(angles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## same normalization as before
max_codebars = np.max(fingerprint)
fingerprint = fingerprint/ max_codebars
codebars.append(fingerprint)
    if phase_always_present is not None:
suffix_ = "_development"
else:
suffix_ = ""
###########################################################
if flag in [0,1,2,3] and plot_images:
fig = plt.figure()
plt.scatter(s_tth, s_chi, c='k')
plt.ylabel(r'$\chi$ (in deg)',fontsize=8)
plt.xlabel(r'2$\theta$ (in deg)', fontsize=10)
plt.grid(linestyle='--', linewidth=0.5)
texts1=[]
for i, txt_hkl in enumerate(s_miller_ind):
txt = _round_indices(txt_hkl[:3])
# print("Actual hkl: "+str(txt_hkl[:3])+" ; Rounded hkl: "+str(txt[:3]))
if txt_hkl[3] == 0:
if np.any(np.abs(txt) > max_millerindex):
continue
elif txt_hkl[3] == 1:
if np.any(np.abs(txt) > max_millerindex1):
continue
txt = txt_hkl
texts1.append(plt.text(s_tth[i], s_chi[i], str(int(txt[0]))+" "+str(int(txt[1]))+" "+str(int(txt[2])), size=8))
adjust_text(texts1, only_move={'points':'y', 'text':'y'})
###########################################################
if flag == 0:
if plot_images:
plt.savefig(save_directory_+'//grain_'+str(img_i)+"_"+\
str(img_j)+suffix_+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
if len(codebars) != 0:
if nb == 0:
np.savez_compressed(save_directory_+'//'+material1_+'_grain_'+str(img_i)+"_"+\
str(img_j)+suffix_+'.npz', codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
elif nb1 == 0:
np.savez_compressed(save_directory_+'//'+material_+'_grain_'+str(img_i)+"_"+\
str(img_j)+suffix_+'.npz', codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
np.savez_compressed(save_directory_+'//'+material_+"_"+material1_+'_grain_'+str(img_i)+"_"+\
str(img_j)+suffix_+'.npz', codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
            print("Skipping simulation file: "+save_directory_+'//grain_'+\
                str(img_i)+"_"+str(img_j)+suffix_+'.npz'+"; no data conforming to user settings")
elif flag == 1:
if plot_images:
plt.savefig(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
if len(codebars) != 0:
np.savez_compressed(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo.npz', \
codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
            print("Skipping simulation file: "+save_directory_+'//grain_'+\
                str(img_j)+suffix_+'_smo.npz'+"; no data conforming to user settings")
elif flag == 2:
if plot_images:
plt.savefig(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo1.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
if len(codebars) != 0:
np.savez_compressed(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo1.npz', \
codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
            print("Skipping simulation file: "+save_directory_+'//grain_'+\
                str(img_j)+suffix_+'_smo1.npz'+"; no data conforming to user settings")
elif flag == 3:
if plot_images:
plt.savefig(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo2.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
if len(codebars) != 0:
np.savez_compressed(save_directory_+'//grain_'+str(img_j)+suffix_+'_smo2.npz', \
codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
            print("Skipping simulation file: "+save_directory_+'//grain_'+\
                str(img_j)+suffix_+'_smo2.npz'+"; no data conforming to user settings")
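# The core of getpatterns_ above is the angular-distance fingerprint: for each
# spot, histogram its angular distances to every other spot on a fixed grid and
# normalize by the most populated bin. A minimal sketch of that step (random
# angles stand in for the output of GT.calculdist_from_thetachi, for
# illustration only):

```python
import numpy as np

def fingerprint(angles, ang_maxx=45., step=0.5):
    # histogram angular distances on a fixed grid, as in getpatterns_
    angbins = np.arange(0., ang_maxx + step, step)
    hist = np.histogram(angles, bins=angbins)[0]
    # normalize by the maximum bin (same normalization as the training data)
    return hist / np.max(hist)

rng = np.random.default_rng(0)
angles = rng.uniform(0., 45., size=200)  # mock pairwise angles for one spot
fp = fingerprint(angles)
assert fp.shape == (90,)          # 90 bins of 0.5 deg covering 0..45 deg
assert fp.max() == 1.0 and fp.min() >= 0.0
```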
def simulatemultiplepatterns(nbUBs, nbUBs1, seed=123, key_material=None, key_material1=None,
emin=5, emax=23, detectorparameters=None, pixelsize=None,
sortintensity = False, dim1=2048, dim2=2048, removeharmonics=1, flag = 0,
odf_data=None, odf_data1=None, mode="random", misorientation_angle = 1,
phase_always_present=None):
detectordiameter = pixelsize * dim1 #TODO * 2.0
# UBelemagnles = np.random.random((3,nbUBs))*360-180
orientation_send = []
orientation_send1 = []
if flag == 0:
g = np.zeros((nbUBs, 3, 3))
if key_material != key_material1:
g1 = np.zeros((nbUBs1, 3, 3))
if mode == "random":
if key_material != key_material1:
for igr in range(nbUBs1):
phi1 = rand1() * 360.
phi = 180. * acos(2 * rand1() - 1) / np.pi
phi2 = rand1() * 360.
g1[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
orientation_send1.append(g1[igr])
for igr in range(nbUBs):
phi1 = rand1() * 360.
phi = 180. * acos(2 * rand1() - 1) / np.pi
phi2 = rand1() * 360.
g[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
orientation_send.append(g[igr])
elif mode == "uniform":
if key_material != key_material1:
g1 = odf_data1
for igr in range(len(g1)):
orientation_send1.append(g1[igr])
g = odf_data
for igr in range(len(g)):
orientation_send.append(g[igr])
elif flag == 1 or flag == 2 or flag == 3:
nbUBs = 2
g = np.zeros((nbUBs, 3, 3))
for igr in range(nbUBs):
if igr == 0:
phi1 = rand1() * 360.
phi = 180. * acos(2 * rand1() - 1) / np.pi
phi2 = rand1() * 360.
g[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
orientation_send.append(g[igr])
elif igr == 1:
                phi2 = phi2 + misorientation_angle ## add user-defined misorientation (deg) along phi2
g[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
orientation_send1.append(g[igr])
l_tth, l_chi, l_miller_ind, l_posx, l_posy, l_E, l_intensity = [],[],[],[],[],[],[]
if flag == 1:
for grainind in range(nbUBs):
UBmatrix = g[grainind]
grain = CP.Prepare_Grain(key_material, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.zeros(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
elif flag == 2:
for grainind in range(nbUBs):
UBmatrix = g[grainind]
grain = CP.Prepare_Grain(key_material1, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.ones(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
elif flag == 3:
for grainind in range(nbUBs):
UBmatrix = g[grainind]
if grainind == 0:
grain = CP.Prepare_Grain(key_material, UBmatrix)
else:
grain = CP.Prepare_Grain(key_material1, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.ones(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
else:
for grainind in range(nbUBs):
UBmatrix = g[grainind]
grain = CP.Prepare_Grain(key_material, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.zeros(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
if (key_material != key_material1):
for grainind in range(nbUBs1):
# print(nbUBs, nbUBs1, key_material, key_material1, flag)
UBmatrix = g1[grainind]
grain = CP.Prepare_Grain(key_material1, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.ones(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
## add constant UB matrix to the simulated data
    if phase_always_present is not None:
UBmatrix, key_material_new = phase_always_present.split(';')
UBmat = []
for kk in UBmatrix.split(","):
UBmat.append(float(kk))
UBmat = np.array(UBmat)
UBmatrix = UBmat.reshape((3,3))
grain = CP.Prepare_Grain(key_material_new, UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
if key_material_new == key_material:
s_miller_ind = np.c_[s_miller_ind, np.zeros(len(s_miller_ind))]
elif key_material_new == key_material1:
s_miller_ind = np.c_[s_miller_ind, np.ones(len(s_miller_ind))]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
#flat_list = [item for sublist in l for item in sublist]
s_tth = np.array([item for sublist in l_tth for item in sublist])
s_chi = np.array([item for sublist in l_chi for item in sublist])
s_miller_ind = np.array([item for sublist in l_miller_ind for item in sublist])
s_posx = np.array([item for sublist in l_posx for item in sublist])
s_posy = np.array([item for sublist in l_posy for item in sublist])
s_E = np.array([item for sublist in l_E for item in sublist])
s_intensity=np.array([item for sublist in l_intensity for item in sublist])
if sortintensity:
indsort = np.argsort(s_intensity)[::-1]
s_tth=np.take(s_tth, indsort)
s_chi=np.take(s_chi, indsort)
s_miller_ind=np.take(s_miller_ind, indsort, axis=0)
s_posx=np.take(s_posx, indsort)
s_posy=np.take(s_posy, indsort)
s_E=np.take(s_E, indsort)
s_intensity=np.take(s_intensity, indsort)
return s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_intensity, orientation_send, orientation_send1
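# The per-grain lists built above are flattened into single spot arrays and,
# when sortintensity is set, reordered by decreasing intensity. A minimal
# sketch of that flatten-and-sort step on mock data:

```python
import numpy as np

# per-grain arrays, as appended grain by grain in simulatemultiplepatterns
l_tth = [np.array([10., 20.]), np.array([15.])]
l_intensity = [np.array([1., 3.]), np.array([2.])]

# flatten the lists of per-grain arrays into single spot lists
s_tth = np.array([v for sub in l_tth for v in sub])
s_intensity = np.array([v for sub in l_intensity for v in sub])

# sort all spots by decreasing intensity (the sortintensity=True branch)
indsort = np.argsort(s_intensity)[::-1]
s_tth = np.take(s_tth, indsort)
s_intensity = np.take(s_intensity, indsort)
# → s_intensity == [3., 2., 1.], s_tth == [20., 15., 10.]
```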
def chunker_list(seq, size):
return (seq[i::size] for i in range(size))
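# Note that chunker_list distributes elements round-robin (stride `size`)
# rather than in contiguous blocks, so the chunks stay balanced to within one
# element; for example:

```python
def chunker_list(seq, size):
    # round-robin split: element i goes to chunk i % size
    return (seq[i::size] for i in range(size))

chunks = list(chunker_list(list(range(10)), 3))
# → [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
assert chunks == [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```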
def worker_generation(inputs_queue, outputs_queue, proc_id):
while True:
time.sleep(0.01)
if not inputs_queue.empty():
message = inputs_queue.get()
num1, _, meta = message
flag1 = meta['flag']
for ijk in range(len(num1)):
nb, nb1, material_, material1_, emin, emax, detectorparameters, pixelsize, \
sortintensity, ang_maxx, step, classhkl, classhkl1, noisy_data, \
remove_peaks, seed,hkl_all, lattice_material, family_hkl,\
normal_hkl, index_hkl, hkl_all1, lattice_material1, family_hkl1,\
normal_hkl1, index_hkl1, dim1, dim2, removeharmonics, flag,\
img_i, img_j, save_directory_, odf_data, odf_data1, modelp,\
misorientation_angle, max_millerindex, max_millerindex1,\
general_diff_cond, crystal, crystal1, phase_always_present = num1[ijk]
getpatterns_(nb, nb1, material_, material1_, emin, emax, detectorparameters, pixelsize, \
sortintensity, ang_maxx, step, classhkl, classhkl1, noisy_data, \
remove_peaks, seed,hkl_all, lattice_material, family_hkl,\
normal_hkl, index_hkl, hkl_all1, lattice_material1, family_hkl1,\
normal_hkl1, index_hkl1, dim1, dim2, removeharmonics, flag,\
img_i, img_j, save_directory_, odf_data, odf_data1, modelp, \
misorientation_angle, max_millerindex, max_millerindex1, general_diff_cond, crystal, \
crystal1, phase_always_present)
if ijk%10 == 0 and ijk!=0:
outputs_queue.put(11)
if flag1 == 1:
break
def ComputeGnomon_singledata(tth, chi, CenterProjection=(45 * DEG, 0 * DEG)):
data_theta = tth / 2.0
data_chi = chi
lat = np.arcsin(np.cos(data_theta * DEG) * np.cos(data_chi * DEG)) # in rads
longit = np.arctan(
-np.sin(data_chi * DEG) / np.tan(data_theta * DEG)) # + ones(len(data_chi))*(np.pi)
centerlat, centerlongit = CenterProjection
slat0 = np.sin(centerlat)
clat0 = np.cos(centerlat)
longit0 = centerlongit
slat = np.sin(lat)
clat = np.cos(lat)
cosanguldist = slat * slat0 + clat * clat0 * np.cos(longit - longit0)
Xgno = clat * np.sin(longit0 - longit) / cosanguldist
Ygno = (slat * clat0 - clat * slat0 * np.cos(longit - longit0)) / cosanguldist
NbptsGno = 300
maxsize = max(Xgno,Ygno,-Xgno,-Ygno)+.0
    # symmetric bounds derived from the data extent
    xgnomin, xgnomax, ygnomin, ygnomax = (-maxsize, maxsize, -maxsize, maxsize)
XGNO = int((Xgno-xgnomin)/(xgnomax-xgnomin)*NbptsGno)
YGNO = int((Ygno-ygnomin)/(ygnomax-ygnomin)*NbptsGno)
return np.array((XGNO, YGNO))
def ComputeGnomon_2(TwiceTheta_Chi, CenterProjection=(45 * DEG, 0 * DEG)):
data_theta = TwiceTheta_Chi[0] / 2.0
data_chi = TwiceTheta_Chi[1]
lat = np.arcsin(np.cos(data_theta * DEG) * np.cos(data_chi * DEG)) # in rads
longit = np.arctan(
-np.sin(data_chi * DEG) / np.tan(data_theta * DEG)) # + ones(len(data_chi))*(np.pi)
centerlat, centerlongit = CenterProjection
slat0 = np.ones(len(data_chi)) * np.sin(centerlat)
clat0 = np.ones(len(data_chi)) * np.cos(centerlat)
longit0 = np.ones(len(data_chi)) * centerlongit
slat = np.sin(lat)
clat = np.cos(lat)
cosanguldist = slat * slat0 + clat * clat0 * np.cos(longit - longit0)
_gnomonx = clat * np.sin(longit0 - longit) / cosanguldist
_gnomony = (slat * clat0 - clat * slat0 * np.cos(longit - longit0)) / cosanguldist
return _gnomonx, _gnomony
def computeGnomonicImage(TwiceTheta,Chi):
DEG = np.pi/180.
# CenterProjectionAngleTheta = 50#45
TwiceTheta_Chi = TwiceTheta,Chi
Xgno,Ygno = ComputeGnomon_2(TwiceTheta_Chi, CenterProjection=(45 * DEG, 0 * DEG))
pts =(np.array([Xgno,Ygno]).T)
nbpeaks=len(pts)
NbptsGno = 300
maxsize = max(Xgno.max(),Ygno.max(),-Xgno.min(),-Ygno.min())+.0
    # symmetric bounds derived from the data extent
    xgnomin, xgnomax, ygnomin, ygnomax = (-maxsize, maxsize, -maxsize, maxsize)
halfdiagonal = np.sqrt(xgnomax**2+ygnomax**2)*NbptsGno
    # np.int was removed in NumPy 1.24; use the builtin int instead
    XGNO = np.array((Xgno-xgnomin)/(xgnomax-xgnomin)*NbptsGno, dtype=int)
    YGNO = np.array((Ygno-ygnomin)/(ygnomax-ygnomin)*NbptsGno, dtype=int)
imageGNO=np.zeros((NbptsGno+1,NbptsGno+1))
imageGNO[XGNO,YGNO]=100
return imageGNO, nbpeaks, halfdiagonal
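# The projections above map (2theta, chi) to gnomonic coordinates about a
# projection center at (45 deg, 0 deg). A compact standalone version of the
# same mapping for a single scalar spot (`gnomon_xy` is a hypothetical name,
# for illustration only):

```python
import numpy as np

DEG = np.pi / 180.

def gnomon_xy(twicetheta, chi, center=(45 * DEG, 0.)):
    # same spherical-to-gnomonic mapping as ComputeGnomon_2, for scalars
    theta = twicetheta / 2.0
    lat = np.arcsin(np.cos(theta * DEG) * np.cos(chi * DEG))
    longit = np.arctan(-np.sin(chi * DEG) / np.tan(theta * DEG))
    slat0, clat0 = np.sin(center[0]), np.cos(center[0])
    longit0 = center[1]
    slat, clat = np.sin(lat), np.cos(lat)
    cosdist = slat * slat0 + clat * clat0 * np.cos(longit - longit0)
    x = clat * np.sin(longit0 - longit) / cosdist
    y = (slat * clat0 - clat * slat0 * np.cos(longit - longit0)) / cosdist
    return x, y

# a spot on the chi = 0 meridian projects onto the x = 0 center line
x, y = gnomon_xy(90., 0.)
assert abs(x) < 1e-12
```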
def read_hdf5(path):
weights = {}
keys = []
with h5py.File(path, 'r') as f: # open file
f.visit(keys.append) # append all keys to list
for key in keys:
if ':' in key: # contains data if ':' in key
weights[f[key].name] = f[key][:]
return weights
def softmax(x):
    # subtract the row-wise max before exponentiating for numerical stability
    e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e_x / np.sum(e_x, axis=-1, keepdims=True)
def predict(x, wb, temp_key):
# first layer
layer0 = np.dot(x, wb[temp_key[1]]) + wb[temp_key[0]]
layer0 = np.maximum(0, layer0) ## ReLU activation
# Second layer
layer1 = np.dot(layer0, wb[temp_key[3]]) + wb[temp_key[2]]
layer1 = np.maximum(0, layer1)
# Third layer
layer2 = np.dot(layer1, wb[temp_key[5]]) + wb[temp_key[4]]
layer2 = np.maximum(0, layer2)
# Output layer
layer3 = np.dot(layer2, wb[temp_key[7]]) + wb[temp_key[6]]
layer3 = softmax(layer3) ## output softmax activation
return layer3
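# predict above is a plain NumPy forward pass through an MLP with three ReLU
# hidden layers and a softmax output. A self-contained sketch of the same
# pattern with random weights (the layer sizes here are illustrative, not the
# trained model's):

```python
import numpy as np

def mlp_forward(x, weights):
    # weights: list of (W, b) pairs; ReLU on hidden layers, softmax on the output
    for W, b in weights[:-1]:
        x = np.maximum(0, x @ W + b)
    W, b = weights[-1]
    logits = x @ W + b
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
sizes = [90, 32, 32, 32, 5]  # input -> 3 hidden layers -> classes (illustrative)
weights = [(rng.normal(size=(a, b)), np.zeros(b)) for a, b in zip(sizes, sizes[1:])]
probs = mlp_forward(rng.normal(size=(4, 90)), weights)
assert probs.shape == (4, 5)
assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a probability vector
```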
def worker(inputs_queue, outputs_queue, proc_id, run_flag):#, mp_rotation_matrix):
print(f'Initializing worker {proc_id}')
while True:
if not run_flag.value:
break
time.sleep(0.01)
if not inputs_queue.empty():
message = inputs_queue.get()
if message == 'STOP':
print(f'[{proc_id}] stopping')
break
num1, num2, meta = message
files_worked = []
while True:
if len(num1) == len(files_worked) or len(num1) == 0:
print("process finished")
break
for ijk in range(len(num1)):
if ijk in files_worked:
continue
if not run_flag.value:
num1, files_worked = [], []
print(f'[{proc_id}] stopping')
break
files, cnt, rotation_matrix, strain_matrix, strain_matrixs,\
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,\
check,detectorparameters,pixelsize,angbins,\
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,\
material_, material1_, symmetry, symmetry1,lim_x,lim_y,\
strain_calculation, ind_mat, ind_mat1,\
model_direc, tolerance , tolerance1,\
matricies, ccd_label,\
filename_bkg,intensity_threshold,\
boxsize,bkg_treatment,\
filenameDirec, experimental_prefix,\
blacklist_file, text_file, \
files_treated,try_previous1,\
wb, temp_key, cor_file_directory, mode_spotCycle1,\
softmax_threshold_global123,mr_threshold_global123,\
cap_matchrate123, tolerance_strain123, tolerance_strain1231,\
NumberMaxofFits123,fit_peaks_gaussian_global123,\
FitPixelDev_global123,coeff123,coeff_overlap,\
material0_limit, material1_limit, use_previous_UBmatrix_name1,\
material_phase_always_present1, crystal, crystal1, strain_free_parameters = num1[ijk]
if np.all(check[cnt,:]) == 1: #TODO
continue
if os.path.isfile(files):
# try:
strain_matrix12, strain_matrixs12, \
rotation_matrix12, col12, \
colx12, coly12,\
match_rate12, mat_global12, cnt12,\
files_treated12, spots_len12, \
iR_pix12, fR_pix12, check12, \
best_match12, pred_hkl = predict_preprocessMP(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,
mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,
material_, material1_, symmetry, symmetry1,lim_x,lim_y,
strain_calculation, ind_mat, ind_mat1,
model_direc, tolerance, tolerance1,
matricies, ccd_label,
filename_bkg,intensity_threshold,
boxsize,bkg_treatment,
filenameDirec, experimental_prefix,
blacklist_file, text_file,
files_treated,try_previous1,
wb, temp_key, cor_file_directory, mode_spotCycle1,
softmax_threshold_global123,mr_threshold_global123,
cap_matchrate123, tolerance_strain123,
tolerance_strain1231,NumberMaxofFits123,
fit_peaks_gaussian_global123,
FitPixelDev_global123, coeff123,coeff_overlap,
material0_limit,material1_limit,
use_previous_UBmatrix_name1,
material_phase_always_present1,
crystal, crystal1, strain_free_parameters)
files_worked.append(ijk)
meta['proc_id'] = proc_id
r_message = (strain_matrix12, strain_matrixs12, rotation_matrix12, col12, \
colx12, coly12, match_rate12, mat_global12, cnt12, meta, \
files_treated12, spots_len12, iR_pix12, fR_pix12, best_match12, check12)
outputs_queue.put(r_message)
# except Exception as e:
# print(e)
# continue
print("broke the worker while loop")
def predict_preprocessMP_vsingle(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,
material_, material1_, symmetry, symmetry1,lim_x,lim_y,
strain_calculation, ind_mat, ind_mat1,
model_direc=None, tolerance =None, tolerance1 =None,
matricies=None, ccd_label=None,
filenameDirec=None, experimental_prefix=None,
files_treated=None,try_previous1=False,
wb=None, temp_key=None, cor_file_directory=None, mode_spotCycle1=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None,tolerance_strain123=None,tolerance_strain1231=None,\
coeff123=None, coeff_overlap=None,
material0_limit=None, material1_limit=None, use_previous_UBmatrix_name=None,
material_phase_always_present=None, crystal=None, crystal1=None, peak_XY=None,
strain_free_parameters=None):
if files in files_treated:
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
print("# Predicting for "+ files)
call_global()
CCDLabel=ccd_label
seednumber = "Experimental "+CCDLabel+" file"
s_ix = np.argsort(peak_XY[:, 2])[::-1]
peak_XY = peak_XY[s_ix]
framedim = dictLT.dict_CCD[CCDLabel][0]
twicetheta, chi = Lgeo.calc_uflab(peak_XY[:,0], peak_XY[:,1], detectorparameters,
returnAngles=1,
pixelsize=pixelsize,
kf_direction='Z>0')
data_theta, data_chi = twicetheta/2., chi
framedim = dictLT.dict_CCD[CCDLabel][0]
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]#TODO*2
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peak_XY[:,0]
dict_dp['peakY']=peak_XY[:,1]
dict_dp['intensity']=peak_XY[:,2]
CCDcalib = {"CCDLabel":CCDLabel,
"dd":detectorparameters[0],
"xcen":detectorparameters[1],
"ycen":detectorparameters[2],
"xbet":detectorparameters[3],
"xgam":detectorparameters[4],
"pixelsize": pixelsize}
path = os.path.normpath(files)
IOLT.writefile_cor(cor_file_directory+"//"+path.split(os.sep)[-1].split(".")[0], twicetheta,
chi, peak_XY[:,0], peak_XY[:,1], peak_XY[:,2],
param=CCDcalib, sortedexit=0)
sorted_data = np.transpose(np.array([data_theta, data_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(sorted_data, sorted_data))
codebars_all = []
spots_in_center = np.arange(0,len(data_theta))
spots_in_center = spots_in_center[:nb_spots_consider]
for i in spots_in_center:
spotangles = tabledistancerandom[i]
spotangles = np.delete(spotangles, i)# removing the self distance
codebars = np.histogram(spotangles, bins=angbins)[0]
# codebars = histogram1d(spotangles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## normalize the same way as training data
max_codebars = np.max(codebars)
codebars = codebars/ max_codebars
codebars_all.append(codebars)
## reshape for the model to predict all spots at once
codebars = np.array(codebars_all)
## Do prediction of all spots at once
prediction = predict(codebars, wb, temp_key)
max_pred = np.max(prediction, axis = 1)
class_predicted = np.argmax(prediction, axis = 1)
predicted_hkl123 = classhkl[class_predicted]
predicted_hkl123 = predicted_hkl123.astype(int)
s_tth = data_theta * 2.
s_chi = data_chi
rotation_matrix1, mr_highest, mat_highest, \
strain_crystal, strain_sample, iR_pix1, \
fR_pix1, spots_len1,\
best_match1, check12 = predict_ubmatrix(seednumber, spots_in_center, classhkl,
hkl_all_class0,
hkl_all_class1, files,
s_tth1=s_tth,s_chi1=s_chi,
predicted_hkl1=predicted_hkl123,
class_predicted1=class_predicted,
max_pred1=max_pred,
emin=emin,emax=emax,
material_=material_,
material1_=material1_,
lim_y=lim_y, lim_x=lim_x,
cnt=cnt,
dict_dp=dict_dp,
rotation_matrix=rotation_matrix,
mat_global=mat_global,
strain_calculation=strain_calculation,
ind_mat=ind_mat,
ind_mat1=ind_mat1,
tolerance=tolerance,
tolerance1 =tolerance1,
matricies=matricies,
tabledistancerandom=tabledistancerandom,
text_file = None,
try_previous1=True,
mode_spotCycle=mode_spotCycle1,
softmax_threshold_global123 = softmax_threshold_global123,
mr_threshold_global123=mr_threshold_global123,
cap_matchrate123=cap_matchrate123,
tolerance_strain123=tolerance_strain123,
tolerance_strain1231=tolerance_strain1231,
coeff123=coeff123,
coeff_overlap=coeff_overlap,
material0_limit=material0_limit,
material1_limit=material1_limit,
model_direc=model_direc,
use_previous_UBmatrix_name=use_previous_UBmatrix_name,
material_phase_always_present=material_phase_always_present,
match_rate=match_rate,
check=check[cnt,:],
crystal=crystal,
crystal1=crystal1, angbins=angbins,
wb=wb, temp_key=temp_key,
strain_free_parameters=strain_free_parameters)
for intmat in range(matricies):
if len(rotation_matrix1[intmat]) == 0:
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
else:
mat_global[intmat][0][cnt] = mat_highest[intmat][0]
final_symm =symmetry
final_crystal = crystal
if mat_highest[intmat][0] == 1:
final_symm = symmetry
final_crystal = crystal
elif mat_highest[intmat][0] == 2:
final_symm = symmetry1
final_crystal = crystal1
symm_operator = final_crystal._hklsym
strain_matrix[intmat][0][cnt,:,:] = strain_crystal[intmat][0]
strain_matrixs[intmat][0][cnt,:,:] = strain_sample[intmat][0]
rotation_matrix[intmat][0][cnt,:,:] = rotation_matrix1[intmat][0]
col_temp = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 0., 1.]), final_symm, symm_operator)
col[intmat][0][cnt,:] = col_temp
col_tempx = get_ipf_colour(rotation_matrix1[intmat][0], np.array([1., 0., 0.]), final_symm, symm_operator)
colx[intmat][0][cnt,:] = col_tempx
col_tempy = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 1., 0.]), final_symm, symm_operator)
coly[intmat][0][cnt,:] = col_tempy
match_rate[intmat][0][cnt] = mr_highest[intmat][0]
spots_len[intmat][0][cnt] = spots_len1[intmat][0]
iR_pix[intmat][0][cnt] = iR_pix1[intmat][0]
fR_pix[intmat][0][cnt] = fR_pix1[intmat][0]
best_match[intmat][0][cnt] = best_match1[intmat][0]
check[cnt,intmat] = check12[intmat]
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, match_rate, \
mat_global, cnt, files_treated, spots_len, iR_pix, fR_pix, check, best_match, predicted_hkl123
def predict_preprocessMP(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,
material_, material1_, symmetry, symmetry1,lim_x,lim_y,
strain_calculation, ind_mat, ind_mat1,
model_direc=None, tolerance =None, tolerance1 =None,
matricies=None, ccd_label=None,
filename_bkg=None,intensity_threshold=None,
boxsize=None,bkg_treatment=None,
filenameDirec=None, experimental_prefix=None,
blacklist_file =None, text_file=None,
files_treated=None,try_previous1=False,
wb=None, temp_key=None, cor_file_directory=None, mode_spotCycle1=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None,tolerance_strain123=None,tolerance_strain1231=None,\
NumberMaxofFits123=None,fit_peaks_gaussian_global123=None,
FitPixelDev_global123=None,coeff123=None, coeff_overlap=None,
material0_limit=None, material1_limit=None, use_previous_UBmatrix_name=None,
material_phase_always_present=None, crystal=None, crystal1=None, strain_free_parameters=None):
if files in files_treated:
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match, None
print("# Predicting for "+ files)
call_global()
if files.split(".")[-1] != "cor":
CCDLabel=ccd_label
seednumber = "Experimental "+CCDLabel+" file"
try:
out_name = blacklist_file
        except Exception:
out_name = None
        if bkg_treatment is None:
bkg_treatment = "A-B"
try:
            ### Max space = space between pixels
peak_XY = RMCCD.PeakSearch(
files,
stackimageindex = -1,
CCDLabel=CCDLabel,
NumberMaxofFits=NumberMaxofFits123,
PixelNearRadius=10,
removeedge=2,
IntensityThreshold=intensity_threshold,
local_maxima_search_method=0,
boxsize=boxsize,
position_definition=1,
verbose=0,
fit_peaks_gaussian=fit_peaks_gaussian_global123,
xtol=0.001,
FitPixelDev=FitPixelDev_global123,
return_histo=0,
# Saturation_value=1e10, # to be merged in CCDLabel
# Saturation_value_flatpeak=1e10,
MinIntensity=0,
PeakSizeRange=(0.65,200),
write_execution_time=1,
Data_for_localMaxima = "auto_background",
formulaexpression=bkg_treatment,
Remove_BlackListedPeaks_fromfile=out_name,
reject_negative_baseline=True,
Fit_with_Data_for_localMaxima=False,
maxPixelDistanceRejection=15.0,
)
peak_XY = peak_XY[0]#[:,:2] ##[2] Integer peak lists
        except Exception:
print("Error in Peak detection for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match, None
try:
s_ix = np.argsort(peak_XY[:, 2])[::-1]
peak_XY = peak_XY[s_ix]
        except Exception:
print("Error in Peak detection (argsort routine) for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match, None
framedim = dictLT.dict_CCD[CCDLabel][0]
twicetheta, chi = Lgeo.calc_uflab(peak_XY[:,0], peak_XY[:,1], detectorparameters,
returnAngles=1,
pixelsize=pixelsize,
kf_direction='Z>0')
data_theta, data_chi = twicetheta/2., chi
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]#TODO*2
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peak_XY[:,0]
dict_dp['peakY']=peak_XY[:,1]
dict_dp['intensity']=peak_XY[:,2]
CCDcalib = {"CCDLabel":CCDLabel,
"dd":detectorparameters[0],
"xcen":detectorparameters[1],
"ycen":detectorparameters[2],
"xbet":detectorparameters[3],
"xgam":detectorparameters[4],
"pixelsize": pixelsize}
path = os.path.normpath(files)
IOLT.writefile_cor(cor_file_directory+"//"+path.split(os.sep)[-1].split(".")[0], twicetheta,
chi, peak_XY[:,0], peak_XY[:,1], peak_XY[:,2],
param=CCDcalib, sortedexit=0)
elif files.split(".")[-1] == "cor":
seednumber = "Experimental COR file"
allres = IOLT.readfile_cor(files, True)
data_theta, data_chi, peakx, peaky, intensity = allres[1:6]
CCDcalib = allres[-1]
detectorparameters = allres[-2]
pixelsize = CCDcalib['pixelsize']
CCDLabel = CCDcalib['CCDLabel']
framedim = dictLT.dict_CCD[CCDLabel][0]
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]#TODO*2
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peakx
dict_dp['peakY']=peaky
dict_dp['intensity']=intensity
sorted_data = np.transpose(np.array([data_theta, data_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(sorted_data, sorted_data))
codebars_all = []
if len(data_theta) == 0:
print("No peaks found for: " + files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match, None
if not use_om_user:
spots_in_center = np.arange(0,len(data_theta))
spots_in_center = spots_in_center[:nb_spots_consider]
for i in spots_in_center:
spotangles = tabledistancerandom[i]
spotangles = np.delete(spotangles, i)# removing the self distance
codebars = np.histogram(spotangles, bins=angbins)[0]
# codebars = histogram1d(spotangles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## normalize the same way as training data
max_codebars = np.max(codebars)
codebars = codebars/ max_codebars
codebars_all.append(codebars)
## reshape for the model to predict all spots at once
codebars = np.array(codebars_all)
## Do prediction of all spots at once
prediction = predict(codebars, wb, temp_key)
max_pred = np.max(prediction, axis = 1)
class_predicted = np.argmax(prediction, axis = 1)
predicted_hkl123 = classhkl[class_predicted]
predicted_hkl123 = predicted_hkl123.astype(int)
else:
max_pred = None
class_predicted = None
predicted_hkl123 = None
spots_in_center = None
s_tth = data_theta * 2.
s_chi = data_chi
rotation_matrix1, mr_highest, mat_highest, \
strain_crystal, strain_sample, iR_pix1, \
fR_pix1, spots_len1,\
best_match1, check12 = predict_ubmatrix(seednumber, spots_in_center, classhkl,
hkl_all_class0,
hkl_all_class1, files,
s_tth1=s_tth,s_chi1=s_chi,
predicted_hkl1=predicted_hkl123,
class_predicted1=class_predicted,
max_pred1=max_pred,
emin=emin,emax=emax,
material_=material_,
material1_=material1_,
lim_y=lim_y, lim_x=lim_x,
cnt=cnt,
dict_dp=dict_dp,
rotation_matrix=rotation_matrix,
mat_global=mat_global,
strain_calculation=strain_calculation,
ind_mat=ind_mat,
ind_mat1=ind_mat1,
tolerance=tolerance,
tolerance1 =tolerance1,
matricies=matricies,
tabledistancerandom=tabledistancerandom,
text_file = text_file,
try_previous1=try_previous1,
mode_spotCycle=mode_spotCycle1,
softmax_threshold_global123 = softmax_threshold_global123,
mr_threshold_global123=mr_threshold_global123,
cap_matchrate123=cap_matchrate123,
tolerance_strain123=tolerance_strain123,
tolerance_strain1231=tolerance_strain1231,
coeff123=coeff123,
coeff_overlap=coeff_overlap,
material0_limit=material0_limit,
material1_limit=material1_limit,
model_direc=model_direc,
use_previous_UBmatrix_name=use_previous_UBmatrix_name,
material_phase_always_present=material_phase_always_present,
match_rate=match_rate,
check=check[cnt,:],
crystal=crystal,
crystal1=crystal1, angbins=angbins,
wb=wb, temp_key=temp_key,
strain_free_parameters=strain_free_parameters)
for intmat in range(matricies):
if len(rotation_matrix1[intmat]) == 0:
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
else:
mat_global[intmat][0][cnt] = mat_highest[intmat][0]
final_symm =symmetry
final_crystal = crystal
if mat_highest[intmat][0] == 1:
final_symm = symmetry
final_crystal = crystal
elif mat_highest[intmat][0] == 2:
final_symm = symmetry1
final_crystal = crystal1
symm_operator = final_crystal._hklsym
strain_matrix[intmat][0][cnt,:,:] = strain_crystal[intmat][0]
strain_matrixs[intmat][0][cnt,:,:] = strain_sample[intmat][0]
rotation_matrix[intmat][0][cnt,:,:] = rotation_matrix1[intmat][0]
col_temp = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 0., 1.]), final_symm, symm_operator)
col[intmat][0][cnt,:] = col_temp
col_tempx = get_ipf_colour(rotation_matrix1[intmat][0], np.array([1., 0., 0.]), final_symm, symm_operator)
colx[intmat][0][cnt,:] = col_tempx
col_tempy = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 1., 0.]), final_symm, symm_operator)
coly[intmat][0][cnt,:] = col_tempy
match_rate[intmat][0][cnt] = mr_highest[intmat][0]
spots_len[intmat][0][cnt] = spots_len1[intmat][0]
iR_pix[intmat][0][cnt] = iR_pix1[intmat][0]
fR_pix[intmat][0][cnt] = fR_pix1[intmat][0]
best_match[intmat][0][cnt] = best_match1[intmat][0]
check[cnt,intmat] = check12[intmat]
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, match_rate, \
mat_global, cnt, files_treated, spots_len, iR_pix, fR_pix, check, best_match, predicted_hkl123
def predict_ubmatrix(seednumber, spots_in_center, classhkl, hkl_all_class0,
hkl_all_class1, filename,
s_tth1,s_chi1,predicted_hkl1,class_predicted1,max_pred1,
emin, emax, material_, material1_, lim_y, lim_x, cnt,
dict_dp,rotation_matrix,mat_global,strain_calculation,
ind_mat, ind_mat1,
tolerance=None, tolerance1 =None, matricies=None, tabledistancerandom=None,
text_file=None,try_previous1=False, mode_spotCycle=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None, tolerance_strain123=None,tolerance_strain1231=None, coeff123=None,
coeff_overlap=None, material0_limit=None, material1_limit=None, model_direc=None,
use_previous_UBmatrix_name=None, material_phase_always_present=None, match_rate=None,
check = None, crystal=None, crystal1=None, angbins=None, wb=None, temp_key=None,
strain_free_parameters=None):
input_params = {"tolerance": tolerance,
"tolerance1":tolerance1,
"tolerancestrain": tolerance_strain123, ## For strain calculations
"tolerancestrain1": tolerance_strain1231,
"emin": emin,
"emax": emax,
"mat":0}
call_global()
strain_matrix = [[] for i in range(matricies)]
strain_matrixs = [[] for i in range(matricies)]
best_matrix = [[] for i in range(matricies)]
mr_highest = [[] for i in range(matricies)]
ir_pixels = [[] for i in range(matricies)]
fr_pixels = [[] for i in range(matricies)]
spots_len = [[] for i in range(matricies)]
mat_highest = [[] for i in range(matricies)]
best_match = [[] for i in range(matricies)]
spots1 = []
spots1_global = [[] for i in range(matricies)]
if not use_om_user:
dist = tabledistancerandom
## one time calculations
lattice_params0 = dictLT.dict_Materials[material_][1]
B0 = CP.calc_B_RR(lattice_params0)
Gstar_metric0 = CP.Gstar_from_directlatticeparams(lattice_params0[0],lattice_params0[1],\
lattice_params0[2],lattice_params0[3],\
lattice_params0[4],lattice_params0[5])
tab_distance_classhkl_data0 = get_material_dataP(Gstar_metric0, predicted_hkl1[:nb_spots_consider,:])
if material_ != material1_:
lattice_params1 = dictLT.dict_Materials[material1_][1]
B1 = CP.calc_B_RR(lattice_params1)
Gstar_metric1 = CP.Gstar_from_directlatticeparams(lattice_params1[0],lattice_params1[1],\
lattice_params1[2],lattice_params1[3],\
lattice_params1[4],lattice_params1[5])
tab_distance_classhkl_data1 = get_material_dataP(Gstar_metric1, predicted_hkl1[:nb_spots_consider,:])
else:
tab_distance_classhkl_data1 = None
Gstar_metric1 = None
B1 = None
else:
dist = tabledistancerandom
tab_distance_classhkl_data0 = None
tab_distance_classhkl_data1 = None
## one time calculations
lattice_params0 = dictLT.dict_Materials[material_][1]
B0 = CP.calc_B_RR(lattice_params0)
Gstar_metric0 = CP.Gstar_from_directlatticeparams(lattice_params0[0],lattice_params0[1],\
lattice_params0[2],lattice_params0[3],\
lattice_params0[4],lattice_params0[5])
if material_ != material1_:
lattice_params1 = dictLT.dict_Materials[material1_][1]
B1 = CP.calc_B_RR(lattice_params1)
Gstar_metric1 = CP.Gstar_from_directlatticeparams(lattice_params1[0],lattice_params1[1],\
lattice_params1[2],lattice_params1[3],\
lattice_params1[4],lattice_params1[5])
else:
Gstar_metric1 = None
B1 = None
spots = []
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr = 0
mat = 0
iR = 0
fR = 0
strain_crystal = np.zeros((3,3))
strain_sample = np.zeros((3,3))
material0_count = 0
material1_count = 0
calcul_done = False
objective_function1 = None
for igrain in range(matricies):
# if check[igrain] == 1: # or len(spots1_global[igrain]) != 0:
# continue
try_previous = try_previous1
max_mr, min_mr = 0, 0
iR, fR= 0, 0
case = "None"
if use_om_user:
use_previous_UBmatrix_name = False
try_previous = False
temp_qsd = np.loadtxt(path_user_OM, delimiter=",")
temp_qsd = temp_qsd.reshape((len(temp_qsd),3,3))
rotationmatrix_indexed = temp_qsd[igrain,:,:]
mat = 1
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
spots_prev, theo_spots_prev = remove_spots(s_tth1, s_chi1, rotationmatrix_indexed,
Keymaterial_, input_params, dict_dp['detectorparameters'],
dict_dp)
newmatchrate = 100*len(spots_prev)/theo_spots_prev
## Filter indexation by matching rate
if newmatchrate < cap_matchrate123:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
spots = []
max_mr, min_mr = 0, 0
else:
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotationmatrix_indexed,
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotationmatrix_indexed)
spots = spots_prev
expected = theo_spots_prev
max_mr, min_mr = 100*(len(spots)/expected), 100*(len(spots)/expected)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
try_previous = False
calcul_done = True
elif use_previous_UBmatrix_name:
try:
try_previous = False
### try previously indexed UB matrices
with np.load(model_direc+"//rotation_matrix_indexed_1.npz") as load_objectind:
rotationmatrix_indexed = load_objectind["arr_0"]
mat_global_indexed = load_objectind["arr_1"]
match_rate_indexed = load_objectind["arr_2"]
avg_match_rate_indexed = load_objectind["arr_3"]
calcul_done = False
for ind_mat_UBmat in range(len(rotationmatrix_indexed[igrain][0])):
if calcul_done:
continue
if np.all(rotationmatrix_indexed[igrain][0][ind_mat_UBmat,:,:] == 0): ## skip all-zero (unindexed) matrices
continue
if match_rate_indexed[igrain][0][ind_mat_UBmat] < 0.8*avg_match_rate_indexed[igrain]:
continue
mat = mat_global_indexed[igrain][0][ind_mat_UBmat]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
spots_prev, theo_spots_prev = remove_spots(s_tth1, s_chi1, rotationmatrix_indexed[igrain][0][ind_mat_UBmat,:,:],
Keymaterial_, input_params, dict_dp['detectorparameters'],
dict_dp)
newmatchrate = 100*len(spots_prev)/theo_spots_prev
condition_prev = newmatchrate < 0.8*(match_rate_indexed[igrain][0][ind_mat_UBmat])
current_spots = [len(list(set(spots_prev) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if condition_prev or (newmatchrate <= cap_matchrate123) or np.any(current_spots):# or overlap:
try_previous = try_previous1
else:
try_previous = False
calcul_done = True
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotationmatrix_indexed[igrain][0][ind_mat_UBmat,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotationmatrix_indexed[igrain][0][ind_mat_UBmat,:,:])
spots = spots_prev
expected = theo_spots_prev
max_mr, min_mr = 100*(len(spots)/expected), 100*(len(spots)/expected)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
break
except Exception:
try_previous = False
calcul_done = False
if try_previous and (cnt % lim_y == 0) and cnt != 0:
if np.all(rotation_matrix[igrain][0][cnt-lim_y,:,:] == 0): ## previous-row matrix is all zeros (unindexed)
try_previous = False
else:
mat = mat_global[igrain][0][cnt-lim_y]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
spots_lr, theo_spots_lr = remove_spots(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-lim_y,:,:],
Keymaterial_, input_params, dict_dp['detectorparameters'],
dict_dp)
# last_row = len(spots_lr) <= coeff123*theo_spots_lr
newmatchrate = 100*(len(spots_lr)/theo_spots_lr)
condition_prev = newmatchrate < 0.9*(match_rate[igrain][0][cnt-lim_y])
last_row = condition_prev
if last_row: ## match rate dropped below 90% of the previous-row value; skip reuse
try_previous = False
else:
try_previous = True
current_spots = [len(list(set(spots_lr) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
try_previous = False
continue
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-lim_y,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotation_matrix[igrain][0][cnt-lim_y,:,:])
spots = spots_lr
expected = theo_spots_lr
max_mr, min_mr = 100*(len(spots_lr)/theo_spots_lr), 100*(len(spots_lr)/theo_spots_lr)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
elif try_previous and (cnt % lim_y != 0):
last_row = True
left_row = True
condition_prev = True
condition_prev1 = True
if np.all(rotation_matrix[igrain][0][cnt-1,:,:] == 0): ## left-pixel matrix is all zeros (unindexed)
left_row = True
else:
mat = mat_global[igrain][0][cnt-1]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
## new row start when % == 0
## use left index pixels matrix values
spots_left, theo_spots_left = remove_spots(s_tth1, s_chi1, rotation_matrix[igrain][0][cnt-1,:,:],
Keymaterial_, input_params, dict_dp['detectorparameters'],
dict_dp)
# left_row = len(spots_left) <= coeff123*theo_spots_left
newmatchrate = 100*(len(spots_left)/theo_spots_left)
condition_prev = newmatchrate < 0.9*(match_rate[igrain][0][cnt-1])
left_row = condition_prev
if cnt >= lim_y:
if np.all(rotation_matrix[igrain][0][cnt-lim_y,:,:] == 0): ## bottom-pixel matrix is all zeros (unindexed)
last_row = True
else:
mat = mat_global[igrain][0][cnt-lim_y]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
## use bottom index pixels matrix values
spots_lr, theo_spots_lr = remove_spots(s_tth1, s_chi1, rotation_matrix[igrain][0][cnt-lim_y,:,:],
Keymaterial_, input_params, dict_dp['detectorparameters'],
dict_dp)
# last_row = len(spots_lr) <= coeff123*theo_spots_lr
newmatchrate1 = 100*(len(spots_lr)/theo_spots_lr)
condition_prev1 = newmatchrate1 < 0.9*(match_rate[igrain][0][cnt-lim_y])
last_row = condition_prev1
if left_row and last_row: ## both neighbouring matrices are poor matches
try_previous = False
elif not left_row and not last_row:
try_previous = True
if len(spots_lr) > len(spots_left):
current_spots = [len(list(set(spots_lr) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
try_previous = False
continue
mat = mat_global[igrain][0][cnt-lim_y]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-lim_y,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotation_matrix[igrain][0][cnt-lim_y,:,:])
spots = spots_lr
expected = theo_spots_lr
max_mr, min_mr = 100*(len(spots_lr)/theo_spots_lr), 100*(len(spots_lr)/theo_spots_lr)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
else:
current_spots = [len(list(set(spots_left) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
try_previous = False
continue
mat = mat_global[igrain][0][cnt-1]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-1,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotation_matrix[igrain][0][cnt-1,:,:])
spots = spots_left
expected = theo_spots_left
max_mr, min_mr = 100*(len(spots_left)/theo_spots_left), 100*(len(spots_left)/theo_spots_left)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
elif not left_row and last_row:
try_previous = True
current_spots = [len(list(set(spots_left) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
try_previous = False
continue
mat = mat_global[igrain][0][cnt-1]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-1,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotation_matrix[igrain][0][cnt-1,:,:])
spots = spots_left
expected = theo_spots_left
max_mr, min_mr = 100*(len(spots_left)/theo_spots_left), 100*(len(spots_left)/theo_spots_left)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
elif left_row and not last_row:
try_previous = True
current_spots = [len(list(set(spots_lr) & set(spots1_global[igr]))) > coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
try_previous = False
continue
mat = mat_global[igrain][0][cnt-lim_y]
if mat == 1:
Keymaterial_ = material_
case = material_
Bkey = B0
input_params["mat"] = 1
input_params["Bmat"] = Bkey
elif mat == 2:
Keymaterial_ = material1_
case = material1_
Bkey = B1
input_params["mat"] = 2
input_params["Bmat"] = Bkey
else:
Keymaterial_ = None
Bkey = None
input_params["mat"] = 0
input_params["Bmat"] = None
continue
if strain_calculation:
strain_crystal, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth1, s_chi1,
rotation_matrix[igrain][0][cnt-lim_y,:,:],
Keymaterial_,
input_params, dict_dp['detectorparameters'],
dict_dp, spots1, Bkey,
strain_free_parameters)
else:
strain_crystal, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(rotation_matrix[igrain][0][cnt-lim_y,:,:])
spots = spots_lr
expected = theo_spots_lr
max_mr, min_mr = 100*(len(spots_lr)/theo_spots_lr), 100*(len(spots_lr)/theo_spots_lr)
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, len(spots), expected, max_mr, 0, rot_mat_UB]
else:
try_previous = False
if not try_previous and not calcul_done:
### old version
if mode_spotCycle == "slow":
# print("Slow mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR = get_orient_mat(s_tth1, s_chi1,
material_, material1_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0, hkl_all_class1,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0, Gstar_metric1, B0, B1,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0,
tab_distance_classhkl_data1=tab_distance_classhkl_data1,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat, ind_mat1=ind_mat1,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material1_count=material1_count,
material0_limit=material0_limit,
material1_limit=material1_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
strain_free_parameters=strain_free_parameters)
elif mode_spotCycle == "houghmode":
# print("Slow mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR = get_orient_mat_HM(s_tth1, s_chi1,
material_, material1_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0, hkl_all_class1,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0, Gstar_metric1, B0, B1,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0,
tab_distance_classhkl_data1=tab_distance_classhkl_data1,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat, ind_mat1=ind_mat1,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material1_count=material1_count,
material0_limit=material0_limit,
material1_limit=material1_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
strain_free_parameters=strain_free_parameters)
elif mode_spotCycle == "houghgraphmode":
# print("Fast mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR,\
objective_function1 = get_orient_mat_graphv1HM(s_tth1, s_chi1,
material_, material1_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0, hkl_all_class1,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0, Gstar_metric1, B0, B1,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0,
tab_distance_classhkl_data1=tab_distance_classhkl_data1,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat, ind_mat1=ind_mat1,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material1_count=material1_count,
material0_limit=material0_limit,
material1_limit=material1_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
objective_function= objective_function1,
crystal=crystal,
crystal1=crystal1,
strain_free_parameters=strain_free_parameters)
elif mode_spotCycle == "graphmode":
# print("Fast mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR,\
objective_function1 = get_orient_mat_graphv1(s_tth1, s_chi1,
material_, material1_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0, hkl_all_class1,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0, Gstar_metric1, B0, B1,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0,
tab_distance_classhkl_data1=tab_distance_classhkl_data1,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat, ind_mat1=ind_mat1,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material1_count=material1_count,
material0_limit=material0_limit,
material1_limit=material1_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
objective_function= objective_function1,
crystal=crystal,
crystal1=crystal1,
strain_free_parameters=strain_free_parameters)
elif mode_spotCycle == "update_reupdate":
# print("Fast mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR, objective_function1,\
s_tth1, s_chi1, class_predicted1, \
predicted_hkl1, max_pred1, dist = get_orient_mat_repredict(s_tth1, s_chi1,
material_, material1_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0, hkl_all_class1,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0, Gstar_metric1, B0, B1,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0,
tab_distance_classhkl_data1=tab_distance_classhkl_data1,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat, ind_mat1=ind_mat1,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material1_count=material1_count,
material0_limit=material0_limit,
material1_limit=material1_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
objective_function= objective_function1,
crystal=crystal,
crystal1=crystal1,
angbins=angbins,
wb=wb, temp_key=temp_key,
strain_free_parameters=strain_free_parameters)
else:
print("Selected spot-cycle mode '" + str(mode_spotCycle) + "' is not implemented")
for ispot in spots:
spots1.append(ispot)
spots1_global[igrain].append(ispot)
## make copy of best rotation matrix
best_match[igrain].append(np.copy(first_match))
best_matrix[igrain].append(np.copy(first_match[14]))
mr_highest[igrain].append(np.copy(max_mr))
mat_highest[igrain].append(np.copy(mat))
ir_pixels[igrain].append(np.copy(iR))
fr_pixels[igrain].append(np.copy(fR))
spots_len[igrain].append(np.copy(len(spots)))
strain_matrix[igrain].append(np.copy(strain_crystal))
strain_matrixs[igrain].append(np.copy(strain_sample))
if not np.all(first_match[14] == 0): ## a non-zero UB matrix was found for this grain
check[igrain] = 1
if mat == 1:
material0_count += 1
if mat == 2:
material1_count += 1
return best_matrix, mr_highest, mat_highest, strain_matrix, strain_matrixs, ir_pixels, fr_pixels, spots_len, best_match, check
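The UB-reuse gate applied repeatedly inside `predict_ubmatrix` accepts a stored matrix only if its refreshed match rate stays within a fraction of the stored rate, exceeds the `cap_matchrate123` floor, and its matched spots do not overlap another grain's spots by more than the `coeff_overlap` fraction. A condensed sketch of that decision follows; `reuse_ok` and its default thresholds are hypothetical illustrations, not functions of this module.

```python
# Hypothetical sketch of the reuse-acceptance test used above when recycling
# a previously indexed UB matrix.
def reuse_ok(n_matched, n_theory, stored_rate, spots, spots1_global,
             coeff_overlap=0.3, rate_fraction=0.8, cap=10.0):
    # (1) refreshed match rate must stay close to the stored one and above cap
    new_rate = 100.0 * n_matched / n_theory
    if new_rate < rate_fraction * stored_rate or new_rate <= cap:
        return False
    # (2) candidate spots must not overlap other grains beyond coeff_overlap
    overlap = [len(set(spots) & set(g)) > coeff_overlap * len(g)
               for g in spots1_global if len(g) > 0]
    return not any(overlap)

# 40 of 50 theoretical spots matched (80%), stored rate 90%, no overlap
# with the single already-indexed grain: the matrix would be reused.
ok = reuse_ok(40, 50, stored_rate=90.0, spots=[1, 2, 3],
              spots1_global=[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])
```

When the gate fails, the code above falls back to a fresh orientation search via `get_orient_mat` and its variants.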
def get_material_dataP(Gstar, classhkl = None):
hkl2 = np.copy(classhkl)
hkl1 = np.copy(classhkl)
# compute square matrix containing angles
metrics = Gstar
H1 = hkl1
n1 = hkl1.shape[0]
H2 = hkl2
n2 = hkl2.shape[0]
dstar_square_1 = np.diag(np.inner(np.inner(H1, metrics), H1))
dstar_square_2 = np.diag(np.inner(np.inner(H2, metrics), H2))
scalar_product = np.inner(np.inner(H1, metrics), H2) * 1.0
d1 = np.sqrt(dstar_square_1.reshape((n1, 1))) * 1.0
d2 = np.sqrt(dstar_square_2.reshape((n2, 1))) * 1.0
outy = np.outer(d1, d2)
ratio = scalar_product / outy
ratio = np.round(ratio, decimals=7)
tab_angulardist = np.arccos(ratio) / (np.pi / 180.0)
np.putmask(tab_angulardist, np.abs(tab_angulardist) < 0.001, 400)
return tab_angulardist
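A self-contained sketch of the pairwise-angle table that `get_material_dataP` returns: with the metric `Gstar` of a cubic lattice (identity for a = 1), the angle between the (100) and (110) normals should come out as 45 degrees, and near-zero self-angles are masked to 400 so they can never match a measured inter-spot distance. The name `pairwise_hkl_angles` is illustrative, not part of this module.

```python
import numpy as np

def pairwise_hkl_angles(hkl, gstar):
    """Hypothetical mirror of get_material_dataP: angles (degrees) between
    reciprocal-lattice normals under the metric gstar, self-angles -> 400."""
    n = hkl.shape[0]
    scalar = np.inner(np.inner(hkl, gstar), hkl) * 1.0   # G* inner products
    d = np.sqrt(np.diag(scalar)).reshape((n, 1))         # reciprocal norms
    ratio = np.round(scalar / np.outer(d, d), decimals=7)
    angles = np.degrees(np.arccos(ratio))
    np.putmask(angles, np.abs(angles) < 0.001, 400)
    return angles

# Cubic lattice with a = 1: Gstar is the 3x3 identity.
hkl = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
ang = pairwise_hkl_angles(hkl, np.eye(3))
```

The rounding to 7 decimals guards `arccos` against ratios marginally above 1 from floating-point noise.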
def get_orient_mat_repredict(s_tth, s_chi, material0_, material1_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, hkl_all_class1, max_pred, dict_dp, spots,
dist, Gstar_metric0, Gstar_metric1, B0, B1, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, tab_distance_classhkl_data1=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, ind_mat1=None, strain_calculation=None, cap_matchrate123=None,
material0_count=None, material1_count=None, material0_limit=None, material1_limit=None,
igrain=None, material_phase_always_present=None, objective_function=None, crystal=None,
crystal1=None, angbins=None, wb=None, temp_key=None, strain_free_parameters=None):
if objective_function is None:
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
if material0_ == material1_:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < input_params["tolerance"])
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
else:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
if class_predicted[ii] < ind_mat and class_predicted[i] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
tolerance_new = input_params["tolerance"]
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
elif (ind_mat <= class_predicted[ii] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
tolerance_new = input_params["tolerance1"]
hkl1 = hkl_all_class1[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class1[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric1
else:
continue
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < tolerance_new)
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
## build a directed connection graph from the candidate spot pairs
graph_obj = nx.DiGraph(list_of_sets)
connected_nodes_length = []
connected_nodes = [[] for i in range(len(graph_obj))]
for i,line in enumerate(nx.generate_adjlist(graph_obj)):
connected_nodes_length.append(len(line.split(" ")))
connected_nodes[i].append([int(jj) for jj in line.split(" ")])
## sort by maximum node occurrence
connected_nodes_length = np.array(connected_nodes_length)
connected_nodes_length_sort_ind = np.argsort(connected_nodes_length)[::-1]
mat = 0
case = "None"
tried_spots = []
objective_function = []
for toplist in range(len(graph_obj)):
# ## continue if less than 3 connections are found for a graph
# if connected_nodes_length[connected_nodes_length_sort_ind[toplist]] < 2:
# continue
for j in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
init_mr = 0
final_match_rate = 0
final_rmv_ind = []
all_stats = []
for i in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
if j == i:
continue
if j in tried_spots and i in tried_spots:
continue
if material0_ == material1_:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if class_predicted[i] < ind_mat and class_predicted[j] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
elif (ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[j] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
hkl_all_class = hkl_all_class1
material_ = material1_
B = B1
Gstar_metric = Gstar_metric1
case = material_
mat = 2
input_params["mat"] = mat
input_params["Bmat"] = B
else:
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrix(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0, crystal=crystal,
crystal1=crystal1)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spots(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
match_rate = np.round(100 * len(rmv_ind)/theospots, 3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
final_rmv_ind = rmv_ind
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123, init_mat, init_material, init_B, init_case]
tried_spots.append(i)
if (final_match_rate <= cap_matchrate123): ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
objective_function.append([0, [], []])
else:
objective_function.append([final_match_rate, final_rmv_ind, all_stats])
tried_spots.append(j)
sort_ind = []
for ijk in objective_function:
sort_ind.append(ijk[0])
sort_ind = np.array(sort_ind)
sort_ind = np.argsort(sort_ind)[::-1]
for gr_count123 in range(len(sort_ind)):
max_mr = objective_function[sort_ind[gr_count123]][0]
rmv_ind = objective_function[sort_ind[gr_count123]][1]
all_stats = objective_function[sort_ind[gr_count123]][2]
if len(rmv_ind) == 0 or max_mr==0:
continue
mat = all_stats[15]
if mat == 1:
if igrain==0 and material_phase_always_present ==2:
mat = 0
case="None"
if material0_count >= material0_limit:
mat = 0
case="None"
elif mat == 2:
if igrain==0 and material_phase_always_present ==1:
mat = 0
case="None"
if material1_count >= material1_limit:
mat = 0
case="None"
if mat == 0:
continue
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr])))> coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
continue
input_params["mat"] = all_stats[15]
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(all_stats[16]),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, all_stats[17],
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
## delete the indexed spots and re-predict the HKL of the remaining spots;
## removing already-indexed spots may make the remaining grains easier to detect
## update the spot lists
# s_tth = np.delete(s_tth, rmv_ind, axis=0)
# s_chi = np.delete(s_chi, rmv_ind, axis=0)
s_tth[rmv_ind] = np.nan
s_chi[rmv_ind] = np.nan
sorted_data = np.transpose(np.array([s_tth/2., s_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(sorted_data, sorted_data))
spots_in_center = np.arange(0,len(s_tth))
spots_in_center = spots_in_center[:nb_spots_consider]
codebars_all = []
for i in spots_in_center:
spotangles = tabledistancerandom[i]
spotangles = np.delete(spotangles, i)  # remove the self-distance
codebars = np.histogram(spotangles, bins=angbins)[0]
# codebars = histogram1d(spotangles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## normalize the same way as training data
max_codebars = np.max(codebars)
codebars = codebars/ max_codebars
codebars_all.append(codebars)
## reshape for the model to predict all spots at once
codebars = np.array(codebars_all)
## Do prediction of all spots at once
prediction = predict(codebars, wb, temp_key)
max_pred = np.max(prediction, axis = 1)
class_predicted = np.argmax(prediction, axis = 1)
predicted_hkl123 = classhkl[class_predicted]
predicted_hkl123 = predicted_hkl123.astype(int)
return all_stats, np.max(max_mr), np.min(max_mr), \
rmv_ind, str(all_stats[18]), all_stats[15], dev_strain, strain_sample, iR, fR, objective_function,\
s_tth, s_chi, class_predicted, predicted_hkl123, max_pred, tabledistancerandom
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0, objective_function,\
s_tth, s_chi, class_predicted, predicted_hkl, max_pred, dist
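The graph stage above collects spot pairs whose predicted-HKL angular distances agree within tolerance, then walks the most-connected nodes first. The grouping logic can be sketched without networkx (hypothetical helper, illustrative only; `nx.generate_adjlist` differs in some details such as edge suppression):

```python
def adjacency_groups(edge_list):
    """Group each node with its outgoing neighbours and return the groups
    largest-first, mimicking the nx.DiGraph / generate_adjlist pattern
    used above (sketch, not the module's API)."""
    adj = {}
    for a, b in edge_list:
        adj.setdefault(a, [a])  # each adjacency line starts with the node itself
        adj.setdefault(b, [b])
        adj[a].append(b)
    # most-connected spots are visited first, as in the argsort on
    # connected_nodes_length above
    return sorted(adj.values(), key=len, reverse=True)
```

Each group corresponds to one candidate set of mutually consistent spots from which UB matrices are proposed pairwise.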
def get_orient_mat_graphv1HM(s_tth, s_chi, material0_, material1_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, hkl_all_class1, max_pred, dict_dp, spots,
dist, Gstar_metric0, Gstar_metric1, B0, B1, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, tab_distance_classhkl_data1=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, ind_mat1=None, strain_calculation=None, cap_matchrate123=None,
material0_count=None, material1_count=None, material0_limit=None, material1_limit=None,
igrain=None, material_phase_always_present=None, objective_function=None, crystal=None,
crystal1=None, strain_free_parameters=None):
if objective_function is None:
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
## calculate the gnomonic projection space
imageGNO, nbpeaks, halfdiagonal = computeGnomonicImage(s_tth, s_chi)
hough, theta_h, d_h = hough_line(imageGNO)
if material0_ == material1_:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < input_params["tolerance"])
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
else:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
if class_predicted[ii] < ind_mat and class_predicted[i] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
tolerance_new = input_params["tolerance"]
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
elif (ind_mat <= class_predicted[ii] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
tolerance_new = input_params["tolerance1"]
hkl1 = hkl_all_class1[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class1[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric1
else:
continue
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < tolerance_new)
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
## build a directed connection graph from the candidate spot pairs
graph_obj = nx.DiGraph(list_of_sets)
connected_nodes_length = []
connected_nodes = [[] for i in range(len(graph_obj))]
for i,line in enumerate(nx.generate_adjlist(graph_obj)):
connected_nodes_length.append(len(line.split(" ")))
connected_nodes[i].append([int(jj) for jj in line.split(" ")])
## sort by maximum node occurrence
connected_nodes_length = np.array(connected_nodes_length)
connected_nodes_length_sort_ind = np.argsort(connected_nodes_length)[::-1]
mat = 0
case = "None"
tried_spots = []
objective_function = []
for toplist in range(len(graph_obj)):
# ## continue if less than 3 connections are found for a graph
# if connected_nodes_length[connected_nodes_length_sort_ind[toplist]] < 2:
# continue
for j in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
init_mr = 0
final_match_rate = 0
final_rmv_ind = []
all_stats = []
for i in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
if j == i:
continue
if j in tried_spots and i in tried_spots:
continue
## condition to check if spots lie on the same line
in_hough_line = False
for _, anglehs, disths in zip(*hough_line_peaks(hough, theta_h, d_h)):
y0 = (disths - 0 * np.cos(anglehs)) / np.sin(anglehs)
y1 = (disths - imageGNO.shape[1] * np.cos(anglehs)) / np.sin(anglehs)
p1 = np.array((0,y0))
p2 = np.array((imageGNO.shape[1], y1))
p3_0 = ComputeGnomon_singledata(s_tth[i], s_chi[i])
p3_1 = ComputeGnomon_singledata(s_tth[j], s_chi[j])
distance_0 = np.abs(np.cross(p2-p1, p3_0-p1)) / np.linalg.norm(p2-p1)
distance_1 = np.abs(np.cross(p2-p1, p3_1-p1)) / np.linalg.norm(p2-p1)
if distance_0 < dist_threshold and distance_1 < dist_threshold:
# print(distance_0, distance_1)
in_hough_line = True
if in_hough_line:
break
if not in_hough_line:
continue
if material0_ == material1_:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if class_predicted[i] < ind_mat and class_predicted[j] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
elif (ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[j] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
hkl_all_class = hkl_all_class1
material_ = material1_
B = B1
Gstar_metric = Gstar_metric1
case = material_
mat = 2
input_params["mat"] = mat
input_params["Bmat"] = B
else:
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrix(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0, crystal=crystal,
crystal1=crystal1)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spots(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
match_rate = np.round(100 * len(rmv_ind)/theospots, 3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
final_rmv_ind = rmv_ind
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123, init_mat, init_material, init_B, init_case]
tried_spots.append(i)
if (final_match_rate <= cap_matchrate123): ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
objective_function.append([0, [], []])
else:
objective_function.append([final_match_rate, final_rmv_ind, all_stats])
tried_spots.append(j)
sort_ind = []
for ijk in objective_function:
sort_ind.append(ijk[0])
sort_ind = np.array(sort_ind)
sort_ind = np.argsort(sort_ind)[::-1]
for gr_count123 in range(len(sort_ind)):
max_mr = objective_function[sort_ind[gr_count123]][0]
rmv_ind = objective_function[sort_ind[gr_count123]][1]
all_stats = objective_function[sort_ind[gr_count123]][2]
if len(rmv_ind) == 0 or max_mr==0:
continue
mat = all_stats[15]
if mat == 1:
if igrain==0 and material_phase_always_present ==2:
mat = 0
case="None"
if material0_count >= material0_limit:
mat = 0
case="None"
elif mat == 2:
if igrain==0 and material_phase_always_present ==1:
mat = 0
case="None"
if material1_count >= material1_limit:
mat = 0
case="None"
if mat == 0:
continue
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr])))> coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
continue
input_params["mat"] = all_stats[15]
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(all_stats[16]),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, all_stats[17],
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(max_mr), np.min(max_mr), \
rmv_ind, str(all_stats[18]), all_stats[15], dev_strain, strain_sample, iR, fR, objective_function
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0, objective_function
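The Hough filter in `get_orient_mat_graphv1HM` above keeps only spot pairs whose gnomonic-projection points lie close to a common detected line, using the perpendicular distance from each point to the line through p1 and p2. A self-contained sketch of that distance computation (hypothetical names; the 2D cross product is written out explicitly rather than via `np.cross`):

```python
import numpy as np

def point_to_line_distance(p1, p2, p3):
    """Perpendicular distance from point p3 to the infinite line through
    p1 and p2, via the magnitude of the 2D cross product -- a sketch of
    the colinearity check used in the Hough-line filter above."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(p3, float) - np.asarray(p1, float)
    cross = d[0] * v[1] - d[1] * v[0]  # scalar 2D cross product
    return abs(cross) / np.hypot(d[0], d[1])
```

In the filter above, a pair is retained only when both spots fall within `dist_threshold` of the same Hough peak line.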
def get_orient_mat_graphv1(s_tth, s_chi, material0_, material1_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, hkl_all_class1, max_pred, dict_dp, spots,
dist, Gstar_metric0, Gstar_metric1, B0, B1, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, tab_distance_classhkl_data1=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, ind_mat1=None, strain_calculation=None, cap_matchrate123=None,
material0_count=None, material1_count=None, material0_limit=None, material1_limit=None,
igrain=None, material_phase_always_present=None, objective_function=None, crystal=None,
crystal1=None, strain_free_parameters=None):
if objective_function is None:
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
if material0_ == material1_:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < input_params["tolerance"])
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
else:
list_of_sets = []
for ii in range(0, min(nb_spots_consider, len(dist))):
if max_pred[ii] < softmax_threshold:
continue
a1 = np.round(dist[ii],3)
for i in range(0, min(nb_spots_consider, len(dist))):
if ii==i:
continue
if (ii,i) in list_of_sets or (i,ii) in list_of_sets:
continue
if max_pred[i] < softmax_threshold:
continue
if class_predicted[ii] < ind_mat and class_predicted[i] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
tolerance_new = input_params["tolerance"]
hkl1 = hkl_all_class0[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class0[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric0
elif (ind_mat <= class_predicted[ii] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
tolerance_new = input_params["tolerance1"]
hkl1 = hkl_all_class1[str(predicted_hkl[ii])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class1[str(predicted_hkl[i])]
hkl2_list = np.array(hkl2)
Gstar_metric = Gstar_metric1
else:
continue
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
np.putmask(tab_angulardist_temp, np.abs(tab_angulardist_temp) < 0.001, 400)
list_ = np.where(np.abs(tab_angulardist_temp-a1[i]) < tolerance_new)
if len(list_[0]) != 0:
list_of_sets.append((ii,i))
## build a directed connection graph from the candidate spot pairs
graph_obj = nx.DiGraph(list_of_sets)
connected_nodes_length = []
connected_nodes = [[] for i in range(len(graph_obj))]
for i,line in enumerate(nx.generate_adjlist(graph_obj)):
connected_nodes_length.append(len(line.split(" ")))
connected_nodes[i].append([int(jj) for jj in line.split(" ")])
## sort by maximum node occurrence
connected_nodes_length = np.array(connected_nodes_length)
connected_nodes_length_sort_ind = np.argsort(connected_nodes_length)[::-1]
mat = 0
case = "None"
tried_spots = []
objective_function = []
for toplist in range(len(graph_obj)):
# ## continue if less than 3 connections are found for a graph
# if connected_nodes_length[connected_nodes_length_sort_ind[toplist]] < 2:
# continue
for j in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
init_mr = 0
final_match_rate = 0
final_rmv_ind = []
all_stats = []
for i in connected_nodes[connected_nodes_length_sort_ind[toplist]][0]:
if j == i:
continue
if j in tried_spots and i in tried_spots:
continue
if material0_ == material1_:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if class_predicted[i] < ind_mat and class_predicted[j] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
elif (ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[j] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
hkl_all_class = hkl_all_class1
material_ = material1_
B = B1
Gstar_metric = Gstar_metric1
case = material_
mat = 2
input_params["mat"] = mat
input_params["Bmat"] = B
else:
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrix(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0, crystal=crystal,
crystal1=crystal1)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spots(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
match_rate = np.round(100 * len(rmv_ind)/theospots, 3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
final_rmv_ind = rmv_ind
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123, init_mat, init_material, init_B, init_case]
tried_spots.append(i)
if (final_match_rate <= cap_matchrate123): ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
objective_function.append([0, [], []])
else:
objective_function.append([final_match_rate, final_rmv_ind, all_stats])
tried_spots.append(j)
sort_ind = []
for ijk in objective_function:
sort_ind.append(ijk[0])
sort_ind = np.array(sort_ind)
sort_ind = np.argsort(sort_ind)[::-1]
for gr_count123 in range(len(sort_ind)):
max_mr = objective_function[sort_ind[gr_count123]][0]
rmv_ind = objective_function[sort_ind[gr_count123]][1]
all_stats = objective_function[sort_ind[gr_count123]][2]
if len(rmv_ind) == 0 or max_mr==0:
continue
mat = all_stats[15]
if mat == 1:
if igrain==0 and material_phase_always_present ==2:
mat = 0
case="None"
if material0_count >= material0_limit:
mat = 0
case="None"
elif mat == 2:
if igrain==0 and material_phase_always_present ==1:
mat = 0
case="None"
if material1_count >= material1_limit:
mat = 0
case="None"
if mat == 0:
continue
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr])))> coeff_overlap*len(spots1_global[igr]) for igr in range(len(spots1_global))]
if np.any(current_spots):
continue
input_params["mat"] = all_stats[15]
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(all_stats[16]),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, all_stats[17],
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(max_mr), np.min(max_mr), \
rmv_ind, str(all_stats[18]), all_stats[15], dev_strain, strain_sample, iR, fR, objective_function
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0, objective_function
def get_orient_mat_HM(s_tth, s_chi, material0_, material1_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, hkl_all_class1, max_pred, dict_dp, spots,
dist, Gstar_metric0, Gstar_metric1, B0, B1, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, tab_distance_classhkl_data1=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, ind_mat1=None, strain_calculation=None,cap_matchrate123=None,
material0_count=None, material1_count=None, material0_limit=None, material1_limit=None,
igrain=None, material_phase_always_present=None, strain_free_parameters=None):
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
current_spots1 = [0 for igr in range(len(spots1_global))]
mat = 0
case = "None"
all_stats = []
## calculate the gnomonic projection space
imageGNO, nbpeaks, halfdiagonal = computeGnomonicImage(s_tth, s_chi)
hough, theta_h, d_h = hough_line(imageGNO)
for i in range(0, min(nb_spots_consider, len(s_tth))):
for j in range(i+1, min(nb_spots_consider, len(s_tth))):
overlap = False
## condition to check if spots lie on the same line
in_hough_line = False
for _, anglehs, disths in zip(*hough_line_peaks(hough, theta_h, d_h)):
y0 = (disths - 0 * np.cos(anglehs)) / np.sin(anglehs)
y1 = (disths - imageGNO.shape[1] * np.cos(anglehs)) / np.sin(anglehs)
p1 = np.array((0,y0))
p2 = np.array((imageGNO.shape[1], y1))
p3_0 = ComputeGnomon_singledata(s_tth[i], s_chi[i])
p3_1 = ComputeGnomon_singledata(s_tth[j], s_chi[j])
distance_0 = np.abs(np.cross(p2-p1, p3_0-p1)) / np.linalg.norm(p2-p1)
distance_1 = np.abs(np.cross(p2-p1, p3_1-p1)) / np.linalg.norm(p2-p1)
if distance_0 < dist_threshold and distance_1 < dist_threshold:
# print(distance_0, distance_1)
in_hough_line = True
if in_hough_line:
break
if not in_hough_line:
continue
if (max_pred[j] < softmax_threshold) or (j in spots) or \
(max_pred[i] < softmax_threshold) or (i in spots):
continue
if material0_ == material1_:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if class_predicted[i] < ind_mat and class_predicted[j] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
if igrain==0 and material_phase_always_present == 2:
mat = 0
case="None"
if material0_count >= material0_limit:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
elif (ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[j] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
hkl_all_class = hkl_all_class1
material_ = material1_
B = B1
Gstar_metric = Gstar_metric1
case = material_
mat = 2
if igrain==0 and material_phase_always_present == 1:
mat = 0
case="None"
if material1_count >= material1_limit:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
else:
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrix(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spots(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
overlap = False
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr]))) for igr in range(len(spots1_global))]
for igr in range(len(spots1_global)):
if current_spots[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
break
if overlap:
continue
match_rate = np.round(100 * len(rmv_ind)/theospots,3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
current_spots1 = current_spots
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_rmv_ind = rmv_ind
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123]
if (final_match_rate >= mr_threshold*100.) and not overlap:
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
overlap = False
for igr in range(len(spots1_global)):
if current_spots1[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
if (final_match_rate <= cap_matchrate123) or overlap: ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0
input_params["mat"] = init_mat
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
def get_orient_mat(s_tth, s_chi, material0_, material1_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, hkl_all_class1, max_pred, dict_dp, spots,
dist, Gstar_metric0, Gstar_metric1, B0, B1, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, tab_distance_classhkl_data1=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, ind_mat1=None, strain_calculation=None,cap_matchrate123=None,
material0_count=None, material1_count=None, material0_limit=None, material1_limit=None,
igrain=None, material_phase_always_present=None, strain_free_parameters=None):
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
current_spots1 = [0 for igr in range(len(spots1_global))]
mat = 0
case = "None"
all_stats = []
for i in range(0, min(nb_spots_consider, len(s_tth))):
for j in range(i+1, min(nb_spots_consider, len(s_tth))):
overlap = False
if (max_pred[j] < softmax_threshold) or (j in spots) or \
(max_pred[i] < softmax_threshold) or (i in spots):
continue
if material0_ == material1_:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if class_predicted[i] < ind_mat and class_predicted[j] < ind_mat:
tab_distance_classhkl_data = tab_distance_classhkl_data0
hkl_all_class = hkl_all_class0
material_ = material0_
B = B0
Gstar_metric = Gstar_metric0
case = material_
mat = 1
if igrain==0 and material_phase_always_present == 2:
mat = 0
case="None"
if material0_count >= material0_limit:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
elif (ind_mat <= class_predicted[i] < (ind_mat+ind_mat1)) and \
(ind_mat <= class_predicted[j] < (ind_mat+ind_mat1)):
tab_distance_classhkl_data = tab_distance_classhkl_data1
hkl_all_class = hkl_all_class1
material_ = material1_
B = B1
Gstar_metric = Gstar_metric1
case = material_
mat = 2
if igrain==0 and material_phase_always_present == 1:
mat = 0
case="None"
if material1_count >= material1_limit:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
else:
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrix(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spots(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
overlap = False
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr]))) for igr in range(len(spots1_global))]
for igr in range(len(spots1_global)):
if current_spots[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
break
if overlap:
continue
match_rate = np.round(100 * len(rmv_ind)/theospots,3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
current_spots1 = current_spots
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_rmv_ind = rmv_ind
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123]
if (final_match_rate >= mr_threshold*100.) and not overlap:
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
overlap = False
for igr in range(len(spots1_global)):
if current_spots1[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
if (final_match_rate <= cap_matchrate123) or overlap: ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0
input_params["mat"] = init_mat
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUB(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
def propose_UB_matrix(hkl1_list, hkl2_list, Gstar_metric, input_params, dist123,
tth_chi_spot1, tth_chi_spot2, B, method=0, crystal=None,
crystal1=None):
if method == 0:
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
if input_params["mat"] == 1:
list_ = np.where(np.abs(tab_angulardist_temp-dist123) < input_params["tolerance"])
final_crystal = crystal
elif input_params["mat"] == 2:
list_ = np.where(np.abs(tab_angulardist_temp-dist123) < input_params["tolerance1"])
final_crystal = crystal1
else: # guard: without a material index, list_ and final_crystal would be undefined
return None, True, 0, 0
if final_crystal is not None:
symm_operator = final_crystal._hklsym
else:
symm_operator = np.eye(3)
if len(list_[0]) == 0:
return None, True, 0, 0
rot_mat_abs = []
actual_mat = []
spot1_hkl = []
spot2_hkl = []
triedspots = []
for ii, jj in zip(list_[0], list_[1]):
if ii in triedspots and jj in triedspots:
continue
conti_ = False
try:
rot_mat1 = FindO.OrientMatrix_from_2hkl(hkl1_list[ii], tth_chi_spot1, \
hkl2_list[jj], tth_chi_spot2,
B)
# rot_mat1 = find_uniq_u(rot_mat1, symm_operator)
except Exception:
continue
copy_rm = np.copy(rot_mat1)
copy_rm = np.round(np.abs(copy_rm),5)
copy_rm.sort(axis=1)
for iji in rot_mat_abs:
iji.sort(axis=1)
if np.all(iji==copy_rm):
conti_ = True
break
if conti_:
continue
rot_mat_abs.append(np.round(np.abs(rot_mat1),5))
actual_mat.append(rot_mat1)
spot1_hkl.append(hkl1_list[ii])
spot2_hkl.append(hkl2_list[jj])
triedspots.append(ii)
triedspots.append(jj)
else:
# method 2
hkl_all = np.vstack((hkl1_list, hkl2_list))
LUT = FindO.GenerateLookUpTable(hkl_all, Gstar_metric)
if input_params["mat"] == 1:
hkls = FindO.PlanePairs_2(dist123, input_params["tolerance"], LUT, onlyclosest=1)
elif input_params["mat"] == 2:
hkls = FindO.PlanePairs_2(dist123, input_params["tolerance1"], LUT, onlyclosest=1)
if hkls is None or len(hkls) == 0:
return None, True, 0, 0
rot_mat_abs = []
actual_mat = []
spot1_hkl = []
spot2_hkl = []
for ii in range(len(hkls)):
if np.all(hkls[ii][0] == hkls[ii][1]):
continue
conti_ = False
try:
rot_mat1 = FindO.OrientMatrix_from_2hkl(hkls[ii][0], tth_chi_spot1, \
hkls[ii][1], tth_chi_spot2,
B)
# rot_mat1 = find_uniq_u(rot_mat1, symm_operator)
except Exception:
continue
copy_rm = np.copy(rot_mat1)
copy_rm = np.round(np.abs(copy_rm),5)
copy_rm.sort(axis=1)
for iji in rot_mat_abs:
iji.sort(axis=1)
if np.all(iji==copy_rm):
conti_ = True
break
if conti_:
continue
rot_mat_abs.append(np.round(np.abs(rot_mat1),5))
actual_mat.append(rot_mat1)
spot1_hkl.append(hkls[ii][0])
spot2_hkl.append(hkls[ii][1])
#TODO
## just fixing a* to x seems ok; if not think of aligning b* to xy plane
sum_sign = []
for nkl in range(len(actual_mat)):
temp_mat = np.dot(actual_mat[nkl], B)
## fix could be to choose a matrix that aligns best the b* vector to Y axis or a* to X axis
# if np.argmax(np.abs(temp_mat[:2,0])) == 0 and \
# np.argmax(np.abs(temp_mat[:2,1])) == 1: ##a* along x, b*along y
if np.argmax(np.abs(temp_mat[:2,0])) == 0: ##a* along x
sum_sign.append(2)
elif np.argmax(np.abs(temp_mat[:2,0])) == np.argmax(np.abs(temp_mat[:2,1])):
sum_sign.append(0)
else:
sum_sign.append(1)
ind_sort = np.argsort(sum_sign)[::-1]
## re-arrange
actual_mat1 = []
spot1_hkl1, spot2_hkl1 = [], []
for inin in ind_sort:
actual_mat1.append(actual_mat[inin])
spot1_hkl1.append(spot1_hkl[inin])
spot2_hkl1.append(spot2_hkl[inin])
actual_mat, spot1_hkl, spot2_hkl = actual_mat1, spot1_hkl1, spot2_hkl1
return actual_mat, False, spot1_hkl, spot2_hkl
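The duplicate-orientation filter above (the `copy_rm` / `rot_mat_abs` bookkeeping) can be sketched in isolation. This is a hedged illustration, not part of the module's API: `is_duplicate` and its key scheme are hypothetical names, but the key itself (row-sorted absolute values rounded to 5 decimals, so sign flips and within-row permutations collapse onto the same key) mirrors the comparison used in `propose_UB_matrix`.

```python
import numpy as np

def is_duplicate(rot_mat, seen):
    # Build a comparison key: absolute values rounded to 5 decimals,
    # each row sorted, as in the propose_UB_matrix duplicate check.
    key = np.round(np.abs(rot_mat), 5)
    key.sort(axis=1)
    for prev in seen:
        if np.all(prev == key):
            return True
    seen.append(key)
    return False

seen = []
r = np.eye(3)
first = is_duplicate(r, seen)    # first occurrence: kept
second = is_duplicate(-r, seen)  # sign-flipped copy: filtered as duplicate
```

Storing the already-sorted key (rather than sorting stored entries in place on every comparison, as the loop above does) gives the same decisions without mutating the history list.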
def find_uniq_u(u, syms):
"""
Unique representation of a rotation matrix:
apply this function before strain refinement,
as a distorted unit cell may produce an undesirable matrix
"""
uniq = u
tmax = np.trace(uniq)
for sym in syms:
cand = np.dot(sym, uniq)
t = np.trace(cand)
if t > tmax:
uniq = cand
tmax = t
return np.array(uniq)
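A minimal, self-contained check of the trace-maximising idea behind `find_uniq_u`: among all symmetry-equivalent matrices `sym @ u`, keep the one with the largest trace. The toy two-element symmetry set below is an assumption for illustration only.

```python
import numpy as np

def find_uniq_u_demo(u, syms):
    # Same reduction as find_uniq_u: scan symmetry-equivalent candidates
    # and keep the one whose trace is largest.
    uniq = u
    tmax = np.trace(uniq)
    for sym in syms:
        cand = np.dot(sym, uniq)
        t = np.trace(cand)
        if t > tmax:
            uniq, tmax = cand, t
    return np.array(uniq)

# Identity plus a 180-degree rotation about z as a toy symmetry group.
syms = [np.eye(3), np.diag([-1.0, -1.0, 1.0])]
# A 180-degree rotation about z as the input orientation...
u = np.diag([-1.0, -1.0, 1.0])
uniq = find_uniq_u_demo(u, syms)
# ...reduces to the identity (trace 3 beats trace -1).
```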
def remove_spots(s_tth, s_chi, first_match123, material_, input_params, detectorparameters, dict_dp):
try:
grain = CP.Prepare_Grain(material_, first_match123, dictmaterials=dictLT.dict_Materials)
### initialize global variables to be used later
call_global()
except Exception:
return [], 100
#### Perhaps better than SimulateResult function
kf_direction = dict_dp["kf_direction"]
detectordistance = dict_dp["detectorparameters"][0]
detectordiameter = dict_dp["detectordiameter"]
pixelsize = dict_dp["pixelsize"]
dim = dict_dp["dim"]
spots2pi = LT.getLaueSpots(CST_ENERGYKEV / input_params["emax"],
CST_ENERGYKEV / input_params["emin"],
[grain],
fastcompute=1,
verbose=0,
kf_direction=kf_direction,
ResolutionAngstrom=False,
dictmaterials=dictLT.dict_Materials)
TwicethetaChi = LT.filterLaueSpots_full_np(spots2pi[0][0], None, onlyXYZ=False,
HarmonicsRemoval=0,
fastcompute=1,
kf_direction=kf_direction,
detectordistance=detectordistance,
detectordiameter=detectordiameter,
pixelsize=pixelsize,
dim=dim)
## get proximity for exp and theo spots
if input_params["mat"] == 1:
angtol = input_params["tolerance"]
elif input_params["mat"] == 2:
angtol = input_params["tolerance1"]
else:
return [], 100
if option_global =="v1":
# print("entering v1")
List_Exp_spot_close, residues_link, _ = getProximityv1(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
else: # "v2" and any other option fall back to the ambiguous matcher
List_Exp_spot_close, residues_link, _ = getProximityv1_ambigious(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
List_Exp_spot_close, ind_uniq = np.unique(List_Exp_spot_close, return_index=True)
residues_link = np.take(residues_link, ind_uniq)
if len(residues_link) == 0 or np.average(residues_link) > residues_threshold:
return [], 100
if len(np.unique(List_Exp_spot_close)) < nb_spots_global_threshold:
return [], 100
return List_Exp_spot_close, len(TwicethetaChi[0])
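The caller (`get_orient_mat`) turns the pair returned by `remove_spots` into a match rate: matched experimental spots over simulated spots, in percent. A minimal sketch of that convention, with hypothetical numbers standing in for real matcher output:

```python
import numpy as np

# Hypothetical output of a remove_spots-style matcher: indices of experimental
# spots linked to the simulated pattern, plus the simulated spot count.
matched_exp_indices = [0, 2, 5, 7, 9]
n_theo_spots = 20

# Match rate as computed in get_orient_mat, rounded to 3 decimals.
match_rate = np.round(100 * len(matched_exp_indices) / n_theo_spots, 3)
```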
def simulate_spots(rot_mat, material_, emax, emin, detectorparameters, dict_dp, angtol,
s_tth, s_chi):
try:
grain = CP.Prepare_Grain(material_, rot_mat, dictmaterials=dictLT.dict_Materials)
### initialize global variables to be used later
call_global()
except Exception:
return [], [], [], [], []
#### Perhaps better than SimulateResult function
kf_direction = dict_dp["kf_direction"]
detectordistance = dict_dp["detectorparameters"][0]
detectordiameter = dict_dp["detectordiameter"]
pixelsize = dict_dp["pixelsize"]
dim = dict_dp["dim"]
spots2pi = LT.getLaueSpots(CST_ENERGYKEV / emax, CST_ENERGYKEV / emin,
[grain],
fastcompute=0,
verbose=0,
kf_direction=kf_direction,
ResolutionAngstrom=False,
dictmaterials=dictLT.dict_Materials)
TwicethetaChi = LT.filterLaueSpots_full_np(spots2pi[0][0], spots2pi[1][0], onlyXYZ=False,
HarmonicsRemoval=0,
fastcompute=0,
kf_direction=kf_direction,
detectordistance=detectordistance,
detectordiameter=detectordiameter,
pixelsize=pixelsize,
dim=dim)
if option_global =="v1":
List_Exp_spot_close, residues_link, theo_index = getProximityv1(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
else: # "v2" and any other option fall back to the ambiguous matcher
List_Exp_spot_close, residues_link, theo_index = getProximityv1_ambigious(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
List_Exp_spot_close, ind_uniq = np.unique(List_Exp_spot_close, return_index=True)
residues_link = np.take(residues_link, ind_uniq)
theo_index = np.take(theo_index, ind_uniq)
return TwicethetaChi[0], TwicethetaChi[1], TwicethetaChi[2], TwicethetaChi[3], List_Exp_spot_close, residues_link, theo_index
def getProximityv1_ambigious(TwicethetaChi, data_theta, data_chi, angtol=0.5):
# theo simul data
theodata = np.array([TwicethetaChi[0] / 2.0, TwicethetaChi[1]]).T
# exp data
sorted_data = np.array([data_theta, data_chi]).T
table_dist = GT.calculdist_from_thetachi(sorted_data, theodata)
prox_table = np.argmin(table_dist, axis=1)
allresidues = np.amin(table_dist, axis=1)
very_close_ind = np.where(allresidues < angtol)[0]
List_Exp_spot_close = []
theo_index = []
if len(very_close_ind) > 0:
for theospot_ind in very_close_ind: # loop over theo spots index
List_Exp_spot_close.append(prox_table[theospot_ind])
theo_index.append(theospot_ind)
return List_Exp_spot_close, allresidues[very_close_ind], theo_index
def getProximityv1( TwicethetaChi, data_theta, data_chi, angtol=0.5):
theodata = np.array([TwicethetaChi[0] / 2.0, TwicethetaChi[1]]).T
# exp data
sorted_data = np.array([data_theta, data_chi]).T
table_dist = GT.calculdist_from_thetachi(sorted_data, theodata)
prox_table = np.argmin(table_dist, axis=1)
allresidues = np.amin(table_dist, axis=1)
very_close_ind = np.where(allresidues < angtol)[0]
List_Exp_spot_close = []
Miller_Exp_spot = []
if len(very_close_ind) > 0:
for theospot_ind in very_close_ind: # loop over theo spots index
List_Exp_spot_close.append(prox_table[theospot_ind])
Miller_Exp_spot.append(1)
else:
return [], [], []
# removing exp spot which appears many times(close to several simulated spots of one grain)--------------
arrayLESC = np.array(List_Exp_spot_close, dtype=float)
sorted_LESC = np.sort(arrayLESC)
diff_index = sorted_LESC - np.array(list(sorted_LESC[1:]) + [sorted_LESC[0]])
toremoveindex = np.where(diff_index == 0)[0]
if len(toremoveindex) > 0:
# index of exp spot in arrayLESC that are duplicated
ambiguous_exp_ind = GT.find_closest(np.array(sorted_LESC[toremoveindex], dtype=float), arrayLESC, 0.1)[1]
for ind in ambiguous_exp_ind:
Miller_Exp_spot[ind] = None
ProxTablecopy = np.copy(prox_table)
for theo_ind, exp_ind in enumerate(prox_table):
where_th_ind = np.where(ProxTablecopy == exp_ind)[0]
if len(where_th_ind) > 1:
for indy in where_th_ind:
ProxTablecopy[indy] = -prox_table[indy]
closest = np.argmin(allresidues[where_th_ind])
ProxTablecopy[where_th_ind[closest]] = -ProxTablecopy[where_th_ind[closest]]
singleindices = []
refine_indexed_spots = {}
# loop over close exp. spots
for k in range(len(List_Exp_spot_close)):
exp_index = List_Exp_spot_close[k]
if not singleindices.count(exp_index):
singleindices.append(exp_index)
theo_index = np.where(ProxTablecopy == exp_index)[0]
if (len(theo_index) == 1): # only one theo spot close to the current exp. spot
refine_indexed_spots[exp_index] = [exp_index, theo_index, Miller_Exp_spot[k]]
else: # recent PATCH:
closest_theo_ind = np.argmin(allresidues[theo_index])
if allresidues[theo_index][closest_theo_ind] < angtol:
refine_indexed_spots[exp_index] = [exp_index, theo_index[closest_theo_ind], Miller_Exp_spot[k]]
listofpairs = []
theo_index = []
linkResidues = []
selectedAbsoluteSpotIndices = np.arange(len(data_theta))
for val in list(refine_indexed_spots.values()):
if val[2] is not None:
localspotindex = val[0]
if not isinstance(val[1], (list, np.ndarray)):
closetheoindex = val[1]
else:
closetheoindex = val[1][0]
absolute_spot_index = selectedAbsoluteSpotIndices[localspotindex]
listofpairs.append(absolute_spot_index) # Exp, Theo, where -1 for specifying that it came from automatic linking
theo_index.append(closetheoindex)
linkResidues.append(allresidues[closetheoindex])
return listofpairs, linkResidues, theo_index
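The core of the proximity matchers above is a theo-by-exp distance table, an `argmin` per theoretical spot, and an angular-tolerance cut. The sketch below uses a plain Euclidean distance as a stand-in for `GT.calculdist_from_thetachi` (that substitution, and the toy coordinates, are assumptions for illustration only); the `prox_table`/`allresidues` names mirror `getProximityv1`.

```python
import numpy as np

# Toy (theta, chi) coordinates: 3 theoretical spots, 2 experimental spots.
theo = np.array([[10.0, 5.0], [20.0, -3.0], [35.0, 8.0]])
exp = np.array([[10.1, 5.1], [34.0, 8.5]])

# Distance table: one row per theo spot, one column per exp spot.
table_dist = np.linalg.norm(theo[:, None, :] - exp[None, :, :], axis=2)
prox_table = np.argmin(table_dist, axis=1)  # closest exp spot per theo spot
allresidues = np.amin(table_dist, axis=1)   # residual distance to that spot

angtol = 1.5
very_close = np.where(allresidues < angtol)[0]  # theo spots with a match
matched_exp = prox_table[very_close]            # their linked exp spots
```

Here theo spots 0 and 2 pass the tolerance cut (linked to exp spots 0 and 1 respectively), while theo spot 1 has no experimental neighbour within `angtol`.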
def refineonce_fromUB(s_tth, s_chi, UBmat, grain, input_params,
detectorparameters, dict_dp, B_matrix):
# starting B0matrix corresponding to the unit cell -----
B0matrix = np.copy(B_matrix)
if input_params["mat"] == 1:
AngTol = input_params["tolerance"]
elif input_params["mat"] == 2:
AngTol = input_params["tolerance1"]
else: # guard: AngTol would otherwise be undefined
return UBmat
#### Spots in first match (no refining, just simple auto links to filter spots)
Twicetheta, Chi, Miller_ind, posx, posy, _ = LT.SimulateLaue(grain,
input_params["emin"],
input_params["emax"],
detectorparameters,
kf_direction=dict_dp['kf_direction'],
removeharmonics=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
ResolutionAngstrom=False,
detectordiameter=dict_dp['detectordiameter'],
dictmaterials=dictLT.dict_Materials)
## get proximity for exp and theo spots
linkedspots_link, linkExpMiller_link, \
linkResidues_link = getProximityv0(np.array([Twicetheta, Chi]), # warning array(2theta, chi)
s_tth/2.0, s_chi, Miller_ind, # warning theta, chi for exp
angtol=float(AngTol))
if len(linkedspots_link) < 8:
return UBmat
linkedspots_fit = linkedspots_link
linkExpMiller_fit = linkExpMiller_link
arraycouples = np.array(linkedspots_fit)
exp_indices = np.array(arraycouples[:, 0], dtype=int)
sim_indices = np.array(arraycouples[:, 1], dtype=int)
nb_pairs = len(exp_indices)
Data_Q = np.array(linkExpMiller_fit)[:, 1:]
sim_indices = np.arange(nb_pairs) # for fitting function this must be an arange...
pixX = np.take(dict_dp['peakX'], exp_indices)
pixY = np.take(dict_dp['peakY'], exp_indices)
weights = None #np.take(dict_dp['intensity'], exp_indices)
starting_orientmatrix = np.copy(UBmat)
results = None
# ----------------------------------
# refinement model
# ----------------------------------
# -------------------------------------------------------
allparameters = np.array(detectorparameters + [1, 1, 0, 0, 0] + [0, 0, 0])
# strain & orient
initial_values = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0, 0.0, 0.0])
arr_indexvaryingparameters = np.arange(5, 13)
results = FitO.fit_on_demand_strain(initial_values,
Data_Q,
allparameters,
FitO.error_function_on_demand_strain,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
verbose=0,
weights=weights,
kf_direction=dict_dp['kf_direction'])
if results is None:
return UBmat
residues, deltamat, newmatrix = FitO.error_function_on_demand_strain(
results,
Data_Q,
allparameters,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pureRotation=0,
verbose=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'])
UBmat = np.copy(newmatrix)
return UBmat
def calculate_strains_fromUB(s_tth, s_chi, UBmat, material_, input_params,
detectorparameters, dict_dp, spots, B_matrix, strain_free_parameters):
## for the moment strain_free_parameters is a trial implementation
#TODO to be verified
if ("a" not in strain_free_parameters) and len(strain_free_parameters)>=5:
if additional_expression[0] != "none":
print("Note: additional_expression is not applied for the current set of strain free parameters")
# starting B0matrix corresponding to the unit cell -----
B0matrix = np.copy(B_matrix)
latticeparams = dictLT.dict_Materials[material_][1]
## Included simple multi level refinement of strains
init_residues = -0.1
final_residues = -0.1
if input_params["mat"] == 1:
straintolerance = input_params["tolerancestrain"]
elif input_params["mat"] == 2:
straintolerance = input_params["tolerancestrain1"]
else: # guard: straintolerance would otherwise be undefined
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
devstrain, deviatoricstrain_sampleframe = np.zeros((3,3)), np.zeros((3,3))
for ijk, AngTol in enumerate(straintolerance):
#### Spots in first match (no refining, just simple auto links to filter spots)
grain = CP.Prepare_Grain(material_, UBmat, dictmaterials=dictLT.dict_Materials)
Twicetheta, Chi, Miller_ind, posx, posy, _ = LT.SimulateLaue(grain,
input_params["emin"],
input_params["emax"],
detectorparameters,
kf_direction=dict_dp['kf_direction'],
removeharmonics=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
ResolutionAngstrom=False,
detectordiameter=dict_dp['detectordiameter'],
dictmaterials=dictLT.dict_Materials)
## get proximity for exp and theo spots
linkedspots_link, linkExpMiller_link, \
linkResidues_link = getProximityv0(np.array([Twicetheta, Chi]), # warning array(2theta, chi)
s_tth/2.0, s_chi, Miller_ind, # warning theta, chi for exp
angtol=float(AngTol))
if len(linkedspots_link) < 8:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
linkedspots_fit = linkedspots_link
linkExpMiller_fit = linkExpMiller_link
arraycouples = np.array(linkedspots_fit)
exp_indices = np.array(arraycouples[:, 0], dtype=int)
sim_indices = np.array(arraycouples[:, 1], dtype=int)
nb_pairs = len(exp_indices)
Data_Q = np.array(linkExpMiller_fit)[:, 1:]
sim_indices = np.arange(nb_pairs) # for fitting function this must be an arange...
pixX = np.take(dict_dp['peakX'], exp_indices)
pixY = np.take(dict_dp['peakY'], exp_indices)
weights = None #np.take(dict_dp['intensity'], exp_indices)
starting_orientmatrix = np.copy(UBmat)
results = None
# ----------------------------------
# refinement model
# ----------------------------------
# -------------------------------------------------------
allparameters = np.array(detectorparameters + [1, 1, 0, 0, 0] + [0, 0, 0])
# strain & orient
initial_values = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0, 0.0, 0.0])
arr_indexvaryingparameters = np.arange(5, 13)
residues, deltamat, newmatrix = FitO.error_function_on_demand_strain(
initial_values,
Data_Q,
allparameters,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pureRotation=0,
verbose=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'])
init_mean_residues = np.copy(np.mean(residues))
if ijk == 0:
init_residues = np.copy(init_mean_residues)
results = FitO.fit_on_demand_strain(initial_values,
Data_Q,
allparameters,
FitO.error_function_on_demand_strain,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
verbose=0,
weights=weights,
kf_direction=dict_dp['kf_direction'])
if results is None:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
residues, deltamat, newmatrix = FitO.error_function_on_demand_strain(
results,
Data_Q,
allparameters,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pureRotation=0,
verbose=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'])
# if np.mean(residues) > final_residues:
# return devstrain, deviatoricstrain_sampleframe, init_residues, final_residues, UBmat
final_mean_residues = np.copy(np.mean(residues))
final_residues = np.copy(final_mean_residues)
# building B mat
# param_strain_sol = results
# varyingstrain = np.array([[1.0, param_strain_sol[2], param_strain_sol[3]],
# [0, param_strain_sol[0], param_strain_sol[4]],
# [0, 0, param_strain_sol[1]]])
# newUmat = np.dot(deltamat, starting_orientmatrix)
# newUBmat = np.dot(newUmat, varyingstrain)
newUBmat = np.copy(newmatrix)
# Bstar_s = np.dot(newUBmat, B0matrix)
# ---------------------------------------------------------------
# postprocessing of unit cell orientation and strain refinement
# ---------------------------------------------------------------
UBmat = np.copy(newmatrix)
(devstrain, lattice_parameter_direct_strain) = CP.compute_deviatoricstrain(newUBmat, B0matrix, latticeparams)
# possibly overwrite and rescale lattice lengths
# constantlength = "a"
# lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(newUBmat, material_, constantlength, dictmaterials=dictLT.dict_Materials)
# print(lattice_parameter_direct_strain)
deviatoricstrain_sampleframe = CP.strain_from_crystal_to_sample_frame2(devstrain, newUBmat)
# in % already
devstrain = np.round(devstrain * 100, decimals=3)
deviatoricstrain_sampleframe = np.round(deviatoricstrain_sampleframe * 100, decimals=3)
else:
# starting B0matrix corresponding to the unit cell -----
B0matrix = np.copy(B_matrix)
latticeparams = dictLT.dict_Materials[material_][1]
## Included simple multi level refinement of strains
init_residues = -0.1
final_residues = -0.1
if input_params["mat"] == 1:
straintolerance = input_params["tolerancestrain"]
elif input_params["mat"] == 2:
straintolerance = input_params["tolerancestrain1"]
else: # guard: straintolerance would otherwise be undefined
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
devstrain, deviatoricstrain_sampleframe = np.zeros((3,3)), np.zeros((3,3))
for ijk, AngTol in enumerate(straintolerance):
#### Spots in first match (no refining, just simple auto links to filter spots)
grain = CP.Prepare_Grain(material_, UBmat, dictmaterials=dictLT.dict_Materials)
Twicetheta, Chi, Miller_ind, posx, posy, _ = LT.SimulateLaue(grain,
input_params["emin"],
input_params["emax"],
detectorparameters,
kf_direction=dict_dp['kf_direction'],
removeharmonics=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
ResolutionAngstrom=False,
detectordiameter=dict_dp['detectordiameter'],
dictmaterials=dictLT.dict_Materials)
## get proximity for exp and theo spots
linkedspots_link, linkExpMiller_link, \
linkResidues_link = getProximityv0(np.array([Twicetheta, Chi]), # warning array(2theta, chi)
s_tth/2.0, s_chi, Miller_ind, # warning theta, chi for exp
angtol=float(AngTol))
if len(linkedspots_link) < 8:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
linkedspots_fit = linkedspots_link
linkExpMiller_fit = linkExpMiller_link
arraycouples = np.array(linkedspots_fit)
exp_indices = np.array(arraycouples[:, 0], dtype=int)
sim_indices = np.array(arraycouples[:, 1], dtype=int)
nb_pairs = len(exp_indices)
Data_Q = np.array(linkExpMiller_fit)[:, 1:]
sim_indices = np.arange(nb_pairs) # for fitting function this must be an arange...
pixX = np.take(dict_dp['peakX'], exp_indices)
pixY = np.take(dict_dp['peakY'], exp_indices)
weights = None #np.take(dict_dp['intensity'], exp_indices)
starting_orientmatrix = np.copy(UBmat)
results = None
# ----------------------------------
# refinement model
# ----------------------------------
# -------------------------------------------------------
allparameters = np.array(detectorparameters + [0, 0, 0] + latticeparams)
fitting_parameters_keys = ["anglex", "angley", "anglez"]
fitting_parameters_values = [0, 0, 0]
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0] == "none":
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for jjkk in strain_free_parameters:
if jjkk == "a" and constantlength != "a":
fitting_parameters_keys.append("a")
fitting_parameters_values.append(latticeparams[0])
if jjkk == "b" and constantlength != "b":
fitting_parameters_keys.append("b")
fitting_parameters_values.append(latticeparams[1])
if jjkk == "c" and constantlength != "c":
fitting_parameters_keys.append("c")
fitting_parameters_values.append(latticeparams[2])
if jjkk == "alpha":
fitting_parameters_keys.append("alpha")
fitting_parameters_values.append(latticeparams[3])
if jjkk == "beta":
fitting_parameters_keys.append("beta")
fitting_parameters_values.append(latticeparams[4])
if jjkk == "gamma":
fitting_parameters_keys.append("gamma")
fitting_parameters_values.append(latticeparams[5])
pureUmatrix, _ = GT.UBdecomposition_RRPP(starting_orientmatrix)
absolutespotsindices = np.arange(len(pixX))
(residues, _, _,
_, _, ) = FitO.error_function_latticeparameters(fitting_parameters_values,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
initrot=pureUmatrix,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
returnalldata=True,
additional_expression = additional_expression[0])
init_mean_residues = np.copy(np.mean(residues))
if ijk == 0:
init_residues = np.copy(init_mean_residues)
results = FitO.fit_function_latticeparameters(fitting_parameters_values,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
UBmatrix_start=pureUmatrix,
nb_grains=1,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
additional_expression = additional_expression[0])
if results is None:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
(residues, Uxyz, newUmat,
newB0matrix, _, ) = FitO.error_function_latticeparameters(results,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
initrot=pureUmatrix,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
returnalldata=True,
additional_expression = additional_expression[0])
final_mean_residues = np.copy(np.mean(residues))
final_residues = np.copy(final_mean_residues)
newUBmat = np.dot(np.dot(newUmat, newB0matrix), np.linalg.inv(B0matrix))
UBmat = np.copy(newUBmat)
# ---------------------------------------------------------------
# postprocessing of unit cell orientation and strain refinement
# ---------------------------------------------------------------
(devstrain, lattice_parameter_direct_strain) = CP.compute_deviatoricstrain(newUBmat, B0matrix, latticeparams)
deviatoricstrain_sampleframe = CP.strain_from_crystal_to_sample_frame2(devstrain, newUBmat)
# in % already
devstrain = np.round(devstrain * 100, decimals=3)
deviatoricstrain_sampleframe = np.round(deviatoricstrain_sampleframe * 100, decimals=3)
return devstrain, deviatoricstrain_sampleframe, init_residues, final_residues, UBmat
def getProximityv0(TwicethetaChi, data_theta, data_chi, data_hkl, angtol=0.5):
# theo simul data
theodata = np.array([TwicethetaChi[0] / 2.0, TwicethetaChi[1]]).T
# exp data
sorted_data = np.array([data_theta, data_chi]).T
table_dist = GT.calculdist_from_thetachi(sorted_data, theodata)
prox_table = np.argmin(table_dist, axis=1)
allresidues = np.amin(table_dist, axis=1)
very_close_ind = np.where(allresidues < angtol)[0]
List_Exp_spot_close = []
Miller_Exp_spot = []
if len(very_close_ind) > 0:
for theospot_ind in very_close_ind: # loop over theo spots index
List_Exp_spot_close.append(prox_table[theospot_ind])
Miller_Exp_spot.append(data_hkl[theospot_ind])
else:
return [],[],[]
# removing exp spot which appears many times(close to several simulated spots of one grain)--------------
arrayLESC = np.array(List_Exp_spot_close, dtype=float)
sorted_LESC = np.sort(arrayLESC)
diff_index = sorted_LESC - np.array(list(sorted_LESC[1:]) + [sorted_LESC[0]])
toremoveindex = np.where(diff_index == 0)[0]
if len(toremoveindex) > 0:
# index of exp spot in arrayLESC that are duplicated
ambiguous_exp_ind = GT.find_closest(np.array(sorted_LESC[toremoveindex], dtype=float), arrayLESC, 0.1)[1]
for ind in ambiguous_exp_ind:
Miller_Exp_spot[ind] = None
ProxTablecopy = np.copy(prox_table)
for theo_ind, exp_ind in enumerate(prox_table):
where_th_ind = np.where(ProxTablecopy == exp_ind)[0]
if len(where_th_ind) > 1:
for indy in where_th_ind:
ProxTablecopy[indy] = -prox_table[indy]
closest = np.argmin(allresidues[where_th_ind])
ProxTablecopy[where_th_ind[closest]] = -ProxTablecopy[where_th_ind[closest]]
singleindices = []
refine_indexed_spots = {}
# loop over close exp. spots
for k in range(len(List_Exp_spot_close)):
exp_index = List_Exp_spot_close[k]
if exp_index not in singleindices:
singleindices.append(exp_index)
theo_index = np.where(ProxTablecopy == exp_index)[0]
if (len(theo_index) == 1): # only one theo spot close to the current exp. spot
refine_indexed_spots[exp_index] = [exp_index, theo_index, Miller_Exp_spot[k]]
else: # recent PATCH:
closest_theo_ind = np.argmin(allresidues[theo_index])
if allresidues[theo_index][closest_theo_ind] < angtol:
refine_indexed_spots[exp_index] = [exp_index, theo_index[closest_theo_ind], Miller_Exp_spot[k]]
listofpairs = []
linkExpMiller = []
linkResidues = []
selectedAbsoluteSpotIndices = np.arange(len(data_theta))
for val in list(refine_indexed_spots.values()):
if val[2] is not None:
localspotindex = val[0]
if not isinstance(val[1], (list, np.ndarray)):
closetheoindex = val[1]
else:
closetheoindex = val[1][0]
absolute_spot_index = selectedAbsoluteSpotIndices[localspotindex]
listofpairs.append([absolute_spot_index, closetheoindex]) # [exp spot index, theo spot index]
linkExpMiller.append([float(absolute_spot_index)] + [float(elem) for elem in val[2]]) # float(val) for further handling as floats array
linkResidues.append([absolute_spot_index, closetheoindex, allresidues[closetheoindex]])
linkedspots_link = np.array(listofpairs)
linkExpMiller_link = linkExpMiller
linkResidues_link = linkResidues
return linkedspots_link, linkExpMiller_link, linkResidues_link
def get_ipf_colour(orientation_matrix1, axis=np.array([0., 0., 1.]), symmetry=None, symm_operator=None):
"""Compute the IPF (inverse pole figure) colour for this orientation.
Given a particular axis expressed in the laboratory coordinate system,
one can compute the so called IPF colour based on that direction
expressed in the crystal coordinate system as :math:`[x_c,y_c,z_c]`.
There is only one tuple (u,v,w) such that:
.. math::
[x_c,y_c,z_c]=u.[0,0,1]+v.[0,1,1]+w.[1,1,1]
and it is used to assign the RGB colour.
:param ndarray axis: the direction to use to compute the IPF colour.
:param Symmetry symmetry: the symmetry operator to use.
:return tuple: a tuple containing the RGB values.
"""
if not np.all(orientation_matrix1==0):
orientation_matrix = orientation_matrix1
else:
return 0,0,0
# rotate the orientation by 40 degrees to bring it into the sample reference frame
omega = np.deg2rad(-40.0)
# rotation of -omega about the x axis (or Y?) to go back to the sample frame Rsample
cw = np.cos(omega)
sw = np.sin(omega)
mat_from_lab_to_sample_frame = np.array([[cw, 0.0, sw], [0.0, 1.0, 0.0], [-sw, 0, cw]])
orientation_matrix = np.dot(mat_from_lab_to_sample_frame.T, orientation_matrix)
if np.linalg.det(orientation_matrix) < 0:
orientation_matrix = -orientation_matrix
axis = axis / np.linalg.norm(axis)  # avoid mutating the default-argument array in place
# rgb = get_field_color(orientation_matrix, axis, symmetry=symmetry, syms=syms)
# return rgb
Vc = np.dot(orientation_matrix, axis)
# get the symmetry operators
syms = np.array(symm_operator) #symmetry.symmetry_operators()
syms = np.concatenate((syms, -syms))
syms = np.unique(syms, axis=0)
if symmetry == Symmetry.cubic:
rgb = get_field_color(orientation_matrix, axis, symmetry, syms)
return rgb
# angleR = 45 - Vc_chi # red color proportional to (45 - chi)
# minAngleR = 0
# maxAngleR = 45
# angleB = Vc_phi # blue color proportional to phi
# minAngleB = 0
# maxAngleB = 45
elif symmetry == Symmetry.hexagonal:
Vc_syms = np.dot(syms, Vc)
# phi: rotation around 001 axis, from 100 axis to Vc vector, projected on (100,010) plane
Vc_phi = np.arctan2(Vc_syms[:, 1], Vc_syms[:, 0]) * 180 / np.pi
# chi: rotation around 010 axis, from 001 axis to Vc vector, projected on (100,001) plane
# Vc_chi = np.arctan2(Vc_syms[:, 0], Vc_syms[:, 2]) * 180 / np.pi
# psi : angle from 001 axis to Vc vector
Vc_psi = np.arccos(Vc_syms[:, 2]) * 180 / np.pi
angleR = 90 - Vc_psi # red color proportional to (90 - psi)
minAngleR = 0
maxAngleR = 90
angleB = Vc_phi # blue color proportional to phi
minAngleB = 0
maxAngleB = 30
else:
rgb = get_field_color(orientation_matrix, axis, symmetry, syms)
return rgb
# find the axis lying in the fundamental zone
fz_list = ((angleR >= minAngleR) & (angleR < maxAngleR) &
(angleB >= minAngleB) & (angleB < maxAngleB)).tolist()
if fz_list.count(True) != 1:
# print("funda problem")
rgb = get_field_color(orientation_matrix, axis, symmetry, syms)
return rgb
i_SST = fz_list.index(True)
r = angleR[i_SST] / maxAngleR
g = (maxAngleR - angleR[i_SST]) / maxAngleR * (maxAngleB - angleB[i_SST]) / maxAngleB
b = (maxAngleR - angleR[i_SST]) / maxAngleR * angleB[i_SST] / maxAngleB
rgb = np.array([r, g, b])
rgb = rgb / rgb.max()
return rgb
def get_field_color(orientation_matrix, axis=np.array([0., 0., 1.]), symmetry=None, syms=None):
"""Compute the IPF (inverse pole figure) colour for this orientation.
Given a particular axis expressed in the laboratory coordinate system,
one can compute the so called IPF colour based on that direction
expressed in the crystal coordinate system as :math:`[x_c,y_c,z_c]`.
There is only one tuple (u,v,w) such that:
.. math::
[x_c,y_c,z_c]=u.[0,0,1]+v.[0,1,1]+w.[1,1,1]
and it is used to assign the RGB colour.
:param ndarray axis: the direction to use to compute the IPF colour.
:param Symmetry symmetry: the symmetry operator to use.
:return tuple: a tuple containing the RGB values.
"""
for sym in syms:
Osym = np.dot(sym, orientation_matrix)
Vc = np.dot(Osym, axis)
if Vc[2] < 0:
Vc *= -1. # using the upward direction
uvw = np.array([Vc[2] - Vc[1], Vc[1] - Vc[0], Vc[0]])
uvw /= np.linalg.norm(uvw)
uvw /= max(uvw)
if (uvw[0] >= 0. and uvw[0] <= 1.0) and (uvw[1] >= 0. and uvw[1] <= 1.0) and (
uvw[2] >= 0. and uvw[2] <= 1.0):
break
uvw = uvw / uvw.max()
return uvw
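# The uvw decomposition in get_field_color can be checked by hand: with the
# identity orientation and axis [0, 0, 1], Vc = [0, 0, 1] and the barycentric
# weights on the 001/011/111 corners reduce to [1, 0, 0], i.e. pure red.

```python
import numpy as np

Vc = np.array([0.0, 0.0, 1.0])  # [001] seen from the crystal frame (identity orientation)
uvw = np.array([Vc[2] - Vc[1], Vc[1] - Vc[0], Vc[0]])
uvw /= np.linalg.norm(uvw)
uvw /= uvw.max()
```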
class Symmetry(enum.Enum):
"""
Class to describe crystal symmetry defined by its Laue class symbol.
# Laue Groups
#group 1 -- triclinic: '-1'
#group 2 -- monoclinic: '2/m'
#group 3 -- orthorhombic: 'mmm'
#group 4 -- tetragonal: '4/m'
#group 5 -- tetragonal: '4/mmm'
#group 6 -- trigonal: '-3'
#group 7 -- trigonal: '-3m'
#group 8 -- hexagonal: '6/m'
#group 9 -- hexagonal: '6/mmm'
#group 10 -- cubic: 'm3'
#group 11 -- cubic: 'm3m'
"""
cubic = 'm3m'
hexagonal = '6/mmm'
orthorhombic = 'mmm'
tetragonal = '4/mmm'
trigonal = 'bar3m'
monoclinic = '2/m'
triclinic = 'bar1'
# operation_rotation = None
def symmetry_operators(self, use_miller_bravais=False):
"""Define the equivalent crystal symmetries.
These come from Randle & Engler, 2000. In the cubic
crystal structure, for instance, there are 24 equivalent cube orientations.
:returns array: A numpy array of shape (n, 3, 3) where n is the \
number of symmetries of the given crystal structure.
"""
if self is Symmetry.cubic:
# m-3 has 24 symmetry operators, m-3m has 48 (adding the improper ones);
# only the 24 proper rotations are stored here
sym = np.zeros((24, 3, 3), dtype=float)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[0., 0., -1.], [0., -1., 0.], [-1., 0., 0.]])
sym[2] = np.array([[0., 0., -1.], [0., 1., 0.], [1., 0., 0.]])
sym[3] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
sym[4] = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
sym[5] = np.array([[1., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
sym[6] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
sym[7] = np.array([[1., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
sym[8] = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
sym[9] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[10] = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
sym[11] = np.array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]])
sym[12] = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
sym[13] = np.array([[0., 0., -1.], [-1., 0., 0.], [0., 1., 0.]])
sym[14] = np.array([[0., -1., 0.], [0., 0., 1.], [-1., 0., 0.]])
sym[15] = np.array([[0., 1., 0.], [0., 0., -1.], [-1., 0., 0.]])
sym[16] = np.array([[0., 0., -1.], [1., 0., 0.], [0., -1., 0.]])
sym[17] = np.array([[0., 0., 1.], [-1., 0., 0.], [0., -1., 0.]])
sym[18] = np.array([[0., -1., 0.], [0., 0., -1.], [1., 0., 0.]])
sym[19] = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., -1.]])
sym[20] = np.array([[-1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
sym[21] = np.array([[0., 0., 1.], [0., -1., 0.], [1., 0., 0.]])
sym[22] = np.array([[0., -1., 0.], [-1., 0., 0.], [0., 0., -1.]])
sym[23] = np.array([[-1., 0., 0.], [0., 0., -1.], [0., -1., 0.]])
elif self is Symmetry.hexagonal:
# using the Miller-Bravais representation here
if use_miller_bravais:
sym = np.zeros((12, 4, 4), dtype=int)
sym[0] = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
sym[1] = np.array([[0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
sym[2] = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]])
sym[3] = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]])
sym[4] = np.array([[0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, -1]])
sym[5] = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, -1]])
sym[6] = np.array([[-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
sym[7] = np.array([[0, 0, -1, 0], [-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 0, 1]])
sym[8] = np.array([[0, -1, 0, 0], [0, 0, -1, 0], [-1, 0, 0, 0], [0, 0, 0, 1]])
sym[9] = np.array([[-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]])
sym[10] = np.array([[0, 0, -1, 0], [-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 0, -1]])
sym[11] = np.array([[0, -1, 0, 0], [0, 0, -1, 0], [-1, 0, 0, 0], [0, 0, 0, -1]])
else:
sym = np.zeros((12, 3, 3), dtype=float)
s60 = np.sin(60 * np.pi / 180)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[0.5, s60, 0.], [-s60, 0.5, 0.], [0., 0., 1.]])
sym[2] = np.array([[-0.5, s60, 0.], [-s60, -0.5, 0.], [0., 0., 1.]])
sym[3] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[4] = np.array([[-0.5, -s60, 0.], [s60, -0.5, 0.], [0., 0., 1.]])
sym[5] = np.array([[0.5, -s60, 0.], [s60, 0.5, 0.], [0., 0., 1.]])
sym[6] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
sym[7] = np.array([[0.5, s60, 0.], [s60, -0.5, 0.], [0., 0., -1.]])
sym[8] = np.array([[-0.5, s60, 0.], [s60, 0.5, 0.], [0., 0., -1.]])
sym[9] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
sym[10] = np.array([[-0.5, -s60, 0.], [-s60, 0.5, 0.], [0., 0., -1.]])
sym[11] = np.array([[0.5, -s60, 0.], [-s60, -0.5, 0.], [0., 0., -1.]])
elif self is Symmetry.orthorhombic:
sym = np.zeros((8, 3, 3), dtype=float)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
sym[2] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
sym[3] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[4] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
sym[5] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[6] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[7] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
elif self is Symmetry.tetragonal:
sym = np.zeros((8, 3, 3), dtype=float)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
sym[2] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[3] = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
sym[4] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
sym[5] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
sym[6] = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., -1.]])
sym[7] = np.array([[0., -1., 0.], [-1., 0., 0.], [0., 0., -1.]])
elif self is Symmetry.monoclinic:
sym = np.zeros((4, 3, 3), dtype=float)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[-1., 0., 0.], [0., 1., 0.], [0., 0., -1.]])
sym[2] = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
sym[3] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
elif self is Symmetry.triclinic:
sym = np.zeros((2, 3, 3), dtype=float)
sym[0] = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sym[1] = np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
else:
raise ValueError('warning, symmetry not supported: %s' % self)
return sym
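# The 24 cubic operators hard-coded above are exactly the proper rotations of
# the cube: signed permutation matrices with determinant +1. A quick
# independent enumeration (sanity check only, not part of the class):

```python
import numpy as np
from itertools import permutations, product

ops = []
for perm in permutations(range(3)):
    P = np.zeros((3, 3))
    for row, col in enumerate(perm):
        P[row, col] = 1.0
    for signs in product((1.0, -1.0), repeat=3):
        R = np.diag(signs) @ P
        if np.isclose(np.linalg.det(R), 1.0):  # keep proper rotations only
            ops.append(R)
```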
class Lattice:
'''
The Lattice class to create one of the 14 Bravais lattices.
This class has been partly inspired by the pymatgen
project at https://github.com/materialsproject/pymatgen
Any of the 7 lattice systems (each corresponding to one point group)
can be easily created and manipulated.
The lattice centering can be specified to form any of the 14 Bravais
lattices:
* Primitive (P): lattice points on the cell corners only (default);
* Body (I): one additional lattice point at the center of the cell;
* Face (F): one additional lattice point at the center of each of
the faces of the cell;
* Base (A, B or C): one additional lattice point at the center of
each of one pair of the cell faces.
::
a = 0.352 # FCC Nickel
l = Lattice.face_centered_cubic(a)
print(l.volume())
Additionally the point basis can be controlled to address non
Bravais lattice cells. It is set to a single atom at (0, 0, 0) by
default so that each cell is a Bravais lattice but may be changed to
something more complex to achieve HCP structure or Diamond structure
for instance.
'''
def __init__(self, matrix, centering='P', symmetry=None):
'''Create a crystal lattice (unit cell).
Create a lattice from a 3x3 matrix.
Each row in the matrix represents one lattice vector.
'''
m = np.array(matrix, dtype=np.float64).reshape((3, 3))
lengths = np.sqrt(np.sum(m ** 2, axis=1))
angles = np.zeros(3)
for i in range(3):
j = (i + 1) % 3
k = (i + 2) % 3
angles[i] = np.dot(m[j], m[k]) / (lengths[j] * lengths[k])
angles = np.arccos(angles) * 180. / np.pi
self._angles = angles
self._lengths = lengths
self._matrix = m
self._centering = centering
self._symmetry = symmetry
def __eq__(self, other):
"""Override the default Equals behavior.
The equality of two Lattice objects is based on the equality of their angles, lengths, and centering.
"""
if not isinstance(other, self.__class__):
return False
for i in range(3):
if self._angles[i] != other._angles[i]:
return False
if self._lengths[i] != other._lengths[i]:
return False
if self._centering != other._centering:
return False
if self._symmetry != other._symmetry:
return False
return True
def reciprocal_lattice(self):
'''Compute the reciprocal lattice.
The reciprocal lattice defines a crystal in terms of vectors that
are normal to a plane and whose lengths are the inverse of the
interplanar spacing. This method computes the three reciprocal
lattice vectors defined by:
.. math::
* a.a^* = 1
* b.b^* = 1
* c.c^* = 1
'''
[a, b, c] = self._matrix
V = self.volume()
astar = np.cross(b, c) / V
bstar = np.cross(c, a) / V
cstar = np.cross(a, b) / V
return [astar, bstar, cstar]
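# With the 1/d convention used above (a.a* = 1, no 2*pi factor), a primitive
# cubic cell of edge a has reciprocal vectors of length 1/a along the axes:

```python
import numpy as np

a = 0.405  # lattice parameter (e.g. FCC aluminium, nm)
m = a * np.eye(3)                            # cubic lattice matrix, rows = a, b, c
V = abs(np.dot(np.cross(m[0], m[1]), m[2]))  # cell volume = a**3
astar = np.cross(m[1], m[2]) / V             # first reciprocal vector
```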
@property
def matrix(self):
"""Returns a copy of matrix representing the Lattice."""
return np.copy(self._matrix)
def get_symmetry(self):
"""Returns the type of `Symmetry` of the Lattice."""
return self._symmetry
@staticmethod
def symmetry(crystal_structure=Symmetry.cubic, use_miller_bravais=False):
"""Define the equivalent crystal symmetries.
These come from Randle & Engler, 2000. In the cubic
crystal structure, for instance, there are 24 equivalent cube orientations.
:param crystal_structure: an instance of the `Symmetry` class describing the crystal symmetry.
:raise ValueError: if the given symmetry is not supported.
:returns array: A numpy array of shape (n, 3, 3) where n is the \
number of symmetries of the given crystal structure.
"""
return crystal_structure.symmetry_operators(use_miller_bravais=use_miller_bravais)
@staticmethod
def cubic(a):
'''
Create a cubic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter (a = b = c here)
*Returns*
A `Lattice` instance corresponding to a primitive cubic lattice.
'''
return Lattice([[a, 0.0, 0.0], [0.0, a, 0.0], [0.0, 0.0, a]], symmetry=Symmetry.cubic)
@staticmethod
def body_centered_cubic(a):
'''
Create a body centered cubic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter (a = b = c here)
*Returns*
A `Lattice` instance corresponding to a body centered cubic
lattice.
'''
return Lattice.from_parameters(a, a, a, 90, 90, 90, centering='I', symmetry=Symmetry.cubic)
@staticmethod
def face_centered_cubic(a):
'''
Create a face centered cubic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter (a = b = c here)
*Returns*
A `Lattice` instance corresponding to a face centered cubic
lattice.
'''
return Lattice.from_parameters(a, a, a, 90, 90, 90, centering='F', symmetry=Symmetry.cubic)
@staticmethod
def tetragonal(a, c):
'''
Create a tetragonal Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**c**: third lattice length parameter (b = a here)
*Returns*
A `Lattice` instance corresponding to a primitive tetragonal
lattice.
'''
return Lattice.from_parameters(a, a, c, 90, 90, 90, symmetry=Symmetry.tetragonal)
@staticmethod
def body_centered_tetragonal(a, c):
'''
Create a body centered tetragonal Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**c**: third lattice length parameter (b = a here)
*Returns*
A `Lattice` instance corresponding to a body centered tetragonal
lattice.
'''
return Lattice.from_parameters(a, a, c, 90, 90, 90, centering='I', symmetry=Symmetry.tetragonal)
@staticmethod
def orthorhombic(a, b, c):
'''
Create an orthorhombic Lattice unit cell with 3 different length
parameters a, b and c.
'''
return Lattice.from_parameters(a, b, c, 90, 90, 90, symmetry=Symmetry.orthorhombic)
@staticmethod
def base_centered_orthorhombic(a, b, c):
'''
Create a base centered orthorhombic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**b**: second lattice length parameter
**c**: third lattice length parameter
*Returns*
A `Lattice` instance corresponding to a base centered orthorhombic
lattice.
'''
return Lattice.from_parameters(a, b, c, 90, 90, 90, centering='C', symmetry=Symmetry.orthorhombic)
@staticmethod
def body_centered_orthorhombic(a, b, c):
'''
Create a body centered orthorhombic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**b**: second lattice length parameter
**c**: third lattice length parameter
*Returns*
A `Lattice` instance corresponding to a body centered orthorhombic
lattice.
'''
return Lattice.from_parameters(a, b, c, 90, 90, 90, centering='I', symmetry=Symmetry.orthorhombic)
@staticmethod
def face_centered_orthorhombic(a, b, c):
'''
Create a face centered orthorhombic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**b**: second lattice length parameter
**c**: third lattice length parameter
*Returns*
A `Lattice` instance corresponding to a face centered orthorhombic
lattice.
'''
return Lattice.from_parameters(a, b, c, 90, 90, 90, centering='F', symmetry=Symmetry.orthorhombic)
@staticmethod
def hexagonal(a, c):
'''
Create a hexagonal Lattice unit cell with length parameters a and c.
'''
return Lattice.from_parameters(a, a, c, 90, 90, 120, symmetry=Symmetry.hexagonal)
@staticmethod
def rhombohedral(a, alpha):
'''
Create a rhombohedral Lattice unit cell with one length
parameter a and the angle alpha.
'''
return Lattice.from_parameters(a, a, a, alpha, alpha, alpha, symmetry=Symmetry.trigonal)
@staticmethod
def monoclinic(a, b, c, alpha):
'''
Create a monoclinic Lattice unit cell with 3 different length
parameters a, b and c. The cell angle is given by alpha.
The lattice centering is primitive, i.e. 'P'.
'''
return Lattice.from_parameters(a, b, c, alpha, 90, 90, symmetry=Symmetry.monoclinic)
@staticmethod
def base_centered_monoclinic(a, b, c, alpha):
'''
Create a base centered monoclinic Lattice unit cell.
*Parameters*
**a**: first lattice length parameter
**b**: second lattice length parameter
**c**: third lattice length parameter
**alpha**: first lattice angle parameter
*Returns*
A `Lattice` instance corresponding to a base centered monoclinic
lattice.
'''
return Lattice.from_parameters(a, b, c, alpha, 90, 90, centering='C', symmetry=Symmetry.monoclinic)
@staticmethod
def triclinic(a, b, c, alpha, beta, gamma):
'''
Create a triclinic Lattice unit cell with 3 different length
parameters a, b, c and three different cell angles alpha, beta
and gamma.
..note::
This method is here for the sake of completeness since one can
create the triclinic cell directly using the `from_parameters`
method.
'''
return Lattice.from_parameters(a, b, c, alpha, beta, gamma, symmetry=Symmetry.triclinic)
@staticmethod
def from_parameters(a, b, c, alpha, beta, gamma, x_aligned_with_a=False, centering='P', symmetry=Symmetry.triclinic):
"""
Create a Lattice using unit cell lengths and angles (in degrees).
The lattice centering can also be specified (among 'P', 'I', 'F',
'A', 'B' or 'C').
:param float a: first lattice length parameter.
:param float b: second lattice length parameter.
:param float c: third lattice length parameter.
:param float alpha: first lattice angle parameter.
:param float beta: second lattice angle parameter.
:param float gamma: third lattice angle parameter.
:param bool x_aligned_with_a: flag to control the convention used to define the Cartesian frame.
:param str centering: lattice centering ('P' by default) passed to the `Lattice` class.
:param symmetry: a `Symmetry` instance to be passed to the lattice.
:return: A `Lattice` instance with the specified lattice parameters and centering.
"""
alpha_r = np.radians(alpha)
beta_r = np.radians(beta)
gamma_r = np.radians(gamma)
if x_aligned_with_a: # first lattice vector (a) is aligned with X
vector_a = a * np.array([1, 0, 0])
vector_b = b * np.array([np.cos(gamma_r), np.sin(gamma_r), 0])
c1 = c * np.cos(beta_r)
c2 = c * (np.cos(alpha_r) - np.cos(gamma_r) * np.cos(beta_r)) / np.sin(gamma_r)
vector_c = np.array([c1, c2, np.sqrt(c ** 2 - c1 ** 2 - c2 ** 2)])
else: # third lattice vector (c) is aligned with Z
cos_gamma_star = (np.cos(alpha_r) * np.cos(beta_r) - np.cos(gamma_r)) / (np.sin(alpha_r) * np.sin(beta_r))
sin_gamma_star = np.sqrt(1 - cos_gamma_star ** 2)
vector_a = [a * np.sin(beta_r), 0.0, a * np.cos(beta_r)]
vector_b = [-b * np.sin(alpha_r) * cos_gamma_star, b * np.sin(alpha_r) * sin_gamma_star, b * np.cos(alpha_r)]
vector_c = [0.0, 0.0, float(c)]
return Lattice([vector_a, vector_b, vector_c], centering=centering, symmetry=symmetry)
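# Sanity check of the c-aligned-with-Z branch of from_parameters: for a
# 90/90/90 cell the three lattice vectors reduce to the Cartesian axes scaled
# by a, b and c (up to floating point noise in cos(90 deg)).

```python
import numpy as np

a, b, c = 0.2, 0.3, 0.4
alpha_r = beta_r = gamma_r = np.radians(90.0)
cos_gamma_star = (np.cos(alpha_r) * np.cos(beta_r) - np.cos(gamma_r)) / (np.sin(alpha_r) * np.sin(beta_r))
sin_gamma_star = np.sqrt(1 - cos_gamma_star ** 2)
vector_a = np.array([a * np.sin(beta_r), 0.0, a * np.cos(beta_r)])
vector_b = np.array([-b * np.sin(alpha_r) * cos_gamma_star, b * np.sin(alpha_r) * sin_gamma_star, b * np.cos(alpha_r)])
vector_c = np.array([0.0, 0.0, c])
```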
def volume(self):
"""Compute the volume of the unit cell."""
m = self._matrix
return abs(np.dot(np.cross(m[0], m[1]), m[2]))
def get_hkl_family(self, hkl):
"""Get a list of the hkl planes composing the given family for
this crystal lattice.
*Parameters*
**hkl**: miller indices of the requested family
*Returns*
A list of the hkl planes in the given family.
"""
planes = HklPlane.get_family(hkl, lattice=self, crystal_structure=self._symmetry)
return planes
class HklObject:
def __init__(self, h, k, l, lattice=None):
'''Create a new hkl object with the given Miller indices and
crystal lattice.
'''
if lattice is None:
lattice = Lattice.cubic(1.0)
self._lattice = lattice
self._h = h
self._k = k
self._l = l
@property
def lattice(self):
return self._lattice
def set_lattice(self, lattice):
"""Assign a new `Lattice` to this instance.
:param lattice: the new crystal lattice.
"""
self._lattice = lattice
@property
def h(self):
return self._h
@property
def k(self):
return self._k
@property
def l(self):
return self._l
def miller_indices(self):
'''
Returns an immutable tuple of the plane Miller indices.
'''
return (self._h, self._k, self._l)
class HklDirection(HklObject):
def direction(self):
'''Returns a normalized vector, expressed in the cartesian
coordinate system, corresponding to this crystallographic direction.
'''
(h, k, l) = self.miller_indices()
M = self._lattice.matrix.T # the columns of M are the a, b, c vector in the cartesian coordinate system
l_vect = M.dot(np.array([h, k, l]))
return l_vect / np.linalg.norm(l_vect)
def angle_with_direction(self, hkl):
'''Computes the angle between this crystallographic direction and
the given direction (in radian).'''
return np.arccos(np.dot(self.direction(), hkl.direction()))
@staticmethod
def angle_between_directions(hkl1, hkl2, lattice=None):
'''Computes the angle between two crystallographic directions (in radian).
:param tuple hkl1: The triplet of the miller indices of the first direction.
:param tuple hkl2: The triplet of the miller indices of the second direction.
:param Lattice lattice: The crystal lattice, will default to cubic if not specified.
:returns float: The angle in radian.
'''
d1 = HklDirection(*hkl1, lattice=lattice)
d2 = HklDirection(*hkl2, lattice=lattice)
return d1.angle_with_direction(d2)
@staticmethod
def three_to_four_indices(u, v, w):
"""Convert from Miller indices to Miller-Bravais indices. this is used for hexagonal crystal lattice."""
return (2 * u - v) / 3., (2 * v - u) / 3., -(u + v) / 3., w
@staticmethod
def four_to_three_indices(U, V, T, W):
"""Convert from Miller-Bravais indices to Miller indices. this is used for hexagonal crystal lattice."""
u, v, w = U - T, V - T, W
gcd = functools.reduce(math.gcd, (u, v, w))
return u / gcd, v / gcd, w / gcd
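# Round trip for the direction conversions above: [1, 1, 0] maps to the
# Miller-Bravais quadruple (1/3)[1, 1, -2, 0]; clearing the thirds and
# converting back recovers [1, 1, 0].

```python
import functools
import math

u, v, w = 1, 1, 0
U, V, T, W = (2 * u - v) / 3., (2 * v - u) / 3., -(u + v) / 3., w
# clear the common factor of 1/3 to get the smallest integer quadruple
Ui, Vi, Ti, Wi = (round(3 * x) for x in (U, V, T, W))
# four -> three indices, reduced by the gcd
u2, v2, w2 = Ui - Ti, Vi - Ti, Wi
g = functools.reduce(math.gcd, (u2, v2, w2))
u2, v2, w2 = u2 // g, v2 // g, w2 // g
```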
@staticmethod
def angle_between_4indices_directions(hkil1, hkil2, ac):
"""Computes the angle between two crystallographic directions in a hexagonal lattice.
The solution was derived by F. Frank in:
On Miller - Bravais indices and four dimensional vectors. Acta Cryst. 18, 862-866 (1965)
:param tuple hkil1: The quartet of the indices of the first direction.
:param tuple hkil2: The quartet of the indices of the second direction.
:param tuple ac: the lattice parameters of the hexagonal structure in the form (a, c).
:returns float: The angle in radian.
"""
h1, k1, i1, l1 = hkil1
h2, k2, i2, l2 = hkil2
a, c = ac
lambda_square = 2. / 3 * (c / a) ** 2
value = (h1 * h2 + k1 * k2 + i1 * i2 + lambda_square * l1 * l2) / \
(np.sqrt(h1 ** 2 + k1 ** 2 + i1 ** 2 + lambda_square * l1 ** 2) *
np.sqrt(h2 ** 2 + k2 ** 2 + i2 ** 2 + lambda_square * l2 ** 2))
return np.arccos(value)
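# Frank's formula above, applied to two basal-plane directions: the a-axis
# direction [2,-1,-1,0] and [0,1,-1,0] are perpendicular in any hexagonal
# cell, since both l indices are zero and the four-index dot product vanishes.

```python
import numpy as np

h1, k1, i1, l1 = 2, -1, -1, 0
h2, k2, i2, l2 = 0, 1, -1, 0
a, c = 0.3209, 0.5211  # magnesium lattice parameters (nm), for instance
lambda_square = 2. / 3 * (c / a) ** 2
value = (h1 * h2 + k1 * k2 + i1 * i2 + lambda_square * l1 * l2) / (
    np.sqrt(h1 ** 2 + k1 ** 2 + i1 ** 2 + lambda_square * l1 ** 2)
    * np.sqrt(h2 ** 2 + k2 ** 2 + i2 ** 2 + lambda_square * l2 ** 2))
angle = np.arccos(value)
```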
class HklPlane(HklObject):
'''
This class defines crystallographic planes using Miller indices.
A plane can be created by specifying its Miller indices and the
crystal lattice (default is cubic with a lattice parameter of 1.0)
::
a = 0.405 # FCC Aluminium
l = Lattice.cubic(a)
p = HklPlane(1, 1, 1, lattice=l)
print(p)
print(p.scattering_vector())
print(p.interplanar_spacing())
.. note::
Miller indices are defined in terms of the inverse of the intercept
of the plane on the three crystal axes a, b, and c.
'''
def __eq__(self, other):
"""Override the default Equals behavior.
The equality of two HklObjects is based on the equality of their miller indices.
"""
if isinstance(other, self.__class__):
return self._h == other._h and self._k == other._k and \
self._l == other._l and self._lattice == other._lattice
return False
def __ne__(self, other):
"""Define a non-equality test"""
return not self.__eq__(other)
def normal(self):
'''Returns the unit vector normal to the plane.
We use the reciprocal lattice to compute the normal to the plane
and return a normalised vector.
'''
n = self.scattering_vector()
return n / np.linalg.norm(n)
def scattering_vector(self):
'''Calculate the scattering vector of this `HklPlane`.
The scattering vector (or reciprocal lattice vector) is normal to
this `HklPlane` and its length is equal to the inverse of the
interplanar spacing. In the cartesian coordinate system of the
crystal, it is given by:
..math
G_c = h.a^* + k.b^* + l.c^*
:returns: a numpy vector expressed in the cartesian coordinate system of the crystal.
'''
[astar, bstar, cstar] = self._lattice.reciprocal_lattice()
(h, k, l) = self.miller_indices()
# express (h, k, l) in the cartesian crystal CS
Gc = h * astar + k * bstar + l * cstar
return Gc
def friedel_pair(self):
"""Create the Friedel pair of the HklPlane."""
(h, k, l) = self.miller_indices()
pair = HklPlane(-h, -k, -l, self._lattice)
return pair
def interplanar_spacing(self):
r'''
Compute the interplanar spacing.
For cubic lattice, it is:
.. math::
d = a / \sqrt{h^2 + k^2 + l^2}
The general formula comes from 'Introduction to Crystallography'
p. 68 by Donald E. Sands.
'''
(a, b, c) = self._lattice._lengths
(h, k, l) = self.miller_indices()
(alpha, beta, gamma) = np.radians(self._lattice._angles)
# d = a / np.sqrt(h**2 + k**2 + l**2) # for cubic structure only
d = self._lattice.volume() / np.sqrt(h ** 2 * b ** 2 * c ** 2 * np.sin(alpha) ** 2 + \
k ** 2 * a ** 2 * c ** 2 * np.sin(
beta) ** 2 + l ** 2 * a ** 2 * b ** 2 * np.sin(gamma) ** 2 + \
2 * h * l * a * b ** 2 * c * (
np.cos(alpha) * np.cos(gamma) - np.cos(beta)) + \
2 * h * k * a * b * c ** 2 * (
np.cos(alpha) * np.cos(beta) - np.cos(gamma)) + \
2 * k * l * a ** 2 * b * c * (
np.cos(beta) * np.cos(gamma) - np.cos(alpha)))
return d
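# For a cubic cell (a = b = c, all angles 90 degrees) all the cosine cross
# terms of the general formula above vanish and the interplanar spacing
# reduces to the familiar d = a / sqrt(h**2 + k**2 + l**2):

```python
import numpy as np

a = b = c = 0.405  # FCC aluminium (nm)
h, k, l = 1, 1, 1
alpha = beta = gamma = np.radians(90.0)
volume = a * b * c
d = volume / np.sqrt(
    h ** 2 * b ** 2 * c ** 2 * np.sin(alpha) ** 2
    + k ** 2 * a ** 2 * c ** 2 * np.sin(beta) ** 2
    + l ** 2 * a ** 2 * b ** 2 * np.sin(gamma) ** 2
    + 2 * h * l * a * b ** 2 * c * (np.cos(alpha) * np.cos(gamma) - np.cos(beta))
    + 2 * h * k * a * b * c ** 2 * (np.cos(alpha) * np.cos(beta) - np.cos(gamma))
    + 2 * k * l * a ** 2 * b * c * (np.cos(beta) * np.cos(gamma) - np.cos(alpha)))
```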
@staticmethod
def four_to_three_indices(U, V, T, W):
"""Convert four to three index representation of a slip plane (used for hexagonal crystal lattice)."""
return U, V, W
@staticmethod
def three_to_four_indices(u, v, w):
"""Convert three to four index representation of a slip plane (used for hexagonal crystal lattice)."""
return u, v, -(u + v), w
def is_in_list(self, hkl_planes, friedel_pair=False):
"""Check if the hkl plane is in the given list.
By default this relies on the built-in 'in' test of the list type, which in turn calls the __eq__ method.
This means it will return True if a plane with the exact same miller indices (and same lattice) is in the list.
Turning on the friedel_pair flag will also test the Friedel pair (-h, -k, -l) and return True if it is
in the list.
For instance (0,0,1) and (0,0,-1) are in general considered as the same lattice plane.
"""
if not friedel_pair:
return self in hkl_planes
else:
return self in hkl_planes or self.friedel_pair() in hkl_planes
@staticmethod
def is_same_family(hkl1, hkl2, crystal_structure=Symmetry.cubic):
"""Static mtd to test if both lattice planes belongs to same family.
A family {hkl} is composed by all planes that are equivalent to (hkl)
using the symmetry of the lattice. The lattice assoiated with `hkl2`
is not taken into account here.
"""
return hkl1.is_in_list(HklPlane.get_family(hkl2.miller_indices(), lattice=hkl1._lattice,
crystal_structure=crystal_structure))
@staticmethod
def get_family(hkl, lattice=None, include_friedel_pairs=False, crystal_structure=Symmetry.cubic):
"""Static method to obtain a list of the different crystallographic
planes in a particular family.
:param str hkl: a sequence of 3 (4 for hexagonal) numbers corresponding to the miller indices.
:param Lattice lattice: The reference crystal lattice (default None).
:param bool include_friedel_pairs: Flag to include the Friedel pairs in the list (False by default).
:param crystal_structure: A `Symmetry` instance describing the crystal structure (cubic by default).
:raise ValueError: if the given string does not correspond to a supported family.
:returns list: a list of the :py:class:`~HklPlane` in the given hkl family.
.. note::
The method accounts for the lattice symmetry to create a list of equivalent lattice planes from the point
of view of the point group symmetry. A flag can be used to include the Friedel pairs or not. If not, the
family is constructed using the Miller indices with the fewest minus signs. For instance (1,0,0)
will be in the list but not (-1,0,0).
"""
if not (len(hkl) == 3 or (len(hkl) == 4 and crystal_structure == Symmetry.hexagonal)):
raise ValueError('family not supported: {}'.format(hkl))
# handle hexagonal case
if len(hkl) == 4:
h = int(hkl[0])
k = int(hkl[1])
i = int(hkl[2])
l = int(hkl[3])
(h, k, l) = HklPlane.four_to_three_indices(h, k, i, l)  # simply drops the redundant index i
else: # 3 indices
h = int(hkl[0])
k = int(hkl[1])
l = int(hkl[2])
if crystal_structure == Symmetry.hexagonal:
i = -(h + k)
family = []
# construct lattice plane family from the symmetry operators
if crystal_structure == Symmetry.hexagonal:
syms = Lattice.symmetry(crystal_structure, use_miller_bravais=True)
else:
syms = Lattice.symmetry(crystal_structure)
for sym in syms:
if crystal_structure == Symmetry.hexagonal:
n_sym = np.dot(sym, np.array([h, k, i, l]))
n_sym = HklPlane.four_to_three_indices(*n_sym)
else: # 3 indices
n_sym = np.dot(sym, np.array([h, k, l]))
hkl_sym = HklPlane(*n_sym, lattice=lattice)
if not hkl_sym.is_in_list(family, friedel_pair=True):
family.append(hkl_sym)
if include_friedel_pairs:
hkl_sym = HklPlane(-n_sym[0], -n_sym[1], -n_sym[2], lattice=lattice)
if not hkl_sym.is_in_list(family, friedel_pair=False):
family.append(hkl_sym)
if not include_friedel_pairs:
# for each hkl plane, choose between (h, k, l) and (-h, -k, -l) to get the fewest minus signs
for i in range(len(family)):
hkl = family[i]
(h, k, l) = hkl.miller_indices()
if np.where(np.array([h, k, l]) < 0)[0].size > 0 and np.where(np.array([h, k, l]) <= 0)[0].size >= 2:
family[i] = hkl.friedel_pair()
#print('replacing plane (%d%d%d) by its pair: (%d%d%d)' % (h, k, l, -h, -k, -l))
return family
def multiplicity(self, symmetry=Symmetry.cubic):
"""Compute the general multiplicity for this `HklPlane` and the given `Symmetry`.
:param Symmetry symmetry: The crystal symmetry to take into account.
:return: the number of equivalent planes in the family.
"""
return len(HklPlane.get_family(self.miller_indices(), include_friedel_pairs=True, crystal_structure=symmetry))
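For the cubic case specifically, the family (and hence the multiplicity) can also be enumerated without explicit symmetry matrices, since the cubic point group acts by index permutations and sign changes; a hedged standalone sketch (`cubic_family` is an illustrative helper, not part of this module):

```python
from itertools import permutations, product

def cubic_family(h, k, l, include_friedel_pairs=True):
    """All cubic-equivalent planes of (h, k, l): index permutations and sign changes."""
    planes = set()
    for perm in permutations((h, k, l)):
        for signs in product((1, -1), repeat=3):
            planes.add(tuple(s * p for s, p in zip(signs, perm)))
    if not include_friedel_pairs:
        # keep only one plane of each (h, k, l) / (-h, -k, -l) pair
        planes = {p for p in planes if p >= tuple(-c for c in p)}
    return planes

print(len(cubic_family(1, 0, 0)))  # 6
print(len(cubic_family(1, 1, 1)))  # 8
print(len(cubic_family(1, 1, 0)))  # 12
```

These counts match the usual cubic multiplicities: {100} has 6 poles, {111} has 8, {110} has 12.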
class PoleFigure:
"""A class to handle pole figures.
A pole figure is a popular tool to plot multiple crystal orientations,
either in the sample coordinate system (direct pole figure) or
alternatively plotting a particular direction in the crystal
coordinate system (inverse pole figure).
"""
def __init__(self, lattice=None, axis='Z', hkl='111', proj='stereo'):
"""
Create an empty PoleFigure object.
:param lattice: the crystal :py:class:`~pymicro.crystal.lattice.Lattice`.
:param str axis: the pole figure axis ('Z' by default), vertical axis in the direct pole figure and direction plotted on the inverse pole figure.
.. warning::
Any crystal structure is now supported (you have to set the proper
crystal lattice) but it has only really been tested for cubic.
:param str hkl: slip plane family ('111' by default)
:param str proj: projection type, can be either 'stereo' (default) or 'flat'
"""
self.proj = proj
self.axis = axis
if self.axis == 'Z':
self.axis_crystal = np.array([0, 0, 1])
elif self.axis == 'Y':
self.axis_crystal = np.array([0, 1, 0])
else:
self.axis_crystal = np.array([1, 0, 0])
if lattice:
self.lattice = lattice
else:
self.lattice = Lattice.cubic(1.0)
self.family = None
self.poles = []
self.set_hkl_poles(hkl)
self.mksize = 50
self.x = np.array([1., 0., 0.])
self.y = np.array([0., 1., 0.])
self.z = np.array([0., 0., 1.])
def set_hkl_poles(self, hkl='111'):
"""Set the pole (aka hkl planes) list to use in the `PoleFigure`.
The list of poles can be given by the family type or directly by a list of `HklPlane` objects.
:param str/list hkl: slip plane family ('111' by default)
"""
if type(hkl) is str:
self.family = hkl # keep a record of this
hkl_planes = self.lattice.get_hkl_family(self.family)
elif type(hkl) is list:
self.family = None
hkl_planes = hkl
self.poles = hkl_planes #[p.normal() for p in hkl_planes]
def plot_line_between_crystal_dir(self, c1, c2, ax=None, steps=25, col='k'):
'''Plot a curve between two crystal directions.
The curve is actually composed of several straight line segments
drawn from direction 1 to direction 2.
:param c1: vector describing crystal direction 1
:param c2: vector describing crystal direction 2
:param ax: a reference to a pyplot ax to draw the line
:param int steps: number of straight line segments composing the curve (25 by default)
:param col: line color (black by default)
'''
path = np.zeros((steps, 2), dtype=float)
for j, i in enumerate(np.linspace(0., 1., steps)):
ci = i * c1 + (1 - i) * c2
ci /= np.linalg.norm(ci)
if self.proj == 'stereo':
ci += self.z
ci /= ci[2]
path[j, 0] = ci[0]
path[j, 1] = ci[1]
ax.plot(path[:, 0], path[:, 1], color=col, markersize=self.mksize, linewidth=0.5, zorder=0)
plt.axis("off")
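The projection step above (add the z unit vector, then divide by the z component) is the standard stereographic projection from the south pole; a small self-contained sketch of the same arithmetic (`stereo_project` is an illustrative name):

```python
import math

def stereo_project(v):
    """Stereographic projection of a 3D direction onto the equatorial plane:
    normalize, flip to the upper hemisphere, then map (x, y, z) -> (x, y) / (1 + z)."""
    x, y, z = v
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    if z < 0:
        x, y, z = -x, -y, -z  # plot the pole pointing up
    return x / (1 + z), y / (1 + z)

print(stereo_project((0, 0, 1)))  # (0.0, 0.0) - the center of the pole figure
print(stereo_project((1, 0, 0)))  # (1.0, 0.0) - on the unit circle
```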
def plot_pf_background(self, ax, labels=True):
'''Plot the background of the pole figure.
:param ax: a reference to a pyplot ax to draw the background.
:param bool labels: add labels to the axes (True by default).
'''
an = np.linspace(0, 2 * np.pi, 100)
ax.plot(np.cos(an), np.sin(an), 'k-', zorder=0)
ax.plot([-1, 1], [0, 0], 'k-', zorder=0)
ax.plot([0, 0], [-1, 1], 'k-', zorder=0)
axe_labels = ['X', 'Y', 'Z']
if self.axis == 'Z':
(h, v, _) = (0, 1, 2)
elif self.axis == 'Y':
(h, v, _) = (0, 2, 1)
else:
(h, v, _) = (1, 2, 0)
if labels:
ax.annotate(axe_labels[h], (1.01, 0.0), xycoords='data', fontsize=8,
horizontalalignment='left', verticalalignment='center')
ax.annotate(axe_labels[v], (0.0, 1.01), xycoords='data', fontsize=8,
horizontalalignment='center', verticalalignment='bottom')
def sst_symmetry(self, v, symms):
"""Transform a given vector according to the lattice symmetry associated
with the pole figure.
This function transforms a vector so that it lies in the smallest
symmetry equivalent zone.
:param v: the vector to transform.
:param symms: the symmetry operators to apply (used in the hexagonal case).
:return: the transformed vector.
"""
# get the symmetry from the lattice associated with the pole figure
symmetry = self.lattice._symmetry
if symmetry == Symmetry.cubic:
return PoleFigure.sst_symmetry_cubic(v)
elif symmetry == Symmetry.hexagonal:
#syms = symmetry.symmetry_operators()
# syms = np.concatenate((symms, -symms))
syms = np.unique(symms, axis=0)
for i in range(len(syms)):
sym = syms[i]
v_sym = np.dot(sym, v)
# look at vectors pointing up
if v_sym[2] < 0:
v_sym *= -1
# now evaluate if projection is in the sst
if v_sym[1] < 0 or v_sym[0] < 0:
continue
elif v_sym[1] / v_sym[0] > np.tan(np.pi / 6):
continue
else:
break
return v_sym
else:
print('unsupported symmetry: %s' % symmetry)
return None
@staticmethod
def sst_symmetry_cubic(z_rot):
'''Transform a given vector according to the cubic symmetry.
This function transforms a vector so that it lies in the unit SST triangle.
:param z_rot: vector to transform.
:return: the transformed vector.
'''
if z_rot[0] < 0: z_rot[0] = -z_rot[0]
if z_rot[1] < 0: z_rot[1] = -z_rot[1]
if z_rot[2] < 0: z_rot[2] = -z_rot[2]
if (z_rot[2] > z_rot[1]):
z_rot[1], z_rot[2] = z_rot[2], z_rot[1]
if (z_rot[1] > z_rot[0]):
z_rot[0], z_rot[1] = z_rot[1], z_rot[0]
if (z_rot[2] > z_rot[1]):
z_rot[1], z_rot[2] = z_rot[2], z_rot[1]
return np.array([z_rot[1], z_rot[2], z_rot[0]])
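The cubic reduction above amounts to sorting the absolute values of the components; a standalone sanity check (assuming only permutation/sign equivalence, i.e. the cubic point group acting on the axes) that all 48 variants of a direction reduce to one representative. Note the method above returns the sorted components in a (middle, smallest, largest) order, whereas this sketch uses (largest, middle, smallest):

```python
from itertools import permutations, product

def to_sst(v):
    """Reduce a direction into the cubic standard triangle by sorting |components|,
    returned here as (largest, middle, smallest)."""
    return tuple(sorted((abs(c) for c in v), reverse=True))

# all 48 permutation/sign variants of a direction share one SST representative
ref = to_sst((1, 2, 3))
for perm in permutations((1, 2, 3)):
    for signs in product((1, -1), repeat=3):
        assert to_sst(tuple(s * p for s, p in zip(signs, perm))) == ref
print(ref)  # (3, 2, 1)
```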
def plot_pf(self, col, orient_data, ax=None, mk='o', ann=False, ftsize=6):
"""Create the direct pole figure.
:param col: array of colors, one per orientation.
:param orient_data: iterable of orientation matrices to plot.
:param ax: a reference to a pyplot ax to draw the poles.
:param mk: marker used to plot the poles (disc by default).
:param bool ann: Annotate the pole with the coordinates of the vector
if True (False by default).
"""
self.plot_pf_background(ax)
cp_0, cp_1 = [], []
colors = []
for igr, g in enumerate(orient_data):
if np.isnan(g).all() or np.all(g==0):
continue
gt = g.transpose()
for i, hkl_plane in enumerate(self.poles):
c = hkl_plane.normal()
c_rot = gt.dot(c)
color = col[igr]
if self.axis == 'Z':
(h, v, u) = (0, 1, 2)
elif self.axis == 'Y':
(h, v, u) = (0, 2, 1)
else:
(h, v, u) = (1, 2, 0)
axis_rot = c_rot[[h, v, u]]
# the direction to plot is given by c_dir[h,v,u]
if axis_rot[2] < 0:
axis_rot *= -1 # make unit vector have z>0
if self.proj == 'flat':
cp = axis_rot
elif self.proj == 'stereo':
c = axis_rot + self.z
c /= c[2] # SP'/SP = r/z with r=1
cp = c
# cp = np.cross(c, self.z)
else:
raise ValueError('Error, unsupported projection type', self.proj)
cp_0.append(cp[0])
cp_1.append(cp[1])
colors.append(color)
# Next 3 lines are necessary in case c_dir[2]=0, as for Euler angles [45, 45, 0]
if axis_rot[2] < 0.000001:
cp_0.append(-cp[0])
cp_1.append(-cp[1])
colors.append(color)
# ax.scatter(-cp[0], -cp[1], linewidth=0, c=color, marker='o', s=axis_rot)
ax.scatter(cp_0, cp_1, c=colors, s=self.mksize, zorder=2)
ax.axis([-1.1, 1.1, -1.1, 1.1])
ax.axis('off')
ax.set_title('{%s} direct %s projection' % (self.family, self.proj), fontsize = ftsize)
def plot_sst_color(self, col, orient_data, ax=None, mk='s', \
ann=False, ftsize=6, phase = 0, symms=None):
""" Create the inverse pole figure in the unit standard triangle.
:param col: array of colors, one per orientation.
:param orient_data: iterable of orientation matrices to plot.
:param ax: a reference to a pyplot ax to draw the poles.
:param mk: marker used to plot the poles (square by default).
:param bool ann: Annotate the pole with the coordinates of the vector if True (False by default).
:param int phase: 0 for a cubic phase, 1 for a hexagonal phase (selects the SST poles).
:param symms: the symmetry operators used for the SST reduction.
"""
system = None
symmetry = self.lattice._symmetry
if phase==0:
sst_poles = [(0, 0, 1), (1, 0, 1), (1, 1, 1)]
ax.axis([-0.05, 0.45, -0.05, 0.40])
system = 'cubic'
elif phase==1:
sst_poles = [(0, 0, 1), (2, -1, 0), (1, 0, 0)]
ax.axis([-0.05, 1.05, -0.05, 0.6])
system = 'hexa'
else:
raise ValueError('unsupported phase: %s (symmetry %s)' % (phase, symmetry))
A = HklPlane(*sst_poles[0], lattice=self.lattice)
B = HklPlane(*sst_poles[1], lattice=self.lattice)
C = HklPlane(*sst_poles[2], lattice=self.lattice)
if system == 'cubic':
self.plot_line_between_crystal_dir(A.normal(), B.normal(), ax=ax, steps=int(1+(45/5)), col='k')
self.plot_line_between_crystal_dir(B.normal(), C.normal(), ax=ax, steps=int(1+(35/5)), col='k')
self.plot_line_between_crystal_dir(C.normal(), A.normal(), ax=ax, steps=int(1+(55/5)), col='k')
elif system == 'hexa':
self.plot_line_between_crystal_dir(A.normal(), B.normal(), ax=ax, steps=int(1+(90/5)), col='k')
self.plot_line_between_crystal_dir(B.normal(), C.normal(), ax=ax, steps=int(1+(30/5)), col='k')
self.plot_line_between_crystal_dir(C.normal(), A.normal(), ax=ax, steps=int(1+(90/5)), col='k')
else:
self.plot_line_between_crystal_dir(A.normal(), B.normal(), ax=ax, col='k')
self.plot_line_between_crystal_dir(B.normal(), C.normal(), ax=ax, col='k')
self.plot_line_between_crystal_dir(C.normal(), A.normal(), ax=ax, col='k')
# display the 3 crystal axes
poles = [A, B, C]
v_align = ['top', 'top', 'bottom']
for i in range(3):
hkl = poles[i]
c_dir = hkl.normal()
c = c_dir + self.z
c /= c[2] # SP'/SP = r/z with r=1
pole_str = '%d%d%d' % hkl.miller_indices()
if phase==1:
pole_str = '%d%d%d%d' % HklPlane.three_to_four_indices(*hkl.miller_indices())
ax.annotate(pole_str, (c[0], c[1] - (2 * (i < 2) - 1) * 0.01), xycoords='data',
fontsize=8, horizontalalignment='center', verticalalignment=v_align[i])
# now plot the sample axis
cp_0, cp_1 = [], []
colors = []
for igr, g in enumerate(orient_data):
if np.isnan(g).all() or np.all(g==0):
continue
# compute axis and apply SST symmetry
if self.axis == 'Z':
axis = self.z
elif self.axis == 'Y':
axis = self.y
else:
axis = self.x
axis_rot = self.sst_symmetry(g.dot(axis), symms)
color = np.round(col[igr],5)
if axis_rot[2] < 0:
axis_rot *= -1 # make unit vector have z>0
if self.proj == 'flat':
cp = axis_rot
elif self.proj == 'stereo':
c = axis_rot + self.z
c /= c[2] # SP'/SP = r/z with r=1
cp = c
# cp = np.cross(c, self.z)
else:
raise ValueError('Error, unsupported projection type', self.proj)
cp_0.append(cp[0])
cp_1.append(cp[1])
colors.append(color)
# Next 3 lines are necessary in case c_dir[2]=0, as for Euler angles [45, 45, 0]
if axis_rot[2] < 0.000001:
cp_0.append(-cp[0])
cp_1.append(-cp[1])
colors.append(color)
# ax.scatter(-cp[0], -cp[1], linewidth=0, c=color, marker='o', s=axis_rot)
ax.scatter(cp_0, cp_1, c=colors, s=self.mksize, zorder=2)
ax.set_title('%s-axis SST inverse %s projection' % (self.axis, self.proj), fontsize = ftsize)
plt.axis("off")
# =============================================================================
# Plot functions
# =============================================================================
# def rot_mat_to_euler(rot_mat):
# r = R.from_matrix(rot_mat)
# return r.as_euler('zxz')* 180/np.pi
def OrientationMatrix2Euler(g):
r"""
Compute the Euler angles from the orientation matrix.
This conversion follows the paper of Rowenhorst et al. :cite:`Rowenhorst2015`.
In particular when :math:`g_{33} = 1` within the machine precision,
there is no way to determine the values of :math:`\phi_1` and :math:`\phi_2`
(only their sum is defined). The convention is to attribute
the entire angle to :math:`\phi_1` and set :math:`\phi_2` to zero.
:param g: The 3x3 orientation matrix
:return: the three Euler angles in degrees (returned in the order phi2, Phi, phi1).
"""
eps = np.finfo('float').eps
(phi1, Phi, phi2) = (0.0, 0.0, 0.0)
# treat special case where g[2, 2] = 1
if np.abs(g[2, 2]) >= 1 - eps:
if g[2, 2] > 0.0:
phi1 = np.arctan2(g[0][1], g[0][0])
else:
phi1 = -np.arctan2(-g[0][1], g[0][0])
Phi = np.pi
else:
Phi = np.arccos(g[2][2])
zeta = 1.0 / np.sqrt(1.0 - g[2][2] ** 2)
phi1 = np.arctan2(g[2][0] * zeta, -g[2][1] * zeta)
phi2 = np.arctan2(g[0][2] * zeta, g[1][2] * zeta)
# ensure angles are in the range [0, 2*pi]
if phi1 < 0.0:
phi1 += 2 * np.pi
if Phi < 0.0:
Phi += 2 * np.pi
if phi2 < 0.0:
phi2 += 2 * np.pi
return np.degrees([phi2, Phi, phi1])
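A hedged sketch of the inverse mapping (the standard Bunge ZXZ orientation matrix per the Rowenhorst et al. convention cited above), useful for sanity-checking the conversion; `euler_to_matrix` is an illustrative helper, not part of this module. Its g[2][0], g[2][1], g[0][2] and g[1][2] entries are consistent with the arctan2 terms used above:

```python
import math

def euler_to_matrix(phi1, Phi, phi2):
    """Bunge ZXZ Euler angles (degrees) -> 3x3 orientation matrix g."""
    p1, P, p2 = (math.radians(a) for a in (phi1, Phi, phi2))
    c1, s1 = math.cos(p1), math.sin(p1)
    c, s = math.cos(P), math.sin(P)
    c2, s2 = math.cos(p2), math.sin(p2)
    return [[c1 * c2 - s1 * s2 * c, s1 * c2 + c1 * s2 * c, s2 * s],
            [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
            [s1 * s, -c1 * s, c]]

g = euler_to_matrix(30.0, 40.0, 50.0)
# g is a rotation matrix: unit rows, and g[2][2] = cos(Phi)
assert abs(sum(x * x for x in g[0]) - 1.0) < 1e-12
assert abs(g[2][2] - math.cos(math.radians(40.0))) < 1e-12
```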
def simple_plots(lim_x, lim_y, strain_matrix, strain_matrixs, col, colx, coly,
match_rate, mat_global, spots_len, iR_pix, fR_pix,
model_direc, material_, material1_, match_rate_threshold=5, bins=30):
"""Plot, for each UB matrix solution, maps of the IPF color, material index,
matching rate, number of spots and pixel residues, and save them as png files in model_direc."""
if material_ == material1_:
matid = 0
for index in range(len(strain_matrix)):
nan_index = np.where(match_rate[index][0] <= match_rate_threshold)[0]
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan,np.nan,np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
mr_plot = np.copy(match_rate[index][0])
mr_plot[nan_index,:] = np.nan
mr_plot = mr_plot.reshape((lim_x, lim_y))
mat_glob = np.copy(mat_global[index][0])
mat_glob[nan_index,:] = np.nan
mat_glob = mat_glob.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"Material Index", loc='center', fontsize=8)
im = axs[1].imshow(mat_glob, origin='lower', vmin=0, vmax=1)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Matching rate", loc='center', fontsize=8)
im = axs[2].imshow(mr_plot, origin='lower', cmap=plt.cm.jet, vmin=0, vmax=100)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ "//figure_global_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
spots_len_plot = np.copy(spots_len[index][0])
spots_len_plot[nan_index,:] = np.nan
spots_len_plot = spots_len_plot.reshape((lim_x, lim_y))
iR_pix_plot = np.copy(iR_pix[index][0])
iR_pix_plot[nan_index,:] = np.nan
iR_pix_plot = iR_pix_plot.reshape((lim_x, lim_y))
fR_pix_plot = np.copy(fR_pix[index][0])
fR_pix_plot[nan_index,:] = np.nan
fR_pix_plot = fR_pix_plot.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"Number of spots detected", loc='center', fontsize=8)
im = axs[0].imshow(spots_len_plot, origin='lower', cmap=plt.cm.jet)
axs[0].set_xticks([])
axs[0].set_yticks([])
divider = make_axes_locatable(axs[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[1].set_title(r"Initial pixel residues", loc='center', fontsize=8)
im = axs[1].imshow(iR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Final pixel residues", loc='center', fontsize=8)
im = axs[2].imshow(fR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+'//figure_mr_ir_fr_UB'+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
else:
for matid in range(2):
for index in range(len(strain_matrix)):
nan_index1 = np.where(match_rate[index][0] <= match_rate_threshold)[0]
mat_id_index = np.where(mat_global[index][0] != matid+1)[0]
nan_index = np.hstack((mat_id_index,nan_index1))
nan_index = np.unique(nan_index)
try:
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan,np.nan,np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
mr_plot = np.copy(match_rate[index][0])
mr_plot[nan_index,:] = np.nan
mr_plot = mr_plot.reshape((lim_x, lim_y))
mat_glob = np.copy(mat_global[index][0])
mat_glob[nan_index,:] = np.nan
mat_glob = mat_glob.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"Material Index", loc='center', fontsize=8)
im = axs[1].imshow(mat_glob, origin='lower', vmin=0, vmax=2)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Matching rate", loc='center', fontsize=8)
im = axs[2].imshow(mr_plot, origin='lower', cmap=plt.cm.jet, vmin=0, vmax=100)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ "//figure_global_mat"+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in plots")
spots_len_plot = np.copy(spots_len[index][0])
spots_len_plot[nan_index,:] = np.nan
spots_len_plot = spots_len_plot.reshape((lim_x, lim_y))
iR_pix_plot = np.copy(iR_pix[index][0])
iR_pix_plot[nan_index,:] = np.nan
iR_pix_plot = iR_pix_plot.reshape((lim_x, lim_y))
fR_pix_plot = np.copy(fR_pix[index][0])
fR_pix_plot[nan_index,:] = np.nan
fR_pix_plot = fR_pix_plot.reshape((lim_x, lim_y))
try:
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"Number of spots detected", loc='center', fontsize=8)
im = axs[0].imshow(spots_len_plot, origin='lower', cmap=plt.cm.jet)
axs[0].set_xticks([])
axs[0].set_yticks([])
divider = make_axes_locatable(axs[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[1].set_title(r"Initial pixel residues", loc='center', fontsize=8)
im = axs[1].imshow(iR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Final pixel residues", loc='center', fontsize=8)
im = axs[2].imshow(fR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+'//figure_mr_ir_fr_mat'+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in plots")
def global_plots(lim_x, lim_y, rotation_matrix1, strain_matrix, strain_matrixs, col, colx, coly,
match_rate, mat_global, spots_len, iR_pix, fR_pix,
model_direc, material_, material1_, match_rate_threshold=5, bins=30, constantlength="a"):
"""Plot global histograms of the number of spots, matching rate, pixel residues and
strain components (crystal and sample reference frames), and save them as png files in model_direc."""
call_global()
if material_ == material1_:
mu_sd = []
mu_sdc = []
for index in range(len(spots_len)):
### index for nans
nan_index = np.where(match_rate[index][0] <= match_rate_threshold)[0]
if index == 0:
spots_len_plot = np.copy(spots_len[index][0])
mr_plot = np.copy(match_rate[index][0])
iR_pix_plot = np.copy(iR_pix[index][0])
fR_pix_plot = np.copy(fR_pix[index][0])
strain_matrix_plot = np.copy(strain_matrix[index][0])
e11c = strain_matrix_plot[:,0,0]#.reshape((lim_x, lim_y))
e22c = strain_matrix_plot[:,1,1]#.reshape((lim_x, lim_y))
e33c = strain_matrix_plot[:,2,2]#.reshape((lim_x, lim_y))
e12c = strain_matrix_plot[:,0,1]#.reshape((lim_x, lim_y))
e13c = strain_matrix_plot[:,0,2]#.reshape((lim_x, lim_y))
e23c = strain_matrix_plot[:,1,2]#.reshape((lim_x, lim_y))
strain_matrixs_plot = np.copy(strain_matrixs[index][0])
e11s = strain_matrixs_plot[:,0,0]#.reshape((lim_x, lim_y))
e22s = strain_matrixs_plot[:,1,1]#.reshape((lim_x, lim_y))
e33s = strain_matrixs_plot[:,2,2]#.reshape((lim_x, lim_y))
e12s = strain_matrixs_plot[:,0,1]#.reshape((lim_x, lim_y))
e13s = strain_matrixs_plot[:,0,2]#.reshape((lim_x, lim_y))
e23s = strain_matrixs_plot[:,1,2]#.reshape((lim_x, lim_y))
spots_len_plot[nan_index] = np.nan
mr_plot[nan_index] = np.nan
iR_pix_plot[nan_index] = np.nan
fR_pix_plot[nan_index] = np.nan
e11c[nan_index] = np.nan
e22c[nan_index] = np.nan
e33c[nan_index] = np.nan
e12c[nan_index] = np.nan
e13c[nan_index] = np.nan
e23c[nan_index] = np.nan
e11s[nan_index] = np.nan
e22s[nan_index] = np.nan
e33s[nan_index] = np.nan
e12s[nan_index] = np.nan
e13s[nan_index] = np.nan
e23s[nan_index] = np.nan
else:
temp = np.copy(spots_len[index][0])
temp[nan_index] = np.nan
spots_len_plot = np.vstack((spots_len_plot,temp))
temp = np.copy(match_rate[index][0])
temp[nan_index] = np.nan
mr_plot = np.vstack((mr_plot,temp))
temp = np.copy(iR_pix[index][0])
temp[nan_index] = np.nan
iR_pix_plot = np.vstack((iR_pix_plot,temp))
temp = np.copy(fR_pix[index][0])
temp[nan_index] = np.nan
fR_pix_plot = np.vstack((fR_pix_plot,temp))
strain_matrix_plot = np.copy(strain_matrix[index][0])
temp = np.copy(strain_matrix_plot[:,0,0])
temp[nan_index] = np.nan
e11c = np.vstack((e11c,temp))
temp = np.copy(strain_matrix_plot[:,1,1])
temp[nan_index] = np.nan
e22c = np.vstack((e22c,temp))
temp = np.copy(strain_matrix_plot[:,2,2])
temp[nan_index] = np.nan
e33c = np.vstack((e33c,temp))
temp = np.copy(strain_matrix_plot[:,0,1])
temp[nan_index] = np.nan
e12c = np.vstack((e12c,temp))
temp = np.copy(strain_matrix_plot[:,0,2])
temp[nan_index] = np.nan
e13c = np.vstack((e13c,temp))
temp = np.copy(strain_matrix_plot[:,1,2])
temp[nan_index] = np.nan
e23c = np.vstack((e23c,temp))
##
strain_matrixs_plot = np.copy(strain_matrixs[index][0])
temp = np.copy(strain_matrixs_plot[:,0,0])
temp[nan_index] = np.nan
e11s = np.vstack((e11s,temp))
temp = np.copy(strain_matrixs_plot[:,1,1])
temp[nan_index] = np.nan
e22s = np.vstack((e22s,temp))
temp = np.copy(strain_matrixs_plot[:,2,2])
temp[nan_index] = np.nan
e33s = np.vstack((e33s,temp))
temp = np.copy(strain_matrixs_plot[:,0,1])
temp[nan_index] = np.nan
e12s = np.vstack((e12s,temp))
temp = np.copy(strain_matrixs_plot[:,0,2])
temp[nan_index] = np.nan
e13s = np.vstack((e13s,temp))
temp = np.copy(strain_matrixs_plot[:,1,2])
temp[nan_index] = np.nan
e23s = np.vstack((e23s,temp))
spots_len_plot = spots_len_plot.flatten()
mr_plot = mr_plot.flatten()
iR_pix_plot = iR_pix_plot.flatten()
fR_pix_plot = fR_pix_plot.flatten()
e11c = e11c.flatten()
e22c = e22c.flatten()
e33c = e33c.flatten()
e12c = e12c.flatten()
e13c = e13c.flatten()
e23c = e23c.flatten()
e11s = e11s.flatten()
e22s = e22s.flatten()
e33s = e33s.flatten()
e12s = e12s.flatten()
e13s = e13s.flatten()
e23s = e23s.flatten()
spots_len_plot = spots_len_plot[~np.isnan(spots_len_plot)]
mr_plot = mr_plot[~np.isnan(mr_plot)]
iR_pix_plot = iR_pix_plot[~np.isnan(iR_pix_plot)]
fR_pix_plot = fR_pix_plot[~np.isnan(fR_pix_plot)]
e11c = e11c[~np.isnan(e11c)]
e22c = e22c[~np.isnan(e22c)]
e33c = e33c[~np.isnan(e33c)]
e12c = e12c[~np.isnan(e12c)]
e13c = e13c[~np.isnan(e13c)]
e23c = e23c[~np.isnan(e23c)]
e11s = e11s[~np.isnan(e11s)]
e22s = e22s[~np.isnan(e22s)]
e33s = e33s[~np.isnan(e33s)]
e12s = e12s[~np.isnan(e12s)]
e13s = e13s[~np.isnan(e13s)]
e23s = e23s[~np.isnan(e23s)]
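The flatten-and-drop-NaN pattern repeated above can be expressed as one tiny helper (illustrative only, relying on the fact that NaN never compares equal to itself):

```python
def drop_nan(values):
    """Keep only the non-NaN entries of a flat sequence (NaN != NaN)."""
    return [v for v in values if v == v]

print(drop_nan([1.0, float("nan"), 2.0]))  # [1.0, 2.0]
```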
try:
title = "Number of spots and matching rate"
fig = plt.figure()
axs = fig.subplots(1, 2)
axs[0].set_title("Number of spots", loc='center', fontsize=8)
axs[0].hist(spots_len_plot, bins=bins)
axs[0].set_ylabel('Frequency', fontsize=8)
axs[0].tick_params(axis='both', which='major', labelsize=8)
axs[0].tick_params(axis='both', which='minor', labelsize=8)
axs[1].set_title("matching rate", loc='center', fontsize=8)
axs[1].hist(mr_plot, bins=bins)
axs[1].set_ylabel('Frequency', fontsize=8)
axs[1].tick_params(axis='both', which='major', labelsize=8)
axs[1].tick_params(axis='both', which='minor', labelsize=8)
plt.tight_layout()
plt.savefig(model_direc+ "//"+title+'.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "Initial and Final residues"
fig = plt.figure()
axs = fig.subplots(1, 2)
axs[0].set_title("Initial residues", loc='center', fontsize=8)
axs[0].hist(iR_pix_plot, bins=bins)
axs[0].set_ylabel('Frequency', fontsize=8)
axs[0].tick_params(axis='both', which='major', labelsize=8)
axs[0].tick_params(axis='both', which='minor', labelsize=8)
axs[1].set_title("Final residues", loc='center', fontsize=8)
axs[1].hist(fR_pix_plot, bins=bins)
axs[1].set_ylabel('Frequency', fontsize=8)
axs[1].tick_params(axis='both', which='major', labelsize=8)
axs[1].tick_params(axis='both', which='minor', labelsize=8)
plt.tight_layout()
plt.savefig(model_direc+ "//"+title+'.png',format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "strain Crystal reference"
fig = plt.figure()
fig.suptitle(title, fontsize=10)
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
logdata = e11c #np.log(e11c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 0].axvline(x=estimated_mu, c="k")
axs[0, 0].plot(x1, pdf, 'r')
axs[0, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 0].set_ylabel('Frequency', fontsize=8)
axs[0, 0].tick_params(axis='both', which='major', labelsize=8)
axs[0, 0].tick_params(axis='both', which='minor', labelsize=8)
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
logdata = e22c #np.log(e22c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 1].axvline(x=estimated_mu, c="k")
axs[0, 1].plot(x1, pdf, 'r')
axs[0, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[0, 1].hist(e22c, bins=bins)
axs[0, 1].set_ylabel('Frequency', fontsize=8)
axs[0, 1].tick_params(axis='both', which='major', labelsize=8)
axs[0, 1].tick_params(axis='both', which='minor', labelsize=8)
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
logdata = e33c #np.log(e33c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 2].axvline(x=estimated_mu, c="k")
axs[0, 2].plot(x1, pdf, 'r')
axs[0, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[0, 2].hist(e33c, bins=bins)
axs[0, 2].set_ylabel('Frequency', fontsize=8)
axs[0, 2].tick_params(axis='both', which='major', labelsize=8)
axs[0, 2].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
logdata = e12c#np.log(e12c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 0].axvline(x=estimated_mu, c="k")
axs[1, 0].plot(x1, pdf, 'r')
axs[1, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[1, 0].hist(e12c, bins=bins)
axs[1, 0].set_ylabel('Frequency', fontsize=8)
axs[1, 0].tick_params(axis='both', which='major', labelsize=8)
axs[1, 0].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
logdata = e13c#np.log(e13c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 1].axvline(x=estimated_mu, c="k")
axs[1, 1].plot(x1, pdf, 'r')
axs[1, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[1, 1].hist(e13c, bins=bins)
axs[1, 1].set_ylabel('Frequency', fontsize=8)
axs[1, 1].tick_params(axis='both', which='major', labelsize=8)
axs[1, 1].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
logdata = e23c#np.log(e23c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 2].axvline(x=estimated_mu, c="k")
axs[1, 2].plot(x1, pdf, 'r')
axs[1, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[1, 2].hist(e23c, bins=bins)
axs[1, 2].set_ylabel('Frequency', fontsize=8)
axs[1, 2].tick_params(axis='both', which='major', labelsize=8)
axs[1, 2].tick_params(axis='both', which='minor', labelsize=8)
plt.tight_layout()
plt.savefig(model_direc+ "//"+title+'.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "strain Sample reference"
fig = plt.figure()
fig.suptitle(title, fontsize=10)
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
logdata = e11s #np.log(e11c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 0].axvline(x=estimated_mu, c="k")
axs[0, 0].plot(x1, pdf, 'r')
axs[0, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 0].hist(e11s, bins=bins)
axs[0, 0].set_ylabel('Density', fontsize=8)
axs[0, 0].tick_params(axis='both', which='major', labelsize=8)
axs[0, 0].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
logdata = e22s  # np.log(e22s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 1].axvline(x=estimated_mu, c="k")
axs[0, 1].plot(x1, pdf, 'r')
axs[0, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 1].hist(e22s, bins=bins)
axs[0, 1].set_ylabel('Density', fontsize=8)
axs[0, 1].tick_params(axis='both', which='major', labelsize=8)
axs[0, 1].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
logdata = e33s  # np.log(e33s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 2].axvline(x=estimated_mu, c="k")
axs[0, 2].plot(x1, pdf, 'r')
axs[0, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 2].hist(e33s, bins=bins)
axs[0, 2].set_ylabel('Density', fontsize=8)
axs[0, 2].tick_params(axis='both', which='major', labelsize=8)
axs[0, 2].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
logdata = e12s  # np.log(e12s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 0].axvline(x=estimated_mu, c="k")
axs[1, 0].plot(x1, pdf, 'r')
axs[1, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 0].hist(e12s, bins=bins)
axs[1, 0].set_ylabel('Density', fontsize=8)
axs[1, 0].tick_params(axis='both', which='major', labelsize=8)
axs[1, 0].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
logdata = e13s  # np.log(e13s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 1].axvline(x=estimated_mu, c="k")
axs[1, 1].plot(x1, pdf, 'r')
axs[1, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 1].hist(e13s, bins=bins)
axs[1, 1].set_ylabel('Density', fontsize=8)
axs[1, 1].tick_params(axis='both', which='major', labelsize=8)
axs[1, 1].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
logdata = e23s  # np.log(e23s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 2].axvline(x=estimated_mu, c="k")
axs[1, 2].plot(x1, pdf, 'r')
axs[1, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 2].hist(e23s, bins=bins)
axs[1, 2].set_ylabel('Density', fontsize=8)
axs[1, 2].tick_params(axis='both', which='major', labelsize=8)
axs[1, 2].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
plt.tight_layout()
plt.savefig(model_direc + "//" + title + '.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
else:
mu_sd = []
mu_sdc = []
material_id = [material_, material1_]
for matid in range(2):
for index in range(len(spots_len)):
### index for nans
nan_index1 = np.where(match_rate[index][0] <= match_rate_threshold)[0]
mat_id_index = np.where(mat_global[index][0] != matid+1)[0]
nan_index = np.hstack((mat_id_index,nan_index1))
nan_index = np.unique(nan_index)
if index == 0:
spots_len_plot = np.copy(spots_len[index][0])
mr_plot = np.copy(match_rate[index][0])
iR_pix_plot = np.copy(iR_pix[index][0])
fR_pix_plot = np.copy(fR_pix[index][0])
strain_matrix_plot = np.copy(strain_matrix[index][0])
e11c = strain_matrix_plot[:,0,0]  # .reshape((lim_x, lim_y))
e22c = strain_matrix_plot[:,1,1]  # .reshape((lim_x, lim_y))
e33c = strain_matrix_plot[:,2,2]  # .reshape((lim_x, lim_y))
e12c = strain_matrix_plot[:,0,1]  # .reshape((lim_x, lim_y))
e13c = strain_matrix_plot[:,0,2]  # .reshape((lim_x, lim_y))
e23c = strain_matrix_plot[:,1,2]  # .reshape((lim_x, lim_y))
strain_matrixs_plot = np.copy(strain_matrixs[index][0])
e11s = strain_matrixs_plot[:,0,0]  # .reshape((lim_x, lim_y))
e22s = strain_matrixs_plot[:,1,1]  # .reshape((lim_x, lim_y))
e33s = strain_matrixs_plot[:,2,2]  # .reshape((lim_x, lim_y))
e12s = strain_matrixs_plot[:,0,1]  # .reshape((lim_x, lim_y))
e13s = strain_matrixs_plot[:,0,2]  # .reshape((lim_x, lim_y))
e23s = strain_matrixs_plot[:,1,2]  # .reshape((lim_x, lim_y))
spots_len_plot[nan_index] = np.nan
mr_plot[nan_index] = np.nan
iR_pix_plot[nan_index] = np.nan
fR_pix_plot[nan_index] = np.nan
e11c[nan_index] = np.nan
e22c[nan_index] = np.nan
e33c[nan_index] = np.nan
e12c[nan_index] = np.nan
e13c[nan_index] = np.nan
e23c[nan_index] = np.nan
e11s[nan_index] = np.nan
e22s[nan_index] = np.nan
e33s[nan_index] = np.nan
e12s[nan_index] = np.nan
e13s[nan_index] = np.nan
e23s[nan_index] = np.nan
else:
temp = np.copy(spots_len[index][0])
temp[nan_index] = np.nan
spots_len_plot = np.vstack((spots_len_plot,temp))
temp = np.copy(match_rate[index][0])
temp[nan_index] = np.nan
mr_plot = np.vstack((mr_plot,temp))
temp = np.copy(iR_pix[index][0])
temp[nan_index] = np.nan
iR_pix_plot = np.vstack((iR_pix_plot,temp))
temp = np.copy(fR_pix[index][0])
temp[nan_index] = np.nan
fR_pix_plot = np.vstack((fR_pix_plot,temp))
strain_matrix_plot = np.copy(strain_matrix[index][0])
temp = np.copy(strain_matrix_plot[:,0,0])
temp[nan_index] = np.nan
e11c = np.vstack((e11c,temp))
temp = np.copy(strain_matrix_plot[:,1,1])
temp[nan_index] = np.nan
e22c = np.vstack((e22c,temp))
temp = np.copy(strain_matrix_plot[:,2,2])
temp[nan_index] = np.nan
e33c = np.vstack((e33c,temp))
temp = np.copy(strain_matrix_plot[:,0,1])
temp[nan_index] = np.nan
e12c = np.vstack((e12c,temp))
temp = np.copy(strain_matrix_plot[:,0,2])
temp[nan_index] = np.nan
e13c = np.vstack((e13c,temp))
temp = np.copy(strain_matrix_plot[:,1,2])
temp[nan_index] = np.nan
e23c = np.vstack((e23c,temp))
##
strain_matrixs_plot = np.copy(strain_matrixs[index][0])
temp = np.copy(strain_matrixs_plot[:,0,0])
temp[nan_index] = np.nan
e11s = np.vstack((e11s,temp))
temp = np.copy(strain_matrixs_plot[:,1,1])
temp[nan_index] = np.nan
e22s = np.vstack((e22s,temp))
temp = np.copy(strain_matrixs_plot[:,2,2])
temp[nan_index] = np.nan
e33s = np.vstack((e33s,temp))
temp = np.copy(strain_matrixs_plot[:,0,1])
temp[nan_index] = np.nan
e12s = np.vstack((e12s,temp))
temp = np.copy(strain_matrixs_plot[:,0,2])
temp[nan_index] = np.nan
e13s = np.vstack((e13s,temp))
temp = np.copy(strain_matrixs_plot[:,1,2])
temp[nan_index] = np.nan
e23s = np.vstack((e23s,temp))
spots_len_plot = spots_len_plot.flatten()
mr_plot = mr_plot.flatten()
iR_pix_plot = iR_pix_plot.flatten()
fR_pix_plot = fR_pix_plot.flatten()
e11c = e11c.flatten()
e22c = e22c.flatten()
e33c = e33c.flatten()
e12c = e12c.flatten()
e13c = e13c.flatten()
e23c = e23c.flatten()
e11s = e11s.flatten()
e22s = e22s.flatten()
e33s = e33s.flatten()
e12s = e12s.flatten()
e13s = e13s.flatten()
e23s = e23s.flatten()
spots_len_plot = spots_len_plot[~np.isnan(spots_len_plot)]
mr_plot = mr_plot[~np.isnan(mr_plot)]
iR_pix_plot = iR_pix_plot[~np.isnan(iR_pix_plot)]
fR_pix_plot = fR_pix_plot[~np.isnan(fR_pix_plot)]
e11c = e11c[~np.isnan(e11c)]
e22c = e22c[~np.isnan(e22c)]
e33c = e33c[~np.isnan(e33c)]
e12c = e12c[~np.isnan(e12c)]
e13c = e13c[~np.isnan(e13c)]
e23c = e23c[~np.isnan(e23c)]
e11s = e11s[~np.isnan(e11s)]
e22s = e22s[~np.isnan(e22s)]
e33s = e33s[~np.isnan(e33s)]
e12s = e12s[~np.isnan(e12s)]
e13s = e13s[~np.isnan(e13s)]
e23s = e23s[~np.isnan(e23s)]
try:
title = "Number of spots and matching rate"
fig = plt.figure()
axs = fig.subplots(1, 2)
axs[0].set_title("Number of spots", loc='center', fontsize=8)
axs[0].hist(spots_len_plot, bins=bins)
axs[0].set_ylabel('Frequency', fontsize=8)
axs[0].tick_params(axis='both', which='major', labelsize=8)
axs[0].tick_params(axis='both', which='minor', labelsize=8)
axs[1].set_title("Matching rate", loc='center', fontsize=8)
axs[1].hist(mr_plot, bins=bins)
axs[1].set_ylabel('Frequency', fontsize=8)
axs[1].tick_params(axis='both', which='major', labelsize=8)
axs[1].tick_params(axis='both', which='minor', labelsize=8)
plt.tight_layout()
plt.savefig(model_direc + "//" + title + "_" + material_id[matid] + '.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "Initial and Final residues"
fig = plt.figure()
axs = fig.subplots(1, 2)
axs[0].set_title("Initial residues", loc='center', fontsize=8)
axs[0].hist(iR_pix_plot, bins=bins)
axs[0].set_ylabel('Frequency', fontsize=8)
axs[0].tick_params(axis='both', which='major', labelsize=8)
axs[0].tick_params(axis='both', which='minor', labelsize=8)
axs[1].set_title("Final residues", loc='center', fontsize=8)
axs[1].hist(fR_pix_plot, bins=bins)
axs[1].set_ylabel('Frequency', fontsize=8)
axs[1].tick_params(axis='both', which='major', labelsize=8)
axs[1].tick_params(axis='both', which='minor', labelsize=8)
plt.tight_layout()
plt.savefig(model_direc + "//" + title + "_" + material_id[matid] + '.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "strain Crystal reference"+" "+material_id[matid]
fig = plt.figure()
fig.suptitle(title, fontsize=10)
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
logdata = e11c  # np.log(e11c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 0].axvline(x=estimated_mu, c="k")
axs[0, 0].plot(x1, pdf, 'r')
axs[0, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 0].set_ylabel('Density', fontsize=8)
axs[0, 0].tick_params(axis='both', which='major', labelsize=8)
axs[0, 0].tick_params(axis='both', which='minor', labelsize=8)
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
logdata = e22c  # np.log(e22c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 1].axvline(x=estimated_mu, c="k")
axs[0, 1].plot(x1, pdf, 'r')
axs[0, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[0, 1].hist(e22c, bins=bins)
axs[0, 1].set_ylabel('Density', fontsize=8)
axs[0, 1].tick_params(axis='both', which='major', labelsize=8)
axs[0, 1].tick_params(axis='both', which='minor', labelsize=8)
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
logdata = e33c  # np.log(e33c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 2].axvline(x=estimated_mu, c="k")
axs[0, 2].plot(x1, pdf, 'r')
axs[0, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[0, 2].hist(e33c, bins=bins)
axs[0, 2].set_ylabel('Density', fontsize=8)
axs[0, 2].tick_params(axis='both', which='major', labelsize=8)
axs[0, 2].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
logdata = e12c  # np.log(e12c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 0].axvline(x=estimated_mu, c="k")
axs[1, 0].plot(x1, pdf, 'r')
axs[1, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[1, 0].hist(e12c, bins=bins)
axs[1, 0].set_ylabel('Density', fontsize=8)
axs[1, 0].tick_params(axis='both', which='major', labelsize=8)
axs[1, 0].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
logdata = e13c  # np.log(e13c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 1].axvline(x=estimated_mu, c="k")
axs[1, 1].plot(x1, pdf, 'r')
axs[1, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
# axs[1, 1].hist(e13c, bins=bins)
axs[1, 1].set_ylabel('Density', fontsize=8)
axs[1, 1].tick_params(axis='both', which='major', labelsize=8)
axs[1, 1].tick_params(axis='both', which='minor', labelsize=8)
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
logdata = e23c  # np.log(e23c)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 2].axvline(x=estimated_mu, c="k")
axs[1, 2].plot(x1, pdf, 'r')
axs[1, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 2].hist(e23c, bins=bins)
axs[1, 2].set_ylabel('Density', fontsize=8)
axs[1, 2].tick_params(axis='both', which='major', labelsize=8)
axs[1, 2].tick_params(axis='both', which='minor', labelsize=8)
mu_sdc.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
plt.tight_layout()
plt.savefig(model_direc + "//" + title + '.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
try:
title = "strain Sample reference"+" "+material_id[matid]
fig = plt.figure()
fig.suptitle(title, fontsize=10)
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
logdata = e11s  # np.log(e11s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 0].axvline(x=estimated_mu, c="k")
axs[0, 0].plot(x1, pdf, 'r')
axs[0, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 0].hist(e11s, bins=bins)
axs[0, 0].set_ylabel('Density', fontsize=8)
axs[0, 0].tick_params(axis='both', which='major', labelsize=8)
axs[0, 0].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
logdata = e22s  # np.log(e22s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 1].axvline(x=estimated_mu, c="k")
axs[0, 1].plot(x1, pdf, 'r')
axs[0, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 1].hist(e22s, bins=bins)
axs[0, 1].set_ylabel('Density', fontsize=8)
axs[0, 1].tick_params(axis='both', which='major', labelsize=8)
axs[0, 1].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
logdata = e33s  # np.log(e33s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[0, 2].axvline(x=estimated_mu, c="k")
axs[0, 2].plot(x1, pdf, 'r')
axs[0, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[0, 2].hist(e33s, bins=bins)
axs[0, 2].set_ylabel('Density', fontsize=8)
axs[0, 2].tick_params(axis='both', which='major', labelsize=8)
axs[0, 2].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
logdata = e12s  # np.log(e12s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 0].axvline(x=estimated_mu, c="k")
axs[1, 0].plot(x1, pdf, 'r')
axs[1, 0].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 0].hist(e12s, bins=bins)
axs[1, 0].set_ylabel('Density', fontsize=8)
axs[1, 0].tick_params(axis='both', which='major', labelsize=8)
axs[1, 0].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
logdata = e13s  # np.log(e13s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 1].axvline(x=estimated_mu, c="k")
axs[1, 1].plot(x1, pdf, 'r')
axs[1, 1].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 1].hist(e13s, bins=bins)
axs[1, 1].set_ylabel('Density', fontsize=8)
axs[1, 1].tick_params(axis='both', which='major', labelsize=8)
axs[1, 1].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
logdata = e23s  # np.log(e23s)
xmin = logdata.min()
xmax = logdata.max()
x1 = np.linspace(xmin, xmax, 1000)
estimated_mu, estimated_sigma = scipy.stats.norm.fit(logdata)
pdf = scipy.stats.norm.pdf(x1, loc=estimated_mu, scale=estimated_sigma)
axs[1, 2].axvline(x=estimated_mu, c="k")
axs[1, 2].plot(x1, pdf, 'r')
axs[1, 2].hist(logdata, bins=bins, density=True, alpha=0.8)
# axs[1, 2].hist(e23s, bins=bins)
axs[1, 2].set_ylabel('Density', fontsize=8)
axs[1, 2].tick_params(axis='both', which='major', labelsize=8)
axs[1, 2].tick_params(axis='both', which='minor', labelsize=8)
mu_sd.append((estimated_mu-estimated_sigma, estimated_mu+estimated_sigma))
plt.tight_layout()
plt.savefig(model_direc + "//" + title + '.png', format='png', dpi=1000)
plt.close(fig)
except Exception:
pass
if material_ == material1_:
matid = 0
for index in range(len(strain_matrix)):
nan_index = np.where(match_rate[index][0] <= match_rate_threshold)[0]
strain_matrix_plot = np.copy(strain_matrixs[index][0])
strain_matrix_plot[nan_index,:,:] = np.nan
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin, vmax = mu_sd[matid*6]
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
im = axs[0, 0].imshow(strain_matrix_plot[:,0,0].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+1]
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
im = axs[0, 1].imshow(strain_matrix_plot[:,1,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+2]
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
im = axs[0, 2].imshow(strain_matrix_plot[:,2,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+3]
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
im = axs[1, 0].imshow(strain_matrix_plot[:,0,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+4]
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
im = axs[1, 1].imshow(strain_matrix_plot[:,0,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+5]
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
im = axs[1, 2].imshow(strain_matrix_plot[:,1,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + '//figure_strain_UBsample_UB' + str(index) + '.png', bbox_inches='tight', format='png', dpi=1000)
plt.close(fig)
strain_matrix_plot = np.copy(strain_matrix[index][0])
strain_matrix_plot[nan_index,:,:] = np.nan
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin, vmax = mu_sdc[matid*6]
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
im = axs[0, 0].imshow(strain_matrix_plot[:,0,0].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+1]
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
im = axs[0, 1].imshow(strain_matrix_plot[:,1,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+2]
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
im = axs[0, 2].imshow(strain_matrix_plot[:,2,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+3]
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
im = axs[1, 0].imshow(strain_matrix_plot[:,0,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+4]
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
im = axs[1, 1].imshow(strain_matrix_plot[:,0,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+5]
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
im = axs[1, 2].imshow(strain_matrix_plot[:,1,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + '//figure_strain_UBcrystal_UB' + str(index) + '.png', bbox_inches='tight', format='png', dpi=1000)
plt.close(fig)
col_plot = np.copy(col[index][0])
col_plot[nan_index, :] = np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
colx_plot = np.copy(colx[index][0])
colx_plot[nan_index, :] = np.nan
colx_plot = colx_plot.reshape((lim_x, lim_y, 3))
coly_plot = np.copy(coly[index][0])
coly_plot[nan_index, :] = np.nan
coly_plot = coly_plot.reshape((lim_x, lim_y, 3))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"IPF Y map", loc='center', fontsize=8)
axs[1].imshow(coly_plot, origin='lower')
axs[1].set_xticks([])
axs[1].set_yticks([])
axs[2].set_title(r"IPF X map", loc='center', fontsize=8)
im = axs[2].imshow(colx_plot, origin='lower')
axs[2].set_xticks([])
axs[2].set_yticks([])
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + '//IPF_map_UB' + str(index) + '.png', bbox_inches='tight', format='png', dpi=1000)
plt.close(fig)
col_plot = np.copy(col[index][0])
col_plot[nan_index, :] = np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
mr_plot = np.copy(match_rate[index][0])
mr_plot[nan_index, :] = np.nan
mr_plot = mr_plot.reshape((lim_x, lim_y))
mat_glob = np.copy(mat_global[index][0])
mat_glob[nan_index, :] = np.nan
mat_glob = mat_glob.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"Material Index", loc='center', fontsize=8)
im = axs[1].imshow(mat_glob, origin='lower', vmin=0, vmax=1)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Matching rate", loc='center', fontsize=8)
im = axs[2].imshow(mr_plot, origin='lower', cmap=plt.cm.jet, vmin=0, vmax=100)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + "//figure_global_UB" + str(index) + '.png', bbox_inches='tight', format='png', dpi=1000)
plt.close(fig)
spots_len_plot = np.copy(spots_len[index][0])
spots_len_plot[nan_index, :] = np.nan
spots_len_plot = spots_len_plot.reshape((lim_x, lim_y))
iR_pix_plot = np.copy(iR_pix[index][0])
iR_pix_plot[nan_index, :] = np.nan
iR_pix_plot = iR_pix_plot.reshape((lim_x, lim_y))
fR_pix_plot = np.copy(fR_pix[index][0])
fR_pix_plot[nan_index, :] = np.nan
fR_pix_plot = fR_pix_plot.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"Number of spots detected", loc='center', fontsize=8)
im = axs[0].imshow(spots_len_plot, origin='lower', cmap=plt.cm.jet)
axs[0].set_xticks([])
axs[0].set_yticks([])
divider = make_axes_locatable(axs[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[1].set_title(r"Initial pixel residues", loc='center', fontsize=8)
im = axs[1].imshow(iR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Final pixel residues", loc='center', fontsize=8)
im = axs[2].imshow(fR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + '//figure_mr_ir_fr_UB' + str(index) + '.png', bbox_inches='tight', format='png', dpi=1000)
plt.close(fig)
try:
a, b, c, alp, bet, gam = [], [], [], [], [], []
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0] == "none":
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for irot in range(len(rotation_matrix1[index][0])):
lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(rotation_matrix1[index][0][irot,:,:],
material_,
constantlength,
dictmaterials=dictLT.dict_Materials)
a.append(lattice_parameter_direct_strain[0])
b.append(lattice_parameter_direct_strain[1])
c.append(lattice_parameter_direct_strain[2])
alp.append(lattice_parameter_direct_strain[3])
bet.append(lattice_parameter_direct_strain[4])
gam.append(lattice_parameter_direct_strain[5])
logdata = np.array(a)
logdata = logdata[~np.isnan(logdata)]
rangemina, rangemaxa = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(b)
logdata = logdata[~np.isnan(logdata)]
rangeminb, rangemaxb = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(c)
logdata = logdata[~np.isnan(logdata)]
rangeminc, rangemaxc = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(alp)
logdata = logdata[~np.isnan(logdata)]
rangeminal, rangemaxal = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(bet)
logdata = logdata[~np.isnan(logdata)]
rangeminbe, rangemaxbe = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(gam)
logdata = logdata[~np.isnan(logdata)]
rangeminga, rangemaxga = np.min(logdata)-0.01, np.max(logdata)+0.01
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin = rangemina
vmax = rangemaxa
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$a$", loc='center', fontsize=8)
strain_matrix_plot = np.array(a)
im=axs[0, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminb
vmax = rangemaxb
axs[0, 1].set_title(r"$b$", loc='center', fontsize=8)
strain_matrix_plot = np.array(b)
im=axs[0, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminc
vmax = rangemaxc
axs[0, 2].set_title(r"$c$", loc='center', fontsize=8)
strain_matrix_plot = np.array(c)
im=axs[0, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminal
vmax = rangemaxal
axs[1, 0].set_title(r"$\alpha$", loc='center', fontsize=8)
strain_matrix_plot = np.array(alp)
im=axs[1, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminbe
vmax = rangemaxbe
axs[1, 1].set_title(r"$\beta$", loc='center', fontsize=8)
strain_matrix_plot = np.array(bet)
im=axs[1, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminga
vmax = rangemaxga
axs[1, 2].set_title(r"$\gamma$", loc='center', fontsize=8)
strain_matrix_plot = np.array(gam)
im = axs[1, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.formatter.set_useOffset(False)
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ "//"+'figure_unitcell_'+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in unit-cell plots")
try:
latticeparams = dictLT.dict_Materials[material_][1]
a,b,c,alp,bet,gam = [],[],[],[],[],[]
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0]=="none" and \
"b" not in additional_expression[0]:
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for irot in range(len(rotation_matrix1[index][0])):
lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(rotation_matrix1[index][0][irot,:,:],
material_,
constantlength,
dictmaterials=dictLT.dict_Materials)
a.append(lattice_parameter_direct_strain[0])
b.append(lattice_parameter_direct_strain[1])
c.append(lattice_parameter_direct_strain[2])
alp.append(lattice_parameter_direct_strain[3])
bet.append(lattice_parameter_direct_strain[4])
gam.append(lattice_parameter_direct_strain[5])
logdata = np.array(a) - latticeparams[0]
logdata = logdata[~np.isnan(logdata)]
rangemina, rangemaxa = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(b) - latticeparams[1]
logdata = logdata[~np.isnan(logdata)]
rangeminb, rangemaxb = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(c) - latticeparams[2]
logdata = logdata[~np.isnan(logdata)]
rangeminc, rangemaxc = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(alp) - latticeparams[3]
logdata = logdata[~np.isnan(logdata)]
rangeminal, rangemaxal = np.min(logdata) - 0.01, np.max(logdata) + 0.01
logdata = np.array(bet) - latticeparams[4]
logdata = logdata[~np.isnan(logdata)]
rangeminbe, rangemaxbe = np.min(logdata) - 0.01, np.max(logdata) + 0.01
logdata = np.array(gam) - latticeparams[5]
logdata = logdata[~np.isnan(logdata)]
rangeminga, rangemaxga = np.min(logdata) - 0.01, np.max(logdata) + 0.01
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin = rangemina
vmax = rangemaxa
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$a$", loc='center', fontsize=8)
strain_matrix_plot = np.array(a) - latticeparams[0]
im=axs[0, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminb
vmax = rangemaxb
axs[0, 1].set_title(r"$b$", loc='center', fontsize=8)
strain_matrix_plot = np.array(b) - latticeparams[1]
im=axs[0, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminc
vmax = rangemaxc
axs[0, 2].set_title(r"$c$", loc='center', fontsize=8)
strain_matrix_plot = np.array(c) - latticeparams[2]
im=axs[0, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminal
vmax = rangemaxal
axs[1, 0].set_title(r"$\alpha$", loc='center', fontsize=8)
strain_matrix_plot = np.array(alp) - latticeparams[3]
im=axs[1, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminbe
vmax = rangemaxbe
axs[1, 1].set_title(r"$\beta$", loc='center', fontsize=8)
strain_matrix_plot = np.array(bet) - latticeparams[4]
im=axs[1, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminga
vmax = rangemaxga
axs[1, 2].set_title(r"$\gamma$", loc='center', fontsize=8)
strain_matrix_plot = np.array(gam) - latticeparams[5]
im = axs[1, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.formatter.set_useOffset(False)
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + "//" + 'figure_unitcell_relative_'+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in relative unit-cell plots")
else:
for matid in range(2):
for index in range(len(strain_matrix)):
nan_index1 = np.where(match_rate[index][0] <= match_rate_threshold)[0]
mat_id_index = np.where(mat_global[index][0] != matid+1)[0]
nan_index = np.hstack((mat_id_index,nan_index1))
nan_index = np.unique(nan_index)
strain_matrix_plot = np.copy(strain_matrixs[index][0])
strain_matrix_plot[nan_index,:,:] = np.nan
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
try:
vmin, vmax = mu_sd[matid*6]
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
im=axs[0, 0].imshow(strain_matrix_plot[:,0,0].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+1]
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
im=axs[0, 1].imshow(strain_matrix_plot[:,1,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+2]
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
im=axs[0, 2].imshow(strain_matrix_plot[:,2,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+3]
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
im=axs[1, 0].imshow(strain_matrix_plot[:,0,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+4]
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
im=axs[1, 1].imshow(strain_matrix_plot[:,0,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sd[matid*6+5]
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
im = axs[1, 2].imshow(strain_matrix_plot[:,1,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ '//figure_strain_UBsample_mat'+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in sample-frame strain plots")
strain_matrix_plot = np.copy(strain_matrix[index][0])
strain_matrix_plot[nan_index,:,:] = np.nan
try:
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin, vmax = mu_sdc[matid*6]
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$\epsilon_{11}$ (%)", loc='center', fontsize=8)
im=axs[0, 0].imshow(strain_matrix_plot[:,0,0].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+1]
axs[0, 1].set_title(r"$\epsilon_{22}$ (%)", loc='center', fontsize=8)
im=axs[0, 1].imshow(strain_matrix_plot[:,1,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+2]
axs[0, 2].set_title(r"$\epsilon_{33}$ (%)", loc='center', fontsize=8)
im=axs[0, 2].imshow(strain_matrix_plot[:,2,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+3]
axs[1, 0].set_title(r"$\epsilon_{12}$ (%)", loc='center', fontsize=8)
im=axs[1, 0].imshow(strain_matrix_plot[:,0,1].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+4]
axs[1, 1].set_title(r"$\epsilon_{13}$ (%)", loc='center', fontsize=8)
im=axs[1, 1].imshow(strain_matrix_plot[:,0,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin, vmax = mu_sdc[matid*6+5]
axs[1, 2].set_title(r"$\epsilon_{23}$ (%)", loc='center', fontsize=8)
im = axs[1, 2].imshow(strain_matrix_plot[:,1,2].reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ '//figure_strain_UBcrystal_mat'+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in crystal-frame strain plots")
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan,np.nan,np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
colx_plot = np.copy(colx[index][0])
colx_plot[nan_index,:] = np.nan,np.nan,np.nan
colx_plot = colx_plot.reshape((lim_x, lim_y,3))
coly_plot = np.copy(coly[index][0])
coly_plot[nan_index,:] = np.nan,np.nan,np.nan
coly_plot = coly_plot.reshape((lim_x, lim_y,3))
try:
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"IPF Y map", loc='center', fontsize=8)
axs[1].imshow(coly_plot, origin='lower')
axs[1].set_xticks([])
axs[1].set_yticks([])
axs[2].set_title(r"IPF X map", loc='center', fontsize=8)
im = axs[2].imshow(colx_plot, origin='lower')
axs[2].set_xticks([])
axs[2].set_yticks([])
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ '//IPF_map_mat'+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan,np.nan,np.nan
col_plot = col_plot.reshape((lim_x, lim_y, 3))
mr_plot = np.copy(match_rate[index][0])
mr_plot[nan_index,:] = np.nan
mr_plot = mr_plot.reshape((lim_x, lim_y))
mat_glob = np.copy(mat_global[index][0])
mat_glob[nan_index,:] = np.nan
mat_glob = mat_glob.reshape((lim_x, lim_y))
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"IPF Z map", loc='center', fontsize=8)
axs[0].imshow(col_plot, origin='lower')
axs[0].set_xticks([])
axs[0].set_yticks([])
axs[1].set_title(r"Material Index", loc='center', fontsize=8)
im = axs[1].imshow(mat_glob, origin='lower', vmin=0, vmax=2)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Matching rate", loc='center', fontsize=8)
im = axs[2].imshow(mr_plot, origin='lower', cmap=plt.cm.jet, vmin=0, vmax=100)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ "//figure_global_mat"+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in IPF/material/matching-rate plots")
spots_len_plot = np.copy(spots_len[index][0])
spots_len_plot[nan_index,:] = np.nan
spots_len_plot = spots_len_plot.reshape((lim_x, lim_y))
iR_pix_plot = np.copy(iR_pix[index][0])
iR_pix_plot[nan_index,:] = np.nan
iR_pix_plot = iR_pix_plot.reshape((lim_x, lim_y))
fR_pix_plot = np.copy(fR_pix[index][0])
fR_pix_plot[nan_index,:] = np.nan
fR_pix_plot = fR_pix_plot.reshape((lim_x, lim_y))
try:
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
axs = fig.subplots(1, 3)
axs[0].set_title(r"Number of spots detected", loc='center', fontsize=8)
im = axs[0].imshow(spots_len_plot, origin='lower', cmap=plt.cm.jet)
axs[0].set_xticks([])
axs[0].set_yticks([])
divider = make_axes_locatable(axs[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[1].set_title(r"Initial pixel residues", loc='center', fontsize=8)
im = axs[1].imshow(iR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[1].set_xticks([])
axs[1].set_yticks([])
divider = make_axes_locatable(axs[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
axs[2].set_title(r"Final pixel residues", loc='center', fontsize=8)
im = axs[2].imshow(fR_pix_plot, origin='lower', cmap=plt.cm.jet)
axs[2].set_xticks([])
axs[2].set_yticks([])
divider = make_axes_locatable(axs[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+'//figure_mr_ir_fr_mat'+str(matid)+"_UB"+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in spot-count/pixel-residue plots")
try:
a,b,c,alp,bet,gam = [],[],[],[],[],[]
# pick the lattice of the material currently being plotted (matid 0 or 1)
material_plot = material_ if matid == 0 else material1_
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0] == "none":
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for irot in range(len(rotation_matrix1[index][0])):
lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(rotation_matrix1[index][0][irot,:,:],
material_plot,
constantlength,
dictmaterials=dictLT.dict_Materials)
a.append(lattice_parameter_direct_strain[0])
b.append(lattice_parameter_direct_strain[1])
c.append(lattice_parameter_direct_strain[2])
alp.append(lattice_parameter_direct_strain[3])
bet.append(lattice_parameter_direct_strain[4])
gam.append(lattice_parameter_direct_strain[5])
logdata = np.array(a)
logdata = logdata[~np.isnan(logdata)]
rangemina, rangemaxa = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(b)
logdata = logdata[~np.isnan(logdata)]
rangeminb, rangemaxb = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(c)
logdata = logdata[~np.isnan(logdata)]
rangeminc, rangemaxc = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(alp)
logdata = logdata[~np.isnan(logdata)]
rangeminal, rangemaxal = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(bet)
logdata = logdata[~np.isnan(logdata)]
rangeminbe, rangemaxbe = np.min(logdata)-0.01, np.max(logdata)+0.01
logdata = np.array(gam)
logdata = logdata[~np.isnan(logdata)]
rangeminga, rangemaxga = np.min(logdata)-0.01, np.max(logdata)+0.01
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin = rangemina
vmax = rangemaxa
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$a$", loc='center', fontsize=8)
strain_matrix_plot = np.array(a)
im=axs[0, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminb
vmax = rangemaxb
axs[0, 1].set_title(r"$b$", loc='center', fontsize=8)
strain_matrix_plot = np.array(b)
im=axs[0, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminc
vmax = rangemaxc
axs[0, 2].set_title(r"$c$", loc='center', fontsize=8)
strain_matrix_plot = np.array(c)
im=axs[0, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminal
vmax = rangemaxal
axs[1, 0].set_title(r"$\alpha$", loc='center', fontsize=8)
strain_matrix_plot = np.array(alp)
im=axs[1, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminbe
vmax = rangemaxbe
axs[1, 1].set_title(r"$\beta$", loc='center', fontsize=8)
strain_matrix_plot = np.array(bet)
im=axs[1, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminga
vmax = rangemaxga
axs[1, 2].set_title(r"$\gamma$", loc='center', fontsize=8)
strain_matrix_plot = np.array(gam)
im = axs[1, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.formatter.set_useOffset(False)
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc+ "//"+'figure_unitcell_'+str(matid)+'_'+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in unit-cell plots")
try:
# pick the lattice of the material currently being plotted (matid 0 or 1)
material_plot = material_ if matid == 0 else material1_
latticeparams = dictLT.dict_Materials[material_plot][1]
a,b,c,alp,bet,gam = [],[],[],[],[],[]
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0] == "none":
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for irot in range(len(rotation_matrix1[index][0])):
lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(rotation_matrix1[index][0][irot,:,:],
material_plot,
constantlength,
dictmaterials=dictLT.dict_Materials)
a.append(lattice_parameter_direct_strain[0])
b.append(lattice_parameter_direct_strain[1])
c.append(lattice_parameter_direct_strain[2])
alp.append(lattice_parameter_direct_strain[3])
bet.append(lattice_parameter_direct_strain[4])
gam.append(lattice_parameter_direct_strain[5])
logdata = np.array(a) - latticeparams[0]
logdata = logdata[~np.isnan(logdata)]
rangemina, rangemaxa = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(b) - latticeparams[1]
logdata = logdata[~np.isnan(logdata)]
rangeminb, rangemaxb = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(c) - latticeparams[2]
logdata = logdata[~np.isnan(logdata)]
rangeminc, rangemaxc = np.min(logdata) - 0.01e-2, np.max(logdata) + 0.01e-2
logdata = np.array(alp) - latticeparams[3]
logdata = logdata[~np.isnan(logdata)]
rangeminal, rangemaxal = np.min(logdata) - 0.01, np.max(logdata) + 0.01
logdata = np.array(bet) - latticeparams[4]
logdata = logdata[~np.isnan(logdata)]
rangeminbe, rangemaxbe = np.min(logdata) - 0.01, np.max(logdata) + 0.01
logdata = np.array(gam) - latticeparams[5]
logdata = logdata[~np.isnan(logdata)]
rangeminga, rangemaxga = np.min(logdata) - 0.01, np.max(logdata) + 0.01
fig = plt.figure(figsize=(11.69,8.27), dpi=100)
bottom, top = 0.1, 0.9
left, right = 0.1, 0.8
fig.subplots_adjust(top=top, bottom=bottom, left=left, right=right, hspace=0.15, wspace=0.25)
vmin = rangemina
vmax = rangemaxa
axs = fig.subplots(2, 3)
axs[0, 0].set_title(r"$a$", loc='center', fontsize=8)
strain_matrix_plot = np.array(a) - latticeparams[0]
im=axs[0, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[0, 0].set_xticks([])
axs[0, 0].set_yticks([])
divider = make_axes_locatable(axs[0,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminb
vmax = rangemaxb
axs[0, 1].set_title(r"$b$", loc='center', fontsize=8)
strain_matrix_plot = np.array(b) - latticeparams[1]
im=axs[0, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminc
vmax = rangemaxc
axs[0, 2].set_title(r"$c$", loc='center', fontsize=8)
strain_matrix_plot = np.array(c) - latticeparams[2]
im=axs[0, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(axs[0,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminal
vmax = rangemaxal
axs[1, 0].set_title(r"$\alpha$", loc='center', fontsize=8)
strain_matrix_plot = np.array(alp) - latticeparams[3]
im=axs[1, 0].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 0].set_xticks([])
axs[1, 0].set_yticks([])
divider = make_axes_locatable(axs[1,0])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminbe
vmax = rangemaxbe
axs[1, 1].set_title(r"$\beta$", loc='center', fontsize=8)
strain_matrix_plot = np.array(bet) - latticeparams[4]
im=axs[1, 1].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 1].set_xticks([])
divider = make_axes_locatable(axs[1,1])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.ax.tick_params(labelsize=8)
vmin = rangeminga
vmax = rangemaxga
axs[1, 2].set_title(r"$\gamma$", loc='center', fontsize=8)
strain_matrix_plot = np.array(gam) - latticeparams[5]
im = axs[1, 2].imshow(strain_matrix_plot.reshape((lim_x, lim_y)), origin='lower', cmap=plt.cm.jet, vmin=vmin, vmax=vmax)
axs[1, 2].set_xticks([])
divider = make_axes_locatable(axs[1,2])
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
cbar.formatter.set_useOffset(False)
cbar.ax.tick_params(labelsize=8)
for ax in axs.flat:
ax.label_outer()
plt.savefig(model_direc + "//" + 'figure_unitcell_relative_'+str(matid)+'_'+str(index)+'.png', bbox_inches='tight',format='png', dpi=1000)
plt.close(fig)
except Exception:
print("Error in relative unit-cell plots")
def sst_texture(orient_data=None, col_array=None, direc="", symmetry=None, symmetry_name=None, lattice=None,
axis="Z", fn="", symms=None):
print("symmetry of the current phase is : "+symmetry_name)
if np.max(col_array) > 1:
col_array[np.where(col_array>1)]=1
fig = plt.figure(1)
if symmetry_name == "cubic":
pole_hkls = ['111','110','100']
ax1 = fig.add_subplot(221, aspect='equal')
ax2 = fig.add_subplot(222, aspect='equal')
ax3 = fig.add_subplot(223, aspect='equal')
ax4 = fig.add_subplot(224, aspect='equal')
elif symmetry_name == "hexagonal":
pole_hkls = ['001','100','101','102','110']
ax1 = fig.add_subplot(231, aspect='equal')
ax2 = fig.add_subplot(232, aspect='equal')
ax3 = fig.add_subplot(233, aspect='equal')
ax4 = fig.add_subplot(234, aspect='equal')
ax5 = fig.add_subplot(235, aspect='equal')
ax6 = fig.add_subplot(236, aspect='equal')
else:
print("PF and IPF plots are only supported for Cubic and Hexagonal systems for now")
return
pf_axes = [ax1, ax2, ax3] if symmetry_name == "cubic" else [ax1, ax2, ax3, ax4, ax5]
for hkl, ax_pf in zip(pole_hkls, pf_axes):
pf1 = PoleFigure(hkl=hkl, proj='stereo', lattice=lattice, axis=axis)
pf1.mksize = 1.
pf1.plot_pf(col_array, orient_data, ax=ax_pf, ftsize=6)
if symmetry_name == "cubic":
pf1.plot_sst_color(col_array, orient_data, ax=ax4, ftsize=6, phase=0, symms=symms)
elif symmetry_name == "hexagonal":
pf1.plot_sst_color(col_array, orient_data, ax=ax6, ftsize=6, phase=1, symms=symms)
plt.savefig(direc+"//PF_IPF_"+fn+".png", bbox_inches='tight',format='png', dpi=1000)
plt.close()
def save_sst(lim_x, lim_y, strain_matrix, strain_matrixs, col, colx, coly,
match_rate, mat_global, spots_len, iR_pix, fR_pix,
model_direc, material_, material1_, lattice_, lattice1_,
symmetry_, symmetry1_, crystal, crystal1, rotation_matrix1, symmetry_name, symmetry1_name,
mac_axis=[0., 0., 1.], axis_text="Z", match_rate_threshold=5):
rotation_matrix_sst = [[] for i in range(len(rotation_matrix1))]
for i in range(len(rotation_matrix1)):
rotation_matrix_sst[i].append(np.zeros((lim_x*lim_y,3,3)))
for i in range(len(rotation_matrix1)):
temp_mat = rotation_matrix1[i][0]
for j in range(len(temp_mat)):
orientation_matrix123 = temp_mat[j,:,:]
## rotate the orientation by 40 degrees to bring it into the sample reference frame
omega = np.deg2rad(-40.0)
# rotation by -omega about the Y axis to go back into the sample frame (Rsample)
cw = np.cos(omega)
sw = np.sin(omega)
mat_from_lab_to_sample_frame = np.array([[cw, 0.0, sw], [0.0, 1.0, 0.0], [-sw, 0.0, cw]])
orientation_matrix123 = np.dot(mat_from_lab_to_sample_frame.T, orientation_matrix123)
if np.linalg.det(orientation_matrix123) < 0:
orientation_matrix123 = -orientation_matrix123
rotation_matrix_sst[i][0][j,:,:] = orientation_matrix123
rangeval = len(match_rate)
if material_ == material1_:
for index in range(rangeval):
### index for nans
nan_index = np.where(match_rate[index][0] <= match_rate_threshold)[0]
if index == 0:
rotation_matrix_plot = np.copy(rotation_matrix_sst[index][0])
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan
rotation_matrix_plot[nan_index,:,:] = np.nan
sst_texture(orient_data=rotation_matrix_plot,
col_array=col_plot,
direc=model_direc,
symmetry=symmetry_,
symmetry_name = symmetry_name,
lattice=lattice_, axis=axis_text, fn="UB_"+str(index),
symms=crystal._hklsym)
else:
tempori = np.copy(rotation_matrix_sst[index][0])
tempori[nan_index,:,:] = np.nan
rotation_matrix_plot = np.vstack((rotation_matrix_plot,tempori))
tempcol = np.copy(col[index][0])
tempcol[nan_index,:] = np.nan
col_plot = np.vstack((col_plot,tempcol))
sst_texture(orient_data=tempori,
col_array=tempcol,
direc=model_direc,
symmetry=symmetry_,
symmetry_name = symmetry_name,
lattice=lattice_, axis=axis_text, fn="UB_"+str(index),
symms=crystal._hklsym)
### Plot pole figures and IPF (cubic and hexagonal are supported for now)
sst_texture(orient_data=rotation_matrix_plot,
col_array=col_plot,
direc=model_direc,
symmetry=symmetry_,
symmetry_name = symmetry_name,
lattice=lattice_, axis=axis_text, fn="all_UBs",
symms=crystal._hklsym)
else:
for matid in range(2):
if matid == 0:
symmetry_name_plot = symmetry_name
symmetry_plot = symmetry_
lattice_plot = lattice_
symms = crystal._hklsym
else:
symmetry_name_plot = symmetry1_name
symmetry_plot = symmetry1_
lattice_plot = lattice1_
symms = crystal1._hklsym
for index in range(rangeval):
### index for nans
nan_index1 = np.where(match_rate[index][0] <= match_rate_threshold)[0]
mat_id_index = np.where(mat_global[index][0] != matid+1)[0]
nan_index = np.hstack((mat_id_index,nan_index1))
nan_index = np.unique(nan_index)
if index == 0:
rotation_matrix_plot = np.copy(rotation_matrix_sst[index][0])
rotation_matrix_plot[nan_index,:,:] = np.nan
col_plot = np.copy(col[index][0])
col_plot[nan_index,:] = np.nan
sst_texture(orient_data=rotation_matrix_plot,
col_array=col_plot,
direc=model_direc,
symmetry=symmetry_plot,
symmetry_name = symmetry_name_plot,
lattice=lattice_plot, axis=axis_text, fn="mat_"+str(matid)+"_UB_"+str(index),
symms=symms)
else:
tempori = np.copy(rotation_matrix_sst[index][0])
tempori[nan_index,:,:] = np.nan
rotation_matrix_plot = np.vstack((rotation_matrix_plot,tempori))
tempcol = np.copy(col[index][0])
tempcol[nan_index,:] = np.nan
col_plot = np.vstack((col_plot,tempcol))
sst_texture(orient_data=tempori,
col_array=tempcol,
direc=model_direc,
symmetry=symmetry_plot,
symmetry_name = symmetry_name_plot,
lattice=lattice_plot, axis=axis_text, fn="mat_"+str(matid)+"_UB_"+str(index),
symms=symms)
sst_texture(orient_data=rotation_matrix_plot,
col_array=col_plot,
direc=model_direc,
symmetry=symmetry_plot,
symmetry_name = symmetry_name_plot,
lattice=lattice_plot, axis=axis_text, fn="mat_"+str(matid)+"_all_UBs",
symms=symms)
texttstr = "\n\
### config file for LaueNeuralNetwork \n\
[CPU]\n\
n_cpu = 8\n\
\n\
[GLOBAL_DIRECTORY]\n\
prefix = \n\
## directory where all training related data and results will be saved \n\
main_directory = C:\\Users\\purushot\\Desktop\\pattern_matching\\experimental\\GUIv0\\latest_version\n\
\n\
[MATERIAL]\n\
## same material key as lauetools (see dictlauetools.py for the complete list of keys)\n\
## as of now symmetry can be cubic, hexagonal, orthorhombic, tetragonal, trigonal, monoclinic, triclinic\n\
\n\
material = In2Bi\n\
symmetry = hexagonal\n\
space_group = between 1 and 230\n\
general_diffraction_rules = true\n\
\n\
## if second phase is present, else none\n\
material1 = In_epsilon\n\
symmetry1 = tetragonal\n\
space_group1 = between 1 and 230\n\
general_diffraction_rules1 = true\n\
\n\
[DETECTOR]\n\
## path to detector calibration file (.det)\n\
detectorfile = C:\\Users\\purushot\\Desktop\\In_JSM\\calib.det\n\
## Max and Min energy to be used for generating the training dataset, as well as for calculating the matching rate\n\
emax = 21\n\
emin = 5\n\
\n\
[TRAINING]\n\
## classes_with_frequency_to_remove: HKL classes appearing fewer times than specified will be ignored in the output\n\
## desired_classes_output : can be 'all' or an integer: limits the number of output classes\n\
## max_HKL_index : can be 'auto' or an integer: maximum HKL index used to build the output classes\n\
## max_nb_grains : maximum number of grains to simulate per Laue pattern\n\
####### Material 0\n\
classes_with_frequency_to_remove = 500\n\
desired_classes_output = all\n\
max_HKL_index = 5\n\
max_nb_grains = 1\n\
####### Material 1\n\
## HKL classes appearing fewer times than specified will be ignored in the output\n\
classes_with_frequency_to_remove1 = 500\n\
desired_classes_output1 = all\n\
max_HKL_index1 = 5\n\
max_nb_grains1 = 1\n\
\n\
## Max number of simulations per number of grains\n\
## Include single crystal misorientation (1 deg) data in training\n\
## Maximum angular distance to probe (in deg)\n\
## step size in angular distribution to discretize (in deg)\n\
## batch size and epochs for training\n\
max_simulations = 1000\n\
include_small_misorientation = false\n\
misorientation_angle = 30\n\
angular_distance = 90\n\
step_size = 0.1\n\
batch_size = 50\n\
epochs = 5\n\
\n\
[PREDICTION]\n\
# model_weight_file: if none, it will select by default the latest H5 weight file, else provide a specific model\n\
# softmax_threshold_global: thresholding to limit the predicted spots search zone\n\
# mr_threshold_global: thresholding to ignore all matrices with a matching rate below the MR threshold\n\
# cap_matchrate: any UB matrix providing an MR less than this will be ignored\n\
# coeff: matching-rate threshold used when re-trying a previous UB matrix (typically the same as cap_matchrate)\n\
# coeff_overlap: coefficient limiting the overlap between spots; if exceeded, a new solution is computed\n\
# mode_spotCycle: how to cycle through predicted spots (slow or graphmode)\n\
UB_matrix_to_detect = 1\n\
\n\
matrix_tolerance = 0.9\n\
matrix_tolerance1 = 0.9\n\
\n\
material0_limit = 1\n\
material1_limit = 1\n\
\n\
model_weight_file = none\n\
softmax_threshold_global = 0.85\n\
mr_threshold_global = 0.80\n\
cap_matchrate = 0.01\n\
coeff = 0.3\n\
coeff_overlap = 0.3\n\
mode_spotCycle = slow\n\
## true for a few-crystals / preferred-texture case, otherwise time consuming; advised for single phase only\n\
use_previous = true\n\
\n\
[EXPERIMENT]\n\
experiment_directory = C:\\Users\\purushot\\Desktop\\In_JSM\\ech875_ROI01\n\
experiment_file_prefix = ech875_ROI01_\n\
image_grid_x = 51\n\
image_grid_y = 51\n\
\n\
[PEAKSEARCH]\n\
intensity_threshold = 90\n\
boxsize = 15\n\
fit_peaks_gaussian = 1\n\
FitPixelDev = 15\n\
NumberMaxofFits = 3000\n\
\n\
[STRAINCALCULATION]\n\
strain_compute = true\n\
tolerance_strain_refinement = 0.7,0.6,0.5,0.4,0.3,0.2\n\
tolerance_strain_refinement1 = 0.7,0.6,0.5,0.4,0.3,0.2\n\
free_parameters = b,c,alpha,beta,gamma\n\
\n\
[POSTPROCESS]\n\
hkls_subsets = [1,1,0],[1,0,0],[1,1,1]\n\
\n\
\n\
[CALLER]\n\
residues_threshold=0.15\n\
nb_spots_global_threshold=10\n\
option_global = v1\n\
nb_spots_consider = 100\n\
# User defined orientation matrix supplied in a file\n\
use_om_user = false\n\
path_user_OM = ""\n\
[DEVELOPMENT]\n\
# can be 1 or 2; none in the case of a single phase\n\
material_phase_always_present = 1\n\
matrix_phase_always_present = 0.5673,0.5334,-0.6264,-0.6814,0.7330,0.00604,0.4625,0.4245,0.7805;Si\n\
generate_additional_data=false\n\
write_MTEX_file = true\n\
\n\
# Laue Groups\n\
# Laue group 1 -- triclinic: '-1'\n\
# Laue group 2 -- monoclinic: '2/m'\n\
# Laue group 3 -- orthorhombic: 'mmm'\n\
# Laue group 4 -- tetragonal: '4/m'\n\
# Laue group 5 -- tetragonal: '4/mmm'\n\
# Laue group 6 -- trigonal: '-3'\n\
# Laue group 7 -- trigonal: '-3m'\n\
# Laue group 8 -- hexagonal: '6/m'\n\
# Laue group 9 -- hexagonal: '6/mmm'\n\
# Laue group 10 -- cubic: 'm3'\n\
# Laue group 11 -- cubic: 'm3m'"
class Transform(object):
def __init__(self, matrix):
self.matrix = matrix
self._imatrix = None
@property
def imatrix(self):
if self._imatrix is None:
try:
self._imatrix = np.linalg.inv(self.matrix)
except np.linalg.LinAlgError:
raise Exception("XU.math.Transform: matrix cannot be inverted"
" - seems to be singular")
return self._imatrix
def inverse(self, args, rank=1):
"""
performs the inverse transformation of a vector, matrix, or tensor of rank 4
Parameters
----------
args : list or array-like
object to transform, list or np array of shape (..., n)
(..., n, n), (..., n, n, n, n) where n is the size of the
transformation matrix.
rank : int
rank of the supplied object. allowed values are 1, 2, and 4
"""
it = Transform(self.imatrix)
return it(args, rank)
def __call__(self, args, rank=1):
"""
transforms a vector, matrix or tensor of rank 4
(e.g. elasticity tensor)
Parameters
----------
args : list or array-like
object to transform, list or np array of shape (..., n)
(..., n, n), (..., n, n, n, n) where n is the size of the
transformation matrix.
rank : int
rank of the supplied object. allowed values are 1, 2, and 4
"""
m = self.matrix
if rank == 1: # argument is a vector
# out_i = m_ij * args_j
out = np.einsum('ij,...j', m, args)
elif rank == 2: # argument is a matrix
# out_ij = m_ik * m_jl * args_kl
out = np.einsum('ik, jl,...kl', m, m, args)
elif rank == 4:
# cp_ijkl = m_in * m_jo * m_kp * m_lq * args_nopq
out = np.einsum('in, jo, kp, lq,...nopq', m, m, m, m, args)
else:
raise ValueError("rank must be 1, 2, or 4")
return out
def __str__(self):
ostr = "Transformation matrix:\n"
ostr += str(self.matrix)
return ostr
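As a sanity check, the einsum contractions used by Transform can be exercised with a simple rotation (the matrix and tensor values below are illustrative only):

```python
import numpy as np

# 90-degree rotation about z, used as the transformation matrix
m = np.array([[0., -1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])

# rank 1: out_i = m_ij * v_j (rotates the x axis onto the y axis)
v = np.array([1., 0., 0.])
out1 = np.einsum('ij,...j', m, v)

# rank 2: out_ij = m_ik * m_jl * t_kl, equivalent to m @ t @ m.T
t = np.diag([1., 2., 3.])
out2 = np.einsum('ik,jl,...kl', m, m, t)
```

For rank 2 the contraction conjugates the tensor by the rotation, so the first two diagonal entries swap here.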
def VecCross(v1, v2, out=None):
"""
Calculate the vector cross product.
Parameters
----------
v1, v2 : list or array-like
input vector(s), either one vector or an array of vectors with shape
(n, 3)
out : list or array-like, optional
output vector
Returns
-------
ndarray
cross product either of shape (3, ) or (n, 3)
"""
if isinstance(v1, np.ndarray):
if len(v1.shape) >= 2 or len(v2.shape) >= 2:
return np.cross(v1, v2)
if len(v1) != 3 or len(v2) != 3:
raise ValueError("Vectors must be of size 3! (len(v1)=%d len(v2)=%d)"
% (len(v1), len(v2)))
if out is None:
out = np.empty(3)
out[0] = v1[1] * v2[2] - v1[2] * v2[1]
out[1] = v1[2] * v2[0] - v1[0] * v2[2]
out[2] = v1[0] * v2[1] - v1[1] * v2[0]
return out
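A quick illustration of the component formula used above; `vec_cross3` is a hypothetical standalone mirror of the single-vector path of VecCross, and it agrees with `np.cross`:

```python
import numpy as np

def vec_cross3(v1, v2):
    # explicit 3-component cross product, mirroring the VecCross fallback path
    return np.array([v1[1] * v2[2] - v1[2] * v2[1],
                     v1[2] * v2[0] - v1[0] * v2[2],
                     v1[0] * v2[1] - v1[1] * v2[0]])

a = np.array([1., 0., 0.])
b = np.array([0., 1., 0.])
print(vec_cross3(a, b))  # [0. 0. 1.], same as np.cross(a, b)
```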
def get_possible_sgrp_suf(sgrp_nr):
"""
determine possible space group suffix. Multiple suffixes might be possible
for one space group due to different origin choice, unique axis, or choice
of the unit cell shape.
Parameters
----------
sgrp_nr : int
space group number
Returns
-------
str or list
either an empty string or a list of possible valid suffix strings
"""
sgrp_suf = ''
if sgrp_nr in [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]:
sgrp_suf = [':b', ':c']
elif sgrp_nr in [48, 50, 59, 68, 70, 85, 86, 88, 125, 126,
129, 130, 133, 134, 137, 138, 141, 142,
201, 203, 222, 224, 227, 228]:
sgrp_suf = [':1', ':2']
elif sgrp_nr in [146, 148, 155, 160, 161, 166, 167]:
sgrp_suf = [':H', ':R']
return sgrp_suf
def get_default_sgrp_suf(sgrp_nr):
"""
determine default space group suffix
"""
possibilities = get_possible_sgrp_suf(sgrp_nr)
if possibilities:
return possibilities[0]
else:
return ''
class SGLattice(object):
"""
lattice object created from the space group number and corresponding unit
cell parameters.
"""
def __init__(self, sgrp, *args):
"""
initialize class with space group number and atom list
Parameters
----------
sgrp : int or str
Space group number
*args : float
space group parameters. depending on the space group number these
are 1 (cubic) to 6 (triclinic) parameters.
cubic : a (lattice parameter).
hexagonal : a, c.
trigonal : a, c.
tetragonal : a, c.
orthorhombic : a, b, c.
monoclinic : a, b, c, beta (in degree).
triclinic : a, b, c, alpha, beta, gamma (in degree).
"""
self.space_group = str(sgrp)
self.space_group_nr = int(self.space_group.split(':')[0])
try:
self.space_group_suf = ':' + self.space_group.split(':')[1]
except IndexError:
self.space_group_suf = get_default_sgrp_suf(self.space_group_nr)
if self.space_group_suf != '':
self.space_group = str(self.space_group_nr) + self.space_group_suf
self.name = sgrp_name[str(self.space_group_nr)] + self.space_group_suf
self.crystal_system, nargs = sgrp_sym[self.space_group_nr]
self.crystal_system += self.space_group_suf
if len(args) != nargs:
raise ValueError('XU: number of parameters (%d) does not match the'
' crystal symmetry (%s:%d)' % (len(args), self.crystal_system, nargs))
self.free_parameters = OrderedDict()
for a, par in zip(args, sgrp_params[self.crystal_system][0]):
self.free_parameters[par] = a
self._parameters = OrderedDict()
for i, p in enumerate(('a', 'b', 'c', 'alpha', 'beta', 'gamma')):
key = sgrp_params[self.crystal_system][1][i]
if isinstance(key, str):
self._parameters[p] = self.free_parameters[key]
else:
self._parameters[p] = key
# define lattice vectors
self._ai = np.zeros((3, 3))
self._bi = np.empty((3, 3))
a, b, c, alpha, beta, gamma = self._parameters.values()
ra = radians(alpha)
self._paramhelp = [cos(ra), cos(radians(beta)),
cos(radians(gamma)), sin(ra), 0]
self._setlat()
# save general Wyckoff position
self._gplabel = sorted(wp[self.space_group],
key=lambda s: int(s[:-1]))[-1]
self._gp = wp[self.space_group][self._gplabel]
# symmetry operations and reflection conditions placeholder
self._hklmat = []
self._symops = []
self._hklcond = []
self._hklcond_wp = []
self._iscentrosymmetric = None
@property
def symops(self):
"""
return the set of symmetry operations from the general Wyckoff
position of the space group.
"""
if self._symops == []:
for p in self._gp[1]:
self._symops.append(SymOp.from_xyz(p))
return self._symops
@property
def _hklsym(self):
if self._hklmat == []:
for s in self.symops:
self._hklmat.append(np.round(self._qtransform.imatrix @
self._transform.matrix @ s.D @
self._transform.imatrix @
self._qtransform.matrix,
DIGITS))
return self._hklmat
def _setlat(self):
a, b, c, alpha, beta, gamma = self._parameters.values()
ca, cb, cg, sa, vh = self._paramhelp
vh = sqrt(1 - ca**2-cb**2-cg**2 + 2*ca*cb*cg)
self._paramhelp[4] = vh
self._ai[0, 0] = a * vh / sa
self._ai[0, 1] = a * (cg-cb*ca) / sa
self._ai[0, 2] = a * cb
self._ai[1, 1] = b * sa
self._ai[1, 2] = b * ca
self._ai[2, 2] = c
self._transform = Transform(self._ai.T)
self._setb()
def _setb(self):
V = self.UnitCellVolume()
p = 2. * np.pi / V
VecCross(p*self._ai[1, :], self._ai[2, :], out=self._bi[0, :])
VecCross(p*self._ai[2, :], self._ai[0, :], out=self._bi[1, :])
VecCross(p*self._ai[0, :], self._ai[1, :], out=self._bi[2, :])
self._qtransform = Transform(self._bi.T)
def _set_params_from_sym(self):
for i, p in enumerate(('a', 'b', 'c', 'alpha', 'beta', 'gamma')):
key = sgrp_params[self.crystal_system][1][i]
if isinstance(key, str):
if p not in self.free_parameters:
self._parameters[p] = self.free_parameters[key]
@property
def a(self):
return self._parameters['a']
@a.setter
def a(self, value):
if 'a' not in self.free_parameters:
raise RuntimeError("a cannot be set; it is not a free parameter!")
self._parameters['a'] = value
self.free_parameters['a'] = value
self._set_params_from_sym()
self._setlat()
@property
def b(self):
return self._parameters['b']
@b.setter
def b(self, value):
if 'b' not in self.free_parameters:
raise RuntimeError("b cannot be set; it is not a free parameter!")
self._parameters['b'] = value
self.free_parameters['b'] = value
self._set_params_from_sym()
self._setlat()
@property
def c(self):
return self._parameters['c']
@c.setter
def c(self, value):
if 'c' not in self.free_parameters:
raise RuntimeError("c cannot be set; it is not a free parameter!")
self._parameters['c'] = value
self.free_parameters['c'] = value
self._set_params_from_sym()
self._setlat()
@property
def alpha(self):
return self._parameters['alpha']
@alpha.setter
def alpha(self, value):
if 'alpha' not in self.free_parameters:
raise RuntimeError("alpha can not be set for this space group!")
self._parameters['alpha'] = value
self.free_parameters['alpha'] = value
self._set_params_from_sym()
ra = radians(value)
self._paramhelp[0] = cos(ra)
self._paramhelp[3] = sin(ra)
self._setlat()
@property
def beta(self):
return self._parameters['beta']
@beta.setter
def beta(self, value):
if 'beta' not in self.free_parameters:
raise RuntimeError("beta can not be set for this space group!")
self._parameters['beta'] = value
self.free_parameters['beta'] = value
self._set_params_from_sym()
self._paramhelp[1] = cos(radians(value))
self._setlat()
@property
def gamma(self):
return self._parameters['gamma']
@gamma.setter
def gamma(self, value):
if 'gamma' not in self.free_parameters:
raise RuntimeError("gamma can not be set for this space group!")
self._parameters['gamma'] = value
self.free_parameters['gamma'] = value
self._set_params_from_sym()
self._paramhelp[2] = cos(radians(value))
self._setlat()
def UnitCellVolume(self):
"""
function to calculate the unit cell volume of a lattice (angstrom^3)
"""
a, b, c, alpha, beta, gamma = self._parameters.values()
return a * b * c * self._paramhelp[4]
@property
def iscentrosymmetric(self):
"""
returns a boolean to determine if the lattice has centrosymmetry.
"""
if self._iscentrosymmetric is None:
self._iscentrosymmetric = False
for s in self.symops:
if np.all(-np.identity(3) == s.D):
self._iscentrosymmetric = True
break
return self._iscentrosymmetric
def isequivalent(self, hkl1, hkl2):
"""
determine whether hkl1 and hkl2 are two crystallographically equivalent
sets of Miller indices. Note that this function considers the effect
of non-centrosymmetry!
Parameters
----------
hkl1, hkl2 : list
Miller indices to be checked for equivalence
Returns
-------
bool
"""
return tuple(hkl2) in self.equivalent_hkls(hkl1)
def equivalent_hkls(self, hkl):
"""
returns the set of equivalent hkl peaks depending on the crystal system
"""
suf = self.space_group_suf
nr = self.space_group_nr
if suf == get_default_sgrp_suf(nr):
ehkl = set(eqhkl_default[nr](hkl[0], hkl[1], hkl[2]))
elif suf in get_possible_sgrp_suf(nr):
ehkl = set(eqhkl_custom[nr](hkl[0], hkl[1], hkl[2]))
else: # fallback calculation with symmetry operations
ehkl = np.unique(np.einsum('...ij,j', self._hklsym, hkl),
axis=0)
ehkl = set(tuple(e) for e in ehkl)
return ehkl
def hkl_allowed(self, hkl, returnequivalents=False):
"""
check if Bragg reflection with Miller indices hkl can exist according
to the reflection conditions. If no reflection conditions are available
this function returns True for all hkl values!
Parameters
----------
hkl : tuple or list
Miller indices of the reflection to check
returnequivalents : bool, optional
If True all the equivalent Miller indices of hkl are returned in a
set as second return argument.
Returns
-------
allowed : bool
True if the reflection can have a non-zero structure factor, False otherwise
equivalents : set, optional
set of equivalent Miller indices if returnequivalents is True
"""
# generate all equivalent hkl values which also need to be checked:
hkls = self.equivalent_hkls(hkl)
def build_return(allowed, requi=returnequivalents):
if requi:
return allowed, hkls
else:
return allowed
# load reflection conditions if needed
if self._gp[2] == 'n/a':
return build_return(True)
if self._hklcond == [] and self._gp[2] is not None:
self._hklcond = hklcond_group.findall(self._gp[2])
ret = testhklcond(hkls, self._hklcond)
return build_return(ret)
def check2n(h):
return 1 if h % 2 == 0 else 0
def check2np1(h):
return 1 if (h - 1) % 2 == 0 else 0
def check3n(h):
return 1 if h % 3 == 0 else 0
def check3np1(h):
return 1 if (h - 1) % 3 == 0 else 0
def check3np2(h):
return 1 if (h - 2) % 3 == 0 else 0
def check4n(h):
return 1 if h % 4 == 0 else 0
def check4np2(h):
return 1 if (h - 2) % 4 == 0 else 0
def check6n(h):
return 1 if h % 6 == 0 else 0
def check8n(h):
return 1 if h % 8 == 0 else 0
def check8np1(h):
return 1 if (h - 1) % 8 == 0 else 0
def check8nm1(h):
return 1 if (h + 1) % 8 == 0 else 0
def check8np3(h):
return 1 if (h - 3) % 8 == 0 else 0
def check8nm3(h):
return 1 if (h + 3) % 8 == 0 else 0
def check8np4(h):
return 1 if (h - 4) % 8 == 0 else 0
def check8np5(h):
return 1 if (h - 5) % 8 == 0 else 0
def check8np7(h):
return 1 if (h - 7) % 8 == 0 else 0
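All of the checkXn helpers above test a congruence h ≡ r (mod n); a single parameterized helper captures the pattern (`check_mod` is hypothetical and not part of this module):

```python
def check_mod(h, n, r=0):
    # 1 if h is congruent to r modulo n, else 0
    # e.g. check4np2(h) == check_mod(h, 4, 2); check8nm3(h) == check_mod(h, 8, -3)
    return 1 if (h - r) % n == 0 else 0

print(check_mod(6, 4, 2), check_mod(5, 8, -3), check_mod(7, 4, 2))  # 1 1 0
```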
def testhklcond(hkls, condition, verbose=False):
"""
test if a Bragg peak is allowed according to reflection conditions
Parameters
----------
hkls : iterable
set of equivalent Miller indices of the peak to test
condition : list of tuples
reflection conditions as (hkl-pattern, condition-string) pairs
verbose : bool, optional
print debugging output
Returns
-------
bool
True if the peak is allowed, False otherwise
"""
# test the general reflection conditions;
# if they are violated the peak is forbidden
pattern_applied = 0
condition_met = 2
for hkl in hkls:
for i in condition:
hklpattern = i[0]
cond = i[1]
if hklpattern_applies(hkl, hklpattern):
pattern_applied = 1
if verbose:
print(hkl, hklpattern, cond)
r = reflection_condition_met(hkl, cond)
if r == 1:
condition_met = 1
else:
condition_met = 0
if verbose:
print(condition_met, pattern_applied)
if condition_met == 0:
break
if condition_met == 0:
break
if (condition_met == 1 or pattern_applied == 0):
return True
else:
if pattern_applied == 1:
return False
else:
return True
def hklpattern_applies(hkl, condhkl):
"""
helper function to determine if Miller indices fit a certain pattern
Parameters
----------
hkl : array-like
three integer Miller indices
condhkl : str
condition string similar to 'hkl', 'hh0', or '0k0'
Returns
-------
int
1 if hkl fulfills the pattern, 0 otherwise
"""
n=0
if (condhkl[n] == '0' and hkl[0] != 0):
return 0
n = n + 1
if (condhkl[n] == '-'):
n = n + 1
if (condhkl[n] == 'h' and hkl[1] != -hkl[0]):
return 0
elif (condhkl[n] == '0' and hkl[1] != 0):
return 0
elif (condhkl[n] == 'h' and hkl[1] != hkl[0]):
return 0
if (condhkl[len(condhkl)-1] == '0' and hkl[2] != 0):
return 0
return 1
def strcmp(expa, expb):
return int(expa == expb)
def reflection_condition_met(hkl, cond):
"""
helper function to determine allowed Miller indices
Parameters
----------
hkl : list or tuple
Miller indices of the reflection
cond : str
condition string similar to 'h+k=2n, h+l,k+l=2n'
Returns
-------
int
1 if the condition is met, 0 otherwise (-1 if the right hand side is not implemented)
"""
fulfilled = 1
condi = cond.split("=")
if len(condi) > 2:
condi = cond.split(", ")
if len(condi) >2:
fulfilled = 0
print("right hand expression error")
for kun in condi:
condi1 = kun.split("=")
rexpr = condi1[1]
lexpr_global = condi1[0]
if strcmp(rexpr, "2n"):
checkfunc = check2n
elif strcmp(rexpr, "2n+1"):
checkfunc = check2np1
elif strcmp(rexpr, "3n"):
checkfunc = check3n
elif strcmp(rexpr, "3n+1"):
checkfunc = check3np1
elif strcmp(rexpr, "3n+2"):
checkfunc = check3np2
elif strcmp(rexpr, "4n"):
checkfunc = check4n
elif strcmp(rexpr, "4n+2"):
checkfunc = check4np2
elif strcmp(rexpr, "6n"):
checkfunc = check6n
elif strcmp(rexpr, "8n"):
checkfunc = check8n
elif strcmp(rexpr, "8n+1"):
checkfunc = check8np1
elif strcmp(rexpr, "8n-1"):
checkfunc = check8nm1
elif strcmp(rexpr, "8n+3"):
checkfunc = check8np3
elif strcmp(rexpr, "8n-3"):
checkfunc = check8nm3
elif strcmp(rexpr, "8n+4"):
checkfunc = check8np4
elif strcmp(rexpr, "8n+5"):
checkfunc = check8np5
elif strcmp(rexpr, "8n+7"):
checkfunc = check8np7
else:
print("Right hand side of reflection condition (%s) not implemented" %(rexpr))
return -1
for lexpr in lexpr_global.split(','):
if strcmp(lexpr, "h"):
if (checkfunc(hkl[0]) == 0):
fulfilled = 0
elif strcmp(lexpr, "k"):
if (checkfunc(hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "l"):
if (checkfunc(hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+k"):
if (checkfunc(hkl[0] + hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h-k"):
if (checkfunc(hkl[0] - hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "-h+k"):
if (checkfunc(-hkl[0] + hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+l"):
if (checkfunc(hkl[0] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "k+l"):
if (checkfunc(hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+k+l"):
if (checkfunc(hkl[0] + hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "-h+k+l"):
if (checkfunc(-hkl[0] + hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "2h+l"):
if (checkfunc(2*hkl[0] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "2k+l"):
if (checkfunc(2*hkl[1] + hkl[2]) == 0):
fulfilled = 0
else:
rexpr = condi[1]
lexpr_global = condi[0]
if strcmp(rexpr, "2n"):
checkfunc = check2n
elif strcmp(rexpr, "2n+1"):
checkfunc = check2np1
elif strcmp(rexpr, "3n"):
checkfunc = check3n
elif strcmp(rexpr, "3n+1"):
checkfunc = check3np1
elif strcmp(rexpr, "3n+2"):
checkfunc = check3np2
elif strcmp(rexpr, "4n"):
checkfunc = check4n
elif strcmp(rexpr, "4n+2"):
checkfunc = check4np2
elif strcmp(rexpr, "6n"):
checkfunc = check6n
elif strcmp(rexpr, "8n"):
checkfunc = check8n
elif strcmp(rexpr, "8n+1"):
checkfunc = check8np1
elif strcmp(rexpr, "8n-1"):
checkfunc = check8nm1
elif strcmp(rexpr, "8n+3"):
checkfunc = check8np3
elif strcmp(rexpr, "8n-3"):
checkfunc = check8nm3
elif strcmp(rexpr, "8n+4"):
checkfunc = check8np4
elif strcmp(rexpr, "8n+5"):
checkfunc = check8np5
elif strcmp(rexpr, "8n+7"):
checkfunc = check8np7
else:
print("Right hand side of reflection condition (%s) not implemented" %(rexpr))
return -1
for lexpr in lexpr_global.split(','):
if strcmp(lexpr, "h"):
if (checkfunc(hkl[0]) == 0):
fulfilled = 0
elif strcmp(lexpr, "k"):
if (checkfunc(hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "l"):
if (checkfunc(hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+k"):
if (checkfunc(hkl[0] + hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h-k"):
if (checkfunc(hkl[0] - hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "-h+k"):
if (checkfunc(-hkl[0] + hkl[1]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+l"):
if (checkfunc(hkl[0] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "k+l"):
if (checkfunc(hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "h+k+l"):
if (checkfunc(hkl[0] + hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "-h+k+l"):
if (checkfunc(-hkl[0] + hkl[1] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "2h+l"):
if (checkfunc(2*hkl[0] + hkl[2]) == 0):
fulfilled = 0
elif strcmp(lexpr, "2k+l"):
if (checkfunc(2*hkl[1] + hkl[2]) == 0):
fulfilled = 0
if (fulfilled == 1):
return 1
else:
return 0
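The essence of one reflection-condition clause, e.g. 'h+k=2n', is to evaluate the left-hand side with the Miller indices and test divisibility. A minimal standalone sketch (`clause_met` is hypothetical and only handles sums/differences of h, k, l, not forms like '2h+l'):

```python
def clause_met(hkl, clause):
    # evaluate a clause of the form 'lhs=Mn+R', e.g. 'h+k=2n' or 'h+k+l=4n+2'
    lhs, rhs = clause.split('=')
    h, k, l = hkl
    total = eval(lhs, {'h': h, 'k': k, 'l': l})  # lhs like 'h+k' or '-h+k+l'
    base, _, rem = rhs.partition('n')
    return (total - (int(rem) if rem else 0)) % int(base) == 0

print(clause_met((1, 1, 0), 'h+k=2n'))  # True
print(clause_met((1, 0, 0), 'h+k=2n'))  # False
```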
class SymOp(object):
"""
Class describing a symmetry operation in a crystal. The symmetry operation
is characterized by a 3x3 transformation matrix as well as a 3-vector
describing a translation. For magnetic symmetry operations the time
reversal symmetry can also be specified (not used in xrayutilities)
"""
def __init__(self, D, t, m=1):
"""
Initialize the symmetry operation
Parameters
----------
D : array-like
transformation matrix (3x3)
t : array-like
translation vector (3)
m : int, optional
indicates time reversal in magnetic groups. +1 (default, no time
reversal) or -1
"""
self._W = np.zeros((4, 4))
self._W[:3, :3] = np.asarray(D)
self._W[:3, 3] = np.asarray(t)
self._W[3, 3] = 1
self._m = m
@classmethod
def from_xyz(cls, xyz):
"""
create a SymOp from the xyz notation typically used in CIF files.
Parameters
----------
xyz : str
string describing the symmetry operation (e.g. '-y, -x, z')
"""
D = np.zeros((3, 3))
t = np.array(eval(xyz, {'x': 0, 'y': 0, 'z': 0})[:3])
m = 1
for i, expr in enumerate(xyz.strip('()').split(',')):
if i == 3: # time reversal property
m = int(expr)
continue
if 'x' in expr:
D[i, 0] = -1 if '-x' in expr else 1
if 'y' in expr:
D[i, 1] = -1 if '-y' in expr else 1
if 'z' in expr:
D[i, 2] = -1 if '-z' in expr else 1
return SymOp(D, t, m)
def xyz(self, showtimerev=False):
"""
return the symmetry operation in xyz notation
"""
ret = ''
t = self.t
for i in range(3):
expr = ''
if abs(self._W[i, 0]) == 1:
expr += '+x' if self._W[i, 0] == 1 else '-x'
if abs(self._W[i, 1]) == 1:
expr += '+y' if self._W[i, 1] == 1 else '-y'
if abs(self._W[i, 2]) == 1:
expr += '+z' if self._W[i, 2] == 1 else '-z'
if t[i] != 0:
expr += '+' if t[i] > 0 else ''
expr += str(fractions.Fraction(t[i]).limit_denominator(100))
expr = expr.strip('+')
ret += expr + ', '
if showtimerev:
ret += '{:+d}'.format(self._m)
return ret.strip(', ')
@property
def D(self):
"""transformation matrix of the symmetry operation"""
return self._W[:3, :3]
@property
def t(self):
"""translation vector of the symmetry operation"""
return self._W[:3, 3]
def __eq__(self, other):
if not isinstance(other, SymOp):
return NotImplemented
return self._m == other._m and np.all(self._W == other._W)
@staticmethod
def foldback(v):
return v - np.round(v, DIGITS) // 1
def apply_rotation(self, vec):
return self.D @ vec
def apply(self, vec, foldback=True):
lv = np.asarray(list(vec) + [1, ])
result = (self._W @ lv)[:3]
if foldback:
return self.foldback(result)
return result
def apply_axial(self, vec):
return self._m * np.linalg.det(self.D) * self.D @ vec
def combine(self, other):
if not isinstance(other, SymOp):
return NotImplemented
W = self._W @ other._W
return SymOp(W[:3, :3], self.foldback(W[:3, 3]), self._m*other._m)
def __str__(self):
return '({})'.format(self.xyz(showtimerev=True))
def __repr__(self):
return self.__str__()
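The parsing performed by SymOp.from_xyz can be sketched for the rotation part alone (`xyz_to_matrix` is a hypothetical helper; translations and time reversal are ignored here):

```python
import numpy as np

def xyz_to_matrix(xyz):
    # map an xyz expression such as '-y, -x, z' to its 3x3 matrix
    D = np.zeros((3, 3))
    for i, expr in enumerate(xyz.split(',')):
        for j, ax in enumerate('xyz'):
            if ax in expr:
                D[i, j] = -1 if '-' + ax in expr else 1
    return D

print(xyz_to_matrix('-y, -x, z'))
```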
def _round_indices(indices, max_index=12):
"""Round a set of index triplets (Miller) or quartets (Miller-Bravais)
to the *closest* smallest integers.
Adapted from MTEX's Miller.round function.
Parameters
----------
indices : list, tuple, or np.ndarray
Set of index triplet(s) or quartet(s) to round.
max_index : int, optional
Maximum integer index to round to, by default 12.
Returns
-------
new_indices : np.ndarray
Integer array of rounded set of index triplet(s) or quartet(s).
"""
# Allow list and tuple input (and don't overwrite `indices`)
idx = np.asarray(indices)
# Flatten and remove redundant third index if Miller-Bravais
n_idx = idx.shape[-1] # 3 or 4
idx_flat = np.reshape(idx, (-1, n_idx))
if n_idx == 4:
idx_flat = idx_flat[..., [0, 1, 3]]
# Get number of sets, max. index per set, and all possible integer
# multipliers between 1 and `max_index`
n_sets = idx_flat.size // 3
max_per_set = np.max(np.abs(idx_flat), axis=-1)
multipliers = np.arange(1, max_index + 1)
# Divide by highest index, repeat array `max_index` number of times,
# and multiply with all multipliers
idx_scaled = (
np.broadcast_to(idx_flat / max_per_set[..., np.newaxis], (max_index, n_sets, 3))
* multipliers[..., np.newaxis, np.newaxis]
)
# Find the most suitable multiplier per set, which gives the
# smallest error between the initial set and the scaled and rounded
# set
error = 1e-7 * np.round(
1e7
* np.sum((idx_scaled - np.round(idx_scaled)) ** 2, axis=-1)
/ np.sum(idx_scaled ** 2, axis=-1)
)
idx_min_error = np.argmin(error, axis=0)
multiplier = (idx_min_error + 1) / max_per_set
# Reshape `multiplier` to match indices shape
multiplier = multiplier.reshape(idx.shape[:-1])[..., np.newaxis]
# Finally, multiply each set with their most suitable multiplier,
# and round
new_indices = np.round(multiplier * idx).astype(int)
return new_indices
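The multiplier search in _round_indices can be illustrated for a single triplet with a brute-force variant (`round_miller` is a simplified, hypothetical stand-in):

```python
import numpy as np

def round_miller(idx, max_index=12):
    # scale the triplet by each integer multiplier 1..max_index and keep
    # the one whose rounded version deviates least from the scaled values
    idx = np.asarray(idx, dtype=float)
    scaled = idx / np.max(np.abs(idx))
    best = min(range(1, max_index + 1),
               key=lambda m: np.sum((m * scaled - np.round(m * scaled)) ** 2))
    return np.round(best * scaled).astype(int)

print(round_miller([0.34, 0.34, 0.0]))  # [1 1 0]
```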
# =============================================================================
# PYMICRO FUNCTION IMPORTS
# =============================================================================
def move_rotation_to_FZ(g, symmetry_operators=None):
"""Compute the rotation matrix in the fundamental zone for a given set
of symmetry operators.
:param g: a 3x3 matrix representing the rotation.
:param symmetry_operators: list of 3x3 symmetry operator matrices.
:return: a new 3x3 matrix for the rotation in the fundamental zone.
"""
omegas = [] # list to store all the rotation angles
syms = symmetry_operators
for sym in syms:
# apply the symmetry operator
om = np.dot(sym, g)
cw = np.clip(0.5 * (om.trace() - 1), -1., 1.)  # guard against rounding outside [-1, 1]
omega = np.arccos(cw)
omegas.append(omega)
index = np.argmin(omegas)
return np.dot(syms[index], g)
def misorientation_axis_from_delta(delta):
"""Compute the misorientation axis from the misorientation matrix.
:param delta: The 3x3 misorientation matrix.
:returns: the misorientation axis (normalised vector).
"""
n = np.array([delta[1, 2] - delta[2, 1], delta[2, 0] -
delta[0, 2], delta[0, 1] - delta[1, 0]])
n /= np.sqrt((delta[1, 2] - delta[2, 1]) ** 2 +
(delta[2, 0] - delta[0, 2]) ** 2 +
(delta[0, 1] - delta[1, 0]) ** 2)
return n
def misorientation_angle_from_delta(delta):
"""Compute the misorientation angle from the misorientation matrix.
Compute the angle associated with this misorientation matrix :math:`\\Delta g`.
It is defined as :math:`\\omega = \\arccos((\\text{trace}(\\Delta g)-1)/2)`.
To avoid float rounding errors, the argument is clipped to 1.0 if it
exceeds 1 by less than ten times the 32-bit floating point precision.
.. note::
This does not account for the crystal symmetries. If you want to
find the disorientation between two orientations, use the
:py:meth:`~pymicro.crystal.microstructure.Orientation.disorientation`
method.
:param delta: The 3x3 misorientation matrix.
:returns float: the misorientation angle in radians.
"""
cw = 0.5 * (delta.trace() - 1)
if cw > 1. and cw - 1. < 10 * np.finfo('float32').eps:
cw = 1.
omega = np.arccos(cw)
return omega
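For a pure rotation about z the two helpers above recover the angle directly, and the axis comes out along z (with the sign fixed by the antisymmetric-part convention used here: -z for a matrix written with -sin above the diagonal). A standalone sketch replicating both helpers:

```python
import numpy as np

def mis_axis(delta):
    """Misorientation axis from the antisymmetric part of delta."""
    n = np.array([delta[1, 2] - delta[2, 1],
                  delta[2, 0] - delta[0, 2],
                  delta[0, 1] - delta[1, 0]], dtype=float)
    return n / np.linalg.norm(n)

def mis_angle(delta):
    """Misorientation angle, guarding against float rounding as above."""
    cw = min(0.5 * (delta.trace() - 1), 1.0)
    return np.arccos(cw)

t = np.radians(30)
delta = np.array([[np.cos(t), -np.sin(t), 0.],
                  [np.sin(t),  np.cos(t), 0.],
                  [0., 0., 1.]])
print(np.degrees(mis_angle(delta)))   # ~30 degrees
print(mis_axis(delta))                # axis along -z with this convention
```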
def disorientation(orientation_matrix, orientation_matrix1, crystal_structure=None):
"""Compute the disorientation another crystal orientation.
Considering all the possible crystal symmetries, the disorientation
is defined as the combination of the minimum misorientation angle
and the misorientation axis lying in the fundamental zone, which
can be used to bring the two lattices into coincidence.
.. note::
Both orientations are supposed to have the same symmetry. This is not
necessarily the case in multi-phase materials.
:param orientation: an instance of
:py:class:`~pymicro.crystal.microstructure.Orientation` class
describing the other crystal orientation from which to compute the
angle.
:param crystal_structure: an instance of the `Symmetry` class
describing the crystal symmetry, triclinic (no symmetry) by
default.
:returns tuple: the misorientation angle in radians, the axis as a
numpy vector (crystal coordinates), the axis as a numpy vector
(sample coordinates).
"""
the_angle = np.pi
symmetries = crystal_structure.symmetry_operators()
(gA, gB) = (orientation_matrix, orientation_matrix1) # nicknames
for (g1, g2) in [(gA, gB), (gB, gA)]:
for j in range(symmetries.shape[0]):
sym_j = symmetries[j]
oj = np.dot(sym_j, g1) # the crystal symmetry operator is left applied
for i in range(symmetries.shape[0]):
sym_i = symmetries[i]
oi = np.dot(sym_i, g2)
delta = np.dot(oi, oj.T)
mis_angle = misorientation_angle_from_delta(delta)
if mis_angle < the_angle:
# now compute the misorientation axis, should check if it lies in the fundamental zone
mis_axis = misorientation_axis_from_delta(delta)
the_angle = mis_angle
the_axis = mis_axis
the_axis_xyz = np.dot(oi.T, the_axis)
return the_angle, the_axis, the_axis_xyz
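With a trivial symmetry set (identity only, i.e. triclinic) the double loop above reduces to comparing the two orientations directly. This standalone sketch mirrors only the minimum-angle search (hypothetical function name, axis bookkeeping omitted):

```python
import numpy as np

def disorientation_sketch(gA, gB, symmetries):
    """Minimum misorientation angle over all symmetry pairings (radians)."""
    best = np.pi
    for g1, g2 in [(gA, gB), (gB, gA)]:
        for sj in symmetries:
            oj = np.dot(sj, g1)          # symmetry operator left-applied
            for si in symmetries:
                delta = np.dot(np.dot(si, g2), oj.T)
                cw = np.clip(0.5 * (delta.trace() - 1), -1.0, 1.0)
                best = min(best, np.arccos(cw))
    return best

t = np.radians(20)
g = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0., 0., 1.]])
angle = disorientation_sketch(np.eye(3), g, [np.eye(3)])
print(np.degrees(angle))   # ~20 degrees
```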
# =============================================================================
# Notebook functions
# =============================================================================
def generate_dataset(material_="Cu", material1_="Cu", ang_maxx=18.,step=0.1, mode=0,
nb_grains=1, nb_grains1=1, grains_nb_simulate=100, data_realism = False,
detectorparameters=None, pixelsize=None, type_="training",
var0 = 0, dim1=2048, dim2=2048, removeharmonics=1, save_directory="",
write_to_console=None, emin=5, emax=22, modelp = "random",
misorientation_angle = None, general_diff_rules = False,
crystal = None, crystal1 = None, include_scm=False,
matrix_phase_always_present=None):
"""
works for all symmetries now.
"""
from multiprocessing import Process, Queue, cpu_count
ncpu = cpu_count()
## make sure directory exists
save_directory_ = save_directory+"//"+type_
if not os.path.exists(save_directory_):
os.makedirs(save_directory_)
try:
with open(save_directory+"//classhkl_data_"+material_+".pickle", "rb") as input_file:
classhkl, _, _, n, _, \
hkl_all_class, _, lattice_material, symmetry = cPickle.load(input_file)
max_millerindex = int(n)
max_millerindex1 = int(n)
if material_ != material1_:
with open(save_directory+"//classhkl_data_"+material1_+".pickle", "rb") as input_file:
classhkl1, _, _, n1, _, \
hkl_all_class1, _, lattice_material1, symmetry1 = cPickle.load(input_file)
max_millerindex1 = int(n1)
except Exception:
write_to_console("Class HKL library data not found, please run it first")
return None
if var0==1:
codebars, angbins = get_material_data(material_ = material_, ang_maxx = ang_maxx, step = step,
hkl_ref=n, classhkl=classhkl)
loc = np.array([ij for ij in range(len(classhkl))])
write_to_console("Verifying if two different HKL class have same angular distribution (can be very time consuming depending on the symmetry)")
index = []
list_appended = []
count_cbs = 0
for i, j in enumerate(codebars):
for k, l in enumerate(codebars):
# if i in list_appended and k in list_appended:
# continue
if i != k and np.all(j == l):
index.append((i,k))
string0 = "HKL's "+ str(classhkl[i])+" and "+str(classhkl[k])+" have exactly the same angular distribution."
write_to_console(string0)
list_appended.append(i)
list_appended.append(k)
count_cbs += 1
if len(index) == 0:
write_to_console("Great! No two HKL class have same angular distribution")
#np.savez_compressed(save_directory_+'//grain_init.npz', codebars, loc)
else:
write_to_console("Some HKL classes have similar angular distributions; this will likely reduce the accuracy of the neural network. Verify that the symmetry matrix and other parameters are properly configured. This check only concerns the dictionary; keep an eye on the dataset being generated for training.")
write_to_console("This is likely the result of the symmetry operations available in a user_defined space group; it shouldn't affect the general accuracy of the model.")
np.savez_compressed(save_directory+'//conflict_angular_distribution_debug.npz', codebars, index)
np.savez_compressed(save_directory+'//grain_classhkl_angbin.npz', classhkl, angbins)
if material_ != material1_:
codebars, angbins = get_material_data(material_ = material1_, ang_maxx = ang_maxx, step = step,
hkl_ref=n1, classhkl=classhkl1)
ind_offset = loc[-1] + 1
loc = np.array([ind_offset + ij for ij in range(len(classhkl1))])
write_to_console("Verifying if two different HKL class have same angular distribution (can be very time consuming depending on the symmetry)")
index = []
list_appended = []
count_cbs = 0
for i, j in enumerate(codebars):
for k, l in enumerate(codebars):
# if i in list_appended and k in list_appended:
# continue
if i != k and np.all(j == l):
index.append((i,k))
string0 = "HKL's "+ str(classhkl1[i])+" and "+str(classhkl1[k])+" have exactly the same angular distribution."
write_to_console(string0)
list_appended.append(i)
list_appended.append(k)
count_cbs += 1
if len(index) == 0:
write_to_console("Great! No two HKL class have same angular distribution")
#np.savez_compressed(save_directory_+'//grain_init1.npz', codebars, loc)
else:
write_to_console("Some HKL classes have similar angular distributions; this will likely reduce the accuracy of the neural network. Verify that the symmetry matrix and other parameters are properly configured. This check only concerns the dictionary; keep an eye on the dataset being generated for training.")
write_to_console("This is likely the result of the symmetry operations available in a user_defined space group; it shouldn't affect the general accuracy of the model.")
np.savez_compressed(save_directory+'//conflict_angular_distribution1_debug.npz', codebars, index)
np.savez_compressed(save_directory+'//grain_classhkl_angbin1.npz', classhkl1, angbins)
## make comprehensive list of dictionary
normal_hkl_ = np.zeros((1,3))
for j in hkl_all_class.keys():
normal_hkl_ = np.vstack((normal_hkl_, hkl_all_class[j]["family"]))
normal_hkl = np.delete(normal_hkl_, 0, axis =0)
if material_ != material1_:
normal_hkl1_ = np.zeros((1,3))
for j in hkl_all_class1.keys():
normal_hkl1_ = np.vstack((normal_hkl1_, hkl_all_class1[j]["family"]))
normal_hkl1 = np.delete(normal_hkl1_, 0, axis =0)
index_hkl = [j for j,k in enumerate(hkl_all_class.keys()) for i in range(len(hkl_all_class[k]["family"]))]
if material_ != material1_:
ind_offset = index_hkl[-1] + 1
index_hkl1 = [ind_offset+j for j,k in enumerate(hkl_all_class1.keys()) for i in range(len(hkl_all_class1[k]["family"]))]
if material_ == material1_:
index_hkl1 = None
normal_hkl1 = None
classhkl1 = None
hkl_all_class1 = None
lattice_material1 = None
symmetry1 = None
write_to_console("Generating "+type_+" and saving them")
if material_ != material1_:
nb_grains_list = list(range(nb_grains+1))
nb_grains1_list = list(range(nb_grains1+1))
list_permute = list(itertools.product(nb_grains_list, nb_grains1_list))
list_permute.pop(0)
max_progress = len(list_permute)*grains_nb_simulate
if matrix_phase_always_present is not None and type_ != "testing_data":
dummy_, key_material_new = matrix_phase_always_present.split(';')
if key_material_new == material_:
max_progress = len(list_permute)*grains_nb_simulate + (len(nb_grains1_list)-1)*grains_nb_simulate
else:
max_progress = len(list_permute)*grains_nb_simulate + (len(nb_grains_list)-1)*grains_nb_simulate
else:
max_progress = nb_grains*grains_nb_simulate
if matrix_phase_always_present is not None and type_ != "testing_data":
max_progress = nb_grains*grains_nb_simulate*2
if include_scm:
max_progress = max_progress + grains_nb_simulate
if material_ != material1_:
max_progress = max_progress + 2*grains_nb_simulate
_inputs_queue = Queue()
_outputs_queue = Queue()
_worker_process = {}
for i in range(ncpu):
_worker_process[i]= Process(target=worker_generation, args=(_inputs_queue,
_outputs_queue,
i+1),)
for i in range(ncpu):
_worker_process[i].start()
time.sleep(0.1)
if material_ != material1_:
if modelp == "uniform":
if type_ =="training_data":
xlim, ylim = 0, int(0.8*2000)
else:
xlim, ylim = int(0.8*2000), 2000-1
path_array = resource_path("uniform_orientations_2000.npz")
arr = np.load(path_array)
if symmetry == symmetry.cubic:
odf_data = arr["arr_6"][xlim:ylim]
# print("Laue group 11")
elif symmetry == symmetry.hexagonal:
odf_data = arr["arr_5"][xlim:ylim]
# print("Laue group 9")
elif symmetry == symmetry.trigonal:
odf_data = arr["arr_4"][xlim:ylim]
# print("Laue group 7")
elif symmetry == symmetry.tetragonal:
odf_data = arr["arr_3"][xlim:ylim]
# print("Laue group 5")
elif symmetry == symmetry.orthorhombic:
odf_data = arr["arr_2"][xlim:ylim]
# print("Laue group 3")
elif symmetry == symmetry.monoclinic:
odf_data = arr["arr_1"][xlim:ylim]
# print("Laue group 2")
elif symmetry == symmetry.triclinic:
odf_data = arr["arr_0"][xlim:ylim]
# print("Laue group 1")
if symmetry1 == symmetry.cubic:
odf_data1 = arr["arr_6"][xlim:ylim]
# print("Laue group 11")
elif symmetry1 == symmetry.hexagonal:
odf_data1 = arr["arr_5"][xlim:ylim]
# print("Laue group 9")
elif symmetry1 == symmetry.trigonal:
odf_data1 = arr["arr_4"][xlim:ylim]
# print("Laue group 7")
elif symmetry1 == symmetry.tetragonal:
odf_data1 = arr["arr_3"][xlim:ylim]
# print("Laue group 5")
elif symmetry1 == symmetry.orthorhombic:
odf_data1 = arr["arr_2"][xlim:ylim]
# print("Laue group 3")
elif symmetry1 == symmetry.monoclinic:
odf_data1 = arr["arr_1"][xlim:ylim]
# print("Laue group 2")
elif symmetry1 == symmetry.triclinic:
odf_data1 = arr["arr_0"][xlim:ylim]
# print("Laue group 1")
## list of combination of training dataset
## to be seen if this improves the prediction quality
## increases time significantly to generate the data
nb_grains_list = list(range(nb_grains+1))
nb_grains1_list = list(range(nb_grains1+1))
list_permute = list(itertools.product(nb_grains_list, nb_grains1_list))
list_permute.pop(0) ## removing the 0,0 index
# Idea 2: or generate a database up to n-grain Laue patterns
values = []
for i in range(len(list_permute)):
ii, jj = list_permute[i]
for j in range(grains_nb_simulate):
if data_realism:
## data augmentation regimes to mimic reality: clean / noisy / missing peaks / both
if j < grains_nb_simulate*0.25:
noisy_data = False
remove_peaks = False
elif (j >= grains_nb_simulate*0.25) and (j < grains_nb_simulate*0.5):
noisy_data = True
remove_peaks = False
elif (j >= grains_nb_simulate*0.5) and (j < grains_nb_simulate*0.75):
noisy_data = False
remove_peaks = True
elif (j >= grains_nb_simulate*0.75):
noisy_data = True
remove_peaks = True
else:
noisy_data = False
remove_peaks = False
if modelp == "uniform":
rand_choice = np.random.choice(len(odf_data), ii, replace=False)
rand_choice1 = np.random.choice(len(odf_data1), jj, replace=False)
data_odf_data = odf_data[rand_choice,:,:]
data_odf_data1 = odf_data1[rand_choice1,:,:]
else:
data_odf_data = None
data_odf_data1 = None
seednumber = np.random.randint(1e6)
values.append([ii, jj, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
data_odf_data,
data_odf_data1,
modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
None])
if matrix_phase_always_present is not None and \
type_ != "testing_data":
dummy_, key_material_new = matrix_phase_always_present.split(';')
if key_material_new == material_ and ii == 0:
values.append([0, jj, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
data_odf_data,
data_odf_data1,
modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
matrix_phase_always_present])
elif key_material_new == material1_ and jj == 0:
values.append([ii, 0, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
data_odf_data,
data_odf_data1,
modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
matrix_phase_always_present])
chunks = chunker_list(values, ncpu)
chunks_mp = list(chunks)
if include_scm:
meta = {'t1':time.time(),
'flag':0}
else:
meta = {'t1':time.time(),
'flag':1}
for ijk in range(int(ncpu)):
_inputs_queue.put((chunks_mp[ijk], ncpu, meta))
else:
# Idea 2: or generate a database up to n-grain Laue patterns
if modelp == "uniform":
## training split
if type_ =="training_data":
xlim, ylim = 0, int(0.8*2000)
else:
xlim, ylim = int(0.8*2000), 2000-1
path_array = resource_path("uniform_orientations_2000.npz")
arr = np.load(path_array)
if symmetry == symmetry.cubic:
odf_data = arr["arr_6"][xlim:ylim]
print("Laue group 11")
elif symmetry == symmetry.hexagonal:
odf_data = arr["arr_5"][xlim:ylim]
print("Laue group 9")
elif symmetry == symmetry.trigonal:
odf_data = arr["arr_4"][xlim:ylim]
print("Laue group 7")
elif symmetry == symmetry.tetragonal:
odf_data = arr["arr_3"][xlim:ylim]
print("Laue group 5")
elif symmetry == symmetry.orthorhombic:
odf_data = arr["arr_2"][xlim:ylim]
print("Laue group 3")
elif symmetry == symmetry.monoclinic:
odf_data = arr["arr_1"][xlim:ylim]
print("Laue group 2")
elif symmetry == symmetry.triclinic:
odf_data = arr["arr_0"][xlim:ylim]
print("Laue group 1")
values = []
for i in range(nb_grains):
for j in range(grains_nb_simulate):
if data_realism:
## data augmentation regimes to mimic reality: clean / noisy / missing peaks / both
if j < grains_nb_simulate*0.25:
noisy_data = False
remove_peaks = False
elif (j >= grains_nb_simulate*0.25) and (j < grains_nb_simulate*0.5):
noisy_data = True
remove_peaks = False
elif (j >= grains_nb_simulate*0.5) and (j < grains_nb_simulate*0.75):
noisy_data = False
remove_peaks = True
elif (j >= grains_nb_simulate*0.75):
noisy_data = True
remove_peaks = True
else:
noisy_data = False
remove_peaks = False
if modelp == "uniform":
rand_choice = np.random.choice(len(odf_data), i+1, replace=False)
data_odf_data = odf_data[rand_choice,:,:]
data_odf_data1 = None
else:
data_odf_data = None
data_odf_data1 = None
seednumber = np.random.randint(1e6)
values.append([i+1, 0, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
data_odf_data,
data_odf_data1,
modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
None])
if matrix_phase_always_present is not None and \
type_ != "testing_data":
values.append([i+1, 0, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
data_odf_data,
data_odf_data1,
modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
matrix_phase_always_present])
chunks = chunker_list(values, ncpu)
chunks_mp = list(chunks)
if include_scm:
meta = {'t1':time.time(),
'flag':0}
else:
meta = {'t1':time.time(),
'flag':1}
for ijk in range(int(ncpu)):
_inputs_queue.put((chunks_mp[ijk], ncpu, meta))
if include_scm:
write_to_console("Generating small angle misorientation single crystals")
values = []
for i in range(grains_nb_simulate):
if data_realism:
## data augmentation regimes to mimic reality: clean / noisy / missing peaks / both
if i < grains_nb_simulate*0.25:
noisy_data = False
remove_peaks = False
elif (i >= grains_nb_simulate*0.25) and (i < grains_nb_simulate*0.5):
noisy_data = True
remove_peaks = False
elif (i >= grains_nb_simulate*0.5) and (i < grains_nb_simulate*0.75):
noisy_data = False
remove_peaks = True
elif (i >= grains_nb_simulate*0.75):
noisy_data = True
remove_peaks = True
else:
noisy_data = False
remove_peaks = False
seednumber = np.random.randint(1e6)
values.append([1, 0, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
1, i, i, save_directory_,
None, None, modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
None])
if material_ != material1_:
seednumber = np.random.randint(1e6)
values.append([0, 1, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
2, i, i, save_directory_,
None, None, modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
None])
### include two slightly misoriented crystals of different materials
seednumber = np.random.randint(1e6)
values.append([1, 1, material_,material1_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl, classhkl1,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl,
index_hkl,
hkl_all_class1,
lattice_material1,
None,
normal_hkl1,
index_hkl1,
dim1, dim2,
removeharmonics,
3, i, i, save_directory_,
None, None, modelp,
misorientation_angle,
max_millerindex,max_millerindex1,
general_diff_rules,
crystal,
crystal1,
None])
chunks = chunker_list(values, ncpu)
chunks_mp = list(chunks)
meta = {'t1':time.time(),
'flag':1}
for ijk in range(int(ncpu)):
_inputs_queue.put((chunks_mp[ijk], ncpu, meta))
while True:
count = 0
for i in range(ncpu):
if not _worker_process[i].is_alive():
_worker_process[i].join()
count += 1
else:
time.sleep(0.1)
if count == ncpu:
return
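The `data_realism` branches inside `generate_dataset` assign the augmentation flags by quartile of the simulation index: clean patterns, noisy patterns, patterns with peaks removed, then both degradations. A hypothetical standalone helper making that mapping explicit (the function itself keeps the flags inline):

```python
def augmentation_flags(j, n_simulate, data_realism=True):
    """Return (noisy_data, remove_peaks) for simulation index j of
    n_simulate, following the quartile scheme in generate_dataset."""
    if not data_realism:
        return False, False
    q = j / n_simulate
    if q < 0.25:
        return False, False      # clean patterns
    elif q < 0.5:
        return True, False       # add intensity noise
    elif q < 0.75:
        return False, True       # randomly drop peaks
    return True, True            # both degradations

print([augmentation_flags(j, 100) for j in (0, 30, 60, 90)])
# [(False, False), (True, False), (False, True), (True, True)]
```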
def get_material_detail(material_=None, SG=None, symm_=None,
material1_=None, SG1=None, symm1_=None):
"""
Return diffraction rules, symmetry, lattice, crystal and space-group
details for one or two materials.
"""
a, b, c, alpha, beta, gamma = dictLT.dict_Materials[material_][1]
# Gstar = CP.Gstar_from_directlatticeparams(a, b, c, alpha, beta, gamma)
rules = dictLT.dict_Materials[material_][-1]
if symm_ =="cubic":
symmetry = Symmetry.cubic
lattice_material = Lattice.cubic(a)
if SG == None:
SG = 230
crystal = SGLattice(int(SG), a)
elif symm_ =="monoclinic":
symmetry = Symmetry.monoclinic
lattice_material = Lattice.monoclinic(a, b, c, beta)
if SG == None:
SG = 10
crystal = SGLattice(int(SG),a, b, c, beta)
elif symm_ == "hexagonal":
symmetry = Symmetry.hexagonal
lattice_material = Lattice.hexagonal(a, c)
if SG == None:
SG = 191
crystal = SGLattice(int(SG),a, c)
elif symm_ == "orthorhombic":
symmetry = Symmetry.orthorhombic
lattice_material = Lattice.orthorhombic(a, b, c)
if SG == None:
SG = 47
crystal = SGLattice(int(SG),a, b, c)
elif symm_ == "tetragonal":
symmetry = Symmetry.tetragonal
lattice_material = Lattice.tetragonal(a, c)
if SG == None:
SG = 123
crystal = SGLattice(int(SG),a, c)
elif symm_ == "trigonal":
symmetry = Symmetry.trigonal
lattice_material = Lattice.rhombohedral(a, alpha)
if SG == None:
SG = 162
crystal = SGLattice(int(SG),a, alpha)
elif symm_ == "triclinic":
symmetry = Symmetry.triclinic
lattice_material = Lattice.triclinic(a, b, c, alpha, beta, gamma)
if SG == None:
SG = 2
crystal = SGLattice(int(SG),a, b, c, alpha, beta, gamma)
if material_ != material1_:
a1, b1, c1, alpha1, beta1, gamma1 = dictLT.dict_Materials[material1_][1]
# Gstar1 = CP.Gstar_from_directlatticeparams(a1, b1, c1, alpha1, beta1, gamma1)
rules1 = dictLT.dict_Materials[material1_][-1]
# =============================================================================
# Symmetry input
# =============================================================================
if symm1_ =="cubic":
symmetry1 = Symmetry.cubic
lattice_material1 = Lattice.cubic(a1)
if SG1 == None:
SG1 = 230
crystal1 = SGLattice(int(SG1), a1)
elif symm1_ =="monoclinic":
symmetry1 = Symmetry.monoclinic
lattice_material1 = Lattice.monoclinic(a1, b1, c1, beta1)
if SG1 == None:
SG1 = 10
crystal1 = SGLattice(int(SG1),a1, b1, c1, beta1)
elif symm1_ == "hexagonal":
symmetry1 = Symmetry.hexagonal
lattice_material1 = Lattice.hexagonal(a1, c1)
if SG1 == None:
SG1 = 191
crystal1 = SGLattice(int(SG1),a1, c1)
elif symm1_ == "orthorhombic":
symmetry1 = Symmetry.orthorhombic
lattice_material1 = Lattice.orthorhombic(a1, b1, c1)
if SG1 == None:
SG1 = 47
crystal1 = SGLattice(int(SG1),a1, b1, c1)
elif symm1_ == "tetragonal":
symmetry1 = Symmetry.tetragonal
lattice_material1 = Lattice.tetragonal(a1, c1)
if SG1 == None:
SG1 = 123
crystal1 = SGLattice(int(SG1),a1, c1)
elif symm1_ == "trigonal":
symmetry1 = Symmetry.trigonal
lattice_material1 = Lattice.rhombohedral(a1, alpha1)
if SG1 == None:
SG1 = 162
crystal1 = SGLattice(int(SG1),a1, alpha1)
elif symm1_ == "triclinic":
symmetry1 = Symmetry.triclinic
lattice_material1 = Lattice.triclinic(a1, b1, c1, alpha1, beta1, gamma1)
if SG1 == None:
SG1 = 2
crystal1 = SGLattice(int(SG1),a1, b1, c1, alpha1, beta1, gamma1)
else:
rules1 = None
symmetry1 = None
lattice_material1 = None
crystal1 = None
return rules, symmetry, lattice_material, crystal, SG, rules1, symmetry1, lattice_material1, crystal1, SG1
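`get_material_detail` falls back to a high-symmetry space group of each crystal system when `SG` is not supplied. The defaults can be summarised in a small lookup (a sketch of the fallback logic only, with hypothetical names; the real function also builds the `Lattice` and `SGLattice` objects):

```python
# Default space group used when SG is None, per crystal system
DEFAULT_SG = {
    "cubic": 230, "monoclinic": 10, "hexagonal": 191,
    "orthorhombic": 47, "tetragonal": 123, "trigonal": 162,
    "triclinic": 2,
}

def default_space_group(symm, sg=None):
    """Mirror the SG fallback in get_material_detail."""
    return sg if sg is not None else DEFAULT_SG[symm]

print(default_space_group("cubic"))        # 230
print(default_space_group("cubic", 225))   # 225
```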
def predict_preprocessMultiProcess(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,
material_, material1_, symmetry, symmetry1,lim_x,lim_y,
strain_calculation, ind_mat, ind_mat1,
model_direc=None, tolerance =None, tolerance1 =None,
matricies=None, ccd_label=None,
filename_bkg=None,intensity_threshold=None,
boxsize=None,bkg_treatment=None,
filenameDirec=None, experimental_prefix=None,
blacklist_file =None, text_file=None,
files_treated=None,try_previous1=False,
wb=None, temp_key=None, cor_file_directory=None, mode_spotCycle1=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None,tolerance_strain123=None,tolerance_strain1231=None,\
NumberMaxofFits123=None,fit_peaks_gaussian_global123=None,
FitPixelDev_global123=None,coeff123=None, coeff_overlap=None,
material0_limit=None, material1_limit=None, use_previous_UBmatrix_name=None,
material_phase_always_present=None, crystal=None, crystal1=None, strain_free_parameters=None):
if files in files_treated:
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
call_global()
# print("Predicting for "+files)
if files.split(".")[-1] != "cor":
CCDLabel=ccd_label
seednumber = "Experimental "+CCDLabel+" file"
out_name = blacklist_file
if bkg_treatment == None:
bkg_treatment = "A-B"
try:
### Max space = space between pixels
peak_XY = RMCCD.PeakSearch(
files,
stackimageindex = -1,
CCDLabel=CCDLabel,
NumberMaxofFits=NumberMaxofFits123,
PixelNearRadius=10,
removeedge=2,
IntensityThreshold=intensity_threshold,
local_maxima_search_method=0,
boxsize=boxsize,
position_definition=1,
verbose=0,
fit_peaks_gaussian=fit_peaks_gaussian_global123,
xtol=0.001,
FitPixelDev=FitPixelDev_global123,
return_histo=0,
# Saturation_value=1e10, # to be merged in CCDLabel
# Saturation_value_flatpeak=1e10,
MinIntensity=0,
PeakSizeRange=(0.65,200),
write_execution_time=1,
Data_for_localMaxima = "auto_background",
formulaexpression=bkg_treatment,
Remove_BlackListedPeaks_fromfile=out_name,
reject_negative_baseline=True,
Fit_with_Data_for_localMaxima=False,
maxPixelDistanceRejection=15.0,
)
peak_XY = peak_XY[0]#[:,:2] ##[2] Integer peak lists
except Exception:
print("Error in Peak detection for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
# files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
try:
s_ix = np.argsort(peak_XY[:, 2])[::-1]
peak_XY = peak_XY[s_ix]
except Exception:
print("Error in Peak detection for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
# files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
framedim = dictLT.dict_CCD[CCDLabel][0]
twicetheta, chi = Lgeo.calc_uflab(peak_XY[:,0], peak_XY[:,1], detectorparameters,
returnAngles=1,
pixelsize=pixelsize,
kf_direction='Z>0')
data_theta, data_chi = twicetheta/2., chi
framedim = dictLT.dict_CCD[CCDLabel][0]
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peak_XY[:,0]
dict_dp['peakY']=peak_XY[:,1]
dict_dp['intensity']=peak_XY[:,2]
CCDcalib = {"CCDLabel":CCDLabel,
"dd":detectorparameters[0],
"xcen":detectorparameters[1],
"ycen":detectorparameters[2],
"xbet":detectorparameters[3],
"xgam":detectorparameters[4],
"pixelsize": pixelsize}
path = os.path.normpath(files)
IOLT.writefile_cor(cor_file_directory+"//"+path.split(os.sep)[-1].split(".")[0], twicetheta,
chi, peak_XY[:,0], peak_XY[:,1], peak_XY[:,2],
param=CCDcalib, sortedexit=0)
elif files.split(".")[-1] == "cor":
# print("Entering Cor file read section")
seednumber = "Experimental COR file"
allres = IOLT.readfile_cor(files, True)
data_theta, data_chi, peakx, peaky, intensity = allres[1:6]
CCDcalib = allres[-1]
detectorparameters = allres[-2]
# print('detectorparameters from file are: '+ str(detectorparameters))
pixelsize = CCDcalib['pixelsize']
CCDLabel = CCDcalib['CCDLabel']
framedim = dictLT.dict_CCD[CCDLabel][0]
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peakx
dict_dp['peakY']=peaky
dict_dp['intensity']=intensity
sorted_data = np.transpose(np.array([data_theta, data_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(sorted_data, sorted_data))
codebars_all = []
if len(data_theta) == 0:
print("No peaks Found for : " + files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
if not use_om_user:
# print("Entering GOOD section")
spots_in_center = np.arange(0,len(data_theta))
spots_in_center = spots_in_center[:nb_spots_consider]
for i in spots_in_center:
spotangles = tabledistancerandom[i]
spotangles = np.delete(spotangles, i)# removing the self distance
codebars = np.histogram(spotangles, bins=angbins)[0]
# codebars = histogram1d(spotangles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## normalize the same way as training data
max_codebars = np.max(codebars)
codebars = codebars/ max_codebars
codebars_all.append(codebars)
## reshape for the model to predict all spots at once
codebars = np.array(codebars_all)
## Do prediction of all spots at once
prediction = predict(codebars, wb, temp_key)
# prediction = model.predict(codebars)
max_pred = np.max(prediction, axis = 1)
class_predicted = np.argmax(prediction, axis = 1)
predicted_hkl123 = classhkl[class_predicted]
predicted_hkl123 = predicted_hkl123.astype(int)
else:
max_pred = None
class_predicted = None
predicted_hkl123 = None
spots_in_center = None
s_tth = data_theta * 2.
s_chi = data_chi
# print("Computing UB")
rotation_matrix1, mr_highest, mat_highest, \
strain_crystal, strain_sample, iR_pix1, \
fR_pix1, spots_len1,\
best_match1, check12 = predict_ubmatrix(seednumber, spots_in_center, classhkl,
hkl_all_class0,
hkl_all_class1, files,
s_tth1=s_tth,s_chi1=s_chi,
predicted_hkl1=predicted_hkl123,
class_predicted1=class_predicted,
max_pred1=max_pred,
emin=emin,emax=emax,
material_=material_,
material1_=material1_,
lim_y=lim_y, lim_x=lim_x,
cnt=cnt,
dict_dp=dict_dp,
rotation_matrix=rotation_matrix,
mat_global=mat_global,
strain_calculation=strain_calculation,
ind_mat=ind_mat,
ind_mat1=ind_mat1,
tolerance=tolerance,
tolerance1 =tolerance1,
matricies=matricies,
tabledistancerandom=tabledistancerandom,
text_file = text_file,
try_previous1=try_previous1,
mode_spotCycle=mode_spotCycle1,
softmax_threshold_global123 = softmax_threshold_global123,
mr_threshold_global123=mr_threshold_global123,
cap_matchrate123=cap_matchrate123,
tolerance_strain123=tolerance_strain123,
tolerance_strain1231=tolerance_strain1231,
coeff123=coeff123,
coeff_overlap=coeff_overlap,
material0_limit=material0_limit,
material1_limit=material1_limit,
model_direc=model_direc,
use_previous_UBmatrix_name=use_previous_UBmatrix_name,
material_phase_always_present=material_phase_always_present,
match_rate=match_rate,
check=check[cnt,:],
crystal=crystal,
crystal1=crystal1, angbins=angbins,
wb=wb, temp_key=temp_key,
strain_free_parameters=strain_free_parameters)
for intmat in range(matricies):
if len(rotation_matrix1[intmat]) == 0:
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
else:
mat_global[intmat][0][cnt] = mat_highest[intmat][0]
final_symm =symmetry
final_crystal = crystal
if mat_highest[intmat][0] == 1:
final_symm = symmetry
final_crystal = crystal
elif mat_highest[intmat][0] == 2:
final_symm = symmetry1
final_crystal = crystal1
symm_operator = final_crystal._hklsym
strain_matrix[intmat][0][cnt,:,:] = strain_crystal[intmat][0]
strain_matrixs[intmat][0][cnt,:,:] = strain_sample[intmat][0]
rotation_matrix[intmat][0][cnt,:,:] = rotation_matrix1[intmat][0]
col_temp = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 0., 1.]), final_symm, symm_operator)
col[intmat][0][cnt,:] = col_temp
col_tempx = get_ipf_colour(rotation_matrix1[intmat][0], np.array([1., 0., 0.]), final_symm, symm_operator)
colx[intmat][0][cnt,:] = col_tempx
col_tempy = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 1., 0.]), final_symm, symm_operator)
coly[intmat][0][cnt,:] = col_tempy
match_rate[intmat][0][cnt] = mr_highest[intmat][0]
spots_len[intmat][0][cnt] = spots_len1[intmat][0]
iR_pix[intmat][0][cnt] = iR_pix1[intmat][0]
fR_pix[intmat][0][cnt] = fR_pix1[intmat][0]
best_match[intmat][0][cnt] = best_match1[intmat][0]
check[cnt,intmat] = check12[intmat]
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, match_rate, \
mat_global, cnt, files_treated, spots_len, iR_pix, fR_pix, check, best_match
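# Illustrative sketch (not called by the pipeline) of the per-spot angular
# "fingerprint" built above: histogram the angular distances of one spot to all
# others, then normalize by the maximum bin, the same way as the training data.
# All names below are local to this sketch.
def _fingerprint_sketch(angular_distances, self_index, ang_max=18.0, step=0.1):
    import numpy as np
    angbins = np.arange(0, ang_max + step, step)
    # drop the zero self-distance before histogramming
    others = np.delete(np.asarray(angular_distances), self_index)
    codebar = np.histogram(others, bins=angbins)[0]
    # normalize by the maximum bin count
    return codebar / np.max(codebar)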
def new_MP_function(argu):
files, cnt, rotation_matrix, strain_matrix, strain_matrixs,\
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,\
check,detectorparameters,pixelsize,angbins,\
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,\
material_, material1_, symmetry, symmetry1,lim_x,lim_y,\
strain_calculation, ind_mat, ind_mat1,\
model_direc, tolerance , tolerance1,\
matricies, ccd_label,\
filename_bkg,intensity_threshold,\
boxsize,bkg_treatment,\
filenameDirec, experimental_prefix,\
blacklist_file, text_file, \
files_treated,try_previous1,\
wb, temp_key, cor_file_directory, mode_spotCycle1,\
softmax_threshold_global123,mr_threshold_global123,\
cap_matchrate123, tolerance_strain123, tolerance_strain1231,\
NumberMaxofFits123,fit_peaks_gaussian_global123,\
FitPixelDev_global123,coeff123,coeff_overlap,\
material0_limit, material1_limit, use_previous_UBmatrix_name1,\
material_phase_always_present1, crystal, crystal1, strain_free_parameters = argu
strain_matrix12, strain_matrixs12, \
rotation_matrix12, col12, \
colx12, coly12,\
match_rate12, mat_global12, cnt12,\
files_treated12, spots_len12, \
iR_pix12, fR_pix12, check12, best_match12 = predict_preprocessMultiProcess(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,
mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, hkl_all_class1, emin, emax,
material_, material1_, symmetry, symmetry1,lim_x,lim_y,
strain_calculation, ind_mat, ind_mat1,
model_direc, tolerance, tolerance1,
matricies, ccd_label,
filename_bkg,intensity_threshold,
boxsize,bkg_treatment,
filenameDirec, experimental_prefix,
blacklist_file, text_file,
files_treated,try_previous1,
wb, temp_key, cor_file_directory, mode_spotCycle1,
softmax_threshold_global123,mr_threshold_global123,
cap_matchrate123, tolerance_strain123,
tolerance_strain1231,NumberMaxofFits123,
fit_peaks_gaussian_global123,
FitPixelDev_global123, coeff123,coeff_overlap,
material0_limit,material1_limit,
use_previous_UBmatrix_name1,
material_phase_always_present1,
crystal, crystal1, strain_free_parameters)
meta = {}
return strain_matrix12, strain_matrixs12, rotation_matrix12, col12, \
colx12, coly12, match_rate12, mat_global12, cnt12, meta, \
files_treated12, spots_len12, iR_pix12, fR_pix12, best_match12, check12
def prepare_LP_NB(nbgrains, nbgrains1, material_, verbose, material1_=None, seed=None, sortintensity=False,
detectorparameters=None, pixelsize=None, dim1=2048,dim2=2048, emin=5, emax=23, flag=0, noisy_data=False,
remove_peaks = False):
if flag == 10:
s_tth, s_chi, s_miller_ind, s_posx, s_posy, \
s_intensity, g, g1 = simulatemultiplepatterns_NB(nbgrains, nbgrains1, seed=seed,
key_material=material_,
key_material1=material1_,
emin=emin, emax=emax,
detectorparameters=detectorparameters,
pixelsize=pixelsize,
sortintensity=sortintensity,
dim1=dim1,dim2=dim2, flag=flag)
else:
s_tth, s_chi, s_miller_ind, s_posx, s_posy, \
s_intensity = simulatemultiplepatterns_NB(nbgrains, nbgrains1, seed=seed,
key_material=material_,
key_material1=material1_,
emin=emin, emax=emax,
detectorparameters=detectorparameters,
pixelsize=pixelsize,
sortintensity=sortintensity,
dim1=dim1,dim2=dim2, flag=flag)
if noisy_data:
## apply random gaussian-type noise to the data (tth and chi),
## i.e. add noise to the angular distances.
## Instead of adding noise to all HKLs, add it to a few selected ones:
## here a randomly chosen 20% of the HKLs
indices_noise = np.random.choice(len(s_tth), int(len(s_tth)*0.2), replace=False)
noise_ = np.random.normal(0,0.1,len(indices_noise))
s_tth[indices_noise] = s_tth[indices_noise] + noise_
s_chi[indices_noise] = s_chi[indices_noise] + noise_
if remove_peaks:
len_mi = np.array([iq for iq in range(len(s_miller_ind))])
len_mi = len_mi[int(0.5*len(s_miller_ind)):]
indices_remove = np.random.choice(len_mi, int(len(len_mi)*0.2), replace=False)
## delete randomly selected less intense peaks
## to simulate real peak detection, where some peaks may not be
## well detected
s_tth = np.delete(s_tth, indices_remove)
s_chi = np.delete(s_chi, indices_remove)
s_posx = np.delete(s_posx, indices_remove)
s_posy = np.delete(s_posy, indices_remove)
s_intensity = np.delete(s_intensity, indices_remove)
s_miller_ind = np.delete(s_miller_ind, indices_remove, axis=0)
# considering all spots
allspots_the_chi = np.transpose(np.array([s_tth/2., s_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(allspots_the_chi, allspots_the_chi))
# ground truth
hkl_sol = s_miller_ind
if flag == 10:
return tabledistancerandom, hkl_sol, s_posx, s_posy, s_intensity, s_tth, s_chi, g, g1
return tabledistancerandom, hkl_sol, s_posx, s_posy, s_intensity, s_tth, s_chi
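# Illustrative sketch of the noisy_data augmentation above: add Gaussian noise
# (sigma = 0.1 deg) to a randomly chosen fraction of the spots' tth and chi;
# as in the branch above, the same noise draw is applied to both arrays.
# The function name and arguments are local to this sketch.
def _angular_noise_sketch(s_tth, s_chi, frac=0.2, sigma=0.1, seed=None):
    import numpy as np
    rng = np.random.default_rng(seed)
    s_tth, s_chi = np.array(s_tth, float), np.array(s_chi, float)
    idx = rng.choice(len(s_tth), int(len(s_tth) * frac), replace=False)
    noise = rng.normal(0.0, sigma, len(idx))
    s_tth[idx] += noise
    s_chi[idx] += noise  # same draw for both, as above
    return s_tth, s_chi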
def simulatemultiplepatterns_NB(nbUBs, nbUBs1, seed=123, key_material=None, key_material1=None, emin=5, emax=23,
detectorparameters=None, pixelsize=None,
sortintensity = False, dim1=2048, dim2=2048, flag=0):
detectordiameter = pixelsize * dim1 #TODO
g = np.zeros((nbUBs, 3, 3))
g1 = np.zeros((nbUBs1, 3, 3))
for igr in range(nbUBs):
phi1 = np.random.rand() * 360.
phi = 180. * acos(2 * np.random.rand() - 1) / np.pi
phi2 = np.random.rand() * 360.
g[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
if key_material != key_material1:
for igr in range(nbUBs1):
phi1 = np.random.rand() * 360.
phi = 180. * acos(2 * np.random.rand() - 1) / np.pi
phi2 = np.random.rand() * 360.
g1[igr] = Euler2OrientationMatrix((phi1, phi, phi2))
l_tth, l_chi, l_miller_ind, l_posx, l_posy, l_E, l_intensity = [],[],[],[],[],[],[]
for grainind in range(nbUBs):
UBmatrix = g[grainind]
grain = CP.Prepare_Grain(key_material, UBmatrix)
s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=1)
s_miller_ind = np.c_[ s_miller_ind, np.zeros(len(s_miller_ind)) ]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
if key_material != key_material1:
for grainind in range(nbUBs1):
if key_material1 is not None:
UBmatrix = g1[grainind]
grain = CP.Prepare_Grain(key_material1, UBmatrix)
s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=1)
s_miller_ind = np.c_[ s_miller_ind, np.ones(len(s_miller_ind)) ]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
#flat_list = [item for sublist in l for item in sublist]
s_tth = np.array([item for sublist in l_tth for item in sublist])
s_chi = np.array([item for sublist in l_chi for item in sublist])
s_miller_ind = np.array([item for sublist in l_miller_ind for item in sublist])
s_posx = np.array([item for sublist in l_posx for item in sublist])
s_posy = np.array([item for sublist in l_posy for item in sublist])
s_E = np.array([item for sublist in l_E for item in sublist])
s_intensity=np.array([item for sublist in l_intensity for item in sublist])
if sortintensity:
indsort = np.argsort(s_intensity)[::-1]
s_tth=np.take(s_tth, indsort)
s_chi=np.take(s_chi, indsort)
s_miller_ind=np.take(s_miller_ind, indsort, axis=0)
s_posx=np.take(s_posx, indsort)
s_posy=np.take(s_posy, indsort)
s_E=np.take(s_E, indsort)
s_intensity=np.take(s_intensity, indsort)
if flag == 10:
return s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_intensity, g, g1
return s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_intensity
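# Illustrative sketch of the random-orientation sampling used above: phi1 and
# phi2 are uniform in [0, 360), while Phi = acos(2u - 1) (converted to degrees)
# distributes the rotation axes uniformly on the sphere instead of clustering
# them at the poles. Names are local to this sketch.
def _random_euler_sketch(seed=None):
    import numpy as np
    from math import acos
    rng = np.random.default_rng(seed)
    phi1 = rng.random() * 360.
    phi = 180. * acos(2 * rng.random() - 1) / np.pi
    phi2 = rng.random() * 360.
    return phi1, phi, phi2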
# =============================================================================
# Multi material functions
# =============================================================================
def generate_multimat_dataset( material_=["Cu"],
ang_maxx=18.,
step=0.1,
nb_grains=[1],
grains_nb_simulate=100,
data_realism = False,
detectorparameters=None,
pixelsize=None,
type_="training",
var0 = 0,
dim1=2048,
dim2=2048,
removeharmonics=1,
save_directory="",
write_to_console=None,
emin=5,
emax=22,
modelp = "random",
general_diff_rules = False,
crystal = [None]):
"""
works for n phases now.
"""
from multiprocessing import Process, Queue, cpu_count
ncpu = cpu_count()
## make sure directory exists
save_directory_ = save_directory+"//"+type_
if not os.path.exists(save_directory_):
os.makedirs(save_directory_)
classhkl, n, hkl_all_class, lattice_material, symmetry = [], [], [], [],[]
max_millerindex = []
try:
for imat in material_:
with open(save_directory+"//classhkl_data_"+imat+".pickle", "rb") as input_file:
classhkl_mat, _, _, n_mat, _, \
hkl_all_class_mat, _, \
lattice_material_mat, symmetry_mat = cPickle.load(input_file)
classhkl.append(classhkl_mat)
n.append(n_mat)
hkl_all_class.append(hkl_all_class_mat)
lattice_material.append(lattice_material_mat)
symmetry.append(symmetry_mat)
max_millerindex.append(int(n_mat))
if var0==1:
codebars, angbins = get_material_data(material_ = imat,
ang_maxx = ang_maxx,
step = step,
hkl_ref=n_mat,
classhkl=classhkl_mat)
np.savez_compressed(save_directory+'//grain_classhkl_angbin_'+imat+'.npz',\
classhkl_mat, angbins)
except:
write_to_console("Class HKL library data not found, please run it first")
return None
## build a comprehensive HKL list and index map from the per-phase dictionaries
normal_hkl_multimat = []
index_hkl_mutimat = []
for ino, imat in enumerate(material_):
normal_hkl_ = np.zeros((1,3))
for j in hkl_all_class[ino].keys():
normal_hkl_ = np.vstack((normal_hkl_, hkl_all_class[ino][j]["family"]))
normal_hkl = np.delete(normal_hkl_, 0, axis =0)
normal_hkl_multimat.append(normal_hkl)
if ino > 0:
ind_offset = index_hkl_mutimat[ino-1][-1] + 1
index_hkl = [ind_offset+j for j,k in enumerate(hkl_all_class[ino].keys()) for i in range(len(hkl_all_class[ino][k]["family"]))]
else:
index_hkl = [j for j,k in enumerate(hkl_all_class[ino].keys()) for i in range(len(hkl_all_class[ino][k]["family"]))]
index_hkl_mutimat.append(index_hkl)
write_to_console("Generating "+type_+" and saving them")
_inputs_queue = Queue()
_outputs_queue = Queue()
_worker_process = {}
for i in range(ncpu):
_worker_process[i]= Process(target=worker_generation_multimat, args=(_inputs_queue,
_outputs_queue,
i+1),)
for i in range(ncpu):
_worker_process[i].start()
time.sleep(0.1)
## list of grain-count combinations for the training dataset;
## to be seen if this improves the prediction quality,
## but it increases the data-generation time significantly
nb_grains_list = []
for ino, imat in enumerate(material_):
nb_grains_list.append(list(range(nb_grains[ino]+1)))
list_permute = list(itertools.product(*nb_grains_list))
list_permute.pop(0)
max_progress = len(list_permute)*(grains_nb_simulate)
# generate a database of Laue patterns with up to n grains
values = []
for i in range(len(list_permute)):
for j in range(grains_nb_simulate):
if data_realism:
## four quarters of data augmentation to mimic reality: clean, noisy, peaks removed, both
if j < grains_nb_simulate*0.25:
noisy_data = False
remove_peaks = False
elif (j >= grains_nb_simulate*0.25) and (j < grains_nb_simulate*0.5):
noisy_data = True
remove_peaks = False
elif (j >= grains_nb_simulate*0.5) and (j < grains_nb_simulate*0.75):
noisy_data = False
remove_peaks = True
elif (j >= grains_nb_simulate*0.75):
noisy_data = True
remove_peaks = True
else:
noisy_data = False
remove_peaks = False
seednumber = np.random.randint(1e6)
values.append([ list_permute[i],
material_,
emin, emax, detectorparameters,
pixelsize,True,
ang_maxx, step,
classhkl,
noisy_data,
remove_peaks,
seednumber,
hkl_all_class,
lattice_material,
None,
normal_hkl_multimat,
index_hkl_mutimat,
dim1, dim2,
removeharmonics,
0, i, j, save_directory_,
modelp,
max_millerindex,
general_diff_rules,
crystal,])
chunks = chunker_list(values, ncpu)
chunks_mp = list(chunks)
meta = {'t1':time.time(),
'flag':1}
for ijk in range(int(ncpu)):
_inputs_queue.put((chunks_mp[ijk], ncpu, meta))
while True:
count = 0
for i in range(ncpu):
if not _worker_process[i].is_alive():
_worker_process[i].join()
count += 1
else:
time.sleep(0.1)
if count == ncpu:
return
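# Illustrative sketch of the per-phase class-index offsetting done in
# generate_multimat_dataset above: later phases' class indices are shifted so
# all phases share one contiguous label space. family_sizes_per_phase is a
# stand-in for the sizes of each hkl_all_class "family" entry.
def _offset_indices_sketch(family_sizes_per_phase):
    all_index, offset = [], 0
    for sizes in family_sizes_per_phase:
        index = [offset + j for j, n in enumerate(sizes) for _ in range(n)]
        all_index.append(index)
        offset = index[-1] + 1  # next phase starts after the last class used
    return all_index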
def getMMpatterns_(nb, material_=None, emin=5, emax=23, detectorparameters=None, pixelsize=None,
sortintensity = False, ang_maxx = 45, step = 0.5, classhkl = None, noisy_data=False,
remove_peaks=False, seed = None,hkl_all=None, lattice_material=None, family_hkl=None,
normal_hkl=None, index_hkl=None, dim1=2048, dim2=2048, removeharmonics=1, flag = 0,
img_i=None, img_j=None, save_directory_=None, modelp=None,
max_millerindex=0, general_diff_cond=False, crystal=None,
):
if np.all(np.array(nb)==0):
print("Skipping a simulation file: "+save_directory_+'//grain_'+\
str(img_i)+"_"+str(img_j)+'.npz'+"; Due to zero UBmatrix")
return
ori_mat, ori_mat1 = [], []
s_tth, s_chi, s_miller_ind, _, _, _ = simulatemultimatpatterns(nb, seed=seed, key_material=material_,
emin=emin, emax=emax,
detectorparameters=detectorparameters,
pixelsize=pixelsize,
sortintensity = sortintensity,
dim1=dim1, dim2=dim2,
removeharmonics=removeharmonics,
flag=flag, mode=modelp,
)
if noisy_data:
## apply random gaussian-type noise to the data (tth and chi),
## i.e. add noise to the angular distances.
## Instead of adding noise to all HKLs, add it to a few selected ones:
## here a randomly chosen 30% of the HKLs.
## A more realistic way of introducing strain would be through pixels rather than 2theta
noisy_pixel = 0.15
indices_noise = np.random.choice(len(s_tth), int(len(s_tth)*0.3), replace=False)
noise_ = np.random.normal(0,noisy_pixel,len(indices_noise))
s_tth[indices_noise] = s_tth[indices_noise] + noise_
noise_ = np.random.normal(0,noisy_pixel,len(indices_noise))
s_chi[indices_noise] = s_chi[indices_noise] + noise_
if remove_peaks:
len_mi = np.array([iq for iq in range(len(s_miller_ind))])
len_mi = len_mi[int(0.6*len(s_miller_ind)):]
indices_remove = np.random.choice(len_mi, int(len(len_mi)*0.3), replace=False)
## delete randomly selected less intense peaks
## to simulate real peak detection, where some peaks may not be
## well detected
## TODO: an intensity-based approach could delete peaks based on their structure factor and detector position
if len(indices_remove) !=0:
s_tth = np.delete(s_tth, indices_remove)
s_chi = np.delete(s_chi, indices_remove)
s_miller_ind = np.delete(s_miller_ind, indices_remove, axis=0)
# replace each HKL with its relevant class index
## skip HKLs that don't follow the general diffraction rules
location = []
skip_hkl = []
delete_spots = []
for j, i in enumerate(s_miller_ind):
new_hkl = _round_indices(i[:3])
mat_index = int(i[3])
if general_diff_cond:
cond_proceed = crystal[mat_index].hkl_allowed(i[:3], returnequivalents=False)
else:
cond_proceed = True
if not cond_proceed:
delete_spots.append(j)
continue
if np.any(np.abs(new_hkl)>max_millerindex[mat_index]):
skip_hkl.append(j)
continue
temp_ = np.all(new_hkl == normal_hkl[mat_index], axis=1)
if len(np.where(temp_)[0]) == 1:
ind_ = np.where(temp_)[0][0]
location.append(index_hkl[mat_index][ind_])
elif len(np.where(temp_)[0]) == 0:
# print("Entering -100 for "+ str(i) + "\n")
skip_hkl.append(j)
elif len(np.where(temp_)[0]) > 1:
## first check whether they both belong to the same class
class_output = []
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl[mat_index][np.where(temp_)[0][ij]]
class_output.append(indc)
if len(set(class_output)) <= 1:
location.append(class_output[0])
else:
skip_hkl.append(j)
print(i)
print(np.where(temp_)[0])
for ij in range(len(np.where(temp_)[0])):
indc = index_hkl[mat_index][np.where(temp_)[0][ij]]
print(classhkl[mat_index][indc])
print("Entering -500: Skipping HKL as something is not proper with equivalent HKL module")
allspots_the_chi = np.transpose(np.array([s_tth/2., s_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(allspots_the_chi, allspots_the_chi))
codebars = []
angbins = np.arange(0,ang_maxx+step,step)
for i in range(len(tabledistancerandom)):
if i in skip_hkl or i in delete_spots: ## not saving skipped HKL
continue
angles = tabledistancerandom[i]
spots_delete = [i]
for del_spts in delete_spots:
spots_delete.append(del_spts)
angles = np.delete(angles, spots_delete)
# angles = np.delete(angles, i)# removing the self distance
fingerprint = np.histogram(angles, bins=angbins)[0]
# fingerprint = histogram1d(angles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## same normalization as before
max_codebars = np.max(fingerprint)
fingerprint = fingerprint/ max_codebars
codebars.append(fingerprint)
suffix_ = ""
if flag == 0:
if len(codebars) != 0:
mat_prefix = ""
for no, i in enumerate(nb):
if i != 0:
mat_prefix = mat_prefix + material_[no]
np.savez_compressed(save_directory_+'//'+mat_prefix+'_grain_'+str(img_i)+"_"+\
str(img_j)+suffix_+'.npz', codebars, location, ori_mat, ori_mat1, flag,\
s_tth, s_chi, s_miller_ind)
else:
print("Skipping a simulation file: "+save_directory_+'//grain_'+\
str(img_i)+"_"+str(img_j)+suffix_+'.npz'+"; Due to no data conforming user settings")
def simulatemultimatpatterns(nbUBs, seed=123, key_material=None,
emin=5, emax=23, detectorparameters=None, pixelsize=None,
sortintensity = False, dim1=2048, dim2=2048, removeharmonics=1, flag = 0,
mode="random"):
l_tth, l_chi, l_miller_ind, l_posx, l_posy, l_E, l_intensity = [],[],[],[],[],[],[]
detectordiameter = pixelsize * dim1 #TODO * 2.0
if flag == 0:
if mode == "random":
for no, i in enumerate(nbUBs):
if i != 0:
for igr in range(i):
phi1 = rand1() * 360.
phi = 180. * acos(2 * rand1() - 1) / np.pi
phi2 = rand1() * 360.
UBmatrix = Euler2OrientationMatrix((phi1, phi, phi2))
grain = CP.Prepare_Grain(key_material[no], UBmatrix)
s_tth, s_chi, s_miller_ind, \
s_posx, s_posy, s_E= LT.SimulateLaue_full_np(grain, emin, emax,
detectorparameters,
pixelsize=pixelsize,
dim=(dim1, dim2),
detectordiameter=detectordiameter,
removeharmonics=removeharmonics)
s_miller_ind = np.c_[s_miller_ind, np.ones(len(s_miller_ind))*no]
s_intensity = 1./s_E
l_tth.append(s_tth)
l_chi.append(s_chi)
l_miller_ind.append(s_miller_ind)
l_posx.append(s_posx)
l_posy.append(s_posy)
l_E.append(s_E)
l_intensity.append(s_intensity)
#flat_list = [item for sublist in l for item in sublist]
s_tth = np.array([item for sublist in l_tth for item in sublist])
s_chi = np.array([item for sublist in l_chi for item in sublist])
s_miller_ind = np.array([item for sublist in l_miller_ind for item in sublist])
s_posx = np.array([item for sublist in l_posx for item in sublist])
s_posy = np.array([item for sublist in l_posy for item in sublist])
s_E = np.array([item for sublist in l_E for item in sublist])
s_intensity=np.array([item for sublist in l_intensity for item in sublist])
if sortintensity:
indsort = np.argsort(s_intensity)[::-1]
s_tth=np.take(s_tth, indsort)
s_chi=np.take(s_chi, indsort)
s_miller_ind=np.take(s_miller_ind, indsort, axis=0)
s_posx=np.take(s_posx, indsort)
s_posy=np.take(s_posy, indsort)
s_E=np.take(s_E, indsort)
s_intensity=np.take(s_intensity, indsort)
return s_tth, s_chi, s_miller_ind, s_posx, s_posy, s_intensity
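# Illustrative sketch of the flatten-then-sort step above: per-grain spot lists
# are concatenated, then every array is reordered by one shared argsort on
# descending intensity. Only two of the arrays are shown here.
def _flatten_sort_sketch(per_grain_tth, per_grain_intensity):
    import numpy as np
    tth = np.concatenate([np.asarray(a) for a in per_grain_tth])
    inten = np.concatenate([np.asarray(a) for a in per_grain_intensity])
    order = np.argsort(inten)[::-1]
    return tth[order], inten[order]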
def worker_generation_multimat(inputs_queue, outputs_queue, proc_id):
while True:
time.sleep(0.01)
if not inputs_queue.empty():
message = inputs_queue.get()
num1, _, meta = message
flag1 = meta['flag']
for ijk in range(len(num1)):
nb, material_, emin, emax, detectorparameters, pixelsize, \
sortintensity, ang_maxx, step, classhkl, noisy_data, \
remove_peaks, seed,hkl_all, lattice_material, family_hkl,\
normal_hkl, index_hkl, dim1, dim2, removeharmonics, flag,\
img_i, img_j, save_directory_, modelp, max_millerindex,\
general_diff_cond, crystal = num1[ijk]
getMMpatterns_(nb, material_, emin, emax, detectorparameters, pixelsize, \
sortintensity, ang_maxx, step, classhkl, noisy_data, \
remove_peaks, seed,hkl_all, lattice_material, family_hkl,\
normal_hkl, index_hkl, dim1, dim2, removeharmonics, flag,\
img_i, img_j, save_directory_, modelp, \
max_millerindex, general_diff_cond, crystal)
# if ijk%10 == 0 and ijk!=0:
# outputs_queue.put(11)
if flag1 == 1:
break
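# Illustrative sketch of the worker loop above, as a slight variant using a
# blocking get with timeout instead of polling empty()/sleep; handler stands in
# for getMMpatterns_ and the stop condition is the same meta['flag'] == 1.
def _worker_sketch(inputs_queue, handler):
    import queue
    while True:
        try:
            jobs, _, meta = inputs_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        for job in jobs:
            handler(job)
        if meta.get('flag') == 1:
            break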
def get_multimaterial_detail(material_=None, SG_mat=None, symm_mat=None):
"""
Returns material details
"""
rules, symmetry, lattice_material, crystal, SG = [],[],[],[],[]
for ino, imat in enumerate(material_):
a, b, c, alpha, beta, gamma = dictLT.dict_Materials[imat][1]
rules.append(dictLT.dict_Materials[imat][-1])
symm_ = symm_mat[ino]
if symm_ =="cubic":
symmetry.append(Symmetry.cubic)
lattice_material.append(Lattice.cubic(a))
if SG_mat[ino] is None:
SG.append(230)
SG_mat[ino] = 230
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]), a))
elif symm_ =="monoclinic":
symmetry.append(Symmetry.monoclinic)
lattice_material.append(Lattice.monoclinic(a, b, c, beta))
if SG_mat[ino] is None:
SG.append(10)
SG_mat[ino] = 10
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, b, c, beta))
elif symm_ == "hexagonal":
symmetry.append(Symmetry.hexagonal)
lattice_material.append(Lattice.hexagonal(a, c))
if SG_mat[ino] is None:
SG.append(191)
SG_mat[ino] = 191
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, c))
elif symm_ == "orthorhombic":
symmetry.append(Symmetry.orthorhombic)
lattice_material.append(Lattice.orthorhombic(a, b, c))
if SG_mat[ino] is None:
SG.append(47)
SG_mat[ino] = 47
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, b, c))
elif symm_ == "tetragonal":
symmetry.append(Symmetry.tetragonal)
lattice_material.append(Lattice.tetragonal(a, c))
if SG_mat[ino] is None:
SG.append(123)
SG_mat[ino] = 123
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, c))
elif symm_ == "trigonal":
symmetry.append(Symmetry.trigonal)
lattice_material.append(Lattice.rhombohedral(a, alpha))
if SG_mat[ino] is None:
SG.append(162)
SG_mat[ino] = 162
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, alpha))
elif symm_ == "triclinic":
symmetry.append(Symmetry.triclinic)
lattice_material.append(Lattice.triclinic(a, b, c, alpha, beta, gamma))
if SG_mat[ino] is None:
SG.append(2)
SG_mat[ino] = 2
else:
SG.append(SG_mat[ino])
crystal.append(SGLattice(int(SG_mat[ino]),a, b, c, alpha, beta, gamma))
return rules, symmetry, lattice_material, crystal, SG
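# Illustrative table-driven form of the symmetry -> default space-group mapping
# applied above when an SG_mat entry is None (same defaults as the if/elif
# chain; the dict and helper are local to this sketch).
_DEFAULT_SG_SKETCH = {"cubic": 230, "monoclinic": 10, "hexagonal": 191,
                      "orthorhombic": 47, "tetragonal": 123,
                      "trigonal": 162, "triclinic": 2}
def _default_sg_sketch(symm, sg=None):
    return _DEFAULT_SG_SKETCH[symm] if sg is None else sg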
def rmv_freq_class_MM(freq_rmv = [0], elements=["all"],
save_directory="", material_=None,
write_to_console=None,
progress=None, qapp=None):
classhkl_mm = []
ind_mat_mm = []
for ino, imat in enumerate(material_):
if ino == 0:
classhkl0 = np.load(save_directory+"//grain_classhkl_angbin_"+imat+".npz")["arr_0"]
angbins = np.load(save_directory+"//grain_classhkl_angbin_"+imat+".npz")["arr_1"]
if write_to_console is not None:
write_to_console(imat +" material index length: " + str(len(classhkl0)))
ind_mat = np.array([ij for ij in range(len(classhkl0))])
classhkl_mm.append(classhkl0)
ind_mat_mm.append(ind_mat)
else:
classhkl0 = np.load(save_directory+"//grain_classhkl_angbin_"+imat+".npz")["arr_0"]
if write_to_console is not None:
write_to_console(imat +" material index length: " + str(len(classhkl0)))
pre_ind = ind_mat_mm[ino-1][-1] + 1
ind_mat = np.array([pre_ind+ij for ij in range(len(classhkl0))])
classhkl_mm.append(classhkl0)
ind_mat_mm.append(ind_mat)
for ino, classhkl0 in enumerate(classhkl_mm):
if ino == 0:
classhkl = classhkl0
else:
classhkl = np.vstack((classhkl, classhkl0))
loc = np.array([ij for ij in range(len(classhkl))])
trainy_ = array_generatorV2(save_directory+"//training_data", 0, progress, qapp)
## split trainy_ for two materials index
trainy_mat_MM = [[] for _ in range(len(material_))]
for ino, imat in enumerate(material_):
for ijnode in trainy_:
if ijnode in ind_mat_mm[ino]:
trainy_mat_MM[ino].append(ijnode)
if write_to_console is not None:
write_to_console("Class ID and frequency; check for data imbalance and select "+\
"appropriate LOSS function for training the model")
## extract the least commonly occurring classes to simplify the training dataset
for ino, imat in enumerate(material_):
if elements[ino] == "all":
most_common0 = collections.Counter(np.array(trainy_mat_MM[ino])).most_common()
else:
most_common0 = collections.Counter(np.array(trainy_mat_MM[ino])).most_common()[:elements[ino]]
print("Most common classhkl elements in "+imat+" are:")
print(most_common0)
if ino == 0:
most_common = most_common0
else:
most_common = most_common + most_common0
class_present = [most_common[i][0] for i in range(len(most_common))]
rmv_indices = []
for i in loc:
if i not in class_present:
rmv_indices.append(i)
elif i in class_present:
ind_ = np.where(np.array(class_present)==i)[0]
ij = most_common[ind_[0]]
for ino, imat in enumerate(material_):
if (ij[0] in ind_mat_mm[ino]) and (ij[1] <= freq_rmv[ino]):
rmv_indices.append(int(ij[0]))
else:
if write_to_console is not None:
write_to_console("Something Fishy in Remove Freq Class module")
for ino, imat in enumerate(material_):
for i in rmv_indices:
if i in ind_mat_mm[ino]:
indd = np.where(ind_mat_mm[ino] == i)[0]
ind_mat_mm[ino] = np.delete(ind_mat_mm[ino], indd, axis=0)
loc_new = np.delete(loc, rmv_indices)
occurances = [most_common[i][1] for i in range(len(most_common)) if int(most_common[i][0]) in loc_new]
occurances = np.array(occurances)
class_weight = {}
class_weight_temp = {}
count = 0
for i in loc_new:
for ij in most_common:
if int(ij[0]) == i:
class_weight[count] = int(np.max(occurances)/ij[1])
class_weight_temp[int(ij[0])] = int(np.max(occurances)/ij[1])
count += 1
for occ in range(len(most_common)):
if int(most_common[occ][0]) in loc_new:
if write_to_console is not None:
suffix_string = ""
for ino, imat in enumerate(material_):
if int(most_common[occ][0]) in ind_mat_mm[ino]:
suffix_string = "; material: "+imat
if int(most_common[occ][0]) == -100:
write_to_console("Unclassified HKL (-100); occurance : "+str(most_common[occ][1])+\
"; NN_weights : 0.0 "+suffix_string)
else:
write_to_console("HKL : " +str(classhkl[int(most_common[occ][0])])+"; occurance : "+\
str(most_common[occ][1])+\
"; NN_weights : "+ \
str(class_weight_temp[int(most_common[occ][0])])+suffix_string)
if write_to_console is not None:
write_to_console(str(len(rmv_indices))+ " classes removed from the classHKL object [removal frequency: "+\
str(freq_rmv)+"] (before:"+str(len(classhkl))+", now:"+str(len(classhkl)-len(rmv_indices))+")")
print(str(len(rmv_indices))+ " classes removed from the classHKL object [removal frequency: "+\
str(freq_rmv)+"] (before:"+str(len(classhkl))+", now:"+str(len(classhkl)-len(rmv_indices))+")")
if len(rmv_indices) == len(classhkl):
if write_to_console is not None:
write_to_console("Error; no classes left in the classhkl array; please reduce frequency to remove some classes")
else:
print("Error; no classes left in the classhkl array; please reduce frequency to remove some classes")
return None
classhkl = np.delete(classhkl, rmv_indices, axis=0)
## save the altered classHKL object
np.savez_compressed(save_directory+'//MOD_grain_classhkl_angbin.npz', classhkl, angbins, loc_new,
rmv_indices, freq_rmv, ind_mat_mm)
with open(save_directory + "//class_weights.pickle", "wb") as output_file:
cPickle.dump([class_weight], output_file)
if write_to_console is not None:
write_to_console("Saved class weights data")
def predict_preprocessMultiMatProcess(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, emin, emax,
material_, symmetry, lim_x, lim_y,
strain_calculation, ind_mat,
model_direc=None, tolerance =None,
matricies=None, ccd_label=None,
filename_bkg=None,intensity_threshold=None,
boxsize=None,bkg_treatment=None,
filenameDirec=None, experimental_prefix=None,
blacklist_file =None, text_file=None,
files_treated=None,try_previous1=False,
wb=None, temp_key=None, cor_file_directory=None, mode_spotCycle1=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None,tolerance_strain123=None,\
NumberMaxofFits123=None,fit_peaks_gaussian_global123=None,
FitPixelDev_global123=None,coeff123=None, coeff_overlap=None,
material0_limit=None, use_previous_UBmatrix_name=None,
material_phase_always_present=None, crystal=None, strain_free_parameters=None):
if files in files_treated:
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
call_global()
# print("Predicting for "+files)
if files.split(".")[-1] != "cor":
CCDLabel=ccd_label
seednumber = "Experimental "+CCDLabel+" file"
try:
out_name = blacklist_file
except:
out_name = None
if bkg_treatment is None:
bkg_treatment = "A-B"
try:
### Max space = space between pixels
peak_XY = RMCCD.PeakSearch(
files,
stackimageindex = -1,
CCDLabel=CCDLabel,
NumberMaxofFits=NumberMaxofFits123,
PixelNearRadius=10,
removeedge=2,
IntensityThreshold=intensity_threshold,
local_maxima_search_method=0,
boxsize=boxsize,
position_definition=1,
verbose=0,
fit_peaks_gaussian=fit_peaks_gaussian_global123,
xtol=0.001,
FitPixelDev=FitPixelDev_global123,
return_histo=0,
# Saturation_value=1e10, # to be merged in CCDLabel
# Saturation_value_flatpeak=1e10,
MinIntensity=0,
PeakSizeRange=(0.65,200),
write_execution_time=1,
Data_for_localMaxima = "auto_background",
formulaexpression=bkg_treatment,
Remove_BlackListedPeaks_fromfile=out_name,
reject_negative_baseline=True,
Fit_with_Data_for_localMaxima=False,
maxPixelDistanceRejection=15.0,
)
peak_XY = peak_XY[0]#[:,:2] ##[2] Integer peak lists
except:
print("Error in Peak detection for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
# files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
try:
s_ix = np.argsort(peak_XY[:, 2])[::-1]
peak_XY = peak_XY[s_ix]
except Exception:
print("Error in Peak detection for "+ files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
# files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
framedim = dictLT.dict_CCD[CCDLabel][0]
twicetheta, chi = Lgeo.calc_uflab(peak_XY[:,0], peak_XY[:,1], detectorparameters,
returnAngles=1,
pixelsize=pixelsize,
kf_direction='Z>0')
data_theta, data_chi = twicetheta/2., chi
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peak_XY[:,0]
dict_dp['peakY']=peak_XY[:,1]
dict_dp['intensity']=peak_XY[:,2]
CCDcalib = {"CCDLabel":CCDLabel,
"dd":detectorparameters[0],
"xcen":detectorparameters[1],
"ycen":detectorparameters[2],
"xbet":detectorparameters[3],
"xgam":detectorparameters[4],
"pixelsize": pixelsize}
path = os.path.normpath(files)
IOLT.writefile_cor(os.path.join(cor_file_directory, path.split(os.sep)[-1].split(".")[0]), twicetheta,
chi, peak_XY[:,0], peak_XY[:,1], peak_XY[:,2],
param=CCDcalib, sortedexit=0)
elif files.split(".")[-1] == "cor":
# print("Entering Cor file read section")
seednumber = "Experimental COR file"
allres = IOLT.readfile_cor(files, True)
data_theta, data_chi, peakx, peaky, intensity = allres[1:6]
CCDcalib = allres[-1]
detectorparameters = allres[-2]
# print('detectorparameters from file are: '+ str(detectorparameters))
pixelsize = CCDcalib['pixelsize']
CCDLabel = CCDcalib['CCDLabel']
framedim = dictLT.dict_CCD[CCDLabel][0]
dict_dp={}
dict_dp['kf_direction']='Z>0'
dict_dp['detectorparameters']=detectorparameters
dict_dp['detectordistance']=detectorparameters[0]
dict_dp['detectordiameter']=pixelsize*framedim[0]
dict_dp['pixelsize']=pixelsize
dict_dp['dim']=framedim
dict_dp['peakX']=peakx
dict_dp['peakY']=peaky
dict_dp['intensity']=intensity
sorted_data = np.transpose(np.array([data_theta, data_chi]))
tabledistancerandom = np.transpose(GT.calculdist_from_thetachi(sorted_data, sorted_data))
codebars_all = []
if len(data_theta) == 0:
print("No peaks Found for : " + files)
for intmat in range(matricies):
rotation_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrix[intmat][0][cnt,:,:] = np.zeros((3,3))
strain_matrixs[intmat][0][cnt,:,:] = np.zeros((3,3))
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
match_rate[intmat][0][cnt] = 0
mat_global[intmat][0][cnt] = 0
spots_len[intmat][0][cnt] = 0
iR_pix[intmat][0][cnt] = 0
fR_pix[intmat][0][cnt] = 0
check[cnt,intmat] = 0
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, \
match_rate, mat_global, cnt, files_treated,spots_len,iR_pix,fR_pix, check, best_match
# print("Entering GOOD section")
spots_in_center = np.arange(0,len(data_theta))
spots_in_center = spots_in_center[:nb_spots_consider]
for i in spots_in_center:
spotangles = tabledistancerandom[i]
spotangles = np.delete(spotangles, i)# removing the self distance
codebars = np.histogram(spotangles, bins=angbins)[0]
# codebars = histogram1d(spotangles, range=[min(angbins),max(angbins)], bins=len(angbins)-1)
## normalize the same way as training data
max_codebars = np.max(codebars)
codebars = codebars / max_codebars if max_codebars > 0 else codebars # guard against an all-zero histogram
codebars_all.append(codebars)
codebars = np.array(codebars_all)
## Do prediction of all spots at once
prediction = predict(codebars, wb, temp_key)
# prediction = model.predict(codebars)
max_pred = np.max(prediction, axis = 1)
class_predicted = np.argmax(prediction, axis = 1)
predicted_hkl123 = classhkl[class_predicted]
predicted_hkl123 = predicted_hkl123.astype(int)
s_tth = data_theta * 2.
s_chi = data_chi
# print("Computing UB")
rotation_matrix1, mr_highest, mat_highest, \
strain_crystal, strain_sample, iR_pix1, \
fR_pix1, spots_len1,\
best_match1, check12 = predict_ub_MM(seednumber, spots_in_center, classhkl,
hkl_all_class0,
files,
s_tth1=s_tth,s_chi1=s_chi,
predicted_hkl1=predicted_hkl123,
class_predicted1=class_predicted,
max_pred1=max_pred,
emin=emin,emax=emax,
material_=material_,
lim_y=lim_y, lim_x=lim_x,
cnt=cnt,
dict_dp=dict_dp,
rotation_matrix=rotation_matrix,
mat_global=mat_global,
strain_calculation=strain_calculation,
ind_mat=ind_mat,
tolerance=tolerance,
matricies=matricies,
tabledistancerandom=tabledistancerandom,
text_file = text_file,
try_previous1=try_previous1,
mode_spotCycle=mode_spotCycle1,
softmax_threshold_global123 = softmax_threshold_global123,
mr_threshold_global123=mr_threshold_global123,
cap_matchrate123=cap_matchrate123,
tolerance_strain123=tolerance_strain123,
coeff123=coeff123,
coeff_overlap=coeff_overlap,
material0_limit=material0_limit,
model_direc=model_direc,
use_previous_UBmatrix_name=use_previous_UBmatrix_name,
material_phase_always_present=material_phase_always_present,
match_rate=match_rate,
check=check[cnt,:],
crystal=crystal,
angbins=angbins,
wb=wb, temp_key=temp_key,
strain_free_parameters=strain_free_parameters)
for intmat in range(matricies):
if len(rotation_matrix1[intmat]) == 0:
col[intmat][0][cnt,:] = 0,0,0
colx[intmat][0][cnt,:] = 0,0,0
coly[intmat][0][cnt,:] = 0,0,0
else:
mat_global[intmat][0][cnt] = mat_highest[intmat][0]
final_symm = symmetry[mat_highest[intmat][0]-1]
final_crystal = crystal[mat_highest[intmat][0]-1]
symm_operator = final_crystal._hklsym
strain_matrix[intmat][0][cnt,:,:] = strain_crystal[intmat][0]
strain_matrixs[intmat][0][cnt,:,:] = strain_sample[intmat][0]
rotation_matrix[intmat][0][cnt,:,:] = rotation_matrix1[intmat][0]
col_temp = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 0., 1.]), final_symm, symm_operator)
col[intmat][0][cnt,:] = col_temp
col_tempx = get_ipf_colour(rotation_matrix1[intmat][0], np.array([1., 0., 0.]), final_symm, symm_operator)
colx[intmat][0][cnt,:] = col_tempx
col_tempy = get_ipf_colour(rotation_matrix1[intmat][0], np.array([0., 1., 0.]), final_symm, symm_operator)
coly[intmat][0][cnt,:] = col_tempy
match_rate[intmat][0][cnt] = mr_highest[intmat][0]
spots_len[intmat][0][cnt] = spots_len1[intmat][0]
iR_pix[intmat][0][cnt] = iR_pix1[intmat][0]
fR_pix[intmat][0][cnt] = fR_pix1[intmat][0]
best_match[intmat][0][cnt] = best_match1[intmat][0]
check[cnt,intmat] = check12[intmat]
files_treated.append(files)
return strain_matrix, strain_matrixs, rotation_matrix, col, colx, coly, match_rate, \
mat_global, cnt, files_treated, spots_len, iR_pix, fR_pix, check, best_match
def new_MP_multimat_function(argu):
files, cnt, rotation_matrix, strain_matrix, strain_matrixs,\
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,mat_global,\
check,detectorparameters,pixelsize,angbins,\
classhkl, hkl_all_class0, emin, emax,\
material_, symmetry, lim_x, lim_y,\
strain_calculation, ind_mat,\
model_direc, tolerance,\
matricies, ccd_label,\
filename_bkg,intensity_threshold,\
boxsize,bkg_treatment,\
filenameDirec, experimental_prefix,\
blacklist_file, text_file, \
files_treated,try_previous1,\
wb, temp_key, cor_file_directory, mode_spotCycle1,\
softmax_threshold_global123,mr_threshold_global123,\
cap_matchrate123, tolerance_strain123,\
NumberMaxofFits123,fit_peaks_gaussian_global123,\
FitPixelDev_global123,coeff123,coeff_overlap,\
material0_limit, use_previous_UBmatrix_name1,\
material_phase_always_present1, crystal, strain_free_parameters = argu
strain_matrix12, strain_matrixs12, \
rotation_matrix12, col12, \
colx12, coly12,\
match_rate12, mat_global12, cnt12,\
files_treated12, spots_len12, \
iR_pix12, fR_pix12, check12, best_match12 = predict_preprocessMultiMatProcess(files, cnt,
rotation_matrix,strain_matrix,strain_matrixs,
col,colx,coly,match_rate,spots_len,iR_pix,fR_pix,best_match,
mat_global,
check,detectorparameters,pixelsize,angbins,
classhkl, hkl_all_class0, emin, emax,
material_, symmetry,lim_x, lim_y,
strain_calculation, ind_mat,
model_direc, tolerance,
matricies, ccd_label,
filename_bkg,intensity_threshold,
boxsize,bkg_treatment,
filenameDirec, experimental_prefix,
blacklist_file, text_file,
files_treated,try_previous1,
wb, temp_key, cor_file_directory, mode_spotCycle1,
softmax_threshold_global123,mr_threshold_global123,
cap_matchrate123, tolerance_strain123,
NumberMaxofFits123,
fit_peaks_gaussian_global123,
FitPixelDev_global123, coeff123,coeff_overlap,
material0_limit,
use_previous_UBmatrix_name1,
material_phase_always_present1,
crystal, strain_free_parameters)
meta = {}
return strain_matrix12, strain_matrixs12, rotation_matrix12, col12, \
colx12, coly12, match_rate12, mat_global12, cnt12, meta, \
files_treated12, spots_len12, iR_pix12, fR_pix12, best_match12, check12
def predict_ub_MM(seednumber, spots_in_center, classhkl, hkl_all_class0,
filename,
s_tth1,s_chi1,predicted_hkl1,class_predicted1,max_pred1,
emin, emax, material_, lim_y, lim_x, cnt,
dict_dp,rotation_matrix,mat_global,strain_calculation,
ind_mat,
tolerance=None, matricies=None, tabledistancerandom=None,
text_file=None, try_previous1=False, mode_spotCycle=None,
softmax_threshold_global123=None,mr_threshold_global123=None,
cap_matchrate123=None, tolerance_strain123=None, coeff123=None,
coeff_overlap=None, material0_limit=None, model_direc=None,
use_previous_UBmatrix_name=None, material_phase_always_present=None, match_rate=None,
check = None, crystal=None, angbins=None, wb=None, temp_key=None,
strain_free_parameters=None):
input_params = {"tolerance": tolerance,
"tolerancestrain": tolerance_strain123, ## For strain calculations
"emin": emin,
"emax": emax,
"mat":0}
call_global()
strain_matrix = [[] for i in range(matricies)]
strain_matrixs = [[] for i in range(matricies)]
best_matrix = [[] for i in range(matricies)]
mr_highest = [[] for i in range(matricies)]
ir_pixels = [[] for i in range(matricies)]
fr_pixels = [[] for i in range(matricies)]
spots_len = [[] for i in range(matricies)]
mat_highest = [[] for i in range(matricies)]
best_match = [[] for i in range(matricies)]
spots1 = []
spots1_global = [[] for i in range(matricies)]
dist = tabledistancerandom
## one time calculations
B0mat, Gstar_metric0mat, tab_distance_classhkl_data0mat = [], [], []
for ino, imat in enumerate(material_):
lattice_params00 = dictLT.dict_Materials[imat][1]
B00 = CP.calc_B_RR(lattice_params00)
Gstar_metric00 = CP.Gstar_from_directlatticeparams(lattice_params00[0],lattice_params00[1],\
lattice_params00[2],lattice_params00[3],\
lattice_params00[4],lattice_params00[5])
tab_distance_classhkl_data00 = get_material_dataP(Gstar_metric00, predicted_hkl1[:nb_spots_consider,:])
B0mat.append(B00)
Gstar_metric0mat.append(Gstar_metric00)
tab_distance_classhkl_data0mat.append(tab_distance_classhkl_data00)
spots = []
first_match = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr = 0
mat = 0
iR = 0
fR = 0
strain_crystal = np.zeros((3,3))
strain_sample = np.zeros((3,3))
material0_count = [0 for _ in range(len(material_))]
objective_function1 = None
for igrain in range(matricies):
### old version
if mode_spotCycle == "slow":
# print("Slow mode of analysis")
first_match, max_mr, min_mr, spots, \
case, mat, strain_crystal, \
strain_sample, iR, fR = get_orient_matMM(s_tth1, s_chi1,
material_, classhkl,
class_predicted1, predicted_hkl1,
input_params, hkl_all_class0,
max_pred1, dict_dp,
spots1, dist,
Gstar_metric0mat, B0mat,
softmax_threshold=softmax_threshold_global123,
mr_threshold=mr_threshold_global123,
tab_distance_classhkl_data0=tab_distance_classhkl_data0mat,
spots1_global = spots1_global,
coeff_overlap = coeff_overlap,
ind_mat=ind_mat,
strain_calculation=strain_calculation,
cap_matchrate123=cap_matchrate123,
material0_count=material0_count,
material0_limit=material0_limit,
igrain=igrain,
material_phase_always_present=material_phase_always_present,
strain_free_parameters=strain_free_parameters)
else:
print("selected mode of treating spots is not ready")
for ispot in spots:
spots1.append(ispot)
spots1_global[igrain].append(ispot)
## make copy of best rotation matrix
best_match[igrain].append(np.copy(first_match))
best_matrix[igrain].append(np.copy(first_match[14]))
mr_highest[igrain].append(np.copy(max_mr))
mat_highest[igrain].append(np.copy(mat))
ir_pixels[igrain].append(np.copy(iR))
fr_pixels[igrain].append(np.copy(fR))
spots_len[igrain].append(np.copy(len(spots)))
strain_matrix[igrain].append(np.copy(strain_crystal))
strain_matrixs[igrain].append(np.copy(strain_sample))
if np.any(first_match[14] != 0): # a UB matrix was found (not the all-zero placeholder)
check[igrain] = 1
if mat > 0: # mat == 0 means no phase was assigned; indexing with mat-1 would wrongly bump the last counter
material0_count[mat-1] = material0_count[mat-1]+1
return best_matrix, mr_highest, mat_highest, strain_matrix, strain_matrixs, ir_pixels, fr_pixels, spots_len, best_match, check
def get_orient_matMM(s_tth, s_chi, material0_, classhkl, class_predicted, predicted_hkl,
input_params, hkl_all_class0, max_pred, dict_dp, spots,
dist, Gstar_metric0, B0, softmax_threshold=0.85, mr_threshold=0.85,
tab_distance_classhkl_data0=None, spots1_global=None,
coeff_overlap = None, ind_mat=None, strain_calculation=None,cap_matchrate123=None,
material0_count=None, material0_limit=None,
igrain=None, material_phase_always_present=None, strain_free_parameters=None):
call_global()
init_mr = 0
init_mat = 0
init_material = "None"
init_case = "None"
init_B = None
final_match_rate = 0
match_rate_mma = []
final_rmv_ind = []
current_spots1 = [0 for igr in range(len(spots1_global))]
mat = 0
case = "None"
all_stats = []
for i in range(0, min(nb_spots_consider, len(s_tth))):
for j in range(i+1, min(nb_spots_consider, len(s_tth))):
overlap = False
if (max_pred[j] < softmax_threshold) or (j in spots) or \
(max_pred[i] < softmax_threshold) or (i in spots):
continue
mat = 0
case = "None"
input_params["mat"] = mat
input_params["Bmat"] = None
for ino, imat in enumerate(material0_):
if ino == 0:
if class_predicted[i] < ind_mat[ino] and class_predicted[j] < ind_mat[ino] :
tab_distance_classhkl_data = tab_distance_classhkl_data0[ino]
hkl_all_class = hkl_all_class0[ino]
material_ = imat
B = B0[ino]
Gstar_metric = Gstar_metric0[ino]
case = imat
mat = ino + 1
if material0_count[ino] >= material0_limit[ino]:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
else:
if (ind_mat[ino-1] <= class_predicted[i] < ind_mat[ino]) and \
(ind_mat[ino-1] <= class_predicted[j] < ind_mat[ino]):
tab_distance_classhkl_data = tab_distance_classhkl_data0[ino]
hkl_all_class = hkl_all_class0[ino]
material_ = imat
B = B0[ino]
Gstar_metric = Gstar_metric0[ino]
case = imat
mat = ino + 1
if material0_count[ino] >= material0_limit[ino]:
mat = 0
case="None"
input_params["mat"] = mat
input_params["Bmat"] = B
if mat == 0:
continue
tth_chi_spot1 = np.array([s_tth[i], s_chi[i]])
tth_chi_spot2 = np.array([s_tth[j], s_chi[j]])
hkl1 = hkl_all_class[str(predicted_hkl[i])]
hkl1_list = np.array(hkl1)
hkl2 = hkl_all_class[str(predicted_hkl[j])]
hkl2_list = np.array(hkl2)
actual_mat, flagAM, \
spot1_hkl, spot2_hkl = propose_UB_matrixMM(hkl1_list, hkl2_list,
Gstar_metric, input_params,
dist[i,j],
tth_chi_spot1, tth_chi_spot2,
B, method=0)
if flagAM:
continue
for iind in range(len(actual_mat)):
rot_mat123 = actual_mat[iind]
rmv_ind, theospots = remove_spotsMM(s_tth, s_chi, rot_mat123,
material_, input_params,
dict_dp['detectorparameters'], dict_dp)
overlap = False
current_spots = [len(list(set(rmv_ind) & set(spots1_global[igr]))) for igr in range(len(spots1_global))]
for igr in range(len(spots1_global)):
if current_spots[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
break
if overlap:
continue
if theospots == 0: # no simulated spots reached the detector for this candidate UB
continue
match_rate = np.round(100 * len(rmv_ind)/theospots, 3)
match_rate_mma.append(match_rate)
if match_rate > init_mr:
current_spots1 = current_spots
init_mat = np.copy(mat)
input_params["mat"] = init_mat
init_material = np.copy(material_)
init_case = np.copy(case)
init_B = np.copy(B)
input_params["Bmat"] = init_B
final_rmv_ind = rmv_ind
final_match_rate = np.copy(match_rate)
init_mr = np.copy(match_rate)
all_stats = [i, j, \
spot1_hkl[iind], spot2_hkl[iind], \
tth_chi_spot1, tth_chi_spot2, \
dist[i,j], tab_distance_classhkl_data[i,j], np.round(max_pred[i]*100,3), \
np.round(max_pred[j]*100,3), len(rmv_ind), theospots,\
match_rate, 0.0, rot_mat123]
if (final_match_rate >= mr_threshold*100.) and not overlap:
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUBMM(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
overlap = False
for igr in range(len(spots1_global)):
if current_spots1[igr] > coeff_overlap*len(spots1_global[igr]):
overlap = True
if (final_match_rate <= cap_matchrate123) or overlap: ## Nothing found!!
## Either peaks are not well defined or not found within tolerance and prediction accuracy
all_stats = [0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, np.zeros((3,3))]
max_mr, min_mr = 0, 0
spot_ind = []
mat = 0
input_params["mat"] = 0
case = "None"
return all_stats, max_mr, min_mr, spot_ind, case, mat, np.zeros((3,3)), np.zeros((3,3)), 0, 0
input_params["mat"] = init_mat
if strain_calculation:
dev_strain, strain_sample, iR, fR, rot_mat_UB = calculate_strains_fromUBMM(s_tth, s_chi, all_stats[14], str(init_material),
input_params, dict_dp['detectorparameters'],
dict_dp, spots, init_B,
strain_free_parameters)
else:
dev_strain, strain_sample, iR, fR = np.zeros((3,3)), np.zeros((3,3)), 0, 0
rot_mat_UB = np.copy(all_stats[14])
all_stats[14] = rot_mat_UB
return all_stats, np.max(match_rate_mma), np.min(match_rate_mma), \
final_rmv_ind, str(init_case), init_mat, dev_strain, strain_sample, iR, fR
def propose_UB_matrixMM(hkl1_list, hkl2_list, Gstar_metric, input_params, dist123,
tth_chi_spot1, tth_chi_spot2, B, method=0, crystal=None):
if method == 0:
tab_angulardist_temp = CP.AngleBetweenNormals(hkl1_list, hkl2_list, Gstar_metric)
list_ = np.where(np.abs(tab_angulardist_temp-dist123) < input_params["tolerance"][input_params["mat"]-1])
if crystal is not None:
final_crystal=crystal[input_params["mat"]-1]
symm_operator = final_crystal._hklsym
else:
symm_operator = np.eye(3)
if len(list_[0]) == 0:
return None, True, 0, 0
rot_mat_abs = []
actual_mat = []
spot1_hkl = []
spot2_hkl = []
triedspots = []
for ii, jj in zip(list_[0], list_[1]):
if ii in triedspots and jj in triedspots:
continue
conti_ = False
try:
rot_mat1 = FindO.OrientMatrix_from_2hkl(hkl1_list[ii], tth_chi_spot1, \
hkl2_list[jj], tth_chi_spot2,
B)
# rot_mat1 = find_uniq_u(rot_mat1, symm_operator)
except Exception:
continue
copy_rm = np.copy(rot_mat1)
copy_rm = np.round(np.abs(copy_rm),5)
copy_rm.sort(axis=1)
for iji in rot_mat_abs:
iji.sort(axis=1)
if np.all(iji==copy_rm):
conti_ = True
break
if conti_:
continue
rot_mat_abs.append(np.round(np.abs(rot_mat1),5))
actual_mat.append(rot_mat1)
spot1_hkl.append(hkl1_list[ii])
spot2_hkl.append(hkl2_list[jj])
triedspots.append(ii)
triedspots.append(jj)
else:
# method 2
hkl_all = np.vstack((hkl1_list, hkl2_list))
LUT = FindO.GenerateLookUpTable(hkl_all, Gstar_metric)
hkls = FindO.PlanePairs_2(dist123, input_params["tolerance"][input_params["mat"]-1], LUT, onlyclosest=1)
if hkls is None:
return None, True, 0, 0
rot_mat_abs = []
actual_mat = []
spot1_hkl = []
spot2_hkl = []
for ii in range(len(hkls)):
if np.all(hkls[ii][0] == hkls[ii][1]):
continue
conti_ = False
try:
rot_mat1 = FindO.OrientMatrix_from_2hkl(hkls[ii][0], tth_chi_spot1, \
hkls[ii][1], tth_chi_spot2,
B)
# rot_mat1 = find_uniq_u(rot_mat1, symm_operator)
except Exception:
continue
copy_rm = np.copy(rot_mat1)
copy_rm = np.round(np.abs(copy_rm),5)
copy_rm.sort(axis=1)
for iji in rot_mat_abs:
iji.sort(axis=1)
if np.all(iji==copy_rm):
conti_ = True
break
if conti_:
continue
rot_mat_abs.append(np.round(np.abs(rot_mat1),5))
actual_mat.append(rot_mat1)
spot1_hkl.append(hkls[ii][0])
spot2_hkl.append(hkls[ii][1])
#TODO
## just fixing a* to x seems ok; if not think of aligning b* to xy plane
sum_sign = []
for nkl in range(len(actual_mat)):
temp_mat = np.dot(actual_mat[nkl], B)
## fix could be to choose a matrix that aligns best the b* vector to Y axis or a* to X axis
# if np.argmax(np.abs(temp_mat[:2,0])) == 0 and \
# np.argmax(np.abs(temp_mat[:2,1])) == 1: ##a* along x, b*along y
if np.argmax(np.abs(temp_mat[:2,0])) == 0: ##a* along x
sum_sign.append(2)
elif np.argmax(np.abs(temp_mat[:2,0])) == np.argmax(np.abs(temp_mat[:2,1])):
sum_sign.append(0)
else:
sum_sign.append(1)
ind_sort = np.argsort(sum_sign)[::-1]
## re-arrange
actual_mat1 = []
spot1_hkl1, spot2_hkl1 = [], []
for inin in ind_sort:
actual_mat1.append(actual_mat[inin])
spot1_hkl1.append(spot1_hkl[inin])
spot2_hkl1.append(spot2_hkl[inin])
actual_mat, spot1_hkl, spot2_hkl = actual_mat1, spot1_hkl1, spot2_hkl1
return actual_mat, False, spot1_hkl, spot2_hkl
def remove_spotsMM(s_tth, s_chi, first_match123, material_, input_params, detectorparameters, dict_dp):
try:
grain = CP.Prepare_Grain(material_, first_match123, dictmaterials=dictLT.dict_Materials)
### initialize global variables to be used later
call_global()
except Exception:
return [], 100
#### Perhaps better than SimulateResult function
kf_direction = dict_dp["kf_direction"]
detectordistance = dict_dp["detectorparameters"][0]
detectordiameter = dict_dp["detectordiameter"]
pixelsize = dict_dp["pixelsize"]
dim = dict_dp["dim"]
spots2pi = LT.getLaueSpots(CST_ENERGYKEV / input_params["emax"],
CST_ENERGYKEV / input_params["emin"],
[grain],
fastcompute=1,
verbose=0,
kf_direction=kf_direction,
ResolutionAngstrom=False,
dictmaterials=dictLT.dict_Materials)
TwicethetaChi = LT.filterLaueSpots_full_np(spots2pi[0][0], None, onlyXYZ=False,
HarmonicsRemoval=0,
fastcompute=1,
kf_direction=kf_direction,
detectordistance=detectordistance,
detectordiameter=detectordiameter,
pixelsize=pixelsize,
dim=dim)
## get proximity for exp and theo spots
if input_params["mat"] == 0:
return [], 100
angtol = input_params["tolerance"][input_params["mat"] -1]
if option_global == "v1":
# print("entering v1")
List_Exp_spot_close, residues_link, _ = getProximityv1(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
else: # "v2" and any other option take the same ambiguity-tolerant path
List_Exp_spot_close, residues_link, _ = getProximityv1_ambigious(np.array([TwicethetaChi[0], TwicethetaChi[1]]), # warning array(2theta, chi)
s_tth/2.0, s_chi, # warning theta, chi for exp
angtol=angtol)
List_Exp_spot_close, ind_uniq = np.unique(List_Exp_spot_close, return_index=True)
residues_link = np.take(residues_link, ind_uniq)
if np.average(residues_link) > residues_threshold:
return [], 100
if len(List_Exp_spot_close) < nb_spots_global_threshold: # already unique after np.unique above
return [], 100
return List_Exp_spot_close, len(TwicethetaChi[0])
def calculate_strains_fromUBMM(s_tth, s_chi, UBmat, material_, input_params,
detectorparameters, dict_dp, spots, B_matrix, strain_free_parameters):
## for the moment strain_free_parameters is a trial implementation
#TODO to be verified
if ("a" not in strain_free_parameters) and len(strain_free_parameters)>=5:
if additional_expression[0] != "none":
print("Note: additional_expression is not applied for the current set of strain free parameters")
# starting B0matrix corresponding to the unit cell -----
B0matrix = np.copy(B_matrix)
latticeparams = dictLT.dict_Materials[material_][1]
## Included simple multi level refinement of strains
init_residues = -0.1
final_residues = -0.1
straintolerance = input_params["tolerancestrain"][input_params["mat"]-1]
devstrain, deviatoricstrain_sampleframe = np.zeros((3,3)), np.zeros((3,3))
for ijk, AngTol in enumerate(straintolerance):
#### Spots in first match (no refining, just simple auto links to filter spots)
grain = CP.Prepare_Grain(material_, UBmat, dictmaterials=dictLT.dict_Materials)
Twicetheta, Chi, Miller_ind, posx, posy, _ = LT.SimulateLaue(grain,
input_params["emin"],
input_params["emax"],
detectorparameters,
kf_direction=dict_dp['kf_direction'],
removeharmonics=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
ResolutionAngstrom=False,
detectordiameter=dict_dp['detectordiameter'],
dictmaterials=dictLT.dict_Materials)
## get proximity for exp and theo spots
linkedspots_link, linkExpMiller_link, \
linkResidues_link = getProximityv0(np.array([Twicetheta, Chi]), # warning array(2theta, chi)
s_tth/2.0, s_chi, Miller_ind, # warning theta, chi for exp
angtol=float(AngTol))
if len(linkedspots_link) < 8:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
linkedspots_fit = linkedspots_link
linkExpMiller_fit = linkExpMiller_link
arraycouples = np.array(linkedspots_fit)
exp_indices = np.array(arraycouples[:, 0], dtype=int) # np.int alias was removed in NumPy >= 1.24
sim_indices = np.array(arraycouples[:, 1], dtype=int)
nb_pairs = len(exp_indices)
Data_Q = np.array(linkExpMiller_fit)[:, 1:]
sim_indices = np.arange(nb_pairs) # for fitting function this must be an arange...
pixX = np.take(dict_dp['peakX'], exp_indices)
pixY = np.take(dict_dp['peakY'], exp_indices)
weights = None #np.take(dict_dp['intensity'], exp_indices)
starting_orientmatrix = np.copy(UBmat)
results = None
# ----------------------------------
# refinement model
# ----------------------------------
# -------------------------------------------------------
allparameters = np.array(detectorparameters + [1, 1, 0, 0, 0] + [0, 0, 0])
# strain & orient
initial_values = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0, 0.0, 0.0])
arr_indexvaryingparameters = np.arange(5, 13)
residues, deltamat, newmatrix = FitO.error_function_on_demand_strain(
initial_values,
Data_Q,
allparameters,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pureRotation=0,
verbose=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'])
init_mean_residues = np.copy(np.mean(residues))
if ijk == 0:
init_residues = np.copy(init_mean_residues)
results = FitO.fit_on_demand_strain(initial_values,
Data_Q,
allparameters,
FitO.error_function_on_demand_strain,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
verbose=0,
weights=weights,
kf_direction=dict_dp['kf_direction'])
if results is None:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
residues, deltamat, newmatrix = FitO.error_function_on_demand_strain(
results,
Data_Q,
allparameters,
arr_indexvaryingparameters,
sim_indices,
pixX,
pixY,
initrot=starting_orientmatrix,
Bmat=B0matrix,
pureRotation=0,
verbose=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'])
# if np.mean(residues) > final_residues:
# return devstrain, deviatoricstrain_sampleframe, init_residues, final_residues, UBmat
final_mean_residues = np.copy(np.mean(residues))
final_residues = np.copy(final_mean_residues)
# building B mat
# param_strain_sol = results
# varyingstrain = np.array([[1.0, param_strain_sol[2], param_strain_sol[3]],
# [0, param_strain_sol[0], param_strain_sol[4]],
# [0, 0, param_strain_sol[1]]])
# newUmat = np.dot(deltamat, starting_orientmatrix)
# newUBmat = np.dot(newUmat, varyingstrain)
newUBmat = np.copy(newmatrix)
# Bstar_s = np.dot(newUBmat, B0matrix)
# ---------------------------------------------------------------
# postprocessing of unit cell orientation and strain refinement
# ---------------------------------------------------------------
UBmat = np.copy(newmatrix)
(devstrain, lattice_parameter_direct_strain) = CP.compute_deviatoricstrain(newUBmat, B0matrix, latticeparams)
# possibly overwrite and rescale lattice lengths
# constantlength = "a"
# lattice_parameter_direct_strain = CP.computeLatticeParameters_from_UB(newUBmat, material_, constantlength, dictmaterials=dictLT.dict_Materials)
# print(lattice_parameter_direct_strain)
deviatoricstrain_sampleframe = CP.strain_from_crystal_to_sample_frame2(devstrain, newUBmat)
# in % already
devstrain = np.round(devstrain * 100, decimals=3)
deviatoricstrain_sampleframe = np.round(deviatoricstrain_sampleframe * 100, decimals=3)
else:
# starting B0matrix corresponding to the unit cell -----
B0matrix = np.copy(B_matrix)
latticeparams = dictLT.dict_Materials[material_][1]
## Included simple multi level refinement of strains
init_residues = -0.1
final_residues = -0.1
straintolerance = input_params["tolerancestrain"][input_params["mat"]-1]
devstrain, deviatoricstrain_sampleframe = np.zeros((3,3)), np.zeros((3,3))
for ijk, AngTol in enumerate(straintolerance):
#### Spots in first match (no refining, just simple auto links to filter spots)
grain = CP.Prepare_Grain(material_, UBmat, dictmaterials=dictLT.dict_Materials)
Twicetheta, Chi, Miller_ind, posx, posy, _ = LT.SimulateLaue(grain,
input_params["emin"],
input_params["emax"],
detectorparameters,
kf_direction=dict_dp['kf_direction'],
removeharmonics=1,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
ResolutionAngstrom=False,
detectordiameter=dict_dp['detectordiameter'],
dictmaterials=dictLT.dict_Materials)
## get proximity for exp and theo spots
linkedspots_link, linkExpMiller_link, \
linkResidues_link = getProximityv0(np.array([Twicetheta, Chi]), # warning array(2theta, chi)
s_tth/2.0, s_chi, Miller_ind, # warning theta, chi for exp
angtol=float(AngTol))
if len(linkedspots_link) < 8:
return np.zeros((3,3)), np.zeros((3,3)), init_residues, final_residues, UBmat
linkedspots_fit = linkedspots_link
linkExpMiller_fit = linkExpMiller_link
arraycouples = np.array(linkedspots_fit)
exp_indices = np.array(arraycouples[:, 0], dtype=int) # np.int alias was removed in NumPy >= 1.24
sim_indices = np.array(arraycouples[:, 1], dtype=int)
nb_pairs = len(exp_indices)
Data_Q = np.array(linkExpMiller_fit)[:, 1:]
sim_indices = np.arange(nb_pairs) # for fitting function this must be an arange...
pixX = np.take(dict_dp['peakX'], exp_indices)
pixY = np.take(dict_dp['peakY'], exp_indices)
weights = None #np.take(dict_dp['intensity'], exp_indices)
starting_orientmatrix = np.copy(UBmat)
results = None
# ----------------------------------
# refinement model
# ----------------------------------
# -------------------------------------------------------
allparameters = np.array(detectorparameters + [0, 0, 0] + latticeparams)
fitting_parameters_keys = ["anglex", "angley", "anglez"]
fitting_parameters_values = [0, 0, 0]
constantlength = "a"
if ("a" in strain_free_parameters) and ("b" in strain_free_parameters) and ("c" in strain_free_parameters):
constantlength = "a"
elif ("b" not in strain_free_parameters) and additional_expression[0]=="none" and\
"b" not in additional_expression[0]:
constantlength = "b"
elif ("c" not in strain_free_parameters):
constantlength = "c"
for jjkk in strain_free_parameters:
    if jjkk == "a" and constantlength != "a":
        fitting_parameters_keys.append("a")
        fitting_parameters_values.append(latticeparams[0])
    elif jjkk == "b" and constantlength != "b":
        fitting_parameters_keys.append("b")
        fitting_parameters_values.append(latticeparams[1])
    elif jjkk == "c" and constantlength != "c":
        fitting_parameters_keys.append("c")
        fitting_parameters_values.append(latticeparams[2])
    elif jjkk == "alpha":
        fitting_parameters_keys.append("alpha")
        fitting_parameters_values.append(latticeparams[3])
    elif jjkk == "beta":
        fitting_parameters_keys.append("beta")
        fitting_parameters_values.append(latticeparams[4])
    elif jjkk == "gamma":
        fitting_parameters_keys.append("gamma")
        fitting_parameters_values.append(latticeparams[5])
pureUmatrix, _ = GT.UBdecomposition_RRPP(starting_orientmatrix)
absolutespotsindices = np.arange(len(pixX))
(residues, _, _,
_, _, ) = FitO.error_function_latticeparameters(fitting_parameters_values,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
initrot=pureUmatrix,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
returnalldata=True,
additional_expression=additional_expression[0])
init_mean_residues = np.copy(np.mean(residues))
if ijk == 0:
    init_residues = np.copy(init_mean_residues)
results = FitO.fit_function_latticeparameters(fitting_parameters_values,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
UBmatrix_start=pureUmatrix,
nb_grains=1,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
additional_expression=additional_expression[0])
if results is None:
    return np.zeros((3, 3)), np.zeros((3, 3)), init_residues, final_residues, UBmat
(residues, Uxyz, newUmat,
newB0matrix, _, ) = FitO.error_function_latticeparameters(results,
fitting_parameters_keys,
Data_Q,
allparameters,
absolutespotsindices,
pixX,
pixY,
initrot=pureUmatrix,
pureRotation=0,
verbose=0,
pixelsize=dict_dp['pixelsize'],
dim=dict_dp['dim'],
weights=weights,
kf_direction=dict_dp['kf_direction'],
returnalldata=True,
additional_expression=additional_expression[0])
final_mean_residues = np.copy(np.mean(residues))
final_residues = np.copy(final_mean_residues)
newUBmat = np.dot(np.dot(newUmat, newB0matrix), np.linalg.inv(B0matrix))
UBmat = np.copy(newUBmat)
# ---------------------------------------------------------------
# postprocessing of unit cell orientation and strain refinement
# ---------------------------------------------------------------
(devstrain, lattice_parameter_direct_strain) = CP.compute_deviatoricstrain(newUBmat, B0matrix, latticeparams)
deviatoricstrain_sampleframe = CP.strain_from_crystal_to_sample_frame2(devstrain, newUBmat)
# convert strains to percent and round
devstrain = np.round(devstrain * 100, decimals=3)
deviatoricstrain_sampleframe = np.round(deviatoricstrain_sampleframe * 100, decimals=3)
return devstrain, deviatoricstrain_sampleframe, init_residues, final_residues, UBmat
# --- Coding-Challenges/removeParens/remove_parens.py (FergusDevelopmentLLC/Coders-Workshop, MIT) ---
#!/usr/bin/env python3
def remove_parens(string):
    """Return the minimum number of parentheses to remove so the string is balanced."""
    unmatched_open = 0   # '(' with no matching ')'
    unmatched_close = 0  # ')' with no matching '('
    for ch in string:
        if ch == "(":
            unmatched_open += 1
        elif ch == ")":
            if unmatched_open:
                unmatched_open -= 1
            else:
                unmatched_close += 1
    return unmatched_open + unmatched_close
print(remove_parens("()())()")) # 1
print(remove_parens("()((()()")) # 2
print(remove_parens(")(")) # 2
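# The expected counts in the comments above are the numbers of unmatched
# parentheses. A self-contained variant that also reports *which* positions
# would be deleted can be sketched with an index stack (helper name
# `removal_indices` is illustrative, not part of the original file):

```python
def removal_indices(s):
    """Indices of parentheses to delete so the remainder is balanced (illustrative)."""
    stack = []      # indices of currently unmatched '('
    to_remove = []  # indices of ')' that never found a matching '('
    for i, ch in enumerate(s):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            if stack:
                stack.pop()
            else:
                to_remove.append(i)
    return to_remove + stack  # unmatched ')' plus leftover '('

print(len(removal_indices("()())()")))  # 1
print(removal_indices(")("))            # [0, 1]
```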
# --- angr/procedures/definitions/win32_shell32.py (r4b3rt/angr, BSD-2-Clause) ---
# pylint:disable=line-too-long
import logging
from ...sim_type import SimTypeFunction, SimTypeShort, SimTypeInt, SimTypeLong, SimTypeLongLong, SimTypeDouble, SimTypeFloat, SimTypePointer, SimTypeChar, SimStruct, SimTypeFixedSizeArray, SimTypeBottom, SimUnion, SimTypeBool
from ...calling_conventions import SimCCStdcall, SimCCMicrosoftAMD64
from .. import SIM_PROCEDURES as P
from . import SimLibrary
_l = logging.getLogger(name=__name__)
lib = SimLibrary()
lib.set_default_cc('X86', SimCCStdcall)
lib.set_default_cc('AMD64', SimCCMicrosoftAMD64)
lib.set_library_names("shell32.dll")
prototypes = \
{
#
'SHGetPropertyStoreFromIDList': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="GETPROPERTYSTOREFLAGS"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "flags", "riid", "ppv"]),
#
'SHGetPropertyStoreFromParsingName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IBindCtx"), SimTypeInt(signed=False, label="GETPROPERTYSTOREFLAGS"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "pbc", "flags", "riid", "ppv"]),
#
'SHAddDefaultPropertiesByExt': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IPropertyStore")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszExt", "pPropStore"]),
#
'PifMgr_OpenProperties': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["pszApp", "pszPIF", "hInf", "flOpt"]),
#
'PifMgr_GetProperties': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProps", "pszGroup", "lpProps", "cbProps", "flOpt"]),
#
'PifMgr_SetProperties': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProps", "pszGroup", "lpProps", "cbProps", "flOpt"]),
#
'PifMgr_CloseProperties': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hProps", "flOpt"]),
#
'SHPropStgCreate': SimTypeFunction([SimTypeBottom(label="IPropertySetStorage"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="IPropertyStorage"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psstg", "fmtid", "pclsid", "grfFlags", "grfMode", "dwDisposition", "ppstg", "puCodePage"]),
#
'SHPropStgReadMultiple': SimTypeFunction([SimTypeBottom(label="IPropertyStorage"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"ulKind": SimTypeInt(signed=False, label="PROPSPEC_KIND"), "Anonymous": SimUnion({"propid": SimTypeInt(signed=False, label="UInt32"), "lpwstr": SimTypePointer(SimTypeChar(label="Char"), offset=0)}, name="<anon>", label="None")}, name="PROPSPEC", pack=False, align=None), label="LPArray", offset=0), SimTypePointer(SimStruct({"Anonymous": SimUnion({"Anonymous": SimStruct({"vt": SimTypeShort(signed=False, label="UInt16"), "wReserved1": SimTypeShort(signed=False, label="UInt16"), "wReserved2": SimTypeShort(signed=False, label="UInt16"), "wReserved3": SimTypeShort(signed=False, label="UInt16"), "Anonymous": SimUnion({"cVal": SimTypeBottom(label="CHAR"), "bVal": SimTypeChar(label="Byte"), "iVal": SimTypeShort(signed=True, label="Int16"), "uiVal": SimTypeShort(signed=False, label="UInt16"), "lVal": SimTypeInt(signed=True, label="Int32"), "ulVal": SimTypeInt(signed=False, label="UInt32"), "intVal": SimTypeInt(signed=True, label="Int32"), "uintVal": SimTypeInt(signed=False, label="UInt32"), "hVal": SimTypeBottom(label="LARGE_INTEGER"), "uhVal": SimTypeBottom(label="ULARGE_INTEGER"), "fltVal": SimTypeFloat(size=32), "dblVal": SimTypeFloat(size=64), "boolVal": SimTypeShort(signed=True, label="Int16"), "__OBSOLETE__VARIANT_BOOL": SimTypeShort(signed=True, label="Int16"), "scode": SimTypeInt(signed=True, label="Int32"), "cyVal": SimTypeBottom(label="CY"), "date": SimTypeFloat(size=64), "filetime": SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), "puuid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "pclipdata": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0), "bstrVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "bstrblobVal": 
SimTypeBottom(label="BSTRBLOB"), "blob": SimTypeBottom(label="BLOB"), "pszVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pwszVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "punkVal": SimTypeBottom(label="IUnknown"), "pdispVal": SimTypeBottom(label="IDispatch"), "pStream": SimTypeBottom(label="IStream"), "pStorage": SimTypeBottom(label="IStorage"), "pVersionedStream": SimTypePointer(SimStruct({"guidVersion": SimTypeBottom(label="Guid"), "pStream": SimTypeBottom(label="IStream")}, name="VERSIONEDSTREAM", pack=False, align=None), offset=0), "parray": SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), "cac": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAC", pack=False, align=None), "caub": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAUB", pack=False, align=None), "cai": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CAI", pack=False, align=None), "caui": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)}, name="CAUI", pack=False, align=None), "cal": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CAL", pack=False, align=None), "caul": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CAUL", pack=False, align=None), "cah": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="LARGE_INTEGER"), offset=0)}, name="CAH", pack=False, align=None), "cauh": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": 
SimTypePointer(SimTypeBottom(label="ULARGE_INTEGER"), offset=0)}, name="CAUH", pack=False, align=None), "caflt": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=32), offset=0)}, name="CAFLT", pack=False, align=None), "cadbl": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADBL", pack=False, align=None), "cabool": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CABOOL", pack=False, align=None), "cascode": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CASCODE", pack=False, align=None), "cacy": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CY"), offset=0)}, name="CACY", pack=False, align=None), "cadate": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADATE", pack=False, align=None), "cafiletime": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), offset=0)}, name="CAFILETIME", pack=False, align=None), "cauuid": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="Guid"), offset=0)}, name="CACLSID", pack=False, align=None), "caclipdata": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0)}, name="CACLIPDATA", pack=False, align=None), "cabstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": 
SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CABSTR", pack=False, align=None), "cabstrblob": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="BSTRBLOB"), offset=0)}, name="CABSTRBLOB", pack=False, align=None), "calpstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)}, name="CALPSTR", pack=False, align=None), "calpwstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CALPWSTR", pack=False, align=None), "capropvar": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="CAPROPVARIANT", pack=False, align=None), "pcVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pbVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "piVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "puiVal": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), "plVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pulVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pintVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "puintVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pfltVal": SimTypePointer(SimTypeFloat(size=32), offset=0), "pdblVal": SimTypePointer(SimTypeFloat(size=64), offset=0), "pboolVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "pdecVal": SimTypePointer(SimTypeBottom(label="DECIMAL"), offset=0), "pscode": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pcyVal": SimTypePointer(SimTypeBottom(label="CY"), offset=0), "pdate": SimTypePointer(SimTypeFloat(size=64), offset=0), "pbstrVal": 
SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), "ppunkVal": SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0), "ppdispVal": SimTypePointer(SimTypeBottom(label="IDispatch"), offset=0), "pparray": SimTypePointer(SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), offset=0), "pvarVal": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="<anon>", label="None")}, name="_Anonymous_e__Struct", pack=False, align=None), "decVal": SimTypeBottom(label="DECIMAL")}, name="<anon>", label="None")}, name="PROPVARIANT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pps", "uCodePage", "cpspec", "rgpspec", "rgvar"]),
#
'SHPropStgWriteMultiple': SimTypeFunction([SimTypeBottom(label="IPropertyStorage"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"ulKind": SimTypeInt(signed=False, label="PROPSPEC_KIND"), "Anonymous": SimUnion({"propid": SimTypeInt(signed=False, label="UInt32"), "lpwstr": SimTypePointer(SimTypeChar(label="Char"), offset=0)}, name="<anon>", label="None")}, name="PROPSPEC", pack=False, align=None), label="LPArray", offset=0), SimTypePointer(SimStruct({"Anonymous": SimUnion({"Anonymous": SimStruct({"vt": SimTypeShort(signed=False, label="UInt16"), "wReserved1": SimTypeShort(signed=False, label="UInt16"), "wReserved2": SimTypeShort(signed=False, label="UInt16"), "wReserved3": SimTypeShort(signed=False, label="UInt16"), "Anonymous": SimUnion({"cVal": SimTypeBottom(label="CHAR"), "bVal": SimTypeChar(label="Byte"), "iVal": SimTypeShort(signed=True, label="Int16"), "uiVal": SimTypeShort(signed=False, label="UInt16"), "lVal": SimTypeInt(signed=True, label="Int32"), "ulVal": SimTypeInt(signed=False, label="UInt32"), "intVal": SimTypeInt(signed=True, label="Int32"), "uintVal": SimTypeInt(signed=False, label="UInt32"), "hVal": SimTypeBottom(label="LARGE_INTEGER"), "uhVal": SimTypeBottom(label="ULARGE_INTEGER"), "fltVal": SimTypeFloat(size=32), "dblVal": SimTypeFloat(size=64), "boolVal": SimTypeShort(signed=True, label="Int16"), "__OBSOLETE__VARIANT_BOOL": SimTypeShort(signed=True, label="Int16"), "scode": SimTypeInt(signed=True, label="Int32"), "cyVal": SimTypeBottom(label="CY"), "date": SimTypeFloat(size=64), "filetime": SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), "puuid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "pclipdata": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0), "bstrVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), 
"bstrblobVal": SimTypeBottom(label="BSTRBLOB"), "blob": SimTypeBottom(label="BLOB"), "pszVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pwszVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "punkVal": SimTypeBottom(label="IUnknown"), "pdispVal": SimTypeBottom(label="IDispatch"), "pStream": SimTypeBottom(label="IStream"), "pStorage": SimTypeBottom(label="IStorage"), "pVersionedStream": SimTypePointer(SimStruct({"guidVersion": SimTypeBottom(label="Guid"), "pStream": SimTypeBottom(label="IStream")}, name="VERSIONEDSTREAM", pack=False, align=None), offset=0), "parray": SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), "cac": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAC", pack=False, align=None), "caub": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAUB", pack=False, align=None), "cai": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CAI", pack=False, align=None), "caui": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)}, name="CAUI", pack=False, align=None), "cal": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CAL", pack=False, align=None), "caul": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CAUL", pack=False, align=None), "cah": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="LARGE_INTEGER"), offset=0)}, name="CAH", pack=False, align=None), "cauh": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), 
"pElems": SimTypePointer(SimTypeBottom(label="ULARGE_INTEGER"), offset=0)}, name="CAUH", pack=False, align=None), "caflt": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=32), offset=0)}, name="CAFLT", pack=False, align=None), "cadbl": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADBL", pack=False, align=None), "cabool": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CABOOL", pack=False, align=None), "cascode": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CASCODE", pack=False, align=None), "cacy": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CY"), offset=0)}, name="CACY", pack=False, align=None), "cadate": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADATE", pack=False, align=None), "cafiletime": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), offset=0)}, name="CAFILETIME", pack=False, align=None), "cauuid": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="Guid"), offset=0)}, name="CACLSID", pack=False, align=None), "caclipdata": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0)}, name="CACLIPDATA", pack=False, align=None), "cabstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": 
SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CABSTR", pack=False, align=None), "cabstrblob": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="BSTRBLOB"), offset=0)}, name="CABSTRBLOB", pack=False, align=None), "calpstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)}, name="CALPSTR", pack=False, align=None), "calpwstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CALPWSTR", pack=False, align=None), "capropvar": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="CAPROPVARIANT", pack=False, align=None), "pcVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pbVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "piVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "puiVal": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), "plVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pulVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pintVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "puintVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pfltVal": SimTypePointer(SimTypeFloat(size=32), offset=0), "pdblVal": SimTypePointer(SimTypeFloat(size=64), offset=0), "pboolVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "pdecVal": SimTypePointer(SimTypeBottom(label="DECIMAL"), offset=0), "pscode": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pcyVal": SimTypePointer(SimTypeBottom(label="CY"), offset=0), "pdate": SimTypePointer(SimTypeFloat(size=64), offset=0), "pbstrVal": 
SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), "ppunkVal": SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0), "ppdispVal": SimTypePointer(SimTypeBottom(label="IDispatch"), offset=0), "pparray": SimTypePointer(SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), offset=0), "pvarVal": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="<anon>", label="None")}, name="_Anonymous_e__Struct", pack=False, align=None), "decVal": SimTypeBottom(label="DECIMAL")}, name="<anon>", label="None")}, name="PROPVARIANT", pack=False, align=None), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pps", "puCodePage", "cpspec", "rgpspec", "rgvar", "propidNameFirst"]),
#
'SHGetPropertyStoreForWindow': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "riid", "ppv"]),
#
'SHSimpleIDListFromPath': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pszPath"]),
#
'SHCreateItemFromIDList': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "riid", "ppv"]),
#
'SHCreateItemFromParsingName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IBindCtx"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "pbc", "riid", "ppv"]),
#
'SHCreateItemWithParent': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlParent", "psfParent", "pidl", "riid", "ppvItem"]),
#
'SHCreateItemFromRelativeName': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IBindCtx"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psiParent", "pszName", "pbc", "riid", "ppv"]),
#
'SHCreateItemInKnownFolder': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["kfid", "dwKFFlags", "pszItem", "riid", "ppv"]),
#
'SHGetIDListFromObject': SimTypeFunction([SimTypeBottom(label="IUnknown"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["punk", "ppidl"]),
#
'SHGetItemFromObject': SimTypeFunction([SimTypeBottom(label="IUnknown"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["punk", "riid", "ppv"]),
#
'SHGetNameFromIDList': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="SIGDN"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "sigdnName", "ppszName"]),
#
'SHGetItemFromDataObject': SimTypeFunction([SimTypeBottom(label="IDataObject"), SimTypeInt(signed=False, label="DATAOBJ_GET_ITEM_FLAGS"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pdtobj", "dwFlags", "riid", "ppv"]),
#
'SHCreateShellItemArray': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeBottom(label="IShellFolder"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="IShellItemArray"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlParent", "psf", "cidl", "ppidl", "ppsiItemArray"]),
#
'SHCreateShellItemArrayFromDataObject': SimTypeFunction([SimTypeBottom(label="IDataObject"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pdo", "riid", "ppv"]),
#
'SHCreateShellItemArrayFromIDLists': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="IShellItemArray"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["cidl", "rgpidl", "ppsiItemArray"]),
#
'SHCreateShellItemArrayFromShellItem': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psi", "riid", "ppv"]),
#
'SHCreateAssociationRegistration': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["riid", "ppv"]),
#
'SHCreateDefaultExtractIcon': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["riid", "ppv"]),
#
'SetCurrentProcessExplicitAppUserModelID': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["AppID"]),
#
'GetCurrentProcessExplicitAppUserModelID': SimTypeFunction([SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["AppID"]),
#
'SHGetTemporaryPropertyForItem': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimStruct({"fmtid": SimTypeBottom(label="Guid"), "pid": SimTypeInt(signed=False, label="UInt32")}, name="PROPERTYKEY", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"Anonymous": SimUnion({"Anonymous": SimStruct({"vt": SimTypeShort(signed=False, label="UInt16"), "wReserved1": SimTypeShort(signed=False, label="UInt16"), "wReserved2": SimTypeShort(signed=False, label="UInt16"), "wReserved3": SimTypeShort(signed=False, label="UInt16"), "Anonymous": SimUnion({"cVal": SimTypeBottom(label="CHAR"), "bVal": SimTypeChar(label="Byte"), "iVal": SimTypeShort(signed=True, label="Int16"), "uiVal": SimTypeShort(signed=False, label="UInt16"), "lVal": SimTypeInt(signed=True, label="Int32"), "ulVal": SimTypeInt(signed=False, label="UInt32"), "intVal": SimTypeInt(signed=True, label="Int32"), "uintVal": SimTypeInt(signed=False, label="UInt32"), "hVal": SimTypeBottom(label="LARGE_INTEGER"), "uhVal": SimTypeBottom(label="ULARGE_INTEGER"), "fltVal": SimTypeFloat(size=32), "dblVal": SimTypeFloat(size=64), "boolVal": SimTypeShort(signed=True, label="Int16"), "__OBSOLETE__VARIANT_BOOL": SimTypeShort(signed=True, label="Int16"), "scode": SimTypeInt(signed=True, label="Int32"), "cyVal": SimTypeBottom(label="CY"), "date": SimTypeFloat(size=64), "filetime": SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), "puuid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "pclipdata": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0), "bstrVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "bstrblobVal": SimTypeBottom(label="BSTRBLOB"), "blob": SimTypeBottom(label="BLOB"), "pszVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pwszVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "punkVal": SimTypeBottom(label="IUnknown"), 
"pdispVal": SimTypeBottom(label="IDispatch"), "pStream": SimTypeBottom(label="IStream"), "pStorage": SimTypeBottom(label="IStorage"), "pVersionedStream": SimTypePointer(SimStruct({"guidVersion": SimTypeBottom(label="Guid"), "pStream": SimTypeBottom(label="IStream")}, name="VERSIONEDSTREAM", pack=False, align=None), offset=0), "parray": SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), "cac": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAC", pack=False, align=None), "caub": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAUB", pack=False, align=None), "cai": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CAI", pack=False, align=None), "caui": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)}, name="CAUI", pack=False, align=None), "cal": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CAL", pack=False, align=None), "caul": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CAUL", pack=False, align=None), "cah": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="LARGE_INTEGER"), offset=0)}, name="CAH", pack=False, align=None), "cauh": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="ULARGE_INTEGER"), offset=0)}, name="CAUH", pack=False, align=None), "caflt": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=32), offset=0)}, 
name="CAFLT", pack=False, align=None), "cadbl": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADBL", pack=False, align=None), "cabool": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CABOOL", pack=False, align=None), "cascode": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CASCODE", pack=False, align=None), "cacy": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CY"), offset=0)}, name="CACY", pack=False, align=None), "cadate": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADATE", pack=False, align=None), "cafiletime": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), offset=0)}, name="CAFILETIME", pack=False, align=None), "cauuid": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="Guid"), offset=0)}, name="CACLSID", pack=False, align=None), "caclipdata": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0)}, name="CACLIPDATA", pack=False, align=None), "cabstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CABSTR", pack=False, align=None), "cabstrblob": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="BSTRBLOB"), offset=0)}, 
name="CABSTRBLOB", pack=False, align=None), "calpstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)}, name="CALPSTR", pack=False, align=None), "calpwstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CALPWSTR", pack=False, align=None), "capropvar": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="CAPROPVARIANT", pack=False, align=None), "pcVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pbVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "piVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "puiVal": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), "plVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pulVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pintVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "puintVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pfltVal": SimTypePointer(SimTypeFloat(size=32), offset=0), "pdblVal": SimTypePointer(SimTypeFloat(size=64), offset=0), "pboolVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "pdecVal": SimTypePointer(SimTypeBottom(label="DECIMAL"), offset=0), "pscode": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pcyVal": SimTypePointer(SimTypeBottom(label="CY"), offset=0), "pdate": SimTypePointer(SimTypeFloat(size=64), offset=0), "pbstrVal": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), "ppunkVal": SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0), "ppdispVal": SimTypePointer(SimTypeBottom(label="IDispatch"), offset=0), "pparray": 
SimTypePointer(SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), offset=0), "pvarVal": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="<anon>", label="None")}, name="_Anonymous_e__Struct", pack=False, align=None), "decVal": SimTypeBottom(label="DECIMAL")}, name="<anon>", label="None")}, name="PROPVARIANT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psi", "propkey", "ppropvar"]),
#
'SHSetTemporaryPropertyForItem': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimStruct({"fmtid": SimTypeBottom(label="Guid"), "pid": SimTypeInt(signed=False, label="UInt32")}, name="PROPERTYKEY", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"Anonymous": SimUnion({"Anonymous": SimStruct({"vt": SimTypeShort(signed=False, label="UInt16"), "wReserved1": SimTypeShort(signed=False, label="UInt16"), "wReserved2": SimTypeShort(signed=False, label="UInt16"), "wReserved3": SimTypeShort(signed=False, label="UInt16"), "Anonymous": SimUnion({"cVal": SimTypeBottom(label="CHAR"), "bVal": SimTypeChar(label="Byte"), "iVal": SimTypeShort(signed=True, label="Int16"), "uiVal": SimTypeShort(signed=False, label="UInt16"), "lVal": SimTypeInt(signed=True, label="Int32"), "ulVal": SimTypeInt(signed=False, label="UInt32"), "intVal": SimTypeInt(signed=True, label="Int32"), "uintVal": SimTypeInt(signed=False, label="UInt32"), "hVal": SimTypeBottom(label="LARGE_INTEGER"), "uhVal": SimTypeBottom(label="ULARGE_INTEGER"), "fltVal": SimTypeFloat(size=32), "dblVal": SimTypeFloat(size=64), "boolVal": SimTypeShort(signed=True, label="Int16"), "__OBSOLETE__VARIANT_BOOL": SimTypeShort(signed=True, label="Int16"), "scode": SimTypeInt(signed=True, label="Int32"), "cyVal": SimTypeBottom(label="CY"), "date": SimTypeFloat(size=64), "filetime": SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), "puuid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "pclipdata": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0), "bstrVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "bstrblobVal": SimTypeBottom(label="BSTRBLOB"), "blob": SimTypeBottom(label="BLOB"), "pszVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pwszVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "punkVal": SimTypeBottom(label="IUnknown"), 
"pdispVal": SimTypeBottom(label="IDispatch"), "pStream": SimTypeBottom(label="IStream"), "pStorage": SimTypeBottom(label="IStorage"), "pVersionedStream": SimTypePointer(SimStruct({"guidVersion": SimTypeBottom(label="Guid"), "pStream": SimTypeBottom(label="IStream")}, name="VERSIONEDSTREAM", pack=False, align=None), offset=0), "parray": SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), "cac": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAC", pack=False, align=None), "caub": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="CAUB", pack=False, align=None), "cai": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CAI", pack=False, align=None), "caui": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)}, name="CAUI", pack=False, align=None), "cal": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CAL", pack=False, align=None), "caul": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CAUL", pack=False, align=None), "cah": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="LARGE_INTEGER"), offset=0)}, name="CAH", pack=False, align=None), "cauh": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="ULARGE_INTEGER"), offset=0)}, name="CAUH", pack=False, align=None), "caflt": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=32), offset=0)}, 
name="CAFLT", pack=False, align=None), "cadbl": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADBL", pack=False, align=None), "cabool": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0)}, name="CABOOL", pack=False, align=None), "cascode": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)}, name="CASCODE", pack=False, align=None), "cacy": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CY"), offset=0)}, name="CACY", pack=False, align=None), "cadate": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeFloat(size=64), offset=0)}, name="CADATE", pack=False, align=None), "cafiletime": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), offset=0)}, name="CAFILETIME", pack=False, align=None), "cauuid": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="Guid"), offset=0)}, name="CACLSID", pack=False, align=None), "caclipdata": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="CLIPDATA"), offset=0)}, name="CACLIPDATA", pack=False, align=None), "cabstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CABSTR", pack=False, align=None), "cabstrblob": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="BSTRBLOB"), offset=0)}, 
name="CABSTRBLOB", pack=False, align=None), "calpstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Byte"), offset=0), offset=0)}, name="CALPSTR", pack=False, align=None), "calpwstr": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)}, name="CALPWSTR", pack=False, align=None), "capropvar": SimStruct({"cElems": SimTypeInt(signed=False, label="UInt32"), "pElems": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="CAPROPVARIANT", pack=False, align=None), "pcVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "pbVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "piVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "puiVal": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), "plVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pulVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pintVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "puintVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pfltVal": SimTypePointer(SimTypeFloat(size=32), offset=0), "pdblVal": SimTypePointer(SimTypeFloat(size=64), offset=0), "pboolVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "pdecVal": SimTypePointer(SimTypeBottom(label="DECIMAL"), offset=0), "pscode": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pcyVal": SimTypePointer(SimTypeBottom(label="CY"), offset=0), "pdate": SimTypePointer(SimTypeFloat(size=64), offset=0), "pbstrVal": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), "ppunkVal": SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0), "ppdispVal": SimTypePointer(SimTypeBottom(label="IDispatch"), offset=0), "pparray": 
SimTypePointer(SimTypePointer(SimTypeBottom(label="SAFEARRAY"), offset=0), offset=0), "pvarVal": SimTypePointer(SimTypeBottom(label="PROPVARIANT"), offset=0)}, name="<anon>", label="None")}, name="_Anonymous_e__Struct", pack=False, align=None), "decVal": SimTypeBottom(label="DECIMAL")}, name="<anon>", label="None")}, name="PROPVARIANT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psi", "propkey", "propvar"]),
#
'SHShowManageLibraryUI': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="LIBRARYMANAGEDIALOGOPTIONS")], SimTypeInt(signed=True, label="Int32"), arg_names=["psiLibrary", "hwndOwner", "pszTitle", "pszInstruction", "lmdOptions"]),
#
'SHResolveLibrary': SimTypeFunction([SimTypeBottom(label="IShellItem")], SimTypeInt(signed=True, label="Int32"), arg_names=["psiLibrary"]),
#
'SHAssocEnumHandlers': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="ASSOC_FILTER"), SimTypePointer(SimTypeBottom(label="IEnumAssocHandlers"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszExtra", "afFilter", "ppEnumHandler"]),
#
'SHAssocEnumHandlersForProtocolByApplication': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["protocol", "riid", "enumHandlers"]),
#
'SHCreateDefaultPropertiesOp': SimTypeFunction([SimTypeBottom(label="IShellItem"), SimTypePointer(SimTypeBottom(label="IFileOperation"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psi", "ppFileOp"]),
#
'SHSetDefaultProperties': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IShellItem"), SimTypeInt(signed=False, label="UInt32"), SimTypeBottom(label="IFileOperationProgressSink")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "psi", "dwFileOpFlags", "pfops"]),
#
'SHGetMalloc': SimTypeFunction([SimTypePointer(SimTypeBottom(label="IMalloc"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["ppMalloc"]),
#
'SHAlloc': SimTypeFunction([SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0)], SimTypePointer(SimTypeBottom(label="Void"), offset=0), arg_names=["cb"]),
#
'SHFree': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeBottom(label="Void"), arg_names=["pv"]),
#
'SHGetIconOverlayIndexA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconPath", "iIconIndex"]),
#
'SHGetIconOverlayIndexW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconPath", "iIconIndex"]),
#
'ILClone': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl"]),
#
'ILCloneFirst': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl"]),
#
'ILCombine': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl1", "pidl2"]),
#
'ILFree': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeBottom(label="Void"), arg_names=["pidl"]),
#
'ILGetNext': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl"]),
#
'ILGetSize': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["pidl"]),
#
'ILFindChild': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidlParent", "pidlChild"]),
#
'ILFindLastID': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl"]),
#
'ILRemoveLastID': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl"]),
#
'ILIsEqual': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl1", "pidl2"]),
#
'ILIsParent': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl1", "pidl2", "fImmediate"]),
#
'ILSaveToStream': SimTypeFunction([SimTypeBottom(label="IStream"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pstm", "pidl"]),
#
'ILLoadFromStreamEx': SimTypeFunction([SimTypeBottom(label="IStream"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pstm", "pidl"]),
#
'ILCreateFromPathA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pszPath"]),
#
'ILCreateFromPathW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pszPath"]),
#
'SHILCreateFromPath': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "ppidl", "rgfInOut"]),
#
'ILAppendID': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["pidl", "pmkid", "fAppend"]),
#
'SHGetPathFromIDListEx': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "pszPath", "cchPath", "uOpts"]),
#
'SHGetPathFromIDListA': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "pszPath"]),
#
'SHGetPathFromIDListW': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "pszPath"]),
#
'SHCreateDirectory': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPath"]),
#
'SHCreateDirectoryExA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimStruct({"nLength": SimTypeInt(signed=False, label="UInt32"), "lpSecurityDescriptor": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "bInheritHandle": SimTypeInt(signed=True, label="Int32")}, name="SECURITY_ATTRIBUTES", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPath", "psa"]),
#
'SHCreateDirectoryExW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimStruct({"nLength": SimTypeInt(signed=False, label="UInt32"), "lpSecurityDescriptor": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "bInheritHandle": SimTypeInt(signed=True, label="Int32")}, name="SECURITY_ATTRIBUTES", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPath", "psa"]),
#
'SHOpenFolderAndSelectItems': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlFolder", "cidl", "apidl", "dwFlags"]),
#
'SHCreateShellItem': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="IShellItem"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlParent", "psfParent", "pidl", "ppsi"]),
#
'SHGetSpecialFolderLocation': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "ppidl"]),
#
'SHCloneSpecialIDList': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["hwnd", "csidl", "fCreate"]),
#
'SHGetSpecialFolderPathA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPath", "csidl", "fCreate"]),
#
'SHGetSpecialFolderPathW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPath", "csidl", "fCreate"]),
#
'SHFlushSFCache': SimTypeFunction([], SimTypeBottom(label="Void")),
#
'SHGetFolderPathA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "hToken", "dwFlags", "pszPath"]),
#
'SHGetFolderPathW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "hToken", "dwFlags", "pszPath"]),
#
'SHGetFolderLocation': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "hToken", "dwFlags", "ppidl"]),
#
'SHSetFolderPathA': SimTypeFunction([SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["csidl", "hToken", "dwFlags", "pszPath"]),
#
'SHSetFolderPathW': SimTypeFunction([SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["csidl", "hToken", "dwFlags", "pszPath"]),
#
'SHGetFolderPathAndSubDirA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "hToken", "dwFlags", "pszSubDir", "pszPath"]),
#
'SHGetFolderPathAndSubDirW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "csidl", "hToken", "dwFlags", "pszSubDir", "pszPath"]),
#
'SHGetKnownFolderIDList': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rfid", "dwFlags", "hToken", "ppidl"]),
#
'SHSetKnownFolderPath': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rfid", "dwFlags", "hToken", "pszPath"]),
#
'SHGetKnownFolderPath': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rfid", "dwFlags", "hToken", "ppszPath"]),
#
'SHGetKnownFolderItem': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeInt(signed=False, label="KNOWN_FOLDER_FLAG"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rfid", "flags", "hToken", "riid", "ppv"]),
#
'SHGetSetFolderCustomSettings': SimTypeFunction([SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwMask": SimTypeInt(signed=False, label="UInt32"), "pvid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "pszWebViewTemplate": SimTypePointer(SimTypeChar(label="Char"), offset=0), "cchWebViewTemplate": SimTypeInt(signed=False, label="UInt32"), "pszWebViewTemplateVersion": SimTypePointer(SimTypeChar(label="Char"), offset=0), "pszInfoTip": SimTypePointer(SimTypeChar(label="Char"), offset=0), "cchInfoTip": SimTypeInt(signed=False, label="UInt32"), "pclsid": SimTypePointer(SimTypeBottom(label="Guid"), offset=0), "dwFlags": SimTypeInt(signed=False, label="UInt32"), "pszIconFile": SimTypePointer(SimTypeChar(label="Char"), offset=0), "cchIconFile": SimTypeInt(signed=False, label="UInt32"), "iIconIndex": SimTypeInt(signed=True, label="Int32"), "pszLogo": SimTypePointer(SimTypeChar(label="Char"), offset=0), "cchLogo": SimTypeInt(signed=False, label="UInt32")}, name="SHFOLDERCUSTOMSETTINGS", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pfcs", "pszPath", "dwReadWrite"]),
#
'SHBrowseForFolderA': SimTypeFunction([SimTypePointer(SimStruct({"hwndOwner": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "pidlRoot": SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), "pszDisplayName": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "lpszTitle": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "ulFlags": SimTypeInt(signed=False, label="UInt32"), "lpfn": SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "uMsg", "lParam", "lpData"]), offset=0), "lParam": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "iImage": SimTypeInt(signed=True, label="Int32")}, name="BROWSEINFOA", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["lpbi"]),
#
'SHBrowseForFolderW': SimTypeFunction([SimTypePointer(SimStruct({"hwndOwner": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "pidlRoot": SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), "pszDisplayName": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpszTitle": SimTypePointer(SimTypeChar(label="Char"), offset=0), "ulFlags": SimTypeInt(signed=False, label="UInt32"), "lpfn": SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "uMsg", "lParam", "lpData"]), offset=0), "lParam": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "iImage": SimTypeInt(signed=True, label="Int32")}, name="BROWSEINFOW", pack=False, align=None), offset=0)], SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), arg_names=["lpbi"]),
#
'SHLoadInProc': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rclsid"]),
#
'SHGetDesktopFolder': SimTypeFunction([SimTypePointer(SimTypeBottom(label="IShellFolder"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["ppshf"]),
#
'SHChangeNotify': SimTypeFunction([SimTypeInt(signed=False, label="SHCNE_ID"), SimTypeInt(signed=False, label="SHCNF_FLAGS"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeBottom(label="Void"), arg_names=["wEventId", "uFlags", "dwItem1", "dwItem2"]),
#
'SHAddToRecentDocs': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeBottom(label="Void"), arg_names=["uFlags", "pv"]),
#
'SHHandleUpdateImage': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlExtra"]),
#
'SHUpdateImageA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32")], SimTypeBottom(label="Void"), arg_names=["pszHashItem", "iIndex", "uFlags", "iImageIndex"]),
#
'SHUpdateImageW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32")], SimTypeBottom(label="Void"), arg_names=["pszHashItem", "iIndex", "uFlags", "iImageIndex"]),
#
'SHChangeNotifyRegister': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="SHCNRF_SOURCE"), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimStruct({"pidl": SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), "fRecursive": SimTypeInt(signed=True, label="Int32")}, name="SHChangeNotifyEntry", pack=False, align=None), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["hwnd", "fSources", "fEvents", "wMsg", "cEntries", "pshcne"]),
#
'SHChangeNotifyDeregister': SimTypeFunction([SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["ulID"]),
#
'SHChangeNotification_Lock': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hChange", "dwProcId", "pppidl", "plEvent"]),
#
'SHChangeNotification_Unlock': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hLock"]),
#
'SHGetRealIDL': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "pidlSimple", "ppidlReal"]),
#
'SHGetInstanceExplorer': SimTypeFunction([SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["ppunk"]),
#
'SHGetDataFromIDListA': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="SHGDFIL_FORMAT"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "pidl", "nFormat", "pv", "cb"]),
#
'SHGetDataFromIDListW': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="SHGDFIL_FORMAT"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "pidl", "nFormat", "pv", "cb"]),
#
'RestartDialog': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPrompt", "dwReturn"]),
#
'RestartDialogEx': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszPrompt", "dwReturn", "dwReasonCode"]),
#
'SHCoCreateInstance': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeBottom(label="IUnknown"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszCLSID", "pclsid", "pUnkOuter", "riid", "ppv"]),
#
'SHCreateDataObject': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypeBottom(label="IDataObject"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlFolder", "cidl", "apidl", "pdtInner", "riid", "ppv"]),
#
'CIDLData_CreateFromIDArray': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="IDataObject"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlFolder", "cidl", "apidl", "ppdtobj"]),
#
'SHCreateStdEnumFmtEtc': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"cfFormat": SimTypeShort(signed=False, label="UInt16"), "ptd": SimTypePointer(SimStruct({"tdSize": SimTypeInt(signed=False, label="UInt32"), "tdDriverNameOffset": SimTypeShort(signed=False, label="UInt16"), "tdDeviceNameOffset": SimTypeShort(signed=False, label="UInt16"), "tdPortNameOffset": SimTypeShort(signed=False, label="UInt16"), "tdExtDevmodeOffset": SimTypeShort(signed=False, label="UInt16"), "tdData": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="DVTARGETDEVICE", pack=False, align=None), offset=0), "dwAspect": SimTypeInt(signed=False, label="UInt32"), "lindex": SimTypeInt(signed=True, label="Int32"), "tymed": SimTypeInt(signed=False, label="UInt32")}, name="FORMATETC", pack=False, align=None), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="IEnumFORMATETC"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["cfmt", "afmt", "ppenumFormatEtc"]),
#
'SHDoDragDrop': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IDataObject"), SimTypeBottom(label="IDropSource"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pdata", "pdsrc", "dwEffect", "pdwEffect"]),
#
'DAD_SetDragImage': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["him", "pptOffset"]),
#
'DAD_DragEnterEx': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwndTarget", "ptStart"]),
#
'DAD_DragEnterEx2': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), SimTypeBottom(label="IDataObject")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwndTarget", "ptStart", "pdtObject"]),
#
'DAD_ShowDragImage': SimTypeFunction([SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["fShow"]),
#
'DAD_DragMove': SimTypeFunction([SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None)], SimTypeInt(signed=True, label="Int32"), arg_names=["pt"]),
#
'DAD_DragLeave': SimTypeFunction([], SimTypeInt(signed=True, label="Int32")),
#
'DAD_AutoScroll': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"iNextSample": SimTypeInt(signed=True, label="Int32"), "dwLastScroll": SimTypeInt(signed=False, label="UInt32"), "bFull": SimTypeInt(signed=True, label="Int32"), "pts": SimTypeFixedSizeArray(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), 3), "dwTimes": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 3)}, name="AUTO_SCROLL_DATA", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pad", "pptNow"]),
#
'ReadCabinetState': SimTypeFunction([SimTypePointer(SimStruct({"cLength": SimTypeShort(signed=False, label="UInt16"), "nVersion": SimTypeShort(signed=False, label="UInt16"), "_bitfield": SimTypeInt(signed=True, label="Int32"), "fMenuEnumFilter": SimTypeInt(signed=False, label="UInt32")}, name="CABINETSTATE", pack=False, align=None), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pcs", "cLength"]),
#
'WriteCabinetState': SimTypeFunction([SimTypePointer(SimStruct({"cLength": SimTypeShort(signed=False, label="UInt16"), "nVersion": SimTypeShort(signed=False, label="UInt16"), "_bitfield": SimTypeInt(signed=True, label="Int32"), "fMenuEnumFilter": SimTypeInt(signed=False, label="UInt32")}, name="CABINETSTATE", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pcs"]),
#
'PathMakeUniqueName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszUniqueName", "cchMax", "pszTemplate", "pszLongPlate", "pszDir"]),
#
'PathIsExe': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath"]),
#
'PathCleanupSpec': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=False, label="PCS_RET"), arg_names=["pszDir", "pszSpec"]),
#
'PathResolve': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), offset=0), SimTypeInt(signed=False, label="PRF_FLAGS")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "dirs", "fFlags"]),
#
'GetFileNameFromBrowse': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszFilePath", "cchFilePath", "pszWorkingDir", "pszDefExt", "pszFilters", "pszTitle"]),
#
'DriveType': SimTypeFunction([SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["iDrive"]),
#
'RealDriveType': SimTypeFunction([SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["iDrive", "fOKToHitNet"]),
#
'IsNetDrive': SimTypeFunction([SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["iDrive"]),
#
'Shell_MergeMenus': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="MM_FLAGS")], SimTypeInt(signed=False, label="UInt32"), arg_names=["hmDst", "hmSrc", "uInsert", "uIDAdjust", "uIDAdjustMax", "uFlags"]),
#
'SHObjectProperties': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="SHOP_TYPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "shopObjectType", "pszObjectName", "pszPropertyPage"]),
#
'SHFormatDrive': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="SHFMT_ID"), SimTypeInt(signed=False, label="SHFMT_OPT")], SimTypeInt(signed=False, label="UInt32"), arg_names=["hwnd", "drive", "fmtID", "options"]),
#
'SHDestroyPropSheetExtArray': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeBottom(label="Void"), arg_names=["hpsxa"]),
#
'SHAddFromPropSheetExtArray': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["hpsxa", "lpfnAddPage", "lParam"]),
#
'SHReplaceFromPropSheetExtArray': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["hpsxa", "uPageID", "lpfnReplaceWith", "lParam"]),
#
'OpenRegStream': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeBottom(label="IStream"), arg_names=["hkey", "pszSubkey", "pszValue", "grfMode"]),
#
'SHFindFiles': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlFolder", "pidlSaveFile"]),
#
'PathGetShortPath': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0)], SimTypeBottom(label="Void"), arg_names=["pszLongPath"]),
#
'PathYetAnotherMakeUniqueName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszUniqueName", "pszPath", "pszShort", "pszFileSpec"]),
#
'Win32DeleteFile': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath"]),
#
'SHRestricted': SimTypeFunction([SimTypeInt(signed=False, label="RESTRICTIONS")], SimTypeInt(signed=False, label="UInt32"), arg_names=["rest"]),
#
'SignalFileOpen': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl"]),
#
'AssocGetDetailsOfPropKey': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"fmtid": SimTypeBottom(label="Guid"), "pid": SimTypeInt(signed=False, label="UInt32")}, name="PROPERTYKEY", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"Anonymous": SimUnion({"Anonymous": SimStruct({"vt": SimTypeShort(signed=False, label="UInt16"), "wReserved1": SimTypeShort(signed=False, label="UInt16"), "wReserved2": SimTypeShort(signed=False, label="UInt16"), "wReserved3": SimTypeShort(signed=False, label="UInt16"), "Anonymous": SimUnion({"llVal": SimTypeLongLong(signed=True, label="Int64"), "lVal": SimTypeInt(signed=True, label="Int32"), "bVal": SimTypeChar(label="Byte"), "iVal": SimTypeShort(signed=True, label="Int16"), "fltVal": SimTypeFloat(size=32), "dblVal": SimTypeFloat(size=64), "boolVal": SimTypeShort(signed=True, label="Int16"), "__OBSOLETE__VARIANT_BOOL": SimTypeShort(signed=True, label="Int16"), "scode": SimTypeInt(signed=True, label="Int32"), "cyVal": SimTypeBottom(label="CY"), "date": SimTypeFloat(size=64), "bstrVal": SimTypePointer(SimTypeChar(label="Char"), offset=0), "punkVal": SimTypeBottom(label="IUnknown"), "pdispVal": SimTypeBottom(label="IDispatch"), "parray": SimTypePointer(SimStruct({"cDims": SimTypeShort(signed=False, label="UInt16"), "fFeatures": SimTypeShort(signed=False, label="UInt16"), "cbElements": SimTypeInt(signed=False, label="UInt32"), "cLocks": SimTypeInt(signed=False, label="UInt32"), "pvData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "rgsabound": SimTypePointer(SimStruct({"cElements": SimTypeInt(signed=False, label="UInt32"), "lLbound": SimTypeInt(signed=True, label="Int32")}, name="SAFEARRAYBOUND", pack=False, align=None), offset=0)}, 
name="SAFEARRAY", pack=False, align=None), offset=0), "pbVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "piVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "plVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pllVal": SimTypePointer(SimTypeLongLong(signed=True, label="Int64"), offset=0), "pfltVal": SimTypePointer(SimTypeFloat(size=32), offset=0), "pdblVal": SimTypePointer(SimTypeFloat(size=64), offset=0), "pboolVal": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "__OBSOLETE__VARIANT_PBOOL": SimTypePointer(SimTypeShort(signed=True, label="Int16"), offset=0), "pscode": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "pcyVal": SimTypePointer(SimTypeBottom(label="CY"), offset=0), "pdate": SimTypePointer(SimTypeFloat(size=64), offset=0), "pbstrVal": SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), "ppunkVal": SimTypePointer(SimTypeBottom(label="IUnknown"), offset=0), "ppdispVal": SimTypePointer(SimTypeBottom(label="IDispatch"), offset=0), "pparray": SimTypePointer(SimTypePointer(SimStruct({"cDims": SimTypeShort(signed=False, label="UInt16"), "fFeatures": SimTypeShort(signed=False, label="UInt16"), "cbElements": SimTypeInt(signed=False, label="UInt32"), "cLocks": SimTypeInt(signed=False, label="UInt32"), "pvData": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "rgsabound": SimTypePointer(SimStruct({"cElements": SimTypeInt(signed=False, label="UInt32"), "lLbound": SimTypeInt(signed=True, label="Int32")}, name="SAFEARRAYBOUND", pack=False, align=None), offset=0)}, name="SAFEARRAY", pack=False, align=None), offset=0), offset=0), "pvarVal": SimTypePointer(SimTypeBottom(label="VARIANT"), offset=0), "byref": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "cVal": SimTypeBottom(label="CHAR"), "uiVal": SimTypeShort(signed=False, label="UInt16"), "ulVal": SimTypeInt(signed=False, label="UInt32"), "ullVal": SimTypeLongLong(signed=False, 
label="UInt64"), "intVal": SimTypeInt(signed=True, label="Int32"), "uintVal": SimTypeInt(signed=False, label="UInt32"), "pdecVal": SimTypePointer(SimTypeBottom(label="DECIMAL"), offset=0), "pcVal": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "puiVal": SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), "pulVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "pullVal": SimTypePointer(SimTypeLongLong(signed=False, label="UInt64"), offset=0), "pintVal": SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), "puintVal": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), "Anonymous": SimStruct({"pvRecord": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "pRecInfo": SimTypeBottom(label="IRecordInfo")}, name="_Anonymous_e__Struct", pack=False, align=None)}, name="<anon>", label="None")}, name="_Anonymous_e__Struct", pack=False, align=None), "decVal": SimTypeBottom(label="DECIMAL")}, name="<anon>", label="None")}, name="VARIANT", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "pidl", "pkey", "pv", "pfFoundPropKey"]),
#
'SHStartNetConnectionDialogW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszRemoteName", "dwType"]),
#
'SHDefExtractIconA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconFile", "iIndex", "uFlags", "phiconLarge", "phiconSmall", "nIconSize"]),
#
'SHDefExtractIconW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconFile", "iIndex", "uFlags", "phiconLarge", "phiconSmall", "nIconSize"]),
#
'SHOpenWithDialog': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"pcszFile": SimTypePointer(SimTypeChar(label="Char"), offset=0), "pcszClass": SimTypePointer(SimTypeChar(label="Char"), offset=0), "oaifInFlags": SimTypeInt(signed=False, label="OPEN_AS_INFO_FLAGS")}, name="OPENASINFO", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwndParent", "poainfo"]),
#
'Shell_GetImageLists': SimTypeFunction([SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["phiml", "phimlSmall"]),
#
'Shell_GetCachedImageIndex': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pwszIconPath", "iIconIndex", "uIconFlags"]),
#
'Shell_GetCachedImageIndexA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconPath", "iIconIndex", "uIconFlags"]),
#
'Shell_GetCachedImageIndexW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszIconPath", "iIconIndex", "uIconFlags"]),
#
'SHValidateUNC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="VALIDATEUNC_OPTION")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwndOwner", "pszFile", "fConnect"]),
#
'SHSetInstanceExplorer': SimTypeFunction([SimTypeBottom(label="IUnknown")], SimTypeBottom(label="Void"), arg_names=["punk"]),
#
'IsUserAnAdmin': SimTypeFunction([], SimTypeInt(signed=True, label="Int32")),
#
'SHShellFolderView_Message': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hwndMain", "uMsg", "lParam"]),
#
'SHCreateShellFolderView': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "pshf": SimTypeBottom(label="IShellFolder"), "psvOuter": SimTypeBottom(label="IShellView"), "psfvcb": SimTypeBottom(label="IShellFolderViewCB")}, name="SFV_CREATE", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="IShellView"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pcsfv", "ppsv"]),
#
'CDefFolderMenu_Create2': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), label="LPArray", offset=0), SimTypeBottom(label="IShellFolder"), SimTypePointer(SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IDataObject"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "hwnd", "pdtobj", "uMsg", "wParam", "lParam"]), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="IContextMenu"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidlFolder", "hwnd", "cidl", "apidl", "psf", "pfn", "nKeys", "ahkeys", "ppcm"]),
#
'SHCreateDefaultContextMenu': SimTypeFunction([SimTypePointer(SimStruct({"hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "pcmcb": SimTypeBottom(label="IContextMenuCB"), "pidlFolder": SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), "psf": SimTypeBottom(label="IShellFolder"), "cidl": SimTypeInt(signed=False, label="UInt32"), "apidl": SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0), "punkAssociationInfo": SimTypeBottom(label="IUnknown"), "cKeys": SimTypeInt(signed=False, label="UInt32"), "aKeys": SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0)}, name="DEFCONTEXTMENU", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pdcm", "riid", "ppv"]),
#
'SHFind_InitMenuPopup': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypeBottom(label="IContextMenu"), arg_names=["hmenu", "hwndOwner", "idCmdFirst", "idCmdLast"]),
#
'SHCreateShellFolderViewEx': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "pshf": SimTypeBottom(label="IShellFolder"), "psvOuter": SimTypeBottom(label="IShellView"), "pidl": SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), "lEvents": SimTypeInt(signed=True, label="Int32"), "pfnCallback": SimTypePointer(SimTypeFunction([SimTypeBottom(label="IShellView"), SimTypeBottom(label="IShellFolder"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psvOuter", "psf", "hwndMain", "uMsg", "wParam", "lParam"]), offset=0), "fvm": SimTypeInt(signed=False, label="FOLDERVIEWMODE")}, name="CSFV", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="IShellView"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pcsfv", "ppsv"]),
#
'SHGetSetSettings': SimTypeFunction([SimTypePointer(SimStruct({"_bitfield1": SimTypeInt(signed=True, label="Int32"), "dwWin95Unused": SimTypeInt(signed=False, label="UInt32"), "uWin95Unused": SimTypeInt(signed=False, label="UInt32"), "lParamSort": SimTypeInt(signed=True, label="Int32"), "iSortDirection": SimTypeInt(signed=True, label="Int32"), "version": SimTypeInt(signed=False, label="UInt32"), "uNotUsed": SimTypeInt(signed=False, label="UInt32"), "_bitfield2": SimTypeInt(signed=True, label="Int32")}, name="SHELLSTATEA", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="SSF_MASK"), SimTypeInt(signed=True, label="Int32")], SimTypeBottom(label="Void"), arg_names=["lpss", "dwMask", "bSet"]),
#
'SHGetSettings': SimTypeFunction([SimTypePointer(SimStruct({"_bitfield": SimTypeInt(signed=True, label="Int32")}, name="SHELLFLAGSTATE", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeBottom(label="Void"), arg_names=["psfs", "dwMask"]),
#
'SHBindToParent': SimTypeFunction([SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pidl", "riid", "ppv", "ppidlLast"]),
#
'SHBindToFolderIDListParent': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psfRoot", "pidl", "riid", "ppv", "ppidlLast"]),
#
'SHBindToFolderIDListParentEx': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeBottom(label="IBindCtx"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psfRoot", "pidl", "ppbc", "riid", "ppv", "ppidlLast"]),
#
'SHBindToObject': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypeBottom(label="IBindCtx"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psf", "pidl", "pbc", "riid", "ppv"]),
#
'SHParseDisplayName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IBindCtx"), SimTypePointer(SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszName", "pbc", "ppidl", "sfgaoIn", "psfgaoOut"]),
#
'SHPathPrepareForWriteA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IUnknown"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "punkEnableModless", "pszPath", "dwFlags"]),
#
'SHPathPrepareForWriteW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IUnknown"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "punkEnableModless", "pszPath", "dwFlags"]),
#
'SHCreateFileExtractIconW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszFile", "dwFileAttributes", "riid", "ppv"]),
#
'SHLimitInputEdit': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IShellFolder")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwndEdit", "psf"]),
#
'SHGetAttributesFromDataObject': SimTypeFunction([SimTypeBottom(label="IDataObject"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pdo", "dwAttributeMask", "pdwAttributes", "pcItems"]),
#
'SHMapPIDLToSystemImageListIndex': SimTypeFunction([SimTypeBottom(label="IShellFolder"), SimTypePointer(SimStruct({"mkid": SimStruct({"cb": SimTypeShort(signed=False, label="UInt16"), "abID": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHITEMID", pack=False, align=None)}, name="ITEMIDLIST", pack=False, align=None), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pshf", "pidl", "piIndexSel"]),
#
'SHCLSIDFromString': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["psz", "pclsid"]),
#
'PickIconDlg': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszIconPath", "cchIconPath", "piIconIndex"]),
#
'StgMakeUniqueName': SimTypeFunction([SimTypeBottom(label="IStorage"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pstgParent", "pszFileSpec", "grfMode", "riid", "ppv"]),
#
'SHChangeNotifyRegisterThread': SimTypeFunction([SimTypeInt(signed=False, label="SCNRT_STATUS")], SimTypeBottom(label="Void"), arg_names=["status"]),
#
'PathQualify': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeBottom(label="Void"), arg_names=["psz"]),
#
'PathIsSlowA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszFile", "dwAttr"]),
#
'PathIsSlowW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszFile", "dwAttr"]),
#
'SHCreatePropSheetExtArray': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hKey", "pszSubKey", "max_iface"]),
#
'SHOpenPropSheetW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypeBottom(label="IDataObject"), SimTypeBottom(label="IShellBrowser"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszCaption", "ahkeys", "ckeys", "pclsidDefault", "pdtobj", "psb", "pStartPage"]),
#
'SHMultiFileProperties': SimTypeFunction([SimTypeBottom(label="IDataObject"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pdtobj", "dwFlags"]),
#
'SHCreateQueryCancelAutoPlayMoniker': SimTypeFunction([SimTypePointer(SimTypeBottom(label="IMoniker"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["ppmoniker"]),
#
'CommandLineToArgvW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), arg_names=["lpCmdLine", "pNumArgs"]),
#
'DragQueryFileA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["hDrop", "iFile", "lpszFile", "cch"]),
#
'DragQueryFileW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["hDrop", "iFile", "lpszFile", "cch"]),
#
'DragQueryPoint': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hDrop", "ppt"]),
#
'DragFinish': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeBottom(label="Void"), arg_names=["hDrop"]),
#
'DragAcceptFiles': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeBottom(label="Void"), arg_names=["hWnd", "fAccept"]),
#
'ShellExecuteA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hwnd", "lpOperation", "lpFile", "lpParameters", "lpDirectory", "nShowCmd"]),
#
'ShellExecuteW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hwnd", "lpOperation", "lpFile", "lpParameters", "lpDirectory", "nShowCmd"]),
#
'FindExecutableA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["lpFile", "lpDirectory", "lpResult"]),
#
'FindExecutableW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["lpFile", "lpDirectory", "lpResult"]),
#
'ShellAboutA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hWnd", "szApp", "szOtherStuff", "hIcon"]),
#
'ShellAboutW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hWnd", "szApp", "szOtherStuff", "hIcon"]),
#
'DuplicateIcon': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "hIcon"]),
#
'ExtractAssociatedIconA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszIconPath", "piIcon"]),
#
'ExtractAssociatedIconW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszIconPath", "piIcon"]),
#
'ExtractAssociatedIconExA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszIconPath", "piIconIndex", "piIconId"]),
#
'ExtractAssociatedIconExW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0), SimTypePointer(SimTypeShort(signed=False, label="UInt16"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszIconPath", "piIconIndex", "piIconId"]),
#
'ExtractIconA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszExeFileName", "nIconIndex"]),
#
'ExtractIconW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["hInst", "pszExeFileName", "nIconIndex"]),
#
'SHAppBarMessage': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "hWnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "uCallbackMessage": SimTypeInt(signed=False, label="UInt32"), "uEdge": SimTypeInt(signed=False, label="UInt32"), "rc": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None), "lParam": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="APPBARDATA", pack=False, align=None), offset=0)], SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), arg_names=["dwMessage", "pData"]),
#
'DoEnvironmentSubstA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["pszSrc", "cchSrc"]),
#
'DoEnvironmentSubstW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["pszSrc", "cchSrc"]),
#
'ExtractIconExA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["lpszFile", "nIconIndex", "phiconLarge", "phiconSmall", "nIcons"]),
#
'ExtractIconExW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["lpszFile", "nIconIndex", "phiconLarge", "phiconSmall", "nIcons"]),
#
'SHFileOperationA': SimTypeFunction([SimTypePointer(SimStruct({"hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "wFunc": SimTypeInt(signed=False, label="UInt32"), "pFrom": SimTypePointer(SimTypeChar(label="SByte"), offset=0), "pTo": SimTypePointer(SimTypeChar(label="SByte"), offset=0), "fFlags": SimTypeShort(signed=False, label="UInt16"), "fAnyOperationsAborted": SimTypeInt(signed=True, label="Int32"), "hNameMappings": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "lpszProgressTitle": SimTypePointer(SimTypeChar(label="Byte"), offset=0)}, name="SHFILEOPSTRUCTA", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["lpFileOp"]),
#
'SHFileOperationW': SimTypeFunction([SimTypePointer(SimStruct({"hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "wFunc": SimTypeInt(signed=False, label="UInt32"), "pFrom": SimTypePointer(SimTypeChar(label="Char"), offset=0), "pTo": SimTypePointer(SimTypeChar(label="Char"), offset=0), "fFlags": SimTypeShort(signed=False, label="UInt16"), "fAnyOperationsAborted": SimTypeInt(signed=True, label="Int32"), "hNameMappings": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "lpszProgressTitle": SimTypePointer(SimTypeChar(label="Char"), offset=0)}, name="SHFILEOPSTRUCTW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["lpFileOp"]),
#
'SHFreeNameMappings': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeBottom(label="Void"), arg_names=["hNameMappings"]),
#
'ShellExecuteExA': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fMask": SimTypeInt(signed=False, label="UInt32"), "hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "lpVerb": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "lpFile": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "lpParameters": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "lpDirectory": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "nShow": SimTypeInt(signed=True, label="Int32"), "hInstApp": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "lpIDList": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "lpClass": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "hkeyClass": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "dwHotKey": SimTypeInt(signed=False, label="UInt32"), "Anonymous": SimUnion({"hIcon": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hMonitor": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="<anon>", label="None"), "hProcess": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="SHELLEXECUTEINFOA", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pExecInfo"]),
#
'ShellExecuteExW': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fMask": SimTypeInt(signed=False, label="UInt32"), "hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "lpVerb": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpFile": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpParameters": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpDirectory": SimTypePointer(SimTypeChar(label="Char"), offset=0), "nShow": SimTypeInt(signed=True, label="Int32"), "hInstApp": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "lpIDList": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "lpClass": SimTypePointer(SimTypeChar(label="Char"), offset=0), "hkeyClass": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "dwHotKey": SimTypeInt(signed=False, label="UInt32"), "Anonymous": SimUnion({"hIcon": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hMonitor": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="<anon>", label="None"), "hProcess": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="SHELLEXECUTEINFOW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pExecInfo"]),
#
'SHCreateProcessAsUserW': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fMask": SimTypeInt(signed=False, label="UInt32"), "hwnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "pszFile": SimTypePointer(SimTypeChar(label="Char"), offset=0), "pszParameters": SimTypePointer(SimTypeChar(label="Char"), offset=0), "pszCurrentDirectory": SimTypePointer(SimTypeChar(label="Char"), offset=0), "hUserToken": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "lpProcessAttributes": SimTypePointer(SimStruct({"nLength": SimTypeInt(signed=False, label="UInt32"), "lpSecurityDescriptor": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "bInheritHandle": SimTypeInt(signed=True, label="Int32")}, name="SECURITY_ATTRIBUTES", pack=False, align=None), offset=0), "lpThreadAttributes": SimTypePointer(SimStruct({"nLength": SimTypeInt(signed=False, label="UInt32"), "lpSecurityDescriptor": SimTypePointer(SimTypeBottom(label="Void"), offset=0), "bInheritHandle": SimTypeInt(signed=True, label="Int32")}, name="SECURITY_ATTRIBUTES", pack=False, align=None), offset=0), "bInheritHandles": SimTypeInt(signed=True, label="Int32"), "dwCreationFlags": SimTypeInt(signed=False, label="UInt32"), "lpStartupInfo": SimTypePointer(SimStruct({"cb": SimTypeInt(signed=False, label="UInt32"), "lpReserved": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpDesktop": SimTypePointer(SimTypeChar(label="Char"), offset=0), "lpTitle": SimTypePointer(SimTypeChar(label="Char"), offset=0), "dwX": SimTypeInt(signed=False, label="UInt32"), "dwY": SimTypeInt(signed=False, label="UInt32"), "dwXSize": SimTypeInt(signed=False, label="UInt32"), "dwYSize": SimTypeInt(signed=False, label="UInt32"), "dwXCountChars": SimTypeInt(signed=False, label="UInt32"), "dwYCountChars": SimTypeInt(signed=False, label="UInt32"), "dwFillAttribute": SimTypeInt(signed=False, label="UInt32"), "dwFlags": SimTypeInt(signed=False, 
label="STARTUPINFOW_FLAGS"), "wShowWindow": SimTypeShort(signed=False, label="UInt16"), "cbReserved2": SimTypeShort(signed=False, label="UInt16"), "lpReserved2": SimTypePointer(SimTypeChar(label="Byte"), offset=0), "hStdInput": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hStdOutput": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hStdError": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)}, name="STARTUPINFOW", pack=False, align=None), offset=0), "lpProcessInformation": SimTypePointer(SimStruct({"hProcess": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hThread": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "dwProcessId": SimTypeInt(signed=False, label="UInt32"), "dwThreadId": SimTypeInt(signed=False, label="UInt32")}, name="PROCESS_INFORMATION", pack=False, align=None), offset=0)}, name="SHCREATEPROCESSINFOW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pscpi"]),
#
'SHEvaluateSystemCommandTemplate': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszCmdTemplate", "ppszApplication", "ppszCommandLine", "ppszParameters"]),
#
'AssocCreateForClasses': SimTypeFunction([SimTypePointer(SimStruct({"ac": SimTypeInt(signed=False, label="ASSOCCLASS"), "hkClass": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "pszClass": SimTypePointer(SimTypeChar(label="Char"), offset=0)}, name="ASSOCIATIONELEMENT", pack=False, align=None), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["rgClasses", "cClasses", "riid", "ppv"]),
#
'SHQueryRecycleBinA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "i64Size": SimTypeLongLong(signed=True, label="Int64"), "i64NumItems": SimTypeLongLong(signed=True, label="Int64")}, name="SHQUERYRBINFO", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszRootPath", "pSHQueryRBInfo"]),
#
'SHQueryRecycleBinW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "i64Size": SimTypeLongLong(signed=True, label="Int64"), "i64NumItems": SimTypeLongLong(signed=True, label="Int64")}, name="SHQUERYRBINFO", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszRootPath", "pSHQueryRBInfo"]),
#
'SHEmptyRecycleBinA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszRootPath", "dwFlags"]),
#
'SHEmptyRecycleBinW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "pszRootPath", "dwFlags"]),
#
'SHQueryUserNotificationState': SimTypeFunction([SimTypePointer(SimTypeInt(signed=False, label="QUERY_USER_NOTIFICATION_STATE"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pquns"]),
#
'Shell_NotifyIconA': SimTypeFunction([SimTypeInt(signed=False, label="NOTIFY_ICON_MESSAGE"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "hWnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "uID": SimTypeInt(signed=False, label="UInt32"), "uFlags": SimTypeInt(signed=False, label="NOTIFY_ICON_DATA_FLAGS"), "uCallbackMessage": SimTypeInt(signed=False, label="UInt32"), "hIcon": SimTypeBottom(label="HICON"), "szTip": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 128), "dwState": SimTypeInt(signed=False, label="UInt32"), "dwStateMask": SimTypeInt(signed=False, label="UInt32"), "szInfo": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 256), "Anonymous": SimUnion({"uTimeout": SimTypeInt(signed=False, label="UInt32"), "uVersion": SimTypeInt(signed=False, label="UInt32")}, name="<anon>", label="None"), "szInfoTitle": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 64), "dwInfoFlags": SimTypeInt(signed=False, label="UInt32"), "guidItem": SimTypeBottom(label="Guid"), "hBalloonIcon": SimTypeBottom(label="HICON")}, name="NOTIFYICONDATAA", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["dwMessage", "lpData"]),
#
'Shell_NotifyIconW': SimTypeFunction([SimTypeInt(signed=False, label="NOTIFY_ICON_MESSAGE"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "hWnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "uID": SimTypeInt(signed=False, label="UInt32"), "uFlags": SimTypeInt(signed=False, label="NOTIFY_ICON_DATA_FLAGS"), "uCallbackMessage": SimTypeInt(signed=False, label="UInt32"), "hIcon": SimTypeBottom(label="HICON"), "szTip": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 128), "dwState": SimTypeInt(signed=False, label="UInt32"), "dwStateMask": SimTypeInt(signed=False, label="UInt32"), "szInfo": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 256), "Anonymous": SimUnion({"uTimeout": SimTypeInt(signed=False, label="UInt32"), "uVersion": SimTypeInt(signed=False, label="UInt32")}, name="<anon>", label="None"), "szInfoTitle": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 64), "dwInfoFlags": SimTypeInt(signed=False, label="UInt32"), "guidItem": SimTypeBottom(label="Guid"), "hBalloonIcon": SimTypeBottom(label="HICON")}, name="NOTIFYICONDATAW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["dwMessage", "lpData"]),
#
'Shell_NotifyIconGetRect': SimTypeFunction([SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "hWnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "uID": SimTypeInt(signed=False, label="UInt32"), "guidItem": SimTypeBottom(label="Guid")}, name="NOTIFYICONIDENTIFIER", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["identifier", "iconLocation"]),
#
'SHGetFileInfoA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="FILE_FLAGS_AND_ATTRIBUTES"), SimTypePointer(SimStruct({"hIcon": SimTypeBottom(label="HICON"), "iIcon": SimTypeInt(signed=True, label="Int32"), "dwAttributes": SimTypeInt(signed=False, label="UInt32"), "szDisplayName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 260), "szTypeName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 80)}, name="SHFILEINFOA", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="SHGFI_FLAGS")], SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), arg_names=["pszPath", "dwFileAttributes", "psfi", "cbFileInfo", "uFlags"]),
#
'SHGetFileInfoW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="FILE_FLAGS_AND_ATTRIBUTES"), SimTypePointer(SimStruct({"hIcon": SimTypeBottom(label="HICON"), "iIcon": SimTypeInt(signed=True, label="Int32"), "dwAttributes": SimTypeInt(signed=False, label="UInt32"), "szDisplayName": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 260), "szTypeName": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 80)}, name="SHFILEINFOW", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="SHGFI_FLAGS")], SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), arg_names=["pszPath", "dwFileAttributes", "psfi", "cbFileInfo", "uFlags"]),
#
'SHGetStockIconInfo': SimTypeFunction([SimTypeInt(signed=False, label="SHSTOCKICONID"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "hIcon": SimTypeBottom(label="HICON"), "iSysImageIndex": SimTypeInt(signed=True, label="Int32"), "iIcon": SimTypeInt(signed=True, label="Int32"), "szPath": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 260)}, name="SHSTOCKICONINFO", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["siid", "uFlags", "psii"]),
#
'SHGetDiskFreeSpaceExA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszDirectoryName", "pulFreeBytesAvailableToCaller", "pulTotalNumberOfBytes", "pulTotalNumberOfFreeBytes"]),
#
'SHGetDiskFreeSpaceExW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0), SimTypePointer(SimUnion({"Anonymous": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous_e__Struct", pack=False, align=None), "u": SimStruct({"LowPart": SimTypeInt(signed=False, label="UInt32"), "HighPart": SimTypeInt(signed=False, label="UInt32")}, name="_u_e__Struct", pack=False, align=None), "QuadPart": SimTypeLongLong(signed=False, label="UInt64")}, name="<anon>", label="None"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszDirectoryName", "pulFreeBytesAvailableToCaller", "pulTotalNumberOfBytes", "pulTotalNumberOfFreeBytes"]),
#
'SHGetNewLinkInfoA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszLinkTo", "pszDir", "pszName", "pfMustCopy", "uFlags"]),
#
'SHGetNewLinkInfoW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszLinkTo", "pszDir", "pszName", "pfMustCopy", "uFlags"]),
#
'SHInvokePrinterCommandA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "uAction", "lpBuf1", "lpBuf2", "fModal"]),
#
'SHInvokePrinterCommandW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hwnd", "uAction", "lpBuf1", "lpBuf2", "fModal"]),
#
'SHLoadNonloadedIconOverlayIdentifiers': SimTypeFunction([], SimTypeInt(signed=True, label="Int32")),
#
'SHIsFileAvailableOffline': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pwszPath", "pdwStatus"]),
#
'SHSetLocalizedName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "pszResModule", "idsRes"]),
#
'SHRemoveLocalizedName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath"]),
#
'SHGetLocalizedName': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath", "pszResModule", "cch", "pidsRes"]),
#
'IsLFNDriveA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath"]),
#
'IsLFNDriveW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPath"]),
#
'SHEnumerateUnreadMailAccountsW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hKeyUser", "dwIndex", "pszMailAddress", "cchMailAddress"]),
#
'SHGetUnreadMailCountW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimStruct({"dwLowDateTime": SimTypeInt(signed=False, label="UInt32"), "dwHighDateTime": SimTypeInt(signed=False, label="UInt32")}, name="FILETIME", pack=False, align=None), offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hKeyUser", "pszMailAddress", "pdwCount", "pFileTime", "pszShellExecuteCommand", "cchShellExecuteCommand"]),
#
'SHSetUnreadMailCountW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszMailAddress", "dwCount", "pszShellExecuteCommand"]),
#
'SHTestTokenMembership': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["hToken", "ulRID"]),
#
'SHGetImageList': SimTypeFunction([SimTypeInt(signed=True, label="Int32"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["iImageList", "riid", "ppvObj"]),
#
'InitNetworkAddressControl': SimTypeFunction([], SimTypeInt(signed=True, label="Int32")),
#
'SHGetDriveMedia': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszDrive", "pdwMediaContent"]),
}
lib.set_prototypes(prototypes)
# File: seg_word_no_stopwords.py (cugdeeplearn/Visual-analytics, Apache-2.0)
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import jieba

# Input and output document paths
filename = r"D:\code\bp\visulization\files\new_outtext_zhengze_out28-1.txt"
outfilename = r"D:\code\bp\visulization\files\new_outtext_zhengze_out28-1_seg1.txt"

# Load the user dictionary so domain terms are segmented as single tokens
jieba.load_userdict(r"D:\code\bp\visulization\dict_stopwords\out_dict.txt")

# Segment the input text and write the space-separated tokens to the output file.
# Note: despite the success message below, no stop-word filtering is applied here.
with open(filename, 'r', encoding='UTF-8') as f, \
        open(outfilename, 'w', encoding='UTF-8') as outputs:
    line_content = f.read()
    line_seg = jieba.cut(line_content)
    outstr = ''
    for word in line_seg:
        outstr += word
        if word != '\n':
            outstr += " "
    outputs.write(outstr)
print("Stop-word removal and word segmentation finished!")
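The script above joins segmented tokens with spaces but, despite its stop-word comments and success message, never actually filters anything out. A minimal, hypothetical sketch of token-level stop-word filtering (the helper name and sample tokens are illustrative only and independent of jieba, which would supply the token stream in practice):

```python
def remove_stopwords(tokens, stopwords):
    # Drop stop words and whitespace-only tokens, preserving token order.
    return [t for t in tokens if t.strip() and t not in stopwords]

# Tokens as a segmenter such as jieba.cut would yield them; the stop-word
# set would normally be loaded from a file such as out_dict.txt.
print(remove_stopwords(["我", "爱", "的", "自然语言", "\n"], {"的", "了"}))
# → ['我', '爱', '自然语言']
```

Dropping the filter into the loop above would then amount to skipping any `word` for which `word.strip()` is empty or `word in stopwords` before appending it to `outstr`.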
# File: tests/p4transfer/test_historical.py (nrs-cerickson/p4transfer, BSD-2-Clause)
from __future__ import annotations
import logging
import pytest
import p4transfer
@pytest.fixture
def historical_transfer_config(request, default_transfer_config):
historical_start_change = request.node.get_closest_marker("historical_start_change")
if historical_start_change is not None:
historical_start_change = int(historical_start_change.args[0])
# Adjust the start change settings for historical tests.
transfer_config = default_transfer_config.copy()
if historical_start_change:
transfer_config["historical_start_change"] = historical_start_change
return transfer_config
@pytest.fixture
def historical_add_delete_config(request, default_transfer_config):
    """Parameterizable historical start config."""
    historical_start_change = int(request.param)
    # Adjust the start change settings for historical tests.
    transfer_config = default_transfer_config.copy()
    transfer_config["historical_start_change"] = historical_start_change
    return transfer_config
@pytest.mark.parametrize(
    "historical_add_delete_config",
    [2, 3],
    indirect=True,
)
def test_historical_add_delete(source, target, historical_add_delete_config):
    """Old style rename for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    file2 = source.local_path("inside/file2")
    file3 = source.local_path("inside/file3")
    contents = [b"0"] * 10
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("integ", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")
    chg = source.fetch("change")
    chg._description = "Test desc"
    source.save("change", chg)
    source.p4("integ", file2, file3)
    source.p4("delete", file2)
    source.p4("submit", "-d", "4: file2 delete/copy")

    p4transfer.test_transfer(historical_add_delete_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 2
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 0
    assert filelog[0].revisions[0].action == "delete"
    filelog = target.filelog("//depot/import/file3")
    assert len(filelog[0].revisions) == 1
    assert len(filelog[0].revisions[0].integrations) == 1
    assert filelog[0].revisions[0].action == "branch"
@pytest.mark.xfail(reason="First historical change does not fix up file types yet.")
@pytest.mark.historical_start_change(3)
def test_historical_types(source, target, historical_transfer_config):
    """Test that file types in the historical change are transferred correctly."""
    file1 = source.local_path("inside/file1")
    file2 = source.local_path("inside/file2")
    file3 = source.local_path("inside/file3")
    file1.write_bytes(b"I am file 1")
    file2.write_bytes(b"I am file 2")
    file3.write_bytes(b"I am file 3")
    source.p4("add", "-t", "binary", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("add", "-t", "binary+w", file2)
    source.p4("submit", "-d", "2: file2 added")
    source.p4("add", "-t", "text+D", file3)
    source.p4("submit", "-d", "3: file3 added")

    p4transfer.test_transfer(historical_transfer_config)
    files = target.p4("files", "//depot/import/...")
    assert len(files) == 3
    assert files[0]["depotFile"] == "//depot/import/file1"
    assert files[0]["type"] == "binary"
    assert files[1]["depotFile"] == "//depot/import/file2"
    assert files[1]["type"] == "binary+w"
    assert files[2]["depotFile"] == "//depot/import/file3"
    assert files[2]["type"] == "text+D"
@pytest.mark.skip("Targets with purged revisions need to be revisited.")
def test_historical_existing_purged_target_revs():
    """Historical integration where target has extra revs.

    Additionally, some target revisions have been purged.
    """
    pass
@pytest.mark.historical_start_change(2)
def test_historical_existing_target_revs(source, target, historical_transfer_config):
    """Historical integration where target has extra revs."""
    file1 = source.local_path("inside/file1")
    file1.write_bytes(b"Test content\n")
    file2 = source.local_path("inside/file2")
    tfile1 = target.local_path("import/file1")
    tfile1.write_bytes(b"Test content\n")
    tfile2 = target.local_path("import/file2")
    tfile2.write_bytes(b"Test content\n")

    # Add pre-existing target files.
    target.p4("add", tfile1)
    target.p4("submit", "-d", "1: file1 added")
    target.p4("add", tfile2)
    target.p4("submit", "-d", "2: file2 added")
    target.p4("delete", tfile2)
    target.p4("submit", "-d", "3: file2 deleted")
    tfile2.write_bytes(b"Test content\n")
    target.p4("add", tfile2)
    target.p4("submit", "-d", "4: file2 re-added to match source @2")

    # Now source files we want to replicate.
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("integrate", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")

    # This transfer should create no new changes because everything is up-to-date.
    # It should only set the transfer counters appropriately.
    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 2
    assert target.historical_commit == 2
    assert len(target.p4("changes")) == 4

    source.p4("edit", file1)
    file1.write_bytes(b"Test content\nMore again\n")
    source.p4("submit", "-d", "3: file1 edited")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-as")
    source.p4("submit", "-d", "4: file1 -> file2")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 6
    source_filelog = source.filelog("//depot/inside/file1@>2")
    target_filelog = target.filelog("//depot/import/file1@>4")
    assert len(target_filelog[0].revisions) == 1
    assert len(source_filelog[0].revisions) == len(target_filelog[0].revisions)
    source_how = source_filelog[0].revisions[0].integrations[0].how
    target_how = target_filelog[0].revisions[0].integrations[0].how
    assert source_how == target_how
    source_filelog = source.filelog("//depot/inside/file2@>2")
    target_filelog = target.filelog("//depot/import/file2@>4")
    assert len(target_filelog[0].revisions) == 1
    assert len(source_filelog[0].revisions) == len(target_filelog[0].revisions)
    source_how = source_filelog[0].revisions[0].integrations[0].how
    target_how = target_filelog[0].revisions[0].integrations[0].how
    assert source_how == target_how
@pytest.mark.historical_start_change(3)
def test_historical_rename(source, target, historical_transfer_config):
    """Rename for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    file2 = source.local_path("inside/file2")
    contents = [b"0"] * 10
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("edit", file1)
    file1.write_bytes(b"file1\n")
    source.p4("submit", "-d", "2: file1 edit")
    chg = source.fetch("change")
    chg._description = "Test desc"
    source.save("change", chg)
    source.p4("edit", file1)
    source.p4("move", file1, file2)
    source.p4("submit", "-d", "4: file1 renamed to file2")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 2
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 1
    assert len(filelog[0].revisions[0].integrations) == 1
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/add"

    source.p4("edit", file2)
    source.p4("move", file2, file1)
    source.p4("submit", "-d", "5: renamed back")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 5
    assert len(target.p4("changes")) == 3
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 0
    assert len(filelog[0].revisions[1].integrations) == 2
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/delete"
    assert filelog[0].revisions[1].action == "move/add"
    filelog = target.filelog("//depot/import/file1")
    assert len(filelog[0].revisions) == 3
    assert len(filelog[0].revisions[0].integrations) == 1
    assert len(filelog[0].revisions[1].integrations) == 0
    assert len(filelog[0].revisions[2].integrations) == 1
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/add"
@pytest.mark.historical_start_change(3)
def test_historical_rename_add(source, target, historical_transfer_config):
    """Rename with subsequent add for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    file2 = source.local_path("inside/file2")
    file3 = source.local_path("inside/file3")
    contents = [b"0"] * 10
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("edit", file1)
    file1.write_bytes(b"file1\n")
    source.p4("submit", "-d", "2: file1 edit")
    chg = source.fetch("change")
    chg._description = "Test desc"
    source.save("change", chg)
    source.p4("edit", file1)
    source.p4("move", file1, file2)
    source.p4("submit", "-d", "4: file1 renamed to file2")
    contents[0] = b"file1"
    file3.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file3)
    source.p4("submit", "-d", "5: add file3")
    source.p4("edit", file3)
    source.p4("move", file3, file1)
    source.p4("submit", "-d", "6: rename 3->1")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 6
    assert len(target.p4("changes")) == 4
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 1
    assert len(filelog[0].revisions[0].integrations) == 1
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/add"
    filelog = target.filelog("//depot/import/file3")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 0
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/delete"
    filelog = target.filelog("//depot/import/file1")
    assert len(filelog[0].revisions) == 3
    assert len(filelog[0].revisions[0].integrations) == 1
    assert len(filelog[0].revisions[1].integrations) == 0
    assert len(filelog[0].revisions[2].integrations) == 1
    # assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "move/add"
@pytest.mark.historical_start_change(4)
def test_historical_start_merge(source, target, historical_transfer_config):
    """Merge for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    contents = [b"0"] * 10
    contents2 = contents[:]
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    file2 = source.local_path("inside/file2")
    source.p4("integrate", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")
    source.p4("edit", file1)
    source.p4("edit", file2)
    contents[0] = b"file1"
    file1.write_bytes(b"\n".join(contents) + b"\n")
    contents2[5] = b"file2"
    file2.write_bytes(b"\n".join(contents2) + b"\n")
    source.p4("submit", "-d", "3: file1&2 edited")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-am")
    source.p4("submit", "-d", "4: file1 -> file2 (merge)")
    filelog = source.filelog("//depot/inside/file2")
    assert filelog[0].revisions[0].integrations[0].how == "merge from"

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 1
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 1
    assert len(filelog[0].revisions[0].integrations) == 0
    assert filelog[0].revisions[0].action == "add"

    source.p4("integrate", file2, file1)
    source.p4("resolve", "-am")
    source.p4("submit", "-d", "5: file2 -> file1 (merge)")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 5
    assert len(target.p4("changes")) == 2
    filelog = target.filelog("//depot/import/file1")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 1
    assert filelog[0].revisions[0].action == "integrate"
@pytest.mark.historical_start_change(4)
def test_historical_start_simple(source, target, historical_transfer_config):
    """Simple integration options for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    contents = [b"0"] * 10
    contents2 = contents[:]
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    file2 = source.local_path("inside/file2")
    source.p4("integrate", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")
    source.p4("edit", file1)
    source.p4("edit", file2)
    contents[0] = b"file1"
    file1.write_bytes(b"\n".join(contents) + b"\n")
    contents2[5] = b"file2"
    file2.write_bytes(b"\n".join(contents2) + b"\n")
    source.p4("submit", "-d", "3: file1&2 edited")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-am")
    source.p4("submit", "-d", "4: file1 -> file2 (merge)")
    filelog = source.filelog("//depot/inside/file2")
    assert filelog[0].revisions[0].integrations[0].how == "merge from"

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 1
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 1
    assert len(filelog[0].revisions[0].integrations) == 0
    assert filelog[0].revisions[0].action == "add"
@pytest.mark.historical_start_change(3)
def test_historical_start_simple2(source, target, historical_transfer_config):
    """Simple integration options for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    contents = [b"0"] * 10
    contents2 = contents[:]
    file1.write_bytes(b"\n".join(contents) + b"\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    file2 = source.local_path("inside/file2")
    source.p4("integrate", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")
    source.p4("edit", file1)
    source.p4("edit", file2)
    contents[0] = b"file1"
    file1.write_bytes(b"\n".join(contents) + b"\n")
    contents2[5] = b"file2"
    file2.write_bytes(b"\n".join(contents2) + b"\n")
    source.p4("submit", "-d", "3: file1&2 edited")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-am")
    source.p4("submit", "-d", "file1 -> file2 (merge)")
    filelog = source.filelog("//depot/inside/file2")
    assert filelog[0].revisions[0].integrations[0].how == "merge from"

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 2
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 1
    assert len(filelog[0].revisions[1].integrations) == 0
    assert filelog[0].revisions[0].action == "integrate"
    assert filelog[0].revisions[1].action == "add"
@pytest.mark.historical_start_change(3)
def test_historical_start_simple3(source, target, historical_transfer_config):
    """Simple integration options for historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    file1.write_bytes(b"Test content\n")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    file2 = source.local_path("inside/file2")
    source.p4("integrate", file1, file2)
    source.p4("submit", "-d", "2: file1 -> file2")
    source.p4("edit", file1)
    file1.write_bytes(file1.read_bytes() + b"Rev2 chg3\n")
    source.p4("submit", "-d", "3: file1 edited")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-at")
    source.p4("submit", "-d", "4: file1 -> file2 (copy rev2)")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 4
    assert len(target.p4("changes")) == 2
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 1
    assert filelog[0].revisions[0].action == "integrate"
    assert filelog[0].revisions[1].action == "add"

    logging.debug("========================================== Incremental change")
    # Now make 2 changes and integrate them one at a time.
    source.p4("edit", file1)
    file1.write_bytes(file1.read_bytes() + b"Rev3 chg5\n")
    source.p4("submit", "-d", "5: file1 edited")
    source.p4("edit", file1)
    file1.write_bytes(file1.read_bytes() + b"Rev4 chg6\n")
    source.p4("submit", "-d", "6: file1 edited")
    source.p4("integrate", f"{file1}#3", file2)
    source.p4("resolve", "-at")
    source.p4("submit", "-d", "7: file1 -> file2 (copy rev3)")
    source.p4("integrate", file1, file2)
    source.p4("resolve", "-at")
    source.p4("submit", "-d", "8: file1 -> file2 (copy rev4)")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 8
    assert len(target.p4("changes")) == 6
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 4
    assert len(filelog[0].revisions[1].integrations) == 1
    assert filelog[0].revisions[0].integrations[0].how == "copy from"
    assert filelog[0].revisions[1].integrations[0].how == "copy from"
    assert filelog[0].revisions[2].integrations[0].how == "copy from"
    assert filelog[0].revisions[3].action == "add"
@pytest.mark.historical_start_change(2)
def test_historical_subsequent_merge(source, target, historical_transfer_config):
    """Integration after historical start mode transfer."""
    file1 = source.local_path("inside/file1")
    file1.write_bytes(b"Test content\n")
    file2 = source.local_path("inside/file2")
    file2.write_bytes(b"Test content2\n")
    file3 = source.local_path("inside/file3")
    source.p4("add", file1)
    source.p4("submit", "-d", "1: file1 added")
    source.p4("add", file2)
    source.p4("submit", "-d", "2: file2 added")
    source.p4("integrate", file2, file3)
    source.p4("submit", "-d", "3: file2 -> file3")
    source.p4("edit", file2)
    file2.write_bytes(file2.read_bytes() + b"More\n")
    source.p4("submit", "-d", "4: file2 edited")
    source.p4("integrate", file2, file3)
    source.p4("resolve", "-as")
    source.p4("submit", "-d", "5: file2 -> file3")

    p4transfer.test_transfer(historical_transfer_config)
    assert target.counter == 5
    assert len(target.p4("changes")) == 4
    filelog = target.filelog("//depot/import/file2")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 1
    assert filelog[0].revisions[0].action == "edit"
    assert filelog[0].revisions[1].action == "add"
    filelog = target.filelog("//depot/import/file3")
    assert len(filelog[0].revisions) == 2
    assert len(filelog[0].revisions[0].integrations) == 1
    assert len(filelog[0].revisions[1].integrations) == 1
    assert filelog[0].revisions[0].integrations[0].how == "copy from"
    assert filelog[0].revisions[1].integrations[0].how == "branch from"
# File: wildlifecompliance/migrations/0381_auto_20200115_1056.py (repo: preranaandure/wildlifecompliance, license: Apache-2.0)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2020-01-15 02:56
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('wildlifecompliance', '0380_physicalartifact_custodian_email'),
    ]

    operations = [
        migrations.AddField(
            model_name='legalcase',
            name='accused_bad_character',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='accused_bad_character_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='applications_orders_requests',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='applications_orders_requests_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='applications_orders_required',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='applications_orders_required_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='further_persons_interviews_pending',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='further_persons_interviews_pending_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='local_public_interest',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='local_public_interest_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_interviews',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_interviews_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_legal_matters',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_legal_matters_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_persons_receiving_sanction_outcome',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='other_persons_receiving_sanction_outcome_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='problems_needs_prosecution_witnesses',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='problems_needs_prosecution_witnesses_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='relevant_persons_pending_charges',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='relevant_persons_pending_charges_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='statements_pending',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='statements_pending_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='victim_impact_statement_taken',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='victim_impact_statement_taken_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='vulnerable_hostile_witnesses',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='vulnerable_hostile_witnesses_details',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='witness_refusing_statement',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='legalcase',
            name='witness_refusing_statement_details',
            field=models.TextField(blank=True, null=True),
        ),
    ]
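Auto-generated migrations like the one above are normally left untouched, but the pattern is worth noting: every flag gets a `BooleanField(default=False)` plus a matching `<name>_details` `TextField(blank=True, null=True)`. A framework-free sketch of generating those pairs from a flag list (the flag subset and helper name are illustrative, not part of the migration):

```python
def build_flag_field_specs(flags):
    """For each flag, return (name, spec) pairs: the boolean and its details text field."""
    specs = []
    for name in flags:
        specs.append((name, "models.BooleanField(default=False)"))
        specs.append((name + "_details", "models.TextField(blank=True, null=True)"))
    return specs

# Two of the flags from the migration above, as example input.
for name, spec in build_flag_field_specs(["statements_pending", "other_interviews"]):
    print(name, "->", spec)
```

In real Django code the same idea would emit `migrations.AddField` operations instead of strings.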
# File: webstore/carts/views.py (repo: dmusial98/WebStorePython, license: Unlicense)
from urllib import request
from .models import Cart, CartProduct
from . import serializers
from rest_framework.permissions import AllowAny
# from . import permissions
from rest_framework import generics, status
from rest_framework.response import Response
from django_filters.rest_framework import DjangoFilterBackend


class CartListView(generics.ListAPIView):
    queryset = Cart.objects.all()
    serializer_class = serializers.CartSerializer
    # permission_classes = []
    filter_backends = [DjangoFilterBackend]
    # filterset_fields = ['productId', 'productName', 'categoryType']
    # ordering = ['id']

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]


class CartCreateView(generics.CreateAPIView):
    queryset = Cart.objects.all()
    serializer_class = serializers.CartSerializer

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]

    def create(self, request, *args, **kwargs):
        super(CartCreateView, self).create(request, args, kwargs)
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully created",
                    "result": request.data}
        return Response(response)


class CartDetailView(generics.RetrieveUpdateDestroyAPIView):
    queryset = Cart.objects.all()
    serializer_class = serializers.CartSerializer

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]

    def retrieve(self, request, *args, **kwargs):
        super(CartDetailView, self).retrieve(request, args, kwargs)
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        data = serializer.data
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully retrieved",
                    "result": data}
        return Response(response)

    def patch(self, request, *args, **kwargs):
        super(CartDetailView, self).patch(request, args, kwargs)
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        data = serializer.data
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully updated",
                    "result": data}
        return Response(response)

    def delete(self, request, *args, **kwargs):
        super(CartDetailView, self).delete(request, args, kwargs)
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully deleted"}
        return Response(response)


class ProductCartListView(generics.ListAPIView):
    queryset = CartProduct.objects.all()
    serializer_class = serializers.CartProductSerializer
    # permission_classes = []
    filter_backends = [DjangoFilterBackend]
    # filterset_fields = ['productId', 'productName', 'categoryType']
    # ordering = ['id']

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]


class ProductCartCreateView(generics.CreateAPIView):
    queryset = CartProduct.objects.all()
    serializer_class = serializers.CartProductSerializer

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]

    def create(self, request, *args, **kwargs):
        super(ProductCartCreateView, self).create(request, args, kwargs)
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully created",
                    "result": request.data}
        return Response(response)


class ProductCartDetailView(generics.RetrieveUpdateDestroyAPIView):
    queryset = CartProduct.objects.all()
    serializer_class = serializers.CartProductSerializer

    def get_permissions(self):
        permission_classes = [AllowAny]
        return [permission() for permission in permission_classes]

    def retrieve(self, request, *args, **kwargs):
        super(ProductCartDetailView, self).retrieve(request, args, kwargs)
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        data = serializer.data
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully retrieved",
                    "result": data}
        return Response(response)

    def patch(self, request, *args, **kwargs):
        super(ProductCartDetailView, self).patch(request, args, kwargs)
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        data = serializer.data
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully updated",
                    "result": data}
        return Response(response)

    def delete(self, request, *args, **kwargs):
        super(ProductCartDetailView, self).delete(request, args, kwargs)
        response = {"status_code": status.HTTP_200_OK,
                    "message": "Successfully deleted"}
        return Response(response)
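Every view method above hand-builds the same `{"status_code", "message", "result"}` envelope. A small framework-free helper (a sketch; `wrap_response` is an illustrative name, and the DRF views themselves are shown unchanged above) factors out that duplication:

```python
def wrap_response(message, result=None, status_code=200):
    """Build the response envelope the views above repeat by hand."""
    body = {"status_code": status_code, "message": message}
    if result is not None:
        body["result"] = result
    return body

print(wrap_response("Successfully deleted"))
# → {'status_code': 200, 'message': 'Successfully deleted'}
```

In the views, each method body could then end with `return Response(wrap_response("Successfully updated", serializer.data))`.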
# File: evennia/contrib/game_systems/clothing/__init__.py (repo: davidrideout/evennia, license: BSD-3-Clause)
"""
Clothing contrib - Tim Ashley Jenkins 2017
"""
from .clothing import ClothedCharacter # noqa
from .clothing import ClothedCharacterCmdSet # noqa
from .clothing import ContribClothing # noqa
# File: src/fidalgo/azext_fidalgo/generated/custom.py (repo: tbyfield/azure-cli-extensions, license: MIT)
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
# pylint: disable=unused-argument

from azure.cli.core.util import sdk_no_wait


def fidalgo_dev_center_list(client,
                            resource_group_name=None,
                            top=None):
    if resource_group_name:
        return client.list_by_resource_group(resource_group_name=resource_group_name,
                                             top=top)
    return client.list_by_subscription(top=top)


def fidalgo_dev_center_show(client,
                            resource_group_name,
                            dev_center_name):
    return client.get(resource_group_name=resource_group_name,
                      dev_center_name=dev_center_name)


def fidalgo_dev_center_create(client,
                              resource_group_name,
                              dev_center_name,
                              location,
                              tags=None,
                              type_=None,
                              user_assigned_identities=None,
                              no_wait=False):
    body = {}
    if tags is not None:
        body['tags'] = tags
    body['location'] = location
    body['identity'] = {}
    if type_ is not None:
        body['identity']['type'] = type_
    if user_assigned_identities is not None:
        body['identity']['user_assigned_identities'] = user_assigned_identities
    if len(body['identity']) == 0:
        del body['identity']
    return sdk_no_wait(no_wait,
                       client.begin_create_or_update,
                       resource_group_name=resource_group_name,
                       dev_center_name=dev_center_name,
                       body=body)


def fidalgo_dev_center_update(client,
                              resource_group_name,
                              dev_center_name,
                              tags=None,
                              location=None,
                              type_=None,
                              user_assigned_identities=None,
                              no_wait=False):
    body = {}
    if tags is not None:
        body['tags'] = tags
    if location is not None:
        body['location'] = location
    body['identity'] = {}
    if type_ is not None:
        body['identity']['type'] = type_
    if user_assigned_identities is not None:
        body['identity']['user_assigned_identities'] = user_assigned_identities
    if len(body['identity']) == 0:
        del body['identity']
    return sdk_no_wait(no_wait,
                       client.begin_update,
                       resource_group_name=resource_group_name,
                       dev_center_name=dev_center_name,
                       body=body)


def fidalgo_dev_center_delete(client,
                              resource_group_name,
                              dev_center_name,
                              no_wait=False):
    return sdk_no_wait(no_wait,
                       client.begin_delete,
                       resource_group_name=resource_group_name,
                       dev_center_name=dev_center_name)


def fidalgo_project_list(client,
                         resource_group_name=None,
                         top=None):
    if resource_group_name:
        return client.list_by_resource_group(resource_group_name=resource_group_name,
                                             top=top)
    return client.list_by_subscription(top=top)


def fidalgo_project_show(client,
                         resource_group_name,
                         project_name):
    return client.get(resource_group_name=resource_group_name,
                      project_name=project_name)


def fidalgo_project_create(client,
                           resource_group_name,
                           project_name,
                           location,
                           tags=None,
                           dev_center_id=None,
                           description=None,
                           no_wait=False):
    body = {}
    if tags is not None:
        body['tags'] = tags
    body['location'] = location
    if dev_center_id is not None:
        body['dev_center_id'] = dev_center_id
    if description is not None:
        body['description'] = description
    return sdk_no_wait(no_wait,
                       client.begin_create_or_update,
                       resource_group_name=resource_group_name,
                       project_name=project_name,
                       body=body)


def fidalgo_project_update(client,
                           resource_group_name,
                           project_name,
                           tags=None,
location=None,
dev_center_id=None,
description=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if dev_center_id is not None:
body['dev_center_id'] = dev_center_id
if description is not None:
body['description'] = description
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
project_name=project_name,
body=body)
def fidalgo_project_delete(client,
resource_group_name,
project_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
project_name=project_name)
def fidalgo_attached_network_list(client,
resource_group_name,
project_name=None,
top=None,
dev_center_name=None):
if resource_group_name and project_name is not None:
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_attached_network_show(client,
resource_group_name,
attached_network_connection_name,
project_name=None,
dev_center_name=None):
if resource_group_name and project_name is not None and attached_network_connection_name is not None:
return client.get_by_project(resource_group_name=resource_group_name,
project_name=project_name,
attached_network_connection_name=attached_network_connection_name)
return client.get_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
attached_network_connection_name=attached_network_connection_name)
def fidalgo_attached_network_create(client,
resource_group_name,
dev_center_name,
attached_network_connection_name,
network_connection_resource_id=None,
no_wait=False):
body = {}
if network_connection_resource_id is not None:
body['network_connection_resource_id'] = network_connection_resource_id
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
attached_network_connection_name=attached_network_connection_name,
body=body)
def fidalgo_attached_network_update(client,
resource_group_name,
dev_center_name,
attached_network_connection_name,
network_connection_resource_id=None,
no_wait=False):
body = {}
if network_connection_resource_id is not None:
body['network_connection_resource_id'] = network_connection_resource_id
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
attached_network_connection_name=attached_network_connection_name,
body=body)
def fidalgo_attached_network_delete(client,
resource_group_name,
dev_center_name,
attached_network_connection_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
attached_network_connection_name=attached_network_connection_name)
def fidalgo_environment_list(client,
resource_group_name,
project_name,
top=None):
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
def fidalgo_environment_show(client,
resource_group_name,
project_name,
environment_name):
return client.get(resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name)
def fidalgo_environment_create(client,
resource_group_name,
project_name,
environment_name,
location,
tags=None,
description=None,
catalog_item_name=None,
template_uri=None,
deployment_parameters=None,
environment_type=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
body['location'] = location
if description is not None:
body['description'] = description
if catalog_item_name is not None:
body['catalog_item_name'] = catalog_item_name
if template_uri is not None:
body['template_uri'] = template_uri
if deployment_parameters is not None:
body['deployment_parameters'] = deployment_parameters
if environment_type is not None:
body['environment_type'] = environment_type
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name,
body=body)
def fidalgo_environment_update(client,
resource_group_name,
project_name,
environment_name,
tags=None,
location=None,
description=None,
catalog_item_name=None,
template_uri=None,
deployment_parameters=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if description is not None:
body['description'] = description
if catalog_item_name is not None:
body['catalog_item_name'] = catalog_item_name
if template_uri is not None:
body['template_uri'] = template_uri
if deployment_parameters is not None:
body['deployment_parameters'] = deployment_parameters
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name,
body=body)
def fidalgo_environment_delete(client,
resource_group_name,
project_name,
environment_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name)
def fidalgo_environment_deploy(client,
resource_group_name,
project_name,
environment_name,
parameters=None,
no_wait=False):
deployment = {}
if parameters is not None:
deployment['parameters'] = parameters
return sdk_no_wait(no_wait,
client.begin_deploy,
resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name,
deployment=deployment)
def fidalgo_deployment_list(client,
resource_group_name,
project_name,
environment_name,
top=None):
return client.list_by_environment(resource_group_name=resource_group_name,
project_name=project_name,
environment_name=environment_name,
top=top)
def fidalgo_environment_type_list(client,
resource_group_name,
project_name=None,
top=None,
dev_center_name=None):
if resource_group_name and project_name is not None:
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_environment_type_show(client,
resource_group_name,
dev_center_name,
environment_type_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
environment_type_name=environment_type_name)
def fidalgo_environment_type_create(client,
resource_group_name,
dev_center_name,
environment_type_name,
tags=None,
description=None):
body = {}
if tags is not None:
body['tags'] = tags
if description is not None:
body['description'] = description
return client.create_or_update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
environment_type_name=environment_type_name,
body=body)
def fidalgo_environment_type_update(client,
resource_group_name,
dev_center_name,
environment_type_name,
tags=None,
description=None):
body = {}
if tags is not None:
body['tags'] = tags
if description is not None:
body['description'] = description
return client.update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
environment_type_name=environment_type_name,
body=body)
def fidalgo_environment_type_delete(client,
resource_group_name,
dev_center_name,
environment_type_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
environment_type_name=environment_type_name)
def fidalgo_catalog_item_list(client,
resource_group_name,
dev_center_name=None,
catalog_name=None,
top=None,
project_name=None):
if resource_group_name and dev_center_name is not None and catalog_name is not None:
return client.list_by_catalog(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
top=top)
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
def fidalgo_catalog_item_show(client,
resource_group_name,
dev_center_name,
catalog_name,
catalog_item_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
catalog_item_name=catalog_item_name)
def fidalgo_catalog_item_create(client,
resource_group_name,
dev_center_name,
catalog_name,
catalog_item_name,
description=None,
template_path=None,
parameters=None):
body = {}
if description is not None:
body['description'] = description
body['engine'] = {}
body['engine']['type'] = "ARM"
if template_path is not None:
body['engine']['template_path'] = template_path
if parameters is not None:
body['engine']['parameters'] = parameters
if len(body['engine']) == 0:
del body['engine']
return client.create_or_update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
catalog_item_name=catalog_item_name,
body=body)
def fidalgo_catalog_item_update(client,
resource_group_name,
dev_center_name,
catalog_name,
catalog_item_name,
tags=None,
description=None):
body = {}
if tags is not None:
body['tags'] = tags
if description is not None:
body['description'] = description
return client.update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
catalog_item_name=catalog_item_name,
body=body)
def fidalgo_catalog_item_delete(client,
resource_group_name,
dev_center_name,
catalog_name,
catalog_item_name):
return client.delete(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
catalog_item_name=catalog_item_name)
def fidalgo_gallery_list(client,
resource_group_name,
dev_center_name,
top=None):
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_gallery_show(client,
resource_group_name,
dev_center_name,
gallery_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name)
def fidalgo_gallery_create(client,
resource_group_name,
dev_center_name,
gallery_name,
gallery_resource_id=None,
no_wait=False):
body = {}
if gallery_resource_id is not None:
body['gallery_resource_id'] = gallery_resource_id
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name,
body=body)
def fidalgo_gallery_update(instance,
resource_group_name,
dev_center_name,
gallery_name,
gallery_resource_id=None,
no_wait=False):
    # Generic-update pattern: the CLI framework fetches the resource, passes it
    # here as `instance`, and issues the update with the returned object; the
    # other parameters are unused but kept for the command signature.
if gallery_resource_id is not None:
instance.gallery_resource_id = gallery_resource_id
return instance
def fidalgo_gallery_delete(client,
resource_group_name,
dev_center_name,
gallery_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name)
def fidalgo_image_list(client,
resource_group_name,
dev_center_name,
gallery_name=None,
top=None):
if resource_group_name and dev_center_name is not None and gallery_name is not None:
return client.list_by_gallery(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name,
top=top)
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_image_show(client,
resource_group_name,
dev_center_name,
gallery_name,
image_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name,
image_name=image_name)
def fidalgo_image_version_list(client,
resource_group_name,
dev_center_name,
gallery_name,
image_name):
return client.list_by_image(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name,
image_name=image_name)
def fidalgo_image_version_show(client,
resource_group_name,
dev_center_name,
gallery_name,
image_name,
version_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
gallery_name=gallery_name,
image_name=image_name,
version_name=version_name)
def fidalgo_catalog_list(client,
resource_group_name,
dev_center_name,
top=None):
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_catalog_show(client,
resource_group_name,
dev_center_name,
catalog_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name)
def fidalgo_catalog_create(client,
resource_group_name,
dev_center_name,
catalog_name,
git_hub=None,
ado_git=None,
no_wait=False):
body = {}
if git_hub is not None:
body['git_hub'] = git_hub
if ado_git is not None:
body['ado_git'] = ado_git
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
body=body)
def fidalgo_catalog_update(client,
resource_group_name,
dev_center_name,
catalog_name,
tags=None,
git_hub=None,
ado_git=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if git_hub is not None:
body['git_hub'] = git_hub
if ado_git is not None:
body['ado_git'] = ado_git
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name,
body=body)
def fidalgo_catalog_delete(client,
resource_group_name,
dev_center_name,
catalog_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name)
def fidalgo_catalog_sync(client,
resource_group_name,
dev_center_name,
catalog_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_sync,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
catalog_name=catalog_name)
def fidalgo_mapping_list(client,
resource_group_name,
dev_center_name,
top=None):
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
def fidalgo_mapping_show(client,
resource_group_name,
dev_center_name,
mapping_name):
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
mapping_name=mapping_name)
def fidalgo_mapping_create(client,
resource_group_name,
dev_center_name,
mapping_name,
mapped_subscription_id=None,
environment_type=None,
project_id=None):
body = {}
if mapped_subscription_id is not None:
body['mapped_subscription_id'] = mapped_subscription_id
if environment_type is not None:
body['environment_type'] = environment_type
if project_id is not None:
body['project_id'] = project_id
return client.create_or_update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
mapping_name=mapping_name,
body=body)
def fidalgo_mapping_update(client,
resource_group_name,
dev_center_name,
mapping_name,
mapped_subscription_id=None):
body = {}
if mapped_subscription_id is not None:
body['mapped_subscription_id'] = mapped_subscription_id
return client.update(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
mapping_name=mapping_name,
body=body)
def fidalgo_mapping_delete(client,
resource_group_name,
dev_center_name,
mapping_name):
return client.delete(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
mapping_name=mapping_name)
def fidalgo_dev_box_definition_list(client,
resource_group_name,
dev_center_name=None,
top=None,
project_name=None):
if resource_group_name and dev_center_name is not None:
return client.list_by_dev_center(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
top=top)
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
def fidalgo_dev_box_definition_show(client,
resource_group_name,
dev_box_definition_name,
dev_center_name=None,
project_name=None):
if resource_group_name and dev_center_name is not None and dev_box_definition_name is not None:
return client.get(resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
dev_box_definition_name=dev_box_definition_name)
return client.get_by_project(resource_group_name=resource_group_name,
project_name=project_name,
dev_box_definition_name=dev_box_definition_name)
def fidalgo_dev_box_definition_create(client,
resource_group_name,
dev_center_name,
dev_box_definition_name,
location,
tags=None,
image_reference=None,
name=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
body['location'] = location
if image_reference is not None:
body['image_reference'] = image_reference
body['sku'] = {}
if name is not None:
body['sku']['name'] = name
if len(body['sku']) == 0:
del body['sku']
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
dev_box_definition_name=dev_box_definition_name,
body=body)
def fidalgo_dev_box_definition_update(client,
resource_group_name,
dev_center_name,
dev_box_definition_name,
tags=None,
location=None,
image_reference=None,
name=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if image_reference is not None:
body['image_reference'] = image_reference
body['sku'] = {}
if name is not None:
body['sku']['name'] = name
if len(body['sku']) == 0:
del body['sku']
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
dev_box_definition_name=dev_box_definition_name,
body=body)
def fidalgo_dev_box_definition_delete(client,
resource_group_name,
dev_center_name,
dev_box_definition_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
dev_center_name=dev_center_name,
dev_box_definition_name=dev_box_definition_name)
def fidalgo_operation_statuses_show(client,
location,
operation_id):
return client.get(location=location,
operation_id=operation_id)
def fidalgo_sku_list(client,
top=None):
return client.list_by_subscription(top=top)
def fidalgo_pool_list(client,
resource_group_name,
project_name,
top=None):
return client.list_by_project(resource_group_name=resource_group_name,
project_name=project_name,
top=top)
def fidalgo_pool_show(client,
resource_group_name,
project_name,
pool_name):
return client.get(resource_group_name=resource_group_name,
project_name=project_name,
pool_name=pool_name)
def fidalgo_pool_create(client,
resource_group_name,
project_name,
pool_name,
location,
tags=None,
machine_definition_id=None,
dev_box_definition_name=None,
network_settings_id=None,
network_connection_name=None,
name=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
body['location'] = location
if machine_definition_id is not None:
body['machine_definition_id'] = machine_definition_id
if dev_box_definition_name is not None:
body['dev_box_definition_name'] = dev_box_definition_name
if network_settings_id is not None:
body['network_settings_id'] = network_settings_id
if network_connection_name is not None:
body['network_connection_name'] = network_connection_name
body['sku'] = {}
if name is not None:
body['sku']['name'] = name
if len(body['sku']) == 0:
del body['sku']
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
project_name=project_name,
pool_name=pool_name,
body=body)
def fidalgo_pool_update(client,
resource_group_name,
project_name,
pool_name,
tags=None,
location=None,
machine_definition_id=None,
dev_box_definition_name=None,
network_settings_id=None,
network_connection_name=None,
name=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if machine_definition_id is not None:
body['machine_definition_id'] = machine_definition_id
if dev_box_definition_name is not None:
body['dev_box_definition_name'] = dev_box_definition_name
if network_settings_id is not None:
body['network_settings_id'] = network_settings_id
if network_connection_name is not None:
body['network_connection_name'] = network_connection_name
body['sku'] = {}
if name is not None:
body['sku']['name'] = name
if len(body['sku']) == 0:
del body['sku']
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
project_name=project_name,
pool_name=pool_name,
body=body)
def fidalgo_pool_delete(client,
resource_group_name,
project_name,
pool_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
project_name=project_name,
pool_name=pool_name)
def fidalgo_machine_definition_list(client,
resource_group_name=None,
top=None):
if resource_group_name:
return client.list_by_resource_group(resource_group_name=resource_group_name,
top=top)
return client.list_by_subscription(top=top)
def fidalgo_machine_definition_show(client,
resource_group_name,
machine_definition_name):
return client.get(resource_group_name=resource_group_name,
machine_definition_name=machine_definition_name)
def fidalgo_machine_definition_create(client,
resource_group_name,
machine_definition_name,
location,
tags=None,
image_reference=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
body['location'] = location
if image_reference is not None:
body['image_reference'] = image_reference
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
machine_definition_name=machine_definition_name,
body=body)
def fidalgo_machine_definition_update(client,
resource_group_name,
machine_definition_name,
tags=None,
location=None,
image_reference=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if image_reference is not None:
body['image_reference'] = image_reference
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
machine_definition_name=machine_definition_name,
body=body)
def fidalgo_machine_definition_delete(client,
resource_group_name,
machine_definition_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
machine_definition_name=machine_definition_name)
def fidalgo_network_setting_list(client,
resource_group_name=None,
top=None):
if resource_group_name:
return client.list_by_resource_group(resource_group_name=resource_group_name,
top=top)
return client.list_by_subscription(top=top)
def fidalgo_network_setting_show(client,
resource_group_name,
network_setting_name):
return client.get(resource_group_name=resource_group_name,
network_setting_name=network_setting_name)
def fidalgo_network_setting_create(client,
resource_group_name,
network_setting_name,
location,
tags=None,
subnet_id=None,
networking_resource_group_id=None,
domain_name=None,
organization_unit=None,
domain_username=None,
domain_password=None,
networking_resource_group_name=None,
domain_join_type=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
body['location'] = location
if subnet_id is not None:
body['subnet_id'] = subnet_id
if networking_resource_group_id is not None:
body['networking_resource_group_id'] = networking_resource_group_id
if domain_name is not None:
body['domain_name'] = domain_name
if organization_unit is not None:
body['organization_unit'] = organization_unit
if domain_username is not None:
body['domain_username'] = domain_username
if domain_password is not None:
body['domain_password'] = domain_password
if networking_resource_group_name is not None:
body['networking_resource_group_name'] = networking_resource_group_name
if domain_join_type is not None:
body['domain_join_type'] = domain_join_type
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
network_setting_name=network_setting_name,
body=body)
def fidalgo_network_setting_update(client,
resource_group_name,
network_setting_name,
tags=None,
location=None,
subnet_id=None,
networking_resource_group_id=None,
domain_name=None,
organization_unit=None,
domain_username=None,
domain_password=None,
no_wait=False):
body = {}
if tags is not None:
body['tags'] = tags
if location is not None:
body['location'] = location
if subnet_id is not None:
body['subnet_id'] = subnet_id
if networking_resource_group_id is not None:
body['networking_resource_group_id'] = networking_resource_group_id
if domain_name is not None:
body['domain_name'] = domain_name
if organization_unit is not None:
body['organization_unit'] = organization_unit
if domain_username is not None:
body['domain_username'] = domain_username
if domain_password is not None:
body['domain_password'] = domain_password
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
network_setting_name=network_setting_name,
body=body)
def fidalgo_network_setting_delete(client,
resource_group_name,
network_setting_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
network_setting_name=network_setting_name)
def fidalgo_network_setting_list_health_detail(client,
resource_group_name,
network_setting_name,
top=None):
return client.list_health_details(resource_group_name=resource_group_name,
top=top,
network_setting_name=network_setting_name)
def fidalgo_network_setting_show_health_detail(client,
resource_group_name,
network_setting_name):
return client.get_health_details(resource_group_name=resource_group_name,
network_setting_name=network_setting_name)
| 42.217094 | 106 | 0.497206 | 4,515 | 49,394 | 5.010631 | 0.029457 | 0.14998 | 0.18711 | 0.096937 | 0.943774 | 0.923131 | 0.912611 | 0.88105 | 0.840251 | 0.823189 | 0 | 0.000258 | 0.451391 | 49,394 | 1,169 | 107 | 42.253208 | 0.8346 | 0.010163 | 0 | 0.864677 | 0 | 0 | 0.027981 | 0.008677 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075622 | false | 0.00597 | 0.000995 | 0.037811 | 0.163184 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
194e08d962c78fd13939a25aac9ccaa1feff1d35 | 210 | py | Python | categories/tests/__init__.py | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | 1 | 2019-02-06T14:23:55.000Z | 2019-02-06T14:23:55.000Z | categories/tests/__init__.py | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | null | null | null | categories/tests/__init__.py | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | null | null | null | from categories.tests.category_import import *
from categories.tests.templatetags import *
from categories.tests.manager import *
from categories.tests.registration import *
__fixtures__ = ['categories.json']
| 30 | 46 | 0.819048 | 24 | 210 | 6.958333 | 0.416667 | 0.335329 | 0.45509 | 0.449102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 210 | 6 | 47 | 35 | 0.878947 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
197d981f745c1929a6f3bc12909392f984e64785 | 3,653 | py | Python | test/test_interruptions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | test/test_interruptions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | test/test_interruptions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Octopus Server API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 2019.6.7+Branch.tags-2019.6.7.Sha.aa18dc6809953218c66f57eff7d26481d9b23d6a
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import octopus_deploy_swagger_client
from octopus_deploy_swagger_client.interruptions_api import InterruptionsApi  # noqa: E501
from octopus_deploy_swagger_client.rest import ApiException
class TestInterruptionsApi(unittest.TestCase):
"""InterruptionsApi unit test stubs"""
def setUp(self):
        self.api = octopus_deploy_swagger_client.interruptions_api.InterruptionsApi()  # noqa: E501
def tearDown(self):
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_0(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_0
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_spaces(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_spaces
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_spaces_0(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_interruption_responsibility_responder_spaces_0
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_submit_interruption_responder(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_submit_interruption_responder
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_submit_interruption_responder_spaces(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_submit_interruption_responder_spaces
"""
pass
def test_custom_query_response_descriptor_octopus_server_web_api_actions_list_interruptions_responder(self):
"""Test case for custom_query_response_descriptor_octopus_server_web_api_actions_list_interruptions_responder
"""
pass
def test_custom_query_response_descriptor_octopus_server_web_api_actions_list_interruptions_responder_spaces(self):
"""Test case for custom_query_response_descriptor_octopus_server_web_api_actions_list_interruptions_responder_spaces
"""
pass
def test_load_response_descriptor_server_tasks_interruption_interruption_resource(self):
"""Test case for load_response_descriptor_server_tasks_interruption_interruption_resource
        Get an InterruptionResource by ID  # noqa: E501
"""
pass
def test_load_response_descriptor_server_tasks_interruption_interruption_resource_spaces(self):
"""Test case for load_response_descriptor_server_tasks_interruption_interruption_resource_spaces
        Get an InterruptionResource by ID  # noqa: E501
"""
pass
if __name__ == '__main__':
unittest.main()
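Every generated test above is a bare `pass` stub. A hypothetical sketch (not part of the generated file) of how one stub could be fleshed out: patch the API call with `unittest.mock` and assert on the returned resource. `FakeInterruptionsApi` here is a stand-in for the generated client class, since the real one issues HTTP requests.

```python
# Hypothetical sketch: turning a generated `pass` stub into a real test by
# patching the API call. FakeInterruptionsApi stands in for the generated
# InterruptionsApi, whose real methods would hit the Octopus server.
import unittest
from unittest import mock


class FakeInterruptionsApi:
    def get_interruption(self, interruption_id):
        # The real generated client would issue an HTTP request here.
        raise NotImplementedError


class TestInterruptionStubFleshedOut(unittest.TestCase):
    def test_load_response_descriptor(self):
        api = FakeInterruptionsApi()
        with mock.patch.object(
            FakeInterruptionsApi,
            "get_interruption",
            return_value={"Id": "Interruptions-1"},
        ):
            resource = api.get_interruption("Interruptions-1")
        self.assertEqual(resource["Id"], "Interruptions-1")
```

The same pattern applies to each stub: patch the transport, call the method under test, assert on the deserialized result.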
# ---- src/python/providers/movement/__init__.py (repo: daboross/dxnr, license: MIT) ----
from providers.movement import passing_movement
def apply_prototypes() -> None:
passing_movement.apply_prototypes()
def instantiate() -> None:
passing_movement.instantiate()
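This module is a package facade: it re-exports the `apply_prototypes()`/`instantiate()` lifecycle hooks and forwards them to its submodule. A self-contained sketch of that delegation pattern (the submodule here is a stand-in, not the real `passing_movement`):

```python
# Self-contained sketch of the facade/delegation pattern above: the package
# forwards lifecycle hooks to each submodule so a loader can treat all
# providers uniformly. `passing_movement` here is a stand-in class.
calls = []


class passing_movement:  # stand-in for the real submodule
    @staticmethod
    def apply_prototypes():
        calls.append('apply_prototypes')

    @staticmethod
    def instantiate():
        calls.append('instantiate')


def apply_prototypes():
    passing_movement.apply_prototypes()


def instantiate():
    passing_movement.instantiate()


apply_prototypes()
instantiate()
print(calls)  # ['apply_prototypes', 'instantiate']
```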
# ---- loldib/getratings/models/NA/na_kayn/na_kayn_sup.py (repo: koliupy/loldib, license: Apache-2.0) ----
from getratings.models.ratings import Ratings
class NA_Kayn_Sup_Aatrox(Ratings):
pass
class NA_Kayn_Sup_Ahri(Ratings):
pass
class NA_Kayn_Sup_Akali(Ratings):
pass
class NA_Kayn_Sup_Alistar(Ratings):
pass
class NA_Kayn_Sup_Amumu(Ratings):
pass
class NA_Kayn_Sup_Anivia(Ratings):
pass
class NA_Kayn_Sup_Annie(Ratings):
pass
class NA_Kayn_Sup_Ashe(Ratings):
pass
class NA_Kayn_Sup_AurelionSol(Ratings):
pass
class NA_Kayn_Sup_Azir(Ratings):
pass
class NA_Kayn_Sup_Bard(Ratings):
pass
class NA_Kayn_Sup_Blitzcrank(Ratings):
pass
class NA_Kayn_Sup_Brand(Ratings):
pass
class NA_Kayn_Sup_Braum(Ratings):
pass
class NA_Kayn_Sup_Caitlyn(Ratings):
pass
class NA_Kayn_Sup_Camille(Ratings):
pass
class NA_Kayn_Sup_Cassiopeia(Ratings):
pass
class NA_Kayn_Sup_Chogath(Ratings):
pass
class NA_Kayn_Sup_Corki(Ratings):
pass
class NA_Kayn_Sup_Darius(Ratings):
pass
class NA_Kayn_Sup_Diana(Ratings):
pass
class NA_Kayn_Sup_Draven(Ratings):
pass
class NA_Kayn_Sup_DrMundo(Ratings):
pass
class NA_Kayn_Sup_Ekko(Ratings):
pass
class NA_Kayn_Sup_Elise(Ratings):
pass
class NA_Kayn_Sup_Evelynn(Ratings):
pass
class NA_Kayn_Sup_Ezreal(Ratings):
pass
class NA_Kayn_Sup_Fiddlesticks(Ratings):
pass
class NA_Kayn_Sup_Fiora(Ratings):
pass
class NA_Kayn_Sup_Fizz(Ratings):
pass
class NA_Kayn_Sup_Galio(Ratings):
pass
class NA_Kayn_Sup_Gangplank(Ratings):
pass
class NA_Kayn_Sup_Garen(Ratings):
pass
class NA_Kayn_Sup_Gnar(Ratings):
pass
class NA_Kayn_Sup_Gragas(Ratings):
pass
class NA_Kayn_Sup_Graves(Ratings):
pass
class NA_Kayn_Sup_Hecarim(Ratings):
pass
class NA_Kayn_Sup_Heimerdinger(Ratings):
pass
class NA_Kayn_Sup_Illaoi(Ratings):
pass
class NA_Kayn_Sup_Irelia(Ratings):
pass
class NA_Kayn_Sup_Ivern(Ratings):
pass
class NA_Kayn_Sup_Janna(Ratings):
pass
class NA_Kayn_Sup_JarvanIV(Ratings):
pass
class NA_Kayn_Sup_Jax(Ratings):
pass
class NA_Kayn_Sup_Jayce(Ratings):
pass
class NA_Kayn_Sup_Jhin(Ratings):
pass
class NA_Kayn_Sup_Jinx(Ratings):
pass
class NA_Kayn_Sup_Kalista(Ratings):
pass
class NA_Kayn_Sup_Karma(Ratings):
pass
class NA_Kayn_Sup_Karthus(Ratings):
pass
class NA_Kayn_Sup_Kassadin(Ratings):
pass
class NA_Kayn_Sup_Katarina(Ratings):
pass
class NA_Kayn_Sup_Kayle(Ratings):
pass
class NA_Kayn_Sup_Kayn(Ratings):
pass
class NA_Kayn_Sup_Kennen(Ratings):
pass
class NA_Kayn_Sup_Khazix(Ratings):
pass
class NA_Kayn_Sup_Kindred(Ratings):
pass
class NA_Kayn_Sup_Kled(Ratings):
pass
class NA_Kayn_Sup_KogMaw(Ratings):
pass
class NA_Kayn_Sup_Leblanc(Ratings):
pass
class NA_Kayn_Sup_LeeSin(Ratings):
pass
class NA_Kayn_Sup_Leona(Ratings):
pass
class NA_Kayn_Sup_Lissandra(Ratings):
pass
class NA_Kayn_Sup_Lucian(Ratings):
pass
class NA_Kayn_Sup_Lulu(Ratings):
pass
class NA_Kayn_Sup_Lux(Ratings):
pass
class NA_Kayn_Sup_Malphite(Ratings):
pass
class NA_Kayn_Sup_Malzahar(Ratings):
pass
class NA_Kayn_Sup_Maokai(Ratings):
pass
class NA_Kayn_Sup_MasterYi(Ratings):
pass
class NA_Kayn_Sup_MissFortune(Ratings):
pass
class NA_Kayn_Sup_MonkeyKing(Ratings):
pass
class NA_Kayn_Sup_Mordekaiser(Ratings):
pass
class NA_Kayn_Sup_Morgana(Ratings):
pass
class NA_Kayn_Sup_Nami(Ratings):
pass
class NA_Kayn_Sup_Nasus(Ratings):
pass
class NA_Kayn_Sup_Nautilus(Ratings):
pass
class NA_Kayn_Sup_Nidalee(Ratings):
pass
class NA_Kayn_Sup_Nocturne(Ratings):
pass
class NA_Kayn_Sup_Nunu(Ratings):
pass
class NA_Kayn_Sup_Olaf(Ratings):
pass
class NA_Kayn_Sup_Orianna(Ratings):
pass
class NA_Kayn_Sup_Ornn(Ratings):
pass
class NA_Kayn_Sup_Pantheon(Ratings):
pass
class NA_Kayn_Sup_Poppy(Ratings):
pass
class NA_Kayn_Sup_Quinn(Ratings):
pass
class NA_Kayn_Sup_Rakan(Ratings):
pass
class NA_Kayn_Sup_Rammus(Ratings):
pass
class NA_Kayn_Sup_RekSai(Ratings):
pass
class NA_Kayn_Sup_Renekton(Ratings):
pass
class NA_Kayn_Sup_Rengar(Ratings):
pass
class NA_Kayn_Sup_Riven(Ratings):
pass
class NA_Kayn_Sup_Rumble(Ratings):
pass
class NA_Kayn_Sup_Ryze(Ratings):
pass
class NA_Kayn_Sup_Sejuani(Ratings):
pass
class NA_Kayn_Sup_Shaco(Ratings):
pass
class NA_Kayn_Sup_Shen(Ratings):
pass
class NA_Kayn_Sup_Shyvana(Ratings):
pass
class NA_Kayn_Sup_Singed(Ratings):
pass
class NA_Kayn_Sup_Sion(Ratings):
pass
class NA_Kayn_Sup_Sivir(Ratings):
pass
class NA_Kayn_Sup_Skarner(Ratings):
pass
class NA_Kayn_Sup_Sona(Ratings):
pass
class NA_Kayn_Sup_Soraka(Ratings):
pass
class NA_Kayn_Sup_Swain(Ratings):
pass
class NA_Kayn_Sup_Syndra(Ratings):
pass
class NA_Kayn_Sup_TahmKench(Ratings):
pass
class NA_Kayn_Sup_Taliyah(Ratings):
pass
class NA_Kayn_Sup_Talon(Ratings):
pass
class NA_Kayn_Sup_Taric(Ratings):
pass
class NA_Kayn_Sup_Teemo(Ratings):
pass
class NA_Kayn_Sup_Thresh(Ratings):
pass
class NA_Kayn_Sup_Tristana(Ratings):
pass
class NA_Kayn_Sup_Trundle(Ratings):
pass
class NA_Kayn_Sup_Tryndamere(Ratings):
pass
class NA_Kayn_Sup_TwistedFate(Ratings):
pass
class NA_Kayn_Sup_Twitch(Ratings):
pass
class NA_Kayn_Sup_Udyr(Ratings):
pass
class NA_Kayn_Sup_Urgot(Ratings):
pass
class NA_Kayn_Sup_Varus(Ratings):
pass
class NA_Kayn_Sup_Vayne(Ratings):
pass
class NA_Kayn_Sup_Veigar(Ratings):
pass
class NA_Kayn_Sup_Velkoz(Ratings):
pass
class NA_Kayn_Sup_Vi(Ratings):
pass
class NA_Kayn_Sup_Viktor(Ratings):
pass
class NA_Kayn_Sup_Vladimir(Ratings):
pass
class NA_Kayn_Sup_Volibear(Ratings):
pass
class NA_Kayn_Sup_Warwick(Ratings):
pass
class NA_Kayn_Sup_Xayah(Ratings):
pass
class NA_Kayn_Sup_Xerath(Ratings):
pass
class NA_Kayn_Sup_XinZhao(Ratings):
pass
class NA_Kayn_Sup_Yasuo(Ratings):
pass
class NA_Kayn_Sup_Yorick(Ratings):
pass
class NA_Kayn_Sup_Zac(Ratings):
pass
class NA_Kayn_Sup_Zed(Ratings):
pass
class NA_Kayn_Sup_Ziggs(Ratings):
pass
class NA_Kayn_Sup_Zilean(Ratings):
pass
class NA_Kayn_Sup_Zyra(Ratings):
pass
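The file consists of hundreds of near-identical empty subclasses, one per champion matchup. A sketch (not part of the original file) of how the same classes could be produced at runtime with `type()` instead of being written out by hand; `Ratings` here is a stand-in base class and the champion list is abbreviated:

```python
# Sketch: generating the per-matchup subclasses dynamically with type().
# Ratings is a stand-in for getratings.models.ratings.Ratings.
class Ratings(object):
    pass


CHAMPIONS = ['Aatrox', 'Ahri', 'Zyra']  # abbreviated list for illustration

generated = {
    name: type('NA_Kayn_Sup_' + name, (Ratings,), {})
    for name in CHAMPIONS
}

print(generated['Ahri'].__name__)  # NA_Kayn_Sup_Ahri
```

Keeping the classes written out, as the original does, trades file size for names that static tooling can see; the dynamic form is only preferable when the list changes often.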
# ---- tests/test_add.py (repo: noqqe/rvo, license: MIT) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from conftest import rvo_output
from click.testing import CliRunner
from rvo import cli
def test_add_all_parameters(isatty_true):
options = ['add', '-t', 'test', '-c', 'test', '--content', 'test']
output = ['Document "test" created.']
rvo_output(options,output)
def test_add_tags(isatty_true):
options = ['add', '-t', 'test', '--content', 'test']
output = ['Document "test" created.']
rvo_output(options,output)
def test_add_title_test(isatty_true):
options = ['add', '-t', 'test', '--content', 'THIS IS A TITLE']
output = ['Document "THIS IS A TITLE" created.']
rvo_output(options,output)
def test_add_title_test_gnarf(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-c', 'töstcät', '-x', 'gnarf'])
assert not result.exception
assert result.output.strip().endswith('Document "gnarf" created.')
def test_add_title_test_gnarf_multiline(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-c', 'töstcät', '-x', 'gnarf\nfoo'])
assert not result.exception
assert result.output.strip().endswith('Document "gnarf" created.')
def test_add_title_test_hashtag(isatty_true):
options = ['add', '-t', 'test', '--content', '# THIS IS A TITLE']
output = ['Document "THIS IS A TITLE" created.']
rvo_output(options,output)
def test_add_title_test_hashtag_multiline(isatty_true):
options = ['add', '-t', 'test', '--content', '# THIS IS A TITLE\nmutliline']
output = ['Document "THIS IS A TITLE" created.']
rvo_output(options,output)
def test_add_very_long_title(isatty_true):
options = ['add', '-t', 'test', '--content', '# THIS IS A VERY VERY LONG NEVER ENDING TITLE THAT EXCEEDS LIMITS']
output = ['Document "THIS IS A VERY VERY LONG NEVER ENDING TITLE THAT E" created.']
rvo_output(options,output)
def test_add_no_parameters(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_one_parameters_tag(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-t', 'testtag'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_utf8_cat(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-c', 'töstcät'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_utf8_cat_multi(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-c', 'tüütüü', '-c', 'töstcät'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_utf8_tag(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-t', 'töstcät'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_utf8_tag_multi(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-t', 'tüütüü', '-t', 'töstcät'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_encrypt_by_parameter_wrong_pw(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e', '-p', 'thispasswordistotallywrong', '-t', 'encryption', '-c', 'test'])
assert result.output.strip().endswith('Invalid Password')
assert result.exception
def test_add_encrypt_by_parameter(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e', '-p', 'test123', '-t', 'encryption', '-c', 'test'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_encrypt_by_input(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e', '-t', 'encryption', '-c', 'test'], input="test123\n")
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_encrypt_by_input_with_content(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e', '-t', 'encryption', '-x', 'TEST', '-c', 'test'], input="test123\n")
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_encrypt_by_input_wrong_pw(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e', '-t', 'encryption', '-c', 'test'], input="test2123\n")
assert result.output.strip().endswith('Invalid Password')
assert result.exception
def test_add_read_from_stdin(isatty_false):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add'], input="Schwifty\nSchwifty..lol\nMorty\n\n")
assert result.output.strip().endswith('Document "Schwifty" created.')
assert not result.exception
def test_add_read_from_stdin_with_cat(isatty_false):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-c', 'test'], input="Schwifty\nSchwifty..lol\nMorty\n\n")
assert result.output.strip().endswith('Document "Schwifty" created.')
assert not result.exception
def test_add_read_from_stdin_with_tag(isatty_false):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-t', 'tag'], input="Schwifty\nSchwifty..lol\nMorty\n\n")
assert not result.exception
assert result.output.strip().endswith('Document "Schwifty" created.')
def test_add_conflicting_stdin_reading(isatty_false):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-e'], input="Schwifty\nSchwifty..lol\nMorty\n\n")
assert result.exception
assert result.output.strip().endswith('Invalid Password')
def test_add_location_germany(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-l', 'Nuremberg', '-c', 'test'])
assert result.output.strip().endswith('Document "TEST" created.')
assert not result.exception
def test_add_location_invalid(isatty_true):
runner = CliRunner()
result = runner.invoke(cli.cli, ['add', '-l', 'DOESNOTEXISTTOWNATLEASTIHOPE', '-c', 'test'])
assert result.exception
# ---- mltrain-nips-2017/ben_athiwaratkun/pytorch-bayesgan/models/discriminators.py (repo: gopala-kr/ds-notebooks, license: MIT) ----
import torch
import torch.nn as nn
class _netD64(nn.Module):
def __init__(self, ngpu, num_classes=1, nc=3, ndf=64):
super(_netD64, self).__init__()
self.ngpu = ngpu
self.num_classes = num_classes
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
#nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Conv2d(ndf * 8, num_classes, 4, 1, 0, bias=False),
# out size = batch x num_classes x 1 x 1
#nn.Sigmoid()
)
if self.num_classes == 1:
self.main.add_module('prob', nn.Sigmoid())
# output = probability
else:
pass
# output = scores
def forward(self, input):
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(input.size(0), self.num_classes).squeeze(1)
class _netD(nn.Module):
def __init__(self, ngpu, num_classes=1, nc=3, ndf=64):
super(_netD, self).__init__()
self.ngpu = ngpu
self.num_classes = num_classes
self.main = nn.Sequential(
# input is (nc) x 32 x 32
# conv2D(in_channels, out_channels, kernelsize, stride, padding)
nn.Conv2d(nc, ndf , 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 16 x 16
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 8 x 8
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 4 x 4
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 2 x 2
nn.Conv2d(ndf * 8, num_classes, 2, 1, 0, bias=False),
# out size = batch x num_classes x 1 x 1
)
if self.num_classes == 1:
self.main.add_module('prob', nn.Sigmoid())
# output = probability
else:
pass
# output = scores
def forward(self, input):
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(input.size(0), self.num_classes).squeeze(1)
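The `# state size` annotations in `_netD` follow from the usual convolution output-size arithmetic, out = floor((in - kernel + 2*pad) / stride) + 1. A small self-contained check:

```python
# Quick sketch checking the "# state size" comments in _netD against the
# standard convolution output-size formula:
#     out = floor((in - kernel + 2*pad) / stride) + 1
def conv_out(size, kernel=4, stride=2, pad=1):
    return (size - kernel + 2 * pad) // stride + 1


sizes = [32]
for _ in range(4):  # four stride-2, kernel-4, pad-1 conv layers
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [32, 16, 8, 4, 2]

# The final Conv2d(ndf*8, num_classes, 2, 1, 0) collapses 2x2 down to 1x1:
print(conv_out(2, kernel=2, stride=1, pad=0))  # 1
```

The same arithmetic explains why `_netD64` needs one extra conv layer: 64 halves through 32, 16, 8 to 4, and its final kernel-4, stride-1, pad-0 convolution then yields the 1x1 output.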
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape)
class _netD_v2(nn.Module):
def __init__(self, ngpu, num_classes=1, nc=3, ndf=64):
super(_netD_v2, self).__init__()
self.ngpu = ngpu
self.num_classes = num_classes
self.main = nn.Sequential(
# input is (nc) x 32 x 32
# conv2D(in_channels, out_channels, kernelsize, stride, padding)
nn.Conv2d(nc, ndf , 4, 2, 1),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 16 x 16
nn.Conv2d(ndf, ndf * 2, 4, 2, 1),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 8 x 8
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 4 x 4
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 2 x 2
Reshape(-1, ndf*8*2*2),
nn.Linear(ndf*8*2*2, num_classes),
            # Note: the difference from v1 is using a linear layer at the
            # last stage, with bias=True
)
if self.num_classes == 1:
self.main.add_module('prob', nn.Sigmoid())
# output = probability
else:
pass
# output = scores
def forward(self, input):
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(input.size(0), self.num_classes).squeeze(1)
class _netD_synth(nn.Module):
def __init__(self, ngpu, dimx=100, leaky_inplace=False):
super(_netD_synth, self).__init__()
        self.ngpu = ngpu
        self.num_classes = 1  # single logit; forward() reshapes using this
self.main = nn.Sequential(
nn.Linear(dimx, 1000, bias=True),
nn.LeakyReLU(0.2, inplace=leaky_inplace),
nn.Linear(1000, 1, bias=True)
# This has two classes (one logit)
)
def forward(self, input):
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
        return output.view(input.size(0), self.num_classes).squeeze(1)
# ---- jpntextgen/date_provider.py (repo: nerophung/japanese-fake-text, license: MIT) ----
# -*- coding: utf-8 -*-
import random
from .utils import constants
class DateProvider(object):
    # Each entry is a zero-argument factory so a fresh random date is
    # produced on every call. (A plain dict of pre-formatted strings would
    # be evaluated once at import time, and get_date would always return
    # the same sixteen values.) Month ranges are 1-12 and day ranges 1-31.
    date_format = {
        0: lambda: '{}{}/{:02d}/{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        1: lambda: '{}{}/{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12)),
        2: lambda: '{}{}.{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12)),
        3: lambda: '{}{}.{:02d}.{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        4: lambda: '{}{}年{:02d}月{:02d}日'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        5: lambda: 'R{}年{:02d}月{:02d}日'.format(random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        6: lambda: 'R{}年{:02d}月'.format(random.randint(0, 70), random.randint(1, 12)),
        7: lambda: 'R{}/{:02d}/{:02d}'.format(random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        8: lambda: 'R{}/{:02d}'.format(random.randint(0, 70), random.randint(1, 12)),
        9: lambda: '{}年{:02d}月{:02d}日'.format(random.randint(1800, 2100), random.randint(1, 12), random.randint(1, 31)),
        10: lambda: '{}{}.{:02d}.{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12), random.randint(1, 31)),
        11: lambda: '{}{}/{:02d}'.format(random.choice(constants.YEAR_LIST), random.randint(0, 70), random.randint(1, 12)),
        12: lambda: '{}.{:02d}.{:02d}'.format(random.randint(1800, 2100), random.randint(1, 12), random.randint(1, 31)),
        13: lambda: '{}/{:02d}/{:02d}'.format(random.randint(1800, 2100), random.randint(1, 12), random.randint(1, 31)),
        14: lambda: '{}/{:02d}'.format(random.randint(1800, 2100), random.randint(1, 12)),
        15: lambda: '{}.{:02d}'.format(random.randint(1800, 2100), random.randint(1, 12)),
    }

    def __init__(self):
        pass

    def get_date(self):
        return self.date_format[random.randint(0, 15)]()
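A self-contained usage sketch of the idea behind `DateProvider`: choose one of several Japanese/Western date formats at random per call. It avoids the `jpntextgen` import, so the era prefixes and the two formats shown are illustrative stand-ins, with months constrained to 1-12 and days to 1-31:

```python
# Sketch mirroring DateProvider: each format is a callable so every call
# yields freshly drawn random components. Formats/eras are stand-ins.
import random

FORMATS = {
    0: lambda: '{}/{:02d}/{:02d}'.format(random.randint(1800, 2100),
                                         random.randint(1, 12),
                                         random.randint(1, 31)),
    1: lambda: '{}年{:02d}月{:02d}日'.format(random.randint(1800, 2100),
                                            random.randint(1, 12),
                                            random.randint(1, 31)),
}


def get_date():
    return FORMATS[random.randint(0, len(FORMATS) - 1)]()


print(get_date())
```

Note the dates are purely synthetic: a day of 31 can be drawn for a 30-day month, which is usually acceptable for fake-text generation.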
# ---- utils/builder/shared/basic_test_template.py (repo: jeremybennett/force-riscv, license: Apache-2.0) ----
#
# Copyright (C) [2020] Futurewei Technologies, Inc.
#
# FORCE-RISCV is licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR
# FIT FOR A PARTICULAR PURPOSE.
# See the License for the specific language governing permissions and
# limitations under the License.
#
basic_template_str = """from riscv.EnvRISCV import EnvRISCV
from riscv.GenThreadRISCV import GenThreadRISCV
from riscv.ModifierUtils import PageMemoryAttributeModifier
from base.Sequence import Sequence
class MainSequence(Sequence):
def generate(self, **kargs):
for instr in [%s]:
self.genInstruction(instr, {"NoSkip":1})
def gen_thread_initialization(gen_thread):
gen_thread.applyChoiceModifier(PageMemoryAttributeModifier)
## Points to the generator thread initialization function defined in this file, optional
GenThreadInitialization = gen_thread_initialization
## Points to the MainSequence defined in this file
MainSequenceClass = MainSequence
## Using GenThreadRISCV by default, can be overridden with extended classes
GenThreadClass = GenThreadRISCV
## Using EnvRISCV by default, can be overridden with extended classes
EnvClass = EnvRISCV
"""
basic_non_standard_template_str = """from riscv.EnvRISCV import EnvRISCV
from riscv.GenThreadRISCV import GenThreadRISCV
from riscv.ModifierUtils import PageMemoryAttributeModifier
from base.Sequence import Sequence
class MainSequence(Sequence):
def generate(self, **kargs):
for instr in [%s]:
if (self.isRegisterReserved("X17", "Write") or self.isRegisterReserved("X16", "Read")):
self.genInstruction(instr, {"NoSkip":0})
else:
self.genInstruction(instr, {"NoSkip":1})
def gen_thread_initialization(gen_thread):
gen_thread.applyChoiceModifier(PageMemoryAttributeModifier)
## Points to the generator thread initialization function defined in this file, optional
GenThreadInitialization = gen_thread_initialization
## Points to the MainSequence defined in this file
MainSequenceClass = MainSequence
## Using GenThreadRISCV by default, can be overridden with extended classes
GenThreadClass = GenThreadRISCV
## Using EnvRISCV by default, can be overridden with extended classes
EnvClass = EnvRISCV
"""
| 35.694444 | 99 | 0.772374 | 310 | 2,570 | 6.345161 | 0.383871 | 0.036604 | 0.046772 | 0.03457 | 0.705643 | 0.705643 | 0.705643 | 0.705643 | 0.705643 | 0.705643 | 0 | 0.006912 | 0.155642 | 2,570 | 71 | 100 | 36.197183 | 0.899539 | 0.225681 | 0 | 0.878049 | 0 | 0.02439 | 0.95694 | 0.267984 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.195122 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fde5162ca16843c5abd88a750ab85142dc8aca19 | 105 | py | Python | 5 kyu/Not very secure/Not very secure.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | 5 kyu/Not very secure/Not very secure.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | 5 kyu/Not very secure/Not very secure.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | import re
def alphanumeric(password):
    return bool(re.match('^[a-zA-Z0-9]+$', password))
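A quick check of the kata's contract, with a self-contained copy of the function: only non-empty strings consisting solely of ASCII letters and digits pass.

```python
# Self-contained copy of the kata solution plus a few illustrative calls.
import re


def alphanumeric(password):
    return bool(re.match('^[a-zA-Z0-9]+$', password))


print(alphanumeric("Mazinkaiser"))   # True
print(alphanumeric("hello world_")) # False: space and underscore rejected
print(alphanumeric(""))              # False: at least one character required
```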
fdfb4ce4e3c3d33ec652d33f1c240fe8e53ba6cb | 22,657 | py | Python | sdk/communication/azure-communication-callingserver/tests/test_callingserver_client_async.py | zihzhan-msft/azure-sdk-for-python | f4b3484dbf75ec9db1f0ade2ca568c9bd538d62e | [
"MIT"
] | null | null | null | sdk/communication/azure-communication-callingserver/tests/test_callingserver_client_async.py | zihzhan-msft/azure-sdk-for-python | f4b3484dbf75ec9db1f0ade2ca568c9bd538d62e | [
"MIT"
] | null | null | null | sdk/communication/azure-communication-callingserver/tests/test_callingserver_client_async.py | zihzhan-msft/azure-sdk-for-python | f4b3484dbf75ec9db1f0ade2ca568c9bd538d62e | [
"MIT"
] | null | null | null | # -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import pytest
import utils._test_mock_utils_async as _mock_utils_async
import utils._test_constants as _test_constants
from typing import List
from parameterized import parameterized
from azure.communication.callingserver import (
CommunicationIdentifier,
CallLocator,
CallMediaType,
CallingEventSubscriptionType,
CallRejectReason
)
from utils._unit_test_utils import CallingServerUnitTestUtils
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_create_connection())
@pytest.mark.asyncio
async def test_create_connection_succeed(
test_name, # type: str
source_user, # type: CommunicationIdentifier
target_users, # type: List[CommunicationIdentifier]
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=201,
payload=_test_constants.CreateOrJoinCallPayload,
use_managed_identity = use_managed_identity
)
call_connection = await calling_server_client.create_call_connection(
source_user,
target_users,
callback_uri,
requested_media_types,
requested_call_events
)
assert call_connection.call_connection_id == _test_constants.CALL_ID
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_create_connection())
@pytest.mark.asyncio
async def test_create_connection_failed(
test_name, # type: str
source_user, # type: CommunicationIdentifier
target_users, # type: List[CommunicationIdentifier]
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.create_call_connection(
source_user,
target_users,
callback_uri,
requested_media_types,
requested_call_events
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_join_call())
@pytest.mark.asyncio
async def test_join_call_succeed(
test_name, # type: str
call_locator, # type: CallLocator
source_user, # type: CommunicationIdentifier
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=_test_constants.CreateOrJoinCallPayload,
use_managed_identity = use_managed_identity
)
call_connection = await calling_server_client.join_call(
call_locator,
source_user,
callback_uri,
requested_media_types,
requested_call_events
)
assert call_connection.call_connection_id == _test_constants.CALL_ID
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_join_call())
@pytest.mark.asyncio
async def test_join_call_failed(
test_name, # type: str
call_locator, # type: CallLocator
source_user, # type: CommunicationIdentifier
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.join_call(
call_locator,
source_user,
callback_uri,
requested_media_types,
requested_call_events
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_answer_call())
@pytest.mark.asyncio
async def test_answer_call_succeed(
test_name, # type: str
incoming_call_context, # type: str
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=_test_constants.AnswerCallResponsePayload,
use_managed_identity = use_managed_identity
)
result = await calling_server_client.answer_call(
incoming_call_context,
callback_uri=callback_uri,
requested_media_types=requested_media_types,
requested_call_events=requested_call_events
)
CallingServerUnitTestUtils.verify_answer_call_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_answer_call())
@pytest.mark.asyncio
async def test_answer_call_failed(
test_name, # type: str
incoming_call_context, # type: str
callback_uri, # type: str
requested_media_types, # type: List[CallMediaType]
requested_call_events, # type: List[CallingEventSubscriptionType]
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.answer_call(
incoming_call_context=incoming_call_context,
callback_uri=callback_uri,
requested_media_types=requested_media_types,
requested_call_events=requested_call_events
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_reject_call())
@pytest.mark.asyncio
async def test_reject_call_succeed(
test_name, # type: str
incoming_call_context, # type: str
call_reject_reason, # type: CallRejectReason
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=None,
use_managed_identity = use_managed_identity
)
await calling_server_client.reject_call(
incoming_call_context=incoming_call_context,
call_reject_reason=call_reject_reason
)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_reject_call())
@pytest.mark.asyncio
async def test_reject_call_failed(
test_name, # type: str
incoming_call_context, # type: str
call_reject_reason, # type: CallRejectReason
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.reject_call(
incoming_call_context=incoming_call_context,
call_reject_reason=call_reject_reason
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_redirect_call())
@pytest.mark.asyncio
async def test_redirect_call_succeed(
test_name, # type: str
incoming_call_context, # type: str
target, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=None,
use_managed_identity = use_managed_identity
)
await calling_server_client.redirect_call(
incoming_call_context=incoming_call_context,
target=target
)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_redirect_call())
@pytest.mark.asyncio
async def test_redirect_call_failed(
test_name, # type: str
incoming_call_context, # type: str
target, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.redirect_call(
incoming_call_context=incoming_call_context,
target=target
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_play_audio())
@pytest.mark.asyncio
async def test_play_audio_succeed(
test_name, # type: str
call_locator, # type: CallLocator
audio_url, # type: str
is_looped, # type: bool
audio_file_id, # type: str
callback_uri, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=_test_constants.PlayAudioResponsePayload,
use_managed_identity=use_managed_identity
)
result = await calling_server_client.play_audio(
call_locator,
audio_url,
is_looped,
audio_file_id = audio_file_id,
callback_uri = callback_uri,
operation_context = operation_context
)
CallingServerUnitTestUtils.verify_play_audio_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_play_audio())
@pytest.mark.asyncio
async def test_play_audio_failed(
test_name, # type: str
call_locator, # type: CallLocator
audio_url, # type: str
is_looped, # type: bool
audio_file_id, # type: str
callback_uri, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.play_audio(
call_locator,
audio_url,
is_looped,
audio_file_id = audio_file_id,
callback_uri = callback_uri,
operation_context = operation_context
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_play_audio_to_participant())
@pytest.mark.asyncio
async def test_play_audio_to_participant_succeed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
audio_url, # type: str
is_looped, # type: bool
audio_file_id, # type: str
callback_uri, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=_test_constants.PlayAudioResponsePayload,
use_managed_identity=use_managed_identity
)
result = await calling_server_client.play_audio_to_participant(
call_locator,
participant,
audio_url,
is_looped,
audio_file_id = audio_file_id,
callback_uri = callback_uri,
operation_context = operation_context
)
CallingServerUnitTestUtils.verify_play_audio_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_play_audio_to_participant())
@pytest.mark.asyncio
async def test_play_audio_to_participant_failed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
audio_url, # type: str
is_looped, # type: bool
audio_file_id, # type: str
callback_uri, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.play_audio_to_participant(
call_locator,
participant,
audio_url,
is_looped,
audio_file_id = audio_file_id,
callback_uri = callback_uri,
operation_context = operation_context
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_add_participant())
@pytest.mark.asyncio
async def test_add_participant_succeed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
callback_uri, # type: str
alternate_caller_id, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=_test_constants.AddParticipantResultPayload,
use_managed_identity=use_managed_identity
)
result = await calling_server_client.add_participant(
call_locator,
participant,
callback_uri,
alternate_caller_id=alternate_caller_id,
operation_context=operation_context
)
CallingServerUnitTestUtils.verify_add_participant_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_add_participant())
@pytest.mark.asyncio
async def test_add_participant_failed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
callback_uri, # type: str
alternate_caller_id, # type: str
operation_context, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.add_participant(
call_locator,
participant,
callback_uri,
alternate_caller_id=alternate_caller_id,
operation_context=operation_context
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_remove_participant_with_call_locator())
@pytest.mark.asyncio
async def test_remove_participant_succeed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=202,
payload=None,
use_managed_identity=use_managed_identity
)
await calling_server_client.remove_participant(
call_locator,
participant
)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_remove_participant_with_call_locator())
@pytest.mark.asyncio
async def test_remove_participant_failed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.remove_participant(
call_locator,
participant
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_list_participants_with_call_locator())
@pytest.mark.asyncio
async def test_list_participants_succeed(
test_name, # type: str
call_locator, # type: CallLocator
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=200,
payload=_test_constants.GetParticipantsResponsePayload,
use_managed_identity=use_managed_identity
)
result = await calling_server_client.list_participants(
call_locator
)
CallingServerUnitTestUtils.verify_list_participants_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_list_participants_with_call_locator())
@pytest.mark.asyncio
async def test_list_participants_failed(
test_name, # type: str
call_locator, # type: CallLocator
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.list_participants(
call_locator
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_get_participant_with_call_locator())
@pytest.mark.asyncio
async def test_get_participant_succeed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=200,
payload=_test_constants.GetParticipantResponsePayload,
use_managed_identity=use_managed_identity
)
result = await calling_server_client.get_participant(
call_locator,
participant=participant
)
CallingServerUnitTestUtils.verify_get_participant_result(result)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_get_participant_with_call_locator())
@pytest.mark.asyncio
async def test_get_participant_failed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.get_participant(
call_locator,
participant=participant
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_cancel_media_operation())
@pytest.mark.asyncio
async def test_cancel_media_operation_succeed(
test_name, # type: str
call_locator, # type: CallLocator
media_operation_id, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=200,
payload=None,
use_managed_identity=use_managed_identity
)
await calling_server_client.cancel_media_operation(
call_locator,
media_operation_id
)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_cancel_media_operation())
@pytest.mark.asyncio
async def test_cancel_media_operation_failed(
test_name, # type: str
call_locator, # type: CallLocator
media_operation_id, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.cancel_media_operation(
call_locator,
media_operation_id
)
    except Exception:
        raised = True
    assert raised
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_cancel_participant_media_operation_with_callLocator())
@pytest.mark.asyncio
async def test_cancel_participant_media_operation(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
media_operation_id, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=200,
payload=None,
use_managed_identity=use_managed_identity
)
await calling_server_client.cancel_participant_media_operation(
call_locator,
participant,
media_operation_id
)
@parameterized.expand(CallingServerUnitTestUtils.data_source_test_cancel_participant_media_operation_with_callLocator())
@pytest.mark.asyncio
async def test_cancel_participant_media_operation_failed(
test_name, # type: str
call_locator, # type: CallLocator
participant, # type: CommunicationIdentifier
media_operation_id, # type: str
use_managed_identity = False # type: bool
):
calling_server_client = _mock_utils_async.create_mock_calling_server_client(
status_code=404,
payload=_test_constants.ErrorPayload,
use_managed_identity = use_managed_identity
)
raised = False
try:
await calling_server_client.cancel_participant_media_operation(
call_locator,
participant,
media_operation_id
)
    except Exception:
        raised = True
    assert raised
| 32.459885 | 120 | 0.719116 | 2,455 | 22,657 | 6.184929 | 0.049287 | 0.05137 | 0.092466 | 0.083904 | 0.953701 | 0.952845 | 0.950738 | 0.950738 | 0.945601 | 0.930453 | 0 | 0.004381 | 0.21415 | 22,657 | 697 | 121 | 32.506456 | 0.848413 | 0.114446 | 0 | 0.809211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024671 | 1 | 0 | false | 0 | 0.011513 | 0 | 0.011513 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
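Every `*_failed` test in the module above uses the same try/except flag to assert that the client raises against a mocked 4xx response. A minimal, self-contained sketch of that pattern (`MockClient` and its status handling are hypothetical stand-ins for the mocked `CallingServerClient`, not the real SDK):

```python
class MockClient:
    """Hypothetical stand-in for the mocked calling-server client."""

    def __init__(self, status_code):
        self.status_code = status_code

    def create_call_connection(self):
        # Mirror the tests: 2xx returns an id, 4xx raises.
        if self.status_code >= 400:
            raise RuntimeError("HTTP {}".format(self.status_code))
        return "sanitized-call-id"


def raises(call):
    """The raised-flag pattern used by the *_failed tests, factored out."""
    raised = False
    try:
        call()
    except Exception:
        raised = True
    return raised


assert MockClient(201).create_call_connection() == "sanitized-call-id"
assert raises(MockClient(404).create_call_connection) is True
```

In a real pytest suite the same intent is usually expressed with the `pytest.raises` context manager, which removes the flag bookkeeping entirely.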
8bfb4573ac4f556160bc85ccf5baf960bf3d9278 | 161 | py | Python | mmdetect/configs/cascade_rcnn_r50_fpn_1x_coco.py | aydindemircioglu/knee.lat | 555725222f860d4ad8fea7452685803d9e323d43 | [
"MIT"
] | null | null | null | mmdetect/configs/cascade_rcnn_r50_fpn_1x_coco.py | aydindemircioglu/knee.lat | 555725222f860d4ad8fea7452685803d9e323d43 | [
"MIT"
] | null | null | null | mmdetect/configs/cascade_rcnn_r50_fpn_1x_coco.py | aydindemircioglu/knee.lat | 555725222f860d4ad8fea7452685803d9e323d43 | [
"MIT"
] | null | null | null | _base_ = [
'./cascade_rcnn_r50_fpn.py',
'/mmdetection/configs/_base_/datasets/coco_detection.py',
'/mmdetection/configs/_base_/default_runtime.py'
]
| 26.833333 | 61 | 0.726708 | 19 | 161 | 5.578947 | 0.684211 | 0.245283 | 0.377358 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.118012 | 161 | 5 | 62 | 32.2 | 0.732394 | 0 | 0 | 0 | 0 | 0 | 0.776398 | 0.776398 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
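The config above only lists its `_base_` files; at load time the framework merges them into a single config, with keys defined later (or in the derived file) overriding base keys. A simplified sketch of that merge semantics, assuming plain nested dicts (the real `mmcv` `Config.fromfile` additionally resolves file paths and special keys such as `_delete_`):

```python
def merge_configs(base, override):
    """Recursively merge `override` into a copy of `base` (dicts win by key)."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_configs(out[key], value)
        else:
            out[key] = value
    return out


base = {"model": {"type": "CascadeRCNN", "backbone": {"depth": 50}}}
override = {"model": {"backbone": {"depth": 101}}}
merged = merge_configs(base, override)
assert merged == {"model": {"type": "CascadeRCNN", "backbone": {"depth": 101}}}
```

This is why the config file itself can stay nearly empty: everything it needs is inherited from the three base files and only deltas would be written locally.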
e37ef428803efe0f80a87e7678a1c535d9d7d89d | 2,151 | py | Python | smart/wavelength_calibration/cal_param.py | Lingfeng-Wei/smart | 2316e50bfb6f050d5dcdd0ee1e5eab6831e8a669 | [
"MIT"
] | 10 | 2020-01-21T09:09:54.000Z | 2022-02-12T18:24:02.000Z | smart/wavelength_calibration/cal_param.py | Lingfeng-Wei/smart | 2316e50bfb6f050d5dcdd0ee1e5eab6831e8a669 | [
"MIT"
] | 9 | 2020-02-07T19:03:11.000Z | 2022-02-07T01:21:56.000Z | smart/wavelength_calibration/cal_param.py | Lingfeng-Wei/smart | 2316e50bfb6f050d5dcdd0ee1e5eab6831e8a669 | [
"MIT"
] | 2 | 2021-07-22T21:54:39.000Z | 2021-10-11T05:16:53.000Z | ## Define the telluric wavelength calibration parameters for each order
import numpy as np
cal_param_nirspec = {
'30':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'31':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'32':{'xcorr_range':40, 'outlier_rej':3., 'pixel_range_start':10, 'pixel_range_end':-60 },
'33':{'xcorr_range':25, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-50 },
'34':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-50 },
'35':{'xcorr_range':5, 'outlier_rej':2., 'pixel_range_start':10, 'pixel_range_end':-10},
'36':{'xcorr_range':10, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-50 },
'37':{'xcorr_range':10, 'outlier_rej':2.5, 'pixel_range_start':50, 'pixel_range_end':-20 },
'38':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':50, 'pixel_range_end':-20 },
'39':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':50, 'pixel_range_end':-20 },
'55':{'xcorr_range':5, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-90 },
'56':{'xcorr_range':5, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-30 },
'57':{'xcorr_range':20, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'58':{'xcorr_range':15, 'outlier_rej':2., 'pixel_range_start':0, 'pixel_range_end':-30 },
'59':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'60':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':5, 'pixel_range_end':-5 },
'61':{'xcorr_range':10, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'62':{'xcorr_range':10, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'63':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'64':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'65':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':0, 'pixel_range_end':-1 },
'66':{'xcorr_range':15, 'outlier_rej':3., 'pixel_range_start':10, 'pixel_range_end':-1 },
} | 79.666667 | 93 | 0.674105 | 347 | 2,151 | 3.792507 | 0.164265 | 0.334347 | 0.25076 | 0.231003 | 0.851824 | 0.817629 | 0.800912 | 0.778116 | 0.728723 | 0.690729 | 0 | 0.086181 | 0.088331 | 2,151 | 27 | 94 | 79.666667 | 0.584906 | 0.032078 | 0 | 0 | 0 | 0 | 0.592308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.04 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
8b4d6700b13d3e3a327cbb4bcbb40f50b6b525f6 | 2,588 | py | Python | test/pyaz/acr/token/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | test/pyaz/acr/token/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | 9 | 2021-09-24T16:37:24.000Z | 2021-12-24T00:39:19.000Z | test/pyaz/acr/token/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | import json, subprocess
from ... pyaz_utils import get_cli_name, get_params
def create(registry, name, scope_map=None, repository=None, gateway=None, status=None, resource_group=None, no_passwords=None, expiration=None, expiration_in_days=None):
params = get_params(locals())
command = "az acr token create " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def delete(registry, name, yes=None, resource_group=None):
params = get_params(locals())
command = "az acr token delete " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def update(registry, name, scope_map=None, status=None, resource_group=None):
params = get_params(locals())
command = "az acr token update " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def show(registry, name, resource_group=None):
params = get_params(locals())
command = "az acr token show " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def list(registry, resource_group=None):
params = get_params(locals())
command = "az acr token list " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
| 34.972973 | 169 | 0.664606 | 321 | 2,588 | 5.299065 | 0.165109 | 0.082305 | 0.058789 | 0.05585 | 0.874192 | 0.848325 | 0.824221 | 0.824221 | 0.824221 | 0.79953 | 0 | 0.004941 | 0.217929 | 2,588 | 73 | 170 | 35.452055 | 0.835474 | 0 | 0 | 0.820896 | 0 | 0 | 0.056414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074627 | false | 0.014925 | 0.029851 | 0 | 0.179104 | 0.223881 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
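Each of the five wrappers above repeats the same run/decode/parse-or-raise sequence around `subprocess.run`. A shared helper (a hypothetical refactor, not part of the generated module) would capture that sequence once; it is exercised here with plain `echo` so it runs without the Azure CLI installed:

```python
import json
import subprocess


def run_az(command):
    """Run a shell command and return its stdout parsed as JSON, else raise."""
    output = subprocess.run(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout = output.stdout.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    raise Exception(output.stderr.decode("utf-8"))


# POSIX-shell demo stand-in for an `az acr token ...` invocation:
assert run_az("echo '{\"ok\": true}'") == {"ok": True}
```

Each generated function then reduces to building its `params` string and calling `run_az("az acr token <verb> " + params)`, which also guarantees the error path behaves identically everywhere.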
8b8b94ca714d1629960c4b4fd6767d39ce6ccf8c | 14,452 | py | Python | tests/services/test_risk_profile_service.py | thiagosalvatore/origin-takehome | 5a348099d03dd518f495a9f3a9217120e8cc195d | [
"Apache-2.0"
] | null | null | null | tests/services/test_risk_profile_service.py | thiagosalvatore/origin-takehome | 5a348099d03dd518f495a9f3a9217120e8cc195d | [
"Apache-2.0"
] | null | null | null | tests/services/test_risk_profile_service.py | thiagosalvatore/origin-takehome | 5a348099d03dd518f495a9f3a9217120e8cc195d | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from origin_takehome.services.risk_profile import RiskProfileService
def test_calculate_base_score_should_return_3():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 1500,
"marital_status": "married",
"risk_questions": [1, 1, 1],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_base_score()
assert score == {
"auto": 3,
"disability": 3,
"home": 3,
"life": 3
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_base_score_should_return_1():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 1500,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_base_score()
assert score == {
"auto": 1,
"disability": 1,
"home": 1,
"life": 1
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_age_score_should_deduct_one():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 1500,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_age_score()
assert score == {
"auto": -1,
"disability": -1,
"home": -1,
"life": -1
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_age_score_should_deduct_two():
profile = {
"age": 25,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 1500,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_age_score()
assert score == {
"auto": -2,
"disability": -2,
"home": -2,
"life": -2
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_age_score_should_be_ineligible_due_to_age():
profile = {
"age": 61,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 1500,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_age_score()
assert score == {
"auto": 0,
"disability": 0,
"home": 0,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "ineligible",
"home": "economic",
"life": "ineligible"
}
def test_calculate_income_score_high_should_deduct_one_from_everything():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 201000,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_income_score()
assert score == {
"auto": -1,
"disability": -1,
"home": -1,
"life": -1
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_income_score_low_should_do_nothing():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 20,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_income_score()
assert score == {
"auto": 0,
"disability": 0,
"home": 0,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_income_score_no_income_should_be_ineligible():
profile = {
"age": 35,
"dependents": 1,
"house": {"ownership_status": "wrong"},
"income": 0,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_income_score()
assert score == {
"auto": 0,
"disability": 0,
"home": 0,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "ineligible",
"home": "economic",
"life": "economic"
}
def test_calculate_house_score_without_house_should_be_ineligible():
profile = {
"age": 35,
"dependents": 1,
"income": 201000,
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_house_score()
assert score == {
"auto": 0,
"disability": 0,
"home": 0,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "ineligible",
"life": "economic"
}
def test_calculate_house_score_mortgaged_house_should_add_points():
profile = {
"age": 35,
"dependents": 1,
"income": 201000,
"house": {"ownership_status": "mortgaged"},
"marital_status": "married",
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_house_score()
assert score == {
"auto": 0,
"disability": 1,
"home": 1,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}
def test_calculate_house_score_owned_house_should_do_nothing():
profile = {
"age": 35,
"dependents": 1,
"income": 201000,
"marital_status": "married",
"house": {"ownership_status": "owned"},
"risk_questions": [0, 1, 0],
"vehicle": {"year": 2018}
}
rp_service = RiskProfileService(profile)
score = rp_service.calculate_house_score()
assert score == {
"auto": 0,
"disability": 0,
"home": 0,
"life": 0
}
assert rp_service.risk_profile == {
"auto": "economic",
"disability": "economic",
"home": "economic",
"life": "economic"
}


def test_calculate_dependent_score_zero_dependents_should_do_nothing():
    profile = {
        "age": 35,
        "dependents": 0,
        "income": 201000,
        "marital_status": "married",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_dependent_score()
    assert score == {
        "auto": 0,
        "disability": 0,
        "home": 0,
        "life": 0
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_dependent_score_two_dependents_should_add_one():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "married",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_dependent_score()
    assert score == {
        "auto": 0,
        "disability": 1,
        "home": 0,
        "life": 1
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_marital_status_score_married_should_change_score():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "married",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_relationship_score()
    assert score == {
        "auto": 0,
        "disability": -1,
        "home": 0,
        "life": 1
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_marital_status_score_single_should_not_change_score():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "single",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_relationship_score()
    assert score == {
        "auto": 0,
        "disability": 0,
        "home": 0,
        "life": 0
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_vehicle_status_score_without_vehicle_should_be_ineligible():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "single",
        "risk_questions": [0, 1, 0],
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_vehicle_score()
    assert score == {
        "auto": 0,
        "disability": 0,
        "home": 0,
        "life": 0
    }
    assert rp_service.risk_profile == {
        "auto": "ineligible",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_vehicle_status_score_old_vehicle_should_not_change_score():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "single",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 1990}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_vehicle_score()
    assert score == {
        "auto": 0,
        "disability": 0,
        "home": 0,
        "life": 0
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_calculate_vehicle_status_score_new_vehicle_should_add_one_auto():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "single",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": datetime.now().year - 1}
    }
    rp_service = RiskProfileService(profile)
    score = rp_service.calculate_vehicle_score()
    assert score == {
        "auto": 1,
        "disability": 0,
        "home": 0,
        "life": 0
    }
    assert rp_service.risk_profile == {
        "auto": "economic",
        "disability": "economic",
        "home": "economic",
        "life": "economic"
    }


def test_set_risk_from_score_should_work_without_ineligible():
    rp_service = RiskProfileService(None)
    rp_service.score = {
        "auto": 3,
        "disability": 2,
        "home": 1,
        "life": 0
    }
    profile = rp_service.set_risk_from_score()
    assert profile == {
        "auto": "responsible",
        "disability": "regular",
        "home": "regular",
        "life": "economic"
    }


def test_set_risk_from_score_should_work_with_ineligible():
    rp_service = RiskProfileService(None)
    rp_service.score = {
        "auto": 3,
        "disability": 2,
        "home": 1,
        "life": 0
    }
    rp_service.risk_profile["life"] = "ineligible"
    profile = rp_service.set_risk_from_score()
    assert profile == {
        "auto": "responsible",
        "disability": "regular",
        "home": "regular",
        "life": "ineligible"
    }


def test_calculate_risk_profile_should_work_without_income():
    profile = {
        "age": 35,
        "dependents": 2,
        "house": {"ownership_status": "owned"},
        "income": 0,
        "marital_status": "married",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    profile = rp_service.calculate_risk_profile()
    assert profile == {
        "auto": "regular",
        "disability": "ineligible",
        "home": "economic",
        "life": "regular"
    }


def test_calculate_risk_profile_should_work_without_house():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 1000,
        "marital_status": "married",
        "risk_questions": [0, 1, 0],
        "vehicle": {"year": 2018}
    }
    rp_service = RiskProfileService(profile)
    profile = rp_service.calculate_risk_profile()
    assert profile == {
        "auto": "regular",
        "disability": "economic",
        "home": "ineligible",
        "life": "regular"
    }


def test_calculate_risk_profile_should_work_without_vehicle():
    profile = {
        "age": 35,
        "dependents": 2,
        "income": 201000,
        "marital_status": "married",
        "house": {"ownership_status": "owned"},
        "risk_questions": [0, 1, 0]
    }
    rp_service = RiskProfileService(profile)
    profile = rp_service.calculate_risk_profile()
    assert profile == {
        "auto": "ineligible",
        "disability": "economic",
        "home": "economic",
        "life": "regular"
    }
# src/vigorish/data/json_storage.py (from a-luna/vigorish, MIT license)
"""Functions for reading and writing files."""
import json

from vigorish.enums import DataSet, LocalFileTask, S3FileTask, VigFile
from vigorish.util.dt_format_strings import HTTP_TIME
from vigorish.util.result import Result
from vigorish.util.string_helpers import (
    validate_bbref_game_id,
    validate_brooks_game_id,
    validate_pitch_app_id,
)
from vigorish.util.sys_helpers import get_last_mod_time_utc


class JsonStorage:
    """Perform CRUD operations on JSON files stored locally and/or in S3."""

    def __init__(self, config, file_helper):
        self.config = config
        self.file_helper = file_helper

    def save_json(self, data_set, parsed_data):
        local_filepath = None
        s3_object_key = None
        result_local = Result.Ok()
        result_s3 = Result.Ok()
        if self.json_stored_local_folder(VigFile.PARSED_JSON, data_set):
            result_local = self.save_json_local(data_set, parsed_data)
            if result_local.success:
                local_filepath = result_local.value
        if self.json_stored_s3(VigFile.PARSED_JSON, data_set):  # pragma: no cover
            result_s3 = self.save_json_s3(data_set, parsed_data)
            if result_s3.success:
                s3_object_key = result_s3.value
        result = Result.Combine([result_local, result_s3])
        if result.failure:
            return result
        return Result.Ok({"local_filepath": local_filepath, "s3_object_key": s3_object_key})

    def json_stored_local_folder(self, file_type, data_set):
        return self.file_helper.check_file_stored_local(file_type, data_set)

    def save_json_local(self, data_set, parsed_data):
        save_json_local_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.save_json_brooks_games_for_date_local_file,
            DataSet.BROOKS_PITCH_LOGS: self.save_json_brooks_pitch_logs_for_game_local_file,
            DataSet.BROOKS_PITCHFX: self.save_json_brooks_pitchfx_log_local_file,
            DataSet.BBREF_GAMES_FOR_DATE: self.save_json_bbref_games_for_date_local_file,
            DataSet.BBREF_BOXSCORES: self.save_json_bbref_boxscore_local_file,
        }
        return save_json_local_dict[data_set](parsed_data)

    def save_json_brooks_games_for_date_local_file(self, games_for_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=games_for_date.game_date,
            scraped_data=games_for_date,
        )

    def save_json_brooks_pitch_logs_for_game_local_file(self, pitch_logs_for_game):
        result = validate_brooks_game_id(pitch_logs_for_game.bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            scraped_data=pitch_logs_for_game,
            bb_game_id=pitch_logs_for_game.bb_game_id,
        )

    def save_json_brooks_pitchfx_log_local_file(self, pitchfx_log):
        result = validate_pitch_app_id(pitchfx_log.pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            scraped_data=pitchfx_log,
            pitch_app_id=pitchfx_log.pitch_app_id,
        )

    def save_json_bbref_games_for_date_local_file(self, games_for_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=games_for_date.game_date,
            scraped_data=games_for_date,
        )

    def save_json_bbref_boxscore_local_file(self, boxscore):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=boxscore.game_date,
            scraped_data=boxscore,
            bbref_game_id=boxscore.bbref_game_id,
        )

    def json_stored_s3(self, file_type, data_set):  # pragma: no cover
        return self.file_helper.check_file_stored_s3(file_type, data_set)

    def save_json_s3(self, data_set, parsed_data):  # pragma: no cover
        save_json_s3_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.save_json_brooks_games_for_date_s3,
            DataSet.BROOKS_PITCH_LOGS: self.save_json_brooks_pitch_logs_for_game_s3,
            DataSet.BROOKS_PITCHFX: self.save_json_brooks_pitchfx_log_s3,
            DataSet.BBREF_GAMES_FOR_DATE: self.save_json_bbref_games_for_date_s3,
            DataSet.BBREF_BOXSCORES: self.save_json_bbref_boxscore_s3,
        }
        return save_json_s3_dict[data_set](parsed_data)

    def save_json_brooks_games_for_date_s3(self, games_for_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=games_for_date.game_date,
            scraped_data=games_for_date,
        )

    def save_json_brooks_pitch_logs_for_game_s3(self, pitch_logs_for_game):  # pragma: no cover
        result = validate_brooks_game_id(pitch_logs_for_game.bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            scraped_data=pitch_logs_for_game,
            bb_game_id=pitch_logs_for_game.bb_game_id,
        )

    def save_json_brooks_pitchfx_log_s3(self, pitchfx_log):  # pragma: no cover
        result = validate_pitch_app_id(pitchfx_log.pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            scraped_data=pitchfx_log,
            pitch_app_id=pitchfx_log.pitch_app_id,
        )

    def save_json_bbref_games_for_date_s3(self, games_for_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=games_for_date.game_date,
            scraped_data=games_for_date,
        )

    def save_json_bbref_boxscore_s3(self, boxscore):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=boxscore.game_date,
            scraped_data=boxscore,
            bbref_game_id=boxscore.bbref_game_id,
        )

    def save_patch_list(self, data_set, patch_list):
        local_filepath = None
        s3_object_key = None
        result_local = Result.Ok()
        result_s3 = Result.Ok()
        if self.json_stored_local_folder(VigFile.PATCH_LIST, data_set):
            result_local = self.save_patch_list_local(data_set, patch_list)
            if result_local.success:
                local_filepath = result_local.value
        if self.json_stored_s3(VigFile.PATCH_LIST, data_set):  # pragma: no cover
            result_s3 = self.save_patch_list_s3(data_set, patch_list)
            if result_s3.success:
                s3_object_key = result_s3.value
        result = Result.Combine([result_local, result_s3])
        if result.failure:
            return result
        return Result.Ok({"local_filepath": local_filepath, "s3_object_key": s3_object_key})

    def save_patch_list_local(self, data_set, patch_list):
        save_patch_list_local_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.save_brooks_games_for_date_patch_list_local_file,
            DataSet.BROOKS_PITCHFX: self.save_brooks_pitchfx_patch_list_local_file,
            DataSet.BBREF_GAMES_FOR_DATE: self.save_bbref_games_for_date_patch_list_local_file,
            DataSet.BBREF_BOXSCORES: self.save_bbref_boxscore_patch_list_local_file,
        }
        save_patch_list_for_data_set = save_patch_list_local_dict.get(data_set)
        return save_patch_list_for_data_set(patch_list) if save_patch_list_for_data_set else None

    def save_brooks_games_for_date_patch_list_local_file(self, patch_list):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
        )

    def save_brooks_pitchfx_patch_list_local_file(self, patch_list):
        result = validate_bbref_game_id(patch_list.url_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            scraped_data=patch_list,
            bbref_game_id=patch_list.url_id,
        )

    def save_bbref_games_for_date_patch_list_local_file(self, patch_list):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
        )

    def save_bbref_boxscore_patch_list_local_file(self, patch_list):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
            pitch_app_id=patch_list.url_id,
        )

    def save_patch_list_s3(self, data_set, patch_list):  # pragma: no cover
        save_patch_list_s3_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.save_brooks_games_for_date_patch_list_s3,
            DataSet.BROOKS_PITCHFX: self.save_brooks_pitchfx_patch_list_s3,
            DataSet.BBREF_GAMES_FOR_DATE: self.save_bbref_games_for_date_patch_list_s3,
            DataSet.BBREF_BOXSCORES: self.save_bbref_boxscore_patch_list_s3,
        }
        save_patch_list_for_data_set = save_patch_list_s3_dict.get(data_set)
        return save_patch_list_for_data_set(patch_list) if save_patch_list_for_data_set else None

    def save_brooks_games_for_date_patch_list_s3(self, patch_list):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
        )

    def save_brooks_pitchfx_patch_list_s3(self, patch_list):  # pragma: no cover
        result = validate_bbref_game_id(patch_list.url_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            scraped_data=patch_list,
            bbref_game_id=patch_list.url_id,
        )

    def save_bbref_games_for_date_patch_list_s3(self, patch_list):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
        )

    def save_bbref_boxscore_patch_list_s3(self, patch_list):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=patch_list.game_date,
            scraped_data=patch_list,
            pitch_app_id=patch_list.url_id,
        )

    def save_combined_game_data(self, combined_data):
        local_filepath = None
        s3_object_key = None
        result_local = Result.Ok()
        result_s3 = Result.Ok()
        if self.json_stored_local_folder(VigFile.COMBINED_GAME_DATA, DataSet.ALL):
            result_local = self.save_combined_game_data_local_file(combined_data)
            if result_local.success:
                local_filepath = result_local.value
        if self.json_stored_s3(VigFile.COMBINED_GAME_DATA, DataSet.ALL):  # pragma: no cover
            result_s3 = self.save_combined_game_data_s3(combined_data)
            if result_s3.success:
                s3_object_key = result_s3.value
        result = Result.Combine([result_local, result_s3])
        if result.failure:
            return result
        return Result.Ok({"local_filepath": local_filepath, "s3_object_key": s3_object_key})

    def save_combined_game_data_local_file(self, combined_data):
        result = validate_bbref_game_id(combined_data["bbref_game_id"])
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.WRITE_FILE,
            data_set=DataSet.ALL,
            file_type=VigFile.COMBINED_GAME_DATA,
            game_date=game_dict["game_date"],
            scraped_data=combined_data,
            bbref_game_id=combined_data["bbref_game_id"],
        )

    def save_combined_game_data_s3(self, combined_data):  # pragma: no cover
        result = validate_bbref_game_id(combined_data["bbref_game_id"])
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.UPLOAD,
            data_set=DataSet.ALL,
            file_type=VigFile.COMBINED_GAME_DATA,
            game_date=game_dict["game_date"],
            scraped_data=combined_data,
            bbref_game_id=combined_data["bbref_game_id"],
        )

    def get_combined_game_data(self, bbref_game_id):
        if self.json_stored_local_folder(VigFile.COMBINED_GAME_DATA, DataSet.ALL):
            result = self.decode_combined_game_data_local_file(bbref_game_id)
            if result.success:
                return result.value
        if self.json_stored_s3(VigFile.COMBINED_GAME_DATA, DataSet.ALL):  # pragma: no cover
            result = self.decode_combined_game_data_s3(bbref_game_id)
            if result.success:
                return result.value
        return None

    def decode_combined_game_data_local_file(self, bbref_game_id):
        result = self.get_combined_game_data_local_file(bbref_game_id)
        if result.failure:
            return result
        filepath = result.value
        try:
            json_dict = json.loads(filepath.read_text())
            json_dict["last_modified"] = get_last_mod_time_utc(filepath).strftime(HTTP_TIME)
            return Result.Ok(json_dict)
        except Exception as e:
            error = f"Error: {repr(e)}"
            return Result.Fail(error)

    def get_combined_game_data_local_file(self, bbref_game_id):
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.ALL,
            file_type=VigFile.COMBINED_GAME_DATA,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )

    def decode_combined_game_data_s3(self, bbref_game_id):  # pragma: no cover
        result = self.get_combined_game_data_s3(bbref_game_id)
        if result.failure:
            return result
        filepath = result.value
        try:
            json_dict = json.loads(filepath.read_text())
            json_dict["last_modified"] = get_last_mod_time_utc(filepath).strftime(HTTP_TIME)
            return Result.Ok(json_dict)
        except Exception as e:
            error = f"Error: {repr(e)}"
            return Result.Fail(error)

    def get_combined_game_data_s3(self, bbref_game_id):  # pragma: no cover
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.ALL,
            file_type=VigFile.COMBINED_GAME_DATA,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )

    def get_scraped_data(self, data_set, url_id):
        if self.json_stored_local_folder(VigFile.PARSED_JSON, data_set):
            result = self.get_scraped_data_local(data_set, url_id)
            if result.success:
                return result.value
        if self.json_stored_s3(VigFile.PARSED_JSON, data_set):  # pragma: no cover
            result = self.get_scraped_data_s3(data_set, url_id)
            if result.success:
                return result.value
        return None

    def get_scraped_data_local(self, data_set, url_id):
        get_scraped_data_local_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.decode_json_brooks_games_for_date_local_file,
            DataSet.BROOKS_PITCH_LOGS: self.decode_json_brooks_pitch_logs_for_game_local_file,
            DataSet.BROOKS_PITCHFX: self.decode_json_brooks_pitchfx_log_local_file,
            DataSet.BBREF_GAMES_FOR_DATE: self.decode_json_bbref_games_for_date_local_file,
            DataSet.BBREF_BOXSCORES: self.decode_json_bbref_boxscore_local_file,
        }
        return get_scraped_data_local_dict[data_set](url_id)

    def decode_json_brooks_games_for_date_local_file(self, game_date):
        result = self.get_json_brooks_games_for_date_local_file(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
            delete_file=False,
        )

    def get_json_brooks_games_for_date_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def decode_json_brooks_pitch_logs_for_game_local_file(self, bb_game_id):
        result = self.get_json_brooks_pitch_logs_for_game_local_file(bb_game_id)
        if result.failure:
            return result
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
            delete_file=False,
        )

    def get_json_brooks_pitch_logs_for_game_local_file(self, bb_game_id):
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
        )

    def decode_json_brooks_pitchfx_log_local_file(self, pitch_app_id):
        result = self.get_json_brooks_pitchfx_local_file(pitch_app_id)
        if result.failure:
            return result
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
            delete_file=False,
        )

    def get_json_brooks_pitchfx_local_file(self, pitch_app_id):
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
        )

    def decode_json_bbref_games_for_date_local_file(self, game_date):
        result = self.get_json_bbref_games_for_date_local_file(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
            delete_file=False,
        )

    def get_json_bbref_games_for_date_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def decode_json_bbref_boxscore_local_file(self, bbref_game_id):
        result = self.get_json_bbref_boxscore_local_file(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=False,
        )

    def get_json_bbref_boxscore_local_file(self, bbref_game_id):
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )

    def get_scraped_data_s3(self, data_set, url_id):  # pragma: no cover
        get_scraped_data_s3_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.decode_json_brooks_games_for_date_s3,
            DataSet.BROOKS_PITCH_LOGS: self.decode_json_brooks_pitch_logs_for_game_s3,
            DataSet.BROOKS_PITCHFX: self.decode_json_brooks_pitchfx_log_s3,
            DataSet.BBREF_GAMES_FOR_DATE: self.decode_json_bbref_games_for_date_s3,
            DataSet.BBREF_BOXSCORES: self.decode_json_bbref_boxscore_s3,
        }
        return get_scraped_data_s3_dict[data_set](url_id)

    def decode_json_brooks_games_for_date_s3(self, game_date):  # pragma: no cover
        result = self.get_json_brooks_games_for_date_s3(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
            delete_file=True,
        )

    def get_json_brooks_games_for_date_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def decode_json_brooks_pitchfx_log_s3(self, pitch_app_id):  # pragma: no cover
        result = self.get_json_brooks_pitchfx_s3(pitch_app_id)
        if result.failure:
            return result
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
            delete_file=True,
        )

    def decode_json_brooks_pitch_logs_for_game_s3(self, bb_game_id):  # pragma: no cover
        result = self.get_json_brooks_pitch_logs_for_game_s3(bb_game_id)
        if result.failure:
            return result
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
            delete_file=True,
        )

    def get_json_brooks_pitch_logs_for_game_s3(self, bb_game_id):  # pragma: no cover
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
        )

    def get_json_brooks_pitchfx_s3(self, pitch_app_id):  # pragma: no cover
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
        )

    def decode_json_bbref_games_for_date_s3(self, game_date):  # pragma: no cover
        result = self.get_json_bbref_games_for_date_s3(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
            delete_file=True,
        )

    def get_json_bbref_games_for_date_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def decode_json_bbref_boxscore_s3(self, bbref_game_id):  # pragma: no cover
        result = self.get_json_bbref_boxscore_s3(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=True,
        )

    def get_json_bbref_boxscore_s3(self, bbref_game_id):  # pragma: no cover
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )

    def get_patch_list(self, data_set, url_id):
        if self.json_stored_local_folder(VigFile.PATCH_LIST, data_set):
            result = self.get_patch_list_local(data_set, url_id)
            if result.success:
                return result.value
        if self.json_stored_s3(VigFile.PATCH_LIST, data_set):  # pragma: no cover
            result = self.get_patch_list_s3(data_set, url_id)
            if result.success:
                return result.value
        return None

    def get_patch_list_local(self, data_set, url_id):
        get_patch_list_local_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.decode_brooks_games_for_date_patch_list_local_file,
            DataSet.BROOKS_PITCHFX: self.decode_brooks_pitchfx_patch_list_local_file,
            DataSet.BBREF_GAMES_FOR_DATE: self.decode_bbref_games_for_date_patch_list_local_file,
            DataSet.BBREF_BOXSCORES: self.decode_bbref_boxscore_patch_list_local_file,
        }
        get_patch_list_for_data_set = get_patch_list_local_dict.get(data_set)
        return get_patch_list_for_data_set(url_id) if get_patch_list_for_data_set else Result.Ok({})

    def decode_brooks_games_for_date_patch_list_local_file(self, game_date):
        result = self.get_brooks_games_for_date_patch_list_local_file(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
            delete_file=False,
        )

    def get_brooks_games_for_date_patch_list_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
        )

    def decode_brooks_pitchfx_patch_list_local_file(self, bbref_game_id):
        result = self.get_brooks_pitchfx_patch_list_local_file(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=False,
        )

    def get_brooks_pitchfx_patch_list_local_file(self, bbref_game_id):
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )

    def decode_bbref_games_for_date_patch_list_local_file(self, game_date):
        result = self.get_bbref_games_for_date_patch_list_local_file(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
            delete_file=False,
        )

    def get_bbref_games_for_date_patch_list_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
        )

    def decode_bbref_boxscore_patch_list_local_file(self, bbref_game_id):
        result = self.get_bbref_boxscore_patch_list_local_file(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=False,
        )

    def get_bbref_boxscore_patch_list_local_file(self, bbref_game_id):
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.READ_FILE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )
    def get_patch_list_s3(self, data_set, url_id):  # pragma: no cover
        get_patch_list_s3_dict = {
            DataSet.BROOKS_GAMES_FOR_DATE: self.decode_brooks_games_for_date_patch_list_s3,
            DataSet.BROOKS_PITCHFX: self.decode_brooks_pitchfx_patch_list_s3,
            DataSet.BBREF_GAMES_FOR_DATE: self.decode_bbref_games_for_date_patch_list_s3,
            DataSet.BBREF_BOXSCORES: self.decode_bbref_boxscore_patch_list_s3,
        }
        get_patch_list_for_data_set = get_patch_list_s3_dict.get(data_set)
        return get_patch_list_for_data_set(url_id) if get_patch_list_for_data_set else None

    def decode_brooks_games_for_date_patch_list_s3(self, game_date):  # pragma: no cover
        result = self.get_brooks_games_for_date_patch_list_s3(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
            delete_file=True,
        )

    def get_brooks_games_for_date_patch_list_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
        )
    def decode_brooks_pitchfx_patch_list_s3(self, bbref_game_id):  # pragma: no cover
        result = self.get_brooks_pitchfx_patch_list_s3(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=True,
        )

    def get_brooks_pitchfx_patch_list_s3(self, bbref_game_id):  # pragma: no cover
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )
    def decode_bbref_games_for_date_patch_list_s3(self, game_date):  # pragma: no cover
        result = self.get_bbref_games_for_date_patch_list_s3(game_date)
        if result.failure:
            return result
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
            delete_file=True,
        )

    def get_bbref_games_for_date_patch_list_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PATCH_LIST,
            game_date=game_date,
        )

    def decode_bbref_boxscore_patch_list_s3(self, bbref_game_id):  # pragma: no cover
        result = self.get_bbref_boxscore_patch_list_s3(bbref_game_id)
        if result.failure:
            return result
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DECODE_JSON,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
            delete_file=True,
        )

    def get_bbref_boxscore_patch_list_s3(self, bbref_game_id):  # pragma: no cover
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DOWNLOAD,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PATCH_LIST,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )
    def delete_brooks_games_for_date_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DELETE_FILE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def delete_brooks_pitch_logs_for_game_local_file(self, bb_game_id):
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DELETE_FILE,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
        )

    def delete_brooks_pitchfx_log_local_file(self, pitch_app_id):
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DELETE_FILE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
        )

    def delete_bbref_games_for_date_local_file(self, game_date):
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DELETE_FILE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def delete_bbref_boxscore_local_file(self, bbref_game_id):
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_local_file_task(
            task=LocalFileTask.DELETE_FILE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )
    def delete_brooks_games_for_date_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DELETE,
            data_set=DataSet.BROOKS_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def delete_brooks_pitch_logs_for_game_s3(self, bb_game_id):  # pragma: no cover
        result = validate_brooks_game_id(bb_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DELETE,
            data_set=DataSet.BROOKS_PITCH_LOGS,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bb_game_id=bb_game_id,
        )

    def delete_brooks_pitchfx_log_s3(self, pitch_app_id):  # pragma: no cover
        result = validate_pitch_app_id(pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DELETE,
            data_set=DataSet.BROOKS_PITCHFX,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            pitch_app_id=pitch_app_id,
        )

    def delete_bbref_games_for_date_s3(self, game_date):  # pragma: no cover
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DELETE,
            data_set=DataSet.BBREF_GAMES_FOR_DATE,
            file_type=VigFile.PARSED_JSON,
            game_date=game_date,
        )

    def delete_bbref_boxscore_s3(self, bbref_game_id):  # pragma: no cover
        result = validate_bbref_game_id(bbref_game_id)
        if result.failure:
            return result
        game_dict = result.value
        return self.file_helper.perform_s3_task(
            task=S3FileTask.DELETE,
            data_set=DataSet.BBREF_BOXSCORES,
            file_type=VigFile.PARSED_JSON,
            game_date=game_dict["game_date"],
            bbref_game_id=bbref_game_id,
        )
    def rename_brooks_pitchfx_log(self, old_pitch_app_id, new_pitch_app_id, year):  # pragma: no cover
        result = validate_pitch_app_id(old_pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        old_key = self.file_helper.get_object_key(
            file_type=VigFile.PARSED_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            game_date=game_dict["game_date"],
            pitch_app_id=old_pitch_app_id,
        )
        result = validate_pitch_app_id(new_pitch_app_id)
        if result.failure:
            return result
        game_dict = result.value
        new_key = self.file_helper.get_object_key(
            file_type=VigFile.PARSED_JSON,
            data_set=DataSet.BROOKS_PITCHFX,
            game_date=game_dict["game_date"],
            pitch_app_id=new_pitch_app_id,
        )
        return self.file_helper.rename_s3_object(old_key, new_key)
| 41.20508 | 102 | 0.67124 | 5,776 | 43,801 | 4.614093 | 0.020256 | 0.05043 | 0.045402 | 0.053281 | 0.97077 | 0.953923 | 0.939477 | 0.92687 | 0.9146 | 0.894 | 0 | 0.005209 | 0.263624 | 43,801 | 1,062 | 103 | 41.243879 | 0.821076 | 0.020707 | 0 | 0.719298 | 0 | 0 | 0.012443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090815 | false | 0 | 0.006192 | 0.026832 | 0.25903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
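The methods above all follow the same dispatch-plus-Result pattern: a dict maps each `DataSet` to a handler, every handler returns a `Result` whose `.failure` short-circuits the chain, and an unknown data set falls back to `Result.Ok({})`. A minimal self-contained sketch of that pattern (this `Result` class is a simplified stand-in, not vigorish's real implementation, and the handler names are illustrative):

```python
# Simplified stand-in for vigorish's Result type: success/failure flag plus
# an optional value or error payload.
class Result:
    def __init__(self, success, value=None, error=None):
        self.success = success
        self.failure = not success
        self.value = value
        self.error = error

    @classmethod
    def Ok(cls, value=None):
        return cls(True, value=value)

    @classmethod
    def Fail(cls, error):
        return cls(False, error=error)


def decode_patch_list(data_set, url_id, handlers):
    # Look up the handler for this data set; unknown sets succeed with {}.
    handler = handlers.get(data_set)
    return handler(url_id) if handler else Result.Ok({})


handlers = {"BROOKS_PITCHFX": lambda url_id: Result.Ok({"patch": url_id})}
print(decode_patch_list("BROOKS_PITCHFX", "PIT201806090", handlers).value)
print(decode_patch_list("UNKNOWN", "x", handlers).value)
```

The fallback choice matters: returning `Result.Ok({})` instead of an error means a caller iterating over all data sets does not have to special-case the ones that have no patch lists.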
8b90fe744d4ba9389235837569f03bb377402e47 | 2,113 | py | Python | mitre_attack/cli/command_groups/relationships.py | check-spelling/mitre-attack | f3be1ccff235593c4277f3b9ec2696757924894b | [
"MIT"
] | 1 | 2022-01-13T06:32:10.000Z | 2022-01-13T06:32:10.000Z | mitre_attack/cli/command_groups/relationships.py | check-spelling/mitre-attack | f3be1ccff235593c4277f3b9ec2696757924894b | [
"MIT"
] | null | null | null | mitre_attack/cli/command_groups/relationships.py | check-spelling/mitre-attack | f3be1ccff235593c4277f3b9ec2696757924894b | [
"MIT"
] | 1 | 2022-01-14T00:00:27.000Z | 2022-01-14T00:00:27.000Z | from mitre_attack.api.client import MitreAttack
import mitre_attack.cli.click as click


@click.group()
@click.pass_context
def relationships(_):
    """
    Query or count relationships.
    """
    pass


@relationships.command()
@click.option('--relationship-ids')
@click.option('--relationship-types')
@click.option('--source-refs')
@click.option('--source-ref-types')
@click.option('--target-refs')
@click.option('--target-ref-types')
@click.pass_context
def get_relationships(
        _: click.Context,
        relationship_ids: str,
        relationship_types: str,
        source_refs: str,
        source_ref_types: str,
        target_refs: str,
        target_ref_types: str):
    api = MitreAttack()
    for relationship in api.enterprise.iter_relationships(
        relationship_ids=click.str_to_strs(relationship_ids),
        relationship_types=click.str_to_strs(relationship_types),
        source_refs=click.str_to_strs(source_refs),
        source_ref_types=click.str_to_strs(source_ref_types),
        target_refs=click.str_to_strs(target_refs),
        target_ref_types=click.str_to_strs(target_ref_types),
    ):
        click.echo(relationship.to_json())


@relationships.command()
@click.option('--relationship-ids')
@click.option('--relationship-types')
@click.option('--source-refs')
@click.option('--source-ref-types')
@click.option('--target-refs')
@click.option('--target-ref-types')
@click.pass_context
def count_relationships(
        _: click.Context,
        relationship_ids: str,
        relationship_types: str,
        source_refs: str,
        source_ref_types: str,
        target_refs: str,
        target_ref_types: str):
    api = MitreAttack()
    n = api.enterprise.count_relationships(
        relationship_ids=click.str_to_strs(relationship_ids),
        relationship_types=click.str_to_strs(relationship_types),
        source_refs=click.str_to_strs(source_refs),
        source_ref_types=click.str_to_strs(source_ref_types),
        target_refs=click.str_to_strs(target_refs),
        target_ref_types=click.str_to_strs(target_ref_types),
    )
    click.echo(n)
| 29.760563 | 65 | 0.702792 | 266 | 2,113 | 5.263158 | 0.142857 | 0.091429 | 0.085714 | 0.12 | 0.84 | 0.84 | 0.84 | 0.84 | 0.84 | 0.84 | 0 | 0 | 0.17274 | 2,113 | 70 | 66 | 30.185714 | 0.800915 | 0.013725 | 0 | 0.775862 | 0 | 0 | 0.096712 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0.068966 | 0.034483 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
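Both commands funnel every option through `click.str_to_strs(...)` before passing it to the API, which lets users write `--relationship-types uses,mitigates` on the command line while the API receives a list. The sketch below shows one plausible implementation of such a helper; the real `mitre_attack.cli.click` wrapper may behave differently (the handling of whitespace and empty segments here is an assumption):

```python
# Hypothetical sketch of a str_to_strs helper: convert an optional
# comma-separated CLI option into a list of strings, or None when the
# option was not provided (so the API can treat it as "no filter").
from typing import List, Optional


def str_to_strs(value: Optional[str]) -> Optional[List[str]]:
    if value is None:
        return None
    # Split on commas, trimming whitespace and dropping empty segments.
    return [part.strip() for part in value.split(",") if part.strip()]


print(str_to_strs("uses, mitigates"))  # ['uses', 'mitigates']
print(str_to_strs(None))               # None
```

Returning `None` rather than `[]` for a missing option is the important design choice: it lets the API distinguish "no filter requested" from "filter matching nothing".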
8bb37a45f8ee91bc8a6d89438c0021585522e82c | 1,850 | py | Python | docs/papers/wpmvp14/experiments/rosen_fs.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,647 | 2015-01-13T01:45:38.000Z | 2022-03-28T01:23:41.000Z | docs/papers/wpmvp14/experiments/rosen_fs.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,116 | 2015-01-01T09:52:05.000Z | 2022-03-18T21:06:40.000Z | docs/papers/wpmvp14/experiments/rosen_fs.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 180 | 2015-02-12T02:47:28.000Z | 2022-03-14T10:28:18.000Z | import numpy as np
#pythran export rosen(float[])
def rosen(x):
    t0 = 100 * (x[1:] - x[:-1] ** 2) ** 2
    t1 = (1 - x[:-1]) ** 2
    return np.sum(t0 + t1)


#pythran export rosen2(float[])
def rosen2(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)
# a = numpy.arange(100000)
# CPython
# In [5]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 1.64 ms per loop
#
# In [6]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 1.38 ms per loop
#
# In [7]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 1.37 ms per loop
#
# In [8]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 1.62 ms per loop
#
# Pythran
# In [7]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 494 us per loop
#
# In [8]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 495 us per loop
#
# In [9]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 201 us per loop
#
# In [10]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 201 us per loop
#
# Pythran SIMD
# In [4]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 414 us per loop
#
# In [6]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 415 us per loop
#
# In [7]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 244 us per loop
# In [5]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 245 us per loop
#
#
#
#
# Pythran with lazy
# In [4]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 287 us per loop
#
# In [5]: %timeit -n100 rosen.rosen(a)
# 100 loops, best of 3: 288 us per loop
#
# In [6]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 201 us per loop
#
# In [7]: %timeit -n100 rosen.rosen2(a)
# 100 loops, best of 3: 201 us per loop
#
| 25.694444 | 71 | 0.550811 | 321 | 1,850 | 3.174455 | 0.17757 | 0.157017 | 0.235525 | 0.204122 | 0.756624 | 0.75368 | 0.75368 | 0.736016 | 0.701668 | 0.701668 | 0 | 0.164894 | 0.288649 | 1,850 | 71 | 72 | 26.056338 | 0.609422 | 0.815676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 11 |
4782c161717a5653ea2032866f0d63082400b27f | 135 | py | Python | mamp_cli/api/mampapi.py | Honda-a/mamp-cli | 562b5a3dbac1a2d05b600be7e860c0ec1fa21c1f | [
"Apache-2.0"
] | null | null | null | mamp_cli/api/mampapi.py | Honda-a/mamp-cli | 562b5a3dbac1a2d05b600be7e860c0ec1fa21c1f | [
"Apache-2.0"
] | 3 | 2019-12-02T02:03:32.000Z | 2021-06-02T00:58:01.000Z | mamp_cli/api/mampapi.py | Honda-a/mamp-cli | 562b5a3dbac1a2d05b600be7e860c0ec1fa21c1f | [
"Apache-2.0"
] | null | null | null | """
API wrapper for MAMP tools
"""
from mamp_cli.base import ApiBase
from mamp_cli.tools import mamp
class MampApi(ApiBase):
    pass
| 13.5 | 33 | 0.740741 | 21 | 135 | 4.666667 | 0.619048 | 0.163265 | 0.22449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 135 | 9 | 34 | 15 | 0.882883 | 0.185185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
47a95f09fa9707c2dd51520b73118f661597cd68 | 14,136 | py | Python | restApp/migrations/0001_initial.py | ibamacsr/painelmma_api | a11a6cd63e312f09f445b139fcff8c11ab383764 | [
"MIT"
] | null | null | null | restApp/migrations/0001_initial.py | ibamacsr/painelmma_api | a11a6cd63e312f09f445b139fcff8c11ab383764 | [
"MIT"
] | null | null | null | restApp/migrations/0001_initial.py | ibamacsr/painelmma_api | a11a6cd63e312f09f445b139fcff8c11ab383764 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.contrib.gis.db.models.fields
class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='DailyAlertaAwifs',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('area_km2', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=38)),
                ('dominio', models.CharField(blank=True, null=True, max_length=200)),
                ('tipo', models.CharField(blank=True, null=True, max_length=15)),
                ('uf', models.SmallIntegerField(blank=True, null=True)),
                ('estado', models.CharField(blank=True, null=True, max_length=2)),
                ('data_imagem', models.DateTimeField(blank=True, null=True)),
                ('shape', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('centroide', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('mesid', models.TextField(blank=True, null=True)),
                ('estagio', models.CharField(blank=True, null=True, max_length=50)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
            ],
            options={
                'db_table': 'ibama"."vw_alerta_awifs',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='DailyAlertaDeter',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('area_km2', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=38)),
                ('dominio', models.CharField(blank=True, null=True, max_length=200)),
                ('tipo', models.CharField(blank=True, null=True, max_length=15)),
                ('uf', models.SmallIntegerField(blank=True, null=True)),
                ('estado', models.CharField(blank=True, null=True, max_length=2)),
                ('data_imagem', models.DateTimeField(blank=True, null=True)),
                ('shape', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('centroide', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('mesid', models.TextField(blank=True, null=True)),
                ('estagio', models.CharField(blank=True, null=True, max_length=50)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
            ],
            options={
                'db_table': 'ibama"."vw_alerta_deter',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='DailyAlertaDeterQualif',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('mes_ano', models.CharField(blank=True, null=True, max_length=6)),
                ('cicatriz_fogo', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('corte_raso_deter', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('degradacao_deter', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('alta', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('leve', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('moderada', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('falso_positivo', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('nao_avaliado', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('deter_total', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('total_avaliado', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('porc_area_avaliada', models.SmallIntegerField(blank=True, null=True)),
                ('mesid', models.TextField(blank=True, null=True)),
            ],
            options={
                'db_table': 'ibama"."vw_deter_qualificado',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='DailyAlertaLandsat',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('area_km2', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=38)),
                ('dominio', models.CharField(blank=True, null=True, max_length=200)),
                ('tipo', models.CharField(blank=True, null=True, max_length=15)),
                ('uf', models.SmallIntegerField(blank=True, null=True)),
                ('estado', models.CharField(blank=True, null=True, max_length=2)),
                ('data_imagem', models.DateTimeField(blank=True, null=True)),
                ('shape', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('centroide', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('mesid', models.TextField(blank=True, null=True)),
                ('estagio', models.CharField(blank=True, null=True, max_length=50)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
            ],
            options={
                'db_table': 'ibama"."vw_alerta_indicar',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='PublicAlertaDeter',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('area_km2', models.DecimalField(blank=True, null=True, decimal_places=8, max_digits=38)),
                ('area_ha', models.DecimalField(blank=True, null=True, decimal_places=8, max_digits=38)),
                ('municipio', models.CharField(blank=True, null=True, max_length=200)),
                ('dominio', models.CharField(blank=True, null=True, max_length=200)),
                ('tipo', models.CharField(blank=True, null=True, max_length=15)),
                ('quinzena', models.CharField(blank=True, null=True, max_length=5)),
                ('id_des', models.CharField(blank=True, null=True, unique=True, max_length=16)),
                ('ai', models.IntegerField(blank=True, null=True)),
                ('tei', models.IntegerField(blank=True, null=True)),
                ('processo', models.CharField(blank=True, null=True, max_length=20)),
                ('url', models.CharField(blank=True, null=True, max_length=200)),
                ('vistoria', models.CharField(blank=True, null=True, max_length=100)),
                ('resp_vistoria', models.CharField(blank=True, null=True, max_length=150)),
                ('longitude', models.CharField(blank=True, null=True, max_length=17)),
                ('latitude', models.CharField(blank=True, null=True, max_length=17)),
                ('uf', models.SmallIntegerField(blank=True, null=True)),
                ('estado', models.CharField(blank=True, null=True, max_length=2)),
                ('obs', models.CharField(blank=True, null=True, max_length=250)),
                ('id_tablet', models.CharField(blank=True, null=True, max_length=10)),
                ('data_vist', models.CharField(blank=True, null=True, max_length=50)),
                ('globalid', models.CharField(blank=True, null=True, max_length=50)),
                ('dado_final', models.CharField(blank=True, null=True, max_length=1)),
                ('estagio', models.CharField(blank=True, null=True, max_length=50)),
                ('data_imagem', models.DateTimeField(blank=True, null=True)),
                ('shape', django.contrib.gis.db.models.fields.GeometryField(blank=True, null=True, srid=4326)),
                ('veg_sec', models.CharField(blank=True, null=True, max_length=100)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
                ('mesid', models.TextField(blank=True, null=True)),
            ],
            options={
                'db_table': 'ibama"."vw_publica_alerta_deter_por_periodo',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='PublicAlertaDeterQualif',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('periodo_prodes', models.CharField(blank=True, null=True, max_length=10)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('mes_ano', models.CharField(blank=True, null=True, max_length=6)),
                ('cicatriz_fogo', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('corte_raso_deter', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('degradacao_deter', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('alta', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('leve', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('moderada', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('falso_positivo', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('nao_avaliado', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('deter_total', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('total_avaliado', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=6)),
                ('porc_area_avaliada', models.SmallIntegerField(blank=True, null=True)),
                ('mesid', models.TextField(blank=True, null=True)),
            ],
            options={
                'db_table': 'ibama"."vw_publica_deter_qualificado',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='TaxaNuvens',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('mes', models.CharField(blank=True, null=True, max_length=10)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
                ('uf', models.CharField(blank=True, null=True, max_length=2)),
                ('area_km2', models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=10)),
                ('porc_area_km2', models.DecimalField(blank=True, null=True, decimal_places=0, max_digits=2)),
                ('dat_cadastro', models.DateTimeField(blank=True, null=True)),
            ],
            options={
                'db_table': 'ibama"."taxa_nuvem',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='TaxaNuvensAml',
            fields=[
                ('objectid', models.AutoField(primary_key=True, serialize=False)),
                ('dat_src', models.DateTimeField(blank=True, null=True)),
                ('f_area', models.DecimalField(blank=True, null=True, decimal_places=8, max_digits=38)),
                ('percent', models.DecimalField(blank=True, null=True, decimal_places=8, max_digits=38)),
                ('mes_maiusc', models.TextField(blank=True, null=True)),
                ('ano', models.SmallIntegerField(blank=True, null=True)),
            ],
            options={
                'db_table': 'ibama"."vw_taxa_nuvem_aml',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='TaxaProdes',
            fields=[
                ('ano_prodes', models.CharField(blank=True, max_length=9, primary_key=True, serialize=False)),
                ('ac', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('am', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('ap', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('ma', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('mt', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('pa', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('ro', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('rr', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
                ('to', models.DecimalField(blank=True, decimal_places=2, max_digits=7)),
            ],
            options={
                'db_table': 'public"."taxa_prodes',
                'managed': False,
            },
        ),
    ]
| 62.548673 | 115 | 0.586729 | 1,533 | 14,136 | 5.267449 | 0.101109 | 0.135975 | 0.18031 | 0.235789 | 0.920867 | 0.886192 | 0.859071 | 0.842724 | 0.817957 | 0.745511 | 0 | 0.020625 | 0.262592 | 14,136 | 225 | 116 | 62.826667 | 0.754029 | 0.001486 | 0 | 0.671233 | 0 | 0 | 0.104018 | 0.017572 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013699 | 0 | 0.027397 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
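Every model in this migration is unmanaged (`'managed': False`) and points at a database view, and the `db_table` values embed the odd-looking sequence `"."` on purpose: Django quotes a table name by wrapping the whole string in double quotes, so `'ibama"."vw_alerta_awifs'` renders as `"ibama"."vw_alerta_awifs"`, a schema-qualified identifier. The `quote_name` below mimics the PostgreSQL backend's behavior for illustration; it is a simplified assumption, not Django's actual code:

```python
# Why the db_table values embed '"."': quoting the whole string as one
# identifier yields a schema-qualified name, e.g. "ibama"."vw_alerta_awifs".
def quote_name(name: str) -> str:
    # Backends typically skip names that are already quoted.
    if name.startswith('"') and name.endswith('"'):
        return name
    return '"{}"'.format(name)


print(quote_name('ibama"."vw_alerta_awifs'))  # "ibama"."vw_alerta_awifs"
```

Combined with `managed = False`, this lets the ORM query schema-scoped views that Django never tries to create, alter, or drop itself.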
47e0f3b2f925e0a70aa1a62776cf6313d1a78234 | 157,366 | py | Python | dark2/dark.py | Alpha-Demon404/RE-14 | b5b46a9f0eee218f2a642b615c77135c33c6f4ad | [
"MIT"
] | 39 | 2020-02-26T09:44:36.000Z | 2022-03-23T00:18:25.000Z | dark2/dark.py | B4BY-DG/reverse-enginnering | b5b46a9f0eee218f2a642b615c77135c33c6f4ad | [
"MIT"
] | 15 | 2020-05-14T10:07:26.000Z | 2022-01-06T02:55:32.000Z | dark2/dark.py | B4BY-DG/reverse-enginnering | b5b46a9f0eee218f2a642b615c77135c33c6f4ad | [
"MIT"
] | 41 | 2020-03-16T22:36:38.000Z | 2022-03-17T14:47:19.000Z | #Compile By Ariya Saputra
#GitHub : https://github.com/Ariya-Coder
import marshal
exec(marshal.loads('c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsv\xd5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe1\xd4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsL\xd4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb7\xd3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs"\xd3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8d\xd2\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf6\xd1\x00\x00x\x9cT}iC\xd5J\xd7\xec\xf7\xfb+P\xd4\xc3\xa4dNGQ\x14\x9c\x15\'@P\x83\x92\xee$\xca\x8c\x88\n\n\xfc\xf6\xbb\xabV\x05\x9f\xf7\x83\xe7(\xec\x9d\xf4\xb0\xc6Z\xd5\xab\xc7\x17\x0f\xf6\x0e\xb7v\xbb\xb1\x85\xd3\xb1\x07G[\xa7\xcd\xd8rs\xf8\xf3\xf8\xa8\xf9\x7f\xe3O\xb6\x8e\xbf\xfd\xf4c\xb7\xc7\xbe\x1d\x1f\x1f\xfe\xb8=;\xfb\x95?\xb8\x15\x0e\xf6f\xf9\xd1\x9b\x8b\x07mw\xf4\xff\xb6\xf6\x0e\x0f\x8e\x8e\xc7\xfe\xe
cn\xf9\xff\xd7\x9dta\x02\x7f\xbb\xd5v\xa3\xcf\x1d\x1eu?~L\xfcwR\x9fTa\xf7|k\xb1>i\xf3\xe7\xf5\x89\xcf\xea\xd1\xe7\xea\x93\xbe\xc4\x9f\xe9\x95\xfa$D\xf5I\x14\xd7\'q\x9e\x8c~\xd5]\x1b\xbbZ\x9f\xb8\x04\xff\xad\xd2z\x7f\xf4\xf7f\xf4\xfb"~\xf6{\xf4\x91\x08\x1fk\xdf\xdd\x1f\xfdt\xf4\x8d\xb6\x1f=\xa3\xfb\x85\'\xd7\'M\xb3;g\x8fm\xbb\xd1\x17F\xbf\xaaF\xffo\xd2\xd1+\xa3\xd1W]\x81\x87\x8f>\xd7\x16\xf8\xec\xe8\xa7\xf9\x9fo\xa3\xcf\xe3\xc7\x18\xca\xe8\xb3]\xfc\xef\xbb\xe5\xe8O\xe7G?\xafF\x03\xc4\xd3\xdb\xd1\xf3F\x7f\xef\xf0\xbc\xd1\xdf\xdd\xe8w]cOk\xf5\xf7~4\xbav\xf4\xac\xae\xfa>\xfa\xc7\xe8K}o\xef\x8d\x0b\xbc\xe8\xd5\xe8\x9b\xbd\xbd\xa9i\x0e\x96F\x9f\xc524\x7f\xfe\xfc9I\xe2{\xa3_\x8e~\x11F\x8f\x8a\xf1\xad\x90\xcf\xdbdBi\x83hc\xfb\xbbOG\xbf\xa8\xday{P\xd9\xff\x1d\xfde\xf4%W\x8e~\xe2\xfa\xd7\x07\xa3\x7f\xb6\xf6\r\x1f\xafa\x08\xf3\x9a\xcc\xe8\xd3\x95\xb7\x0f\xf7\xce\x06]\x8d\xfe\xdf\xa7\xf6\x8d0\xfaL<\xfa\x7f\xdc\xd9gFk\xcf?\xfc<\xc6\xa1\xd1\xe1\xf3M2i_\xc0J\xe0AX5\xaeVo?\xc7~\xe0w6D|a\xf4\xf7\xd1\x1f\x1fla\x1c&\x11\x95\xe3z|\xf3\xe1\x9d\xedV\xa7\xc9b\x95\xf1]7\xfaw3\xfaD\xc0\x8a\xe3\xb9\x95=\'T\xaf\xcfl\xc1\xf0F\x8f!5\xa3\xe5\xad\x8a\x7fc\xf4\x8d\xcd\x19\xafo\xf5Z\xfc\xbe\x1f\xfd\xbc\xea\'\xed/\xd8\xad\x90i\xcc\xad>\x88\xc9\x8f~\x1eka d.\xc3\xa2\xed\xe1u;\xa3\x7fA\x1e:\x1b\x8d\xef\x9f\x8c>>\xda\x9b*\xc7\xf0\x1f\x8c~\x1a\xd9\xa8\xb8\x80\x90\x86\xb21\xc1\xa5\xc4c\x95\x13\x13\x84\xae\xc2\xe8\xee\x8e\x9e\x05\x19\x83\xfc\xa47\xfeg\x19K{\x11\x96\x17\xc3\xc07\x9a\xd1\xbfCjB\xebF_j\xa2\xe1)\xd2+-\x08\x04\x16o\xf3\xfd\xfb\xd1_\xb2\xd9\xd1SR\xfd\x05\x1f\xef\xfd\xfc\xf4\xe8\xa9\xde\x1f\xdb\xce\xe1+Xq(\x1a\xfe\xdf\xb89[~\x08w\xdf\xbf\xd0 \xfaG\x92h\xfc\xa4\x97\xa0R\x07\xfa\xd9\xd7&\xfd!\xe6\x17\xff\xb3]k\x8b}\xbc 
\xc3\x82c\xbd\xe7\xf5z\xcc\x04;\x15/c*\xb3&.\xfc\x0b\x054\xfe\xf4z\xb7\xb2\xe7A\xcb|\xd8\xf8\x9f\xefi\r\xfb\xf8\xb6-\x0bw\xa6\xd7\xe4b)3^GQ\xf2\xb66\x10\x9f.@\x06\xdd{\x9b2\x05\x16\x02\xda_\xc7\xd2\xcd/\x8d4\xb6u_\xb0\xeb3\xcf`\\\xbe\xfe\xb4\xe7\xf3Q\t\xf6\xfb\x14\x9f\xfeaR\x07E\x196\n3\xf5\x95m\x96\x8dd4\xcf6\xb1\xaf\x05\x08:$\xad\x84\xc8\xc1\xeeE\xf9\xb9\x1e\x14a\x0c\xfd\xb0\x05\x9dI\x0e\x86[\x95S\xfb\xf8\xd6\xda\xf6\xf4H\xbb\x9bp\xc3\x043\xfc\x8f6\xf6\xfd\xea\xe6\xa4\xfd3\xfa_\xe5\xeel\x17\xa17\xb0V\xbe\x1d\xed\x99s\xbfM$\xb0\x0c\xd0\x1e|*\xce\xcf\xfd\xfc\xb1\xcd1\xc2\x033S\x11\xbc\xa4\xd3\x93\xba\xf2\xe3\xe8/xs\xb7\x84\xe9\xcc\x8f\xff\xc2c\xa7F\xcfM\xbe\x8elgD!zf3\xf0\xfd\xb9\xc4\xac\x97\xee\x14\xf8\xe1:\x1e{\xb5\xae\xbb\x7f{\x87\x81v\xee\x17\xfe\xf1\x11\xab>\xfaM\x86\xbd\xdb1}\xf4\xfds\xfb)\xa5:@w\xa6\xdfa\x05.0\xaa\xdf\xa7f\xbdm\x9d6G\xbf\x8dm\tLD\x9fA\x97\xf7?A\xa4\xd6m\xf3\xdaV\xfe\x06\xba\x82wu7\xb1\x87\xb3\xb6-q\xf2\xde\xe6\r\xf5jK\x7f\xce\x15\xa9\xe5G\xb4\xe01\xcd0\xa6\x17O\x98-s\xb2O\xf4\'\xa9y\x07\xfc\x1b\xfa^a\xfd\xdbq[\x04\x8e\xac\xb4yx\xba\x91)\xbc\xf87\xb6\xd5\x1e\xee\xcb\xdf\xda\x85\xdcf\xd6G\x9fOe\x8dz3-0e}\xbf\x0c\x85\x1bY\x8b\x90|\x85m\xc7L\xae\xc0k\xf8\xb7?\xbf\x0c\xc2lf\x81\xfe\x00\xe3\x0e\xa6\x12\xad\xfb5\xb2\x96qs\xd0\xdb\x92\xf8\xc2\xd6\xafwO?\r\xf6\xea\xa1Y\xb0\xb6\xda1\xd5\t\xd5"\xf6\x1eOhw\xcc\xd4a\xdfB\xf1\xe2\xa5\xed\x0e\xa6\xd9\xc1s\xa7\'\x14\xe3\xfa\x9fO\xec\xd2\xdf\x83\xcd\xda\xc4\x1b\xdfA\x06\xb5\xcc\xd8\x81\xf6\xb1\t?\\\xf8\xa0\xd8\x1e\xda\xde\xfd\x8f\x9bI4\xffr\xb4\xe4-v\xaaD$\x01)-o`]\x97l\x03a\xfb\xcb~\x18\x9c)\x14\x86Uq\xf1\'\xcdH\xf5\xe5\xcb7\xe6\x15 \x01\x98g\xd5AK\xa9>X\xe0jO\x1e;\xc0\xd3\xf6\'X\xf35\xcd\xb30\x1d\xc0b\xd2\xf5`\xf9!Pe\x7f\xdb\xb6\x89\xdb\x13\xceL\x87\xf0\xfe\x88J:\xcfg\x1fC\xbao-\xc0\xd2%\x7f\xbe\x9a\xb4\xc0"z\x07\xdb\xd8\xfe}jV\xb6\x8aW\xea\xff\xf6\xb0\x9aX\xc2\xaf\xf2t\xfd\xbem\x0f\x94\x12\xe2\x01i\x82\xa3\xf0\xf0 
\xc9\xddI,\x1bc\xa6\x1d\xcc\x04\xe1K\xe3l\xd4n\xf4\xbc\x91e:\x9f\xb7%\xa7\x82\xe2\xad\xed\xeeh&\x98\x15\xb4"\xd9Cl\x86!BT\x19-\xc5&\x83T\x1cg*\xeb<\r\x7fc\x8e\xabm>\xca\xb6\xb6\xda(g\x13w0\x96\xe5\xd8\xe8\x13\xd9;{>]Zfs\xa2\xbd\x8f\xcaY{\xc5\xc8\xd2\xdeS\x14\x12\x1eZ\xc8\xd1&[\x9f\x10\xff\xc1\xfb7\x85\xe9#d\x01\x96\x95\x11F0\xb9msD\x84\xfe\x13\xe6\xf5\n2_n\x1f\xca\x9df6\xa0\xa8\xb8\x80\x82\xb7\xdb\x1f\xe00\xd7\xa1a-6\xb0\xf3G\xa6&e?\x03]\xca\x15@t\xb6j\xf8*\x9cI\xd4\xbd\xd0\x96\xc6;\xd8J\x1a\xad\xee\x0e,\x15\x1di\x7f\xd3\xe4\xb1\x91\xd05aN\x11A0\xbf\x00[\xd0\xc5\x1b\x7f\xe5\x1d\x82-\x97\xebF\xa6)\x8a\xb0\r\x859w\xcc..\'WL\xb4*-e\x03c\r\xa1\xf6r\xca\x18\x17>^%\x0b&\xbe\x11\x8c\x8e\xc7\x9b\x8b\xc3M\xf9\xce\xd8\xa4\xbb\tw-\xe0d\xc0\x18\xce\x9a\xd7\xeb\x0fm\xf9"\x06\x9exB\xfe\xd8\x1cA\x84}s\xdf\xfe`\x8f\x1e\xafK\xcecSD\xfc\x1f\x86\'\n\xe6\x1c\x10"Q s\x13}\x86M\xfd\xe4\xa2I\'\x942*\xfe@\x97\'\x9eF\x13\x17\x92\xb0\xfe\x9e\x05(\x90\xbf\x10\x1dS\x9d\xeb=<\xec\xf3=\xf3P]\xb6\x8a\xb1\x8c\xdb\xb6\xe0\xc5\x98\x16^\x14G\xcf\xcd\x91\xd1\xa1\xa6\xb7%f\xf1ghK\x91?\xb2!a\xa6Qv\xa2 /\x86t\xc1\xca\xc36\x96\xfd\x15\xb9|\xcc\x0b.\xba\x82\xe5u\xfc1\xf4-\xb2!\x98`\xd8\xa3\xf0b\'C\x0e\xf3\x0f\xa7\xda\xc1\x8e\xb4\x88\x93\xb1\x16P!\xc8H\xa3u\xa1\x94\x97\xefvw\x7f\xc2\xbcb\xa3w1\xcd\xd76\xa1\xd6\x82\x98c\xac\xc5\x1b,\x1aF\x1f\xf2E\xe8\t\x8ck\xbb=2\x9aG6B\x88vh\xae#rB\xa88\x92v\xa8~\xb97+\x99N?m\xedbE^\xdb\xe6`\xc8xE\x93\xee\x7fW\x1c\x86?\xf1\xb31X\xbe\xe8\xfdj\xdc\x1f\xe0\x13\x8d\x85\xb6.\xb1\x19a\xd6\x08\x0b\x9a\x161]\xfah\xe2\xa9YC\xdf\x9e~\x1d}\xdeE\xb6\xc5\x94 \xe8z\xb9\x0f/\xf3\xd5\x9e\xee\xba;Xn\x1b 
V\x84\x9b\xdb~\xb0`\x07\xcb\x81\x1d\x0c\x05d)@]\xe3n\xc7\xf6-N\xbf\x1c@\\\x97L\xe8a\x01\xb8+\xdd\xb92\x08n\xf4O\xf3\x12\x8eV\xf8\xb3\xa5\x85A\x8e\x06?\x8c\xba3\x9b\x05\x9c\x92\xef\xd7\xe47*E\xed\x9d}\xbaI\x97\xcc\x9c\xc5\xe9\xc4z}\x99\xd98?\x03A\xfai\xd2\xd1\x95w\x87\xec\xed\xc1Ww\xeb;\xbe\x06\x97\x95|6\x87\xd4\x95{&\xb7\xf0\xf3P\x94\xd8BVDe\x900\xa6\x07\x95\xc9\x03\xe3\x0b\xbc\xa5\x83\xaf\x8b\x94\x97\xc5\x88\xbc\xb9X\xd0\xd6*\xba\xa2D\xce\xff1\xbb\x88\x97\xd3])\xf7E:\xc1\x80)b\xf0Q\x05}\xac\x92M\tC@{w\x17\x86\xfb\x16>}\xcb\xbc\\;$\n\xe5C\x8d(Z\xb0o`\xe5\xe2\xc1\x96r\xad~\x9a\x86\xd09\x967\xde\xd8\x12\xc2\xa8P\xa0|}\xfc\x14O\xff\xbb_\xbeW\xd28zv$OL\rD\xc0\xe2n\xdb\x0f\x9a\xe6%FwE\xdb\xe3\x1e\x98H\xf4\xc5H\xabco\x92\x06\xb3\xe6\xd2\xced\xaa\xd2S\x9a\xfe\xca\x078\xd2;fM*%\\\xa18\x80\xab\xfb6fjX*Wn\xa5h\x10\x9b\xa8\xb0-\xe5Z\x14\xa7\x1a\x7fq\x05\xc6\x19\xae\xa7\xcd\x91\x05\xb876g\xaeb5w\x01\xfd9Pv\x04+\xd9\xdf\xc0va\x1d\xa2F\xca\xcc\x87\xf6{f*bH)>\x1d\x14\\\xc0C\xfb\xec\xcf>>\xf3\x05\x1a\xf8\x8b\x86|\xdd4+\x0e\xefe\x13\xa3d\\9_\xfc\x0eK\xbe3%\xa8\x01o\xcd\xa7\xe1\xb4w\xce`\xe2\xda{\xe6o\xfb\x91\x91\xbecv\xbeM\xe7\xf0\xdd\x9bP\xc3\xd4\xf6\xab\xc1\x80\xbcog\x1b&Y\xf7w\xf0\xae\xd6\xdc\x95\xa7\xceM\x9b\xabr\xfe\xbe\x19(\x18<\xf8m\xfc?\xee\xbe\xed[\x84\x1a+s\x86\xbbC\x18\x01\xd3\xdeWs6\xf5\xa6CZ\xdbU\x83\x1cC\x1aBq\xff\xedm\x8b\xc7h\xc8\x10\x0b#zH\xbc):\x02\x01\xe6\xa0J\x00\x91\xe9\xf8\xe2\xd0\xe6\x847\xc0\x9c\xc1 \x8e\xf2\xc1\xfd\x99\xe5u9$*\xd1\xce#\xfb\x8e\x83\x7fh\xc3\x04\xf2\xafn\xeeb\xd36\x17\x01G\x03\x0f\x8a8\x96\x98\n\xd6\x00\xff\x88\x0be\xa7m\xb9d\xbb\xc2 <\x82R\x06f\xd4\x08a\xdbs\xec\t\x90\xa2\nQg\xb6e\xb3\xe8[\x049\t<{\xb6\xffK\xa6>?\xda\xb3\xf9\xb9V\xfb\xcdH\x87\xe2\x88Mk\x16\xb4\xcc\xc5\x91\xd0\x89tCc\x8f?\xe8\x19\xccs\x12%]\x84\x05\xde+\xae\xa5u\xde\xb9gB4\xca \x8e\xccQ\x1a\x14d\x12]\x95\x85M)\x8e\xaf^S\xe0P\xfe\xf3\xce-\x9d\xfdOx\x00\x198OMy\xa3`\xa5\xb3\xcc\xc4G\xd3\n\xcf\x10\x00\x85\xabx\x93O\x94v4\x02\x8e\xa2\x9f\xa6\x82\xcc\xc1Z\x0c 
\xf2W\x85M4\xed\xc6\xa6\xe2\xa0\xc2\xc0\x08/c\xca\xff+\xc1\x86\x0c\xc2$;X\xdev0_p\x86\xdd\xc6\xbf\xc0\x91\xd1\x8daf\xf7\xe1]\x97\xcc\xb4\xc4\xee\xd6\xc4\xfe\xce\xbf\xd03\xb4\x07\xf6\x06\x82:\xd9\x1d3\x18>\xd9\x94\x8fJ\xf1\xa2\xf8\xf1\xe1\xac\x82\x15H1\xa6HS\x15\xe4\xef\x1b[\xc9 @\x11\xd3s^Ye\x15\xd6\xa1\xfbw\x143+e\xeb\x92g\x08\x07\r\xa7\x08\xe9\r\xd8\xa5`{E\xaf\xcb@F.\x16\x91+\xd7;\xb6(\x81!a\xaa\xcc\x9ezr\x03\xa6\x1a;\xd1T\x9b\x8a&\x8aAh\xa1\xa4\x99\x00\x14\x82\x10\xf7\x9e\xfd\x92\xa9\x0e\xa6"\x1cSo\x9fm\xc3\xb8\x05\x94P\x11\x06X\x95\\\x82W\xf2\xda\'k\xf8\x1a\x84\x0bR\x8c\xb9Wn\xdbb~\x84\xcf\x912\x94\x96i \xbe\n\x11\x86\xa7\xec\xc2[[]n0\xecsr\x15n\xce]}\xb0\xbb#\x9b\x90\xd9\xf0\x1d\x9dO\x94\xcc?\xb3\xa16\xfdK{\x03#1\'\x1bW\xce\x9c\xeccY\xde\\\x04\x93\x08\x82\x0f\x95\xa2\xdb\xfc\xec\xf3\x85@D:\xda\xf9\x8f{65\x00\x10q\x18S0\x84\xa7\xb9f\xd3\xbe\xd4*_\xa59\xc2\xd4\xf0\xc3.\xd9\xc6OK\xfb5\xac5qM\x04!H\xd2\x9a\xfc\xd3\xa2\xf44R\xd6\x80E\x8f\xff\xd4\xff\xdd\xb5\xe5\xa8\x04\xb3E\xee\xd6mH\xa7\x7fo\xfb\x11\x88\xbaN\x1d<m\x15\x8eD\xa6:\xf8|\x93\xdd\x80\x86\xbd7\xa8\r[\x1ew\x07\xda\xe4\xc8~\xd85\x0f\xed\xf9X\x14\xe2>\x15\x0c\xa8\x9fy!?\xd1\xc8]\xf6\xeb_\x14Je\x16\'\xfbp\xdb&Sa\xa7\x98s\xa4\xa6\x91\x86v<|\x87\xf0\xeb5\xdc\xc5[\xb3\x06\xa5p\xc0\x90\x0b\xec\x83!hC\xfa\xcc\xdc84#\xfcO\xbc\x06\xad\x0f\n&\xab\x08\xa0@\xf3r\x1d\x99\xce3\xb3I\xbeyj\x8fa\xc0U)p\x8c\x05=\x84\xfb\xf7`K\x91\x17\x11\xd8i&\xb0\xec\xe1\x83D\xa9\xa9\x8f\x8a\xbbx3\x94"\x8e\xe7\'>\xd8\xcb\xe0\xee|5\xe4\x7f\xf6w\x06c\x18\xbb?\x83\xc4\xc4\x8fn\x19N\xd3\xd1\x96\x9c\xda\x8b\x99c\xfa\xbb\xc9\x1er\xcfd\xd5D\xa0\x1d\x05\xc5\xb6F\xcdm\x812\xd2\x91\xa8<\x96\x12vOlM\xe3X\t}+p\xc93e\x01\xb0\xe4\xbf\x98H\xba\xcc^\x0c=b\xc2\x06\x15\xc7G\xa3by\x00\xe8:&\xe5\x80/\x9a\'&\xccq\x89\x942\x7fk\xdfs\xf8X\x85\xb4\xb3\xf7?\x1e#B\x0f\xe9\xd1\xb2\x99\xfb\xaa0\xed\t\xcd>J\x1b\xf9\xb6\xfd\x08\xea\xd6U\xef\xd6\xcf\xcct`{-\xa4s\x8b\xb6\xe90}qB\x00\xfd\xae\x8d\x12\xdb\x8c8\xb6ez\xff\xa2\x80P\x96\xd52F\xfb6}\x8cQ\xbc\x1dl\xc7\xa3\'\xd3\xf
2#\x9d=\x9f S?g\x0b\xe1\xf2\xf9%[\x1f\xfa\\\xc0\xba%\xf0Z\xa4\xa9\xa3\xac\x08\xce\xb3\x13\x10\x17\\\xf2Ev\x9ch\xf6\x13\xaa?\xa6\x1c\x8f/c)\xa1Q0\xce\xa3\x17\x1dk*\xb9\x10\x08\xe1\xd8#=\xde\xff\x0b?\xea\xac\xd6r\xc4\x0c\x1f\x0b\xd6_3k\x14\xca\'\xd1\x8dW/\x948a\xdeaM\xd6\xb2}b\x8a\xed"\xcb\x03\x1awsO\xd1|\xf8\xaa\x80\x0f\xab\xcc\xa0\x16\x9b\x96N\xcar\xd0\x19\xba\xafr\n\xed\x82M\x1d\xae\xc3\x0f&5E\x00Q<\x818\xb0\xb8D\x10\xfe\xab\xb9w\xd8\x97\x8a%\x97\x88\x111\x02\xd6r1\x1a*J\x95-QU\xdc\x9b\x8b\x9f(\xba!\xf0\x94\x0b\xa7J\xcc\xd9!\x05\x88\xdd\xf2\x0f\x81C\x19\xd4\xa7\x03\xb8U\x95[\x0b\xb6^N\x06\'\xa21\xef\x97\x1f\xd4\xff!\xb5\t\xe9\x1fgz\xda\xb7\x93\xdf\xb0\xeb\x8f\x97\x0e-\xceA\xf0\xc7\xac\x149\x08mS/\xac\x04\xbb\x0f{H\x0f\x8d=(\x81\xc5\xf4\xc57\xb3\xb7\x04c\n\xf3I\xf9\x1bs\\\\fg\xfe\xae\x15\xa2\xe2\xf21\x81A\xf0\xf6Q\xb0E\x0c\xee\x9dYs\xdf\\\xfc[\xc5\x10\xbf\xc4\xa8\xae\xc3\xce|\x16\x98\x18L\xcf\x91\x89\xc1\xcf4U\xfe\x04\xbf\x81\xfc\xc6\xb3\xe3\x96^C"c\x14\xe6 X\x11\x83\xc1\x0c\x00A\xfa\xed{\xb0(\xa0K\xa6N\x11N\xcfo\x9e\xdb\xe6a\x85\xaab\xd7\x96\xbfJ\xef\xde0s\xda\x0c\xc99\xcb+\x7fVm\\\xae\x81\x0f\t\xbb\x9ay\xf3\xce\x82}KzN\xccICF\xa2r\xff\x07\xd6c\x7f\t\x1b\xff\xee\xe7\xfe\x9d)\xd8\xd7\xf2\xc0V\xa0\xa1\xc3n\xcd\xa6aB\xae\xffs\x1f\xc3lfm\xa7\x19g\xe4\x96P\xfa\xe8)\xe6\x02\x9b\x11]5[\x86\xbd\xaa\xdae\xbc|\x0b\x9a\xf3N\x15+\xd4q"b\xe6\xc4\x1c{3$\x8c\xf0\xa9i\xf1\xb9\xadc\x14\xe2_C\x800fsA\n\x8c\xa8\x0eF\n1\x89\x0b\xfbS\x9f\xcd\xd8t\xf9A\xfd\xdfs\x00\xd0\xd3/M!\xe0b\xe9\xce\xc2\x13ax0\xc2n\xf5\x869\x1c(?\xb6\xb4\x89\x17\xe7`\x07\xdf* 
\x89\x85\x88\xa4\xa9-Y\x1b\xbd\xa6\xd1\xfd\x0f;1\xb5c\xfe\x80q\x0e\\k\xba\xf1\xd9\xf4\xc1\xabx\xea\x08\xd6L..)F\xc1\xd0\xb2\x9b7\xd3\x96\x0f\xf0\xe5\xef7\xd1+\xfcx\xc1"^\x1f\xfc7\xe8\xccw\x18\xe8\x05\x13\xdf&,\xda\x18Br\xd7\x0c\xbf\x8d\xf3\x12\x01\xff\xf9C\xd1\x146\xbb\x9d\xc5\xc2\xaf\xcd\xa5C\xb2\x86-}\xc38\xf5\xaa\xcd\x92S\xa29:\xdf\xc5\xdf\xde^G\x1c3a\x968\x12^\xe6:\x02z\xf5\xf1\x910\xa0\x06\x8b\xd9\xce\x9f\xdb\xde\xf7\x8a+\x99\xa6\xe0\x0f\xcc\xd3\x80\x881 \xcbo\x02O\xce\x83Y\x9a\xb6\xb8j\x86\x9c\x18p4F4f\xde\x02z\x06\xcb\x8cE\xcf\xe1\x85\xe7\xe4\x9f\x07\xe4/\xb3\xe5E\xd9\x83\xf8\xfeHk\xc7\xf1\xf2\xa5M\xfbA\xe7V\x9f\x7fV*\x16\xcb\xba\x10\x18^\xb0U\xc1T\x9b\xea\x9e-\x0f\x86\xd6v\x0b\x95\xd9\xd0Qj_\xdf\xff\x845}\x0c\xad\xdb\xc0\xfa|\xb3\x9d\x84\xf6rg\x11M\xf8\x91y\xd9\xb7\x90\xc1\xb5\xa8\x814\xb3\x9f\xccFW\xc2\x1b\x1a\xbf\x8f\xcfE\x9f,\xd8`\xed*h\xad;\xed{\xa3\xe9;\xe4\xdd\xac&\x87\xec\'\xd2\x91\xe47\x0ch9c\xc6\x85xIX\x1c\xfbIe\xb6\xe2d\xad\xb9)\xba\xaf*\x14t\xe37\xd4\x02\xd8\xc8&zc\xa2F\x7f\xd9_\x19\xdb]|\xf6Bk\xa7\x12\x1f#\x7f\xff/\xbf\r\x8a\xe3\xbbB\x10BS~3\xff\xc6\x00\xc2\xad|\x13t\x8c/\xc6^\x06\xac\xef\xee\xda\xd21\xd2\xad\xe6^CR\xda[\x1b\x1b\xf8\xea\xa3\x1b\xa5\x90\xf1\x0eI<\x8bL\xd1\x1d\xe1\x9e\xe9i]\xef*m\xc8\x84\x96\n\x83\xef\x95%\xbb\xf0=]\xb0\xd1E\xd1\x19\xfe\xf2@ay\xce\x80\x16(_\xf6\x08&\x01E\xafx\xec\x9dT]`#mSa\x0e\x0b\x83\x06\x16\xc3\x82W\xf9K\xf0\xa7\x7f\xf5\xd5&\xde\x87\xdf\xa6I\xb0\xdd\xae\\\xfc}\x82\xa0\xab\x99\xdc}\x8c)nY@V\t\xb8t\xc9\xc8\x11\xcdH\x9d3s\xfa4\t\xd0\xa1\x14\xca\x93\xbd\xbdo;\n\x83\x14\x95A5\xbd&\xd9\x85\x0cF\x7f\xd7\x84\xd4\xf6\xdd\xf4\xaf\'\x10\x82\x1f\xe6Z}%\x0c\x81\xb9N\xa7p\xad\xca\xc2MS\\\xa6&\x80\xd8\r\xb3>9\xb6-\x83\xea\x87d\xca\xc2G<\xc2\x95\xa5\no\xfa\x01\xfe\xef\xab\xe2\xae\xfd+\xee&,w\xc1\xf3\x18\x0bE6\x83\xa8T\xf5\x01*\xd8\x10]\x83\xd2f;\xca@\xf0\x00\xaef$\xd0\x0b\xa2\xe0\x94.\x0f0\xd7(]{4\x8c\xeb\xc6\xbe\xe5\x8e\x8d\xff\xaa\xfd\xf7\xa8)\x14\xf3\xa6\x8a\xf0K\xf4\xa11\xac\xdf\xff\xe6\xdd}\xeb\x9fE\xa6\xabX\xf6\x0eoc\x9d\
x90\xe5\xcf;\xb6\xd1\xb0-\x91\x07:\x08\x18\xa6\x1b8\x17\xfd6&\xbf\xf6\x7f\xe1\x9b(\xbef\xe3\xad\x12\x04\x1a\xa8\xbb\xf5\xf1\xfc h\x85\x8a\x15\xd8/\xa3\xc3LALU\x02d\xb6\x19\xdb\x04\xa0\xdc\x9e\xaby\xb4!\x93\x97\x98\x0c9\xffm[\xab\x0e4\x93\xf5\x96\xf4\x08\x12\xfd\xc0\xf4\xbd\x85\xb2\xc7~J1+\xd0L\x88$K\xbe\xe5\xaf\xaf\xeb\xb0i\xb7/\xdaW\x82\xca\x95\x9d\xc5\t 87\xcd\x9f\x1cM@.\xc6\xbc\x80\x89\xc6t\xa9g\x10\xfd\xc9\x86\xcf9v\x85H*!\xad/\xabf\x98\n3\x99\xac>z\x08\x9f\x06\xd4\x8f#\xc7<\xab#\xd5Q\xe2)s\xa0U1]_\x12q8\xd0x\xd7"\rV\x81i]J\x14<\xb3\x8f\x98\xe4c\xb3\x8fq\xf4\xd0B1\xf3\x08\x17\x98\xd4\xd3$\x1d\x12\x14\xd8\x97\xd7K\xf6[\xe6\x9f\xc8\xb8\xaa\x83\x05\xc55\x85}\x86\x10+"TK2$p\xe1\xfe!\x9e\x7f\x1b\xa3i\xc6\xe1\xf2\xb3\xb7BA\xe2\xf2\xca\xe7\xaf_.l\xbbK\xfe\x01\xf4\x16\xdf\xb2x\x93\x03\xa3\xddc\xb9\xe4/F\xd8\x9dp\x0e\xc8H\xa2\x17\x98\xf1!\xac*YL\xc8\xfeX\xd9\xec\x95\xc6z\x81s\x9d\xfd\xddu\xf3\xa6t>\x997\xd3@`\x16:\tF\x04#\xdc|\x19\x93[<\xb4X%\xee\xde\xee*\xa8\xec\xee+\xc7\xeco\xa7\x8a\xd3\xbb\xe4\xfb\xe6\xdf\x86X\x1cH^\x88\x82\x10\xcdT\xc4{v\x15\xd473\xa6\xafC\xad\x979\xb6\xbf\xda\xde\xae\x8f\x97m-#a\x1f\xa1\x88M\xbd]w!\x9b \xc8\x9d|\xae\xcc\xdc\n\x04\x88\x00A\xfe/\x05\x85\xa07\xe1\xfa-\xec\xec\xd8}\xb8\xc9\xec\xbd\x12\xb2\xc4\xc0\x99\x10d\x85\xfa\xa9o\xd8\xa7O\xf5\xd1G*\xe3\x7ff8\xa2\x00\xcf\xe9_\xffOd\xd8>\xb9cV\x83U\x91\xf8\x8eT\x03V\xa6\xfaC\xb3qtw|\xdbP;|\t\x1fm\xba\x87\xca=iv\xf6\xbe\x7f\xd9\xb1\x19\x8f\xd6\x96\xbc\x9b)K\x0cI\xd3\xc8o\x0e\x18N#\xd4\xad\xd0\xb6#\xffn"\xc0\x8d\xf9o{^\x8c\xdc\x08zS\xb5P\xf1\xca\xfd\xdf\xc4\x9fl\xb9\xd8\x04\xd0\x11vl*\x86\xf3\x8d\x16\n\x8e\xb3\x9a\xbcoV\tz\xd7\x10\xc4\xb8-8\x02\x8f\n\xaak\x18\x1d\x03r\xbbh\xc3"\xd8\\p\xfc\xde<\x02\x83{\xe5\xb8\xacpD\xfc\xe4\x91\xb9\xd4P\xc0H\x95\xa7\xe9\x0c\x17g_\x164\xbbb\x8f\xeb\xfd\x13\xd3X\xee\x0b?\x9f\x9b?\xad\x08j\x06\xd3\xa7&\x85\xd1No\x19N\xc3|\x06D\x8dv\xc8\xe4J\xa9e\xfe\xa5\x16\xf1\xcd\xa4\xaf\x8b\x14\xf0f\xf6\x06&a\x89\x95\xbc\x98 
\xbb\x93\xef\x16\x85T\x94\xcao\xff\\\ti\x12\xbe\x17A\x81\x15\'dsa\xe9\x08.\xe6\xb5\xe9#\xe7\x1a\xad\xff[+\x88\x18\xd2\x85\xb6\x17D\xddx\xaf8\xb6\xb2\xd57[\xc7$y\x86\t\xafm\\\x0b<\x14\xef\xae\xba\xcf\xf0p\xe1\xbd\x8d\xb8I\xbf\\]\xc4"\\U\x1c\xd3~1\x84\x81\xd9Wa\x02\xe1Ie\xa0\xac\x1e\xee\xed\x1b\x84\x10\x89\xed\xc2\x9dW\xd1\x04\x9f\xef\x88\xbbeg\x02\x8e\x9a\x89[\x17?\xe5\xae\x1a\x0b\'C\x99\x8e\xc9\x94\xf1\x91W^<5Q\xec\x95\xfc\xb5Ea\x1b\xd1\xc2N\xd2\xcdC+\xb2)\t\x856\xc0\xa9$o\xc9%\xeam\x90\xdbH\xf8\xb6/\x8e\xb0\x9e\xe9K\xb3\xe7dN\xe62\xcd\xe0U\xc0Q\x12m\x03\x16\xd7\xc3\xba\x11\x93\xc2\n\xa1\x8c\xc8p\x8b6]`h#fD\xa7\x00\xa9Qh\x88@\xd5\t^\xec\x88r\x1dH\xfb\x9d%4\xb4A\x04"\x14\xe2t\x88,*/W\xeb\x8c=t,\xa3U\x9c\x1f\x08\x03\xc9\x94\x18\x96\xa6\xf5\xd0\xa1(k\xfeZ\xa0\x8e\xd5w92\x1bhn\xd4\xcd\x01\xad\xd8Q\xaaT\xe8\xff\xd1\xf2P\xc2\x0c\xa7\xe1\xb5I\xabO\x1e}\xe2\x06\xef\xa3\x8e\xd0\n\x13\n\xc5o\xdb\xfe\xa0B\x1e\x06\xcd\nHuSv)\x14\xd7\xf4\xab\xd4\xb2|\xe8\xb2\x93N\xb2\xa4\xe9\x17,2\xed\xaa8y\x88\xdd\\\xff3w=X\xee\xddE\x87\xd0|x\xab\xf8\xc9\xbaE\x08\x01\x1epd\x9c\x8e-F\x8f\xb5K\r\x00\xba\xb8\xdf\x95\xa7\xf6L"Z\xfbW%~Y$s\x14\xd2/r\xc8\ni\xe9\x81R3\xda>\xfb\t\xf2\x11\xe31?p\x1a\xc3\xd1\xc7{gw\xcdx\x11\x0bgp\xb1\xf876\x90h\xf4\xf0\xfa\x80\x10\xec\xfc\xada\xff\x9cy\x9b.zyH\xf0\x83\x95e\xacj|\xed\xeb\xacmG\xe5E\x16\x88c\xe5e\x95\xaa`\x10\xf3Vq6\xa2\xa8\xae?Q\xceYa\xfd\xab\xdf\x9b\xab6\x18(c\x9c\x7fT\xb6\x89\xa9\x14\xdb2\xe7\xbd\xe6\x97\xcb=dg\xcb\x1b\x8bB\xc7X\xe7:\x99\xd2v\xe4\xb6\x1d\x90d\x062\x02\x1f\xaa|\xfa\x99\xe9\xcd(\xa8\x02u\xb2\x9c\xc3\xaen\xe1\x05\xdd\xcb\xba\x1eBW\xbc,\xbf6`\x017\xbf\xcaS\xc4G\xa6V\x97\x842\xd82\x80\x12\x8d\x7f:/\xb9nm\xaad\xe1\xa4\xca\x06\xbbo\xf7\xf1\x9b\x83\x17\xcbfS\xcd\x82\xaf\xf5*Y\xf8JZh\xf9%v\x15\xc5\x18\x04PV\xd1\x01\x1d\xcf\xff\xc2Tn`\x05\x9e(m6\x7f\xf6\xd5\x1c\x11\xcd\x88"\'\xa70\xbf\x8b\xef\xcaR\xf4\x8f\x15\x95\xb5\xe7\x13\x00\xbb\xaa+\xb7\x1e@\x07\xeeB\xf8\x19\xbf\xb7\xa6\xe6\xcc\xc2\x1beM!\x06\t\xd3\xff\xaf\x91\xabH\xc9\xdc\xb6\x90\xbdj\xfd\xa
2\xf91\xafH\x8aK\xe0\x97\xd2\xea\xd1\x9a\r\xab7\xf8\xc8\xd6\x95\x02\x93M\n\xb8\x82\x11Jn\xe0\x1f\xd4\x95\x8f\x08Z\xf3-K\xf4\x9b\xe6-\x1e\xaa\xb2\xcb\xe0\xe8\xfd@\xd0"\xc57?\xae\xeb\xcd\'\x9b\xca\xb6!s=B\x98\x91\xb0\xed\xa3\xa2\x9a*?\xac\x06\xe4&\x939\xe2\xc4\xbc-\xbdc\x841\x94~\xc2\x0bK~ P\x91\xc4\x88\xf0\x9d\xb3\xffc\x8dY\xdd\xcf\xe7wSA\x13\x19\xc6\x94\xb7C\x06\xa6\x1d\xf5\xb6@\x88\x0e8\xf1\xb0z\xb7\xe3n\x02}\x18\xcdk\x11#\xfe%\xf5/%\xd2q}I\xa2 #"1\x83\x07!\xb6\xd2\xe0_zK\x957#L9tG\x02\x96@\xb8p\x82P\xe2\xf4B\x95\xaf\x08\xd9D\xfaQy]b\x96\xd6\xb7g\xa0\xe3\x83\xfeT\xd9\xe0\xb0\x86\x80\x9d\xe3\xef&\r\x15\x08Yd\\\xa2\xb6H\xa0,VhW\xda\x97\xc8\xa5\xc4\x08\xbb\x10\xfd\x92\x1d$&0\x0em\x83p\xba\xeb\xe6\xed\xa2x\x9e\xf8\xc4\xb9\xb7\xc5%\\\xd6\xda\x97+\x95\xbd#\x9f\xdb\\\xab\xee\x83=\xaceR\xf3\xcb\x82\xf9V\xdc\xa6XE?\xcc\x93\xd5,\x9c6\x08\xd5"\xc8u\xc0\xbb\x82\x9fh\xb5\x8e\x9c\xd7US\x87^\x85\xc7Q\x80R\x7fK\xec1q\xb2{\xca\xd2\xcc\x15\xd8\x927\xbb|\xc4\xfa5\xc5(\xc9\xde\xa6\x19\xa7(\xa9\xea\xa3\xd5)\x1b\x86a+&Q\x84\xc5\xbb\xffq\xd5\xde\x8c$MGq\xc7&A\xd2\x1a\x8b\x95`\xb2\xb9\xf1\xc7\n\xf7\xe3\xd3\t\x13sN939q1\td\x8b\n\x8d\xfb/[W\xebK\xee\x15\xe6\x14e\xb6`\xb0 #? 
.!\xcb\x03\xa1\x16g\x1fX^\xf9\xc5\x9cq\xd5\xbd\xdd\xdc4\x7f\xc9m\xa9\xa2?\xf6\x158\x0cf\xdcz\x98\x1bj\xb0\xed)yX$\x8a\xcc\xde\xb5\t\xf5\xe0\xa46\xe9M\xcb\xd3|\xf2\x17\xb0\x9f\xb9\x1d\xe4&\xf9;\x91\x7f+\xb9f\xc3\x80X\xee\xbca\xc1K\xa3\xd0\x8el\xeet\xe2\xca\xeb\x8e\x9eL\x04)\x86j\xc0J\xfc\xa73e3\xcc\x1a\xde\xef\xa9\x80 \xab\xe2#\x10\xb9\xb2\xe7X\xb4\xa3\xbf\x10\xc0\x97\x12v8\x19\x10e\x1a\x05\x94\xb1*\xba\xceo`\x80+\xf7!#\xd7-t\xa7\rk\xee\x9bL\xc0\xe93\xb2\xc8\x14K\x15\x96\xe84\xf1\x97c\xf3(\xbe<x\xfdp\xe0\xe4:3\np\xfb\x95\x9b\xd0\xfb\xca+Z\xecpM\xe1_\x03\xeeU\xd86]fu\x14\xd4\x88@\xf0 \x13\xc4\x10~\xbf\xde\x15\x0c\xd4l\x9e"n\x99Vd\x86\r\xf7\xf1\xf7\x1b62\x0f\xf8\xdb\xbbc{;\xdd\x06\xa1\xf6I\xd9\xc2\xf0\x198\x1bX\xbf>~o\xb62\x1436;z\xc8\xfc\x85<ol\xf1z\\]_\x95\x89\xe9\xc5\xb0k\xc12\x08\xa5\\J\'\xe0\x82@\x06r\x87t\x080\x86\x94\x0e\xcb\xe1\xd3\xd9{\x8b\x1ab0eE\xf4\x0f\xe3P\xf6K\xcf\r\x0ej\x85\xa4\x91\x0e\x07\xc9@\x12\x1d\xbc2\x1a\x86jD;\x0b,h\xf3\xa7f\xa9{\xdd4\xbdknR\x18\xc48!\x17\xbfxa\x99+\xc9N\x9d\xfd\x82\xa6\xc2\xed\x9em\x08\x90\xc7\xe7\x9c*\x0c#\xe3\xf3\x9f\xc2\x89pmO\x915+\xa1\xab\x0f\x94z\x90\xbeD\x9a\xc8v\xab$WEz\x1ex\x88-\x84\x89\xc4fb\xba\xcf\xff\x8b\xc0\xe3-n\x1b\x89\xe0\x8d\x17\xf4\xb4\xaa\xb8\x90\xaa\x14\x19\xc50\xb1\xc5p\t\x8e`\x84d\xce\x8c_\x88\xff\xd2\xe6\xae\nm\xc3\x81\x97(2[\xcf\x92\x0f\x02\x8fv\xcb\x04\xd6\x10te\x16\xee\xaa2+\x02^\xeb\n[\xddc\x0f\xd3\xedTS#\x07\x1e\x13l\xb6\xed\x05\x18\x113\xca\xe4\xd3Q\x05P\xc9l\xccg\xb9)o\xde;"\x93\x1a\xe9EW|\xc1\x8f\xee\xdb\x0eBG\x01\xcc9\xd5S\tdyE\'\xcaW\x1a(\x00\tY\xd5oX\x82\xfdZ\xec\xd0\xd3O\xb5\x18?\xbf\x01F\xb6(\xbf\xb8n\xc6\xc2\x83VU\xb3\xb6\xb9\xa6\xf0Pgk\x88~\x8c\xb2\x81i\x8ceV@U\xc98b\xeb\x9f#\xb0\xf2\xd6\xcb\xe7fA\xa3hI\x1e\x13O\x05\xac\x06\x81\x8c\x9b\x0f/\xc0Bk\x9e\x99y\xf1\xfelr\xf75\x118Y\xab\xe4j]gf\x88-<~y\xcf\x16\x1da\x83\xe1\xb8\x13\xab\x9c\xee3\xc3m\x1d\x11\xf0\xf7fR\xc8\xa2\x11.\x06\xcf\x08\x1b\xd1\xaad\xe9\xda\xb7\x8d\xe99\x96\x89\xb02lL\xf4\xf6\x96\t`\xf0W!\xf0\xd1\xcf\x9b2]\xfd\xe3\xc5\x01\x7f\
xc5OX\xa3\xbd-\xf4"\xb7g2\x07\n\xc5\x0f\x99|!\x8cX;:\x0bV\xe9\xba\x8f\xa6\xe3\xad\xb0\x0e\xe2[\xd8\x03\x14jZ\xffu<\xd8Z\x90\xfe\x9dC\x1b\x8a\xb5\xf2\'&g\x88\xd0\x17\xe9\x87\x97m\xc7\xdf\x8b\xe1p\x12\xf2J\x96\xb1\xdc\xef\xc5\x87\xc2\x93;{\x07\x14\xc3\xeb\x0b\xcc\x01\xfb\t[\xd3H\xf0\x06\xd9\x08\xfe\xe5\x15{5\t\x19\xa9y\xb8JA0\t\xf7\xc8\xd1\x1b\xb7e\xf2Ai\xeb\x1ae\x18]\xf1\xc2\'\x1f\x14 \xe0\x89\xd1\x9b\x87\xb3Rl\xb1+\xc9\x17\xaeP&\x8c\xee\xfeA\x9a\xeb\x9f\x9b\x02\xc5\x9d\x88)\x98;iz\x05 \xf6\xea\xd7*\x86\x95\xbc\x86^\x14~c\xe69^MAJ\x868U\xfb\xd5,\xcd\xdf\xc1F\xc0\tt\xed\xca\x8dm\xe5,<}P\t\xdegY\xaa<21\xe0\x89&\xf2\xb2\xdf\xdfY\xb7(\xd1\x81\x0b\x19\xb7\xbb\xbd\xe5s}\xf4\x04\x16`\xed\xce\x9cm\x96C\xb9\xa9r(!\xe6swm\xb0=\x00\x17\xc6\xb9\x90\xa3\xf2\xda8\x14i\xb95\x11\x1d \x1d\x16\xf7R\xb8\x7f\x1a\xe3\x08\x06\xb5\xffd\xbb\xc3\x947\x95u \x03\xf3\xaf4\x90\x95\xdf\xbdO\x08}\x9b\xfd`AzEt{R"\xe5%\xb6\xca\x14]\xf4\xe4\x11F\x05\xaeava&\x12K\xcf|\r\xbb\x17\xf2\xd3\xad\xd5\x9f6\xf0\x10\xd6\xdf\xfe\xb2\x95\xf0\xa2\xfea\xed"\xbf\xac\x88<\xbaG\xde\x91|\x10kM\xed\x91\xe4\x19\xf6.{hrH\xd6X\xfb\xec\xebma\xc8\x0cHU\xe9k\xfd\xec\x92En\xe4s"\xf0-\xc4?\xee\xfb\xb3\xf7\xa08\x93\xbc!q&w#\xba0\xb5\xedZ\x16"\xf1\xa3f\xdd\xfcl\xc8\t\xf0\xcdwJ \x12\x93\xce>9\xb1g2\xea\xc8\xe5$+\xf3\x17\x0eX\t2!2DxT\x8e\x03\xde\xc3\xc1\x81\xdeod2\x8eE}y\xb2\x92\xf5\x94L Sk\x8f\x0e"{ \xa8\x89\x9b\x95[X\xde\n\xbc\x8a\x0c\xe7\xad\x10\x05\xf2\x08K\xfb\xd1\xf4n@ 
=p2.\xa7\xea\x13n\xc8\x04a\rP\xbc\'@\x17\xe3\xa0\x06\tm\xe5\x99\x9c\xe5p\x86\x89\xc71p2\x0f\x0c\x0frm\x8aId\xdd\xaf\x95\ty%R\x004\x98!\xc8<\xf8b\x08X\xfd\xe6m\xfa\xee#\x1b\'\xa9r\xf0-\xf9\x87k\x90\xfbv{\xda\xacU\x93o\xbd\x91\x83sw\xef\x99\x97m\xd2\x17C!>\xc3\x82\xfak\xe3\x9b\x1fv\xf0\xe0Ma]\x8dW\xe2\xdf\xa9f\xca\xea\xbe0ll\xee@\xe9$\xfc\x07\xbf\x9cd\x0b\x89Y\xcc\x1e\x04\x15V\x14B]#5\x84\x8a02i@\x08i\xd7%\xbf\x85\x1cKb&\x95\x80g{\xf5\xb1\rq$\x7f\xff=\xd0_1\xd6\xe4\x8d\xad^\x15\xdf"Y\x9d9\xf4\x98\xdeS=\xffc\x16\x87g\xc5\xc8\x90\xe3!\x01\x1dL\x0c\xd9_\x8b\n\xe8\\\xe3Z\xfc\xf4\xe9? \xa3es&]\xa5\n)\x95\xc0\xba\x92\xb0!\xb3\xfb\xdf\x8d\xb9\x9b\xd6\xdf\x80Y]\xb2\xf1v\xd1\x87uV\x90\xc1\xca$\xa4\xe4\x1f\xc1\xa2<7\xd1\xaa*\x91\n|u\x0b:T\xda\x08)q\xd1\xd6\xd4\x83\x19\xef\xf7\xffm]\x0bb\x01\xeb\xb6\xfe\x11\xce\xa1\xd91\x1d,\xd0\xe9\xf9\xd9\xc5\xd6{\xad\x04\xce\x14\xf1\xf0 K\xe5\xd1\xf8Ec\xba\xee<H6\xa3\xf5\x01E\x90\xeb5\xe4\xcb\xce\xb6\x8f@6\xf7\x07\x8f\xc2\xe9G\xaf\x93WC\xf1\x89Q9\xdc\\\x9b\x1c\x06s\xa2\xe4\x1e\xc6\x8f\xef\x98$\xb0\xea\x81\xaf\x13=$Mp}\xbf\xad\xeb\xa9\x15<\x1e\xa9,\x93\x9d\xfe\xe2c-^\xd3\x9e\xc9R%\x1e \x02\xcd(\xff\x81he\xc6\xb48\x90P=!\xf8\x9b\xb8\x089\xa8\x00\x80\x1a\x16\xc9\x83\xd9@ja^\x1f/H\\\x80\x98\xfb\xe4\x97`\n\x12w\x88\xd2}\x1b\x0e\xebE\x8a\x0e\x98\xce\xa2\xaa\xd3E0\xf7\xa8X\xf7\xe5\xe9\xc5\x86\xe1\x16\xa8w7\xc5\xf4\x05a\x7f\x95Z\xc8\xbfSq\xba\x13\x14\xc7\xc4;\x7f\xf3\xc5~K\x00a0\x9ea\xea\xaf\xa0V\xf7\xf2\xfa\x0f\x00|\x0c\xbe\x90\x00UB\xc4\xfa\xf8T\x07\x0chv\xa0uPv7\xb0u[%\xa4m\xd4?\xc5\xf6\xa2\x8c\xe8W#\xe5iNF\xae\xd3\xa1\x89\xa6\x9b\xfa\x95_\xd8\xe6\\\x96\x9d33=\x04\xc8\x94\xd2\xf9\xea\xf1}9\x02q\x17;H.Q\x83\xf6\xc5)\xac\xceL" \xc6\x035\xf7\xaf\xa7\xb1*<N\xe2mH\xbd\xf1\x0eW&\xa1\xb6\xb3\x8f,\x96\xc6\xcb*\xff\xd4\x12v\xbay\x9c\xac\xac\x06\x9e*\xe1\xbfO\xb6~,Y\xa5\n\xda\xc1\xa3\xea\xfc\xd8\x8b\xbf\x02v\x82I9\t\x92\xcc+\n\x931b4V\t5\x0e3\xd1\xcb\xa0\xe8 \xb6A\xb0\xfa 
wJYd\xa4\x06\xf6\xa0[1sNn\x8b0sO\x02\xcdg\xd8eP\x85\xe8\x878\xf8\xe9\x9f\xd7lm|81K\x1e\t\r\xea\xca\ry\x9dF\xe1\xf9pf N\x8eu0\xb3\x92?\x823K/\xd6\xa7\x04\xe5\xfa\x99OP\xc2\x8f8\xc4\x11\x84S\x12\x81)\x1eC{\xda\x9b\x10\x98%\xc34\xacj2nq>)eJ`\x88\xe5V\xcfm\xc3\x8c\xb3\x97C\x9e \xf4\xbd\x80N\xfc\x9f\x93\xcf\x050d\x1f\x14k\x01\x02\xc0Y\xcbN\x13 (\x06A\x86\\v \xd0P\xd1\xa0U\xd1\xf4\x86\xa9\xa9Qg\xff(\xfd\xcf?<\xaf/\xbb\nP\x8b\x01~T\x8f\xce6\xb7\xb6_\x8c\xf4\xe5\x18C\xdc6\x01oX1=\xd5\x02\x1a\x1d\x13#;4\xcdaT\x9c\xf9\x15\x13\xe8J[\xeb\xfd\xfcS33\xf8\xa9\xe3\x19\xc0TIw\xb8\xb1\xc5\xdao\xb6\xac\xfd\xeb\xebK\xc6\xc4H\x02\xf6\xed\xb5Q\xf7\xc8\xec)\xd0\xcb\x1a[\x0f\xa1\x0c\xc0`\xd2\xa7\x0f\xebKn\xbc\x8f\xc7^\n\xc8\xa2I\xbeu0{\xdd\x84\x80\x0e\xa9s\xf3\x0f \xd6\xcd\xeb\xd7/_(\x88l$\xead\xa2\x9d"\xd6Z_\xd8\x91X\xfa\xafo\x84a\xf6\'\'W^\xc1n\xc4\xc973\xd7H\xdbY`C\n\x8az6+\xd2X\x03\xb0\xf6Yp\x03\xdb\x95\xd9\x937\x0b=\xc0g\xb0!Q\xb4bb\xc5$\xc2\x7f0gf\xe1\xf9\xc0(\xaa^ZD\xd9\x80(C\xad#c~E\xdaDZ\x1a\t\x1b\x1d\x12\xe0\xde_\x99\x9f\xb7\xe1\x86*{\n\x91\x9e\x90SD8\x94\xa7b\r\xd3b$\x7f,\xc6w\xc8-y^%]\xb6l\xb3K\xdfn@\xba\x97d\x17XQ\xeaM\x02\x83;0q\xaaT5j\n1XF\x81\xb3m*KR\x8c\xe7\xcd\x1a\xb5\x0e\xb5\xff~\xd5\xc2\x88*b;\x8fV\x13#\xf7\t\x00\x18\x0f>\xc38v\x8b\x84\x918\xfb\x9b\xb2\xc6\xe2\xf0U~\x80\xb3\xec\x95\x15\xe8Z<j"\x90\xbd\x01\x11\xd8\xc5\xd7\xaf\xefZv\xc3sw\xf9\xe4cs\xd2\xaezlc\x8eI\xb6y7\x06\r#c\xc2Y`\xd8\xb9\xa5%\x8d\x12\x15\x01\x18\x14\xefnXV\xd6D\xd7s\xfb\x94\xeb\xa6&_\x9ac 
\x9e\x9d\xd6\xc7\xa8\xbb\xb9\xf3}\xdb\xa2\x91\xda\x11\xa7=3\x81o\x141\xf24\x90q\xb6\'OVlb!9;\x92\x81L\n.p=\xad$\x81\xd84\x0e6"\xfdy\x89\x83f]?\xf6\xe0\xfc7dUl\xa7\xb8"\'\x8av\xe4\xcc\x0cJ\x07\xc2A\x05\x9c\x93\xe8SG\xb8\xaf\xff:\xff\t!S1\xddY\xe6\xe5\x925R\xe4\x95\\7\xedo%\x1f0\x1c\xdd-\xe1B\x97\xd0\x96V\xdf\x8e6\xfc\xc5?n\x9a/0\x16\xcd\x8e\xce~s\x89Ik\xa0"\x9dT\xe6RG\x0b~\x8c\x12\x90\x0c\xfc\x00w\xd0\xa6\'\xa6u\xdc\xa6\xf0\x0f.\xf1\xc4\x82\xae\x13!z\xf4\xf1\xc2T\xa6\x8dd\xff\x18\xc9\xef\xd7B\xdd\xb7d\x9e\xe98\xef\xd7\x97t\xf9\xb6\x02\xcd\x95>=B"\xd9}\x00\xf9\'\x14o\x04\xd3\xf7\x0f,\x0e\xe1!\x8c\xfet\xcf|\x98\x13A\x07\xa1`\x1b\xb5\xe6jXv\xc2i\x0fB2\xd1S\x1c\x94&DL\xb8\x02\xe1%\xb1\xd9R\xceA\x98k\x95\xa0\xa2\xe0.\xce\x0edq\x0bK\xbc\xe2~\xec\xbd\x18\xb9\x1d\xe8\xb61\xb0\xd6>VT\x19\xea\xcb\x83g\x01*CRUo\xe0\tN\xb1&\x1f\x9f\xd1\x95\xc2\xee\xc5[P\xda\x19Y\x88\x81\x1e-\x83R\xa9,4P\xee\x1a\xa2\xac/\xc7l\'\x88\xef\xb4{\xe7f\xe9\xd91\xa0\xb5?4\x00\xd9\xcb[\xa6u\x91\x08\xd9\x8c\x97\x11?\xf6\xbfT\xfc\x88\x15na\xc1\xd2\xabx\xe7[`@\xee\x8f\xd9\t\x96\xbc\n\xf3\xbc\xa4\xe34\xf8\x05\x0f\x9d\xab\x8cC\x93\x18l\x03\x88\x9e\xeagA\xb8_\x07\x1a]\xef`\x9c\x9a\x95s3vM\xa5\xc3*\xcc$#\x9b\x0b\x81\x8e~\xf3-\x81\xef\x8eRtl\xe6\x97I\x06\xdc&\x98\\\x1d\xcbg\xf1E}\xd9\x8a\x86\t\x006=\xfa6m\xda\xce\xd0\xab\xb8bt\xc6ue\x19\xf0\xa8\xd5kE\x0e\x91\xfd\x80G\xc3D\xec\x8a{m~2c\xcfe\xbaB$oG\x0e\xb9\xfdx1\xbe\xfc\x99\xe1\xc4\xe2W`\xbc\xd1\xfbie\xde\xdc\xd6H\x19~\x9c\xffy%\xad\x11\xa6\\\x91\xaeq[^\x1eHS\xfa\x89\xbe\xf9\xabE.C\x83\x02\'P2\x88\xe2\xca#7\xb0\x05\xe5\xa3\x1bCH\x86\xa7\x9d)\x18\xc2\x02\xb3MO\xa9Y\xd1\x96\xe3a(\x8869<\x0c\xf9\xb4APA2\xf1tI(m[\xeb\xc0\xa0`\t\xecE\xfaV\xb4\xed>0\x12Bw\x052}\xb0\xa2\xe8RB\xac\xb5\\\x18\x97\x0f\xec-"\xeaT\xda\xa4\x1f\xee\xa7\xed\x87\xe4\x8f\xabj\xde&6u\xd6.\xfa\xfa\xe8\x96\x96\xc7\xdb\xf4\x91\x81b\x0cd\xb5D\xa7X\x8b5q\xef\xe3\xfc\xa1\xa5\xd7}\xe7\x16\xbf\xae\th\xa2S\x17\xa1d\x00r\x9d\x1c2\x93\xf7\xd6\x02\x96\x8au\x1b\xec*\x08.4LN\x95\x0f\xb7\x0c*\x8b\x7f5A\x04\x1ck\xd
3?\xd8\x9d\xb4\xdc\xc1\xd8\x1d{\xc4\xf1\xab\xfciyvq\xa56N\x01\xd1\x85\xc9\x95\xc7\xcbfHy"\x19\xc7/\xfbt\xea\x8f\x80^\x84\xbf\x03\x02\xec[S>S\xdf?\xb6\xb1\x1c0\x16\xa2\xfd\x8e\x9dE\x8a\xd2\x003\xa0\x1dC\'!\xa6K\x1e\xc5^\x08)s\xd2(\x00\xc0\x8b\xaf \xc6\x88Yy\x99y\xf0\xde\xac)\xe9\xad\xad\x05r]u1aSh\x06\xd4\xc2w\x1f\xea\xfdg\xbbf\x8e\x99\x13\xfa=\x0b#\x9d\x9d\xa3\x89\xfe-!\xfd\x0c\xf4\xa2\xf8j\x82Q\xc9x\xb0\x08\x87@\x91\x10\x13w\xfb\xc3\xb9\x00\x82\x08\x07\x02\xfc]S\x84.\x03\xa3\xb9\x8b\xee#NN\xde\xaf\xd4\xff\x91\xaczC\x1cM\xec\x0f\x93\x87J\x05(\x16\xea\x86\xe2W.X\xda\xc0\x16\x1c\xdei\xabY\x1b\x9dI\xcb\xee\x13 $\x9d\xc8\xea\xac\x08Td{\x9d\x9bX\xf8f\xdc\x8c\n6\x82\xf1l\xfb\xbc\xb1\x93\xc5\xf5e\x06S[\xd4E\xf6I\xd3\xa0\xfe\xd9^s?\xcd\x93\xf3\xdcVk\xb2\x15Xy)\x95zTL\x14\xfe\xdb}\xa6\xb4U\xd1Q\x93\x1d\x9e\xff\xd1\x90\x9b\xb7\x88\x9e\xd9,\xacW\xe5\xaaPp\xec\x05|\xe57\xf7P5n\x1e\x9by\xb1\x98\x1a\x06\xeb\xe3s[db\x94\xc4\x11\x1e\x9f\xe3\xc8B\x1f_5\x0b\xec\xba\x97\xb7O\xb4)\x8d\xa1!M\x96=1\xef\xc0\xa3\x191\x83\x14d\xc5`\x9eF\xaa=\xf0\xb4Wa\x1b\xee\xe1L{\xd0\xe0\xa3\xa0Z\x89\xc7a\x95Q\x98p\x84Q\xfd\xb1\xcd\xb6CE+\xf7\x93\x9b,\xa6\'\x9cPU\xff\xeb\xb3\x03\xbc\xb1\xcff\x96\xa0\xa2-\xb8B\xd9\xe3\xe8\xfei\xb0\x91W\x8a\n<\xc0O\xe8;O\xdf\xe0\xec&\x0f\xc6\xa1\xf7\x00q*X;\xe0\xe9\x03\xfd\x92L5\xaf\xd9\xe5*|W\xa5\x97\x80{"]\xd2lV0\xc5\xc60\xdc\xd9?\x9c\x9d\xba>`t\xb5X\xe9f\xb7F\x8b\x0cr9\xc5\xd7\xd5\xff\xaa\xd2m-\xd8\xf2\xf1w\xf3\xb5\x04\x9a\xbd):[\x9b@\xba\xb3\xc7\xa6Y\xec\xab\x94\xb0\xd2\x94\x9b/\xf4\x03\xd9\x0f\x9b\xdbl\xaf@\x8f\xfa\xe313DA\xe5\xf7\xd13\x8e\xe0\x81\xe29\x10\xb0\x82\xb3t\xc3\xb8l\xc8^q\x86\xcc\xfb\x1d\xae\x7f0\x13`\x87h\x7f\x8d[\xc0\xd0\x8ao\xcc\xb4f\x88\x18s\x10\x1a\xd1q!\x16\xc7\x11\xe1B\xd7\xa0\xdf\x9b\xb5\xf0\xf2\xc4T\x123\x86NT\rk\xc0\xf0\x13\xc3\xf8E\x08\x92\x91\xfd5S\x05s\x1a\xfdw\xc5\x92\x97i\xc1\x8c\xc2\x0c\xaaKnx\x11\x0bM 
|c\x89\xbb\xec#aW0\xc5\xe2\xbd\xa7\x02\x10\xaa/)\xc9C\xe4L\xfe\x822B\x91\x91FG%F_\xf5S\xe9\xbeb\xf3\xd2\x94\xc6\xe3\xacs\xe8\xe6/\xee\xed\x99M\x88\x18\x007\x02\xa3\x19\x8b\x1d\x91\xb0\xbc`\xd2\xde\x1a\xfa\xb8A}\x83\x11gJ\x92\x98\xaa\xe9\xac\xf3\xb1\xfc^jY\'\x8f\x0f\x88\x8e\xca\x14\xdfq\xeb\xf77@\xe0l\x90o\x04ZFA\x92L\xaf\x9d0\xafN+\x1a\xd9\x16u\xd5\xcc\xef\x0c\xd4\x9e\x06\x00U\xa8\xde*\x1d\xa4\xa1\x82L\xa3eU\xd3\x0f>\xfd\xadM\x83\xe5\xb2\xe6\x83\x02\x81J\x0e\xa1\xbb\xfbH\xdf\x1e\xb6\xaaS^\xd5\xae`\x13\xae\xa1\x93Q\x8f\xd2\x1aA\xa7\x8em\x15A\xcb\xa9d\xe9]\x84\xb3w\xd9\xfe}\x13V\xb6\x03t\xf7-\x86\xc3\x9bZ\xf1C\x07\x92*L8a\xb8\xd2\xde;2\x1c@!yZ\x18\x1b\x0ex\x141\xcd\xd32\x1dl5\xaa\xc3a\xfe\xa5\t\xaf\xe3\x89\x19\xd6\x15\n\x1b\xadC\xb3\x19\x1er\x16\x1c\xc4\xe4\xd7:9\xbd\xff\xa6 N\xac\xe8\x91e\xde\x17\x10V\x98\x97oy\xe8&}\x11)\xbf\x8ef\xe5\xa5{{\x98\xcf\x9e\xf5\xb6xC\x84\xc4t8B\xd3\x91\xea\xc7\x03|\x97\x87t\x1e\x02cL\xef\x9b\xc9\xa4\x8d\xcb\xed\x0fs\x91\xb0\xb5l\xe6\xa1#\xef\x9b\x08x\xa3\x1c(\x12\x8c#"x\xcf\x02\x19\x1a\x86\xb9^\xb1oh~Z:\xdaw\xf5\xd0\x16g\xb4\x80\xdf 
]o\x86\xe4\xd9\x04\xd6x\t?L\x17h\xd6\xb1\x84(\xc9\xb1\x85\npc\x97]\xb3\x05\xe9U\xf1d\xff\x81\x9c\xff\xd8\xb0\xe0\xfc\xf2\x101\xc9\xe0<\x10@\xf1\xa9^\xdf\xb6\x81R\x0f\xe3U\x13m\xf6\xdbb>\x12\x93\xc4\xf8\xfc\xc0\x84\xab\'_\xd4\xa5\xd3x\xc5\xde\xd2Ix\x83\x8f\x10\xc4\xf8f\xbe\xb4\x07D\x1e\xb2+W\xcc(\xc6\xd5\xdc+\xe8\xc1\xca\x95Z\xbcz\xf0\xcc\xd0\xd3\xc3\xabTJ\xc6{\'\xec\x15\xc6\xbc\xaa\x9ec\x05\xaa\xe7\xab\xd8\x989\xbc\xe9f-\x8aucJ\xe8\xd6!\xd9\xc7\xf5\xfe*\xads-\x85R};\x98L\x19\xe4\t\xed\xf5\x07\xa6\xa4\x91\x88\xa3$\xd3{\x05<\xad\x04?\xb3\x01\xc6\xa0\xc7F\xc5\x1aDv\xe7_*\xc4CvxNT\xbcZ\x13D\x19\xc9E\xf4\x16:\xd2T\t\x8d\x08\xca\x91)`0\x11\xe0\x054\x80I\xdb~\n2;d!\xe8\xd6\xc3\x87A\xb7\xf2\xdf\xe2\x16!H\x81\xc0\xb1\xba\x86\x13_\xe4\xcc3\xd7\xf2\x02\xf8H)\x9a\xb0\x7f\xb8\xe6\xc6\xce\'hEbKj\x02r_U-?90jk\xabG7\xa0\xc4\x12\xf5g\x1f\xa2\xf0r\xf5\x0b\xaa\xd4\xd1\xb3O\x96\xd7\xb6]2\xbfjrg\x98-\x9d\xea\xa9\x81\x9b\xec3\xe8WM\xf8\xaa~\x02\x9d]\xb2\xd2l\xc9h\x13j\x89\xa8G\'B\xf0~\xaab\xfd\xe5\x1a%\xec?Y\xe6d\x0c\x8a\r\x0c\x15}:\x8e0\xb9M\x93\\6\x13\xf3\x95@=\xd7\xed\xcc\x1d\xfc\xac\x87\xc6\x11\xc7\xe6\xd0\\\x89\x05d\xb3<oa3\x19:<?T~\x1a\xe2\x8ah\x87\xde\x12(\x98\xdb\xb4\xad\xe9\x07fWt\xff\xcc\x96\x89\xad\x8d\xfa\xed\xbf\xb6\xbb4\x15\xed\x89P\x9cvhD\xe0\xdb\xfbJ%J\x83S\xd8=B0H\xa7\x08\xcf\xce\xa6M\x98\xefxm\xbe\xcd\xcb#\xb0\t\n\xfe\xf4\xaf\xe0\x92pR\x8e\x19\x13\xb9\x9f\xf0\x85\xd9:\xbb\xe7\xcc\x19\xecK\xe8\x86F\x16\xa2\x84\xc6\x80^\xe2\xd7\xbb\xbdGft;f\xe9\xcd\x80\x88\xb3\xbc\x9bb\x93\xe1\xf4\xc1\xb0\x1d\xa5bp\xea\x80\x1c}\xfb\xed\nVng\xd3\x9e=\nu\xb6\x98\xcb\xb6f,{.\xf7\x00\x00\xc6\x82>\x83\xed/1\x8a\xbe\xd6\xc1\x7f\xe5#\x018\x18t\xac\x12\xb2A\xf0\xa0\x82oq\xa72?\xe0\xc90\x89\x87Q\xedH\xdf\r\xb6\x0c\xc4\x87\x82\xb0\x1f\x88\'\xc8&\x91H(\x11+\xaak\x10\xb9s{\x00k\xaa\xd5\x1a\xedd\xfd\xf8\x86\xb0\xaa\xa1Z\'\xbeP\x95\\5;\xce\x10.\xbfu\xc4\x08(}T|B\x0e\xd6\xb8\xee\xd6\xbf:H\x8b\xfaS_\xdc\xdf\xb3(\x88\xe0@\xd1\xcf\x99\xc9\xb4Fb`\x07$\x02\xe3\xa4\x88\xac\x07u\xb6b\xe4\xf7U\xf5\x7f\x85\xff\x
\x95p\xc0\xf6\xe4\x8aj\x9d\xff\xe1\xbc\xc6\xc5\xd2k\x1d"\xa6\x19\xba\x14\x02i\xc9Me\x17\xbeY\xf9I3\xbbssG\x07s\x95\x1c\xde@\xed\x1e\xeb&s\x90\x9c}&\x88Ks<rps\xc3R\x1c\x8e(\xb6\xdf@\x11JW\xca\xeb\xe1\xf1\xc4n\x1c\xeb\xc9BN=R2\x96\xbc\xc7\xb8Y\xfcN\x7fU\xf1\x9dJr\xd7\x90B\x91hE<\rf\t\xfd\x18\x07h\xc2c\xae\x89h#]\xc4\x99\x1e\xa6\xc8\xac\x95\x80\x01\xa6]\xd9\xd9\x98\x9d?gK\xb2\xf5\x17\x0fJ\xad!;i_\x93\x16\x80*\x0c"\x8e\xb5\xf8:\xa6\xaa\xa7$\x01\xce\x9e\xc0M\xf7\xd3V\xba\xde\x83\xf4\x02\xe7\xd2\x13\x01\xaf\xce\xf5\x8d\x95\xe2\x87\xb9>\x87\xde\xaf\xd5q\xd9W\x99Ja>5(]\xecW\xa5\x84J\xbb\xde\xad1\x08\xeaI\xabN^\x8f\x05\xb2j$\x1bq[\x99,.e2\xdb\xd8_\xb1w\x1cE\xfe\xf6G\xbdb\xfd\xe1\x9de*U2?^\x0e\x07C:V=\xb8K\x85L \xc6R^\x9e\xaeKG\xe5^\x14\xd3\x98\xb5\r(\xac\xa1\x81\x91\x08=\xcb.}}\xacR6\xf4\xb13W)\x91m\x80\xd3\x14\x8d\xdc&#\x8d\x91\x0f\x8f\xb1\xf9\xf2\xc0"C\xdbB\xc5\xcfz\x07K3\xcac\xd2_k\x00Z\x18i*i\xffql\xbe\xbe@\xb5=0j7\xf9\x16\x83m\x0b@\xdc\x19P\x19\xff\xc8^\xc5\xc5$\x15\x95\xbb\xb7q\xe0\xeb#\xb7\xfb\xe0\xfb\xf4\xbd\x1d\x11\x8f\x00\xe1\x87\xebE\xbbK\xc4\x88\x95\xa2\'L\xfbr\xb8\xca,n\xb9\x06\xf6\x04\xa3\xa1\xa8t\xe2f\xf3`T%W\xa1\x82\x90\xdcV8\x1cwWYLg1\xb4Qk\xaf\x8d\x8ff\xcd2j(\x9e1g\xac\x05^\xc1J\xda\x8b\x0f\xb4%\x16& 
%6\xa4\xcf7\xaa\xb9\xb9t\xb3Z\xa2\xd2v\xfbn>c3\xf8\x05C\xd3\x8c\x19\'\xdd\x92\xb8\xfc\x94\xe1\xaaU*P\x08\x1e\x83g!\x1f\xb1/\xc6\xa8\x99(L\xa3\xaan\xe3\x9d7\xe6\x85\xae\x92\xa4\xff\xd8)\xf9s\xb9\xaf\x8b\xca\xf4\xf8\xc3\x82]\x05\xaf2\x90,\xa8v\xf6\x1a3\xa1\x84*t\x88+As\xb1\xdf\xda\x1d[\x0b\x02\x9d`\x0b\xa8\xc9\xd8\x0b!\x12@|\xb9V\x8aF1~\xf1s\x8d\x81m\xff\x1d\xc0\xd4\xeb\xc8<\xbe\x8a\xee\x8cd\xd9\xf6\xc8\xb8P8\x8eBcs.\\\x9eW\xb9\xe5\xea\x9d^\xa0\\\xd8V\x1c\xa3!\xd9\xacl\xd8AUh\xc66\xfa\x94xS\x1bXE\x97\xb1~\xb1\x05,\x04\xab\xd6\x82zFQ\xd4gy\xec\x1b\x81\xde\x83\x1c\xa1\x85\x1aj\x01\xf6B\x18\xe4z]\xb5D\x0c\x92\x1f\xd8\x17\xa1\xbf\x87@m\xe8{\xf96\x00\x95\x99\x07f\xb7\xa5\x9a\xf9\xbb\xae\xa2\x1aL\x02\x1fyp\xf3\xbb\xf4\r\x15\xecT\x9a\xb1\xeeR\xd5\xe8\xe9n\x0f\xe4_\xc5\xde\x8e\xd4nlm\xf4\xdf^\x9b\xea\x92\xed\xe1>\xd9R\x08\xa2V\xb2\xec\xda\xf9q\xd6>\xbee\x96\x18\xd7R\x89%0\xea\xe4\x90\x98\x8a~(M\x908\xd9\x96\xcb\xaag\x1f\xac\xdc\xfal9N\xacn2^\xba\x97Tl\xb3\xc7\x17o9\x95\xe3\xfe\xbc\xfb\xfe\xc5.$stv\x84O\xe7\xfa2\x1a\xb7\xe7\xfc3\x03\xd7\xa6bCn\xde\xd2\x8b\x07\xc6\x18\x8f\xa5\xab\xa4W\xa3\xbeN\xc9L\x03\x0f\x03\x10b5 
\x969o\x19\xfa\x90\x06\xa1\x8e\x0f\xc9\x04l\xe8\xea\x08)\xd2\xc7V\xcasd\x06\xf7\x81\x06s\xa72\xa70\xc6N\x7f\xeb\xf9F\xcd\x8f\xf0\xd0N:\xca\xcb\xa3\xff\xde\xc3\xe5\xe7\xa2Q\xb7\xee\xa7\x95\x1b]5\xf3\x13\xb1\x0e\xe79\xe8\xf9K\xce\x8e\xd1\t\xffp\xee\x86\xc8\x9b^\xe9:\xe76\xd1Nc\xe8Dg\xb3\x82`3o\x12\x08\x1e\xd4\x90\x1e\x94\xc3]1\x9e\x9c\xa4\x0f*\xa2\xe3?\xb7\xdf\xb1\xd9[[>X\x89\x7fO\x0fxuG~\x8dXC\xd3\xa6s\xf1C\xd8^OJ|\xf2I[h\x1an\xfe\xb4\xcc\xd7\t\x06\xc0\x93G\xd0\x88\xc3\xa8\xfdISa\x9f\x0e=\x10\x0b\xf0GV\xdc4\xdd\xb3\xd7s\xf2\xa0\x8d\xe5\xe5|7d-\xc8>\x99\xb4\xab\xeb\xc0\xda6\xb2o\xc1\x00\xdf(9c\xdf\x8cr\x97\xd3\xc2\x16`\xb9\xdd4\xe1\x92\xe9\xcd\x03;\xb0N\xa8`\x10\x9c\xc4\xaaW\xfd\xdc\xabB\xdf\xdc\x04/\x15S\xec\x8f*\'\xbc=\xceF\xd4V\'P\xd00\x00|\xe2]3\xb5\xbe^o\xb3\xc4\x92\x0cB!\xd5\xd9!\xc4J(5\x16T\xac\xa9*\xe4@8\xfe\xa1\xdc\xbb\xb7d\xa0\'\xbb\xb1fO\xed-\xbb\x02\xe0\xb7_\x03:D%\xd5\xdc\x12\x13\x8f*\x9a=\xcd\xcc\xfe\x909%\x0e\xc4\xff9\xb1\xafAe\xaa\xfc\x87\x1ev\xa1\xb2\x8a\x91\xad\x95\xaf\xa2\x16\x91:\xfb$\xee9{\x046\xb1\xf5K\xf7G\x05~\xdd\x82Q\xdb\x8d\xe3\xf72\xc6\xc0}\x98\xc6\r\xe5\xf9d\x17\x9eX\xeeM\xd52\xe4\xf7$\xde\xf4\x8c>b\xe7_\xf4\xcc28\xbdu\x86"}\x1e\xcc}\x12\xaf\x82&%)\x84q?t\xd2\x13\xe5\xd0J\xe1`\x92\x17\xb3\x82,i\x86($\x1e\x03\x08\xa0\x1a\x07\xf9\x90^G\xb0\x95\'0S\xd9]\xb8\xb2\xbc\x83\xc4\x1eY\xe7\xec\xf9R\xd7\x05/\x90Mp"\x95\x8d\xdc\xda\x92)\xef\xc7\xc6\x94\x08\xf1\x9a,\x8e\x7fz(\xe8\x84\xa6\x1dV\xfbk\x08\x1b|x\x84\xb9\xad\xbc\xdf\xa0\xcdmDx\x11$_1\x01xQa\xb8\x19\xb4yb\x19\x11\xfd+W\xb5\r,\xcb\xac\x9ceP\x98\x19\x06]\xb9>\xf8\xaai=:[\xdc|\x0f\xaarD\xcd\x82W\xa5\x8dN\xf5\x8b\x1ck\x91\xa2\xae\xfe\xfb\x00.\xe6\xd5\xbe\x8a~gv\xc5\r+\xed\xd9\xa6}%N\xa7\xe2\x15\xa3\x1c\x9a\xa5!\xbb?\xad\xaa\r\'\x12\xdf\xfcV\x0e[ 
\\1<\xb0\xa3t\xf1\x87r\xb8\xf1\xba..\x9e\x9c\xe1\xda}g;\xec\xfd\'\xd4\xd0\xf5\xc7E<\xdfz\xaf!\x03\xab\xfa\xef\x86}\x84:\xcf\xf1\xfby\xfc\xba\x1c\x8a\x01\x90B\xc2v\xc7Y4\x8b\xb3\xdbTHh\xca\xa3\x8f\xcf\xfe\x98\r3k\xe8\xc6\xfe\xe9\n\xc3f\x07?\xd0"\x00D\xd8d?\xecTY\x98B\xc9\x82\xac\x1av%\xa0\x86\xea\x7flY~\xd7\x0f\xc2\x86\xeeS9$ 1\x03\x85\x1c\x8d\xf7\x17ig~\x82\xee\xa7\xd1\xd5\xc1\x183\xc1\x91Z(hf\x81\xb8\x8a?\xeb\xc3y{\x95\xb8\xb9e\xd5\x013\x97\x02\\\x8eAc\xc1(\x16\xae\xc9fw\xabz"\x0b\xa8W\x1d\xb9U\x95\xb9\x1d\xcf\x99\xfd\xc1)\x17\x9f\x93\x1b\xa6c\x8e{\xd4<\xb6\xb8TA\x01\x84\xcd\'\xb6G\xef\xf9R"+oI\xf7}c%\x139=\xb9`\xb6x\xe2\xfd\xbe}e&F\x11\x92\xb1\x1c\x10@:\x90\xc9B\xc3\x92\x80\x17\xc3p\xa5\xf0\x94\x905\x7f\x93i\xa2\xbe\x05a\xdb?XdA\x117v\xa60nT\xc7\xbbT\xe6#NT\xf4Y\x1e4\xa5\xeb\xea\x87\x8c(Rz\x8a\xdd4\xe4\x06%\x02s\xa2]\xb5e\x850\x92z\x95\x80\xa6\xd1R8\x0c\xf9\x1e7\xbb\xc6\x96\x9f\x1a\x94\xa0.2\xfdt\xa3\xe4\xca\xe5\xa0x@\xb4\x87hku\xff\x8f\xca\xa9\xca\xde.\xb8K\x9c\x92{\xb0]\\?N\xee:%!fC\xf8\x11\x9cyz`GB\xdc\x03\x02\xdfx\x9a\x14)I\x8c\xbc\xf0L0[(\x8f6Fy\xa7\x0fT\x13\xd7\xaa\x91\x93R"2\xce\xe2\xab\x0b_gJ\xf5\x15\xdeA\xd7\xbc\x8b\xe7D\x17j\x8b\x7fA\x8f 
\xb5\xa4\xc1\x90/\xbc_X)T\x8a\xa7o\xbd\xb2\x98\xe6\x86\xd6\x8c:J\x19\xca\x0e+\xc1X\x8cY\xe4\xe1\xcd\xad\xe3\xc7\x1e\xdc\x187l\xab\xcb\xac\xa3p\xa0\xa6!]u\xbc0\xa5\xec$\xf8\x1f{\xbb\xe6\xd6"Jn\xc25uO\xf8\xa2w\xcczXMg\x96?\xf4\xfa\x9f\xd4\x9aQ\xb2U\xa5\x1fW\x14\x90;!{>z\x02\xfb\xfe\xf9\xee\x1a|\xb6\x0e\x88\xb2X\x866g\x97\xb8\xe1\xe0@\xf4)\x92g\x13b\xc5^,UW\xea\xb3\xfd\x8c\x95\xe1ai\x8a\x97U\xf1?0\xb3\xbf\x9c\x16\x12\xd7*\xabV\x9f!n\xf6W\xdeL\xd8\xc5\tC\xc2\xeb\xa5\x12\x0fYfDJ\x12\xee2\xff_\xcd=\xf2\xb7\xcd@bey\\\xe3W-\x1e\xcb\xf9\xe3\xf2\xfb\xc9\x8d\xd7\x97\xe5\xf5\x02\xc4\x81P\x02\xa6\x1bG7\xca!\x8d\x90\xdd\x0c\xd1\xe4\x83h\x08\xadyA\x14\x84\x90\xd3\'.U\x94\xc3-\xf5\x9e\xee\xfd?9$\x86\x8c\t\xf5\xe4\x82\xdf\xb1`\xc3\xf2W\xa0c,c\x8f\xcdP-cnu\x1d\xe3\x17/\xcd6\x1a\xd8X\xa7\xcc\x8a\xe5{\xa2z\x12\xc4\xaf\xa0\xb8\xed\xfb\x96\x0cz46\xd8\xa7\xb8\xc0\\<\xda\xffq*[\x88Vf-\x9f`\x96\r\x90\x85q\xbf\x18\x997\x0f\xe4(\xd8\xef\xa8\xb4\xa2\x87c9\xddm\xbb\xa8\xdc\x95\xc1\xa4\xf4\x15\x1e\xcb\xca\xa6R\xfe\xfc\xf2\xe4\x992\x1e\xceP\xe3\xc1\rn\rn\xe3\xaa\xc7\xa90\xfe\xe4@\x80\x15N4\x01\x1c\x10c\xf9\xacON!\x08\x18\xe1Cvn\xebgn\x86\xd1w\xc0"\xaaH%/\xbcRLP/L\xf5\xe9\xf5\xbb\xbej\xbd\xdb\'\x16/\xf2r(\xb1\x10\xc4\x0f\x8d\xa1\x8f\x83\x98\xc2\xadbd\'g\x96\x9b\xb0-\xcd\xb2;\xb1"\x90\xe1>\xa1V)\xf0\xf1\xf8\xbf7w\xebU\xc0\x08\x1d\x9b\xf1\xec\x9fN\xc8\xe8Hze];\xed0\x16\xdf6\xbf\xa0kA\x1a\x81\xb3\x93\x8a\xebU\xbb\x11\xf4\x89\xea\x91\xb2\xcb\x06\xfb\xa2\xd7\xc1\x93\x03o\xa9\xc2\n\xe8\x08\xb82\xd9\x15\x8d"h+\xd0\x84\x11\x90f\xc6\xa7\xd0X\xdf\xa9W\xd2\xe4\xac`\xac"\xa2\tK\xf7\x94\x1c\xe1\x14\xc9\x15QvlD\xeb}x\xac}\xbb\x8b^`\x14A.`,m\xf5R\xcc7\xd3\xf2\x1c\xd9\x11\xbaZ\xb5c\x0f\x1e>{\x94\xed~3\x08\xa8A\xe7^z\x9f*3\xbb\xee\xd5\x19\xfcN\xbd\x91\x99\xf3\xeb\xa2s4\x8c\xb1W%N>\xed/\x1b\xd4\xc2e\xdfY[\x1e\xbdf\x03\x16\xb4\xe6hd\xaa/\r`\xe6\x87fA0\xef6`\x9e\xbf\x0b\\\xfbKN\xc0\x8d\xb7\x16F\x18fc{gN<F\xe5p\x83\x13\'\xb0\x85\xe2;x\xb0.[\x99FW\x90\x83\x11\xce\x8e\x86\xb8\t\x88\xca\x14V\xab\xf4\x1d\x88\xcc\x05\x99>\
xeek\xf6\xfc\xe9+K\xb7\x9a\xde\x83\xe2\x19\xb7*\xda\x82P\x0c\xdc_\xb6,\xf39\xf4n\xaa\x93\x0bC\tL\xfat\xd7.;y\x8c-(\x84\xee\xd7\xad\xfc\xd9\x89A|\x1c\xc7\n\xd3\xb7\xcd9t\xd8\x86e"\xe0\x8d\x1d\x88o\xb1\x1f\xa1\x9dY\x17\xe6[\xadZr\xcc/\xe3\x1f\xda\xd33\x98\x12\x08W\xf1\x9f\xbc?\xd9O\xb1r\xdc\xbe\x08\xc8om\x95\xd7kss\xc2\xa2?\xf4Q\x8bR\xab\x81\x0b4\x8dP\x1fr\xf4\x10D\x84N9\x13[\xad\x91Yg\x15\xde\xe2\xc0\x1eo\xc8K\xabc\xccb85\x9bu)\xb5\xd8\x1f(f\xb0\xd7$,\x88)\x12\x17\x1c\xb9\xe2\x83\x96>_=\xed`\xdf\xcb\r^\x02\x81K\xd7\xe88\xf6\xe5H\xc5\x91\xaf-\x9a[\xe7\xa7\xb0G\xed\xa0*\xe3\xfa\xae\x93\xfb\x1fh\xa9\xe7\x1d\xf1\xe1\x9c\xa8\xd4\xc8-\xe2\xd5X\x1f\xc1\x96Q}\xa0\xc4\x89/\xb1Yj\xc5\xc6\x94\xbc6Q\x8aeF\x02\xa4\x1b\xed\x8d\x13U\xd8*=\x08\x15#\x95\x8d\xe2M_\n\xf1\x06\xc6\x18cA\\\x00\xf3\x95Pc"\xa4#\xb9\x0e\xca\x15\xcdIU\x17\xdd\x7fVJy\xe1\xd2\x82W\xe7\xe0\xb5\xeb{\xc4\r\xb3\xed\xd5\x17\xa5)U\x18s\x18?<?\xfd\x1b\xa6j9\xe5\xae\x85\n\xf2\x11\xb2K\xb8\x8a\xef\xc8\x9dsDw\x8e\x9c)\xccT\xc9\xb8a/\xec\xee\x15f\x88\x11\x1c3w\x8f\xe0~AA4\x16\x95\xa6\xdf#\x1d\xb8~*\xeb\xa5,\x9brH;\xe3(=/5\x9e\x9a\xbb\\\xfdl\x18$\x8b\x18\x18y}\x95\xbb\xdc\x91\xbfb\x9axn\x0f0\xe2v\x17\xaf\xeeO\\t\x8b\x7f\xf9\x9e\xbb\xa54{>\xe0$\x0f6\x85\xb8T\xf2\xda)\x86o=\xf6\t\x13\xf4\xc7\xc8(\xa9p 
t\xf8\x1e\xd9\x8a\xee\xfd\xb4t\x98\xbcj\xebP[\x89\xdaa\xd9\x05\xc1;$/\x80\xc5\xc8\xd2\x86(\x84\x13,\xdc\xf11\xdc?6\x97\xddX\x99N\xbcKn\xb5\xc3|+W\x8dE\x05$\x17\x90\x07G\xfc\xcc\x08\xb0E\xf2\xe6\x9b*\xbaxq\xe9R\x19\x7fuy\xeb\xbe9\xd5\xa8\xe6.\xdd\x9c`\xf4\xe7r\xa8u\xd8Z\xc2\x8d\xaf\xcf\xf7\x04Q\x87\xccV\x80\xa0$\xde\x08\x7f\x8cIBQ8\x8b{\xb0\xd1[\xda\x1e\xa1\x0f\x135\xaf\x90\xffBr\xdb\xb6\x8c\xc0\x86/\xf7\x7f\xed(}\xc9\xcc+\xd1\x93\x02\x1b\'\x0e\x93rX\x9c\xd0\x82\xb7\xb0\xc1\xa8\xe4-EAxc@\xc7&e\xee\xdchD\x02v\xa4\x0e6\xea\xe6\xd5\xa8\x0cj\xff]\x0f\x03\xf9D\x98\x13s*j\xd1\x15M\x02\xf7\x14\xddGH\x01\x13\xd1Q\xc0\x97\xd6\x08iPB^\xe9\xa3\'vg81\xd9B\x845\\\x9a\x1f\xe8\xa01\x10\xf0\x06\xfdJ\x19\xcf\xfc?\xb1\xb2\x88\xd36\x19\x92\xd9b\xe1\xac\xa4\xac\xbb\x04\x08\xed\x0e4\x10\xec\xa3B\x83zd\x01x\x0c\x85\xf7\xb3\xcd\xe7\xef\xc6\xf0\x88[\xf4\x1a#\x94\x1b\xd1\x98\xaai\xc2\x13o9\x98\xdd>\xe0\xfa\x1b\xf4\xc2\xba\xcd]t\xcf\x9awO,]l\xba\xafv\r\xb9\x13@\xe9~\x9d\xfc\xbe<\xd82\xd3\x82\xa53v\xd5\xd79:s\x1e\x92:\xfeXH\xa2\x86\x03\x17`\xb2\x9eX\xb7_%i9Y\xbf\xd8\x80\xd8K\x00F\x05w\x92\xbd\x95}e\xf6\xd8\x08\x96\x87sD\x13l\xa3Xx\x87Oy\xb7\x1c\xb2\xe4mC\xa7\xd2\xca\xf6\xc5\xc5\xbc\x80g 
\x85|\xf4\x033\xa9(\x8b\xec\x05\x87\xe2WX\x0f\xed\xd0M\xf1P\xb5e\xff\x85\x970=\xb1\xdc\xa0\x13\xa6\xde\xa8\x1c\xf2\xe1\x1aBkPt\xf3\x16\xf6I/\x93\x18\x08F\xf6\xa5E\xd7#m\x8d\x08\x12Q\xf6\xdb\xa2o\xa7\x82\x9f;\x1ar{\x1e\xa4\xb0\x10\x9c\x1c\xd8\xa3\xa5\x08-\xbe_\x91\xd87\xa2}3\xdd>\xfd\xa2\xbeQ\xf8(TC^+\xd2\xd5q(\xcb\xeb\xae\xea%\xec\xfa6w\xa7\xa4>\xd2\xcf\x86\\T\xfcNe\xd1p\xeeV\x8e\xa6m\x8f\xe7{\x8c:\xd8\xd7\xc1C\xee\x94\x006\x02H\xdb\xc1\x0cQ\xa0~\x05c\xeb\xd1)\x87\x00\n\x1b\x1a])y(\x98?\xb7\xa3\xe4vh\x9c\xb4t2\x9e\xcc\xae5\x0b\x15g\xe1\x8cD\xe5\xdc~\xbb\xd1\x15n\xd5\xa4\r\xd5\x1a\x9e\xc0\xae\x9d\x96s\xd3\x93g\xa3\xb8\xd4\xff\xcc\xbd\xd7\x03\xd9,\xa4\x91\xa2\xf6\x87~\x11\x8fL\xbc\xe5\x83\xc3\xbbBQ\\\xb6\xa7\xab\x84\xcb\x9dBb\x90\x13h\xcdk\x0b\x0c}y\xdbU\xf3\xa49\x91\x12\xad\xb8\xech\xb48>\\}\xf6\xca\xb1\x99\xbc\x1f\xc5f\xa5\xc1\x04f\xa9\xec\xf7j\x0c)\r\xe6\xef\xf8\x05#\x01\xea9\xd1w\xafb\x8cX\xd4\xd2C\xbf9[\x0ee\x06Z\x00\x05\\\xa5g\x1a\xca\xb0nhh\xc6\xd6R\xd4\x1eZ\x98\xcawd#\x8f\xfem<y%2\x14\x05R\x06\xf6\xda\xd4(#-Et\xe4(\xdf\x10\xfa\x04\xa5\xeb z S\xd4\xe4\xc02\xab:\xed\xe7\r(\xe6\x92^\x1eoZ\xc2\xc3\x1b\xc1\x95\xady\xa9\xf1\x1b`\xb7\xe9lf_\x82\x19}7_\x967^*"\x05E%7X\xc67\t\x0b8\x99?\x93\xebJ.3\x1deUU\xc6H6\x93\xe4\x86\xe0H\x07\xd37.\xe4\x07\xf0\xd0I\x15\xe3V)W\xfd\x91\xd9\x16Os^\x8e]}o\\\x8f\xea\xbb\xf9\x8bN\xdc+Re\x9c\xb2\\\xfcYr\xf9\xca\xfe\x94\xc21\x9d\x14\xdc\xf9Q\x04$\xc5J\xae\xfby\xf6\x16\xda\xa8=\x89\xbe\xa2 
a(T\x99\xf8\x83\xc9\x87\xca\xc9\xf3\xe2\xbaLs\xaa\x05]v\x8aQ#\xd7{\xe5J\xcb\xe1\xb0\x89\xdb\xa9\xef\xd9\xe5\x8f\xb9F\x0e;sl\x0b\xc8\xd6\xc6#\xcb\x8d\xdb\xe8\xe3\xb9]\x11\'$\x87\xab\x8apE\x8b\xd1eV\\\xab\x1b\xc2,\xba\xa9\xd5\x95Y\xcc\x06\x05w\x82`\xc6m\x10\xcd\xd9W\xbc\xe3;\x85\x1ag\xd9#\xd9$\xb6\xb2\n7\xba\x1b\xebs>t\x93\x08\x84\x81rQ\xa9\x8b\x15\xb9\xbf\xca:c7\xf1\xabFx\xf3\xa4\x01\xfc=\xb3/\x10a3\x19\xa1\x98\xea\x1d\x9aX\xcdwE\xfb\x88y\xb4\xc0/\xb6o\xe0z\x8d\x15\x92U\xcb\x9f\xed\x0fM\xbf\xf2\x04\x16\xba\xf5\xee\x80d\xc9/1\x95\xe7)\x02K>\xdc\xc23\x0b{U\xf4Bf\xd9\xfeE\xd5[=\xb6\xb3\xee\x1al\xc5l;\xb1\xc2\xd8\xf9+\xe4\x91kU[rn\xe4{A\x01\x84V\x08\xec\xb2u?0\xa5\x84E\xb3\xe4\x17`\x05j\x00&\xd95\x9c\xec4\xeaLi_\xc3 \x8e\x05\x90\x19\x12v%\xaa\xe4\xcf-+xm\xbe\x18\x99\xc5\t\x9a\xf7!\x03\xef.?\x05\xdf\x8c\x8b\xdc\x92kD\x8d\x03SPF\xe0V[\xf7\x0f\x8f"\x82\x1a\x14\x03\xa7\x7f5\xe1\xf9\xa8w\x1f\x9b\x19\x91M\x9e\xa0\x8a\xf1\x98\x01\xb4\x9d\xec\xc1.\x94O\xb0D \xdb\xb5\x84\x95\xfc\x9d\xda\x9c\x93s/\xd7-Y\x89\x94?\xd5\x03\xad\rh9hD\xdf\x86\x98\xd1\xee?\xc3\xed{o\x80\x9e\xa2\x97\xb7r\x91Z`\xb9\xa2\xbeS\xec\x88\xee\xfe\x12\xc6Q?\xf5f*M*\xcbl\xa7\xc6\xcd\xddZ\xba3\xc1\xbb\\n_\xfb\'\x82B\xd1\xc9qi\x9b\xd2\xd8O\x86hw\x10W\xed\xaa\xa6\xf9\xdd\x98W0\xa0\xa7\xcbf\xe4\x98j\xff\x01\x1c\x96\xf0f\xfb\xa7:g\x95U\x85\xfd$ 1\xfe\n\xf9Xx\x8f\xbc\x17\x83\x84M\xb8\xa9\xc6\x05(\x1c\xe4\xcd\xa9[M;qf\x1f\x8d\xf8\x93p\xb4L\x11\xd0O\xe2\xa4G:\n*x\x8f\xc2P\x9d\\\x8e\xc9&D\xe7?\x98\xd59\x05\xc4Z\xecP\x9e`,(V\xdf\x1b9(\xe509\x1a\x9df\xb02\xe8\xc2\xb2\x961\x8e\xd8\xb7\x93\xb7\xa5\r\xbb\xf6\xd3\xa2\x9c_\xc0Rm\xaeb@X 
\x05\xb2\x16\xf2\xc2^\xf5\x87M\n\x8a\x1f[\x16\x1a)\x07\x8b\xa9\xc7\x80\x10\x1c\x8e\x05\xd03#i\xf6\xe8\xea\xee\xb4v\xc9\xa8\xfa\x18\xab\xfe\x0e[x\x91zd\xce\xaa`\xaf1\xabX$G\xd3v\xe2\xc0Qk\xe1\x88\xc1\xa40g\xc2\xfe\x14s\xc5\x1d\xa1\x0f\t\x8f\xf8\xc8\x12svC\xab\x87s\xd0`l\xe3\xdf\xa5\xa6\x87\x8eU\xd8\xa4\x9d\x9d)\xa9\x05H\xa9\x1c\x87:y)g\x00\x08\xb1\xb3\xc6\x16\xe6\xd3J\xf6\xeb\x85\x0fevV&\x89p\xdf\x9ek]\xbfSv!\xee\x14\xe1\x0bgiZ\x9d\xcf\xcby\x14\xca\xd6[d~1g"Dz\xbd*\x12\x8e\x16\xec\x88\x19\xc3]\xa9\x11\xf5\xc8\x04?\xe1\xf7\xd1\xa74\x0e\xcb\xa9"~\x8f]$\x1b\x80\r\xa9\x05\x98~\x84\xe4\xec\xe0\xe6\x81=3G\xe7O\x8a\xde\xcd\xb3ov\x95\xfa\xe45`B\xbc\x1a,\x9e\x9bE3(6O8[\x8f\x01\xa8N\xe2\x08\x84\tQTBm\xcf+!e\x9a\x19+\x1d\xa1\xa4\xb8\xdb\xd9\xb2Z\xa7j\x7f\x89\xbdG\xa684!\xa2\xee]~\xf7]y\xb4[i+I\xe3\x1e\xdf\xb7\xe7g\xcb\xf2\xf6\xcf\xcc>\x9af\xe4\x9d\x00%\xbc\x12\xf6@F\xc5\xd2\xeck;e\x0f6={\xfc\x98\x90d3\x192\xb8,!\xe0\xbc\xea\xfb\x88/\xb5\xd6)\xc6=r7X\x06/\xaey\xb2wj>\x80\xc9j\xd0\x97\x0bf^\x16\xd9\xf0\xefGf_\x81\xa2\x8eUu\xf9\xea\xcc\xf0\xf5;\xfa\xf3\xf8\xf0HPC\x98^VO\xb4.\xde`r\xa3\xc1:\xf5\x16sa\x8c\xa2t\xc1_\xed\xbd\xaa\xfc\xac\xbc#\xee,\t\xe9\xb9]]j\xe2\n\x03\'\xf010\x07\xd7\x00(w\x11p\x13\x0e\xe5#\xa1p`\xe8\xb5.\x07 
\xeea\xc9N@P/\xa6\xc2\xe6}m\xa88E>\xf0\x10:0\xa3\xe3\xb8\xc5\xa6\x97\xa0\xea\xc3%\'\x0b}x#~;a\xee\x864l\xb7\xb8l)&\x0b\x9cf\x9c\x9e\x1d\xfc\xa1\xc1<y\x12\xf1\xe7\x0b!\xa6\xe2\xb3hTt\xd7\xd2c."i\xad\x8a\xe4\x9e\x13D\x0c\xdf~Y7\x08:\x80\xe8ue\xf1G\xd5\xe8)~\xe1\x114[<8\x02\xcdX\xb3\xa8\x92\xa0}}o\x19_!S\x11^3\xb4\xef\x13\xa7\xf13\xed\xbc\x9f_\xa0\x17\xbcs\x0c\xf1\xa5\xe6\x9b\x12\xc6x\xad\x94\x82\x0f)\xcct\x13\xe44\xb8\x8d}\xbbf}\xd7\x84P\x0fA\xe7/\xb7\x85IQ\x86>*\x0fn\x98\xf7\xb2\xee\xecL\x8f\xf1C\x90\xbeW\xe9a3\x8b\xa21\xe9};O\xea<\nJ\x8a\xb0[\xa5\xf1k\x1f\x93\xeds9\xf0\xda\x8c\x9b\xa8\x17\x98\xf1$5B\xdc(\xd6\x03\x8b\xda\x13\xdcr\xc2q\xe24E*\xdd\xd8\xea\xb4\xaeWi/Q\x80\xcc[\'X\xeeA\xa68\xd2\rR\xef\xf2\x99\x0f\x9f?Z\x1c\x0e\xa8\xf5\xd9\xb3.\xe4\xa7\x8b\xf2\xe8\xfc\xf2\xac\x1a+\x0f\xf6K\xe9\xa9\xdc:\xc7\x81^ \xcbnvn\xb3\xe4\xda.\x87l\xa4\x90\x9c\xec\x0e\xfe\xed\xc8s\xf5\xac\x96X\xaf\x9fZZI\xb8\xb1z\xd6]G\xb3\x88;\xf18\x1e\x8f\x7f\x80\xae\xb7\x10)\xab\xc4\xd3\xeb\'\xff]\xcf\x97@\xb8o\xd4\xa8\xaez\xbaPx\x87\x8d\x0cX6\xc9\xe4\x17\x0b\xbd\xda\xfat\xf9\x97`\xa8\xa0C\xe5\xf9E\xc2\xe4By\xc0Z\x81G\xf1W7\xa00X#\xf6\xe7\xd7\x1f\x98{3\x81\x91\xd6\xbdt\xc8\xd59\xdeaS\xe0[\xa9\x81\x1e\xf10\xa2\xc1\x86\x92\x9f\\f\x11\x9bo\xf7\xaa\x16H\xed\x14\xf6\xc7\xfa4\x13\x9b\x9e\xde\xc7\x95\xda\xe3\r\xea\x03\xcd\x86`\xc47\x95\x0b\x15;Vj\xb0\xb5\xaaU9\x82M\xbf\x15)-\x80\'M!k\xcb\xd6!\xaeX\xb6U\xee\xa2\xc32\xd8@\x06d|\x04\xf3\'TC%\xb3\xcc\xe53+f\xf4\x04\xd5\xf3\xf7\x1f\xcb\xe1\xfa\xe1\x18\x924m\x07\xba#\x87\xe0S9%\x11\x11\xbaV\xdf\xc6\x89\x86\xd0#\x16Uw>\xf1\xa9\xd4\x9e\xef\x17\xa5\xf6o\x9c[\xedYt\xa3\xc4\xf10\x0c\x11C\xbe\n\xba\xa5\xb6!V\x1d\x836\x02H\x19o\x1dZ\xb9C\xc281S`GD\x9e"+\xd0w\xb3\xe7\xf8\xe8\'\x03\x15\xfe\x8dL6\xd6u\x8c\xa6f\x94\x0b\x16f\x9fl\x97\xaa\x922\x14\x8c5\xd2\x1eut\xde\xcf\xc1a\x8f"d$\xfb\x9b\x0c\xaeGv;\xe2d\xf9\xc1uIG]\x9e\xb4\xdf\xef\xde_t\x98\x13\xdcI-z\xa5\x11~Erhdp\xcc\xed\xbc\x1dn3\x905G\x9c\xac\xa4;D\xf1\x17\xf6Zu\x1eh(tXgv\x86L\x9b\xf1\x07\xf9Z\xa9M\xb0+\xe6\xach!D\x9
5\x10\x8d\xda=\x10\xef\xa1mP7$\xa8\xffS\xf1(\xc2L\xbfC\xdb\xa5\x7f\xb1\x9a-O\x8b=\x1a\xd5\x91\xaeI\xb6{\xae\x1a)}\x00\xc2Y\xb2\x84\xdf\x07G\x1bk\xf5\xc2\xe0\xcd\x8d\xd26\xae\xb1\xf5\xb0&\xe3\xacF?]\x9e)\xf3m&-\x1f$"\xc7p\xf8\xf3X\xc5\x05;\xdd\xe5\xee\xec\xbc=0\xebD\x98\r\xe1W#t\x18\xdb\xe4\x02#\xb3\xfe\xfe\xbf[X*\x00\x87\xcdT\x1f\xb7\xcb\xfd\xe4r\x11\xf7\xe2B\x9aA\x1c\x80\xa3\xea3\xe1\xce\xc1\xf9\xde\xdd\xdd\xa7\xf8r\xa0\xd2\x80w\xeb\xb1\x9a\xb5\x82\xe2k\x0cm7\x96\xd6\xedM\xe1i\xcd\xc9-\xc9\xab\x98\xd0/,uSpq\xbea\x91\xf7\xca=\x8d\n\xc3`\xb1\x84\x11T\x8e\xc9\n\xa9\xef\xa0\xd7\xc0a\x02\xfcT\xa2\xe6\x18CmS|\x99\xb9g.\x8dWR%BS\xfc\x94\x97$)GU\xe3\xd5{\x1dXB\xcd\xca8\xb1o\xdcAj\xd7\xf9d<\x7f\xf3\x0e\xa7\x81\x88@\x14\xa1z\x8e\x9e\n\x0e\xb4\xa7\xe2\xc5\x82\x0em\x94g\x04?\'\x1a\xb6\xd5#\x9b\xc5\x7f\xa0;\xfb\xe6\xe1\x8b\xcds3\xde:Y\x04\x18\xd3\xbe\x19\xb7PGj%"\x10\x04\xed\xa2dz\xd3\x9c\x1aG\xc1\xda\xcb\x1b\xeb\xc8 \x13\\\x9ff\x9a\xa4\xf4\x7f\x9f\x95l#5\x04\xcc\x19\xaa\xcd\xe7\xf7\xec\xa9\xbb\xaa\x97q\xb0\x1d\xe4XR\xafm\xdcW\xa7pL5\xb6\x81\x1e;\xc2\xd3\xc0z\x18W\x9f\xeb\xe9?\x95&\xd5o~\xab\xdd\xf3/\x13\xe5p=:\xb1A\xe2\xdbc\x8f_\x1c\xc3\xd1\xac\xd8\xd3\xa9\x84\x85rf\x97N\x13\xa5\x0c4/\xc9\x01\xee\x1e\xfc:\xf95S\x0e\x15\xae9\xb0\x06\xef\x9c\xc5\xe7H\xdc\xaa\xe6\xde\xe8\xcf\xfe\x18\xd6\xf0YG\xa6-\xd7"\xdf\x83W\x1d:ND!\x84+\x06\xf1\x15\xaf\xa7\xb1TE_}\xd1;,\x99\x8eJ\xed\x9bf\x02\xbdl\xd7\xcf\xb7\xd3k\xeb\xa5\x88\x82\xb8E\x07\xf7\xff\xe3\x9a\xa5\xbc79K\x98\xfc\xe0\x052\x0f\xac\xdc\xe3\x1c\x95\xc8t\xfd\xc4R+\x88\x8f\x83\x0c@\x82\xe2\xc1\xb1\xa2Ljw\x8bw&\xedU\xc3\xf1\x90\xaa\x97\xa8*\xb8\xf2Y\xb8TW|yb`\x1b\xcb\xa7\x1e\xd2\xf1\xf6\xf9\xf8\xe7L\xed\xbc\x13\x93\xa0K\x80\xb80r\xa6\x16yBu\xf9~\x06\x0eg\xd0\x1d,\xff\x1c\xb9\xf1\xe9\xe3/\xc0\x17\xf5\xd6\x9c\xbdM\x05\xcd\x15f\x19Qz\xbdn\xec\xce\xdeZ9\\}i\x8a\x86\xca\xd1!\xfa\xe8\xc3\x8f\xc5\x13\xc8\xfe\xa4\x1f\xf0\x8d?\x97Caa:\xb0\xc4\x0e\xbd\xe6\xc2\x00\xae\xb4\x84+\xe9KJ\xa7\x9c\x87\xe2\x9b>\xcc\xff\xb6\x14\xa2jO\xbf\xceY\x16\xcbf\xafP\xb
4\xab\xf0{\xb4\xf9\x04Os\x07-Db\xf7\xd8\x00W\xb9\xa7\xdbL\x1c\xdcPS\xc7\x98\xeaV\xbd\xe1\xaf\xf2\xbb\xb9\xf2\xc7xKo]-<\xc7Vw\x9aW\xc4R\x13\xf4+\xc2\xc4\xb0\x94\xc8i\xbf.\x17@\x83\x1c\x17\xc4t\xb0\x00:oQ\xc4\x9a1\xf2\x1d\xf87\x80\x95\xaa\xa7 \xc8>\x08\xb5C]\x87K\xba\xe3\xf5\xd6.Z\xa9\xcd[\xb8\xac;\xda\xdc\x1e\r\xd0\xa6\x03\x9f\xba\xca\xf7\x7f\xbf\x9e\xee\xbe\n4fq\xc1\xcc\x15\xd2\xe4\xcc%\xe2\xef0^.3wXX\xc4\xfd\x00\x02\x18\x90\xfd\x13\x80\xc1\x8aZ\xef\xcf\x17.-\xcf\xaf\xf0\xf9\xc8\xf9\xcd_\xbc_\xb9k\x1f\x9e)\x9b\xfa\xd7d\xe4\xc0{\xf5\xdb\xd5\x1cv\xfbDM\xb4\x85\x9c\xa3S\x15\xccN\xd0@\xb1\x12\xde\xa1Y\xbf\x87\xae\x13\xa9r\xd9[\x18fT\x8f\x8f\xd9\xbdq\xe0\xfa\xd3x\xd4\x90\xed\xc7\xda\x1cv\x18rl\x83\x17\xd0&h\x17\x946\x85?c\xf6\x10m\x10\xec\x950\x0b/\x98D\xf3\x138b\xaag\x0e\x05\x01+\xf0\x02s\xf2\x03^\x83\x16\x17\x89\x81T\x8b\xe9\xc6\x86]n\x00\x07\xd7\t\x10\\\xd6\xc3h\x94\x7ft\x06\xc7<Va\x897i\xf7\x7f\xfc>V\xab\x84\nZ\xad2\xc8\x81!\x92\xcc\xcc\xea\xcb_\x16\xc1MW\xe8\xab\xac\x14\xb5\x16\xe1\x89\x88+\x97\xde\n8\x82\x08\x0e\x93Q\x9c"V\x1c6\x10\xf9\x0e\xd9{g\xd7\xa2\x8e\xbfO\xd8c\xf4\xf1G,\ng,\x18\xdc\xb2\xcc\x9b\xf3w\x1c\xc5B\x83\x8a\xe0 V;\xd1}\x0cX\xcc\xc6onv\xcfgJ\x8d\x12p\xba\xa8\xb0G\xc2~i\xae\x14\xbeF"\x00oXW_E{Su\xe4\x93\xa6\xfbn\xd7\xa8\xab\x97\xdf\x97\xc3\x01c.\xf4P\xb2N\xc6\xa2\xb2 O2\x8b2\x97\xab\xe4\x18\xdf\x1f\xa5\x03\x1a\x12M\xa7C\xc22>\xae\xde\xcd,\xbc3\xe3\xb1\x00d\x19\x0f\xa3\x12[\x908\xc0\x08\xd0}\x17}\xf9!\xa7\x9c\xde\xb0\xac.\x06\xb2\xd1\x17/L\x89\xe0k\xc3\xf4\x14t\x86\xfc\xbc=\x1d2\x06 
\xb8\xd6O:w8}*\x18\xd6\xbaGm7a\xff\xd1`\xbbNg%\x84\x91(\xd1z\xa7\x08\x93SG3d_\xca!\xa9!\x8a\x0f]9T\x9d\xefg(\r\xa5-\xb5j\xda\xe2O\xacF\x0b+ml\xff\xb3\x9d6\x97\xf8\x9e\x93\xc4\xd9\xec$\xc8$\xc9\xff)\xc0ry\xb2*Q\x96\xdb)a\x89\x11T\xd9\xd9<\xdc\xc1;\xdc\xa7\x9e\x9b\x8b\xaf\x01o\xcfn"\xb9uKv\xc6\xec\x01\xe3\xee\x17\x98\x9f\xe6\xe6g\xf0i{"?g}\xecEJ+a\x83\xb0\xa8\xca(\xea\xa5\x99+\xe9_\x9d\xc2\xa10]<{\xb2\xaa\xb8\xdfJ)=\x01/PL\x98\x1b\x0f\xb2W\xf6\xe5k?e\xed\x18\x83\xdb\xb2\xf7\nL\xd9\xcb[l\xaa\x8c\xe1\xdb}\x16\xfa\tG\xe7\xff5J0j\x05U\x80C-(D\xf5\xa1\xe5\x1051r^\x97\x13\x15\xc7\xd0\xba!XUi\x0b\x10\x9b\xa6\xc8\x16\xc0\x80\xc0Wk\xe1L\xd8B*>+\x82V\xba\x9fP\x06\xe8\x08\xc1\x13\xfdo\xff\xdeW&CY\x14\xbf7\x13\x99S\xb4\xc2N\t!\x06\xe7"n\xdb\xae\x06"\xf8\x07\x95\xa6l\xaa"\xbdm\xea\xb5Rs\x9c=\xf4\x8d\x1f#\x10B\x94E\xa8\xba\x1f\xd4\x80\x06\xe9\x85\xabk\xe4\x80\xc5\x9e#s\t\xd1\xbd\xc1\xf4Ov\xbav\xf2\xd3\x8aRn\xb7n\xef\xa3\x11\x18}2?\xceV9\xd2\x9e\xee\x19U\x16>\xdb\xbb\xda.v\xbb\x17\x8c\x07\xee\xbf_\xd0\x10J\xe7\xdb\xafx\x0f\xcd\x94\xe0\xb4\xb9*\xd3\x9f]\x84\xe7Jkz\xf6\x9e\'H\xce*\xe9\xe9\xf6\xedRS\x0c\xca\x91\x08\x11\xea\x0e\xd1,\x13\xbb\x1e\x0c\x8e\x94\x88u\x7f\xff\x06\xb3\x88\xa0R\x892\xbeE\x00\xc7\xde\xa1Q\x9c\x8b\x9aDy\xf0\xc6<2\x83\x9b\xbcs\xa8\xe3/H\x8f\x8a\x87\xa5\xf8P\x00\xd3\x12\xd9\x8e\x91\x94>+>\xe0\xff@&\xa3\x8aE\x8a\x0c\xd4\xb3\xc6"\x91\xb6\xff\xb2\xad\x9d\x0b\x8b)ny%b}j_\x80\x0b\xe7(l\x8aO\x90b/S[\x1f\x81\xf4\x1cI\x8c\xbbJ\xd0Gi\xbf=\xec\x05_`\xaa\xdd\x0c\x8e\x0b"U\xa42&\xcb0qX\x8d\x9f+\xb5\xc5\xd7 
\x84&,\x99\x87\xee\xa3pSM\xbd\xf82o\xdf\x1e&\x11r\x94rM,\x99xfe\xc8\x88\xb2\xe5G\xdc.\xd2W\xbb\xb9`154\x1c\xa8L\x04!\xaa92\xc6;\x84\x1b\xaeF\n\x16\x08\x88\xfb\xd2\x80\xdf\x8d\xee\xfe-\x87\xea\x83t\xee\xc4\x83\x11\xd2\xfd3\xc1F\xd1\xcd\xe9\xdbh\xfe\xbb\xc1\x016\x08B\x8e\x82y\xa9\xb1\xee\x16\xedJ1\xed\t\x8f\xd6\xcc\x0724d\xe5p9[\\\x1c\xee\xf1\x91\xc3\xa5\x80\xbb\xcf\xdc\xb0\xb6\x03!*\x90p\xb1\xd3v9\xd4\x90\xb2^<u\xf7\x13{AV$*\x0eX?\xb4\xf6\xd3\x04\xa1\xe0`\xab\x91s\xee\\/\x94e\x90u\x04\x81D\xe7$=\xca\xd1\x05u\xd4X\x0e6\xa5v\xe7\xfe\x80\xfa\x8e\xf3\x8fO\xff\xfb_\x84E\x97\xc1\xe6}\xec"\x85\xee7\xfc=(N\xdd(H\xcb\x1di\xc0\xa0\x7f\xbbxjG`\'+\xc0\xa02\xe0\xea\x1d\x8e~ \x19\xb9\xbf2\xa7\x86v\x1c\xed\x82\xa0\x97\xfc\x99\xfbb\xefG\x1c\xd9\xb0a\xb8#\xf4/\x18\xbd\x07\xe7\xf8\xfc\xf1\xd1\xe9S\x94\xe4W_\xad\\\xfcn\x19-\x01\xe2\x9c\xd0\x15\xbe\x9e\x8b\x8eUyd\x8f\xcd\xc88\x07\xeczs_\x84!\x009!\x0b\xd4\xff~f\x86G<\xb4\x92\'D\x8dB\xa9\x01\x87\t"G.\xd7\x819\\\x87Ej\x1d\x15\xc7\x04\x02\xc6\xdd\xed\x1f\x1b\xa5v\xcb}(\x87Crt_P\xfa\x8d\xe3\xe9Q\x8e$\xbe\xe8\xfe\xee\xff\xe83L\xc0\x88\xc9\x1c\xb2\x97\xe4\xc1\xe1\x8f\x97\x8a\xdexa\xfa\xbc\xda> \xcb\x8d\xeeu\xb9\xebV\xf1\x8b\x03{\xaaA\x018\xa8\xbb\xd4\xa2\xe5\x1aC\xc0\x8b3d~\xfc\xad:/\xca\x1e#\xab\xb8\x0f\xb6\x00\xcf4\x0b\x0f\xc4\xb1$G\x1d,\x05\xaa@\xa7v\x94Z\x13\xb3\xdboM\x17\x03\x80\xec#\x805d\xbd\xd5(\xac\x82\xd1\x12-\x16$b6\x91\x88\x18\xd7=&\x95\xea\xfa\xe3\xd2\xc7 
y\xb1V`W\xa388\xf5\x8f@/\n+/\xdf\xdby\xf3\xb6#\x85\x08s\xdfp\x05\x82r\xdb\xce\x0c(\x8aG\r\xd2\xf0"\x12\xd4\xe9\xb1\xfd\x12-\xa7\xfe\x87\xcc\x8f\x85\xa0\x08\xa5\xae\xd5\xecj7\xa0tS\x93\xdcV\xa6D\xbf\xf3\x08\x9f\xffG\xa95ZM;]\xdeY\xb9i\xb1\xb2\xc5q4\xf5\x93I\x03\xea\n\xaa+\x0f\xec*\xf8\xe8\x17\x9b,\xe0(\xa4\x8b\xe5\xb5\xf4"}\x03\xb3L\xaeP\x8aeY\x1d\xd2\xff\xba{~\xff\xd0Tja6\x17W\xf9\xfe\xc1:\x1e\x81\xba\xac\xcc\xce;\xa5\x1f\x08\xf7\x15\xfa{Ls[x\xef\x16\x8b\x07\x89;\xd7\xa0\x8d\xc5\x97r\xce\x91\x92U7:y\xf0\xdb\x0e\x9d\x07\xc2k\x84U\x9c\xf5%l\xe5\xe9:\xcc6\xf9\xd5\x9d\xcd!t\xe6\x1f\x13s\xec\\\x14\x85%\xbc\xf4\x8bTwA\xad\xd5\xbd\xb9\xabLPEBUL\x7fvJ\xab\xe8\xcff\xca!\xfd\x93C\xf5\xdc\xa03\xb0P\xd7Oo\xb2M\xe9\xcb\xd3\x9e\x9c\xe9\x9a?\xb0\x14\xd0\x11Q\x90\xf4\x1bK\x99.\xc5\xab\xf8"Gpv\xe45\xd3/\xc2\xdc9\xb1\xe4TL2\x9a\x15\x0f\xec}=Z\xef\xb6\xf7\xe9U9\x1c\x98l\r\xc4:P*\x86\x89\xbd\xde\xc3U\x86\xe4\x1d\x84\xc1]r\x94\xf6\xed\xbd\t\x01:\xfb\xaeU\xd3\'\xa1\xfb\xf6\x05\xd9\x86\xa6\xb3\xea\xbd\x08>y\x8e\x0c\xaaQ\xbb\xcd,\xf1\x99\xf4\xc0\t\xd5`p\x8b\xae\xb8\xfe\xe6\xfc?\xa05\xd5\n\xa1x\x8aA\x9f2\xcf\x84\xca\x14\x0b\xc2\x017\x9e\xc5\xf6\xdc9f\x84\t\xd7F\x0c\x84\n\xea\xb0U\x9fCW\xeaKA\x03\x8d\xf8\xbb\xea\x91\x86\xe7>xq`\xd9\x08U\x9e\xd8<[\xb0\x1b\x1dcAw\xd1\xbd\xb0/EA\xa0\xf0ZO\xca\x9fY\xbec\xea\xe1J\xe0\x83<\xa0\xf8\x12\x95\x80\xa3\xb6\x81\xc5_\xa5\xacp0\xcf-\xc8\xc5\xf9\x9fC\x842\xa2\x92\x9fo\x99\x893\xa0%\xf6K8\x92Zmn\xb2\xc8\x13;lb\x8fxq\xcf\xbe#\'\xf6\xf85P\xcb\x93\x95*\n\x12a\x12\x15U^\xbe\xaf\xe6\xe8\x1b\x19Y\x9aM,\xbaO\xff\xb6\xba\x1e\x01)\xec\xc48\xc0\xa4\xb4\xc9\xa5\x8bX\xe7B\x87\x94\x99\xa3\xae\xc2\xfe(\x01\xb2!\xa3\xad\xb0\xd3\xe30\x0b}\xe4\xe6\xdc\xb2\xbd;\xeb\xb7\x96\xd7t\xd7N\xbe\xe2V\xf0\xb8\x1c\xf2\xccX\x1c\x17\xc7^\xc9y\x85)\x88\xea\xeay\xdb\xfc[\xf4\xf9\xb7zb\x10B\x0e"\xe4\xb1Q\x9f\xeb\xe9\xc7(\x84\nP\x02#\x10S]\x0f\xaeGg\xe7H\x03\xba%nT\xbemv\xd2Q\xba@\x107\x87\xadrEP\x97\xb6\x87\'\x94\xd0\x83\x80\xd5\x95\xdb>R\x12\x8a$\xb0\x06\xb9\x8c\xb2\rI\xf6\x8d\x04\xb6NEvg\x
bf\xcd\x03\x11;\x80\xbb\x03\xbb\x13\x9c\xf0N\xb0\xdc\x83\xaa\xa8\xbe\xec\x05\xe6K\xe5*`L\xbb\xec>\x1a\xb4\x9dh\x8b\xe6\r\xe3\x02\xc5h\xa5T\x88`\x08J\x84\xa8aGlGvQ\xfc\xfb\x84.$\x16l\xd5}\xbd\x97<\xc7j5\xf2\xeb\xf2\xc7#x(wo\xdb\x17\xe1Z\x15\xf1\x98a\xb3\x11\xd0\x7f\xf3\x12\xf7\x05\x9a\xa5\x8a\x15\xb5\x1d\xe3U\xa5\xf1\xc0\xe2\xd3\x1dL D\xbd|\x05\xc4h"hM\x92\x17\x9a\xa9,\x0c\x13k\x16U\xda\xf0\xc0\t~#\xe4\x98b\xa0\xd6\xb5G\xcfw\xfa\xc4v\xce<\xb1\x87\xdeu\xc0zT\x16\xe9\xe0#\xd6\x10\xa0a\x8a\xe8\xec\xe4b\xc5X\xdf\xfd\xd9\x120\xef\x00\xf7\x00\x1e\xe1\xa8{e\xd7\xa8g\x14b3\xa4=\x92\x0e\xab\x9b8\x16X\x97\x1a\x81\x06y\xbcS2\xa6ytj\xcf\x1f\x08\xe8\x83\xeab\x1c\xbdc\xbf\x84\xc5\xef;\xcb\xfb\x0c:\xc0S\xbfan;\xe8e;\x14\xaa\xbe\xbf\x9b\x85\xa5\x101\xe7\xf3S\x99hk\x11\x9fSf\xe8/x\xa6\xecS\xb01\xf1\x03kn\xfeH\xcd\x98\xb8\xdf\x1e\x95\x06K\xc6@\xd99\x1cd|\xfa\x0b\x89\x126\xc8Q\x04?\x81R\x8d\xaf\xfe\xe1\xb9\xe2\x02\xa3\x15\x12H9\x03\xdaL\xa7O\x95\xa7\xf4\xa7`\x94\x9a\x12-\xdc\xf4\xaaF\x19\xb9\x0e6g\xcf\xbb&\xdaJ\xa4d7\xa6V\xbaSq\n\xf2G(\xee\xf5!\xc2\\\xbe\xb9\x04p=\xbb~\xab`TK_.\x1al\xff\x9a|\xf7\x1d\x07UL\x1c\xaazp\xe6B:\xa2d\x88\x83XT\xd2s2\x9a\xa6\xaf\x03\xd1.%Z\x17.\'\xc1\xc1\x8c0\xebK\xd7\x97\xaeO\x80\xf2\xce\xa6Yj\xc8\xaf\xab6\xcd\xff\xf8\xfa\xd3\x0b}W\xd2\x93\xe8\\\xbe\x7f\xea\xb16\x99?\xbc@\xe8N\xad.h\xeb\xfeSk5)s\x98Lun1o\xbfR\x18\xf6\x83\xf1n[?\xf9G\x95m\xf2\xe6\x87r\x9fNe\x10\xdd\xf7\xdd\xdf\xaa\x96\xbb|\x03\xcb\r\t\xb6{\xbf\xcaYB\xeb\x93\x1fm\xa1\r\xcf\xcc\xb8RW\xb7\xd7`\x8d\x84\x93P\xf3\x00<\xe20\x98\x9c\xc5\x87\xd8*\x87\x1b\x81\t_\xb5K\xf2=\xb0\xce\xfa\xa3\xdd\xdd~\xedSP\xb1\xce\xfd\x8b\x9d\xfd\xd3b\xc6\xa0\x85\xacF\x0cL\xaa\x86T\x1fi_\x18\xf1cc\x9e\xfd\\\x00\xb9\x10Q\xc7\x9bT\xf2\xebMu\x13\xff\xf1vr0)h\xab\t7n\x08I 
n\xdb\x85\x1d;\x1eG\xa1\xff\xa6\xec\x07\xf6p\t\xc2?H\xcfqcH\xb0scU!\xaa\x1d\xeem\x9bM\xac\xdbG\xe1\xd6\x82N\x15\x9b7\xef\xd6?X\x92\xa7\x92\xea\xdc<f\xbfV\xd7\x08\xd0\xcc\xec\xf7\x9c\x9d{7\xd8\xf1\xe6\x14HKi\xde\xc0\x8c\xc1A`\x92\x97\x0b\xec5\x08c\xc9\xbeh\xf0\xcfV\xd6\xed-:hE\xb5\xd9\xd1K\xf3H]\xf5\x12\x1dl,\xe0\xecD\x92\x8c\xd5\xcb\xe9I0\x9c\xbc\xce\x95[\xb9W\xf7,7n4\xfd\xe1\x91\x0bqm\t>i\xfc\xab\xbf>+8\x04\xb4\x1bI\xebj\xdfrD5\xb6\x9c\xa0)\xd6A#\xeb\xd80\xa3G\x85\x13fA\x84&\x97\x1f\xfcO\x12\xc2\xa7A\xb7\xbd\x8b\x8e\x04\xd6sxw^j*:\xa8\xf7\xc1I\x00e\x18l6D\xcb*`\xf1e\xda\x91\x87\x8f,\x08\x99\xa4\xb5\x12\'P\xba{\x81\xa3\x80}m\x8c\xd0\x04\x01\xd2g\x9bS\xd3\x17BM\x08\xaa\xff\xbd\x10H\x9a\xf5\xa6\xaap\xea\xec\xdd\xad\xbb~*\xa7\xa5\x0c\x89"8]9\xdcPY\x91*\x91\xde\xc5\xb7O&\xb0G\x87\xd36&[1R\x0e{\x85W\x9fjwB8\xa3\x92_\x07Y*\x1fO\x8d\x95\xd6\xef\xe5\xf7\xda\x87\xfd\xbe\xd7\xdd\x88-S&\xf0\xc5\xbb\xbe\xaf\x10\xca\xc6\xcb\xd3u\xe5\xb3\x8d\xe0\xc3\xf8H\xd8B\xfb\x00<\xf1t\x12\xdc\xe4&\x7f\xad:=\xb5GE\xa4\xbe\xedk\x03\x96\xea{v\x8d<\x04\xe2{L\x9d\x83\xff\xb9ERFt\x07\xedG\x87\xd5*\xec\xe5t\xbf\xf0\xbc\xdf\xbe\xb5\x08TuC\xec\xd7\x9ev\x8c\x91=\xfa,\xb2\x8b\xde\xdc\xc2\xe6\xf3\xec\xe77\xfckP\x0c\xda~3\x11\xac\x8bj7\xaaC;e\x9d\xb6\xcb\xa7-%\xe0\xf2\xdc\xb2\r\x1f\xa0%\xd0L\xae\xd8s\xa7\x8d\xc0\x93pL\'\xfe\x9f\xaa\x0eM\xf7P\xff\xb8P\x01\xccR\x0b\xff\xd1\xbe|$\xcc)\xa9\x81\x1a\xb7\xab\x94\xec8\xd5]\xe0\xd4V\x0f\x89\x0c\xec\tp\'\x88\xe7 
Dg\x0f.\xd4\xeb3\xb8*\x13v\x15\xdb\xf8F\xd0HM\xd5\x8a\'F<d\x80\x8f\x19\xf5\xbc\xc7\x9c\x8ef\xcd\xc0\xb7Z\x98\x14\x97P\xa6e\xcf\xe3\xbe\x03\xafK0\x10\xa9~u\xeb\xd1\xe2\xc8\xf3Of\x0c\x11\x0e\xbc\x8d\xe5\xe7\x85\x9au2[\x1b\xa6!BxuE\xee$\xa8\xc0\xb8\x06\xbd\xd0cFK\x9dB\xb1\xfe\xd6\x17{\x80\x1c\xee#7\xe5;~f\x12dd,\xdb\xf0\xd1\xbd\x8d\xe9\xf9m\xe115\xd4\xc4H\x0c\x8f\xcd,\xael\xb1\x1cUU\x11\x16\x00M\xd5PD`\xf1\x8d\x86)\x05;H\x08\xdb\x82\x17\xc7\xf6\x8d\x08\xe8p=X3\xbf]SZ9!=\xf6#\xae>P\xa7\xab*i\nBL\xf5\xe1\x99\xaa\xf0:\xf9\xc6L\\]+F\xd8\x8a\xed\xd4\x03\x9d\x87@k\xd6\x1b(]\xaf\xfc\xea\x9dEE\xea\xec\x1cp\x9d\xed\x05\xfdgHM\x87A$\x8adA|\xd0\x8b\x1d\x1cb\x8e\xf6m\xa0\x0b\x14\xc6\xc7af\xb0\xb5(\xafz\xb6\xbd\x92\xcd\x9e\xa3[\x11\xc9\xac\xed\xc2PY\xaaYx\xf5\'\xbfv\x12$\xa7\x01\xd7c>\x02 \xca\x89\xf0G\xc6D\x0b\xa9U\xf6\xce\x91V\xa5\xb4\xff\x06\xc5\xb4\xa3w!\x9b\x01\xfdA\xae\xc9j\xb0s\x82a\r\xd9>\x9anNp\x8bc\xe7\xb3\x85\xb8\x99)\xbf\xec*}n\xecSq2\xbc\x0f\xa3j\xbf\xd6\xf1O\xdd:\xf6m\xce`\xd0\x97\xcb\xe6\x93\xd8\xaa\xf7\xd1\x7f\xf6\xf99\xa1@\xb4\x11Ch\xcc\xb92\x88\x93\x85\x14\xf5\x9c\xebP*\xc4\x7f\xfd{\xab\xf4*\x91,b\xaa\x1a\xa6\xf6\x82Q\xbc0\xa5\x87\x93L\xbd7\xcb\xedr\x01\xcbW\xf1\xe7\xceu\x96\xda\xf5E\xa7N\x8f\x02\xc3\xe4\xe6\x85\xe7\xf6M\x08T\xc9\x17pcW5\x85\x1eE\xd7}\x7fe\x1e\xb6@-\x11c\xd4\xaa\xeb\xd6/\xcc;\x13\xa7\xc6\xc6\xcc\x9a\xfd\x1d\n\x8a<\xfaV\x0e96\xb5\xbf\xbd1a\xf90&\xa1q\x80\xd0f\xbc\xbaR\x07\x8d\x15\x95$\xb0\xa9\xd8e\xeb\xa9\xdf\xd8\xd3\x19\xaf\x18gZM\x88\x9dE\xf0\xa2\xb0\xa8\xc10U\xd9G`\x1f?\xe39p.\x11\xab\xc4\x99\xb3\xf4\x05qb%p\\L-\xe0\x89\xee\x9b\xbb\xa5V\x7ff\xe3J q\xd0\xdc\x80?\xf80\x8a\x1e"\x06[\xf8\x1c\xf9t\x11\x88\x8a\x87{\x16\x9a\x03\xaaA\x8b\xd1x\x1b\xd2\x10\xb39\x9c\xeb\x05G\x9ax_\x05t\x92\xd7\x1a\x8b\xfaI"\xab\x1bY\x10\xaa\xc7f\xa6\x80/W\x8cl\xad\xce\xa9B\x85\x114\xcauz\xfa/[\r\xb8\xe3\xc4\xc7\xd5,\x89\xf4\x97\xac[rU\x01\xe2\x0cs\x01A\xb5w\x16m\xf5[|q\xcaW\t\x04%\x19\xcfd1\x94\x9db+\xe0\x819\xcc\xe0\x9f\xfc\xa7|\x02f3 
\x9f\x12\x98^\xa5\x82\x96\x05|8,{M\xa5#\xf3\x98\xa4\x0c`7S\x9b\xd3\xcc\x10/qil\xd1\x85\xaa\x86\xdc\x8c\xb1\x15W\x81\xb3\xb84\x9e\xa5C\x08\xcb\xe3 8\x94\x1c\xdf\x049\x9dW)\x85\xe2\x1c\xe5+`\xb5U\xbf\xea\x87\x88\xe5\x0f\xde\xe1\x03\xf3\xb3&_\xf9\xc5R}\x1f\xc0"v\x02\xa1\xe2\x16\xd4@\x13i{\xb6\xda\x0f\xfe\xfbx\xeb\x81e\x8bD5\xbd\x1deCN\x8fO\xd5!\xb7e\x10x\xc03\n,x\xda\x89\x84\xcbcn8kt\x86\xf8\x14Q\xf2\x8eUU\xf9Q&\x96\x94\xd2*5\x8b\xa4\x17\xaboi\xff"\x99\xeb\xb9\xbc\t\xaf\r\xe96h\xe8\xba\x95\xf7\x8f\xf4\x99s\x85\xbc8;\xb4\x0fSghJSW3%\xc1\x04\x1d\x8bzv\x92}\xecy\xfc\x06F\xcf\xa8#\xd3~\xbeg\xe7M\xec\xbe\x1e\x835\xfeV\xf9\x94<5 \x8aE]\x879\xa9&\x01\x0e\xde\x80\xcc\x89\xb6p\x9b=\xe3g\xdc}2/8\x11\xb9&\x13\x96X\xb5+L\x9b\xf8Eae\x189\xf9I9\x9cDnE\x0c\x8a\x19\x8f\xd7\x05\x8bU\x02\xde\x94\\\xc5\x98\xce\xaa\x07o\xc0"\x19\xbc\xb2g\xe1\x08\x9e\xe4\x02\xf3\xd8/b}\x95\xca!G:[\x9a\x10N#\xdd[\xb4\x1f\x8e\xea\x83\xfb\xc40I%\xc2\xf6\xf0\xee\xc63\x16\xcbT0\xa0\x97\x97W\xac\xd1\xcdm\xd4Y\xe7(pb9\t-\xb53\'\xed\x99\x92dV?R\x9c\xc2\x14\x14>\xf5\x10E$\xdaJ,h\xe3\xea\x89\x1e\xe9I\xab\xa3[\xc9\xd8C\xfd\x01\x96\xdc<\xc1\xad\xc2>w.\xce$\xc3\xde\te\xa7\xec\\*\xe4\x8c\x18\xb5*\xd9\xa6zc\x0e\x8d\xa9h\xa4\xea-\xa7\x98\x00,\x84\xc0U\xa1O\x81S\x029\x99\x8c|u\x8b\xa8f\xe1\xfe\xfb^\x0eG0\x82\xa7\xbf\xbbmO\x84\xfb\xef"\x85#=\x99\xaa~\xdf\\\xff6i-\xeaPZG\x00YF\xc4\xfedwxW\x9f\xcd\x87\xd5\xfd\xff0P\x9a\xfdi\xf5\xf9C\xf5\xf6s\n@\xd1\xe9\xa6Q\x89?\xbe\xc0\x7f%\xf5\xf6\xbc\x8a?.\x9a\xa2\x0cA\xb4~ff\xcb\xc2\xb0\xb5\xf3\'\xbe\x01\xf3\xc9\x0b\xc9I\x87n\xfb\xd7\x93is"Q\x0fI\xe4\x16\xb2\xe0&\x90\xe8F\x03\xb43k0\xdc#\xec/\xf1\x18\x18b\xbb[\x8c\xee\x80=+m{\x0b\x9d\x07\x0c\xdb\xd3\xb4"z\x8e\x83\xb7\xf6q[t\xbf\x89\'1\xcdx4\xf5r\xff\xb7%\xb0T\xa8o\xf5\x0c8\x88\x86E\x9c\xf1\xe0\xbd}\xa8\x80=),\x08\xf2R\xcb\xe2\xa7F\xff\xda\xa14\x83\xb9u5\x8b\x18{[A\xa0\xf0\xe7\xf0\x03\x9d\xc2G\x9c\x05\xb3\xf9\x18\xac\t\xa6\x0e\xac$\x8b\xeb\xe34M\'\xb8!o\x82\xbbw\xbe\xdf=\xfdf\x86J\x01\xf8\x1c\xc8d\xb7b\x80%\x8d<\xa2Z\xd3\xcf\xdf\xc96\xbaf\
x05\xda\x17\xff\xd7\xd4\x95\xb6E\x8du\xc1\xbf\x02\xa2\x8e\xb8\x91\x9bt:\x89\xb8\x0c"\x82\n.\xa8\x08c\xd4\xbe\xb9I\x00EFx[\x10Y~\xfbK\xd5\xa9\xdb\xce7\x1e\x85\xee\xe4.g\xa9S\xa7N\xf3\x12:\x16\xb6\x13\x0fN\xbfZJ{\xf9\xf0\xe3\xaf\xe6\xaeY\xef%\x10\x04\xcaX8\xc02m\xdeE2F\xb2\x19z\x7f\x13\xf1\x15;htQZ\t\x94\x8bFD\xde\x16\xc4\xbe\x12z\x1a4t\x95\x92\xf5\x1eZ\x99\xce\xed\xa2$\n\xec\x80\x017\xec@,\xde\xb1\xf5%\xb13\xdb\xb1Q\xbf\xac6\x95C\x16v6\x12\xa7y\xce\x01\xc3\xb3*1\x89\xc8\xf1\x86\xf1#c\x12\xbb_B&\xdb\xdb\xee\x8a\xcb\xff\xbf\x99\xd5X\x12\x12\xae\xc9h\x08_P\\\x8b\xfb\xa2\xca\xaf5\xec\xbd\xaak\x94\xa2c]\xc4{H\xd1\xfa\xd5_\x11\x9f\xc2a,\x05#8\xbb\xb4!@\x8d\x8a\xd6\x81\x02\xccJ\x8b[6\x12a\xca\x0e\x91\x8f`\x9f\xe0\xda[?\xcdGq\xc2B\xbb\x07Q\x00\xafBs%\xd8\xad\xcd\xd5\xc2\x89o\x1a\xac\xc5\x91\x91M\x7fup\xe3\x02.\x9c\xe4\x0c\xf3\x1fH6\x1a\xcc\xd65\x11\xdbu\xf8 \\\x93\xe2\xba\x00El\x10\xc6\xe1\xf6\xd9\xde\xda\xf7{O\x1f|8\xc0\x14_\xb4\x95\x92\xc9\x90\xde\xbb\xf5I\xd1\x05\xd0UV\xd7(P\x9c\x1a(C\x98Jx}\xd2\xfd\x00\x11\xb6\x83D\x9e\x01\xd8\x1b_\x0e\xec\xd7\x82\x9bz\x14\xb3\xfa\x1f\xbc\xef\x87\xd7 
\x84\x12\x93\x85N\xd8y\x13>_\xb1\x00\x86)\x9c\xdf>\xaeM\xe5\xce\x8bw\xcd\x16.\x01C,\x173\x94\x9a1\x1bVzH\xb7\xb0\x12V\xed\xdfz\xcc\xa7\x18#\xbb\x1aH\x988\x806C\xc7^\x1e\xacc\xd4\x04\xaelS\\\xb5\xc3\x16\x94\xe2\x92\x9d\x18\x96w\x11v\xe4s_\xecr\xd2\x14A\xb0\xf6\xd2x\x1c\nG\x89\x9d\xb2\x1cy\xdb\x1f\xc1H\x95\xe3c\xfc\xf1\x1a\x01,\xbbD\xd6\x95\x85\x83\xd7\x9c`0\'\xf93nw\xa4\xa0\x1b~\x06\xbdN\xacY\xf7B[0\x14\xaa\xe9\xa6\xb1\xf9\xe9\xbc\xb3V\xbb\xeb\xb5d\xf9\x9f\xed\xe0\x9d#\x1d\x06F$\xcd\xa7W\xeaI%\x8c\x19p/<O\xee\x8cu\xdd\xcc2\xbe\x04\x02f\x84\x13\x07\xb3\xb6\xfc\xa46\xca\xf8\xf7\x81\xb8\x9e\x17\xe2\xd7\xfff\xe6\xb9\xef(\xa1\xed\xde[\x94OS\xddL\xe5\x0c)3\xbbrD@\xe8E\xfe^\xc2h\r.%\xe7@b.\x00\xeb\x8a\xe5\xeb]\xcbc\xcbX\x1b\x19j\xb6\x1e\xec\x19\xc2\xe2\xcb\xcb\xce\xe5E\x94\xd2/\xc0W\xff\xb2\xdf&Jd\xf5\x8aZIW\xaak\xd2d\x11\xad\xd3\x83\t\x16cQ\x05\xfd\xc7M\xd3rh\x15J\xb9l\xa4\x83X\x06\xa9\xe2D\xfd\x1f\xbc\x9f\xb1\xfb]\x8a\xb9\xd25;\xa8\xc1W\xff\xfe6C\xe2\xb3\xbbl\xc5\xd0^z\xa1\xe6\x95\xea\x01\x97Gc\x8c\xfb\xde\xd6\x92\xad\xe2\xe8j\x8c|\xe8\xa9\xd5P\xec\xa1+ W\xc55D\x8c\x8a\x8d\xa5I\xadV\x94\xc2 y\xe69\xe9\xb9bt\xb8\x0fL&fa\'e\x7f3\xc2c\xd6\x12S\xfb\xb8\xce\xfdT\x0c\xca\x94e\x13\xb6\xb9{d\xaf\xd6g\x0b\x9b\n\xd2\\\xad\t\x9a\xe3z2\xeb\x86\x1d\x0b\xd5\r\xd9\xab\xc6\x1f^\xc3\x16\xdf\xc6\x10\xa7\x86j*\xc3\xbd\xdf2|\xc1b\\\xd6\xb9\xbc\xaa\tA=\x04L\xb9;\xbb!\x0c}\x84\xec\xd2&\xab\x9d\x8da\x0c\x83\x1c\xe8\xb7\x10\ne!n\xb1\x9e\xa8\x13\xf6\x14D/.\xfe\x11u\x82\xd9\x1d;\xa3\x87\xb5\xf4\x04\xf0 
\x1fk\xe8Z5\x00\x00y\xce\xbda@I*\x1eU\x85\x06\xef\n\x88\x0e\xbf[G\xa2\x12\x00I\xef[*\xdbo\x85\x7f2o\xc2\xf5\x1e\xe6\xe8\n\x00\x88Q\xa2\t\x84\xf8\x1a\x8f\x89.Sd\x18I\xec\xb1V\xde\x84\xfab\xd9\x8b\xe1\xc6d\xac\x93;\x81\x02\x1a\xad\x94\xa0}\xfa\x0c\x98\xbatdf\x86\x11H\xac\xb0V}N\xcf\xb5i\xa7\xd0W/\x05h\xc4\x91W\x14\xd2+\xbe\x9e\xd7\x92\x01#\xc3\x18\xb6\x05\r\x89\x0e\x01\x87\x05%\xdd\xd5\xa3\xf7\n?+x\xa8\xde\xcfp\xab\x1f\x11B>\xc7\x07}S\xba\x8d\xb8\xa5?\xbb{ML\x82\x80\xe9\xa2\x8d\xb8\xfe\x14\xbcMvX\xf2l\xe4ly\xd1\x11\x888I\xa8\x93\xe1G7\x98\xcdm\xdb\x97\x9a\xbd\x839\x8bE\x8c\xcaY\x1c\x83\xf8\xa3W1\xc5a\n!ah\x84Ya\xf3Lk,\xae\t\x13\x80\xfc\xb41\xc3\xc8P\t0S;D\x8bQ(\xb7\xde\xbdR\xe2\x94Ebp=i\x92\xed\xd3\'\xc3\x11\x1dk\xcd\x9b\x99\x92\x90\xf1\xf3\x00\x81\x08\x9e\xdb\x9a\xd5\xed{*\x7f\xaeTD\xd5\xbf$[\xa9\x0f\xffV\x14\xc1b\xe2\xdbZ\xb3\x85\xf6\xd2\xd1]\xf7`\xe7\xa3\x19\x8f\x12L\x96\nr\xd3\x04Q\x82\xea^\x93A\xa6\xbe\xae\xf7\xccA\xc6\xa9\xd5\xb4\x90\xc3u\xdb\xd0\xcb\xef\xab\x7f\xdc#\rq\x06\xfc\xeb\xe1Vm\xf3~\xcbHo\xa9*\x7f\xa4\x0f\'\x95\xa6\xbd\x0e\xb9\x84\xe4K\x8c\xf2m\xf7[\x7f\x06\xb3\x05\xec\x8f~\xca\x1b\x82\xd0\x06\x9bJk6\xa6\x87Z\xbb\xcfo?\xb5\x9d\xf5\x10\xf9d\xb3\x19\x88MD\xcdYL\xdb\xee\xed\xda\x92\xfa\xc0\xc2\xd6\x033K\xa5\x98[QP\x9b\x93\x03\xa0\x06\xd3\xa9\xd8\xc4\xd0\x9e\xd5\xaf\x07\xf6\x0cM\xb2j?`c\n\xe2\xda0mT\xe1\x11\xc5\x84W\xa9\x17\x00\x88\xf2E\xb8\x82\xcb\xb0\xfa\x05\xa05; 
\xa0`i\xfc\xb4M`\'}\xf6a8\xfa\xb49\x94\x05\xf1\x16\x04\xd2\x87\x04N\x1e-\x85AR\x97\x04\xe4\xa8\x88\x9b\xb3\xc1\x0b\xcb\xe4\x9e\x05\xadQ\xb3^K\xcd\xe9Xnc8w\x07\xfa\x0f\xcd\xceM\xc5\x8f\xc1^\x94J6\xa5V\xa5x\xb9rN\xaf\xbb?\xfdl\xbaD\xc4\xe9\x8e\xe1\xe2\xee\xac\xc1b\xba\xcf/\xe2\xb5\x1f\n\x9c\xe6\x05\x05\xd2\x1cV\x1er\x84\xe6\xd2\x13\xdd;\xd5"II\x18\x98\x91u\x83\xb7\x1b\xbb\x08\xc0\xa9\x0bA\xd2\xe5V\xf9\x16\x84\x9fv\xf7\x95\xec\xb0\xb1Q\x19\x1a\xa0z\x8esKY\xb3\x1e\xc8\xb6\xcb^\xfe\xfeZ<\x98Q\x84\xd4\x0bl\x8f\x04.99V~\x88Oaz\x04m\x1fHhq\x88\x85W\x01\xab\xcc\xefP\xb6\xe1!;1\xd6e\xda1\xe5\x9d\\<yK\xd6k\x82\xc5\xea\x8cae\xe8\\\xba\\\x8b\xb3z\xf1xi\xe1\xc2L!\xfdkeF\x02\x11f\x80(u\xd1\xef\xdf\xfa\xbc\xfd\xafj\xc3e\xf9=\xb5\x98<\x88\xa0\xd0\xe6qdJ\xf9\x1f\xd4\xbaa\x83\x99;\xa1\x02i\xbfx\xe5\xd9o\xcc<\xeb\xb7Df$\x0fU\\\xa1\xb2Se\xcc\x8a\x89\xdb\xf6\x1cF\xca{\xa2\x80[\xc7\x8a\xac\x13\nf\xe1\xbc\xbb9\xcb/\xbbd\xb4\xfa\xd4"=\x12s\x85\xfe3EH\xcc>\xb5\x80\x8c\x1d\xd9\xb7\xa8_T\xf9\xcf\xe3k\x18\xa9\x06\xec\xb4S.\xd3\xf7\x1b\xe0DB\xaa\x94 \r\xe9\xac\xc4#\xeb1"r7s\xd5\xccXW\x89\xd7\xcc\xa91\x89}\x13\x9b^\x87\xf71\x82\xa8=\xdf8\xb3\xd7\x0c\xc5\xb7\xa3S\x95wB\x9a\xd9;\xb7\xed\xc8\xf6\xd5\x84\x14\xf0\xc5+\xcb:\x03\x89\xecO\xb6<\xb6\xb4\x80\xcf\xcc\xe0#X\xa8Af\xeb\xe0\xde\xb6\x19\x18\x06\x85\x90\xc9 
e\x00\xea\x8b\x14n\x1d\xea\xc9=\xc7\xaa\x8c\x94\x14\x81Q\xd7P\x9b\xcd\x87\x8d\x1d[Q\x9a\xe7\xea\xf0\xe1N\t\xa7\x1d\xee\x8c\xe2\xc1$\xfd\x9c\r\xb2e-t\x7f%\x9e\xd6\xc1\x9f\xbc\xd8\x0b\'+\x8b3F\xc2\x881\xc0\x8c\xa0\xdfd-\rPj\xb2\xb84o\x8fO\xad:\xf0\xb1x\\\x9b\x03{\n\xb6\xa4\x11\x94\x85\x82\xa9k#\xe0\x88\xbb\xbel\xbf\xcb\xa2\\\x89\x80s\x10\x84\x0e\xa9~N[\tC\x8c\x19X\xd4VK\xed\xb9\xdaX\x01\xe0\x80\xd4\xd6\x1e\x88\xff\xc1\xdc\x8b\x96\xb8cF\x05`\x8a\xe51\x141YbA+v\'\x8aE\xc3\xfa\xfbK\xe1\x12I=~w\xd3\x0e\x1eS\xe90\'\x0b\xabB\\\x9c\xa2\xd1\xf5o\xa6\xf0\xb9\xd4q\x11t\xe4\x13\xd8\xbd\xee\xec\x9d\xedH\xd7\x8dA\x1fCk-\x01\x04\xc87\xb3\xea\x89]M\x8f\x96l\xf1\x19\x02U\x16\xd6:\xa6\x15\xe9g[\xccFU\xc3\x123\xfb\xda\xc1k\xcb5-_\xdb\xe9\x15\xfd\x89\x83\xe9\x14\x1fUT\x1b\xd2G\x1a\x13\xf5h\xcer\xb5\xd2\xe0\x97}\x9d\xa6r\xde<$\xd5\x80\x06\xf5D\x9f\xb9\xf7\xd3\x9b\x12$#=M\xc1\x17\x9b\xb3\xbaUK\x15\xa8\xa7\xcd\xa1FL z\xe8\xe59\x7f\xe5\x8dy<\x8fz k\xd0\xfd\xfe\xa2\x05\xad\xbd*\x82MA\x99\xa2\xf3?~\x8eD\xa8\xd4N\x03\x8bB$w\xbd\xb3\xa3T\xa9\xeeR\nb\xef\xf4@Uen\xd6\xba\xd8p\x8e\xfcl#\x8f\x1f\xe4\'i,c\xf3\x0c@\xa2$\\\x15\x9e0\x98\xab\xd5\x802g/I\xf1\xd6\xbc\x9ep\x7f\x0b\x11zH\'`gns\xfdsL\xdcl\xab(\xdaR\x10\xdf\xfa\xfdU\xc9\x90\x93\xc7\x17\x9e\xd9j\x8fB\xa4\xe9\xa7\xf6\xaa\xad\xa05\xefO\x19\xe51"\xf9PO\xb4\x82C\xb2\xfc\xc6\x9e\x87\xe0U~e\xfd\xb3\xd6\xbd\x17\xb9\xba\x1bR\xc2]\x80\'#\x7f\xe6\n\x03{u\xc2\xef\xa92\xf7r\xd9\xe2\x87\xde\xad!\xf4d#\x12n\xef\xf0|S\xdf\xd7@\x96\xae\x1cL\xbf\x1a\xbd\xf8\x0e\x83\xce\xd6Clw~\x1b \x8d\xbbC~\xe5\x9cy\x08\xcf\xa6w\xc1 LqYgx.\x0f70\x03\x13\xe7\x9d\xb6\xe54\x92\xd7B\xbfOA\xf3^\xa8m\xdb\xcdB;\xc0\x11\xb1\xce,\x03/\xfb\xb5#[\xb4^\xb4\xb9\x82\x15\x97\x067\xbao\xd7@n\x80`\xc5enzVOt\x13H\xaf\x19<\xda\xd1\xd0\xb3\x8e\x93\xe6\xa1\xfb@\x90\x1d\xc2\xd0\x8cs\xd2\xbd+\x13\xac\xd8\xb2\xd4d\xe1|\xdeN\x17[\x11H\xf3C\x89?\x91\xf5o@\x7f\xa2\x92^o\x06\xd9F\x05@\x04\xd9\xa9bK\x8f\x08\xde\x8d\x03J\xd0\x82\x06\xe9mH\x04~\xf3kev\x94,\xa2\xcan&U]\xdbG\xa5 
\x14\xde\xfdtW\xfe\x00\xe7:;\xc0\x064[\xfb\x0b\x93\xfb]\xdb\x1er\xe6,C\xf4\xae\xd6\x88\x88\x8bi\xde\xf3\xc3\xb7\xe3/\x97k\xf4\x17\x9f\nd\xe8\x04\x97\x98MT@\xcd\x12yu\xb6\xb8d7)\xfa5\x9c\xb5\xe0\x8b\x02\x884}/\xaaz2l\xd3\x85\xe9\'$g\xa1\xfc\x9d\x9c`;\x9f\xc9U\xe1.\xb3B\x06\xbe]t\x92^\x0c\x14\xa3|f\x16\xe8P.9\xd3ma\x1bB\x7f^(\xf42%\xd2K\xdb|d\x96\xae+o@\x81\xa8y\x086b\xf1\r\xe2!4\xd5Z9rrBg\x07\x83\xa5E\xe6\xc0L\x9e"?\xa2\xb0\x13\xe9\xaa\xbd\xff\x8d\x04\r\xa5\x10\xf6\xa04\x18Z\x8f\x92!\xf5\x11\xde\xd4\x13\xc5\xa8\nd\xbf\x16\xed\xd8t{\xa4\x1c\x04\x8b\xa7\x13\x81*T\x99\xf3\x98$\xd5\xc8^\xf9\x00=\x90\xbe{\x06\xa8\x08\\\x9a\xaa\xfd\x07\x15\xf7\x8eT\xd4p\xff\xfd\xb9\xdd\x84\xc6\xfd\x18)\x93\xe1\x8e\xde\xac\xd5Y\xf0\x1c\xc7\x99\xd4\xa1\x81\x99\xfa6w3\xaa:\x80o\xd0;D\xa6\xfe2\xbe\xb2\xc4V\x15\xe2\xc8\xbcm\x95^w\xaa\x8d\'6Aed\xa6\xc7\x83\x0e\xc7\xb6\x80V\xde\x93&\x81\xad\xf7\xa0&u\xa3z|\xa8\xac\xb6|Z\x1ca\xe50@\x96\xf5[\x9c\xadt\xe9\xf4\x1b\x02\xcdt\xde\xfc\x15i\xa91\xd2\x84\xac\x0ck\xcf\xc33;\xbb$\xfde\x8fP\xb9\r\xbfs]\x97`\xbbQVgSz\xe7\x8c\xaf\xc2Y\x13\xa0\x89q@\xc3\x10\x05\xc2\xea\xecE\xb9\x05\xff\x81!\x94\xa1\x90\xd0~\xe5\xa6,<w\xca\x05\x9b\x9c\x94\x98\x9c\xe1\xfc\xda\x9eYX\x0e#\x87\xd3p\n\xbeBz\xb2\x95\xa3\x0f\x12\x0cH\x0e\x82 OdV\xc7\x86~\xaeYd<\x80\xddm\xec\xa7\x1d;\xd8\xf8\x00\xa2\xa2\x1c\x1a\x8c\xa1\xe1\xa4\xb1%\xf6>\x1c\x1e\x0c~\x81\xd1\x9de\xa01\xe2\xde\xfb\x0f|\xbe\x7fl\xb9\\\xc0\x08\x1e\xdf\xbf\xfe|\xc6\xe0X\x07\xad\x89\xe03E-\x1e\x9b\xffjTS\xe8"\x1d\xd5W\xb1\xf4\xb2\xa1\x84@D W\xac}\x998\xea\xb19\xb2*\x1b=CQ\xd4_#\x1cs\xa3\x9e\x94K\x9aV\xcf\xd7\xeb$\x92_x\x04\x03p?\x06\xffBb\xb4r=\xe7\xd6\x8a\x94\xd7\x96\x0bcs\xd1\xd6y\xf5\xf1\r\xf7\x195\x80Vb&l\xb2%\x8f\xc4aJM\xf3cE\x81)R\xf3\xd2z\xa6q\x1f\x83\x8a\xda\x9d\x0e#\x11\x0cV\n@?\xee\xf3\xa9\x7f-\xac1Y\x04a\x83C\xbb\xd0$\xcf\x86z2\x9a\xcf\xab\xfeh\x90\x96\xd2\x8c\xf6\xe9\xa6\xd9\xc4\xa8\xe1DZ\x7fx\xf8\xd8\x92\x8c\xbe\xf8\xba\x84\xbc8\x88\xca\xd8\xfa\xddm\xf3\x07l\x9a\xed\x84\x7f\xa2 
\x9d\xfc\x82\xdf\x80\x04\rq\xfa\xd4V\xc0\x8b\xfbU\r\xa6\x8e\xf0\xd5#\xdb)+\xa3#\xe3\x00K\xfc\xf2;\xeb\xbde;\x17\xd4<6\xe7\xd0\x89\x8cf\x9a\x15l\xce\xcae\xbd8R\x00\x16\x11S\x14\xdal\xe1*\xf0\x86R\xd4\x8d\xae"\x17\xff\x88b"\xfa2\xdf477\x17G\nV\xe8\x16fk\r\xe4\x1aY\x80A\x05\x12\xd8\xcb\xe1]\xe5I\xb9\xee\x08\xa4\x86\x92\xe2\x9e\x82y\x17\xa3\x1ef\xd9dEd\x94\xce\xfaed\xe6}\x0b\x8c\xfd\x84Uc\xb9t\x83p\x87V\x15{\x13cQF\xcad@~\x94\xa9\xd0\x85q\xf9\x81\x82\x01\x87z)\x88\xfe`3\xcd#}p\xe7\x1bwd\xb4y\xbfq\x80\x19\xeaB\xc8:B\x9e\x9d\xdf\x12\xfc8\x08\xcd\x94\x1f\xd9\x1f\xb0\xe7B\xc18\xa3\x9f\xac>D\xb8\x94}\xad\'\xc3\xfc\x12\x99o\x8aW\xd0\x05g\'\xd7\xa1q\xe6\x07\x07:VP \xb4\xc3\rH2\x9f\xb2\xd3a\xcd\x14\xb6\xc3\x9cf\xcb\xacz`\xf0\x14\x89\x18\xf8v\x8e\x01d\x9d\xedB\xe9\x8c\xff\xbe\x07fP\x07\xcb\xd1y\x80J$\x80\x1b+\xc8r?\xa3\xd5\xfe%Ti\xf8\x1fC\xc9YJ\x18$\x1fD\xce\x0b\xd9I=\x11\xde(\xd9\xaa<\xd8\xfb\x13&\'\ny\xaa\xa1\xe6\x1au\xe9\xfd\xdf\x7f\x8c\x15\x85F\xbc\x16\xa1\xe0f\xfaO\xeb\xdf\xdf\xd6Q\xfb\xf9\xd20\xce\xd8\x83%\xa2\x11v\xdd\xb1\xc58,\x7f\xe2\xef\xcak\xb77\x14\xe39\xbb\x98\xf8\xb9\x89\x8c\xdf\xde\xd6\xbfG\x01\xd0\xac\x9a\xd9E\x1f\xeb\x00\xd9\xc9.\xb7\xf3)\x8e\x87\xa6\xfb\x11\x0f\\\x10B\xd0\xda\x92\x84\xe1\xecV\x1a#\x08\xf4\xb7\x94\xfb\xb6\xd0-\xc8$D\xda\xcbY\xdc\nl\\\x9f\x9f\x9e\\X\x88\xeb=\xe7\xbd\xb1\xa6\xbe\xb5\x14g\xd6MF\xa8\xe8\xe4\xdax\xd0\xf2p\xfd\xf6\'k!\xb2U\xa7\xeb\x10G\xbd\x10g\xa2\xec\x9e-\x8d\t\xd2Z\x86t\xcb\x9e\x8f\xa7\x08\xc7\x94\xbd#\xa5\xfd\xae\x93\xe5 P\xd6gf\xa4\xdd\xe0t\xea\x04\x1a\xaa\xf8\xd7\xaax,\x83\x14\xb6\xdf\x8cW\x15W1~\x80jw3\xdcS\xccO\xf2wK\x00h\x03\x9f\x83))\xdc\xc8!^?\x7f\xa3\x02\x91Bk\x07.\x12\xc7@\x94\xb5\xf4\x0e3\x0c \xe7\x89N\x8f\x07vpY\x91\x84\x91\xe8\x01\xeb\'\x9f\x94\x19;\x8c[\xe1\x80\x95\xe1\x18g\x15\xb4\xf8F\xe9J\x9b\xff|x\x03\xef\xf0\x86\xecu\x98f\x0cm0*\xaa%\x13I8{2\xa6l\xd8\'Vd\n\x10\x81z\xba\xa9\xd2\xccq\xa7\x13U\xe6\xbf\xe8\x1a\x81 
&\xa8h\xf5\xd5,\xdc\x03\x1f:}\x1b\x03t{!,\xefL?\x8f\xa9%\x1a\xc1\x99\xd9\xfe\xb1\xf0R<V\xa6x\x99\xb7\xb2\x8dQ\x15`\x9a\xa7\x04\xfe\xbd\xfc0\xb3\xb9f/\xe20\xd2\xdeke\xe8\xe8H\xfb\xeb\xed\xfc\xd2\x8e\xb7v\xfd87\x83<\xb8k\xca\x08\x02\x853#\xf3\x87\xed(nQ\x89$D\xee.W\xe1P\x95\x85\xc4\xfe\xde\xc5\xda\xb1_\x90;be\r}\n\x1c2X\x95?\xb9\xdc\xfb\x1bf\xb7\xe2\xa1a\x86\x0c\xc5\x0fS5}\xe5\xd8\x1bS)\xd9e\x19\xe3\xce\x96\xadL\x18L\xbfT\x9c_\\\x1c\x1e\xcd\xaf\xd4\x12\x85\xc6\n\xa1s\xb5\xddYTB\xae\xaf\xe7\x84\x8c\xe1C\xb4\x97\xe3\x9d\x83\xf2aV2\xa0\x94\x15g,\xb2\xb1\x97\x85\xeb\xd9\x11@\xb8\xe4\xc6HIJ\x02\x14i\xf0\xea\xfa]\xdb/\x86\x9e\x85\xed\x88o\x17\x8e1 \xb1Rz\x06K\x1b\x06;\x1bf2\xa9\x0e\xa24\x9f\x8d\xf7\xb9\x92,5\xb6p|\xba\x1a\x02\xa8\x13`\x06\xf1\xf2\x13\xf6\xcd\x1d\x91\xbfOl\xfa\x9e\xa1\x99\r\x87`B\xa3\xabD\x86\x8ag\x0e\xc5\xca\xeb\x97\\\xd6\x9a\xbfS+\xc5\xc7\xb88\x1eRQ\x88[@\xcc\xe4L%\xa2a\xb9X\xe5\x1c\xd6\x7f\xa43\x81\xb4\xb4\xed\x8d/\x16\xb8\xf9\xfcq=a`\xb3T\xdd\xd6\x13V\x0bsk\x08>\xb8n\xc5\x81\xcf\x93\xc42\xf3\xe032\x9eA\xec@E\x93y\x99\x82\x16\x9d\x7fBtH-L\xfc/\xbb2A\xa1\xe8M\xb5\xbe\xd7\x87\'\xef\x7fR\xc8\xb1\xad5\xf6q\xde\xf2\x858\x8d\x89\xb9\xa7[\xb7G`\x1f05PI\xa4a\xee\xf8\xd6\xfe0\x00\xd4"k\x08J\xf6\x8d\xae\xadc\x89\xddO\x9b\xc5#\xcc\x18\xeaC\xf4tQB\x8a\xc0!\x96\x94@\x05\x06~\x11\x9d\xc8\xcc\xaf\x93/\xea\xcd\xde;\xaa\xb4D\xee#\xf7\xa2\x9e\x14\x93IK\xa5\xe7z\xa5\xe0D\x801/,\x9e\xdb\xaf\x1e\x0b\xf2\xa3t\xdb\xa0\x9ep)I\x1eeP\r\x156N\x8f\xce\x9bS84h\x1dU\xca{Z\xb9{\x16\xbe\x02\xaa3\x15\xa6w\xf8\xe1\xd1\xf2\xed\xb7:\xf8\xaa\x96\xb8\xe64\x7f\xaf\x0eV\' 
\xd6\x178F\x83\xe7S\xff\x01\xbb\xda\x83E\xd8\xffG\xf6\xc1\xb8\x8e%\xe9n\xee\xf6\xea\x87\xd19\xaa\x83\xee\xb4\x9e\xa8\xc9\x9b,\xdcvfJ\xcb\xf6o=\xec\x07C\x03\xa6\xc3\x17\xc5+\xf3l!?\x83\tMg\x9f\xdf\x99\x13z\x15\x14GD<\xcfC\xcd\xab\x81\x1e*/h\x19\xdb6\x10\x84\xf4\x17\xf6\xf1|\xb6^\x88}\xa5Jh\x0bF\x11\xe9>\x1c\xd5\xcaH\x16\x7f\x10i\xb7\x1dhGE\xbf\xf3D\xe1\xab\xc242j\xc3F\xbb\x1f\x19\n\xb9\x9d\x88D\xeb\xca)\x1b\xfd\xbb?)CU\xfd5;\xfb\x7f\x16\xd1m\x0f(\x02\x00\x00\x00t\x04\x00\x00\x00zlibt\n\x00\x00\x00decompress(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Ariya>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01')) | 39,341.5 | 157,284 | 0.735432 | 36,172 | 157,366 | 3.194017 | 0.172675 | 0.0121 | 0.011451 | 0.010906 | 0.018912 | 
0.018912 | 0.018912 | 0.018912 | 0.018912 | 0.018912 | 0 | 0.235779 | 0.001417 | 157,366 | 4 | 157,284 | 39,341.5 | 0.499437 | 0.0004 | 0 | 0 | 0 | 65.5 | 0.666088 | 0.664105 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
9a29ea8e2e85a75edd2dae672a10c1d23292c302 | 192 | py | Python | powerline/segments/ipython.py | zhaocai/powerline | 8aae1145835b4b2a71f1ed71b81d490e2907bd39 | [
"MIT"
] | 19 | 2015-09-01T20:49:16.000Z | 2022-01-08T22:13:23.000Z | powerline/segments/ipython.py | zhaocai/powerline | 8aae1145835b4b2a71f1ed71b81d490e2907bd39 | [
"MIT"
] | null | null | null | powerline/segments/ipython.py | zhaocai/powerline | 8aae1145835b4b2a71f1ed71b81d490e2907bd39 | [
"MIT"
] | 6 | 2019-04-25T03:42:35.000Z | 2020-06-05T15:25:23.000Z | # vim:fileencoding=utf-8:noet
from powerline.theme import requires_segment_info
@requires_segment_info
def prompt_count(pl, segment_info):
return str(segment_info['ipython'].prompt_count)
| 21.333333 | 49 | 0.822917 | 28 | 192 | 5.357143 | 0.678571 | 0.293333 | 0.253333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005682 | 0.083333 | 192 | 8 | 50 | 24 | 0.846591 | 0.140625 | 0 | 0 | 0 | 0 | 0.042945 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
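The `prompt_count` segment above simply stringifies IPython's prompt counter. A minimal stand-in can exercise the same logic without powerline installed — the shape of `segment_info` here is assumed from the code (the real dict is assembled by powerline's IPython bindings, and the `@requires_segment_info` decorator is omitted):

```python
# Sketch: replicate the segment body with a fake segment_info.
# The 'ipython' entry only needs a prompt_count attribute for this function.
from types import SimpleNamespace

def prompt_count(pl, segment_info):
    # Same body as the powerline segment above.
    return str(segment_info['ipython'].prompt_count)

info = {'ipython': SimpleNamespace(prompt_count=7)}
print(prompt_count(None, info))  # prints 7
```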
9a4e4cd6f2835fedf68c154e4800c0ef72a66c87 | 3,230 | py | Python | hgapp/profiles/migrations/0014_auto_20210909_1452.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 6 | 2020-10-03T12:15:05.000Z | 2021-10-15T04:43:36.000Z | hgapp/profiles/migrations/0014_auto_20210909_1452.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 99 | 2020-06-04T17:43:56.000Z | 2022-03-12T01:07:20.000Z | hgapp/profiles/migrations/0014_auto_20210909_1452.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 9 | 2020-06-06T16:39:09.000Z | 2020-10-02T16:24:17.000Z | # Generated by Django 2.2.13 on 2021-09-09 14:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('profiles', '0013_auto_20210906_2005'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='num_contractors_played',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_deadly_player_games',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_games_gmed',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gm_kills',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gm_losses',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gm_victories',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gmed_cells',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gmed_contractors',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_gmed_players',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_golden_ratios',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_played_ringers',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_player_deaths',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_player_games',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_player_losses',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_player_survivals',
field=models.IntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='num_player_victories',
field=models.IntegerField(blank=True, default=0, null=True),
),
]
| 34.361702 | 72 | 0.573994 | 318 | 3,230 | 5.666667 | 0.166667 | 0.17758 | 0.221976 | 0.257492 | 0.860155 | 0.860155 | 0.860155 | 0.836293 | 0.836293 | 0.836293 | 0 | 0.021505 | 0.308978 | 3,230 | 93 | 73 | 34.731183 | 0.785842 | 0.014241 | 0 | 0.735632 | 1 | 0 | 0.131364 | 0.02137 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.011494 | 0 | 0.045977 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
d0409900f938da4f18bf6fda521fceb414abb053 | 404 | py | Python | ex108/ex107.py | arthurfas123/Curso-De-Python | c4a15d92811bd101a8562d2c3a90fe2d5a3c360d | [
"MIT"
] | null | null | null | ex108/ex107.py | arthurfas123/Curso-De-Python | c4a15d92811bd101a8562d2c3a90fe2d5a3c360d | [
"MIT"
] | null | null | null | ex108/ex107.py | arthurfas123/Curso-De-Python | c4a15d92811bd101a8562d2c3a90fe2d5a3c360d | [
"MIT"
] | null | null | null | # Formatting currencies in Python
from ex108 import moedas
valor = int(input('Value: '))
taxa = int(input('Rate % : '))
moedas.linha()
print(f'Increase of {taxa}%: {moedas.formatando(moedas.aumentar(valor, taxa))}')
print(f'Minus {taxa}%: {moedas.formatando(moedas.diminuir(valor, taxa))}')
print(f'Double: {moedas.formatando(moedas.dobro(valor))}')
print(f'Half: {moedas.formatando(moedas.metade(valor))}')
| 40.4 | 79 | 0.717822 | 55 | 404 | 5.272727 | 0.381818 | 0.275862 | 0.303448 | 0.17931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008086 | 0.081683 | 404 | 9 | 80 | 44.888889 | 0.773585 | 0.066832 | 0 | 0 | 0 | 0 | 0.653333 | 0.434667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
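The script imports a companion `ex108/moedas.py` module that is not shown in this chunk. A minimal sketch consistent with the calls it makes — the function names are taken from the usage above, but the bodies (percentage math, `R$` formatting, dashed separator line) are assumptions about the course exercise:

```python
# Hypothetical ex108/moedas.py helper, reconstructed from how ex107.py calls it.
def aumentar(price, rate):
    # Increase price by rate percent.
    return price + (price * rate / 100)

def diminuir(price, rate):
    # Decrease price by rate percent.
    return price - (price * rate / 100)

def dobro(price):
    # Double the price.
    return price * 2

def metade(price):
    # Halve the price.
    return price / 2

def formatando(price, currency='R$'):
    # Format a number as a currency string with two decimals.
    return f'{currency}{price:.2f}'

def linha(width=30):
    # Print a dashed separator line.
    print('-' * width)

print(formatando(dobro(50)))  # prints R$100.00
```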
d077fa75f652a7ed3ed403258a845a379a85e42b | 163 | py | Python | wsgi.py | jsugg/locate-apple-devices | 69de53064e1f33bb2f890c026eedd3d2654feb2b | [
"MIT"
] | 1 | 2018-05-22T05:34:13.000Z | 2018-05-22T05:34:13.000Z | wsgi.py | jsugg/locate-apple-devices | 69de53064e1f33bb2f890c026eedd3d2654feb2b | [
"MIT"
] | null | null | null | wsgi.py | jsugg/locate-apple-devices | 69de53064e1f33bb2f890c026eedd3d2654feb2b | [
"MIT"
] | null | null | null | import sys
sys.path.append('api')
from handy_tools_apple_devices import apple_devices_handy_tools
if __name__ == "__main__":
apple_devices_handy_tools.run()
| 20.375 | 63 | 0.803681 | 24 | 163 | 4.75 | 0.583333 | 0.263158 | 0.298246 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110429 | 163 | 7 | 64 | 23.285714 | 0.786207 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
19035c39832eedd65fa43becfe40113374812209 | 51,202 | py | Python | jamf/api/app_request_preview_api.py | jensenbox/python-jamf | 85213085b1064a00375a7aa7df5e33c19f5178eb | [
"RSA-MD"
] | 1 | 2021-04-20T15:28:57.000Z | 2021-04-20T15:28:57.000Z | jamf/api/app_request_preview_api.py | jensenbox/python-jamf | 85213085b1064a00375a7aa7df5e33c19f5178eb | [
"RSA-MD"
] | null | null | null | jamf/api/app_request_preview_api.py | jensenbox/python-jamf | 85213085b1064a00375a7aa7df5e33c19f5178eb | [
"RSA-MD"
] | null | null | null | # coding: utf-8
"""
Jamf Pro API
    ## Overview This is a sample Jamf Pro server which allows for usage without any authentication. The Jamf Pro environment which supports the Try it Out functionality does not run the current beta version of Jamf Pro, thus any newly added endpoints will result in an error and should be used solely for documentation purposes. # noqa: E501
The version of the OpenAPI document: 10.25.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from jamf.api_client import ApiClient
from jamf.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class AppRequestPreviewApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
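The constructor above falls back to a fresh `ApiClient` when none is passed, so the API class works standalone while still letting several API classes share one configured client. A minimal sketch of that pattern, with a hypothetical `DummyClient` standing in for `jamf.api_client.ApiClient`:

```python
class DummyClient:
    """Stand-in for the real ApiClient (hypothetical, for illustration)."""
    def __init__(self, host="https://example.jamfcloud.com"):
        self.host = host

class DemoApi:
    def __init__(self, api_client=None):
        # Fall back to a default client so DemoApi() "just works",
        # while still accepting a shared, pre-configured client.
        if api_client is None:
            api_client = DummyClient()
        self.api_client = api_client

shared = DummyClient(host="https://jamf.example.org")
a = DemoApi(shared)   # explicit client is kept as-is
b = DemoApi()         # default client is created on demand
```

Sharing one client across API classes keeps connection pooling and authentication state in a single place.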
def v1_app_request_form_input_fields_get(self, **kwargs): # noqa: E501
"""Search for Form Input Fields # noqa: E501
Search for form input fields # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_get(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestFormInputFieldSearchResults
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_get_with_http_info(**kwargs) # noqa: E501
def v1_app_request_form_input_fields_get_with_http_info(self, **kwargs): # noqa: E501
"""Search for Form Input Fields # noqa: E501
Search for form input fields # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_get_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestFormInputFieldSearchResults, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "AppRequestFormInputFieldSearchResults",
}
return self.api_client.call_api(
'/v1/app-request/form-input-fields', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
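The docstrings above describe the `async_req` contract: by default the call is synchronous, but with `async_req=True` it returns a thread-like object whose `.get()` blocks for the result. A self-contained sketch of that contract using a stdlib `ThreadPool` and a hypothetical `fake_call_api` in place of the real `api_client.call_api`:

```python
from multiprocessing.pool import ThreadPool

_pool = ThreadPool(1)

def fake_call_api(async_req=False):
    # Stand-in for the HTTP round trip; returns a canned search result.
    work = lambda: {"totalCount": 0, "results": []}
    if async_req:
        # Asynchronous mode: hand back an AsyncResult; the caller
        # later does thread.get(), exactly as in the docstring example.
        return _pool.apply_async(work)
    return work()

thread = fake_call_api(async_req=True)
result = thread.get()   # blocks until the worker thread finishes
sync_result = fake_call_api()
```

The real client's async result object exposes the same `.get()` interface, so calling code can be written once and toggled between modes.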
def v1_app_request_form_input_fields_id_delete(self, id, **kwargs): # noqa: E501
"""Remove specified Form Input Field record # noqa: E501
Removes specified form input field record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_delete(id, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_id_delete_with_http_info(id, **kwargs) # noqa: E501
def v1_app_request_form_input_fields_id_delete_with_http_info(self, id, **kwargs): # noqa: E501
"""Remove specified Form Input Field record # noqa: E501
Removes specified form input field record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_delete_with_http_info(id, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_id_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_app_request_form_input_fields_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/v1/app-request/form-input-fields/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
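Each `*_with_http_info` method above begins by whitelisting keyword arguments: anything not in `all_params` raises `ApiTypeError` before a request is built. A minimal sketch of that check, with a local `ApiTypeError` standing in for `jamf.exceptions.ApiTypeError`:

```python
class ApiTypeError(TypeError):
    """Stand-in for jamf.exceptions.ApiTypeError."""
    pass

ALL_PARAMS = {"id", "async_req", "_return_http_data_only",
              "_preload_content", "_request_timeout", "_request_auth"}

def validate_kwargs(method, **kwargs):
    # Reject unknown names early, so typos fail fast client-side
    # instead of being silently dropped.
    for key in kwargs:
        if key not in ALL_PARAMS:
            raise ApiTypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method))
    return kwargs

ok = validate_kwargs("v1_app_request_form_input_fields_id_delete", id=1)
try:
    validate_kwargs("v1_app_request_form_input_fields_id_delete", bogus=1)
except ApiTypeError as exc:
    err_msg = str(exc)
```

Failing fast here mirrors what a normal function signature would do, which `**kwargs`-based methods otherwise lose.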
def v1_app_request_form_input_fields_id_get(self, id, **kwargs): # noqa: E501
"""Get specified Form Input Field object # noqa: E501
Gets specified form input field object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_get(id, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestFormInputField
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_id_get_with_http_info(id, **kwargs) # noqa: E501
def v1_app_request_form_input_fields_id_get_with_http_info(self, id, **kwargs): # noqa: E501
"""Get specified Form Input Field object # noqa: E501
Gets specified form input field object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_get_with_http_info(id, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestFormInputField, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_app_request_form_input_fields_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "AppRequestFormInputField",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/app-request/form-input-fields/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
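The methods above pass a templated path such as `'/v1/app-request/form-input-fields/{id}'` together with `path_params` to `call_api`. A self-contained sketch of how such a template is expanded into the final URL (the real client also URL-encodes values; `expand_path` is a hypothetical helper for illustration):

```python
from urllib.parse import quote

def expand_path(template, path_params):
    # Substitute each {name} placeholder with its URL-encoded value.
    url = template
    for name, value in path_params.items():
        url = url.replace("{%s}" % name, quote(str(value), safe=""))
    return url

url = expand_path("/v1/app-request/form-input-fields/{id}", {"id": 42})
print(url)  # /v1/app-request/form-input-fields/42
```

Keeping the template and parameters separate until the last moment lets the client encode values safely and reuse one template per endpoint.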
def v1_app_request_form_input_fields_id_put(self, id, app_request_form_input_field, **kwargs): # noqa: E501
"""Update specified Form Input Field object # noqa: E501
Update specified form input field object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_put(id, app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param app_request_form_input_field: form input field object to create. ids defined in this body will be ignored (required)
:type app_request_form_input_field: AppRequestFormInputField
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestFormInputField
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_id_put_with_http_info(id, app_request_form_input_field, **kwargs) # noqa: E501
def v1_app_request_form_input_fields_id_put_with_http_info(self, id, app_request_form_input_field, **kwargs): # noqa: E501
"""Update specified Form Input Field object # noqa: E501
Update specified form input field object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_id_put_with_http_info(id, app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param id: Instance id of form input field record (required)
:type id: int
:param app_request_form_input_field: form input field object to create. ids defined in this body will be ignored (required)
:type app_request_form_input_field: AppRequestFormInputField
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestFormInputField, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'app_request_form_input_field'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_id_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_app_request_form_input_fields_id_put`") # noqa: E501
# verify the required parameter 'app_request_form_input_field' is set
if self.api_client.client_side_validation and ('app_request_form_input_field' not in local_var_params or # noqa: E501
local_var_params['app_request_form_input_field'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `app_request_form_input_field` when calling `v1_app_request_form_input_fields_id_put`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'app_request_form_input_field' in local_var_params:
body_params = local_var_params['app_request_form_input_field']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "AppRequestFormInputField",
400: "ApiError",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/app-request/form-input-fields/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
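When `client_side_validation` is on, the method above checks each required parameter and raises `ApiValueError` before any HTTP traffic happens. A minimal sketch of that check, with a local `ApiValueError` and a hypothetical `require` helper standing in for the inline validation:

```python
class ApiValueError(ValueError):
    """Stand-in for jamf.exceptions.ApiValueError."""
    pass

def require(params, name, method):
    # A required parameter must be present and not None.
    if name not in params or params[name] is None:
        raise ApiValueError(
            "Missing the required parameter `%s` when calling `%s`"
            % (name, method))

require({"id": 7}, "id", "v1_app_request_form_input_fields_id_put")  # passes
try:
    require({"id": None}, "id", "v1_app_request_form_input_fields_id_put")
except ApiValueError as exc:
    message = str(exc)
```

Catching a missing `id` locally gives a clearer error than letting the server respond with a generic 400.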
def v1_app_request_form_input_fields_post(self, app_request_form_input_field, **kwargs): # noqa: E501
"""Create Form Input Field record # noqa: E501
Create form input field record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_post(app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param app_request_form_input_field: form input field object to create. ids defined in this body will be ignored (required)
:type app_request_form_input_field: AppRequestFormInputField
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestFormInputField
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_post_with_http_info(app_request_form_input_field, **kwargs) # noqa: E501
def v1_app_request_form_input_fields_post_with_http_info(self, app_request_form_input_field, **kwargs): # noqa: E501
"""Create Form Input Field record # noqa: E501
Create form input field record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_post_with_http_info(app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param app_request_form_input_field: form input field object to create. ids defined in this body will be ignored (required)
:type app_request_form_input_field: AppRequestFormInputField
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestFormInputField, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'app_request_form_input_field'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'app_request_form_input_field' is set
if self.api_client.client_side_validation and ('app_request_form_input_field' not in local_var_params or # noqa: E501
local_var_params['app_request_form_input_field'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `app_request_form_input_field` when calling `v1_app_request_form_input_fields_post`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'app_request_form_input_field' in local_var_params:
body_params = local_var_params['app_request_form_input_field']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
201: "AppRequestFormInputField",
400: "ApiError",
}
return self.api_client.call_api(
'/v1/app-request/form-input-fields', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
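The `response_types_map` built above tells the client which model to deserialize into for each status code (here 201 for success, 400 for errors). A self-contained sketch of that dispatch, with a hypothetical `response_type_for` lookup in place of the real deserialization machinery:

```python
# Map of HTTP status code -> model name, as built by the POST method above.
response_types_map = {
    201: "AppRequestFormInputField",
    400: "ApiError",
}

def response_type_for(status):
    # Unlisted statuses fall back to None: no typed deserialization.
    return response_types_map.get(status)

print(response_type_for(201))  # AppRequestFormInputField
print(response_type_for(500))  # None
```

Keeping the map per-endpoint lets one generic `call_api` handle every operation while still producing typed results.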
def v1_app_request_form_input_fields_put(self, app_request_form_input_field, **kwargs): # noqa: E501
"""Replace all Form Input Fields # noqa: E501
Replace all form input fields. Will delete, update, and create all input fields accordingly. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_put(app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param app_request_form_input_field: list of form input fields to replace all existing fields. Will delete, update, and create all input fields accordingly. (required)
:type app_request_form_input_field: list[AppRequestFormInputField]
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: list[AppRequestFormInputField]
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_form_input_fields_put_with_http_info(app_request_form_input_field, **kwargs) # noqa: E501
def v1_app_request_form_input_fields_put_with_http_info(self, app_request_form_input_field, **kwargs): # noqa: E501
"""Replace all Form Input Fields # noqa: E501
Replace all form input fields. Will delete, update, and create all input fields accordingly. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_form_input_fields_put_with_http_info(app_request_form_input_field, async_req=True)
>>> result = thread.get()
:param app_request_form_input_field: list of form input fields to replace all existing fields. Will delete, update, and create all input fields accordingly. (required)
:type app_request_form_input_field: list[AppRequestFormInputField]
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(list[AppRequestFormInputField], status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'app_request_form_input_field'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_form_input_fields_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'app_request_form_input_field' is set
if self.api_client.client_side_validation and ('app_request_form_input_field' not in local_var_params or # noqa: E501
local_var_params['app_request_form_input_field'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `app_request_form_input_field` when calling `v1_app_request_form_input_fields_put`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'app_request_form_input_field' in local_var_params:
body_params = local_var_params['app_request_form_input_field']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "list[AppRequestFormInputField]",
400: "ApiError",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/app-request/form-input-fields', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
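The `_request_timeout` docstrings above allow either one number (total timeout) or a `(connection, read)` pair. A sketch of how such a value might be normalized into a uniform pair, using a hypothetical `normalize_timeout` helper (the real client passes the value through to urllib3):

```python
def normalize_timeout(value):
    # None: no timeout configured for either phase.
    if value is None:
        return None, None
    # A single number covers both the connect and read phases.
    if isinstance(value, (int, float)):
        return float(value), float(value)
    # Otherwise assume a (connection, read) pair, per the docstring.
    connect, read = value
    return float(connect), float(read)

print(normalize_timeout(30))       # (30.0, 30.0)
print(normalize_timeout((3, 27)))  # (3.0, 27.0)
```

Separate connect and read timeouts are useful when a server accepts connections quickly but may stream a large response slowly.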
def v1_app_request_settings_get(self, **kwargs): # noqa: E501
"""Get Applicastion Request Settings # noqa: E501
Get app request settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_settings_get(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestSettings
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_settings_get_with_http_info(**kwargs) # noqa: E501
def v1_app_request_settings_get_with_http_info(self, **kwargs): # noqa: E501
"""Get Applicastion Request Settings # noqa: E501
Get app request settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_settings_get_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestSettings, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_settings_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "AppRequestSettings",
}
return self.api_client.call_api(
'/v1/app-request/settings', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_app_request_settings_put(self, app_request_settings, **kwargs): # noqa: E501
"""Update Application Request Settings # noqa: E501
Update app request settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_settings_put(app_request_settings, async_req=True)
>>> result = thread.get()
:param app_request_settings: App request settings object (required)
:type app_request_settings: AppRequestSettings
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: AppRequestSettings
"""
kwargs['_return_http_data_only'] = True
return self.v1_app_request_settings_put_with_http_info(app_request_settings, **kwargs) # noqa: E501
def v1_app_request_settings_put_with_http_info(self, app_request_settings, **kwargs): # noqa: E501
"""Update Application Request Settings # noqa: E501
Update app request settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_app_request_settings_put_with_http_info(app_request_settings, async_req=True)
>>> result = thread.get()
:param app_request_settings: App request settings object (required)
:type app_request_settings: AppRequestSettings
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: return the response data only, without
the status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(AppRequestSettings, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'app_request_settings'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_app_request_settings_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'app_request_settings' is set
if self.api_client.client_side_validation and ('app_request_settings' not in local_var_params or # noqa: E501
local_var_params['app_request_settings'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `app_request_settings` when calling `v1_app_request_settings_put`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'app_request_settings' in local_var_params:
body_params = local_var_params['app_request_settings']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "AppRequestSettings",
400: "ApiError",
}
return self.api_client.call_api(
'/v1/app-request/settings', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
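The generated methods above all share one calling convention: they snapshot `locals()`, fold `**kwargs` into that dict, and reject any keyword argument whose name is not in the declared parameter list. A minimal standalone sketch of that pattern (the function and parameter names here are illustrative, not part of the generated client):

```python
def call_with_validation(settings, **kwargs):
    # Snapshot the named locals first; later assignments (all_params, key, val)
    # do not appear in this dict, mirroring the generated code's structure.
    local_var_params = locals()
    all_params = ['settings', 'async_req', '_request_timeout']
    for key, val in local_var_params['kwargs'].items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method call_with_validation" % key)
        local_var_params[key] = val
    del local_var_params['kwargs']
    return local_var_params
```

Validated keyword arguments end up as flat entries next to the positional parameters, which is exactly the shape the `call_api` dispatch above consumes.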
| 45.391844 | 342 | 0.60939 | 5,800 | 51,202 | 5.079655 | 0.040517 | 0.045143 | 0.047519 | 0.06191 | 0.968468 | 0.967416 | 0.966126 | 0.965854 | 0.961306 | 0.960695 | 0 | 0.013717 | 0.325104 | 51,202 | 1,127 | 343 | 45.432121 | 0.83887 | 0.49086 | 0 | 0.762195 | 0 | 0 | 0.188012 | 0.089247 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034553 | false | 0 | 0.010163 | 0 | 0.079268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
efaa6a2b32a911c246d012e0b8967c5b2a3d1871 | 79 | py | Python | elephant/utils/__init__.py | bear-in-white-house/elephant | a30ecacb6ed3bba397e06a89e1cd28377d5f54f0 | [
"Apache-2.0"
] | null | null | null | elephant/utils/__init__.py | bear-in-white-house/elephant | a30ecacb6ed3bba397e06a89e1cd28377d5f54f0 | [
"Apache-2.0"
] | null | null | null | elephant/utils/__init__.py | bear-in-white-house/elephant | a30ecacb6ed3bba397e06a89e1cd28377d5f54f0 | [
"Apache-2.0"
] | null | null | null | from elephant.utils.phone_code import *
from elephant.utils.renderers import *
| 26.333333 | 39 | 0.822785 | 11 | 79 | 5.818182 | 0.636364 | 0.375 | 0.53125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101266 | 79 | 2 | 40 | 39.5 | 0.901408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
efd313cc4ba802a078bb86784f2e247ae7ffbc69 | 150 | py | Python | beobench/data/agents/rllib.py | rdnfn/beobench | c51e9a6d320e3e1db035cb298936ac4dacaca64b | [
"MIT"
] | 9 | 2022-01-10T13:51:38.000Z | 2022-03-31T15:08:23.000Z | beobench/data/agents/rllib.py | rdnfn/beobench | c51e9a6d320e3e1db035cb298936ac4dacaca64b | [
"MIT"
] | 25 | 2022-01-09T16:35:43.000Z | 2022-03-31T15:39:15.000Z | beobench/data/agents/rllib.py | rdnfn/beobench | c51e9a6d320e3e1db035cb298936ac4dacaca64b | [
"MIT"
] | 1 | 2022-03-30T13:24:07.000Z | 2022-03-30T13:24:07.000Z | """RLlib agent."""
import beobench.integration.rllib
from beobench.experiment.provider import config
beobench.integration.rllib.run_in_tune(config)
| 21.428571 | 47 | 0.82 | 19 | 150 | 6.368421 | 0.631579 | 0.31405 | 0.396694 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073333 | 150 | 6 | 48 | 25 | 0.870504 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4bf9cbfea19f9e5d4e2ccfd5ee76707d1540aa3d | 4,354 | py | Python | healthcareai/tests/test_deploy_supervised_model_class.py | Dokotta/healthcareai-py | 700b1f7a14f2087481aa98c01dba00dfe3efc2cb | [
"MIT"
] | null | null | null | healthcareai/tests/test_deploy_supervised_model_class.py | Dokotta/healthcareai-py | 700b1f7a14f2087481aa98c01dba00dfe3efc2cb | [
"MIT"
] | null | null | null | healthcareai/tests/test_deploy_supervised_model_class.py | Dokotta/healthcareai-py | 700b1f7a14f2087481aa98c01dba00dfe3efc2cb | [
"MIT"
] | 1 | 2019-10-11T10:40:44.000Z | 2019-10-11T10:40:44.000Z | import unittest
import numpy as np
import pandas as pd
from healthcareai import DeploySupervisedModel
from healthcareai.tests.helpers import fixture
class TestRFDeployNoTreesNoMtry(unittest.TestCase):
def setUp(self):
df = pd.read_csv(fixture('DiabetesClinicalSampleData.csv'),
na_values=['None'])
df.drop('PatientID', axis=1, inplace=True) # drop uninformative column
print(df.head())
np.random.seed(42)
self.o = DeploySupervisedModel(modeltype='classification',
df=df,
graincol='PatientEncounterID',
windowcol='InTestWindowFLG',
predictedcol='ThirtyDayReadmitFLG',
impute=True)
self.o.deploy(
method='rf',
cores=1,
server='localhost',
dest_db_schema_table='[SAM].[dbo].[HCPyDeployClassificationBASE]',
use_saved_model=False)
def runTest(self):
self.assertAlmostEqual(np.round(self.o.y_pred[5], 6), 0.060000)
def tearDown(self):
del self.o
class TestRFDeployNoTreesWithMtry(unittest.TestCase):
def setUp(self):
df = pd.read_csv(fixture('DiabetesClinicalSampleData.csv'),
na_values=['None'])
df.drop('PatientID', axis=1, inplace=True) # drop uninformative column
np.random.seed(42)
self.o = DeploySupervisedModel(modeltype='classification',
df=df,
graincol='PatientEncounterID',
windowcol='InTestWindowFLG',
predictedcol='ThirtyDayReadmitFLG',
impute=True)
self.o.deploy(
method='rf',
cores=1,
mtry=3,
server='localhost',
dest_db_schema_table='[SAM].[dbo].[HCPyDeployClassificationBASE]',
use_saved_model=False)
def runTest(self):
self.assertAlmostEqual(np.round(self.o.y_pred[5], 6), 0.1)
def tearDown(self):
del self.o
class TestRFDeployWithTreesNoMtry(unittest.TestCase):
def setUp(self):
df = pd.read_csv(fixture('DiabetesClinicalSampleData.csv'),
na_values=['None'])
df.drop('PatientID', axis=1, inplace=True) # drop uninformative column
np.random.seed(42)
self.o = DeploySupervisedModel(modeltype='classification',
df=df,
graincol='PatientEncounterID',
windowcol='InTestWindowFLG',
predictedcol='ThirtyDayReadmitFLG',
impute=True)
self.o.deploy(
method='rf',
cores=1,
trees=100,
server='localhost',
dest_db_schema_table='[SAM].[dbo].[HCPyDeployClassificationBASE]',
use_saved_model=False)
def runTest(self):
self.assertAlmostEqual(np.round(self.o.y_pred[5], 6), 0.060000)
def tearDown(self):
del self.o
class TestLinearDeploy(unittest.TestCase):
def setUp(self):
df = pd.read_csv(fixture('DiabetesClinicalSampleData.csv'),
na_values=['None'])
df.drop('PatientID', axis=1, inplace=True) # drop uninformative column
np.random.seed(42)
self.o = DeploySupervisedModel(modeltype='classification',
df=df,
graincol='PatientEncounterID',
windowcol='InTestWindowFLG',
predictedcol='ThirtyDayReadmitFLG',
impute=True)
self.o.deploy(
method='linear',
cores=1,
server='localhost',
dest_db_schema_table='[SAM].[dbo].[HCPyDeployClassificationBASE]',
use_saved_model=False)
def runTest(self):
self.assertAlmostEqual(np.round(self.o.y_pred[5], 5), 0.18087)
def tearDown(self):
del self.o
| 35.398374 | 79 | 0.520441 | 375 | 4,354 | 5.957333 | 0.229333 | 0.03581 | 0.03402 | 0.042972 | 0.880483 | 0.880483 | 0.870188 | 0.857654 | 0.857654 | 0.857654 | 0 | 0.018512 | 0.379651 | 4,354 | 122 | 80 | 35.688525 | 0.808589 | 0.023656 | 0 | 0.84375 | 0 | 0 | 0.153556 | 0.067829 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.125 | false | 0 | 0.052083 | 0 | 0.21875 | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ef19c038134d3e8a35aaaee63945a5f7dadda605 | 4,682 | py | Python | tests/image_jaco_test.py | danielkychen/rasl | 75cea33a615e6ebe0e7f6d43d93bf81c8077f7e4 | [
"MIT"
] | 49 | 2016-04-18T17:22:26.000Z | 2021-10-31T01:31:35.000Z | tests/image_jaco_test.py | danielkychen/rasl | 75cea33a615e6ebe0e7f6d43d93bf81c8077f7e4 | [
"MIT"
] | 3 | 2018-05-11T11:59:25.000Z | 2018-05-23T14:21:56.000Z | tests/image_jaco_test.py | danielkychen/rasl | 75cea33a615e6ebe0e7f6d43d93bf81c8077f7e4 | [
"MIT"
] | 12 | 2016-11-08T08:31:02.000Z | 2021-04-23T18:12:09.000Z | # basic liveness tests for image_jaco
#
from __future__ import division
import pytest
import numpy as np
from rasl.toolbox import image_jaco # pylint:disable=import-error
image = np.zeros((4, 3))
image[1,:] = 1.0
zeros = np.zeros((4, 3))
def test_translate():
paramv = [10, -100]
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'translate', paramv)
J = J.reshape((4,3,2))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], 1) # translation preserves Iu
assert np.allclose(J[1, :, 1], 0) # translation preserves Iv
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'translate', paramv)
J = J.reshape((4,3,2))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], 0) # translation preserves Iu
assert np.allclose(J[1, :, 1], 1) # translation preserves Iv
def test_scale():
paramv = [1]
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'scale', paramv)
J = J.reshape((4,3,1))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], [0, 1, 2]) # scale increases with u
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'scale', paramv)
J = J.reshape((4,3,1))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], 1) # scale fixed with fixed v
def test_rotate():
paramv = [0]
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'rotate', paramv)
J = J.reshape((4,3,1))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], -1) # rotation from 0 fixed with v
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'rotate', paramv)
J = J.reshape((4,3,1))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], [0, 1, 2]) # rotation from 0 increases with u
def test_similarity():
paramv = [1, 0, 10, -100]
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'similarity', paramv)
J = J.reshape((4,3,4))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], [0, 1, 2]) # scale increases with u
assert np.allclose(J[1, :, 1], -1) # rotation from 0 fixed with v
assert np.allclose(J[1, :, 2], 1) # translation preserves Iu
assert np.allclose(J[1, :, 3], 0) # translation preserves Iv
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'similarity', paramv)
J = J.reshape((4,3,4))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], 1) # scale fixed with fixed v
assert np.allclose(J[1, :, 1], [0, 1, 2]) # rotation from 0 increases with u
assert np.allclose(J[1, :, 2], 0) # translation preserves Iu
assert np.allclose(J[1, :, 3], 1) # translation preserves Iv
def test_affine():
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'affine', None)
J = J.reshape((4,3,6))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 0], [0, 1, 2]) # increases with u
assert np.allclose(J[1, :, 1], 1) # fixed with fixed v
assert np.allclose(J[1, :, 2], 1) # fixed
assert np.allclose(J[1:, :, 3:], 0)
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'affine', None)
J = J.reshape((4,3,6))
assert np.allclose(J[0, :, :], 0)
assert np.allclose(J[2:, :, :], 0)
assert np.allclose(J[1, :, 3], [0, 1, 2]) # increases with u
assert np.allclose(J[1, :, 4], 1) # fixed with fixed v
assert np.allclose(J[1, :, 5], 1) # fixed
assert np.allclose(J[1:, :, :3], 0)
def test_projective():
paramv = np.zeros(8)
J = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'projective', paramv)
J = J.reshape((4,3,8))
# with paramv[6:]==0, reduces to affine, a simpler (though incomplete) test
Jaff = image_jaco(image.flatten(), zeros.flatten(), image.shape, 'affine', None)
Jaff = Jaff.reshape((4,3,6))
assert np.allclose(J[:, :, 0:6], Jaff)
J = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'projective', paramv)
J = J.reshape((4,3,8))
Jaff = image_jaco(zeros.flatten(), image.flatten(), image.shape, 'affine', None)
Jaff = Jaff.reshape((4,3,6))
assert np.allclose(J[:, :, 0:6], Jaff)
def test_BOGUS():
with pytest.raises(ValueError) as info:
image_jaco(None, None, (4, 3), 'BOGUS', None)
assert str(info.value).endswith('BOGUS')
| 41.803571 | 87 | 0.602093 | 743 | 4,682 | 3.756393 | 0.094213 | 0.131852 | 0.263705 | 0.280186 | 0.836976 | 0.836976 | 0.815478 | 0.79828 | 0.784307 | 0.620566 | 0 | 0.05322 | 0.197352 | 4,682 | 111 | 88 | 42.18018 | 0.689462 | 0.136907 | 0 | 0.489583 | 0 | 0 | 0.028401 | 0 | 0 | 0 | 0 | 0 | 0.489583 | 1 | 0.072917 | false | 0 | 0.041667 | 0 | 0.114583 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
329382ef63339522426ab5907c8820cdc9d068ac | 2,518 | py | Python | tachyon/propiedades/forms.py | Tachyon-BR/TachyonBR | 9afc99bbf0f6dea19d34dc1487165174733a0e66 | [
"MIT"
] | 1 | 2020-01-29T18:35:22.000Z | 2020-01-29T18:35:22.000Z | tachyon/propiedades/forms.py | Tachyon-BR/TachyonBR | 9afc99bbf0f6dea19d34dc1487165174733a0e66 | [
"MIT"
] | 12 | 2020-03-04T20:37:33.000Z | 2022-03-12T00:17:24.000Z | tachyon/propiedades/forms.py | Tachyon-BR/TachyonBR | 9afc99bbf0f6dea19d34dc1487165174733a0e66 | [
"MIT"
] | null | null | null | from django import forms
from .models import *
class IbanForm(forms.Form):
phone = forms.CharField(max_length = 100)
name = forms.CharField(max_length = 100)
class CrearPropiedadForma(forms.Form):
oferta = forms.CharField(max_length = 30)
tipo = forms.CharField(max_length = 30)
titulo = forms.CharField(max_length = 200)
desc = forms.CharField()
habs = forms.IntegerField(required=False)
banos = forms.FloatField(required=False)
garaje = forms.IntegerField(required=False)
pais = forms.CharField(max_length = 100)
estado = forms.CharField(max_length = 100)
codigo_postal = forms.CharField(max_length = 5)
colonia = forms.CharField(max_length = 200)
direccion = forms.CharField(max_length = 300)
precio = forms.CharField(max_length = 100)
negociable = forms.BooleanField(widget=forms.CheckboxInput(), required=False)
dif = forms.CharField(max_length = 100, required=False)
m_terr = forms.CharField(max_length = 30)
m_cons = forms.CharField(max_length = 30)
pisos = forms.IntegerField(required=False)
portada = forms.ImageField(widget=forms.ClearableFileInput())
# extra = forms.ImageField(widget=forms.ClearableFileInput(attrs={'multiple': True}))
video = forms.CharField(max_length = 150, required=False)
class EditarPropiedadForma(forms.Form):
oferta = forms.CharField(max_length = 30)
tipo = forms.CharField(max_length = 30)
titulo = forms.CharField(max_length = 200)
desc = forms.CharField()
habs = forms.IntegerField(required=False)
banos = forms.FloatField(required=False)
garaje = forms.IntegerField(required=False)
pais = forms.CharField(max_length = 100)
estado = forms.CharField(max_length = 100)
codigo_postal = forms.CharField(max_length = 5)
colonia = forms.CharField(max_length = 200)
direccion = forms.CharField(max_length = 300)
precio = forms.CharField(max_length = 100)
negociable = forms.BooleanField(widget=forms.CheckboxInput(), required=False)
dif = forms.CharField(max_length = 100, required=False)
m_terr = forms.CharField(max_length = 30)
m_cons = forms.CharField(max_length = 30)
pisos = forms.IntegerField(required=False)
portada = forms.ImageField(widget=forms.ClearableFileInput(), required=False)
extra = forms.ImageField(widget=forms.ClearableFileInput(attrs={'multiple': True}), required=False)
video = forms.CharField(max_length = 150, required=False)
| 46.62963 | 104 | 0.712073 | 297 | 2,518 | 5.922559 | 0.188552 | 0.238772 | 0.270608 | 0.366117 | 0.918704 | 0.889142 | 0.889142 | 0.889142 | 0.839113 | 0.76407 | 0 | 0.034749 | 0.177125 | 2,518 | 53 | 105 | 47.509434 | 0.814189 | 0.032963 | 0 | 0.791667 | 0 | 0 | 0.003361 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 11 |
32b19e9447f20e616b89c8e5275797ed3fafb77c | 12,432 | py | Python | morse-stf/stensorflow/basic/operator/sigmoid.py | alipay/Antchain-MPC | f6916465e1da5722ca7efadc4eeaca13ec229707 | [
"Apache-2.0"
] | 33 | 2021-11-23T09:04:03.000Z | 2022-03-14T07:56:31.000Z | morse-stf/stensorflow/basic/operator/sigmoid.py | qizhi-zhang/Antchain-MPC | f551170f68b0baff328e6594484e9832230fe719 | [
"Apache-2.0"
] | null | null | null | morse-stf/stensorflow/basic/operator/sigmoid.py | qizhi-zhang/Antchain-MPC | f551170f68b0baff328e6594484e9832230fe719 | [
"Apache-2.0"
] | 6 | 2021-11-25T12:38:41.000Z | 2022-02-23T03:29:51.000Z | #!/usr/bin/env python
# coding=utf-8
"""
Ant Group
Copyright (c) 2004-2020 All Rights Reserved.
------------------------------------------------------
File Name : sigmoid
Author : Qizhi Zhang
Email: qizhi.zqz@antgroup.com
Create Time : 2020-05-14 11:42
Description : polynomial, minimax, and Fourier-series approximations of the sigmoid function for private/shared tensors
"""
from stensorflow.basic.basic_class.private import PrivateTensor
from stensorflow.basic.basic_class.pair import SharedPair
from stensorflow.basic.basic_class.share import sin2pi as sin2pi_share, cos2pi as cos2pi_share, SharedTensor
import tensorflow as tf
from stensorflow.global_var import StfConfig
from stensorflow.random.random import get_seed
from typing import Union
import numpy as np
def sigmoid_poly(x: SharedPair):
"""A Chebyshev polynomial approximation of the sigmoid function."""
w0 = 0.5
w1 = 0.2159198015
w3 = -0.0082176259
w5 = 0.0001825597
w7 = -0.0000018848
w9 = 0.0000000072
x1 = x
x2 = (x1 * x).dup_with_precision(x.fixedpoint)
x3 = (x2 * x).dup_with_precision(x.fixedpoint)
x5 = (x2 * x3).dup_with_precision(x.fixedpoint)
x7 = (x2 * x5).dup_with_precision(x.fixedpoint)
x9 = (x2 * x7).dup_with_precision(x.fixedpoint)
y1 = w1 * x1
y3 = w3 * x3
y5 = w5 * x5
y7 = w7 * x7
y9 = w9 * x9
z = y9 + y7 + y5 + y3 + y1 + tf.constant(w0)
return z
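Outside the shared-tensor machinery, the same degree-9 odd polynomial can be evaluated in plain Python to see how closely it tracks the true sigmoid. A plaintext sketch (the `COEFFS` dict and `sigmoid_poly_plain` name are illustrative; the 0.05 tolerance on [-4, 4] is an observed bound, not a guarantee from the source):

```python
import math

# Same coefficients as sigmoid_poly above, evaluated in the clear.
COEFFS = {0: 0.5, 1: 0.2159198015, 3: -0.0082176259,
          5: 0.0001825597, 7: -0.0000018848, 9: 0.0000000072}

def sigmoid_poly_plain(x):
    # Odd polynomial plus the 0.5 offset: p(x) = 0.5 + sum_d w_d * x**d
    return sum(c * x ** d for d, c in COEFFS.items())
```

For inputs in roughly [-4, 4] the polynomial stays within a few hundredths of `1 / (1 + exp(-x))`; farther out, a truncated polynomial of fixed degree necessarily diverges.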
def sigmoid_poly_minmax(x: SharedPair):
"""A minmax polynomial approximation of the sigmoid function.
"""
w0 = 0.5
w1 = 0.197
w3 = -0.004
x1 = x
x2 = x1 * x
x3 = x2 * x
y1 = w1 * x1
y3 = w3 * x3
z = y3 + y1 + tf.constant(w0)
# z = y7 + y5 + y3 + y1 + w0
return z
def sin2pi_bak(x: SharedPair, T: int = 1, k: Union[int, tf.Tensor] = None) -> SharedPair:
# sin(2kpix/T)
# print("x.xL.shape=", x.xL.shape)
# print("x.xR.shape=", x.xR.shape)
n = int(np.log2(T))
if 1 << n != T:
raise Exception("T must be a power of 2")
if k is None:
k = 1
if isinstance(k, int):
pass
elif isinstance(k, tf.Tensor):
if k.dtype in [tf.dtypes.int8, tf.dtypes.int16, tf.dtypes.int32, tf.dtypes.int64]:
pass
else:
raise Exception("the type of k is error")
with tf.device(x.ownerL):
yL = tf.stack([sin2pi_share(x.xL, x.fixedpoint + n, k), cos2pi_share(x.xL, x.fixedpoint + n, k)], axis=-1)
with tf.device(x.ownerR):
yR = tf.stack([cos2pi_share(x.xR, x.fixedpoint + n, k), sin2pi_share(x.xR, x.fixedpoint + n, k)], axis=-1)
if StfConfig.parties == 3:
with tf.device(x.ownerL):
yL = tf.expand_dims(yL, axis=-2)
zL = PrivateTensor(owner=x.ownerL)
zL.load_from_tf_tensor(yL)
with tf.device(x.ownerR):
yR = tf.expand_dims(yR, axis=-1)
zR = PrivateTensor(owner=x.ownerR)
zR.load_from_tf_tensor(yR)
result = (zL @ zR).dup_with_precision(new_fixedpoint=StfConfig.default_fixed_point)
result = result.squeeze(axis=[-1, -2])
else:
with tf.device(x.ownerL):
zL = PrivateTensor(owner=x.ownerL)
zL.load_from_tf_tensor(yL)
with tf.device(x.ownerR):
zR = PrivateTensor(owner=x.ownerR)
zR.load_from_tf_tensor(yR)
result = (zL * zR).reduce_sum(axis=[-1]).dup_with_precision(new_fixedpoint=StfConfig.default_fixed_point)
return result
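The recombination inside sin2pi_bak rests on the angle-addition identity: each party evaluates sine and cosine of its own share locally, and the cross terms are combined with one secure product (the `@` or `*` step above). A plaintext sketch of just the identity, with real-valued shares and no fixed-point encoding:

```python
import math

def sin_of_sum_from_shares(xL, xR):
    # sin(xL + xR) = sin(xL)*cos(xR) + cos(xL)*sin(xR); each factor in each
    # product is computable locally by the party holding that share.
    return math.sin(xL) * math.cos(xR) + math.cos(xL) * math.sin(xR)
```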
def sin2pi(x: SharedPair, T: int = 1, k: Union[int, tf.Tensor] = None) -> SharedPair:
# sin(2kpix/T)
# print("x.xL.shape=", x.xL.shape)
# print("x.xR.shape=", x.xR.shape)
n = int(np.log2(T))
if 1 << n != T:
raise Exception("T must be a power of 2, but is {}".format(T))
if k is None:
k = 1
if isinstance(k, int):
pass
elif isinstance(k, tf.Tensor):
if k.dtype in [tf.dtypes.int8, tf.dtypes.int16, tf.dtypes.int32, tf.dtypes.int64]:
pass
else:
raise Exception("the type of k is error")
if StfConfig.parties == 3:
with tf.device(StfConfig.RS[0]):
prf_flag = StfConfig.prf_flag
if prf_flag:
seed_xL = get_seed()
seed_xR = get_seed()
seed_sin = get_seed()
seed_cos = get_seed()
else:
seed_xL = None
seed_xR = None
seed_sin = None
seed_cos = None
xL_adjoint = x.xL.random_uniform_adjoint(seed_xL)
xR_adjoint = x.xR.random_uniform_adjoint(seed_xR)
_sin2pi_adjoint = sin2pi_share(xL_adjoint+xR_adjoint, x.fixedpoint + n, k)
_sin2pi_adjoint = tf.cast(_sin2pi_adjoint * (1 << x.fixedpoint), 'int64')
_cos2pi_adjoint = cos2pi_share(xL_adjoint+xR_adjoint, x.fixedpoint + n, k)
_cos2pi_adjoint = tf.cast(_cos2pi_adjoint * (1 << x.fixedpoint), 'int64')
sin2pi_adjointL = SharedTensor(shape=_sin2pi_adjoint.shape.as_list()).random_uniform_adjoint(seed_sin)
sin2pi_adjointR = SharedTensor(inner_value=_sin2pi_adjoint) - sin2pi_adjointL
cos2pi_adjointR = SharedTensor(shape=_cos2pi_adjoint.shape.as_list()).random_uniform_adjoint(seed_cos)
cos2pi_adjointL = SharedTensor(inner_value=_cos2pi_adjoint) - cos2pi_adjointR
with tf.device(x.ownerL):
if prf_flag:
xL_adjoint = x.xL.random_uniform_adjoint(seed_xL)
delta_xL = (x.xL - xL_adjoint) % (1 << n+x.fixedpoint)
#print("delta_xL=", delta_xL)
with tf.device(x.ownerR):
if prf_flag:
xR_adjoint = x.xR.random_uniform_adjoint(seed_xR)
delta_xR = (x.xR - xR_adjoint) % (1 << n+x.fixedpoint)
with tf.device(x.ownerL):
if prf_flag:
sin2pi_adjointL = sin2pi_adjointL.random_uniform_adjoint(seed_sin)
yL = sin2pi_share(delta_xL + delta_xR, x.fixedpoint+n, k) * (1<<x.fixedpoint) * cos2pi_adjointL + \
cos2pi_share(delta_xL + delta_xR, x.fixedpoint+n, k) * (1<<x.fixedpoint) * sin2pi_adjointL
with tf.device(x.ownerR):
if prf_flag:
cos2pi_adjointR = cos2pi_adjointR.random_uniform_adjoint(seed_cos)
yR = cos2pi_share(delta_xL + delta_xR, x.fixedpoint+n, k) * (1<<x.fixedpoint) * sin2pi_adjointR + \
sin2pi_share(delta_xL + delta_xR, x.fixedpoint+n, k) * (1<<x.fixedpoint) * cos2pi_adjointR
result = SharedPair(ownerL=x.ownerL, ownerR=x.ownerR, xL=yL, xR=yR, fixedpoint=2*x.fixedpoint)
result = result.dup_with_precision(x.fixedpoint)
else:
with tf.device(x.ownerL):
yL = tf.stack([sin2pi_share(x.xL, x.fixedpoint + n, k), cos2pi_share(x.xL, x.fixedpoint + n, k)], axis=-1)
zL = PrivateTensor(owner=x.ownerL)
zL.load_from_tf_tensor(yL)
with tf.device(x.ownerR):
yR = tf.stack([cos2pi_share(x.xR, x.fixedpoint + n, k), sin2pi_share(x.xR, x.fixedpoint + n, k)], axis=-1)
zR = PrivateTensor(owner=x.ownerR)
zR.load_from_tf_tensor(yR)
result = (zL * zR).reduce_sum(axis=[-1]).dup_with_precision(new_fixedpoint=StfConfig.default_fixed_point)
return result
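The three-party branch above follows a mask-and-open pattern: a helper party (StfConfig.RS[0]) samples a random adjoint r offline and precomputes sin(r) and cos(r); the compute parties open only delta = x - r and recombine by angle addition. A plaintext sketch of that pattern, ignoring secret shares and fixed-point scaling (`masked_sin` is an illustrative name):

```python
import math
import random

def masked_sin(x, rng=random.Random(42)):
    r = rng.uniform(0.0, 2.0 * math.pi)      # helper's random mask (adjoint)
    sin_r, cos_r = math.sin(r), math.cos(r)  # precomputed offline by the helper
    delta = x - r                            # the only value opened online
    # sin(x) = sin(delta + r) = sin(delta)*cos(r) + cos(delta)*sin(r)
    return math.sin(delta) * cos_r + math.cos(delta) * sin_r
```

Because delta is uniformly masked, opening it reveals nothing about x, yet the result equals sin(x) exactly.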
def sigmoid_sin(x: SharedPair, M=16):
"""Fourier series approximation of the sigmoid function.
https://arxiv.org/pdf/2109.11726.pdf """
if 1 << int(np.log2(M)) != M:
raise Exception("M must be a power of 2")
term = 6
sample_num = 256
X = np.linspace(-M, M, sample_num, endpoint=False) # sample_num evenly spaced samples from -M to +M
sigmoid = 1 / (1 + np.exp(-X))
sm5 = sigmoid - 0.5
sm5_odd = sm5 * 1.0
sm5_odd[0] = 0
F = np.fft.fft(sm5_odd)
a = F[0:term].imag
# a = tf.constant(a, dtype='float32', shape=[term]+[1]*len(x.shape))
a = np.reshape(a, newshape=[term] + [1] * len(x.shape))
a = tf.constant(a, dtype='float32')
integers = np.reshape(range(term), newshape=[term] + [1] * len(x.shape))
integers = tf.constant(integers, dtype='int64')
x = x.expend_dims(axis=[0])
s = sin2pi(x - M, T=2 * M, k=integers)
y = -a / 128 * s
y = y.reduce_sum(axis=[0])
y = y + 0.5
return y
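The same construction can be reproduced in plain NumPy to see what sigmoid_sin computes before any secret sharing: take the odd part of sigmoid(x) - 0.5 on [-M, M), keep the first few FFT sine coefficients, and resum. A plaintext sketch for scalar inputs (`sigmoid_fourier_plain` is an illustrative name; the 1/128 factor in the original is sample_num/2, the DFT-to-Fourier-series normalization):

```python
import numpy as np

def sigmoid_fourier_plain(x, M=16, term=6, sample_num=256):
    X = np.linspace(-M, M, sample_num, endpoint=False)
    sm5 = 1.0 / (1.0 + np.exp(-X)) - 0.5   # sigmoid shifted to an odd function
    sm5[0] = 0.0                           # zero the unpaired sample at x = -M
    a = np.fft.fft(sm5)[:term].imag        # scaled sine coefficients
    k = np.arange(term)
    s = np.sin(2.0 * np.pi * k * (x - M) / (2.0 * M))
    return float(np.sum(-a / (sample_num / 2.0) * s) + 0.5)
```

With six terms the truncated series is exact at x = 0 and tracks the overall S-shape; the MPC version only replaces the `np.sin` evaluation with the secret-shared `sin2pi` above.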


def sigmoid_idea(x: SharedPair, M=16):
    """The ideal sigmoid: reconstructs x in plaintext on party R, so it is a
    reference implementation only and is not privacy-preserving."""
    y = tf.sigmoid(x.to_tf_tensor("R"))
    z = SharedPair(ownerL="L", ownerR="R", shape=y.shape)
    z.load_from_tf_tensor(y)
    return z


def sigmoid_local(x: PrivateTensor):
    """Sigmoid computed locally, in plaintext, on the tensor's owner."""
    z = PrivateTensor(owner=x.owner)
    with tf.device(x.owner):
        y = tf.sigmoid(x.to_tf_tensor())
        z.load_from_tf_tensor(y)
    return z
# --- pysit/solvers/variable_density_acoustic/frequency/__init__.py
# --- repo: zfang-slim/pysit, license: BSD-3-Clause
from .variable_density_acoustic_frequency_scalar_1D import *
from .variable_density_acoustic_frequency_scalar_2D import *
from .variable_density_acoustic_frequency_scalar_3D import *