import re
from typing import List, IO, Any
import json
from .common import (
IntermidiateDataType,
ParameterDetailInfo,
ReturnInfo,
VariableInfo,
FunctionInfo,
ClassInfo,
SectionInfo,
)
from .utils import (
output_log,
LOG_LEVEL_NOTICE,
LOG_LEVEL_WARN,
)
class AnalysisResult:
def __init__(self):
self.section_info: List['SectionInfo'] = []
class RstLevel:
def __init__(self, level: int = 0, spaces: str = ""):
self._level = level
self._spaces = spaces
def __str__(self) -> str:
return "Level: {}, Spaces: {}".format(self.level(), self.num_spaces())
def level(self) -> int:
return self._level
def spaces(self) -> str:
return self._spaces
def num_spaces(self) -> int:
return len(self.spaces())
def make_next_level(self, spaces_to_add: str) -> 'RstLevel':
new_level = RstLevel(self._level + 1, self._spaces + spaces_to_add)
return new_level
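The `RstLevel` class above carries two pieces of state while the analyzer descends into nested RST blocks: a numeric depth and the accumulated indentation string. A minimal standalone sketch of the same bookkeeping (reimplemented here for illustration rather than imported, since the real class lives in this module):

```python
class Level:
    """Tracks nesting depth and the accumulated indentation string."""
    def __init__(self, level=0, spaces=""):
        self._level = level
        self._spaces = spaces

    def num_spaces(self):
        return len(self._spaces)

    def make_next_level(self, spaces_to_add):
        # Each nested block appends its extra indentation to the parent's,
        # so num_spaces() always reflects the full left margin.
        return Level(self._level + 1, self._spaces + spaces_to_add)

root = Level()
child = root.make_next_level("   ")       # a block indented by 3 spaces
grandchild = child.make_next_level("  ")  # 2 more spaces inside that
print(child.num_spaces(), grandchild.num_spaces())  # 3 5
```

Storing the literal spaces (not just a count) lets the parser build exact-width regex patterns like `^\s{N}` for each level.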
class BaseAnalyzer:
def __init__(self):
self.support_bge: bool = False
self.current_file: str = None
self.current_module: str = None
self.current_base_classes: str = None
self.blender_version: str = None
def set_blender_version(self, version: str):
self.blender_version = version
def enable_bge_support(self):
self.support_bge = True
def _is_bge_supported(self) -> bool:
return self.support_bge
def _cleanup_string(self, line: str) -> str:
result = line
result = re.sub(r":class:", " ", result)
result = re.sub(r"`", " ", result)
result = re.sub(r"^\s+", "", result)
result = re.sub(r"\s+$", "", result)
result = re.sub(r"\s+", " ", result)
return result
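The substitution chain in `_cleanup_string` strips RST `:class:` role markers and backticks, then normalizes whitespace. A self-contained copy of the same chain, shown on a sample docstring fragment (the input string is illustrative only):

```python
import re

def cleanup_string(line):
    # Same substitutions as _cleanup_string above: drop :class: role
    # markers and backticks, then trim the ends and collapse runs of
    # whitespace to single spaces.
    result = re.sub(r":class:", " ", line)
    result = re.sub(r"`", " ", result)
    result = re.sub(r"^\s+", "", result)
    result = re.sub(r"\s+$", "", result)
    result = re.sub(r"\s+", " ", result)
    return result

print(cleanup_string("  :class:`bpy.types.Object`   subclass  "))
# bpy.types.Object subclass
```

Replacing the role marker and backticks with spaces (rather than the empty string) keeps adjacent words from fusing before the final whitespace collapse.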
def _invalid_line(self, line: str, level: 'RstLevel'):
raise ValueError("Invalid line: {} (File name: {}, Level: {})"
.format(line.rstrip("\n"), self.current_file, level))
def _skip_until_next_le_level(self, file: IO[Any], level: 'RstLevel'):
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return
last_pos = file.tell()
line = file.readline()
def _parse_module(self, file: IO[Any], level: 'RstLevel') -> str:
line = file.readline()
m = re.match(r"^\.\. (currentmodule|module):: ([a-zA-Z0-9._]+)", line)
if m is None:
self._invalid_line(line, level)
module_name = m.group(2)
if self.blender_version is not None and self.blender_version != "":
version = [int(sp) for sp in self.blender_version.split(".")]
if not self.support_bge:
if version == [2, 90]:
if module_name.startswith("bpy.types."):
module_name = module_name[:module_name.rfind(".")]
if version == [2, 91]:
if module_name == "bpy.data":
module_name = "bpy"
if version == [2, 92]:
if module_name == "bpy.data":
module_name = "bpy"
return module_name
def _parse_base_class(self, file: IO[Any], level: 'RstLevel') -> List['IntermidiateDataType']:
line = file.readline()
m = re.match(r"^base (class|classes) --- (.*)", line)
if m is None:
self._invalid_line(line, level)
base_classes = []
sps = self._parse_comma_separated_string(self._cleanup_string(m.group(2)))
for sp in sps:
base_classes.append(IntermidiateDataType(self._cleanup_string(sp)))
return base_classes
def _parse_description(self, file: IO[Any], level: 'RstLevel') -> str:
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}\S+"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
description = line
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return description
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return description
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(arg|type|return|rtype)", line):
file.seek(last_pos)
return description
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s*)\S+", line):
description += line
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return description
def _has_le_level_start(self, line: str, level: 'RstLevel') -> bool:
pattern = r"^\s{0," + str(level.num_spaces()) + r"}\.\."
if re.match(pattern, line):
return True
return False
def _has_le_level_string(self, line: str, level: 'RstLevel') -> bool:
pattern = r"^\s{0," + str(level.num_spaces()) + r"}\S+"
if re.match(pattern, line):
return True
return False
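Both `_has_le_level_*` checks rely on `\s{0,N}` to accept any line whose indentation is at most the current level's width, which is how the parser detects that a nested block has ended. A quick standalone illustration with a hypothetical width of 4:

```python
import re

def has_le_level_string(line, num_spaces):
    # Matches a line whose first non-space character appears within the
    # first `num_spaces` columns, i.e. indentation <= the current level.
    pattern = r"^\s{0," + str(num_spaces) + r"}\S+"
    return re.match(pattern, line) is not None

print(has_le_level_string("  foo", 4))      # True: indented 2 <= 4
print(has_le_level_string("      foo", 4))  # False: indented 6 > 4
```

Deeper-indented lines fail the match, so the caller knows they still belong to the current construct and keeps consuming them.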
def _parse_func_detail(self, file: IO[Any], level: 'RstLevel') -> dict:
def _parse_type(file: IO[Any], level: 'RstLevel') -> List[dict]:
last_pos = file.tell()
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:type ([a-zA-Z0-9_, ]+):(.*)"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
infos = []
for s in self._parse_comma_separated_string(m.group(1)):
infos.append({
"name": self._cleanup_string(s),
"type": "parameter",
"description": "",
"data_type": m.group(2),
})
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return infos
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return infos
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(arg|type|return|rtype)", line):
file.seek(last_pos)
return infos
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)(\S+)", line):
# TODO: should use this when we handle multiple lines.
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)(\S+)", line).group(1)
data_type = re.sub(r"\s+", " ", line)
for info in infos:
info["data_type"] += data_type
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\S+)", line):
file.seek(last_pos)
return infos
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return infos
def _parse_arg(file: IO[Any], level: 'RstLevel') -> List[dict]:
last_pos = file.tell()
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:(arg|param) ([a-zA-Z0-9_, ]+)\s*.*:(.*)"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
infos = []
for s in self._parse_comma_separated_string(m.group(2)):
infos.append({
"name": self._cleanup_string(s),
"type": "parameter",
"description": m.group(3),
"data_type": "",
})
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return infos
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return infos
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type|arg|return|rtype)", line):
file.seek(last_pos)
return infos
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)(\S+)", line):
description = re.sub(r"\s+", " ", line)
for info in infos:
info["description"] += description
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\S+)", line):
file.seek(last_pos)
return infos
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return infos
def _parse_return(file: IO[Any], level: 'RstLevel') -> str:
last_pos = file.tell()
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:return.*:(.*)" # TODO: handle :return vert: or :return (min, max): case
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
description = re.sub(r"\s+", " ", m.group(1))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return description
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type|arg|return|rtype)", line):
file.seek(last_pos)
return description
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)(\S+)", line):
description += re.sub(r"\s+", " ", line)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\S+)", line):
file.seek(last_pos)
return description
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return description
def _parse_rtype(file: IO[Any], level: 'RstLevel') -> str:
last_pos = file.tell()
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:rtype.*:(.*)" # TODO: handle :rtype vert: or :rtype (min, max): case
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
data_type = re.sub(r"\s+", " ", m.group(1))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return data_type
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return data_type
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type|arg|return|rtype)", line):
file.seek(last_pos)
return data_type
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)(\S+)", line):
data_type += re.sub(r"\s+", " ", line)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\S+)", line):
file.seek(last_pos)
return data_type
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return data_type
parameters_types = []
parameters_args = []
return_type = None
return_ = None
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
break
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type):", line):
self._skip_until_next_le_level(file, level=level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type|arg|param|return|rtype)", line):
m = re.match(r"^\s{" + str(level.num_spaces()) + r"}:(type|arg|param|return|rtype)", line)
file.seek(last_pos)
if m.group(1) == "type":
parameters_types.extend(_parse_type(file, level))
elif m.group(1) in ["arg", "param"]:
parameters_args.extend(_parse_arg(file, level))
elif m.group(1) == "return":
if return_ is not None:
raise ValueError(":return must appear only once: {} (File name: {}, Level: {})"
.format(line, self.current_file, level.level()))
return_ = _parse_return(file, level)
elif m.group(1) == "rtype":
if return_type is not None:
raise ValueError(":rtype must appear only once: {} (File name: {}, Level: {})"
.format(line, self.current_file, level.level()))
return_type = _parse_rtype(file, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}:(file):", line):
self._skip_until_next_le_level(file, level=level)
elif self._has_le_level_string(line, level):
file.seek(last_pos)
break
last_pos = file.tell()
line = file.readline()
# Merge.
info = {
"parameters": [],
"return": None,
}
for pt in parameters_types:
param_info = ParameterDetailInfo()
param_info.set_name(self._cleanup_string(pt["name"]))
param_info.set_data_type(IntermidiateDataType(self._cleanup_string(pt["data_type"])))
for pa in parameters_args:
if pt["name"] == pa["name"]:
param_info.append_description(" " + self._cleanup_string(pa["description"]))
param_info.set_description(self._cleanup_string(param_info.description()))
info["parameters"].append(param_info)
for pa in parameters_args:
for pi in parameters_types:
if pi["name"] == pa["name"]:
break
else:
param_info = ParameterDetailInfo()
param_info.set_name(self._cleanup_string(pa["name"]))
param_info.set_data_type(IntermidiateDataType(self._cleanup_string(pa["data_type"])))
info["parameters"].append(param_info)
if return_ is not None or return_type is not None:
return_info = ReturnInfo()
if return_ is not None:
return_info.set_description(self._cleanup_string(return_))
if return_type is not None:
return_info.set_data_type(IntermidiateDataType(self._cleanup_string(return_type)))
info["return"] = return_info
return info
def _parse_comma_separated_string(self, line: str) -> List[str]:
level = 0
params = []
current = ""
line_to_parse = line
for c in line_to_parse:
if c in ("(", "{", "["):
level += 1
elif c in (")", "}", "]"):
level -= 1
if level < 0:
raise ValueError("Level must be >= 0 but is {} (File name: {}, Line: {})"
.format(level, self.current_file, line))
if level == 0 and c == ",":
params.append(current)
current = ""
else:
current += c
if level != 0:
raise ValueError("Level must be == 0 but is {} (File name: {}, Line: {})"
.format(level, self.current_file, line))
if current != "":
params.append(current)
def is_builtin_value(value):
# Numerical default value.
m = re.search(r"^[-0-9.+e*]+$", value)
if m is not None:
return True
# String default value.
m = re.search(r"^('|\")(.*)('|\")$", value)
if m is not None:
return True
# Built-in default value.
m = re.search(r"^(None|True|False)$", value)
if m is not None:
return True
# Bin, hex, oct default value.
m = re.search(r"0[box][0-9A-Fa-f]+", value)
if m is not None:
return True
return False
# Convert data type to string about the custom data type.
params_converted = []
for p in params:
m = re.search(r"(.*)=(.*)", p)
if m is None:
# No default value.
params_converted.append(p)
continue
param_variable = m.group(1)
default_value = m.group(2)
if is_builtin_value(default_value):
params_converted.append(p)
continue
m = re.search(r"^\s*\{(.*)\}\s*$", default_value)
if m is not None:
# Set default value.
params_converted.append(p)
continue
m = re.search(r"^\s*\[(.*)\]\s*$", default_value)
if m is not None:
# List default value.
params_converted.append(p)
continue
m = re.search(r"^\s*\((.*)\)\s*$", default_value)
if m is not None:
# Tuple default value.
params_converted.append(p)
continue
# Custom data type
params_converted.append("{}='{}'".format(param_variable, default_value))
output_log(LOG_LEVEL_NOTICE, "'{}' is a parameter with custom data type".format(p))
return params_converted
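The core of `_parse_comma_separated_string` is a depth counter that makes commas inside parentheses, braces, or brackets inert, so a default value like `(1, 2)` is not split apart. A standalone sketch of just that splitting loop (without the custom-default-value rewriting that follows it above):

```python
def split_top_level(line):
    # Split on commas only at bracket depth 0, mirroring the
    # depth-tracking loop in _parse_comma_separated_string.
    depth = 0
    params, current = [], ""
    for c in line:
        if c in "({[":
            depth += 1
        elif c in ")}]":
            depth -= 1
        if depth == 0 and c == ",":
            params.append(current)
            current = ""
        else:
            current += c
    if current:
        params.append(current)
    return params

print(split_top_level("x, y=(1, 2), z={'a': 0}"))
# ['x', ' y=(1, 2)', " z={'a': 0}"]
```

Each parameter keeps its surrounding whitespace here; the real method passes every piece through `_cleanup_string` afterwards.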
def _parse_constant(self, file: IO[Any], level: 'RstLevel') -> 'VariableInfo':
def _parse_type(file: IO[Any], level: 'RstLevel') -> str:
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:type: (.*)"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
type_str = m.group(1)
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
file.seek(last_pos)
return type_str
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return type_str
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return type_str
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
type_str += " " + self._parse_description(file, level=level.make_next_level(next_level_spaces))
type_str = self._cleanup_string(type_str)
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return type_str
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. (data|attribute|DATA):: ([a-zA-Z0-9_]+):*$"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = VariableInfo("constant")
info.set_name(self._cleanup_string(m.group(2)))
if self.current_module is not None:
info.set_module(self.current_module)
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):type:", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):type:", line).group(1)
file.seek(last_pos)
info.set_data_type(IntermidiateDataType(self._cleanup_string(_parse_type(file, level=level.make_next_level(next_level_spaces)))))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (to do)", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (to do)", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. _[a-zA-Z0-9-_]+:", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. _[a-zA-Z0-9-_]+:", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
def _parse_attribute(self, file: IO[Any], level: 'RstLevel') -> 'VariableInfo':
def _parse_type(file: IO[Any], level: 'RstLevel') -> str:
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}:type: (.*)"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
type_str = m.group(1)
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
file.seek(last_pos)
return type_str
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return type_str
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return type_str
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
type_str += " " + self._parse_description(file, level=level.make_next_level(next_level_spaces))
type_str = self._cleanup_string(type_str)
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return type_str
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. (data|attribute):: ([a-zA-Z0-9_]+)$"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = VariableInfo("attribute")
info.set_name(self._cleanup_string(m.group(2)))
if self.current_module is not None:
info.set_module(self.current_module)
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):type:", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):type:", line).group(1)
file.seek(last_pos)
info.set_data_type(IntermidiateDataType(self._cleanup_string(_parse_type(file, level=level.make_next_level(next_level_spaces)))))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (seealso|warning|note|code-block|deprecated)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (seealso|warning|note|code-block|deprecated)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note):", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note):", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
def _get_multiline_string(self, file: IO[Any], level: 'RstLevel') -> str:
line = file.readline()
line = line.rstrip("\n")
long_line = line
while len(line) >= 1 and line[-1] == "\\":
line = file.readline()
line = line.rstrip("\n")
long_line += line
long_line = re.sub(r"\\", "", long_line)
return long_line
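`_get_multiline_string` joins backslash-continued lines into a single logical line before the directive regexes run, which is what lets long `.. function::` signatures be wrapped in the RST source. A self-contained version of the same loop, fed from an in-memory file (the sample input is illustrative):

```python
import io
import re

def get_multiline_string(file):
    # Read lines, joining as long as the previous line ends with a
    # backslash; then strip the backslashes, as the method above does.
    line = file.readline().rstrip("\n")
    long_line = line
    while len(line) >= 1 and line[-1] == "\\":
        line = file.readline().rstrip("\n")
        long_line += line
    return re.sub(r"\\", "", long_line)

src = io.StringIO(".. function:: foo(a, \\\n   b, c)\nnext line\n")
print(get_multiline_string(src))  # .. function:: foo(a,    b, c)
```

Note the continuation's leading indentation survives the join; the downstream `\s*\((.*)\)` patterns and `_cleanup_string` tolerate the extra spaces.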
def _parse_function(self, file: IO[Any], level: 'RstLevel') -> 'FunctionInfo':
line = self._get_multiline_string(file, level)
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. (function|method):: ([a-zA-Z0-9_]+)\s*\((.*)\)"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = FunctionInfo("function")
info.set_name(self._cleanup_string(m.group(2)))
if self.current_module is not None:
info.set_module(self.current_module)
for p in self._parse_comma_separated_string(m.group(3)):
info.add_parameter(self._cleanup_string(p))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line).group(1)
file.seek(last_pos)
detail = self._parse_func_detail(file, level=level.make_next_level(next_level_spaces))
info.add_parameter_details(detail["parameters"])
if detail["return"] is not None:
info.set_return(detail["return"])
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (seealso|note|warning|code-block)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (seealso|note|warning|code-block)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (warning):", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (warning):", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
def _parse_class(self, file: IO[Any], level: 'RstLevel') -> 'ClassInfo':
def _parse_method(file: IO[Any], level: 'RstLevel') -> 'FunctionInfo':
line = self._get_multiline_string(file, level)
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. method:: ([a-zA-Z0-9_]+)\((.*)\):*$"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = FunctionInfo("method")
info.set_name(self._cleanup_string(m.group(1)))
if self.current_module is not None:
info.set_module(self.current_module)
for p in self._parse_comma_separated_string(m.group(2)):
info.add_parameter(self._cleanup_string(p))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line).group(1)
file.seek(last_pos)
detail = self._parse_func_detail(file, level=level.make_next_level(next_level_spaces))
info.add_parameter_details(detail["parameters"])
if detail["return"] is not None:
info.set_return(detail["return"])
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block|warning|literalinclude|seealso|deprecated)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block|warning|literalinclude|seealso|deprecated)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
def _parse_class_method(file: IO[Any], level: 'RstLevel') -> 'FunctionInfo':
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. classmethod:: ([a-zA-Z0-9_]+)\((.*)\):*$"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = FunctionInfo("classmethod")
info.set_name(self._cleanup_string(m.group(1)))
if self.current_module is not None:
info.set_module(self.current_module)
for p in self._parse_comma_separated_string(m.group(2)):
info.add_parameter(self._cleanup_string(p))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line).group(1)
file.seek(last_pos)
detail = self._parse_func_detail(file, level=level.make_next_level(next_level_spaces))
info.add_parameter_details(detail["parameters"])
if detail["return"] is not None:
info.set_return(detail["return"])
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|warning)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|warning)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
def _parse_static_method(file: IO[Any], level: 'RstLevel') -> 'FunctionInfo':
line = file.readline()
pattern = r"^\s{" + str(level.num_spaces()) + r"}\.\. (staticmethod|function):: ([a-zA-Z0-9_]+)\((.*)\):*$"
m = re.match(pattern, line)
if m is None:
self._invalid_line(line, level)
info = FunctionInfo("staticmethod")
info.set_name(self._cleanup_string(m.group(2)))
if self.current_module is not None:
info.set_module(self.current_module)
for p in self._parse_comma_separated_string(m.group(3)):
info.add_parameter(self._cleanup_string(p))
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+):(type|arg|param|return|rtype)", line).group(1)
file.seek(last_pos)
detail = self._parse_func_detail(file, level=level.make_next_level(next_level_spaces))
info.add_parameter_details(detail["parameters"])
if detail["return"] is not None:
info.set_return(detail["return"])
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|tip)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|tip)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
line = file.readline()
m = re.match(r"^\s{" + str(level.num_spaces()) + r"}\.\. class:: ([a-zA-Z0-9_]+)(\([a-zA-Z0-9_,]+\))*", line)
if m is None:
self._invalid_line(line, level)
class_name = self._cleanup_string(m.group(1))
info = ClassInfo()
info.set_name(class_name)
if self.current_module is not None:
info.set_module(self.current_module)
if self.current_base_classes is not None:
info.add_base_classes(self.current_base_classes)
last_pos = file.tell()
line = file.readline()
while line:
if re.match(r"^\s*$", line):
pass
elif self._has_le_level_start(line, level):
file.seek(last_pos)
return info
elif self._has_le_level_string(line, level):
file.seek(last_pos)
return info
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. data::", line):
# TODO: Should use assignment expression introduced in Python 3.8
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. data::", line).group(1)
file.seek(last_pos)
attr = self._parse_attribute(file, level=level.make_next_level(next_level_spaces))
attr.set_class(class_name)
info.add_attribute(attr)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. attribute::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. attribute::", line).group(1)
is_deprecated = re.search(r"\(Deprecated", line) is not None
if self._is_bge_supported() and is_deprecated:
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
else:
file.seek(last_pos)
attr = self._parse_attribute(file, level=level.make_next_level(next_level_spaces))
attr.set_class(class_name)
info.add_attribute(attr)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. method::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. method::", line).group(1)
file.seek(last_pos)
method = _parse_method(file, level=level.make_next_level(next_level_spaces))
method.set_class(class_name)
info.add_method(method)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. classmethod::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. classmethod::", line).group(1)
file.seek(last_pos)
method = _parse_class_method(file, level=level.make_next_level(next_level_spaces))
method.set_class(class_name)
info.add_method(method)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. staticmethod::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. staticmethod::", line).group(1)
file.seek(last_pos)
method = _parse_static_method(file, level=level.make_next_level(next_level_spaces))
method.set_class(class_name)
info.add_method(method)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. function::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. function::", line).group(1)
file.seek(last_pos)
method = _parse_static_method(file, level=level.make_next_level(next_level_spaces))
method.set_class(class_name)
info.add_method(method)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block|warning|literalinclude|seealso)::", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\. (note|code-block|warning|literalinclude|seealso)::", line).group(1)
self._skip_until_next_le_level(file, level=level.make_next_level(next_level_spaces))
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\.\.", line):
self._invalid_line(line, level)
elif re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line):
next_level_spaces = re.match(r"^\s{" + str(level.num_spaces()) + r"}(\s+)\S+", line).group(1)
file.seek(last_pos)
info.append_description(" " + self._cleanup_string(self._parse_description(file, level=level.make_next_level(next_level_spaces))))
info.set_description(self._cleanup_string(info.description()))
else:
self._invalid_line(line, level)
last_pos = file.tell()
line = file.readline()
return info
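The `TODO` in `_parse_class` notes that the repeated `re.match(...)` followed by `re.match(...).group(1)` on the same pattern could collapse into a single match using Python 3.8 assignment expressions. A small sketch of that pattern (`classify` and its two directives are hypothetical, chosen only to mirror the parser's regexes):

```python
import re

def classify(line: str, indent: int):
    """Collapse the duplicated re.match calls into one via the walrus
    operator (Python 3.8+). Hypothetical helper, not the parser itself."""
    prefix = r"^\s{" + str(indent) + r"}(\s+)"
    if m := re.match(prefix + r"\.\. data::", line):
        return ("data", m.group(1))
    elif m := re.match(prefix + r"\S+", line):
        return ("text", m.group(1))
    return (None, None)

assert classify("    .. data:: x", 2) == ("data", "  ")
assert classify("    hello", 2) == ("text", "  ")
```

Each branch matches once and reuses `m`, instead of running the same regex twice as the current code does.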
def _modify(self, result: 'AnalysisResult'):
pass
def _analyze_by_file(self, filename: str) -> 'SectionInfo':
self.current_file = filename
with open(filename, "r", encoding="utf-8") as file:
last_pos = file.tell()
line = file.readline()
section = SectionInfo()
self.current_base_classes = None
if self._is_bge_supported() and re.search(r"/bge\.types\.(?!rst)", filename) is not None:
self.current_module = "bge.types"
else:
self.current_module = None
while line:
if re.match(r"^base (class|classes) ---", line):
if self.current_base_classes is not None:
self._invalid_line(line, 0)
file.seek(last_pos)
self.current_base_classes = self._parse_base_class(file, level=RstLevel())
elif re.match(r"^\.\. (currentmodule|module)::", line):
if self.current_module is not None:
self._invalid_line(line, 0)
file.seek(last_pos)
self.current_module = self._cleanup_string(self._parse_module(file, level=RstLevel()))
elif re.match(r"^\.\. class::", line):
file.seek(last_pos)
class_info = self._parse_class(file, level=RstLevel())
section.add_info(class_info)
elif re.match(r"^\.\. function::", line):
is_deprecated = re.search(r"\(Deprecated", line) is not None
if self._is_bge_supported() and is_deprecated:
self._skip_until_next_le_level(file, level=RstLevel())
else:
file.seek(last_pos)
function_info = self._parse_function(file, level=RstLevel())
section.add_info(function_info)
elif re.match(r"^\.\. method::", line):
file.seek(last_pos)
function_info = self._parse_function(file, level=RstLevel())
section.add_info(function_info)
elif re.match(r"^\.\. (data|DATA)::", line):
is_deprecated = re.search(r"\(Deprecated", line) is not None
if self._is_bge_supported() and is_deprecated:
self._skip_until_next_le_level(file, level=RstLevel())
else:
file.seek(last_pos)
data_info = self._parse_constant(file, level=RstLevel())
section.add_info(data_info)
elif re.match(r"^\.\. attribute::", line):
file.seek(last_pos)
data_info = self._parse_constant(file, level=RstLevel())
section.add_info(data_info)
elif (re.match(r"^\.\. include::", line) or
re.match(r"^\.\. literalinclude::", line) or
re.match(r"^\.\. note::", line) or
re.match(r"^\.\. rubric::", line) or
re.match(r"^\.\. hlist::", line) or
re.match(r"^\.\. toctree::", line) or
re.match(r"^\.\. warning::", line) or
re.match(r"^\.\. code-block::", line) or
re.match(r"^\.\. seealso::", line) or
re.match(r"^\.\. note:", line) or
re.match(r"^\.\. note,", line) or
re.match(r"^\.\.$", line) or
re.match(r"^\.\. _[a-zA-Z0-9-_]+:", line) or
re.match(r"^ :Attributes:", line)):
self._skip_until_next_le_level(file, level=RstLevel())
elif re.match(r"^\.\.", line):
self._invalid_line(line, 0)
elif re.match(r"^\s+\.\.", line):
self._invalid_line(line, 0)
elif re.match(r"^\s+:", line):
self._invalid_line(line, 0)
last_pos = file.tell()
line = file.readline()
section_none_removed = SectionInfo()
for info in section.info_list:
if info.module() is not None:
section_none_removed.add_info(info)
return section_none_removed
def analyze(self, filenames: List[str]) -> 'AnalysisResult':
result = AnalysisResult()
for f in filenames:
info = self._analyze_by_file(f)
result.section_info.append(info)
self._modify(result)
return result
class AnalyzerWithModFile(BaseAnalyzer):
def __init__(self, mod_files: List[str]):
super(AnalyzerWithModFile, self).__init__()
self._mod_files: List[str] = mod_files
def _modify_with_mod_files(self, result: 'AnalysisResult'):
for mod_file in self._mod_files:
self._modify_with_mod_file(mod_file, result)
def _modify_with_mod_file(self, mod_file: str, result: 'AnalysisResult'):
with open(mod_file, encoding="utf-8") as f:
data = json.load(f)
# Process "remove" field
# - Remove item if the same item exists in AnalysisResult.
if "remove" in data.keys():
for item in data["remove"]:
for section in result.section_info:
remove_list = []
for info in section.info_list:
if ("type" not in item) or (info.type() != item["type"]):
continue
if ("name" not in item) or (info.name() != item["name"]):
continue
if (("module" in item) and (info.module() == item["module"])) or\
(("module" not in item) and (info.module() is None)):
remove_list.append(info)
for rm in remove_list:
section.info_list.remove(rm)
output_log(LOG_LEVEL_NOTICE,
"{} (type={}) is removed"
.format(rm.name(), rm.type()))
# Process "new" field
# - Add item if the same item doesn't exist in AnalysisResult.
if "new" in data.keys():
new_section = SectionInfo()
for item in data["new"]:
# check if entry is already registered
has_entry = False
for section in result.section_info:
for info in section.info_list:
if ("type" not in item) or (info.type() != item["type"]):
continue
if ("name" not in item) or (info.name() != item["name"]):
continue
if ("module" not in item) or (info.module() != item["module"]):
continue
has_entry = True
break
if has_entry:
break
if not has_entry:
if item["type"] == "constant":
new_v = VariableInfo("constant")
new_v.from_dict(item, 'NEW')
new_section.info_list.append(new_v)
elif item["type"] == "function":
new_f = FunctionInfo("function")
new_f.from_dict(item, 'NEW')
new_section.info_list.append(new_f)
elif item["type"] == "class":
new_c = ClassInfo()
new_c.from_dict(item, 'NEW')
new_section.info_list.append(new_c)
else:
raise RuntimeError("Unsupported Type: {}"
.format(item["type"]))
else:
output_log(LOG_LEVEL_WARN,
"{} is already registered"
.format(item["name"]))
result.section_info.append(new_section)
# Process "append" field
# - Add item's field if the same exists in AnalysisResult.
# - Value of item's field must be None.
if "append" in data.keys():
for item in data["append"]:
for section in result.section_info:
for info in section.info_list:
if ("type" not in item) or (info.type() != item["type"]):
continue
if ("name" not in item) or (info.name() != item["name"]):
continue
if ("module" not in item) or (info.module() != item["module"]):
continue
info.from_dict(item, 'APPEND')
# Process "update" field
# - Update item's field if the same exists in AnalysisResult.
# - Value of item's field can be None or some values.
if "update" in data.keys():
for item in data["update"]:
for section in result.section_info:
for info in section.info_list:
if ("type" not in item) or (info.type() != item["type"]):
continue
if ("name" not in item) or (info.name() != item["name"]):
continue
if ("module" not in item) or (info.module() != item["module"]):
continue
info.from_dict(item, 'UPDATE')
def _modify_post_process(self, result: 'AnalysisResult'):
pass
def _modify(self, result: 'AnalysisResult'):
self._modify_with_mod_files(result)
self._modify_post_process(result)
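The mod-file fields processed above follow a simple merge discipline: `"remove"` drops entries that match on type/name/module, while `"new"` adds an entry only if no equivalent one is already registered. A condensed sketch of those two semantics, assuming plain dicts in place of the `*Info` classes (`apply_mod` is illustrative, not the real implementation):

```python
def apply_mod(entries, mod):
    """Sketch of the mod-file merge semantics: "remove" drops matching
    entries, "new" adds only entries that are not present yet."""
    for item in mod.get("remove", []):
        entries = [e for e in entries
                   if not (e.get("type") == item.get("type")
                           and e.get("name") == item.get("name"))]
    for item in mod.get("new", []):
        exists = any(e.get("type") == item["type"] and e.get("name") == item["name"]
                     for e in entries)
        if not exists:
            entries.append(item)
    return entries

entries = [{"type": "constant", "name": "PI", "module": "math"}]
mod = {"remove": [{"type": "constant", "name": "PI"}],
       "new": [{"type": "function", "name": "f", "module": "m"}]}
assert apply_mod(entries, mod) == [{"type": "function", "name": "f", "module": "m"}]
```

The real code additionally distinguishes entries whose mod item omits `"module"` (matching only infos with `module is None`) and logs each removal and duplicate registration.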
| 47.174249 | 176 | 0.508188 | 6,405 | 54,958 | 4.143169 | 0.039969 | 0.016656 | 0.040396 | 0.037306 | 0.821909 | 0.790142 | 0.770208 | 0.747975 | 0.731469 | 0.727324 | 0 | 0.003521 | 0.348812 | 54,958 | 1,164 | 177 | 47.214777 | 0.737985 | 0.017122 | 0 | 0.656436 | 0 | 0.090099 | 0.092739 | 0.020282 | 0 | 0 | 0 | 0.000859 | 0 | 1 | 0.043564 | false | 0.015842 | 0.004951 | 0.00495 | 0.127723 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c1bf6f6a397df7dd76ba2f44473b3ebd793f898 | 29 | py | Python | examples/int.py | mifieldxu/pseudo-lang | 889477c094236dc36526984be6f6537a4875e5a9 | [
"MIT"
] | 661 | 2016-03-12T07:32:36.000Z | 2018-11-12T14:31:30.000Z | examples/int.py | mifieldxu/pseudo-lang | 889477c094236dc36526984be6f6537a4875e5a9 | [
"MIT"
] | 21 | 2016-03-07T03:49:17.000Z | 2018-11-05T08:30:42.000Z | examples/int.py | mifieldxu/pseudo-lang | 889477c094236dc36526984be6f6537a4875e5a9 | [
"MIT"
] | 45 | 2016-03-07T03:48:09.000Z | 2018-04-16T20:55:47.000Z | def s(e):
return 2
s(2)
| 5.8 | 12 | 0.482759 | 7 | 29 | 2 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.344828 | 29 | 4 | 13 | 7.25 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d5bd2b771ad93f9b25f2900785d758c297504452 | 6,045 | py | Python | tests/fields/test_int.py | blazing-gig/tortoise-orm | 811bbcb12c702c5f45e3d86ce6e0b2ab386459df | [
"Apache-2.0"
] | 2,847 | 2018-08-27T12:02:21.000Z | 2022-03-31T01:30:40.000Z | tests/fields/test_int.py | blazing-gig/tortoise-orm | 811bbcb12c702c5f45e3d86ce6e0b2ab386459df | [
"Apache-2.0"
] | 983 | 2018-08-24T16:42:41.000Z | 2022-03-30T05:14:49.000Z | tests/fields/test_int.py | blazing-gig/tortoise-orm | 811bbcb12c702c5f45e3d86ce6e0b2ab386459df | [
"Apache-2.0"
] | 323 | 2018-09-04T23:38:42.000Z | 2022-03-31T06:49:17.000Z | from tests import testmodels
from tortoise.contrib import test
from tortoise.exceptions import IntegrityError
from tortoise.expressions import F
class TestIntFields(test.TestCase):
async def test_empty(self):
with self.assertRaises(IntegrityError):
await testmodels.IntFields.create()
async def test_create(self):
obj0 = await testmodels.IntFields.create(intnum=2147483647)
obj = await testmodels.IntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, 2147483647)
self.assertEqual(obj.intnum_null, None)
obj2 = await testmodels.IntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
await obj.delete()
obj = await testmodels.IntFields.filter(id=obj0.id).first()
self.assertEqual(obj, None)
async def test_update(self):
obj0 = await testmodels.IntFields.create(intnum=2147483647)
await testmodels.IntFields.filter(id=obj0.id).update(intnum=2147483646)
obj = await testmodels.IntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, 2147483646)
self.assertEqual(obj.intnum_null, None)
async def test_min(self):
obj0 = await testmodels.IntFields.create(intnum=-2147483648)
obj = await testmodels.IntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, -2147483648)
self.assertEqual(obj.intnum_null, None)
obj2 = await testmodels.IntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
async def test_cast(self):
obj0 = await testmodels.IntFields.create(intnum="3")
obj = await testmodels.IntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, 3)
async def test_values(self):
obj0 = await testmodels.IntFields.create(intnum=1)
values = await testmodels.IntFields.get(id=obj0.id).values("intnum")
self.assertEqual(values["intnum"], 1)
async def test_values_list(self):
obj0 = await testmodels.IntFields.create(intnum=1)
values = await testmodels.IntFields.get(id=obj0.id).values_list("intnum", flat=True)
self.assertEqual(values, 1)
async def test_f_expression(self):
obj0 = await testmodels.IntFields.create(intnum=1)
await obj0.filter(id=obj0.id).update(intnum=F("intnum") + 1)
obj1 = await testmodels.IntFields.get(id=obj0.id)
self.assertEqual(obj1.intnum, 2)
class TestSmallIntFields(test.TestCase):
async def test_empty(self):
with self.assertRaises(IntegrityError):
await testmodels.SmallIntFields.create()
async def test_create(self):
obj0 = await testmodels.SmallIntFields.create(smallintnum=32767)
obj = await testmodels.SmallIntFields.get(id=obj0.id)
self.assertEqual(obj.smallintnum, 32767)
self.assertEqual(obj.smallintnum_null, None)
await obj.save()
obj2 = await testmodels.SmallIntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
async def test_min(self):
obj0 = await testmodels.SmallIntFields.create(smallintnum=-32768)
obj = await testmodels.SmallIntFields.get(id=obj0.id)
self.assertEqual(obj.smallintnum, -32768)
self.assertEqual(obj.smallintnum_null, None)
await obj.save()
obj2 = await testmodels.SmallIntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
async def test_values(self):
obj0 = await testmodels.SmallIntFields.create(smallintnum=2)
values = await testmodels.SmallIntFields.get(id=obj0.id).values("smallintnum")
self.assertEqual(values["smallintnum"], 2)
async def test_values_list(self):
obj0 = await testmodels.SmallIntFields.create(smallintnum=2)
values = await testmodels.SmallIntFields.get(id=obj0.id).values_list(
"smallintnum", flat=True
)
self.assertEqual(values, 2)
async def test_f_expression(self):
obj0 = await testmodels.SmallIntFields.create(smallintnum=1)
await obj0.filter(id=obj0.id).update(smallintnum=F("smallintnum") + 1)
obj1 = await testmodels.SmallIntFields.get(id=obj0.id)
self.assertEqual(obj1.smallintnum, 2)
class TestBigIntFields(test.TestCase):
async def test_empty(self):
with self.assertRaises(IntegrityError):
await testmodels.BigIntFields.create()
async def test_create(self):
obj0 = await testmodels.BigIntFields.create(intnum=9223372036854775807)
obj = await testmodels.BigIntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, 9223372036854775807)
self.assertEqual(obj.intnum_null, None)
await obj.save()
obj2 = await testmodels.BigIntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
async def test_min(self):
obj0 = await testmodels.BigIntFields.create(intnum=-9223372036854775808)
obj = await testmodels.BigIntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, -9223372036854775808)
self.assertEqual(obj.intnum_null, None)
await obj.save()
obj2 = await testmodels.BigIntFields.get(id=obj.id)
self.assertEqual(obj, obj2)
async def test_cast(self):
obj0 = await testmodels.BigIntFields.create(intnum="3")
obj = await testmodels.BigIntFields.get(id=obj0.id)
self.assertEqual(obj.intnum, 3)
async def test_values(self):
obj0 = await testmodels.BigIntFields.create(intnum=1)
values = await testmodels.BigIntFields.get(id=obj0.id).values("intnum")
self.assertEqual(values["intnum"], 1)
async def test_values_list(self):
obj0 = await testmodels.BigIntFields.create(intnum=1)
values = await testmodels.BigIntFields.get(id=obj0.id).values_list("intnum", flat=True)
self.assertEqual(values, 1)
async def test_f_expression(self):
obj0 = await testmodels.BigIntFields.create(intnum=1)
await obj0.filter(id=obj0.id).update(intnum=F("intnum") + 1)
obj1 = await testmodels.BigIntFields.get(id=obj0.id)
self.assertEqual(obj1.intnum, 2)
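The `test_f_expression` cases above rely on `F("intnum") + 1` compiling to a database-side increment (`UPDATE ... SET intnum = intnum + 1`), so the new value is derived from whatever is currently stored rather than from a value the client read earlier. A plain-Python sketch of the difference, with no database involved (both helpers are illustrative only):

```python
def update_with_f(row, field):
    # F("field") + 1 style: increment computed from the currently stored value
    row[field] = row[field] + 1

def update_with_stale_value(row, field, value_read_earlier):
    # naive style: overwrites with a client-side value that may be stale
    row[field] = value_read_earlier + 1

row = {"intnum": 1}
stale = row["intnum"]          # reader takes a snapshot ...
row["intnum"] = 5              # ... another writer updates the row meanwhile
update_with_stale_value(row, "intnum", stale)
assert row["intnum"] == 2      # the concurrent write was lost

row = {"intnum": 5}
update_with_f(row, "intnum")
assert row["intnum"] == 6      # increment applied to the current value
```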
| 40.844595 | 95 | 0.688172 | 729 | 6,045 | 5.655693 | 0.082305 | 0.170992 | 0.044628 | 0.100412 | 0.87218 | 0.865147 | 0.85302 | 0.78341 | 0.739753 | 0.65268 | 0 | 0.05249 | 0.202647 | 6,045 | 147 | 96 | 41.122449 | 0.802905 | 0 | 0 | 0.625 | 0 | 0 | 0.01555 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.058333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d5cf753be8bbb5113f7aeab0979a70f86b1dd392 | 46 | py | Python | build/lib/DjangoTemplateConverter/__init__.py | iamaksingh11/DjangoTemplateConverter | 937372b4b1c80a14b0c403693339594d81cbfbae | [
"MIT"
] | null | null | null | build/lib/DjangoTemplateConverter/__init__.py | iamaksingh11/DjangoTemplateConverter | 937372b4b1c80a14b0c403693339594d81cbfbae | [
"MIT"
] | null | null | null | build/lib/DjangoTemplateConverter/__init__.py | iamaksingh11/DjangoTemplateConverter | 937372b4b1c80a14b0c403693339594d81cbfbae | [
"MIT"
] | null | null | null | from DjangoTemplateConverter.start import main | 46 | 46 | 0.913043 | 5 | 46 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5e3e15960136c3f80056aa5ed939738ce23329e | 9,108 | py | Python | thenewboston_node/business_logic/tests/test_blockchain/test_nodes.py | nishp77/thenewboston-node | 158b1f1739b2c6c9c21c80e9da854ca141f1cf8f | [
"MIT"
] | null | null | null | thenewboston_node/business_logic/tests/test_blockchain/test_nodes.py | nishp77/thenewboston-node | 158b1f1739b2c6c9c21c80e9da854ca141f1cf8f | [
"MIT"
] | null | null | null | thenewboston_node/business_logic/tests/test_blockchain/test_nodes.py | nishp77/thenewboston-node | 158b1f1739b2c6c9c21c80e9da854ca141f1cf8f | [
"MIT"
] | null | null | null | import pytest
from thenewboston_node.business_logic.blockchain.base import BlockchainBase
from thenewboston_node.business_logic.models import Block, NodeDeclarationSignedChangeRequest
from thenewboston_node.core.utils.cryptography import generate_key_pair
@pytest.mark.parametrize('blockchain_argument_name', ('memory_blockchain', 'file_blockchain'))
def test_can_get_single_node_from_blockchain_genesis_state(
file_blockchain: BlockchainBase, memory_blockchain: BlockchainBase, blockchain_argument_name
):
blockchain: BlockchainBase = locals()[blockchain_argument_name]
assert list(blockchain.yield_nodes()) == list(blockchain.get_first_blockchain_state().yield_nodes())
@pytest.mark.parametrize('blockchain_argument_name', ('memory_blockchain', 'file_blockchain'))
def test_can_get_nodes_from_genesis_state_and_blocks(
file_blockchain: BlockchainBase,
memory_blockchain: BlockchainBase,
user_account_key_pair,
blockchain_argument_name,
primary_validator_key_pair,
):
blockchain: BlockchainBase = locals()[blockchain_argument_name]
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node.non-existing.domain:8555/'],
fee_amount=3,
signing_key=user_account_key_pair.private
)
blocks_node = request.message.node
assert blocks_node.identifier
block = Block.create_from_signed_change_request(blockchain, request, primary_validator_key_pair.private)
blockchain.add_block(block)
blockchain_state_nodes = list(blockchain.get_first_blockchain_state().yield_nodes())
assert list(blockchain.yield_nodes()) == [blocks_node] + blockchain_state_nodes
@pytest.mark.parametrize('blockchain_argument_name', ('memory_blockchain', 'file_blockchain'))
def test_can_get_nodes_blocks_node_overrides_genesis_state_node(
file_blockchain: BlockchainBase, memory_blockchain: BlockchainBase, primary_validator_key_pair,
blockchain_argument_name, confirmation_validator
):
blockchain: BlockchainBase = locals()[blockchain_argument_name]
blockchain_state_nodes = list(blockchain.get_first_blockchain_state().yield_nodes())
assert len(blockchain_state_nodes) == 2
blockchain_state_node = blockchain_state_nodes[0] # TODO(dmu) LOW: Improve PV detection
assert primary_validator_key_pair.public == blockchain_state_node.identifier
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node.non-existing.domain:8555/'],
fee_amount=3,
signing_key=primary_validator_key_pair.private
)
blocks_node = request.message.node
assert blocks_node.identifier
assert blocks_node.identifier == blockchain_state_node.identifier
block = Block.create_from_signed_change_request(blockchain, request, primary_validator_key_pair.private)
blockchain.add_block(block)
assert blocks_node != blockchain_state_node
assert list(blockchain.yield_nodes()) == [blocks_node, confirmation_validator]
@pytest.mark.parametrize('blockchain_argument_name', ('memory_blockchain', 'file_blockchain'))
def test_can_get_nodes_from_different_block_numbers(
file_blockchain: BlockchainBase, memory_blockchain: BlockchainBase, primary_validator_key_pair,
blockchain_argument_name, confirmation_validator
):
blockchain: BlockchainBase = locals()[blockchain_argument_name]
blockchain_state_nodes = list(blockchain.get_first_blockchain_state().yield_nodes())
assert len(blockchain_state_nodes) == 2
blockchain_state_node = blockchain_state_nodes[0] # TODO(dmu) MEDIUM: Improve detection of PV
assert primary_validator_key_pair.public == blockchain_state_node.identifier
signing_key = primary_validator_key_pair.private
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node1.non-existing-domain:8555/'], fee_amount=3, signing_key=signing_key
)
node0 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node2.non-existing-domain:8555/'], fee_amount=3, signing_key=signing_key
)
node1 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node3.non-existing-domain:8555/'], fee_amount=3, signing_key=signing_key
)
node2 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
assert list(blockchain.yield_nodes()) == [node2, confirmation_validator]
assert list(blockchain.yield_nodes(block_number=2)) == [node2, confirmation_validator]
assert list(blockchain.yield_nodes(block_number=1)) == [node1, confirmation_validator]
assert list(blockchain.yield_nodes(block_number=0)) == [node0, confirmation_validator]
assert list(blockchain.yield_nodes(block_number=-1)) == [blockchain_state_node, confirmation_validator]
@pytest.mark.parametrize('blockchain_argument_name', ('memory_blockchain', 'file_blockchain'))
def test_can_get_nodes_from_complex_blockchain(
file_blockchain: BlockchainBase, memory_blockchain: BlockchainBase, blockchain_argument_name,
confirmation_validator, primary_validator_key_pair
):
key_pair1 = generate_key_pair()
key_pair2 = generate_key_pair()
key_pair3 = generate_key_pair()
key_pair4 = generate_key_pair()
key_pair5 = generate_key_pair()
key_pair6 = generate_key_pair()
assert len({
key_pair1.public, key_pair2.public, key_pair3.public, key_pair4.public, key_pair5.public, key_pair6.public
}) == 6
blockchain: BlockchainBase = locals()[blockchain_argument_name]
blockchain_state_nodes = list(blockchain.get_first_blockchain_state().yield_nodes())
assert len(blockchain_state_nodes) == 2
node1 = blockchain_state_nodes[0]
signing_key = primary_validator_key_pair.private
# Block 0
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node1.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair2.private
)
node2 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
# Block 1
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node2.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair3.private
)
node3_old = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
# Block 2
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node3.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair4.private
)
node4 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
assert set(blockchain.yield_nodes(block_number=2)) == {node4, node3_old, node2, node1, confirmation_validator}
blockchain.snapshot_blockchain_state()
# Block 3
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node4.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair3.private
)
node3 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
# Block 4
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node5.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair5.private
)
node5 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
# Block 5
request = NodeDeclarationSignedChangeRequest.create(
network_addresses=['http://new-node6.non-existing-domain:8555/'], fee_amount=3, signing_key=key_pair6.private
)
node6 = request.message.node
blockchain.add_block(Block.create_from_signed_change_request(blockchain, request, signing_key))
assert set(blockchain.yield_nodes()) == {node6, node5, node3, node4, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=5)
) == {node6, node5, node3, node4, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=4)) == {node5, node3, node4, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=3)) == {node3, node4, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=2)) == {node4, node3_old, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=1)) == {node3_old, node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=0)) == {node2, node1, confirmation_validator}
assert set(blockchain.yield_nodes(block_number=-1)) == {node1, confirmation_validator}
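The assertions above pin down the `yield_nodes()` semantics: a node declared in a block overrides the blockchain-state node with the same identifier, and `block_number` bounds which blocks are taken into account (`-1` meaning none, i.e. the genesis state only). A hypothetical sketch of that override logic with plain dicts (`effective_nodes` is not the real implementation):

```python
def effective_nodes(state_nodes, block_declarations, block_number=None):
    """Later declarations override earlier ones per node identifier;
    block_number limits how many blocks are applied (inclusive)."""
    nodes = {n["identifier"]: n for n in state_nodes}
    for i, decl in enumerate(block_declarations):
        if block_number is not None and i > block_number:
            break
        nodes[decl["identifier"]] = decl
    return list(nodes.values())

state = [{"identifier": "pv", "address": "old"}]
decls = [{"identifier": "pv", "address": "new"}]
assert effective_nodes(state, decls)[0]["address"] == "new"
assert effective_nodes(state, decls, block_number=-1)[0]["address"] == "old"
```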
from CommonServerPython import *
import json
import io
import pytest
from unittest import mock
from unittest.mock import patch
import SailPointIdentityIQ

''' TEST CONSTANTS '''

MOCK_IDENTITYIQ_BASE_URL = 'https://identityiq-server.com/identityiq'
MOCK_BEARER_TOKEN = 'RXAxTEQ0ZkhUVm94dmhIWDd1M2Q0TjU3NDRnQUYzN2ouZXVlV2h1WUk4OW9jMi95Zml'
MOCK_HEADERS = {
'Authorization': 'Bearer %s' % MOCK_BEARER_TOKEN,
'Content-Type': 'application/json'
}
MOCK_CLIENT = SailPointIdentityIQ.Client(base_url=MOCK_IDENTITYIQ_BASE_URL, verify=False, proxy=False,
headers=MOCK_HEADERS, max_results=1000, request_timeout=10)

''' HELPER/UTILITY FUNCTIONS '''

def util_load_json(path: str):
"""
Utility to load json data from a local folder.
"""
with io.open(path, mode='r', encoding='utf-8') as file:
return json.loads(file.read())
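As a side note, `json.load` can parse the open stream directly instead of reading the whole file into a string first. A minimal equivalent sketch (the helper name `load_json_direct` is illustrative and not used by the tests below):

```python
import io
import json


def load_json_direct(path: str):
    # Equivalent to util_load_json, but lets json parse the stream directly.
    with io.open(path, mode='r', encoding='utf-8') as file:
        return json.load(file)
```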
def util_mock_http_resp(status: int, json_data=None):
"""
Utility to mock http response.
"""
response = mock.Mock()
response.status_code = status
if json_data is not None:
response.json = mock.Mock(return_value=json_data)
return response
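To make the stub's shape explicit, here is a standalone sketch of the same pattern (the helper name `make_resp` is illustrative, not part of the integration): the tests only ever touch `status_code` and `.json()`, so a bare `Mock` with those two members is enough.

```python
from unittest import mock


def make_resp(status, json_data=None):
    # Same pattern as util_mock_http_resp: a Mock carrying a status_code
    # attribute and, when a body is supplied, a .json() method returning it.
    response = mock.Mock()
    response.status_code = status
    if json_data is not None:
        response.json = mock.Mock(return_value=json_data)
    return response


resp = make_resp(404, {'status': '404'})
assert resp.status_code == 404
assert resp.json() == {'status': '404'}
```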
def verify_scim_list_response(response, total_results):
"""
Verify SCIM structure for list response.
"""
assert response['totalResults'] == total_results
assert len(response['Resources']) == total_results
if 'startIndex' in response:
assert response['startIndex'] == 1
if 'schemas' in response:
assert 'urn:ietf:params:scim:api:messages:2.0:ListResponse' in response['schemas']
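A minimal SCIM ListResponse payload that passes these checks looks like the following (illustrative values only; the real fixtures used by the tests live under test_data/):

```python
minimal_list_response = {
    'schemas': ['urn:ietf:params:scim:api:messages:2.0:ListResponse'],
    'totalResults': 1,
    'startIndex': 1,
    'itemsPerPage': 1,
    'Resources': [{'id': 'example-id'}],  # length must equal totalResults
}
```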
def verify_user(user):
"""
Verify SCIM structure for IdentityIQ User.
"""
assert user['id'] is not None
assert user['userName'] is not None
assert user['active'] is True
assert user['displayName'] is not None
assert 'urn:ietf:params:scim:schemas:core:2.0:User' in user['schemas']
def verify_policy_violation(policy_violation):
"""
Verify SCIM structure for IdentityIQ PolicyViolation.
"""
assert policy_violation['id'] is not None
assert policy_violation['constraintName'] is not None
assert policy_violation['status'] in ['Open', 'Closed', 'Mitigated']
assert policy_violation['policyName'] is not None
assert 'urn:ietf:params:scim:schemas:sailpoint:1.0:PolicyViolation' in policy_violation['schemas']
assert policy_violation['identity']['displayName'] is not None
assert policy_violation['identity']['value'] is not None
def verify_task_result(task_result):
"""
Verify SCIM structure for IdentityIQ TaskResult.
"""
assert task_result['id'] is not None
assert task_result['taskDefinition'] is not None
assert task_result['name'] is not None
assert task_result['host'] is not None
assert task_result['type'] is not None
assert task_result['pendingSignoffs'] is not None
assert task_result['completionStatus'] in ['Success', 'Error']
assert task_result['launcher'] is not None
assert task_result['completed'] is not None
def verify_account(account):
"""
Verify SCIM structure for IdentityIQ Account response.
"""
assert account['id'] is not None
assert account['nativeIdentity'] is not None
assert account['identity']['displayName'] is not None
assert account['identity']['value'] is not None
assert account['application']['displayName'] is not None
assert account['application']['value'] is not None
assert account['hasEntitlements'] is not None
assert account['active'] is not None
def verify_launched_workflow(launched_workflow):
"""
Verify SCIM structure for IdentityIQ Launched Workflow response.
"""
assert launched_workflow['id'] is not None
assert launched_workflow['name'] is not None
assert launched_workflow['launcher'] is not None
assert launched_workflow['type'] is not None
assert launched_workflow['completionStatus'] in ['Success', 'Error']
assert launched_workflow['terminated'] is not None
assert launched_workflow['targetClass'] is not None
def verify_role(role):
"""
Verify SCIM structure for IdentityIQ Role response.
"""
assert role['id'] is not None
assert role['name'] is not None
assert role['displayableName'] is not None
assert role['active'] is not None
assert role['owner']['displayName'] is not None
assert role['owner']['value'] is not None
assert role['type']['name'] is not None
assert role['type']['autoAssignment'] is not None
assert role['type']['displayName'] is not None
assert role['type']['manualAssignment'] is not None
def verify_entitlement(entitlement):
"""
Verify SCIM structure for IdentityIQ Entitlement response.
"""
assert entitlement['id'] is not None
assert entitlement['type'] is not None
assert entitlement['requestable'] is not None
assert entitlement['aggregated'] is not None
assert entitlement['application']['displayName'] is not None
assert entitlement['application']['value'] is not None
assert entitlement['owner']['displayName'] is not None
assert entitlement['owner']['value'] is not None
def verify_alert(alert):
"""
Verify SCIM structure for IdentityIQ Alert response.
"""
assert alert['id'] is not None
assert alert['name'] is not None
assert alert['displayName'] is not None
assert alert['meta']['created'] is not None

''' TESTS (UTILITY) '''

def test_get_headers_all_none():
headers = SailPointIdentityIQ.get_headers(None, None, None, None)
assert headers is None
def test_get_headers_base_url_none():
headers = SailPointIdentityIQ.get_headers(None, 'test', 'test', 'client_credentials')
assert headers is None
def test_get_headers_client_id_none():
headers = SailPointIdentityIQ.get_headers(MOCK_IDENTITYIQ_BASE_URL, None, 'test', 'client_credentials')
assert headers is None
def test_get_headers_client_secret_none():
headers = SailPointIdentityIQ.get_headers(MOCK_IDENTITYIQ_BASE_URL, 'test', None, 'client_credentials')
assert headers is None
@patch('SailPointIdentityIQ.get_headers')
def test_get_headers_grant_type(mock_header):
mock_header.return_value = {
'Authorization': 'Bearer RXAxTEQ0ZkhUVm94dmhIWDd1M2Q0TjU3NDRnQUYzN2ouZXVlV2h1WUk4OW9jMi95Zml',
'Content-Type': 'application/json'
}
headers = SailPointIdentityIQ.get_headers(MOCK_IDENTITYIQ_BASE_URL, 'test', 'test', None)
assert headers['Authorization'] == 'Bearer RXAxTEQ0ZkhUVm94dmhIWDd1M2Q0TjU3NDRnQUYzN2ouZXVlV2h1WUk4OW9jMi95Zml'
assert headers['Content-Type'] == 'application/json'
@patch('SailPointIdentityIQ.get_headers')
def test_get_headers_success(mock_header):
mock_header.return_value = {
'Authorization': 'Bearer RXAxTEQ0ZkhUVm94dmhIWDd1M2Q0TjU3NDRnQUYzN2ouZXVlV2h1WUk4OW9jMi95Zml',
'Content-Type': 'application/json'
}
headers = SailPointIdentityIQ.get_headers(MOCK_IDENTITYIQ_BASE_URL, 'test', 'test', 'client_credentials')
assert headers['Authorization'] == 'Bearer RXAxTEQ0ZkhUVm94dmhIWDd1M2Q0TjU3NDRnQUYzN2ouZXVlV2h1WUk4OW9jMi95Zml'
assert headers['Content-Type'] == 'application/json'
@patch('SailPointIdentityIQ.Client.send_request')
def test_send_request_all_none(mock_response):
mock_response.return_value = None
response = MOCK_CLIENT.send_request(None, None, None, None)
assert response is None
@patch('SailPointIdentityIQ.Client.send_request')
def test_send_request_url_suffix_none(mock_response):
mock_response.return_value = None
response = MOCK_CLIENT.send_request(None, 'GET', None, None)
assert response is None
@patch('SailPointIdentityIQ.Client.send_request')
def test_send_request_method_none(mock_response):
mock_response.return_value = None
response = MOCK_CLIENT.send_request(MOCK_IDENTITYIQ_BASE_URL, None, None, None)
assert response is None
@patch('SailPointIdentityIQ.Client.send_request')
def test_send_request_non_200_status(mock_response):
"""
Send request should return None in case of 3XX, 4XX or 5XX HTTP status from IdentityIQ.
"""
json_data = util_load_json('test_data/404_Not_Found.json')
mock_response.return_value = util_mock_http_resp(404, json_data)
response = MOCK_CLIENT.send_request(MOCK_IDENTITYIQ_BASE_URL, 'GET', None)
assert response.status_code == 404
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
assert response.json()['status'] == '404'
@patch('SailPointIdentityIQ.Client.send_request')
def test_send_request_success(mock_response):
"""
Send request should return response json in case of 2XX HTTP status from IdentityIQ.
"""
json_data = util_load_json('test_data/ResourceTypes.json')
mock_response.return_value = util_mock_http_resp(200, json_data)
response = MOCK_CLIENT.send_request(MOCK_IDENTITYIQ_BASE_URL, 'GET', None)
assert response.status_code == 200
verify_scim_list_response(response.json(), response.json()['totalResults'])
def test_transform_object_list_none_all():
data_list = SailPointIdentityIQ.transform_object_list(None, None)
assert data_list is None
def test_transform_object_list_type_none():
json_data = util_load_json('test_data/Users.json')
data_list = SailPointIdentityIQ.transform_object_list(None, json_data['Resources'])
assert data_list == json_data['Resources']
def test_transform_object_list_none():
data_list = SailPointIdentityIQ.transform_object_list('IdentityIQ.Identity', None)
assert data_list is None
def test_transform_object_list():
json_data = util_load_json('test_data/Users.json')
data_list = SailPointIdentityIQ.transform_object_list('IdentityIQ.Identity', json_data['Resources'])
assert data_list == json_data['Resources']
for data in data_list:
assert 'sailpointUser' in data
for resource in json_data['Resources']:
assert 'sailpointUser' in resource
def test_transform_object_none_all():
data = SailPointIdentityIQ.transform_object(None, None)
assert data is None
def test_transform_object_type_none():
json_data = util_load_json('test_data/User.json')
data = SailPointIdentityIQ.transform_object(None, json_data)
assert data == json_data
def test_transform_object_none():
data = SailPointIdentityIQ.transform_object('IdentityIQ.Identity', None)
assert data is None
def test_transform_object():
json_data = util_load_json('test_data/User.json')
data = SailPointIdentityIQ.transform_object('IdentityIQ.Identity', json_data)
assert data == json_data
assert 'sailpointUser' in data
assert 'sailpointUser' in json_data
def test_get_markdown_none():
markdown = SailPointIdentityIQ.get_markdown(None, None)
assert markdown == ''
def test_get_markdown_object_type_none():
json_data = util_load_json('test_data/User.json')
markdown = SailPointIdentityIQ.get_markdown(None, json_data)
assert markdown == ''
def test_get_markdown_objects_none():
markdown = SailPointIdentityIQ.get_markdown('IdentityIQ.Identity', None)
headers = ['id', 'userName', 'displayName', 'name', 'emails', 'sailpointUser', 'extendedUser', 'entitlements',
'roles', 'capabilities', 'active']
assert markdown == tableToMarkdown('Identity(Identities)', None, headers=headers)
def test_get_markdown():
json_data = util_load_json('test_data/User.json')
markdown = SailPointIdentityIQ.get_markdown('IdentityIQ.Identity', json_data)
headers = ['id', 'userName', 'displayName', 'name', 'emails', 'sailpointUser', 'extendedUser', 'entitlements',
'roles', 'capabilities', 'active']
assert markdown == tableToMarkdown('Identity(Identities)', json_data, headers=headers)
def test_build_results_none():
response = util_mock_http_resp(500, None)
with pytest.raises(TypeError):
SailPointIdentityIQ.build_results(None, None, response)
def test_build_results_non_2xx_status():
json_data = util_load_json('test_data/404_Not_Found.json')
response = util_mock_http_resp(404, json_data)
results = SailPointIdentityIQ.build_results('Test.prefix', 'Test.key_field', response)
assert results == '404 : Resource 7f000001705911b4817059d30cf50348 not found.'
def test_build_results_2xx_status():
json_data = util_load_json('test_data/User.json')
response = util_mock_http_resp(200, json_data)
results = SailPointIdentityIQ.build_results('IdentityIQ.Identity', 'IdentityIQ.Identity', response)
assert results.readable_output == '### Results:\n' + SailPointIdentityIQ.get_markdown('IdentityIQ.Identity',
json_data)
assert results.outputs_prefix == 'IdentityIQ.Identity'
assert results.outputs_key_field == 'IdentityIQ.Identity'
verify_user(results.outputs)

''' TESTS (COMMAND) '''

@patch('SailPointIdentityIQ.Client.send_request')
def test_connection_fail(mock_response):
mock_response.return_value = util_mock_http_resp(404, None)
test_connection = SailPointIdentityIQ.test_connection(MOCK_CLIENT)
assert test_connection == 'Unable to connect to IdentityIQ!'
@patch('SailPointIdentityIQ.Client.send_request')
def test_connection_success(mock_response):
mock_response.return_value = util_mock_http_resp(200, None)
test_connection = SailPointIdentityIQ.test_connection(MOCK_CLIENT)
assert test_connection == 'ok'
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_no_resources(mock_search_identities_response):
json_data = util_load_json('test_data/NoResources.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, 0, False)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_id_not_found(mock_search_identities_response):
json_data = util_load_json('test_data/404_Not_Found.json')
mock_search_identities_response.return_value = util_mock_http_resp(404, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, '7f000001705911b4817059d30cf50348', None, 0, True)
assert response.status_code == 404
assert response.json()['status'] == '404'
assert response.json()['detail'] == 'Resource 7f000001705911b4817059d30cf50348 not found.'
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_id_found(mock_search_identities_response):
json_data = util_load_json('test_data/User.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, '7f00000174441779817444c8842b0017', None, 0, True)
assert response.status_code == 200
verify_user(response.json())
assert response.json()['id'] == '7f00000174441779817444c8842b0017'
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_email_not_found(mock_search_identities_response):
json_data = util_load_json('test_data/NoResources.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, 'test@sailpointdemo.com', 0, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_email_found(mock_search_identities_response):
json_data = util_load_json('test_data/User_Filtered.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, 'serviceaccount@sailpointdemo.com', 0, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
user = response.json()['Resources'][0]
verify_user(user)
assert user['id'] == '7f000001705914d1817059d59e18000e'
    assert any(email['value'] == 'serviceaccount@sailpointdemo.com' for email in user['emails'])
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_risk_score_not_matched(mock_search_identities_response):
json_data = util_load_json('test_data/NoResources.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, 1600, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_risk_score_invalid(mock_search_identities_response):
json_data = util_load_json('test_data/400_Bad_Request.json')
mock_search_identities_response.return_value = util_mock_http_resp(400, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, -1, True)
assert response.status_code == 400
assert response.json()['status'] == '400'
assert response.json()['detail'] == 'Invalid filter:urn:ietf:params:scim:schemas:sailpoint:1.0:User:riskScore eq -1'
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_risk_score_matched(mock_search_identities_response):
json_data = util_load_json('test_data/User_Filtered.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, 100, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
user = response.json()['Resources'][0]
verify_user(user)
assert user['id'] == '7f000001705914d1817059d59e18000e'
assert user['urn:ietf:params:scim:schemas:sailpoint:1.0:User']['riskScore'] >= 100
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_active_false(mock_search_identities_response):
json_data = util_load_json('test_data/NoResources.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, 0, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_search_identities_active_true(mock_search_identities_response):
json_data = util_load_json('test_data/Users.json')
mock_search_identities_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.search_identities(MOCK_CLIENT, None, None, 0, True)
assert response.status_code == 200
verify_scim_list_response(response.json(), 5)
for user in response.json()['Resources']:
verify_user(user)
assert user['active'] is True
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_policy_violations_id_not_found(mock_policy_violations_response):
json_data = util_load_json('test_data/404_Not_Found.json')
mock_policy_violations_response.return_value = util_mock_http_resp(404, json_data)
response = SailPointIdentityIQ.get_policy_violations(MOCK_CLIENT, '8a8080824df45873014df46036521343')
assert response.status_code == 404
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
assert response.json()['status'] == '404'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_policy_violations_id_found(mock_policy_violations_response):
json_data = util_load_json('test_data/PolicyViolation.json')
mock_policy_violations_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_policy_violations(MOCK_CLIENT, '8a8080824df45873014df46036521328')
assert response.status_code == 200
verify_policy_violation(response.json())
assert response.json()['id'] == '8a8080824df45873014df46036521328'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_policy_violations_no_resources(mock_policy_violations_response):
json_data = util_load_json('test_data/NoResources.json')
mock_policy_violations_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_policy_violations(MOCK_CLIENT, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_policy_violations(mock_policy_violations_response):
json_data = util_load_json('test_data/PolicyViolations.json')
mock_policy_violations_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_policy_violations(MOCK_CLIENT, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 2)
for policy_violation in response.json()['Resources']:
verify_policy_violation(policy_violation)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_task_results_id_not_found(mock_task_results_response):
json_data = util_load_json('test_data/404_Not_Found.json')
mock_task_results_response.return_value = util_mock_http_resp(404, json_data)
response = SailPointIdentityIQ.get_task_results(MOCK_CLIENT, '7f00000175891f4b81763bd218de1d64')
assert response.status_code == 404
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
assert response.json()['status'] == '404'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_task_results_id_found(mock_task_results_response):
json_data = util_load_json('test_data/TaskResult.json')
mock_task_results_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_task_results(MOCK_CLIENT, '7f00000175891f4b81763bd2181c1d5f')
assert response.status_code == 200
verify_task_result(response.json())
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_task_results_no_resources(mock_task_results_response):
json_data = util_load_json('test_data/NoResources.json')
mock_task_results_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_task_results(MOCK_CLIENT, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_task_results(mock_task_results_response):
json_data = util_load_json('test_data/TaskResults.json')
mock_task_results_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_task_results(MOCK_CLIENT, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 5)
for task_result in response.json()['Resources']:
verify_task_result(task_result)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_no_resources(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_id_not_found(mock_accounts_response):
json_data = util_load_json('test_data/404_Not_Found.json')
mock_accounts_response.return_value = util_mock_http_resp(404, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, '7f00000174441779817444c8837c5373', None, None, None, None,
None, None)
assert response.status_code == 404
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
assert response.json()['status'] == '404'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_id_found(mock_accounts_response):
json_data = util_load_json('test_data/Account.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, '7f00000174441779817444c8837c0014', None, None, None, None,
None, None)
assert response.status_code == 200
verify_account(response.json())
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_display_name_not_found(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, 'Black Jack', None, None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_display_name_found(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, 'bjack', None, None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['displayName'] == 'bjack'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_last_refresh_not_matched(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, '2020-12-10T08:50:25Z', None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_last_refresh_matched(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, '2020-08-31T00:00:00Z', None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['lastRefresh'] >= '2020-08-31T00:00:00Z'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_native_identity_not_found(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, 'Black Jack', None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_native_identity_found(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, 'bjack', None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['nativeIdentity'] == 'bjack'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_last_target_agg_not_matched(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, '2020-12-10T00:00:00Z', None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_last_target_agg_matched(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, '2020-08-31T00:00:00Z', None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['lastTargetAggregation'] == '2020-09-05T09:22:45.432-05:00'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_identity_name_not_matched(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, None, 'Black Jack', None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_identity_name_matched(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, None, 'bjack', None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['identity']['displayName'] == 'bjack'
assert account['identity']['userName'] == 'bjack'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_application_name_not_matched(mock_accounts_response):
json_data = util_load_json('test_data/NoResources.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, None, None, 'SCIM Server')
assert response.status_code == 200
verify_scim_list_response(response.json(), 0)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts_application_name_matched(mock_accounts_response):
json_data = util_load_json('test_data/Account_Filtered.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, None, None, None, None, 'SCIM SDK')
assert response.status_code == 200
verify_scim_list_response(response.json(), 1)
account = response.json()['Resources'][0]
verify_account(account)
assert account['id'] == '7f00000174441779817444c883c30016'
assert account['application']['displayName'] == 'SCIM SDK'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_accounts(mock_accounts_response):
json_data = util_load_json('test_data/Accounts.json')
mock_accounts_response.return_value = util_mock_http_resp(200, json_data)
response = SailPointIdentityIQ.get_accounts(MOCK_CLIENT, None, None, '2020-05-01T00:00:00Z', None, None, None, None)
assert response.status_code == 200
verify_scim_list_response(response.json(), 3)
for account in response.json()['Resources']:
verify_account(account)
@patch('SailPointIdentityIQ.change_account_status')
def test_change_account_id_not_found(mock_account):
mock_account.return_value = util_load_json('test_data/404_Not_Found.json')
accounts = SailPointIdentityIQ.change_account_status(MOCK_CLIENT, '7f00000174441779817444c8837c5373', True)
assert 'urn:ietf:params:scim:api:messages:2.0:Error' in accounts['schemas']
assert accounts['status'] == '404'
@patch('SailPointIdentityIQ.change_account_status')
def test_change_account_enable(mock_account):
mock_account.return_value = util_load_json('test_data/Account.json')
account = SailPointIdentityIQ.change_account_status(MOCK_CLIENT, '7f00000174441779817444c883c30016', True)
verify_account(account)
assert account['active'] is True
@patch('SailPointIdentityIQ.change_account_status')
def test_change_account_disable(mock_account):
mock_account.return_value = util_load_json('test_data/Account_Disabled.json')
account = SailPointIdentityIQ.change_account_status(MOCK_CLIENT, '7f00000174441779817444c883c30016', False)
verify_account(account)
assert account['active'] is False
@patch('SailPointIdentityIQ.delete_account')
def test_delete_account_id_none(mock_account_response):
mock_account_response.return_value = '405'
response = SailPointIdentityIQ.delete_account(MOCK_CLIENT, None)
assert response == '405'
@patch('SailPointIdentityIQ.delete_account')
def test_delete_account_id_not_found(mock_account_response):
mock_account_response.return_value = '404 : Resource 7f00000174441779817444c8837c5373 not found.'
response = SailPointIdentityIQ.delete_account(MOCK_CLIENT, '7f00000174441779817444c8837c5373')
assert response == '404 : Resource 7f00000174441779817444c8837c5373 not found.'
@patch('SailPointIdentityIQ.delete_account')
def test_delete_account_already_deleted(mock_account_response):
    """
    Deleting an account that has already been removed should surface the same 404 message.
    """
    mock_account_response.return_value = '404 : Resource 7f00000174441779817444c8837c5373 not found.'
    response = SailPointIdentityIQ.delete_account(MOCK_CLIENT, '7f00000174441779817444c8837c5373')
    assert response == '404 : Resource 7f00000174441779817444c8837c5373 not found.'
@patch('SailPointIdentityIQ.delete_account')
def test_delete_account(mock_account_response):
mock_account_response.return_value = 'Account deleted successfully!'
response = SailPointIdentityIQ.delete_account(MOCK_CLIENT, '7f00000174441779817444c8837c5373')
assert response == 'Account deleted successfully!'
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_launched_workflows_id_not_found(mock_launched_workflows_response):
    json_data = util_load_json('test_data/404_Not_Found.json')
    mock_launched_workflows_response.return_value = util_mock_http_resp(404, json_data)
    response = SailPointIdentityIQ.get_launched_workflows(MOCK_CLIENT, '7f00000173de18fa8173deb1064e0453')
    assert response.status_code == 404
    assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
    assert response.json()['status'] == '404'


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_launched_workflows_id_found(mock_launched_workflows_response):
    json_data = util_load_json('test_data/LaunchedWorkflow.json')
    mock_launched_workflows_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_launched_workflows(MOCK_CLIENT, '7f00000173de18fa8173deb1064e001c')
    assert response.status_code == 200
    verify_launched_workflow(response.json())


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_launched_workflows_no_resources(mock_launched_workflows_response):
    json_data = util_load_json('test_data/NoResources.json')
    mock_launched_workflows_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_launched_workflows(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 0)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_launched_workflows(mock_launched_workflows_response):
    json_data = util_load_json('test_data/LaunchedWorkflows.json')
    mock_launched_workflows_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_launched_workflows(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 5)
    for launched_workflow in response.json()['Resources']:
        verify_launched_workflow(launched_workflow)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_roles_id_not_found(mock_roles_response):
    json_data = util_load_json('test_data/404_Not_Found.json')
    mock_roles_response.return_value = util_mock_http_resp(404, json_data)
    response = SailPointIdentityIQ.get_roles(MOCK_CLIENT, '7f000001705911b4817059d312394432')
    assert response.status_code == 404
    assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
    assert response.json()['status'] == '404'


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_roles_id_found(mock_roles_response):
    json_data = util_load_json('test_data/Role.json')
    mock_roles_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_roles(MOCK_CLIENT, '7f000001705911b4817059d31239035f')
    assert response.status_code == 200
    verify_role(response.json())


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_roles_no_resources(mock_roles_response):
    json_data = util_load_json('test_data/NoResources.json')
    mock_roles_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_roles(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 0)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_roles(mock_roles_response):
    json_data = util_load_json('test_data/Roles.json')
    mock_roles_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_roles(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 5)
    for role in response.json()['Resources']:
        verify_role(role)
@patch('SailPointIdentityIQ.Client.send_request')
def test_get_entitlements_id_not_found(mock_entitlements_response):
    json_data = util_load_json('test_data/404_Not_Found.json')
    mock_entitlements_response.return_value = util_mock_http_resp(404, json_data)
    response = SailPointIdentityIQ.get_entitlements(MOCK_CLIENT, '7f000001705911b4817059d355844443')
    assert response.status_code == 404
    assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
    assert response.json()['status'] == '404'


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_entitlements_id_found(mock_entitlements_response):
    json_data = util_load_json('test_data/Entitlement.json')
    mock_entitlements_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_entitlements(MOCK_CLIENT, '7f000001705911b4817059d355840657')
    assert response.status_code == 200
    verify_entitlement(response.json())


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_entitlements_no_resources(mock_entitlements_response):
    json_data = util_load_json('test_data/NoResources.json')
    mock_entitlements_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_entitlements(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 0)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_entitlements(mock_entitlements_response):
    json_data = util_load_json('test_data/Entitlements.json')
    mock_entitlements_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_entitlements(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 5)
    for entitlement in response.json()['Resources']:
        verify_entitlement(entitlement)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_alerts_id_not_found(mock_alerts_response):
    json_data = util_load_json('test_data/404_Not_Found.json')
    mock_alerts_response.return_value = util_mock_http_resp(404, json_data)
    response = SailPointIdentityIQ.get_alerts(MOCK_CLIENT, '0a000001758a173e81763f81205e6453')
    assert response.status_code == 404
    assert 'urn:ietf:params:scim:api:messages:2.0:Error' in response.json()['schemas']
    assert response.json()['status'] == '404'


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_alerts_id_found(mock_alerts_response):
    json_data = util_load_json('test_data/Alert.json')
    mock_alerts_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_alerts(MOCK_CLIENT, '0a000001758a173e81763f81205e0062')
    assert response.status_code == 200
    verify_alert(response.json())


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_alerts_no_resources(mock_alerts_response):
    json_data = util_load_json('test_data/NoResources.json')
    mock_alerts_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_alerts(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 0)


@patch('SailPointIdentityIQ.Client.send_request')
def test_get_alerts(mock_alerts_response):
    json_data = util_load_json('test_data/Alerts.json')
    mock_alerts_response.return_value = util_mock_http_resp(200, json_data)
    response = SailPointIdentityIQ.get_alerts(MOCK_CLIENT, None)
    assert response.status_code == 200
    verify_scim_list_response(response.json(), 3)
    for alert in response.json()['Resources']:
        verify_alert(alert)


@patch('SailPointIdentityIQ.Client.send_request')
def test_create_alert(mock_alerts_response):
    json_data = util_load_json('test_data/Alert.json')
    mock_alerts_response.return_value = util_mock_http_resp(201, json_data)
    response = SailPointIdentityIQ.create_alert(MOCK_CLIENT, 'Test Alert')
    assert response.status_code == 201
    verify_alert(response.json())
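All of the tests above share one mocking pattern: `@patch` swaps a module-level function for a `MagicMock`, the test sets `return_value`, and the assertion checks the canned payload. A minimal, self-contained sketch of that pattern follows; the `fake_integration` module and its `delete_account` stub are hypothetical stand-ins, not part of SailPointIdentityIQ.

```python
from unittest.mock import patch
import sys
import types

# Hypothetical stand-in for an integration module whose functions
# normally hit a live API (NOT the real SailPointIdentityIQ module).
mod = types.ModuleType("fake_integration")

def delete_account(client, account_id):
    raise RuntimeError("would call a live API")

mod.delete_account = delete_account
sys.modules["fake_integration"] = mod

# patch() replaces the attribute for the duration of the block, so the
# call returns the canned value instead of touching the network.
with patch("fake_integration.delete_account") as mock_delete:
    mock_delete.return_value = "Account deleted successfully!"
    response = sys.modules["fake_integration"].delete_account(None, "abc123")

assert response == "Account deleted successfully!"
```

Once the `with` block exits, the original function is restored, so patching never leaks between tests.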


# --- src/post-process-costs.py (repo: wogandavid/KEMGCC, license: MIT) ---
import numpy as np
import gdxpds
import pandas as pd
#files = [
# '../results/Sensitivities/CO2/costs/results_cost_calcs_B30.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_D30.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_F30.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_H30.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_B90.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_D90.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_F90.gdx',
# '../results/Sensitivities/CO2/costs/results_cost_calcs_H90.gdx',
##
# '../results/Sensitivities/Int2x/costs/results_cost_calcs_Cint.gdx',
# '../results/Sensitivities/Int2x/costs/results_cost_calcs_Dint.gdx',
# '../results/Sensitivities/Int2x/costs/results_cost_calcs_Gint.gdx',
# '../results/Sensitivities/Int2x/costs/results_cost_calcs_Hint.gdx',
##
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Aoil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Boil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Coil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Doil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Eoil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Foil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Goil.gdx',
# '../results/Sensitivities/Oil and gas prices/costs/results_cost_calcs_Hoil.gdx',
##
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Are.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Bre.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Cre.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Dre.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Ere.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Fre.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Gre.gdx',
# '../results/Sensitivities/RE costs/costs/results_cost_calcs_Hre.gdx',
#]
files = [
    '../results/MainScenarios/results_cost_calcs_carbon_A.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_B.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_C.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_D.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_E.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_F.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_G.gdx',
    '../results/MainScenarios/results_cost_calcs_carbon_H.gdx',
    #
    '../results/Sensitivities/results_cost_calcs_carbon_B30.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_D30.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_F30.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_H30.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_B90.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_D90.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_F90.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_H90.gdx',
    #
    '../results/Sensitivities/results_cost_calcs_carbon_Cint.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Dint.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Gint.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Hint.gdx',
    #
    '../results/Sensitivities/results_cost_calcs_carbon_Aoil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Boil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Coil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Doil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Eoil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Foil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Goil.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Hoil.gdx',
    #
    '../results/Sensitivities/results_cost_calcs_carbon_Are.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Bre.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Cre.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Dre.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Ere.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Fre.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Gre.gdx',
    '../results/Sensitivities/results_cost_calcs_carbon_Hre.gdx',
]
scenarios = [
    'A','B','C','D','E','F','G','H',
    'B30','D30','F30','H30','B90','D90','F90','H90',
    'Cint','Dint','Gint','Hint',
    'Aoil','Boil','Coil','Doil','Eoil','Foil','Goil','Hoil',
    'Are','Bre','Cre','Dre','Ere','Fre','Gre','Hre']

countries = ['bah','kuw','omn','qat','ksa','uae']

years = {'t01':'2015',
         't02':'2016',
         't03':'2017',
         't04':'2018',
         't05':'2019',
         't06':'2020',
         't07':'2021',
         't08':'2022',
         't09':'2023',
         't10':'2024',
         't11':'2025',
         't12':'2026',
         't13':'2027',
         't14':'2028',
         't15':'2029',
         't16':'2030'}
costcalclist = []
withClist = []
withoutClist = []

for filename, scenario in zip(files, scenarios):
    _dict = gdxpds.to_dataframes(filename)
    #costcalcs = _df['RWcostcalcs']
    #costcalcs.columns = ['trun','item','c','value']
    withC = _dict['RWnet_carbonrecycle']
    withoutC = _dict['RWnetex']
    withC.columns = ['trun','c','value']
    withoutC.columns = ['trun','c','value']
    withC['scenario'] = scenario
    withoutC['scenario'] = scenario
    #costcalcs = costcalcs[costcalcs['item']=='Total System']
    withC = withC.replace(years)
    withClist.append(withC)
    withoutC = withoutC.replace(years)
    withoutClist.append(withoutC)

d_withC = pd.concat(withClist)
d_withoutC = pd.concat(withoutClist)
to_write = {}
to_write['withC'] = d_withC
to_write['withoutC'] = d_withoutC

with pd.ExcelWriter('../results/Sensitivities/costcalcs_sensitivities_with.xlsx') as writer:
    for k, v in to_write.items():
        v.to_excel(writer, sheet_name=k, merge_cells=False, float_format='%.2f', index=False)
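The loop above leans on `DataFrame.replace` with the `years` dict to turn period labels like `t01` into calendar years. A small self-contained illustration of that step (toy data, not the actual gdx output):

```python
import pandas as pd

# Toy frame shaped like the renamed gdx output: ('trun', 'c', 'value').
df = pd.DataFrame({'trun': ['t01', 't16'], 'c': ['ksa', 'uae'], 'value': [1.0, 2.0]})
years = {'t01': '2015', 't16': '2030'}

# replace() maps every cell that matches a dict key to the dict value,
# leaving all other cells untouched.
df = df.replace(years)
print(df['trun'].tolist())  # ['2015', '2030']
```

Because the mapping is applied frame-wide, country codes and numeric values pass through unchanged; only the period labels are rewritten.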


# --- src/neos/makers.py (repo: gradhep/neos, license: Apache-2.0) ---
# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/02_makers.ipynb (unless otherwise specified).
__all__ = (
    "hists_from_nn",
    "hepdata_like_from_hists",
    "histosys_model_from_hists",
)

# Cell
import sys
from unittest.mock import patch

# Cell
import jax
import jax.numpy as jnp
import pyhf
from relaxed import hist_kde as hist

from .models import hepdata_like

jax_backend = pyhf.tensor.jax_backend(precision="64b")
def hists_from_nn(
    data_generator,
    predict,
    hpar_dict,
    method="softmax",
    LUMI=10,
    sig_scale=2,
    bkg_scale=10,
    reflect_infinities=False,
):
    """Initialize a function `hist_maker` that returns a 'soft' histogram based
    on a neural network with a softmax output. Choose which example problem to
    try by setting the `example` argument.

    Args:
        data_generator: Callable that returns generated data (in jax array
            format).
        predict: Decision function for a parameterized observable, e.g. neural
            network.
        method: A string to specify the method to use for constructing soft
            histograms. Either "softmax" or "kde".
        LUMI: 'Luminosity' scaling factor for the yields.
        sig_scale: Individual scaling factor for the signal yields.
        bkg_scale: Individual scaling factor for the background yields.

    Returns:
        hist_maker: A callable function that takes the parameters of the
            observable (and optional hyperpars), then constructs signal,
            background, and background uncertainty yields.
    """
    data = data_generator()
    if len(data) == 3:
        if method == "softmax":

            def hist_maker(hm_params):
                """Uses the nn decision function `predict` to form histograms
                from signal and background data, all drawn from multivariate
                normal distributions with different means. Two background
                distributions are sampled from, which is meant to mimic the
                situation in particle physics where one has a 'nominal'
                prediction for a nuisance parameter (taken here as the mean of
                two modes) and then alternate values (e.g. from varying up/down
                by one standard deviation), which then modifies the background
                pdf. Here, we take that effect to be a shift of the mean of the
                distribution. The value for the background histogram is then
                the mean of the resulting counts of the two modes, and the
                uncertainty can be quantified through the count standard
                deviation.

                Arguments:
                    hm_params: a list containing:
                        nn: jax array of observable parameters.
                """
                nn = hm_params
                s, b_up, b_down = data
                NMC = len(s)
                s_hist = predict(nn, s).sum(axis=0) * sig_scale / NMC * LUMI
                b_hists = [
                    predict(nn, b_up).sum(axis=0) * bkg_scale / NMC * LUMI,
                    predict(nn, b_down).sum(axis=0) * bkg_scale / NMC * LUMI,
                ]
                b_mean = jnp.mean(jnp.asarray(b_hists), axis=0)
                b_unc = jnp.std(jnp.asarray(b_hists), axis=0)
                return s_hist, b_mean, b_unc
        elif method == "kde":

            def hist_maker(hm_params):
                """Uses the nn decision function `predict` to form histograms
                from signal and background data using a kde, all drawn from
                multivariate normal distributions with different means. Two
                background distributions are sampled from, which is meant to
                mimic the situation in particle physics where one has a
                'nominal' prediction for a nuisance parameter (taken here as
                the mean of two modes) and then alternate values (e.g. from
                varying up/down by one standard deviation), which then modifies
                the background pdf. Here, we take that effect to be a shift of
                the mean of the distribution. The value for the background
                histogram is then the mean of the resulting counts of the two
                modes, and the uncertainty can be quantified through the count
                standard deviation.

                Arguments:
                    hm_params: Array-like, consisting of:
                        nn: jax array of observable parameters.
                        bins: Array of bin edges, e.g. np.linspace(0,1,3)
                            defines a two-bin histogram with edges at 0, 0.5,
                            1.
                        bandwidth: Float that controls the 'smoothness' of the
                            kde. It's recommended to keep this fairly
                            similar to the bin width to avoid
                            oversmoothing the distribution. Going too low
                            will cause things to break, as the gradients
                            of the kde become unstable.
                """
                nn = hm_params
                bins, bandwidth = hpar_dict["bins"], hpar_dict["bandwidth"]
                s, b_up, b_down = data
                NMC = len(s)
                nn_s, nn_b_up, nn_b_down = (
                    predict(nn, s).ravel(),
                    predict(nn, b_up).ravel(),
                    predict(nn, b_down).ravel(),
                )
                s_hist = (
                    hist(nn_s, bins, bandwidth, reflect_infinities=reflect_infinities)
                    * sig_scale
                    / NMC
                    * LUMI
                )
                b_hists = jnp.asarray(
                    [
                        hist(
                            nn_b_up,
                            bins,
                            bandwidth,
                            reflect_infinities=reflect_infinities,
                        )
                        * bkg_scale
                        / NMC
                        * LUMI,
                        hist(
                            nn_b_down,
                            bins,
                            bandwidth,
                            reflect_infinities=reflect_infinities,
                        )
                        * bkg_scale
                        / NMC
                        * LUMI,
                    ]
                )
                kde_counts = [
                    s_hist,
                    jnp.mean(b_hists, axis=0),
                    jnp.std(b_hists, axis=0),
                ]
                return kde_counts

        else:
            assert False, (
                f"Unsupported method: {method}"
                " (only using kde or softmax for these examples)."
            )
    elif len(data) == 4:
        if method == "softmax":

            def hist_maker(hm_params):
                """Uses the nn decision function `predict` to form histograms
                from signal and background data, all drawn from multivariate
                normal distributions with different means. Three background
                distributions are sampled from, which mimics the situation in
                particle physics where one has a 'nominal' prediction for a
                nuisance parameter (taken here as the mean of two modes) and
                then alternate values (e.g. from varying up/down by one
                standard deviation), which then modifies the background pdf.
                Here, we take that effect to be a shift of the mean of the
                distribution. The HistFactory 'histosys' nuisance parameter
                will then be constructed from the yields downstream by
                interpolating between them using pyhf.

                Arguments:
                    hm_params: a list containing:
                        nn: jax array of observable parameters.

                Returns:
                    Set of 4 counts for signal, background, and up/down modes.
                """
                nn = hm_params
                s, b_nom, b_up, b_down = data
                NMC = len(s)
                counts = [
                    predict(nn, s).sum(axis=0) * sig_scale / NMC * LUMI,
                    predict(nn, b_nom).sum(axis=0) * bkg_scale / NMC * LUMI,
                    predict(nn, b_up).sum(axis=0) * bkg_scale / NMC * LUMI,
                    predict(nn, b_down).sum(axis=0) * bkg_scale / NMC * LUMI,
                ]
                return counts

        elif method == "kde":

            def hist_maker(hm_params):
                """Uses the nn decision function `predict` to form histograms
                from signal and background data, all drawn from multivariate
                normal distributions with different means. Three background
                distributions are sampled from, which mimics the situation in
                particle physics where one has a 'nominal' prediction for a
                nuisance parameter (taken here as the mean of two modes) and
                then alternate values (e.g. from varying up/down by one
                standard deviation), which then modifies the background pdf.
                Here, we take that effect to be a shift of the mean of the
                distribution. The HistFactory 'histosys' nuisance parameter
                will then be constructed from the yields downstream by
                interpolating between them using pyhf.

                Arguments:
                    hm_params: Array-like, consisting of:
                        nn: jax array of observable parameters.
                        bins: Array of bin edges, e.g. np.linspace(0,1,3)
                            defines a two-bin histogram with edges at 0, 0.5,
                            1.
                        bandwidth: Float that controls the 'smoothness' of the
                            kde. It's recommended to keep this fairly
                            similar to the bin width to avoid
                            oversmoothing the distribution. Going too low
                            will cause things to break, as the gradients
                            of the kde become unstable.

                Returns:
                    Set of 4 counts for signal, background, and up/down modes.
                """
                nn = hm_params
                bins, bandwidth = hpar_dict["bins"], hpar_dict["bandwidth"]
                s, b_nom, b_up, b_down = data
                NMC = len(s)
                nn_s, nn_b_nom, nn_b_up, nn_b_down = (
                    predict(nn, s).ravel(),
                    predict(nn, b_nom).ravel(),
                    predict(nn, b_up).ravel(),
                    predict(nn, b_down).ravel(),
                )
                kde_counts = [
                    hist(nn_s, bins, bandwidth, reflect_infinities=reflect_infinities)
                    * sig_scale
                    / NMC
                    * LUMI,
                    hist(
                        nn_b_nom, bins, bandwidth, reflect_infinities=reflect_infinities
                    )
                    * bkg_scale
                    / NMC
                    * LUMI,
                    hist(
                        nn_b_up, bins, bandwidth, reflect_infinities=reflect_infinities
                    )
                    * bkg_scale
                    / NMC
                    * LUMI,
                    hist(
                        nn_b_down,
                        bins,
                        bandwidth,
                        reflect_infinities=reflect_infinities,
                    )
                    * bkg_scale
                    / NMC
                    * LUMI,
                ]
                return [k + 1e-8 for k in kde_counts]

        else:
            assert False, (
                f"Unsupported method: {method}"
                " (only using kde or softmax for these examples)."
            )
    else:
        assert False, (
            f"Unsupported number of blobs: {len(data)}"
            " (only using 3 or 4 blobs for these examples)."
        )

    return hist_maker
# Cell
def hepdata_like_from_hists(histogram_maker):
    """Returns a function that constructs a typical 'hepdata-like' statistical
    model with signal, background, and background uncertainty yields when
    evaluated at the parameters of the observable.

    Args:
        histogram_maker: A function that, when called, returns a secondary function
            that takes the observable's parameters as argument, and returns yields.

    Returns:
        nn_model_maker: A function that returns a Model object (either from
            `neos.models` or from `pyhf`) when evaluated at the observable's parameters,
            along with the background-only parameters for use in downstream inference.
    """

    def nn_model_maker(hm_params):
        s, b, db = histogram_maker(hm_params)
        m = hepdata_like(s, b, db)  # neos 'pyhf' model
        nompars = m.config.suggested_init()
        bonlypars = jnp.asarray([x for x in nompars])
        bonlypars = jax.ops.index_update(bonlypars, m.config.poi_index, 0.0)
        return m, bonlypars

    return nn_model_maker
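`hepdata_like_from_hists` is a closure factory: it captures `histogram_maker` once and returns a function of the observable's parameters alone. A plain-Python sketch of that composition, with toy yields and a dict standing in for the jax/pyhf objects (the `toy_*` names are illustrative, not part of neos):

```python
# Toy stand-ins for histogram_maker and the model constructor.
def toy_hist_maker(hm_params):
    scale = hm_params
    s = [scale * 1.0, scale * 2.0]   # "signal" yields
    b = [10.0, 10.0]                 # "background" yields
    db = [1.0, 1.0]                  # background uncertainty
    return s, b, db

def toy_model_from_hists(histogram_maker):
    # The returned closure depends only on the observable's parameters.
    def model_maker(hm_params):
        s, b, db = histogram_maker(hm_params)
        return {"signal": s, "background": b, "uncertainty": db}
    return model_maker

model_maker = toy_model_from_hists(toy_hist_maker)
model = model_maker(2.0)
print(model["signal"])  # [2.0, 4.0]
```

This shape is what makes the pipeline differentiable end to end: downstream inference only ever sees a single callable from parameters to model.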
def histosys_model_from_hists(histogram_maker):
    """Returns a function that constructs a HEP statistical model using a
    'histosys' uncertainty for the background (nominal background, up and down
    systematic variations) when evaluated at the parameters of the observable.

    Args:
        histogram_maker: A function that, when called, returns a secondary function
            that takes the observable's parameters as argument, and returns yields.

    Returns:
        nn_model_maker: A function that returns a `pyhf.Model` object when
            evaluated at the observable's parameters (nn weights), along with the
            background-only parameters for use in downstream inference.
    """

    @patch("pyhf.default_backend", new=jax_backend)
    @patch.object(
        sys.modules["pyhf.interpolators.code0"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.interpolators.code1"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.interpolators.code2"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.interpolators.code4"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.interpolators.code4p"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.modifiers.shapefactor"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.modifiers.shapesys"], "default_backend", new=jax_backend
    )
    @patch.object(
        sys.modules["pyhf.modifiers.staterror"], "default_backend", new=jax_backend
    )
    def from_spec(yields):
        s, b, bup, bdown = yields
        spec = {
            "channels": [
                {
                    "name": "nn",
                    "samples": [
                        {
                            "name": "signal",
                            "data": s,
                            "modifiers": [
                                {"name": "mu", "type": "normfactor", "data": None}
                            ],
                        },
                        {
                            "name": "bkg",
                            "data": b,
                            "modifiers": [
                                {
                                    "name": "nn_histosys",
                                    "type": "histosys",
                                    "data": {
                                        "lo_data": bdown,
                                        "hi_data": bup,
                                    },
                                }
                            ],
                        },
                    ],
                },
            ],
        }
        return pyhf.Model(spec)

    def nn_model_maker(hm_params):
        yields = histogram_maker(hm_params)
        m = from_spec(yields)
        nompars = m.config.suggested_init()
        bonlypars = jnp.asarray([x for x in nompars])
        bonlypars = jax.ops.index_update(bonlypars, m.config.poi_index, 0.0)
        return m, bonlypars

    return nn_model_maker


# --- ccdproc/tests/test_combiner.py (repo: simontorres/ccdproc, license: BSD-3-Clause) ---
# Licensed under a 3-clause BSD style license - see LICENSE.rst
import numpy as np

import astropy.units as u
from astropy.stats import median_absolute_deviation as mad

import pytest
from astropy.utils.data import get_pkg_data_filename
from astropy.nddata import CCDData

from ..combiner import Combiner, combine


#test that the Combiner raises error if empty
def test_combiner_empty():
    with pytest.raises(TypeError):
        Combiner()  # empty initializer should fail


#test that the Combiner raises error if empty if ccd_list is None
def test_combiner_init_with_none():
    with pytest.raises(TypeError):
        Combiner(None)  # empty initializer should fail


#test that Combiner throws an error if input
#objects are not ccddata objects
def test_ccddata_combiner_objects(ccd_data):
    ccd_list = [ccd_data, ccd_data, None]
    with pytest.raises(TypeError):
        Combiner(ccd_list)  # different objects should fail


#test that Combiner throws an error if input
#objects do not have the same size
def test_ccddata_combiner_size(ccd_data):
    ccd_large = CCDData(np.zeros((200, 100)), unit=u.adu)
    ccd_list = [ccd_data, ccd_data, ccd_large]
    with pytest.raises(TypeError):
        Combiner(ccd_list)  # arrays of different sizes should fail


#test that Combiner throws an error if input
#objects do not have the same units
def test_ccddata_combiner_units(ccd_data):
    ccd_large = CCDData(np.zeros((100, 100)), unit=u.second)
    ccd_list = [ccd_data, ccd_data, ccd_large]
    with pytest.raises(TypeError):
        Combiner(ccd_list)


#test if mask and data array are created
def test_combiner_create(ccd_data):
    ccd_list = [ccd_data, ccd_data, ccd_data]
    c = Combiner(ccd_list)
    assert c.data_arr.shape == (3, 100, 100)
    assert c.data_arr.mask.shape == (3, 100, 100)


#test if dtype matches the value that is passed
def test_combiner_dtype(ccd_data):
    ccd_list = [ccd_data, ccd_data, ccd_data]
    c = Combiner(ccd_list, dtype=np.float32)
    assert c.data_arr.dtype == np.float32
    avg = c.average_combine()
    # dtype of average should match input dtype
    assert avg.dtype == c.dtype
    med = c.median_combine()
    # dtype of median should match dtype of input
    assert med.dtype == c.dtype
    result_sum = c.sum_combine()
    # dtype of sum should match dtype of input
    assert result_sum.dtype == c.dtype


#test mask is created from ccd.data
def test_combiner_mask():
    data = np.zeros((10, 10))
    data[5, 5] = 1
    mask = (data == 0)
    ccd = CCDData(data, unit=u.adu, mask=mask)
    ccd_list = [ccd, ccd, ccd]
    c = Combiner(ccd_list)
    assert c.data_arr.shape == (3, 10, 10)
    assert c.data_arr.mask.shape == (3, 10, 10)
    assert not c.data_arr.mask[0, 5, 5]
def test_weights(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
with pytest.raises(TypeError):
c.weights = 1
def test_weights_shape(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
with pytest.raises(ValueError):
c.weights = ccd_data.data
#test the min-max rejection
def test_combiner_minmax():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 1000, unit=u.adu),
CCDData(np.zeros((10, 10)) + 1000, unit=u.adu)]
c = Combiner(ccd_list)
c.minmax_clipping(min_clip=-500, max_clip=500)
ccd = c.median_combine()
assert ccd.data.mean() == 0
def test_combiner_minmax_max():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 1000, unit=u.adu),
CCDData(np.zeros((10, 10)) + 1000, unit=u.adu)]
c = Combiner(ccd_list)
c.minmax_clipping(min_clip=None, max_clip=500)
assert c.data_arr[2].mask.all()
def test_combiner_minmax_min():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 1000, unit=u.adu),
CCDData(np.zeros((10, 10)) + 1000, unit=u.adu)]
c = Combiner(ccd_list)
c.minmax_clipping(min_clip=-500, max_clip=None)
assert c.data_arr[1].mask.all()
def test_combiner_sigmaclip_high():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 1000, unit=u.adu)]
c = Combiner(ccd_list)
# using mad for more robust statistics vs. std
c.sigma_clipping(high_thresh=3, low_thresh=None, func=np.ma.median,
dev_func=mad)
assert c.data_arr[5].mask.all()
def test_combiner_sigmaclip_single_pix():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu)]
c = Combiner(ccd_list)
    # alter a single pixel in each array and check that only the
    # outlier value gets rejected
c.data_arr[0, 5, 5] = 0
c.data_arr[1, 5, 5] = -5
c.data_arr[2, 5, 5] = 5
c.data_arr[3, 5, 5] = -5
c.data_arr[4, 5, 5] = 25
c.sigma_clipping(high_thresh=3, low_thresh=None, func=np.ma.median,
dev_func=mad)
assert c.data_arr.mask[4, 5, 5]
def test_combiner_sigmaclip_low():
ccd_list = [CCDData(np.zeros((10, 10)), unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) - 10, unit=u.adu),
CCDData(np.zeros((10, 10)) + 10, unit=u.adu),
CCDData(np.zeros((10, 10)) - 1000, unit=u.adu)]
c = Combiner(ccd_list)
# using mad for more robust statistics vs. std
c.sigma_clipping(high_thresh=None, low_thresh=3, func=np.ma.median,
dev_func=mad)
assert c.data_arr[5].mask.all()
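The rejection criterion behind the sigma-clipping tests above can be sketched in a few lines. This is a simplified one-pass version with a local `mad` helper standing in for the robust deviation function; it is not the `sigma_clipping` implementation itself:

```python
import numpy as np

def mad(a):
    # Median absolute deviation: a robust stand-in for the std deviation.
    med = np.ma.median(a)
    return np.ma.median(np.abs(a - med))

# One pixel column: five well-behaved values and one 1000 ADU outlier.
column = np.ma.masked_array([0., -10., 10., -10., 10., 1000.])
center = np.ma.median(column)   # robust central value (5.0 here)
dev = mad(column)               # robust spread (10.0 here)
high_thresh = 3
# A value is rejected when it sits more than high_thresh deviations high.
column.mask = (column.data - center) / dev > high_thresh
assert column.mask[5] and not column.mask[:5].any()
```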
# test that the median combination works and returns a CCDData object
def test_combiner_median(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.median_combine()
assert isinstance(ccd, CCDData)
assert ccd.shape == (100, 100)
assert ccd.unit == u.adu
assert ccd.meta['NCOMBINE'] == len(ccd_list)
# test that the average combination works and returns a CCDData object
def test_combiner_average(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.average_combine()
assert isinstance(ccd, CCDData)
assert ccd.shape == (100, 100)
assert ccd.unit == u.adu
assert ccd.meta['NCOMBINE'] == len(ccd_list)
# test that the sum combination works and returns a CCDData object
def test_combiner_sum(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.sum_combine()
assert isinstance(ccd, CCDData)
assert ccd.shape == (100, 100)
assert ccd.unit == u.adu
assert ccd.meta['NCOMBINE'] == len(ccd_list)
# test that data combined with a mask is created correctly
def test_combiner_mask_average():
data = np.zeros((10, 10))
data[5, 5] = 1
mask = (data == 0)
ccd = CCDData(data, unit=u.adu, mask=mask)
ccd_list = [ccd, ccd, ccd]
c = Combiner(ccd_list)
ccd = c.average_combine()
assert ccd.data[0, 0] == 0
assert ccd.data[5, 5] == 1
assert ccd.mask[0, 0]
assert not ccd.mask[5, 5]
def test_combiner_with_scaling(ccd_data):
# The factors below are not particularly important; just avoid anything
# whose average is 1.
ccd_data_lower = ccd_data.multiply(3)
ccd_data_higher = ccd_data.multiply(0.9)
combiner = Combiner([ccd_data, ccd_data_higher, ccd_data_lower])
# scale each array to the mean of the first image
scale_by_mean = lambda x: ccd_data.data.mean()/np.ma.average(x)
combiner.scaling = scale_by_mean
avg_ccd = combiner.average_combine()
# Does the mean of the scaled arrays match the value to which it was
# scaled?
np.testing.assert_almost_equal(avg_ccd.data.mean(),
ccd_data.data.mean())
assert avg_ccd.shape == ccd_data.shape
median_ccd = combiner.median_combine()
# Does median also scale to the correct value?
np.testing.assert_almost_equal(np.median(median_ccd.data),
np.median(ccd_data.data))
# Set the scaling manually...
combiner.scaling = [scale_by_mean(combiner.data_arr[i]) for i in range(3)]
avg_ccd = combiner.average_combine()
np.testing.assert_almost_equal(avg_ccd.data.mean(),
ccd_data.data.mean())
assert avg_ccd.shape == ccd_data.shape
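The scale-by-mean logic above reduces to simple arithmetic: each frame is multiplied by `target_mean / frame_mean`, so every scaled frame ends up with the same mean. A stand-alone numpy sketch of that invariant (not the `Combiner` code path):

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.uniform(10, 20, size=(100, 100))
frames = [base, base * 0.9, base * 3]          # means differ
target_mean = frames[0].mean()
# Multiply each frame by target_mean / its own mean.
scaled = [f * (target_mean / np.ma.average(f)) for f in frames]
# Every scaled frame now has the target mean, so their average does too.
avg = np.mean(scaled, axis=0)
np.testing.assert_almost_equal(avg.mean(), target_mean)
```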
def test_combiner_scaling_fails(ccd_data):
combiner = Combiner([ccd_data, ccd_data.copy()])
# Should fail unless scaling is set to a function or list-like
with pytest.raises(TypeError):
combiner.scaling = 5
# test that data combined with a mask is created correctly
def test_combiner_mask_median():
data = np.zeros((10, 10))
data[5, 5] = 1
mask = (data == 0)
ccd = CCDData(data, unit=u.adu, mask=mask)
ccd_list = [ccd, ccd, ccd]
c = Combiner(ccd_list)
ccd = c.median_combine()
assert ccd.data[0, 0] == 0
assert ccd.data[5, 5] == 1
assert ccd.mask[0, 0]
assert not ccd.mask[5, 5]
# test that data combined with a mask is created correctly
def test_combiner_mask_sum():
data = np.zeros((10, 10))
data[5, 5] = 1
mask = (data == 0)
ccd = CCDData(data, unit=u.adu, mask=mask)
ccd_list = [ccd, ccd, ccd]
c = Combiner(ccd_list)
ccd = c.sum_combine()
assert ccd.data[0, 0] == 0
assert ccd.data[5, 5] == 3
assert ccd.mask[0, 0]
assert not ccd.mask[5, 5]
# test that the combine convenience function reads a FITS file and combines as expected
def test_combine_average_fitsimages():
fitsfile = get_pkg_data_filename('data/a8280271.fits')
ccd = CCDData.read(fitsfile, unit=u.adu)
ccd_list = [ccd]*3
c = Combiner(ccd_list)
ccd_by_combiner = c.average_combine()
fitsfilename_list = [fitsfile]*3
avgccd = combine(fitsfilename_list, output_file=None, method='average', unit=u.adu)
# averaging same fits images should give back same fits image
np.testing.assert_array_almost_equal(avgccd.data, ccd_by_combiner.data)
def test_combine_numpyndarray():
""" Test of numpy ndarray implementation: #493
Test the average combine using ``Combiner`` and ``combine`` with input
``img_list`` in the format of ``numpy.ndarray``.
"""
fitsfile = get_pkg_data_filename('data/a8280271.fits')
ccd = CCDData.read(fitsfile, unit=u.adu)
ccd_list = [ccd]*3
c = Combiner(ccd_list)
ccd_by_combiner = c.average_combine()
fitsfilename_list = np.array([fitsfile]*3)
avgccd = combine(fitsfilename_list, output_file=None, method='average', unit=u.adu)
# averaging same fits images should give back same fits image
np.testing.assert_array_almost_equal(avgccd.data, ccd_by_combiner.data)
def test_combiner_result_dtype():
"""Regression test: #391
The result should have the appropriate dtype not the dtype of the first
input."""
ccd = CCDData(np.ones((3, 3), dtype=np.uint16), unit='adu')
res = combine([ccd, ccd.multiply(2)])
# The default dtype of Combiner is float64
assert res.data.dtype == np.float64
ref = np.ones((3, 3)) * 1.5
np.testing.assert_array_almost_equal(res.data, ref)
res = combine([ccd, ccd.multiply(2), ccd.multiply(3)], dtype=int)
# The result dtype should be integer:
assert res.data.dtype == np.int_
ref = np.ones((3, 3)) * 2
np.testing.assert_array_almost_equal(res.data, ref)
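The float64 default checked above mirrors numpy's own reduction behaviour: averaging integer frames promotes to float64 unless a dtype is forced. A minimal numpy-only illustration (not the `combine` code path):

```python
import numpy as np

stack = np.stack([np.ones((3, 3), dtype=np.uint16),
                  np.ones((3, 3), dtype=np.uint16) * 2])
# Without an explicit dtype, the mean of integer data is float64.
assert np.mean(stack, axis=0).dtype == np.float64
# Forcing a dtype keeps the result in that type instead.
assert np.mean(stack, axis=0, dtype=np.uint16).dtype == np.uint16
```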
# test that the combine convenience function works with a list of CCDData objects
def test_combine_average_ccddata():
fitsfile = get_pkg_data_filename('data/a8280271.fits')
ccd = CCDData.read(fitsfile, unit=u.adu)
ccd_list = [ccd]*3
c = Combiner(ccd_list)
ccd_by_combiner = c.average_combine()
    avgccd = combine(ccd_list, output_file=None, method='average', unit=u.adu)
# averaging same ccdData should give back same images
np.testing.assert_array_almost_equal(avgccd.data, ccd_by_combiner.data)
# test that the combine convenience function reads FITS files and
# combines as expected when asked to run in limited memory
def test_combine_limitedmem_fitsimages():
fitsfile = get_pkg_data_filename('data/a8280271.fits')
ccd = CCDData.read(fitsfile, unit=u.adu)
ccd_list = [ccd]*5
c = Combiner(ccd_list)
ccd_by_combiner = c.average_combine()
fitsfilename_list = [fitsfile]*5
    avgccd = combine(fitsfilename_list, output_file=None, method='average',
                     mem_limit=1e6, unit=u.adu)
# averaging same ccdData should give back same images
np.testing.assert_array_almost_equal(avgccd.data, ccd_by_combiner.data)
# test that the combine convenience function reads FITS files and
# combines as expected when asked to run in limited memory with scaling
def test_combine_limitedmem_scale_fitsimages():
fitsfile = get_pkg_data_filename('data/a8280271.fits')
ccd = CCDData.read(fitsfile, unit=u.adu)
ccd_list = [ccd]*5
c = Combiner(ccd_list)
# scale each array to the mean of the first image
scale_by_mean = lambda x: ccd.data.mean()/np.ma.average(x)
c.scaling = scale_by_mean
ccd_by_combiner = c.average_combine()
fitsfilename_list = [fitsfile]*5
    avgccd = combine(fitsfilename_list, output_file=None, method='average',
                     mem_limit=1e6, scale=scale_by_mean, unit=u.adu)
np.testing.assert_array_almost_equal(avgccd.data, ccd_by_combiner.data, decimal=4)
# test the optional uncertainty function in average_combine
def test_average_combine_uncertainty(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.average_combine(uncertainty_func=np.sum)
uncert_ref = np.sum(c.data_arr, 0) / np.sqrt(3)
np.testing.assert_array_equal(ccd.uncertainty.array, uncert_ref)
# Compare this also to the "combine" call
ccd2 = combine(ccd_list, method='average', combine_uncertainty_function=np.sum)
np.testing.assert_array_equal(ccd.data, ccd2.data)
np.testing.assert_array_equal(ccd.uncertainty.array, ccd2.uncertainty.array)
# test the optional uncertainty function in median_combine
def test_median_combine_uncertainty(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.median_combine(uncertainty_func=np.sum)
uncert_ref = np.sum(c.data_arr, 0) / np.sqrt(3)
np.testing.assert_array_equal(ccd.uncertainty.array, uncert_ref)
# Compare this also to the "combine" call
ccd2 = combine(ccd_list, method='median', combine_uncertainty_function=np.sum)
np.testing.assert_array_equal(ccd.data, ccd2.data)
np.testing.assert_array_equal(ccd.uncertainty.array, ccd2.uncertainty.array)
# test the optional uncertainty function in sum_combine
def test_sum_combine_uncertainty(ccd_data):
ccd_list = [ccd_data, ccd_data, ccd_data]
c = Combiner(ccd_list)
ccd = c.sum_combine(uncertainty_func=np.sum)
uncert_ref = np.sum(c.data_arr, 0) * np.sqrt(3)
np.testing.assert_almost_equal(ccd.uncertainty.array, uncert_ref)
# Compare this also to the "combine" call
ccd2 = combine(ccd_list, method='sum', combine_uncertainty_function=np.sum)
np.testing.assert_array_equal(ccd.data, ccd2.data)
np.testing.assert_array_equal(ccd.uncertainty.array, ccd2.uncertainty.array)
# test resulting uncertainty is corrected for the number of images
def test_combiner_uncertainty_average():
ccd_list = [CCDData(np.ones((10, 10)), unit=u.adu),
CCDData(np.ones((10, 10))*2, unit=u.adu)]
c = Combiner(ccd_list)
ccd = c.average_combine()
# Just the standard deviation of ccd data.
ref_uncertainty = np.ones((10, 10)) / 2
# Correction because we combined two images.
ref_uncertainty /= np.sqrt(2)
np.testing.assert_array_almost_equal(ccd.uncertainty.array,
ref_uncertainty)
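The reference uncertainty above is just the per-pixel standard deviation divided by the square root of the number of images (the standard error of the mean). Stepping through the numbers as an independent check:

```python
import numpy as np

values = [1.0, 2.0]           # the two pixel values being averaged
pixel_std = np.std(values)    # population std: |2 - 1.5| = 0.5
n_images = len(values)
# Standard error of the mean: std / sqrt(N).
uncertainty = pixel_std / np.sqrt(n_images)
np.testing.assert_almost_equal(uncertainty, 0.5 / np.sqrt(2))
```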
# test resulting uncertainty is corrected for the number of images (with mask)
def test_combiner_uncertainty_average_mask():
mask = np.zeros((10, 10), dtype=np.bool_)
mask[5, 5] = True
ccd_with_mask = CCDData(np.ones((10, 10)), unit=u.adu, mask=mask)
ccd_list = [ccd_with_mask,
CCDData(np.ones((10, 10))*2, unit=u.adu),
CCDData(np.ones((10, 10))*3, unit=u.adu)]
c = Combiner(ccd_list)
ccd = c.average_combine()
# Just the standard deviation of ccd data.
ref_uncertainty = np.ones((10, 10)) * np.std([1, 2, 3])
    # Correction because we combined three images.
ref_uncertainty /= np.sqrt(3)
ref_uncertainty[5, 5] = np.std([2, 3]) / np.sqrt(2)
np.testing.assert_array_almost_equal(ccd.uncertainty.array,
ref_uncertainty)
# test resulting uncertainty is corrected for the number of images (with mask)
def test_combiner_uncertainty_median_mask():
mad_to_sigma = 1.482602218505602
mask = np.zeros((10, 10), dtype=np.bool_)
mask[5, 5] = True
ccd_with_mask = CCDData(np.ones((10, 10)), unit=u.adu, mask=mask)
ccd_list = [ccd_with_mask,
CCDData(np.ones((10, 10))*2, unit=u.adu),
CCDData(np.ones((10, 10))*3, unit=u.adu)]
c = Combiner(ccd_list)
ccd = c.median_combine()
# Just the standard deviation of ccd data.
ref_uncertainty = np.ones((10, 10)) * mad_to_sigma * mad([1, 2, 3])
    # Correction because we combined three images.
ref_uncertainty /= np.sqrt(3) # 0.855980789955
ref_uncertainty[5, 5] = mad_to_sigma * mad([2, 3]) / np.sqrt(2) # 0.524179041254
np.testing.assert_array_almost_equal(ccd.uncertainty.array,
ref_uncertainty)
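The hard-coded `mad_to_sigma` factor is the usual Gaussian consistency constant, 1 / Phi^-1(3/4) ≈ 1.4826, which converts a median absolute deviation into a sigma estimate. It can be reproduced with the standard library alone (a sanity check, not part of ccdproc):

```python
from statistics import NormalDist

# For a normal distribution MAD = sigma * Phi^-1(3/4), so the factor that
# turns a MAD into a sigma estimate is 1 / Phi^-1(3/4).
mad_to_sigma = 1 / NormalDist().inv_cdf(0.75)
assert abs(mad_to_sigma - 1.482602218505602) < 1e-9
```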
# test resulting uncertainty is corrected for the number of images (with mask)
def test_combiner_uncertainty_sum_mask():
mask = np.zeros((10, 10), dtype=np.bool_)
mask[5, 5] = True
ccd_with_mask = CCDData(np.ones((10, 10)), unit=u.adu, mask=mask)
ccd_list = [ccd_with_mask,
CCDData(np.ones((10, 10))*2, unit=u.adu),
CCDData(np.ones((10, 10))*3, unit=u.adu)]
c = Combiner(ccd_list)
ccd = c.sum_combine()
# Just the standard deviation of ccd data.
ref_uncertainty = np.ones((10, 10)) * np.std([1, 2, 3])
ref_uncertainty *= np.sqrt(3)
ref_uncertainty[5, 5] = np.std([2, 3]) * np.sqrt(2)
np.testing.assert_array_almost_equal(ccd.uncertainty.array,
ref_uncertainty)
def test_combiner_3d():
    data1 = CCDData(3 * np.ones((5, 5, 5)), unit=u.adu)
    data2 = CCDData(2 * np.ones((5, 5, 5)), unit=u.adu)
    data3 = CCDData(4 * np.ones((5, 5, 5)), unit=u.adu)
ccd_list = [data1, data2, data3]
c = Combiner(ccd_list)
assert c.data_arr.shape == (3, 5, 5, 5)
assert c.data_arr.mask.shape == (3, 5, 5, 5)
ccd = c.average_combine()
assert ccd.shape == (5, 5, 5)
np.testing.assert_array_almost_equal(ccd.data, data1, decimal=4)
def test_3d_combiner_with_scaling():
    # The factors below are not particularly important; just avoid anything
    # whose average is 1.
    ccd_data = CCDData(np.ones((5, 5, 5)), unit=u.adu)
    ccd_data_lower = CCDData(3 * np.ones((5, 5, 5)), unit=u.adu)
    ccd_data_higher = CCDData(0.9 * np.ones((5, 5, 5)), unit=u.adu)
combiner = Combiner([ccd_data, ccd_data_higher, ccd_data_lower])
# scale each array to the mean of the first image
scale_by_mean = lambda x: ccd_data.data.mean()/np.ma.average(x)
combiner.scaling = scale_by_mean
avg_ccd = combiner.average_combine()
# Does the mean of the scaled arrays match the value to which it was
# scaled?
np.testing.assert_almost_equal(avg_ccd.data.mean(),
ccd_data.data.mean())
assert avg_ccd.shape == ccd_data.shape
median_ccd = combiner.median_combine()
# Does median also scale to the correct value?
np.testing.assert_almost_equal(np.median(median_ccd.data),
np.median(ccd_data.data))
# Set the scaling manually...
combiner.scaling = [scale_by_mean(combiner.data_arr[i]) for i in range(3)]
avg_ccd = combiner.average_combine()
np.testing.assert_almost_equal(avg_ccd.data.mean(),
ccd_data.data.mean())
assert avg_ccd.shape == ccd_data.shape
def test_clip_extrema_3d():
ccdlist = [CCDData(np.ones((3, 3, 3))*90., unit="adu"),
CCDData(np.ones((3, 3, 3))*20., unit="adu"),
CCDData(np.ones((3, 3, 3))*10., unit="adu"),
CCDData(np.ones((3, 3, 3))*40., unit="adu"),
CCDData(np.ones((3, 3, 3))*25., unit="adu"),
CCDData(np.ones((3, 3, 3))*35., unit="adu"),
]
c = Combiner(ccdlist)
c.clip_extrema(nlow=1, nhigh=1)
result = c.average_combine()
expected = CCDData(np.ones((3, 3, 3)) * 30, unit="adu")
np.testing.assert_array_equal(result, expected)
@pytest.mark.parametrize('comb_func', ['average_combine', 'median_combine', 'sum_combine'])
def test_writeable_after_combine(ccd_data, tmpdir, comb_func):
tmp_file = tmpdir.join('tmp.fits')
from ..combiner import Combiner
combined = Combiner([ccd_data for _ in range(3)])
ccd2 = getattr(combined, comb_func)()
# This should not fail because the resulting uncertainty has a mask
ccd2.write(tmp_file.strpath)
def test_clip_extrema():
ccdlist = [CCDData(np.ones((3, 5))*90., unit="adu"),
CCDData(np.ones((3, 5))*20., unit="adu"),
CCDData(np.ones((3, 5))*10., unit="adu"),
CCDData(np.ones((3, 5))*40., unit="adu"),
CCDData(np.ones((3, 5))*25., unit="adu"),
CCDData(np.ones((3, 5))*35., unit="adu"),
]
ccdlist[0].data[0,1] = 3.1
ccdlist[1].data[1,2] = 100.1
ccdlist[1].data[2,0] = 100.1
c = Combiner(ccdlist)
c.clip_extrema(nlow=1, nhigh=1)
result = c.average_combine()
expected = [[30.0, 22.5, 30.0, 30.0, 30.0],
[30.0, 30.0, 47.5, 30.0, 30.0],
[47.5, 30.0, 30.0, 30.0, 30.0]]
np.testing.assert_array_equal(result, expected)
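Per pixel, extrema clipping sorts the stack of values and rejects the `nlow` smallest and `nhigh` largest before combining. A minimal numpy sketch of that idea for one unperturbed pixel of the test above (assuming no pre-existing masks; not the `clip_extrema` implementation):

```python
import numpy as np

pixel_stack = np.array([90., 20., 10., 40., 25., 35.])
nlow, nhigh = 1, 1
order = np.argsort(pixel_stack)
# Drop the 1 lowest (10) and 1 highest (90) value per pixel.
keep = order[nlow:len(pixel_stack) - nhigh]
# Remaining values 20, 25, 35, 40 average to 30, matching the test above.
assert np.mean(pixel_stack[keep]) == 30.0
```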
def test_clip_extrema_via_combine():
ccdlist = [CCDData(np.ones((3, 5))*90., unit="adu"),
CCDData(np.ones((3, 5))*20., unit="adu"),
CCDData(np.ones((3, 5))*10., unit="adu"),
CCDData(np.ones((3, 5))*40., unit="adu"),
CCDData(np.ones((3, 5))*25., unit="adu"),
CCDData(np.ones((3, 5))*35., unit="adu"),
]
ccdlist[0].data[0,1] = 3.1
ccdlist[1].data[1,2] = 100.1
ccdlist[1].data[2,0] = 100.1
    result = combine(ccdlist, clip_extrema=True, nlow=1, nhigh=1)
expected = [[30.0, 22.5, 30.0, 30.0, 30.0],
[30.0, 30.0, 47.5, 30.0, 30.0],
[47.5, 30.0, 30.0, 30.0, 30.0]]
np.testing.assert_array_equal(result, expected)
def test_clip_extrema_with_other_rejection():
ccdlist = [CCDData(np.ones((3, 5))*90., unit="adu"),
CCDData(np.ones((3, 5))*20., unit="adu"),
CCDData(np.ones((3, 5))*10., unit="adu"),
CCDData(np.ones((3, 5))*40., unit="adu"),
CCDData(np.ones((3, 5))*25., unit="adu"),
CCDData(np.ones((3, 5))*35., unit="adu"),
]
ccdlist[0].data[0,1] = 3.1
ccdlist[1].data[1,2] = 100.1
ccdlist[1].data[2,0] = 100.1
c = Combiner(ccdlist)
    # Reject ccdlist[1].data[1, 2] by other means
    c.data_arr.mask[1, 1, 2] = True
    # Reject ccdlist[3].data[0, 0] by other means
    c.data_arr.mask[3, 0, 0] = True
c.clip_extrema(nlow=1, nhigh=1)
result = c.average_combine()
expected = [[ 80./3., 22.5, 30. , 30., 30.],
[ 30. , 30. , 47.5, 30., 30.],
[ 47.5, 30. , 30. , 30., 30.]]
np.testing.assert_array_equal(result, expected)
# --- tests/test_httpx_async.py from jakul/pytest_httpx (MIT license) ---
import httpx
import pytest
from pytest import Testdir
import pytest_httpx
from pytest_httpx import HTTPXMock
@pytest.mark.asyncio
async def test_without_response(httpx_mock: HTTPXMock) -> None:
with pytest.raises(Exception) as exception_info:
async with httpx.AsyncClient() as client:
await client.get("https://test_url")
assert (
str(exception_info.value)
== """No response can be found for GET request on https://test_url"""
)
@pytest.mark.asyncio
async def test_default_response(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response()
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b""
assert response.status_code == 200
assert response.headers == httpx.Headers({})
assert response.http_version == "HTTP/1.1"
@pytest.mark.asyncio
async def test_url_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b""
response = await client.post("https://test_url")
assert response.content == b""
@pytest.mark.asyncio
async def test_url_query_string_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url?a=1&b=2")
async with httpx.AsyncClient() as client:
response = await client.post("https://test_url?a=1&b=2")
assert response.content == b""
# Parameters order should not matter
response = await client.get("https://test_url?b=2&a=1")
assert response.content == b""
@pytest.mark.asyncio
async def test_url_not_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url")
async with httpx.AsyncClient() as client:
with pytest.raises(httpx.TimeoutException) as exception_info:
await client.get("https://test_url2")
assert (
str(exception_info.value)
== """No response can be found for GET request on https://test_url2 amongst:
Match all requests on https://test_url"""
)
# Clean up responses to avoid assertion failure
httpx_mock.reset(assert_all_responses_were_requested=False)
@pytest.mark.asyncio
async def test_url_query_string_not_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url?a=1&a=2")
async with httpx.AsyncClient() as client:
with pytest.raises(httpx.TimeoutException) as exception_info:
# Same parameter order matters as it corresponds to a list on server side
await client.get("https://test_url?a=2&a=1")
assert (
str(exception_info.value)
== """No response can be found for GET request on https://test_url?a=2&a=1 amongst:
Match all requests on https://test_url?a=1&a=2"""
)
# Clean up responses to avoid assertion failure
httpx_mock.reset(assert_all_responses_were_requested=False)
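The ordering constraint in the test above mirrors how repeated query parameters decode into an ordered list on the server side, so `a=1&a=2` and `a=2&a=1` are genuinely different requests. The standard library shows the distinction (an illustration, not pytest_httpx internals):

```python
from urllib.parse import parse_qsl

# Repeated keys decode to an ordered list of pairs, so order is significant.
assert parse_qsl("a=1&a=2") == [("a", "1"), ("a", "2")]
assert parse_qsl("a=1&a=2") != parse_qsl("a=2&a=1")
```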
@pytest.mark.asyncio
async def test_method_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(method="get")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b""
response = await client.get("https://test_url2")
assert response.content == b""
@pytest.mark.asyncio
async def test_method_not_matching(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(method="get")
async with httpx.AsyncClient() as client:
with pytest.raises(httpx.TimeoutException) as exception_info:
await client.post("https://test_url")
assert (
str(exception_info.value)
== """No response can be found for POST request on https://test_url amongst:
Match GET requests"""
)
# Clean up responses to avoid assertion failure
httpx_mock.reset(assert_all_responses_were_requested=False)
@pytest.mark.asyncio
async def test_with_one_response(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url", content=b"test content")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b"test content"
response = await client.get("https://test_url")
assert response.content == b"test content"
@pytest.mark.asyncio
async def test_response_with_string_body(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url", text="test content")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b"test content"
@pytest.mark.asyncio
async def test_response_with_html_string_body(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url", html="<body>test content</body>")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b"<body>test content</body>"
@pytest.mark.asyncio
async def test_stream_response_streaming(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url",
stream=pytest_httpx.IteratorStream([b"part 1", b"part 2"]),
)
async with httpx.AsyncClient() as client:
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1",
b"part 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1",
b"part 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
@pytest.mark.asyncio
async def test_content_response_streaming(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url",
content=b"part 1 and 2",
)
async with httpx.AsyncClient() as client:
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1 and 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1 and 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
@pytest.mark.asyncio
async def test_text_response_streaming(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url",
text="part 1 and 2",
)
async with httpx.AsyncClient() as client:
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1 and 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"part 1 and 2",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
@pytest.mark.asyncio
async def test_default_response_streaming(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response()
async with httpx.AsyncClient() as client:
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
async with client.stream(method="GET", url="https://test_url") as response:
assert [part async for part in response.aiter_raw()] == [
b"",
]
# Assert that stream still behaves the proper way (can only be consumed once per request)
with pytest.raises(httpx.StreamConsumed):
async for part in response.aiter_raw():
pass
@pytest.mark.asyncio
async def test_with_many_responses(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url="https://test_url", content=b"test content 1")
httpx_mock.add_response(url="https://test_url", content=b"test content 2")
async with httpx.AsyncClient() as client:
response = await client.get("https://test_url")
assert response.content == b"test content 1"
response = await client.get("https://test_url")
assert response.content == b"test content 2"
response = await client.get("https://test_url")
assert response.content == b"test content 2"
@pytest.mark.asyncio
async def test_with_many_responses_methods(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url", method="GET", content=b"test content 1"
)
httpx_mock.add_response(
url="https://test_url", method="POST", content=b"test content 2"
)
httpx_mock.add_response(
url="https://test_url", method="PUT", content=b"test content 3"
)
httpx_mock.add_response(
url="https://test_url", method="DELETE", content=b"test content 4"
)
httpx_mock.add_response(
url="https://test_url", method="PATCH", content=b"test content 5"
)
httpx_mock.add_response(
url="https://test_url", method="HEAD", content=b"test content 6"
)
async with httpx.AsyncClient() as client:
response = await client.post("https://test_url")
assert response.content == b"test content 2"
response = await client.get("https://test_url")
assert response.content == b"test content 1"
response = await client.put("https://test_url")
assert response.content == b"test content 3"
response = await client.head("https://test_url")
assert response.content == b"test content 6"
response = await client.patch("https://test_url")
assert response.content == b"test content 5"
response = await client.delete("https://test_url")
assert response.content == b"test content 4"
@pytest.mark.asyncio
async def test_with_many_responses_status_codes(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url", method="GET", content=b"test content 1", status_code=200
)
httpx_mock.add_response(
url="https://test_url",
method="POST",
content=b"test content 2",
status_code=201,
)
httpx_mock.add_response(
url="https://test_url", method="PUT", content=b"test content 3", status_code=202
)
httpx_mock.add_response(
url="https://test_url",
method="DELETE",
content=b"test content 4",
status_code=303,
)
httpx_mock.add_response(
url="https://test_url",
method="PATCH",
content=b"test content 5",
status_code=404,
)
httpx_mock.add_response(
url="https://test_url",
method="HEAD",
content=b"test content 6",
status_code=500,
)
async with httpx.AsyncClient() as client:
response = await client.post("https://test_url")
assert response.content == b"test content 2"
assert response.status_code == 201
response = await client.get("https://test_url")
assert response.content == b"test content 1"
assert response.status_code == 200
response = await client.put("https://test_url")
assert response.content == b"test content 3"
assert response.status_code == 202
response = await client.head("https://test_url")
assert response.content == b"test content 6"
assert response.status_code == 500
response = await client.patch("https://test_url")
assert response.content == b"test content 5"
assert response.status_code == 404
response = await client.delete("https://test_url")
assert response.content == b"test content 4"
assert response.status_code == 303
@pytest.mark.asyncio
async def test_with_many_responses_urls_str(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="https://test_url?param1=test", method="GET", content=b"test content 1"
)
httpx_mock.add_response(
url="https://test_url?param2=test", method="POST", content=b"test content 2"
)
httpx_mock.add_response(
url="https://test_url?param3=test", method="PUT", content=b"test content 3"
)
httpx_mock.add_response(
url="https://test_url?param4=test", method="DELETE", content=b"test content 4"
)
httpx_mock.add_response(
url="https://test_url?param5=test", method="PATCH", content=b"test content 5"
)
httpx_mock.add_response(
url="https://test_url?param6=test", method="HEAD", content=b"test content 6"
)
async with httpx.AsyncClient() as client:
response = await client.post(
httpx.URL("https://test_url", params={"param2": "test"})
)
assert response.content == b"test content 2"
response = await client.get(
httpx.URL("https://test_url", params={"param1": "test"})
)
assert response.content == b"test content 1"
response = await client.put(
httpx.URL("https://test_url", params={"param3": "test"})
)
assert response.content == b"test content 3"
response = await client.head(
httpx.URL("https://test_url", params={"param6": "test"})
)
assert response.content == b"test content 6"
response = await client.patch(
httpx.URL("https://test_url", params={"param5": "test"})
)
assert response.content == b"test content 5"
response = await client.delete(
httpx.URL("https://test_url", params={"param4": "test"})
)
assert response.content == b"test content 4"


@pytest.mark.asyncio
async def test_response_with_pattern_in_url(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(url=re.compile(".*test.*"))
    httpx_mock.add_response(url="https://unmatched", content=b"test content")

    async with httpx.AsyncClient() as client:
        response = await client.get("https://unmatched")
        assert response.content == b"test content"

        response = await client.get("https://test_url")
        assert response.content == b""


@pytest.mark.asyncio
async def test_request_with_pattern_in_url(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(url="https://test_url")
    httpx_mock.add_response(url="https://unmatched")

    async with httpx.AsyncClient() as client:
        await client.get("https://unmatched")
        await client.get("https://test_url", headers={"X-Test": "1"})

    assert httpx_mock.get_request(url=re.compile(".*test.*")).headers["x-test"] == "1"


@pytest.mark.asyncio
async def test_requests_with_pattern_in_url(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(url="https://test_url")
    httpx_mock.add_response(url="https://tests_url")
    httpx_mock.add_response(url="https://unmatched")

    async with httpx.AsyncClient() as client:
        await client.get("https://tests_url", headers={"X-Test": "1"})
        await client.get("https://unmatched", headers={"X-Test": "2"})
        await client.get("https://test_url")

    requests = httpx_mock.get_requests(url=re.compile(".*test.*"))
    assert len(requests) == 2
    assert requests[0].headers["x-test"] == "1"
    assert "x-test" not in requests[1].headers


@pytest.mark.asyncio
async def test_callback_with_pattern_in_url(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json={"url": str(request.url)})

    def custom_response2(request: httpx.Request) -> httpx.Response:
        return httpx.Response(
            status_code=200,
            extensions={"http_version": b"HTTP/2.0"},
            json={"url": str(request.url)},
        )

    httpx_mock.add_callback(custom_response, url=re.compile(".*test.*"))
    httpx_mock.add_callback(custom_response2, url="https://unmatched")

    async with httpx.AsyncClient() as client:
        response = await client.get("https://unmatched")
        assert response.http_version == "HTTP/2.0"

        response = await client.get("https://test_url")
        assert response.http_version == "HTTP/1.1"


@pytest.mark.asyncio
async def test_with_many_responses_urls_instances(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param1": "test"}),
        method="GET",
        content=b"test content 1",
    )
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param2": "test"}),
        method="POST",
        content=b"test content 2",
    )
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param3": "test"}),
        method="PUT",
        content=b"test content 3",
    )
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param4": "test"}),
        method="DELETE",
        content=b"test content 4",
    )
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param5": "test"}),
        method="PATCH",
        content=b"test content 5",
    )
    httpx_mock.add_response(
        url=httpx.URL("https://test_url", params={"param6": "test"}),
        method="HEAD",
        content=b"test content 6",
    )

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url?param2=test")
        assert response.content == b"test content 2"

        response = await client.get("https://test_url?param1=test")
        assert response.content == b"test content 1"

        response = await client.put("https://test_url?param3=test")
        assert response.content == b"test content 3"

        response = await client.head("https://test_url?param6=test")
        assert response.content == b"test content 6"

        response = await client.patch("https://test_url?param5=test")
        assert response.content == b"test content 5"

        response = await client.delete("https://test_url?param4=test")
        assert response.content == b"test content 4"


@pytest.mark.asyncio
async def test_with_http_version_2(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url", http_version="HTTP/2", content=b"test content 1"
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.content == b"test content 1"
        assert response.http_version == "HTTP/2"


@pytest.mark.asyncio
async def test_with_headers(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        content=b"test content 1",
        headers={"X-Test": "Test value"},
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.content == b"test content 1"
        assert response.headers == httpx.Headers(
            {"x-test": "Test value", "content-length": "14"}
        )


@pytest.mark.asyncio
async def test_requests_retrieval(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url", method="GET", content=b"test content 1"
    )
    httpx_mock.add_response(
        url="https://test_url", method="POST", content=b"test content 2"
    )
    httpx_mock.add_response(
        url="https://test_url", method="PUT", content=b"test content 3"
    )
    httpx_mock.add_response(
        url="https://test_url", method="DELETE", content=b"test content 4"
    )
    httpx_mock.add_response(
        url="https://test_url", method="PATCH", content=b"test content 5"
    )
    httpx_mock.add_response(
        url="https://test_url", method="HEAD", content=b"test content 6"
    )

    async with httpx.AsyncClient() as client:
        await client.post("https://test_url", content=b"sent content 2")
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.put("https://test_url", content=b"sent content 3")
        await client.head("https://test_url")
        await client.patch("https://test_url", content=b"sent content 5")
        await client.delete("https://test_url", headers={"X-Test": "test header 4"})

    assert (
        httpx_mock.get_request(url=httpx.URL("https://test_url"), method="PATCH").read()
        == b"sent content 5"
    )
    assert (
        httpx_mock.get_request(url=httpx.URL("https://test_url"), method="HEAD").read()
        == b""
    )
    assert (
        httpx_mock.get_request(url=httpx.URL("https://test_url"), method="PUT").read()
        == b"sent content 3"
    )
    assert (
        httpx_mock.get_request(url=httpx.URL("https://test_url"), method="GET").headers[
            "x-test"
        ]
        == "test header 1"
    )
    assert (
        httpx_mock.get_request(url=httpx.URL("https://test_url"), method="POST").read()
        == b"sent content 2"
    )
    assert (
        httpx_mock.get_request(
            url=httpx.URL("https://test_url"), method="DELETE"
        ).headers["x-test"]
        == "test header 4"
    )


@pytest.mark.asyncio
async def test_requests_retrieval_on_same_url(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(url="https://test_url")

    async with httpx.AsyncClient() as client:
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.get("https://test_url", headers={"X-TEST": "test header 2"})

    requests = httpx_mock.get_requests(url=httpx.URL("https://test_url"))
    assert len(requests) == 2
    assert requests[0].headers["x-test"] == "test header 1"
    assert requests[1].headers["x-test"] == "test header 2"


@pytest.mark.asyncio
async def test_request_retrieval_on_same_url(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.get("https://test_url2", headers={"X-TEST": "test header 2"})

    request = httpx_mock.get_request(url=httpx.URL("https://test_url"))
    assert request.headers["x-test"] == "test header 1"


@pytest.mark.asyncio
async def test_requests_retrieval_on_same_method(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.get("https://test_url2", headers={"X-TEST": "test header 2"})

    requests = httpx_mock.get_requests(method="GET")
    assert len(requests) == 2
    assert requests[0].headers["x-test"] == "test header 1"
    assert requests[1].headers["x-test"] == "test header 2"


@pytest.mark.asyncio
async def test_request_retrieval_on_same_method(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.post("https://test_url", headers={"X-TEST": "test header 2"})

    request = httpx_mock.get_request(method="GET")
    assert request.headers["x-test"] == "test header 1"


@pytest.mark.asyncio
async def test_requests_retrieval_on_same_url_and_method(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.get("https://test_url", headers={"X-TEST": "test header 1"})
        await client.get("https://test_url", headers={"X-TEST": "test header 2"})
        await client.post("https://test_url", headers={"X-TEST": "test header 3"})
        await client.get("https://test_url2", headers={"X-TEST": "test header 4"})

    requests = httpx_mock.get_requests(url=httpx.URL("https://test_url"), method="GET")
    assert len(requests) == 2
    assert requests[0].headers["x-test"] == "test header 1"
    assert requests[1].headers["x-test"] == "test header 2"


@pytest.mark.asyncio
async def test_default_requests_retrieval(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.post("https://test_url", headers={"X-TEST": "test header 1"})
        await client.get("https://test_url2", headers={"X-TEST": "test header 2"})

    requests = httpx_mock.get_requests()
    assert len(requests) == 2
    assert requests[0].headers["x-test"] == "test header 1"
    assert requests[1].headers["x-test"] == "test header 2"


@pytest.mark.asyncio
async def test_default_request_retrieval(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        await client.post("https://test_url", headers={"X-TEST": "test header 1"})

    request = httpx_mock.get_request()
    assert request.headers["x-test"] == "test header 1"


@pytest.mark.asyncio
async def test_requests_json_body(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url", method="GET", json=["list content 1", "list content 2"]
    )
    httpx_mock.add_response(
        url="https://test_url",
        method="POST",
        json={"key 1": "value 1", "key 2": "value 2"},
    )
    httpx_mock.add_response(url="https://test_url", method="PUT", json="string value")

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url")
        assert response.json() == {"key 1": "value 1", "key 2": "value 2"}
        assert response.headers["content-type"] == "application/json"

        response = await client.get("https://test_url")
        assert response.json() == ["list content 1", "list content 2"]
        assert response.headers["content-type"] == "application/json"

        response = await client.put("https://test_url")
        assert response.json() == "string value"
        assert response.headers["content-type"] == "application/json"


@pytest.mark.asyncio
async def test_callback_raising_exception(httpx_mock: HTTPXMock) -> None:
    def raise_timeout(request: httpx.Request) -> httpx.Response:
        raise httpx.ReadTimeout(
            f"Unable to read within {request.extensions['timeout']['read']}",
            request=request,
        )

    httpx_mock.add_callback(raise_timeout, url="https://test_url")

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.ReadTimeout) as exception_info:
            await client.get("https://test_url")
        assert str(exception_info.value) == "Unable to read within 5.0"


@pytest.mark.asyncio
async def test_request_exception_raising(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_exception(
        httpx.ReadTimeout("Unable to read within 5.0"), url="https://test_url"
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.ReadTimeout) as exception_info:
            await client.get("https://test_url")
        assert str(exception_info.value) == "Unable to read within 5.0"
        assert exception_info.value.request is not None


@pytest.mark.asyncio
async def test_non_request_exception_raising(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_exception(
        httpx.HTTPError("Unable to read within 5.0"), url="https://test_url"
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.HTTPError) as exception_info:
            await client.get("https://test_url")
        assert str(exception_info.value) == "Unable to read within 5.0"


@pytest.mark.asyncio
async def test_callback_returning_response(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json={"url": str(request.url)})

    httpx_mock.add_callback(custom_response, url="https://test_url")

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.json() == {"url": "https://test_url"}
        assert response.headers["content-type"] == "application/json"


@pytest.mark.asyncio
async def test_callback_executed_twice(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json=["content"])

    httpx_mock.add_callback(custom_response)

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.json() == ["content"]
        assert response.headers["content-type"] == "application/json"

        response = await client.post("https://test_url")
        assert response.json() == ["content"]
        assert response.headers["content-type"] == "application/json"


@pytest.mark.asyncio
async def test_callback_registered_after_response(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json=["content2"])

    httpx_mock.add_response(json=["content1"])
    httpx_mock.add_callback(custom_response)

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.json() == ["content1"]
        assert response.headers["content-type"] == "application/json"

        response = await client.post("https://test_url")
        assert response.json() == ["content2"]
        assert response.headers["content-type"] == "application/json"

        # Assert that the last registered callback is sent again even if there is a response
        response = await client.post("https://test_url")
        assert response.json() == ["content2"]
        assert response.headers["content-type"] == "application/json"


@pytest.mark.asyncio
async def test_response_registered_after_callback(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json=["content1"])

    httpx_mock.add_callback(custom_response)
    httpx_mock.add_response(json=["content2"])

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.json() == ["content1"]
        assert response.headers["content-type"] == "application/json"

        response = await client.post("https://test_url")
        assert response.json() == ["content2"]
        assert response.headers["content-type"] == "application/json"

        # Assert that the last registered response is sent again even if there is a callback
        response = await client.post("https://test_url")
        assert response.json() == ["content2"]
        assert response.headers["content-type"] == "application/json"


@pytest.mark.asyncio
async def test_callback_matching_method(httpx_mock: HTTPXMock) -> None:
    def custom_response(request: httpx.Request) -> httpx.Response:
        return httpx.Response(status_code=200, json=["content"])

    httpx_mock.add_callback(custom_response, method="GET")

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.json() == ["content"]
        assert response.headers["content-type"] == "application/json"

        response = await client.get("https://test_url2")
        assert response.json() == ["content"]
        assert response.headers["content-type"] == "application/json"


def test_request_retrieval_with_more_than_one(testdir: Testdir) -> None:
    """
    Single request cannot be returned if there is more than one matching.
    """
    testdir.makepyfile(
        """
        import pytest
        import httpx


        @pytest.mark.asyncio
        async def test_request_retrieval_with_more_than_one(httpx_mock):
            httpx_mock.add_response()

            async with httpx.AsyncClient() as client:
                await client.get("https://test_url", headers={"X-TEST": "test header 1"})
                await client.get("https://test_url", headers={"X-TEST": "test header 2"})

            httpx_mock.get_request(url=httpx.URL("https://test_url"))
    """
    )
    result = testdir.runpytest()
    result.assert_outcomes(failed=1)
    result.stdout.fnmatch_lines(
        [
            "*AssertionError: More than one request (2) matched, use get_requests instead."
        ]
    )


@pytest.mark.asyncio
async def test_headers_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={"user-agent": f"python-httpx/{httpx.__version__}"}
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")
        assert response.content == b""


@pytest.mark.asyncio
async def test_headers_not_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
            "host2": "test_url",
        }
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.get("https://test_url")
        assert (
            str(exception_info.value)
            == f"""No response can be found for GET request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers amongst:
Match all requests with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2', 'host2': 'test_url'}} headers"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_content_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(match_content=b"This is the body")

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url", content=b"This is the body")
        assert response.read() == b""


@pytest.mark.asyncio
async def test_content_not_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(match_content=b"This is the body")

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body2")
        assert (
            str(exception_info.value)
            == """No response can be found for POST request on https://test_url with b'This is the body2' body amongst:
Match all requests with b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_headers_and_content_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={"user-agent": f"python-httpx/{httpx.__version__}"},
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url", content=b"This is the body")
        assert response.content == b""


@pytest.mark.asyncio
async def test_headers_not_matching_and_content_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_headers_matching_and_content_not_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_headers_and_content_not_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_and_headers_and_content_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        match_headers={"user-agent": f"python-httpx/{httpx.__version__}"},
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url", content=b"This is the body")
        assert response.content == b""


@pytest.mark.asyncio
async def test_headers_not_matching_and_url_and_content_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_and_headers_not_matching_and_content_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url2",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_and_headers_matching_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_headers_matching_and_url_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url2",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_matching_and_headers_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_and_headers_and_content_not_matching(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        url="https://test_url2",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match all requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_method_and_url_and_headers_and_content_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        method="POST",
        match_headers={"user-agent": f"python-httpx/{httpx.__version__}"},
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        response = await client.post("https://test_url", content=b"This is the body")
        assert response.content == b""


@pytest.mark.asyncio
async def test_headers_not_matching_and_method_and_url_and_content_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        method="POST",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_url_and_headers_not_matching_and_method_and_content_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url2",
        method="POST",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_method_and_url_and_headers_matching_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        method="POST",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_method_and_headers_matching_and_url_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url2",
        method="POST",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)


@pytest.mark.asyncio
async def test_method_and_url_matching_and_headers_and_content_not_matching(
    httpx_mock: HTTPXMock,
) -> None:
    httpx_mock.add_response(
        url="https://test_url",
        method="POST",
        match_headers={
            "user-agent": f"python-httpx/{httpx.__version__}",
            "host": "test_url2",
        },
        match_content=b"This is the body2",
    )

    async with httpx.AsyncClient() as client:
        with pytest.raises(httpx.TimeoutException) as exception_info:
            await client.post("https://test_url", content=b"This is the body")
        assert (
            str(exception_info.value)
            == f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
        )

    # Clean up responses to avoid assertion failure
    httpx_mock.reset(assert_all_responses_were_requested=False)
@pytest.mark.asyncio
async def test_method_matching_and_url_and_headers_and_content_not_matching(
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="https://test_url2",
method="POST",
match_headers={
"user-agent": f"python-httpx/{httpx.__version__}",
"host": "test_url2",
},
match_content=b"This is the body2",
)
async with httpx.AsyncClient() as client:
with pytest.raises(httpx.TimeoutException) as exception_info:
await client.post("https://test_url", content=b"This is the body")
assert (
str(exception_info.value)
== f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match POST requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
)
# Clean up responses to avoid assertion failure
httpx_mock.reset(assert_all_responses_were_requested=False)
@pytest.mark.asyncio
async def test_method_and_url_and_headers_and_content_not_matching(
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="https://test_url2",
method="PUT",
match_headers={
"user-agent": f"python-httpx/{httpx.__version__}",
"host": "test_url2",
},
match_content=b"This is the body2",
)
async with httpx.AsyncClient() as client:
with pytest.raises(httpx.TimeoutException) as exception_info:
await client.post("https://test_url", content=b"This is the body")
assert (
str(exception_info.value)
== f"""No response can be found for POST request on https://test_url with {{'host': 'test_url', 'user-agent': 'python-httpx/{httpx.__version__}'}} headers and b'This is the body' body amongst:
Match PUT requests on https://test_url2 with {{'user-agent': 'python-httpx/{httpx.__version__}', 'host': 'test_url2'}} headers and b'This is the body2' body"""
)
# Clean up responses to avoid assertion failure
httpx_mock.reset(assert_all_responses_were_requested=False)
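The "no response can be found" messages asserted in the matching tests above all follow one template: a first line describing the unmatched request, then one `Match …` line per registered response. A minimal plain-Python sketch of that template (the function name and candidate structure here are hypothetical, not the pytest-httpx internals):

```python
def no_match_message(method, url, headers, body, candidates):
    """Assemble a 'no response can be found' style message (hypothetical helper)."""
    lines = [
        f"No response can be found for {method} request on {url} "
        f"with {headers} headers and {body!r} body amongst:"
    ]
    for c in candidates:
        lines.append(
            f"Match {c['method']} requests on {c['url']} "
            f"with {c['headers']} headers and {c['content']!r} body"
        )
    return "\n".join(lines)
```

This is why each assertion above compares against a two-line triple-quoted string: one request line plus one candidate line.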
@pytest.mark.asyncio
async def test_header_as_str_tuple_list(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        headers=[("set-cookie", "key=value"), ("set-cookie", "key2=value2")]
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert dict(response.cookies) == {"key": "value", "key2": "value2"}


@pytest.mark.asyncio
async def test_header_as_bytes_tuple_list(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(
        headers=[(b"set-cookie", b"key=value"), (b"set-cookie", b"key2=value2")]
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert dict(response.cookies) == {"key": "value", "key2": "value2"}


@pytest.mark.asyncio
async def test_header_as_bytes_dict(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(headers={b"set-cookie": b"key=value"})

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert dict(response.cookies) == {"key": "value"}


@pytest.mark.asyncio
async def test_header_as_httpx_headers(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response(headers=httpx.Headers({"set-cookie": "key=value"}))

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert dict(response.cookies) == {"key": "value"}


@pytest.mark.asyncio
async def test_elapsed_when_add_response(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_response()

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert response.elapsed is not None


@pytest.mark.asyncio
async def test_elapsed_when_add_callback(httpx_mock: HTTPXMock) -> None:
    httpx_mock.add_callback(
        callback=lambda req: httpx.Response(status_code=200, json={"foo": "bar"})
    )

    async with httpx.AsyncClient() as client:
        response = await client.get("https://test_url")

    assert response.elapsed is not None
# File: cosmogrb/instruments/gbm/__init__.py (repo: grburgess/cosmogrb, license: BSD-2-Clause)
from cosmogrb.instruments.gbm.gbm_grb import GBMGRB, GBMGRB_CPL, GBMGRB_CPL_Constant
from cosmogrb.instruments.gbm.gbm_lightcurve import GBMLightCurve
from cosmogrb.instruments.gbm.gbm_response import GBMResponse, BGOResponse, NaIResponse
from cosmogrb.instruments.gbm.gbm_background import GBMBackground
from cosmogrb.instruments.gbm.gbm_universe import GBM_CPL_Universe, GBM_CPL_Constant_Universe
from cosmogrb.instruments.gbm.gbm_lightcurve_analyzer import GBMLightCurveAnalyzer
from cosmogrb.instruments.gbm.gbm_trigger import GBMTrigger
# __all__ = ["GBMGRB_CPL"]
# File: embodied_arch/embodied_indep.py (repo: oosoba/rl-policy-abms, license: MIT)
'''
init: May2019
Author: OO
Goals: RF+AC for indep multi-agents
Still in development:
    - base class now encapsulated off in EmbodiedPopulation.py
    - Next: CycLR + WoLF
Design Logic:
    - All agents interact w/ a common env => single menv
    - The shared env is a property of the agent popn => menv is a param
    - Independent state reports for each agent now
        - formerly: ~~All agents see the same full state info. => S_t is shared~~
    - All RL agents act independently => separate indeps Actor & Value NNs
    - No explicit collusion or shared learning => separate sensoria NNs
'''
import sys

from embodied_arch.EmbodiedPopulation import EmbodiedAgent_Population
from embodied_arch.embodied_misc import *
from embodied_arch.embodied_misc import _zdim_, bernoulli_H, summarize, _sched_win_
from embodied_arch.misc_helpers import discount
from minoritygame.minority_multienv import MinorityGame_Multiagent_env

sys.path.append('.')

# class: var defaults
tboard_path = "./log"
agent_name = "embodied_agent_IRL"
__version__ = "0.0.3"
_DEBUG_ = False

# model: var defaults
_default_reports_ = ['Perf/Recent Reward',
                     'Losses/Policy LL', 'Losses/Entropy',
                     'Norms/Grad Norm', 'Norms/Var Norm']
_every_ = 100  # 500
_eps_ = 1.e-2  # 1.e-5
_ent_decay_ = 5e-2
(lrp, lrv) = (1e-2, 5e-2)  # learning rates
_max_len_ = 400
# default_sense_hSeq = (32,)

# envs: var defaults
(na_, m_, s_, p_) = (33, 3, 4, 0.5)
_s_size_, _a_size_ = (4, 1)
class EmbodiedAgent_IRF(EmbodiedAgent_Population):  # instead of EmbodiedAgent_Independent
    def __init__(self, name=agent_name,
                 env_=MinorityGame_Multiagent_env,
                 alpha=5.e-2,
                 latentDim=_zdim_,
                 space_size=(_s_size_, _a_size_),
                 sensorium=SensoriumNetworkTemplate,
                 actorNN=ActionPolicyNetwork,
                 recover=None,
                 _every_=_every_,
                 max_episode_length=_max_len_
                 ):  # hseq=None,
        super().__init__(name=name, env_=env_,
                         latentDim=latentDim, space_size=space_size,
                         sensorium=sensorium, actorNN=actorNN,
                         _every_=_every_, max_episode_length=max_episode_length
                         )
        self.report_labels = ['Perf/Recent Rewards',
                              'Losses/Policy LLs', 'Losses/Policy Entropies']
        self.last_good_model = recover

        self.lnPi_ts = {}
        self.entropies = {}
        self.GlnPi_ts = {}
        self.rflosses = {}
        self.policy_LLs = {}
        self.rf_trainers = {}
        self.optimizers = {name_i: tf.train.AdamOptimizer(learning_rate=lrp) for name_i in self.actor_names}

        with tf.variable_scope(self.name):
            for name in self.actor_names:  # need to do better vectorization at some point...
                # Intermediate variables
                self.lnPi_ts[name] = -tf.nn.sigmoid_cross_entropy_with_logits(
                    logits=self.a_logits[name],  ## a_t|s_t
                    labels=self.actions_Ats[name]  ## use tf.one_hot for m-ary action spaces
                )  # bernoulli_LL(self.actions_Ats[name], self.a_probs[name])
                self.entropies[name] = tf.clip_by_value(
                    bernoulli_H(self.a_probs[name]),
                    _eps_ / self.env.nagents, 100.
                )
                self.GlnPi_ts[name] = tf.multiply(self.returns_Gts[name], self.lnPi_ts[name])
                # Losses
                self.rflosses[name] = tf.reduce_mean(tf.reduce_sum(self.GlnPi_ts[name]))
                # Training ops
                self.rf_trainers[name] = self.optimizers[name].minimize(
                    loss=(- alpha) * (self.rflosses[name] + _ent_decay_ * self.entropies[name]),
                    var_list=[v for v in tf.trainable_variables() if name in v.name]
                )  # probably need to separate A-vs-C gradients later...

            self.policy_LLs = tf.stack(list(self.lnPi_ts.values()), axis=1)
            self.entropy = tf.stack(list(self.entropies.values()), axis=1)
            self.summLLs = summarize(self.policy_LLs)
            self.summEntropy = summarize(self.entropy)
        return

    def act(self, state, sess):
        """Returns vector of p-net sample action (in {0,1})"""
        assert self.actor_count in state.shape
        ind = 0
        probs = {}
        a_ts = {}
        for name in self.actor_names:
            st = state[ind]
            ind += 1
            probs[name] = sess.run(
                self.a_probs[name],
                {self.states_St[name]: np.expand_dims(st.flatten(), axis=0)}
            ).squeeze()
            a_ts[name] = 1 * (np.random.rand() < probs[name])  # scalar -> vector comparison
        return np.array(list(a_ts.values())).squeeze()

    def generate_summary(self, sess, act_dicts):
        return sess.run(
            [self.summLLs, self.summEntropy],
            feed_dict=act_dicts
        )

    def train_single(self, sess, agent_index, rollout, Qsa_i=None, gamma=0.95, bootstrap_value=0.0):
        # self.episode_buffer.append([s, acts_lst.squeeze(), r_lst, s1])
        discounted_returns = discount(np.hstack([rollout[2].ravel(), [bootstrap_value]]), gamma)
        discounted_returns = discounted_returns[:-1, None]
        feed_dict = {
            self.states_St[self.actor_names[agent_index]]: np.vstack(rollout[0].squeeze()),
            self.actions_Ats[self.actor_names[agent_index]]: np.vstack(rollout[1].squeeze()),
            self.returns_Gts[self.actor_names[agent_index]]: np.vstack(discounted_returns),
            self.states_St_prime[self.actor_names[agent_index]]: np.vstack(rollout[3].squeeze())
        }
        sess.run(
            self.rf_trainers[self.actor_names[agent_index]],
            feed_dict=feed_dict
        )
        return
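`train_single` above relies on the `discount` helper imported from `misc_helpers` to turn a reward sequence (padded with a bootstrap value) into discounted returns G_t. A minimal plain-numpy sketch of what such a helper computes — this is the standard backward recursion, not necessarily the repo's exact implementation:

```python
import numpy as np


def discount(x, gamma):
    """Discounted cumulative sums: out[t] = x[t] + gamma * out[t + 1]."""
    out = np.zeros(len(x), dtype=float)
    running = 0.0
    # accumulate from the end of the trajectory backwards
    for t in reversed(range(len(x))):
        running = x[t] + gamma * running
        out[t] = running
    return out
```

For example, `discount([1, 1, 1], 0.5)` yields `[1.75, 1.5, 1.0]`; the caller above appends `bootstrap_value` before discounting and then drops the last entry.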
class EmbodiedAgent_IRFB(EmbodiedAgent_Population):  # instead of EmbodiedAgent_Independent
    def __init__(self, name=agent_name,
                 env_=MinorityGame_Multiagent_env,
                 alpha_p=5.e-2, alpha_v=1e-1,
                 latentDim=_zdim_,
                 space_size=(_s_size_, _a_size_),
                 sensorium=SensoriumNetworkTemplate,
                 actorNN=ActionPolicyNetwork,
                 valueNN=ValueNetwork,
                 recover=None,
                 _every_=_every_,
                 max_episode_length=_max_len_
                 ):
        super().__init__(name=name, env_=env_,
                         latentDim=latentDim, space_size=space_size,
                         sensorium=sensorium, actorNN=actorNN, valueNN=valueNN,
                         _every_=_every_, max_episode_length=max_episode_length
                         )
        self.report_labels = ['Perf/Recent Rewards', 'Losses/Policy LLs',
                              'Losses/Critic Scores', 'Losses/Policy Entropies']
        self.last_good_model = recover

        self.lnPi_ts = {}
        self.GlnPi_ts = {}
        self.Advs_ts = {}
        self.AdvlnPi_ts = {}
        self.plosses = {}
        self.vlosses = {}
        self.p_trainers = {}
        self.v_trainers = {}
        self.entropies = {}
        self.policy_LLs = {}
        self.optimizers_p = {name_i: tf.train.AdamOptimizer(
            learning_rate=lrp) for name_i in self.actor_names}
        self.optimizers_v = {name_i: tf.train.AdamOptimizer(
            learning_rate=lrv) for name_i in self.actor_names}

        with tf.variable_scope(self.name):
            for name in self.actor_names:  # need to do better vectorization at some point...
                # Intermediate variables
                self.lnPi_ts[name] = -tf.nn.sigmoid_cross_entropy_with_logits(
                    logits=self.a_logits[name],  ## a_t|s_t
                    labels=self.actions_Ats[name]  ## use tf.one_hot for m-ary action spaces
                )  # bernoulli_LL(self.actions_Ats[name], self.a_probs[name])
                self.entropies[name] = tf.clip_by_value(
                    bernoulli_H(self.a_probs[name]),
                    _eps_, 100.
                )
                # using value-baselined returns instead of raw G_t
                self.Advs_ts[name] = self.returns_Gts[name] - self.values[name]
                self.AdvlnPi_ts[name] = tf.multiply(
                    tf.stop_gradient(self.Advs_ts[name]),
                    self.lnPi_ts[name]
                )  # AdvlnP
                # Losses
                self.plosses[name] = tf.reduce_sum(self.AdvlnPi_ts[name])
                self.vlosses[name] = 0.5 * tf.reduce_sum(
                    tf.square(self.returns_Gts[name] - tf.reshape(self.values[name], [-1]))
                )
                # Training ops
                self.p_trainers[name] = self.optimizers_p[name].minimize(
                    loss=(- alpha_p) * (self.plosses[name] + _ent_decay_ * self.entropies[name]),
                    var_list=[v for v in tf.trainable_variables()
                              if ((name in v.name) and ("actor" in v.name)) or "sensorium" in v.name]
                )
                self.v_trainers[name] = self.optimizers_v[name].minimize(
                    loss=alpha_v * self.vlosses[name],
                    var_list=[v for v in tf.trainable_variables()
                              if ((name in v.name) and ("critic" in v.name)) or "sensorium" in v.name]
                )

            # Aggregate Learning Stats
            self.policy_LLs = tf.stack(list(self.lnPi_ts.values()), axis=1)
            self.crits = tf.stack(list(self.values.values()), axis=1)
            self.entropy = tf.stack(list(self.entropies.values()), axis=1)
            self.summLLs = summarize(self.policy_LLs)
            self.summValues = summarize(self.crits)
            self.summEntropy = summarize(self.entropy)
        return

    def act(self, state, sess):
        """Returns vector of p-net sample action (in {0,1})"""
        assert self.actor_count in state.shape
        ind = 0
        probs = {}
        a_ts = {}
        for name in self.actor_names:
            st = state[ind]
            ind += 1
            probs[name] = sess.run(
                self.a_probs[name],
                {self.states_St[name]: np.expand_dims(st.flatten(), axis=0)}
            ).squeeze()
            a_ts[name] = 1 * (np.random.rand() < probs[name])  # scalar -> vector comparison
        return np.array(list(a_ts.values())).squeeze()

    def generate_summary(self, sess, act_dicts):
        # 'Perf/Recent Rewards', 'Losses/Policy LLs', 'Losses/Mean Value Fxn', 'Losses/Policy Entropies'
        return sess.run(
            [self.summLLs, self.summValues, self.summEntropy],
            feed_dict=act_dicts
        )

    def train_single(self, sess, agent_index, rollout, Qsa_i=None, gamma=0.95, bootstrap_value=0.0):
        # self.episode_buffer.append([s, acts_lst.squeeze(), r_lst, s1])
        discounted_returns = discount(np.hstack([rollout[2].ravel(), [bootstrap_value]]), gamma)
        discounted_returns = discounted_returns[:-1, None]
        feed_dict = {
            self.states_St[self.actor_names[agent_index]]: np.vstack(rollout[0].squeeze()),
            self.actions_Ats[self.actor_names[agent_index]]: np.vstack(rollout[1].squeeze()),
            self.returns_Gts[self.actor_names[agent_index]]: np.vstack(discounted_returns),
            self.states_St_prime[self.actor_names[agent_index]]: np.vstack(rollout[3].squeeze())
        }
        sess.run([
            self.p_trainers[self.actor_names[agent_index]],
            self.v_trainers[self.actor_names[agent_index]]
        ],
            feed_dict=feed_dict
        )
        return
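Each `act` method above draws a binary action per agent by comparing a uniform sample against the actor's sigmoid probability: `1 * (np.random.rand() < p)`. A self-contained sketch of that Bernoulli sampling step for a whole population at once (function and argument names here are illustrative, not the class API):

```python
import numpy as np


def sample_bernoulli_actions(probs, rng=None):
    """Draw one {0, 1} action per agent from per-agent probabilities."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(probs, dtype=float)
    # uniform draw in [0, 1); action is 1 with probability probs[i]
    return (rng.random(probs.shape) < probs).astype(int)
```

With `probs = [0.0, 1.0, 0.5]` the first agent always acts 0, the second always 1, and the third flips a fair coin — the same semantics as the per-agent loop in `act`.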
class EmbodiedAgent_IAC(EmbodiedAgent_Population):
    def __init__(self, name=agent_name,
                 env_=MinorityGame_Multiagent_env,
                 alpha_p=5.e-2, alpha_v=1e-1,
                 latentDim=_zdim_,
                 space_size=(_s_size_, _a_size_),
                 sensorium=SensoriumNetworkTemplate,
                 actorNN=ActionPolicyNetwork,
                 valueNN=ValueNetwork,
                 recover=None,
                 _every_=_every_,
                 max_episode_length=_max_len_,
                 CyclicSchedule=None
                 ):
        super().__init__(name=name, env_=env_,
                         latentDim=latentDim, space_size=space_size,
                         sensorium=sensorium, actorNN=actorNN, valueNN=valueNN,
                         _every_=_every_, max_episode_length=max_episode_length
                         )
        self.report_labels = ['Perf/Recent Rewards', 'Losses/Policy LLs',
                              'Losses/Critic Scores', 'Losses/Policy Entropies']
        self.last_good_model = recover

        self.lnPi_ts = {}
        self.GlnPi_ts = {}
        self.Advs_ts = {}
        self.AdvlnPi_ts = {}
        # self.delta_Advs_t = {}
        self.plosses = {}
        self.vlosses = {}
        self.p_trainers = {}
        self.v_trainers = {}
        self.entropies = {}
        self.policy_LLs = {}
        self.optimizers_p = {name_i: tf.train.AdamOptimizer(
            learning_rate=lrp) for name_i in self.actor_names}
        self.optimizers_v = {name_i: tf.train.AdamOptimizer(
            learning_rate=lrv) for name_i in self.actor_names}

        ## Setup Cyclic Learning Rate Schedule
        self.init_spd = alpha_p
        if CyclicSchedule is None:
            self.sched_type, self.sched_halflife = "constant", 1
        else:
            self.sched_type, self.sched_halflife = CyclicSchedule
        self.alpha_p_schedule = cyc_learning_spd(self.sched_type, self.init_spd, self.sched_halflife)

        with tf.variable_scope(self.name):
            self.alpha_p_sched_t = tf.placeholder(
                shape=None, dtype=tf.float32,
                name='cyc_alpha_rate_p'
            )
            for name in self.actor_names:  # need to do better vectorization at some point...
                # Intermediate variables
                self.lnPi_ts[name] = -tf.nn.sigmoid_cross_entropy_with_logits(
                    logits=self.a_logits[name],  ## a_t|s_t
                    labels=self.actions_Ats[name]  ## use tf.one_hot for m-ary action spaces
                )  # bernoulli_LL(self.actions_Ats[name], self.a_probs[name])
                self.entropies[name] = tf.clip_by_value(
                    bernoulli_H(self.a_probs[name]),
                    _eps_, 10.
                )
                # using value-baselined returns instead of raw G_t
                self.Advs_ts[name] = self.returns_Gts[name] - self.values[name]
                self.AdvlnPi_ts[name] = tf.multiply(
                    tf.stop_gradient(self.Advs_ts[name]),
                    self.lnPi_ts[name]
                )  # AdvlnP
                # Losses
                self.plosses[name] = tf.reduce_mean(self.AdvlnPi_ts[name])
                self.vlosses[name] = 0.5 * tf.reduce_mean(
                    tf.square(self.returns_Gts[name] - self.values[name])
                )  # squared TD error loss version
                # tf.reduce_sum(self.delta_Advs_t[name] * self.values[name])  # semi-value loss version
                # Training ops
                self.p_trainers[name] = self.optimizers_p[name].minimize(
                    loss=(- self.alpha_p_sched_t) * (self.plosses[name] + _ent_decay_ * self.entropies[name]),
                    var_list=[v for v in tf.trainable_variables()
                              if ((name in v.name) and ("actor" in v.name)) or "sensorium" in v.name]
                )
                self.v_trainers[name] = self.optimizers_v[name].minimize(
                    loss=alpha_v * self.vlosses[name],
                    var_list=[v for v in tf.trainable_variables()
                              if ((name in v.name) and ("critic" in v.name)) or "sensorium" in v.name]
                )

            # Aggregate Learning Stats
            self.policy_LLs = tf.stack(list(self.lnPi_ts.values()), axis=1)
            self.crits = tf.stack(list(self.values.values()), axis=1)
            self.entropy = tf.stack(list(self.entropies.values()), axis=1)
            self.summLLs = summarize(self.policy_LLs)
            self.summValues = summarize(self.crits)
            self.summEntropy = summarize(self.entropy)
        return

    def act(self, state, sess):
        """Returns vector of p-net sample action (in {0,1})"""
        assert self.actor_count in state.shape
        ind = 0
        probs = {}
        a_ts = {}
        for name in self.actor_names:
            st = state[ind]
            ind += 1
            probs[name] = sess.run(
                self.a_probs[name],
                {self.states_St[name]: np.expand_dims(st.flatten(), axis=0)}
            ).squeeze()
            a_ts[name] = 1 * (np.random.rand() < probs[name])  # scalar -> vector comparison
        return np.array(list(a_ts.values())).squeeze()

    def generate_summary(self, sess, act_dicts):
        # 'Perf/Recent Rewards', 'Losses/Policy LLs', 'Losses/Mean Value Fxn', 'Losses/Policy Entropies'
        return sess.run(
            [self.summLLs, self.summValues, self.summEntropy],
            feed_dict=act_dicts
        )

    def train_single(self, sess, agent_index, rollout, Qsa_i=None, gamma=0.95, bootstrap_value=0.0):
        # Rollout Structure: [S0, A, R, S1]
        states = np.vstack(rollout[0].squeeze())
        rewards = rollout[2].ravel()
        vals = sess.run(
            self.values[self.actor_names[agent_index]],
            feed_dict={
                self.states_St[self.actor_names[agent_index]]: states
            }
        ).ravel()  # v(s)
        # generate TD(1) target of state value: G_t = r_t + gamma*v(s_{t+1})
        Gt_TD = np.squeeze(
            calc_V_TD_target(rewards, vals=vals,
                             gamma=gamma, bootstrap_value=bootstrap_value)
        )  # discounted total returns after t based on current v-net
        feed_dict = {
            self.states_St[self.actor_names[agent_index]]: states,
            self.actions_Ats[self.actor_names[agent_index]]: np.vstack(rollout[1].squeeze()),
            self.returns_Gts[self.actor_names[agent_index]]: np.vstack(Gt_TD),
            self.alpha_p_sched_t: self.alpha_p_schedule[self.total_epoch_count % _sched_win_]
        }
        sess.run([
            self.p_trainers[self.actor_names[agent_index]],
            self.v_trainers[self.actor_names[agent_index]]
        ],
            feed_dict=feed_dict
        )
        return

    def pretrainCritics(self, sess):
        assert all(np.diff([len(buf) for _, buf in self.episode_buffer.items()]) == 0), \
            "Rollout is not the correct shape"
        states = np.stack(self.episode_buffer['states'])
        rewards = np.stack(self.episode_buffer['rewards'])
        learners = range(self.actor_count)
        vls = np.zeros_like(learners, dtype=float)
        for agent_idx in learners:
            rwds = rewards[:, agent_idx, ...].ravel()
            vals = sess.run(
                self.values[self.actor_names[agent_idx]],
                feed_dict={
                    self.states_St[self.actor_names[agent_idx]]: (states[:, agent_idx, ...])
                }).ravel()  # v(s)
            # generate TD(1) target of state value: G_t = r_t + gamma*v(s_{t+1})
            Gt_TD = np.squeeze(calc_V_TD_target(rwds, vals=vals))
            feed_dict = {
                self.states_St[self.actor_names[agent_idx]]: (states[:, agent_idx, ...]),
                self.returns_Gts[self.actor_names[agent_idx]]: np.vstack(Gt_TD)
            }
            sess.run(self.v_trainers[self.actor_names[agent_idx]], feed_dict=feed_dict)
            vls[agent_idx] = sess.run(self.vlosses[self.actor_names[agent_idx]], feed_dict=feed_dict)
        return vls
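`calc_V_TD_target`, used in `train_single` and `pretrainCritics` above (it arrives via the wildcard import from `embodied_misc`), builds the bootstrapped one-step target r_t + gamma*v(s_{t+1}). A plain-numpy sketch consistent with the call sites above — the repo's exact implementation may differ:

```python
import numpy as np


def calc_V_TD_target(rewards, vals, gamma=0.95, bootstrap_value=0.0):
    """One-step TD targets: target[t] = r[t] + gamma * v(s_{t+1})."""
    rewards = np.asarray(rewards, dtype=float)
    vals = np.asarray(vals, dtype=float)
    # v(s_{t+1}) for each step; the final step bootstraps off the supplied value
    next_vals = np.append(vals[1:], bootstrap_value)
    return rewards + gamma * next_vals
```

Regressing the critic toward these targets (the `vlosses` above) is the squared TD error; the actor then uses `G_t - v(s_t)` as a lower-variance advantage signal.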
# File: gino_factory/__init__.py (repo: Basalex/gino_factory, license: MIT)
from .factory import GinoFactory
# File: spoopy/tools/file_utils/__init__.py (repo: rodrigobressan/PADify, license: MIT)
from tools.file_utils import file_helper
# File: docs/source/examples/FB2.2/get_object_store_access_policies_rules.py (repo: Flav-STOR-WL/py-pure-client, license: BSD-2-Clause)
# list all object store access policy rules
res = client.get_object_store_access_policies_rules()
print(res)
if type(res) == pypureclient.responses.ValidResponse:
    print(list(res.items))

# list rules for specific policy
res = client.get_object_store_access_policies_rules(policy_names=["pure:policy/full-access"])
print(res)
if type(res) == pypureclient.responses.ValidResponse:
    print(list(res.items))

# list rules for specific policy by id
res = client.get_object_store_access_policies_rules(policy_ids=["10314f42-020d-7080-8013-000ddt400012"])
print(res)
if type(res) == pypureclient.responses.ValidResponse:
    print(list(res.items))

# list specific rule
res = client.get_object_store_access_policies_rules(policy_names=["pure:policy/full-access"], names=["myrule"])
print(res)
if type(res) == pypureclient.responses.ValidResponse:
    print(list(res.items))

# Other valid fields: continuation_token, filter, limit, offset, sort
# See section "Common Fields" for examples
# File: test/configs/__init__.py (repo: rgammans/nativeconfig, license: MIT)
from abc import ABC, abstractmethod
import json
import os
from unittest.mock import MagicMock

from nativeconfig.options import StringOption, IntOption, ArrayOption, DictOption, ValueSource
from nativeconfig.exceptions import DeserializationError, ValidationError


class ConfigMixin(ABC):
    CONFIG_TYPE = None

    def tearDown(self):
        os.environ.pop('FIRST_NAME', None)
        super().tearDown()

    def test_exception_is_raised_for_duplicate_options(self):
        with self.assertRaises(AttributeError):
            class MyConfig(self.CONFIG_TYPE):
                first_name = StringOption('Name')
                last_name = StringOption('Name')

            MyConfig.get_instance()

    def test_default_values_are_not_written_to_config(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        MyConfig.get_instance().del_value_for_option_name('FirstName')
        self.assertEqual(MyConfig.get_instance().get_value('FirstName'), None)

    def test_get_value_for_option_name_returns_python(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        self.assertEqual(c.get_value_for_option_name('FirstName'), 'Ilya')

    def test_get_raw_value_for_option_name_returns_raw(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        self.assertEqual(c.option_for_name('FirstName').deserialize(c.get_raw_value_for_option_name('FirstName')), 'Ilya')

    def test_get_json_value_for_option_name_returns_json(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        self.assertEqual(json.loads(c.get_json_value_for_option_name('FirstName')), 'Ilya')

    def test_get_value_for_option_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            self.assertEqual(c.get_value_for_option_name('LastName'), None)

    def test_get_raw_value_for_option_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.get_raw_value_for_option_name('LastName')

    def test_get_json_value_for_option_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.get_json_value_for_option_name('LastName')

    def test_set_value_for_option_name_accepts_python(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        c.set_value_for_option_name('FirstName', 'Artem')
        self.assertEqual(c.first_name, 'Artem')

    def test_set_raw_value_for_option_name_accepts_raw(self):
        class MyConfig(self.CONFIG_TYPE):
            age = IntOption('Age', default=42)

        c = MyConfig.get_instance()
        c.set_raw_value_for_option_name('Age', c.option_for_name('Age').serialize(9000))
        self.assertEqual(c.age, 9000)

        with self.assertRaises(DeserializationError):
            c.set_raw_value_for_option_name('Age', 'Artem')

    def test_set_json_value_for_option_name_accepts_json(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        c.set_json_value_for_option_name('FirstName', json.dumps('Artem'))
        self.assertEqual(c.first_name, 'Artem')

        with self.assertRaises(DeserializationError):
            c.set_json_value_for_option_name('FirstName', 'Artem')

    def test_set_None_value_for_option_name_deletes_value(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        c.first_name = 'Artem'
        self.assertEqual(c.get_value_for_option_name('FirstName'), 'Artem')
        c.set_value_for_option_name('FirstName', None)
        self.assertEqual(c.get_value('FirstName'), None)

    def test_set_null_json_value_for_option_name_deletes_value(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        c.first_name = 'Artem'
        self.assertEqual(c.get_json_value_for_option_name('FirstName'), '"Artem"')
        c.set_json_value_for_option_name('FirstName', json.dumps(None))
        self.assertEqual(c.get_value('FirstName'), None)

    def test_set_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_value_for_option_name('LastName', 'Kulakov')

    def test_set_raw_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_raw_value_for_option_name('LastName', 'Kulakov')

    def test_set_json_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_json_value_for_option_name('LastName', '"Kulakov"')

    def test_set_one_shot_value_for_option_name_accepts_python(self):
        class MyConfig(self.CONFIG_TYPE):
            age = IntOption('Age', default=42)

        c = MyConfig.get_instance()
        c.set_one_shot_value_for_option_name('Age', 9000)
        self.assertEqual(c.age, 9000)

        with self.assertRaises(ValidationError):
            c.set_one_shot_value_for_option_name('Age', '9000')

    def test_set_one_shot_raw_value_for_option_name_accepts_raw(self):
        class MyConfig(self.CONFIG_TYPE):
            age = IntOption('Age', default=42)

        c = MyConfig.get_instance()
        c.set_one_shot_raw_value_for_option_name('Age', c.option_for_name('Age').serialize(9000))
        self.assertEqual(c.age, 9000)

        with self.assertRaises(DeserializationError):
            c.set_one_shot_raw_value_for_option_name('Age', 'fortytwo')

    def test_set_one_shot_json_value_for_option_name_accepts_json(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        c.set_one_shot_json_value_for_option_name('FirstName', json.dumps('Artem'))
        self.assertEqual(c.first_name, 'Artem')

        with self.assertRaises(DeserializationError):
            c.set_one_shot_json_value_for_option_name('FirstName', 'Artem')

    def test_set_one_shot_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_one_shot_value_for_option_name('LastName', 'Kulakov')

    def test_set_one_shot_raw_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_one_shot_raw_value_for_option_name('LastName', c.option_for_name('FirstName').serialize('Kulakov'))

    def test_set_one_shot_json_value_for_option_name_raises_key_error_if_option_not_found(self):
        class MyConfig(self.CONFIG_TYPE):
            first_name = StringOption('FirstName', default='Ilya')

        c = MyConfig.get_instance()
        with self.assertRaises(KeyError):
            c.set_one_shot_json_value_for_option_name('LastName', '"Kulakov"')
def test_one_shot_value_overrides_config(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_value_for_option_name('FirstName', 'Artem')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
c.set_one_shot_value_for_option_name('FirstName', 'Ivan')
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
def test_one_shot_raw_value_overrides_config(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
c.set_one_shot_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Ivan'))
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
def test_one_shot_json_value_overrides_config(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_json_value_for_option_name('FirstName', json.dumps('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
c.set_one_shot_json_value_for_option_name('FirstName', json.dumps('Ivan'))
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
def test_one_shot_value_does_not_override_env(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', env_name='FIRST_NAME')
c = MyConfig.get_instance()
os.environ['FIRST_NAME'] = json.dumps('Ivan')
c.set_one_shot_value_for_option_name('FirstName', 'Artem')
self.assertEqual(c.first_name, 'Ivan')
def test_one_shot_raw_value_does_not_override_env(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', env_name='FIRST_NAME')
c = MyConfig.get_instance()
os.environ['FIRST_NAME'] = json.dumps('Ivan')
c.set_one_shot_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Artem'))
self.assertEqual(c.first_name, 'Ivan')
def test_one_shot_json_value_does_not_override_env(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', env_name='FIRST_NAME')
c = MyConfig.get_instance()
os.environ['FIRST_NAME'] = json.dumps('Ivan')
c.set_one_shot_json_value_for_option_name('FirstName', json.dumps('Artem'))
self.assertEqual(c.first_name, 'Ivan')
def test_one_shot_value_reset_by_set(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_value_for_option_name('FirstName', 'Artem')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
c.first_name = 'Ivan'
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_raw_value_reset_by_set(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
c.first_name = 'Ivan'
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_json_value_reset_by_set(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_json_value_for_option_name('FirstName', json.dumps('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
c.first_name = 'Ivan'
self.assertEqual(c.first_name, 'Ivan')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_value_reset_by_del(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_value_for_option_name('FirstName', 'Artem')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
del c.first_name
self.assertEqual(c.first_name, 'Ilya')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_raw_value_reset_by_del(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
del c.first_name
self.assertEqual(c.first_name, 'Ilya')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_json_value_reset_by_del(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.set_one_shot_json_value_for_option_name('FirstName', json.dumps('Artem'))
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, True)
self.assertEqual(c.first_name, 'Artem')
del c.first_name
self.assertEqual(c.first_name, 'Ilya')
self.assertEqual(c.option_for_name('FirstName')._is_one_shot_value_set, False)
def test_one_shot_value_set_to_None_forces_default(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.first_name = 'Artem'
self.assertEqual(c.first_name, 'Artem')
c.set_one_shot_value_for_option_name('FirstName', None)
self.assertEqual(c.first_name, 'Ilya')
def test_one_shot_json_value_set_to_null_forces_default(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.first_name = 'Artem'
self.assertEqual(c.first_name, 'Artem')
c.set_one_shot_json_value_for_option_name('FirstName', json.dumps(None))
self.assertEqual(c.first_name, 'Ilya')
def test_del_value_for_option_name_deletes_value(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.first_name = 'Ivan'
c.del_value_for_option_name('FirstName')
self.assertEqual(c.get_value('FirstName'), None)
def test_del_value_for_option_name_raises_key_error_if_option_not_found(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
with self.assertRaises(KeyError):
c.del_value_for_option_name('LastName')
def test_validate_value_for_option_name_accepts_python(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.validate_value_for_option_name('FirstName', 'Artem')
def test_validate_raw_value_for_option_name_accepts_raw(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.validate_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Artem'))
def test_validate_json_value_for_option_name_accepts_json(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
c.validate_json_value_for_option_name('FirstName', json.dumps('Artem'))
def test_validate_value_for_option_name_raises_validation_error_for_invalid_value(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
with self.assertRaises(ValidationError):
c.validate_value_for_option_name('FirstName', 42)
def test_validate_raw_value_for_option_name_raises_validation_error_for_invalid_value(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', choices=['Ilya', 'Artem'])
c = MyConfig.get_instance()
with self.assertRaises(ValidationError):
c.validate_raw_value_for_option_name('FirstName', c.option_for_name('FirstName').serialize('Ivan'))
def test_validate_json_value_for_option_name_raises_validation_error_for_invalid_value(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', choices=['Ilya', 'Artem'])
c = MyConfig.get_instance()
with self.assertRaises(ValidationError):
c.validate_json_value_for_option_name('FirstName', json.dumps('Ivan'))
def test_validate_raw_value_for_option_name_raises_deserialization_error_for_malformed_raw(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig.get_instance()
with self.assertRaises(DeserializationError):
c.validate_raw_value_for_option_name('Age', 'fortytwo')
def test_validate_json_value_for_option_name_raises_deserialization_error_for_malformed_json(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig.get_instance()
with self.assertRaises(DeserializationError):
c.validate_json_value_for_option_name('Age', '"fortytwo"')
def test_items_enumerates_values(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig.get_instance()
for option_name, (python_value, value_source) in c.python_items():
if option_name == 'Age':
self.assertEqual((option_name, (python_value, value_source)), ('Age', (42, ValueSource.default)))
def test_raw_items_enumerates_raw(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig.get_instance()
for option_name, (raw_value, value_source) in c.raw_items():
if option_name == 'Age':
self.assertEqual((option_name, (raw_value, value_source)), ('Age', ('42', ValueSource.default)))
def test_json_items_enumerates_json(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig.get_instance()
for option_name, (json_value, value_source) in c.json_items():
if option_name == 'Age':
self.assertEqual((option_name, (json_value, value_source)), ('Age', ('42', ValueSource.default)))
def test_snapshot_returns_json_dict(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
last_name = StringOption('LastName', default='Kulakov')
c = MyConfig.get_instance()
self.assertEqual(json.loads(c.snapshot()), {'ConfigVersion': '1.0', 'FirstName': 'Ilya', 'LastName': 'Kulakov'})
self.assertEqual(json.loads(c.snapshot(['FirstName'])), {'FirstName': 'Ilya'})
def test_option_for_name_returns_property(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
self.assertEqual(c.option_for_name('FirstName'), getattr(MyConfig, 'first_name'))
def test_option_for_name_returns_None_if_option_not_found(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = MyConfig.get_instance()
self.assertEqual(c.option_for_name('LastName'), None)
def test_resolve_value_is_called_to_resolve_broken_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.resolve_value = MagicMock()
c.set_value('LuckyNumber', 'NotANumber')
c.lucky_number
self.assertEqual(c.resolve_value.call_count, 1)
self.assertIsInstance(c.resolve_value.call_args[0][0][1], DeserializationError)
self.assertEqual(c.resolve_value.call_args[0][1], 'LuckyNumber')
self.assertEqual(c.resolve_value.call_args[0][2], 'NotANumber')
def test_get_value_returns_raw_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.lucky_number = 1
self.assertEqual(c.get_value('LuckyNumber'), '1')
def test_get_value_returns_None_if_option_does_not_exist(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.lucky_number = 1
self.assertEqual(c.get_value('UnluckyNumber'), None)
def test_set_value_accepts_raw_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.set_value('LuckyNumber', '2')
self.assertEqual(c.lucky_number, 2)
def test_set_None_value_deletes_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.lucky_number = 10
self.assertEqual(c.get_value('LuckyNumber'), '10')
c.set_value('LuckyNumber', None)
self.assertEqual(c.get_value('LuckyNumber'), None)
def test_del_value_deletes_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber')
c = MyConfig.get_instance()
c.lucky_number = 1
c.del_value('LuckyNumber')
self.assertEqual(c.get_value('LuckyNumber'), None)
def test_get_array_value_returns_list(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.lucky_numbers = [7, 42]
self.assertIsInstance(c.get_array_value('LuckyNumber'), list)
def test_get_array_value_returns_None_if_option_does_not_exist(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
self.assertEqual(c.get_array_value('FirstName'), None)
def test_set_array_value_accepts_iterable(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.set_array_value('LuckyNumber', ['7', '42'])
self.assertEqual(c.lucky_numbers, [7, 42])
c.set_array_value('LuckyNumber', ('7', '42'))
self.assertEqual(c.lucky_numbers, [7, 42])
def test_set_None_array_value_deletes_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.lucky_numbers = [7, 42]
self.assertEqual(c.lucky_numbers, [7, 42])
c.set_array_value('LuckyNumber', None)
self.assertEqual(c.lucky_numbers, None)
def test_get_dict_value_returns_dict(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.lucky_numbers = {'a': 1}
self.assertIsInstance(c.get_dict_value('LuckyNumber'), dict)
def test_remove_fields_from_dict(self):
class MyConfig(self.CONFIG_TYPE):
test_dict = DictOption('TestDict', value_option=StringOption('_'))
c = MyConfig.get_instance()
c.test_dict = {"key1": "value1", "key2": "value2"}
c.test_dict = {"key2": "value2"}
self.assertEqual(c.test_dict, {"key2": "value2"})
def test_get_dict_value_returns_None_if_option_does_not_exist(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
self.assertEqual(c.get_dict_value('FirstName'), None)
def test_set_dict_value_accepts_dict(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.set_dict_value('LuckyNumber', {'a': '1'})
self.assertEqual(c.lucky_numbers, {'a': 1})
def test_set_None_dict_value_deletes_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'))
c = MyConfig.get_instance()
c.set_dict_value('LuckyNumber', {'a': '1'})
self.assertEqual(c.lucky_numbers, {'a': 1})
c.set_dict_value('LuckyNumber', None)
self.assertEqual(c.lucky_numbers, None)
def test_default_value_is_used_when_no_value_in_config(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber', default=42)
c = MyConfig.get_instance()
c.del_value_for_option_name('LuckyNumber')
self.assertEqual(c.lucky_number, 42)
def test_overriding_base_option_moves_it_to_the_end(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber', default=42)
first_name = StringOption('FirstName')
last_name = StringOption('LastName')
class SubMyConfig(MyConfig):
lucky_number = IntOption('LuckyNumber', default=9000)
old_index = 0
for i, option in enumerate(MyConfig._ordered_options):
if option._name == 'LuckyNumber':
old_index = i
break
new_index = 0
for i, option in enumerate(SubMyConfig._ordered_options):
if option._name == 'LuckyNumber':
new_index = i
break
self.assertNotEqual(old_index, new_index)
def test_custom_properties_are_allowed(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber', default=42)
@property
def custom_property(self):
return '9000'
c = MyConfig.get_instance()
def test_ordered_options_supports_multiple_inheritance(self):
class MyConfigMixin1:
first_name = StringOption('FirstName', default='Ilya')
class MyConfigMixin2:
last_name = StringOption('LastName', default='Kulakov')
class MyConfig(self.CONFIG_TYPE, MyConfigMixin1, MyConfigMixin2):
age = IntOption('Age', default=42)
self.assertIn(MyConfigMixin1.first_name._name, [o._name for o in MyConfig._ordered_options])
self.assertIn(MyConfigMixin2.last_name, MyConfig._ordered_options)
self.assertIn(MyConfig.age, MyConfig._ordered_options)
def test_overriding_option_type_raises_warn_if_not_subclass(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
with self.assertWarns(UserWarning):
class MyConfig2(MyConfig):
first_name = IntOption('FirstName', default=42)
def test_reset_deletes_from_config(self):
class MyConfig(self.CONFIG_TYPE):
lucky_number = IntOption('LuckyNumber', default=42)
c = MyConfig.get_instance()
c.lucky_number = 9000
self.assertEqual(c.get_value('LuckyNumber'), '9000')
c.reset()
self.assertEqual(c.get_value('LuckyNumber'), None)
def test_get_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', allow_cache=True)
c = MyConfig.get_instance()
c.get_value_cache_free = MagicMock(return_value='Ilya')
c.first_name
c.first_name
c.first_name
self.assertLessEqual(c.get_value_cache_free.call_count, 1)
def test_set_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', allow_cache=True)
c = MyConfig.get_instance()
c.set_value_cache_free = MagicMock(return_value='Ilya')
c.first_name = 'Ilya'
c.first_name = 'Ilya'
c.first_name = 'Ilya'
self.assertLessEqual(c.set_value_cache_free.call_count, 1)
def test_get_array_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'), allow_cache=True)
c = MyConfig.get_instance()
c.get_array_value_cache_free = MagicMock(return_value=[1, 2, 3])
c.lucky_numbers
c.lucky_numbers
c.lucky_numbers
self.assertLessEqual(c.get_array_value_cache_free.call_count, 1)
def test_set_array_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'), allow_cache=True)
c = MyConfig.get_instance()
c.set_array_value_cache_free = MagicMock(return_value=[1, 2, 3])
c.lucky_numbers = [1, 2, 3]
c.lucky_numbers = [1, 2, 3]
c.lucky_numbers = [1, 2, 3]
self.assertLessEqual(c.set_array_value_cache_free.call_count, 1)
def test_get_dict_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'), allow_cache=True)
c = MyConfig.get_instance()
c.get_dict_value_cache_free = MagicMock(return_value={'a': 1, 'b': 2, 'c': 3})
c.lucky_numbers
c.lucky_numbers
c.lucky_numbers
self.assertLessEqual(c.get_dict_value_cache_free.call_count, 1)
def test_set_dict_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'), allow_cache=True)
c = MyConfig.get_instance()
c.set_dict_value_cache_free = MagicMock(return_value={'a': 1, 'b': 2, 'c': 3})
c.lucky_numbers = {'a': 1, 'b': 2, 'c': 3}
c.lucky_numbers = {'a': 1, 'b': 2, 'c': 3}
c.lucky_numbers = {'a': 1, 'b': 2, 'c': 3}
self.assertLessEqual(c.set_dict_value_cache_free.call_count, 1)
def test_del_value_is_cached(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'), allow_cache=True)
c = MyConfig.get_instance()
c.del_value_cache_free = MagicMock()
del c.lucky_numbers
del c.lucky_numbers
del c.lucky_numbers
self.assertLessEqual(c.del_value_cache_free.call_count, 1)
def test_set_value_writes_new_value(self):
class MyConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=True)
c = MyConfig.get_instance()
c.first_name = 'Artem'
c.first_name = 'Konstantin'
c.first_name = 'Kirill'
self.assertEqual(c.get_value_cache_free('FirstName'), 'Kirill')
def test_set_array_value_writes_new_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = ArrayOption('LuckyNumber', IntOption('_'), default=(1, 2, 3), allow_cache=True)
c = MyConfig.get_instance()
c.lucky_numbers = [4, 5, 6]
c.lucky_numbers = [7, 8, 9]
c.lucky_numbers = [10, 11, 12]
self.assertEqual(c.get_array_value_cache_free('LuckyNumber'), ['10', '11', '12'])
def test_set_dict_value_writes_new_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'), default={'a': 1, 'b': 2, 'c': 3}, allow_cache=True)
c = MyConfig.get_instance()
c.lucky_numbers = {'a': 4, 'b': 5, 'c': 6}
c.lucky_numbers = {'a': 7, 'b': 8, 'c': 9}
c.lucky_numbers = {'a': 10, 'b': 11, 'c': 12}
self.assertEqual(c.get_dict_value_cache_free('LuckyNumber'), {'a': '10', 'b': '11', 'c': '12'})
def test_del_value_writes_new_value(self):
class MyConfig(self.CONFIG_TYPE):
lucky_numbers = DictOption('LuckyNumber', IntOption('_'), default={'a': 1, 'b': 2, 'c': 3}, allow_cache=True)
c = MyConfig.get_instance()
c.lucky_numbers = {'a': 4, 'b': 5, 'c': 6}
del c.lucky_numbers
c.lucky_numbers = {'a': 10, 'b': 11, 'c': 12}
del c.lucky_numbers
self.assertEqual(c.get_dict_value_cache_free('LuckyNumber'), None)
def test_allow_cache(self):
class AllowCacheConfig(self.CONFIG_TYPE):
ALLOW_CACHE = True
first_name = StringOption('FirstName', default='Ilya')
class DisallowCacheConfig(self.CONFIG_TYPE):
ALLOW_CACHE = False
first_name = StringOption('FirstName', default='Ilya')
class DefaultCacheConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya')
c = AllowCacheConfig.get_instance()
c.get_value = MagicMock(return_value='Ilya')
c.first_name
c.get_value.assert_called_with('FirstName', allow_cache=True)
c = DisallowCacheConfig.get_instance()
c.get_value = MagicMock(return_value='Ilya')
c.first_name
c.get_value.assert_called_with('FirstName', allow_cache=False)
c = DefaultCacheConfig.get_instance()
c.get_value = MagicMock(return_value='Ilya')
c.first_name
c.get_value.assert_called_with('FirstName', allow_cache=False)
def test_per_option_allow_cache(self):
class AllowCacheConfig(self.CONFIG_TYPE):
ALLOW_CACHE = True
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
class DisallowCacheConfig(self.CONFIG_TYPE):
ALLOW_CACHE = False
first_name = StringOption('FirstName', default='Ilya', allow_cache=True)
c = AllowCacheConfig.get_instance()
c.get_value = MagicMock(return_value='Ilya')
c.first_name
c.get_value.assert_called_with('FirstName', allow_cache=False)
c = DisallowCacheConfig.get_instance()
c.get_value = MagicMock(return_value='Ilya')
c.first_name
c.get_value.assert_called_with('FirstName', allow_cache=True)
def test_magic_len(self):
class ZeroItemsConfig(self.CONFIG_TYPE):
pass
class OneItemConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
class TwoItemsConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
last_name = StringOption('LastName', default='Kulakov', allow_cache=False)
self.assertEqual(len(ZeroItemsConfig.get_instance()), len(ZeroItemsConfig._ordered_options))
self.assertEqual(len(OneItemConfig.get_instance()), len(OneItemConfig._ordered_options))
self.assertEqual(len(TwoItemsConfig.get_instance()), len(TwoItemsConfig._ordered_options))
def test_magic_getitem(self):
class OneItemConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
c = OneItemConfig.get_instance()
self.assertEqual(c['FirstName'], c.first_name)
with self.assertRaises(KeyError):
c['SecondName']
def test_magic_setitem(self):
class OneItemConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
c = OneItemConfig.get_instance()
c['FirstName'] = 'Tamara'
self.assertEqual(c.first_name, 'Tamara')
with self.assertRaises(KeyError):
c['LastName'] = 'Fedorova'
def test_magic_delitem(self):
class OneItemConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
c = OneItemConfig.get_instance()
c['FirstName'] = 'Tamara'
self.assertEqual(c.first_name, 'Tamara')
del c['FirstName']
self.assertEqual(c.first_name, 'Ilya')
def test_magic_iter(self):
class OneItemConfig(self.CONFIG_TYPE):
first_name = StringOption('FirstName', default='Ilya', allow_cache=False)
c = OneItemConfig.get_instance()
self.assertSetEqual(set(c.keys()), set(iter(c)))
def test_reset_clears_cache(self):
class MyConfig(self.CONFIG_TYPE):
age = IntOption('Age', default=42)
c = MyConfig()
c.age = 99
self.assertEqual(c._cache['Age'], '99')
c.reset()
self.assertEqual(c._cache['Age'], None)
def test_option_mixins(self):
class MyConfigMixin:
age = IntOption('Age', default=42)
class MyConfig(MyConfigMixin, self.CONFIG_TYPE):
pass
c = MyConfig()
self.assertIn('Age', c)
@abstractmethod
def test_config_is_created_if_not_found(self):
pass
# test/conftest.py (RZiane/pympi)
import pathlib
import pytest
@pytest.fixture
def test_dir():
return pathlib.Path(__file__).parent
# infoblox_netmri/api/broker/v2_2_0/config_template_broker.py (NastyaArslanova/infoblox-netmri)
from ..broker import Broker
class ConfigTemplateBroker(Broker):
controller = "config_templates"
def index(self, **kwargs):
"""Lists the available config templates. Any of the inputs listed may be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.1
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template.
:type id: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template.
:type id: Array of Integer
| ``api version min:`` 2.1
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param name: The name of the config template.
:type name: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param name: The name of the config template.
:type name: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of config template methods. The listed methods will be called on each config template returned and included in the output. Available methods are: template_text.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, name, vendor, model, version, device_type, description, created_by, updated_by, created_at, updated_at, template_type, risk_level.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each ConfigTemplate. Valid values are id, name, vendor, model, version, device_type, description, created_by, updated_by, created_at, updated_at, template_type, risk_level. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return config_templates: An array of the ConfigTemplate objects that match the specified input criteria.
:rtype config_templates: Array of ConfigTemplate
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
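The `start`/`limit` paging semantics described in the docstring above can be sketched in plain Python. The slicing below is an assumption derived from the docstring's own example ("100 records, :limit of 10 and :start of 10 → records 10-19"), not NetMRI's actual server-side implementation:

```python
# Assumed semantics: return the limit-sized page that contains the `start` record.
records = list(range(100))             # pretend record numbers 0..99
start, limit = 10, 10
page_index = start // limit            # index of the page containing `start`
page = records[page_index * limit:(page_index + 1) * limit]
print(page[0], page[-1])               # 10 19, i.e. records 10-19
```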
def show(self, **kwargs):
"""This method will return the specified configuration template
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template to show
:type id: Integer
**Outputs**
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
def destroy(self, **kwargs):
"""Deletes the specified config template from NetMRI.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template.
:type id: Integer
**Outputs**
"""
return self.api_request(self._get_method_fullname("destroy"), kwargs)
def create(self, **kwargs):
"""Creates a new config template.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param created_at: The date and time the config template was created.
:type created_at: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param created_by: The user that created the config template.
:type created_by: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param description: A description for the config template.
:type description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param device_type: The device type associated with the config template.
:type device_type: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param model: The device model name associated with the config template.
:type model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param name: The name of the config template.
:type name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param risk_level: The user-specified risk level for the template. Possible levels are 1 (low), 2 (medium), and 3 (high). To run higher risk templates, higher privileges are required.
:type risk_level: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param template_type: The template type denotes whether devices or interfaces should be specified when the template job is scheduled or run. The value could be either 'Device' or 'Interface'.
:type template_type: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_at: The date and time the config template was updated.
:type updated_at: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_by: The user that last updated the config template.
:type updated_by: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param vendor: The device vendor name associated with the config template.
:type vendor: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param version: The device OS version associated with the config template.
:type version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param overwrite_ind: If set to 1, overwrite existing template file. If set to 0, do not overwrite existing template file
:type overwrite_ind: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param template_text: Template text.
:type template_text: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return id: The id of the newly created config template.
:rtype id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return model: The class name of the newly created config template.
:rtype model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return uri: A URI that may be used to retrieve the newly created config template.
:rtype uri: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return config_template: The newly created config template.
:rtype config_template: ConfigTemplate
"""
return self.api_request(self._get_method_fullname("create"), kwargs)
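A minimal payload sketch for `create()`. The broker object and the concrete values below are assumptions for illustration; only the field names, the required flags, and the allowed values come from the docstring above:

```python
# Hypothetical create() payload; `broker` would be an authenticated
# ConfigTemplate broker instance (not constructed here).
kwargs = {
    "name": "set-banner",          # required
    "risk_level": 1,               # 1 (low), 2 (medium) or 3 (high)
    "template_type": "Device",     # 'Device' or 'Interface'
    "template_text": "banner motd ^C Authorized access only ^C",  # required
}
# new_id = broker.create(**kwargs)["id"]   # returns the new template's id
print(sorted(kwargs))
```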
def update(self, **kwargs):
"""Updates an existing config template.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template.
:type id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param description: A description for the config template. If omitted, this field will not be updated.
:type description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param device_type: The device type associated with the config template. If omitted, this field will not be updated.
:type device_type: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param model: The device model name associated with the config template. If omitted, this field will not be updated.
:type model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param name: The name of the config template.
:type name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param risk_level: The user-specified risk level for the template. Possible levels are 1 (low), 2 (medium), and 3 (high). To run higher risk templates, higher privileges are required. If omitted, this field will not be updated.
:type risk_level: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param template_type: The template type denotes whether devices or interfaces should be specified when the template job is scheduled or run. The value could be either 'Device' or 'Interface'.
:type template_type: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_at: The date and time the config template was updated. If omitted, this field will not be updated.
:type updated_at: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_by: The user that last updated the config template. If omitted, this field will not be updated.
:type updated_by: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param vendor: The device vendor name associated with the config template. If omitted, this field will not be updated.
:type vendor: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param version: The device OS version associated with the config template. If omitted, this field will not be updated.
:type version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1
:param overwrite_ind: An indicator to overwrite an existing template file with the same name. Overwrite if set to 1. Do not overwrite if set to 0
:type overwrite_ind: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param template_text: Template text.
:type template_text: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return id: The id of the updated config template.
:rtype id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return model: The class name of the updated config template.
:rtype model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return uri: A URI that may be used to retrieve the updated config template.
:rtype uri: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return config_template: The updated config template.
:rtype config_template: ConfigTemplate
"""
return self.api_request(self._get_method_fullname("update"), kwargs)
def duplicate(self, **kwargs):
"""This method will make a copy of the specified configuration template
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template from which to make a copy
:type id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param name: The name to be assigned to the new configuration template
:type name: String
**Outputs**
"""
return self.api_request(self._get_method_fullname("duplicate"), kwargs)
def populate_template(self, **kwargs):
"""This method populates a new configuration template with information from the selected configuration revision of a device
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param ConfigRevisionID: The internal NetMRI identifier of the specific configuration revision
:type ConfigRevisionID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier of the specific device
:type DeviceID: Integer
**Outputs**
"""
return self.api_request(self._get_method_fullname("populate_template"), kwargs)
def export(self, **kwargs):
"""This method exports a configuration template
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier of the configuration template
:type id: Integer
**Outputs**
"""
return self.api_request(self._get_method_fullname("export"), kwargs)
def import_data(self, **kwargs):
"""This method imports a configuration template
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param template_file: The configuration template file contents
:type template_file: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param overwrite_ind: An indicator to overwrite an existing template file with the same name. Overwrite if set to 1. Do not overwrite if set to 0
:type overwrite_ind: Boolean
**Outputs**
"""
return self.api_request(self._get_method_fullname("import"), kwargs)
def run(self, **kwargs):
"""Run a config template immediately with specified input.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The ID of the template to run.
:type id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param device_group_ids: A comma delimited string of device group ids. Can be blank if not using device groups.
:type device_group_ids: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param device_ids: A comma delimited string of device ids. Can be blank if ONLY using device groups.
:type device_ids: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param template_variables: Optional variables to be passed to the template. Any variable name starting with $ will be passed through as input to the template.
:type template_variables: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` requestor
:param credential_mode: If user credentials are required, they may be set from additional inputs (credential_mode = 'manual'). The credentials may be looked up using requestor stored credentials (credential_mode = 'requestor').
:type credential_mode: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param username: Username to be used if the job requires user credentials.
:type username: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param password: Password to be used if the job requires user credentials.
:type password: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param enable_password: Enable password to be used if the job requires user credentials.
:type enable_password: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return JobID: The JobID of the running template.
:rtype JobID: Integer
"""
return self.api_request(self._get_method_fullname("run"), kwargs)
def variables(self, **kwargs):
"""List the variables for the specified config template (tailored for input forms)
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The ConfigTemplateID of the config template from which to obtain variables
:type id: Integer
**Outputs**
"""
return self.api_request(self._get_method_fullname("variables"), kwargs)
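A usage sketch for `run()` above. Note the docstring's detail that `device_ids` is a comma-delimited *string*, not a list; the broker object and template id are assumptions for illustration:

```python
# device_ids must be passed as a comma-delimited string per the docstring.
device_ids = "1001,1002,1003"
id_list = [int(d) for d in device_ids.split(",")]   # what the string encodes
print(id_list)                                       # [1001, 1002, 1003]
# Hypothetical call, using requestor-stored credentials (the default mode):
# job_id = broker.run(id=42, device_ids=device_ids,
#                     credential_mode="requestor")["JobID"]
```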
| 35.368024 | 350 | 0.529319 | 2,490 | 23,449 | 4.934538 | 0.10241 | 0.109058 | 0.070888 | 0.089932 | 0.764955 | 0.749166 | 0.746073 | 0.7324 | 0.729063 | 0.715553 | 0 | 0.003643 | 0.367905 | 23,449 | 662 | 351 | 35.42145 | 0.825327 | 0.711971 | 0 | 0 | 0 | 0 | 0.06456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.44 | false | 0 | 0.12 | 0 | 1.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f8f3f9dc8a94d4c7e8a13487f345dfd4158b0d18 | 40 | py | Python | funcoes.py | wedsonchaves/exercicicos | 4240fed6189dd69103676c80543bb27fff504be4 | [
"Apache-2.0"
] | null | null | null | funcoes.py | wedsonchaves/exercicicos | 4240fed6189dd69103676c80543bb27fff504be4 | [
"Apache-2.0"
] | null | null | null | funcoes.py | wedsonchaves/exercicicos | 4240fed6189dd69103676c80543bb27fff504be4 | [
"Apache-2.0"
] | null | null | null | def aoQuadrado(num):
    # "aoQuadrado" means "squared"; the original returned num**4 by mistake.
    return num**2
| 13.333333 | 21 | 0.65 | 6 | 40 | 4.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.225 | 40 | 2 | 22 | 20 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f8fd6ada07498149b5c6c3ea611d0bad0cc368fe | 6,549 | py | Python | jacobi.py | u-t-k-a-n/sayisalanaliz | 2a85b9e42b4de74bf1c174dca684aae496d66dc4 | [
"MIT"
] | null | null | null | jacobi.py | u-t-k-a-n/sayisalanaliz | 2a85b9e42b4de74bf1c174dca684aae496d66dc4 | [
"MIT"
] | null | null | null | jacobi.py | u-t-k-a-n/sayisalanaliz | 2a85b9e42b4de74bf1c174dca684aae496d66dc4 | [
"MIT"
] | null | null | null | # Fixed-point (Jacobi) iteration for x = h(x), where h(x) may combine a
# polynomial part, a radical term and a rational term, all entered by the user.
fonksiyonlar = {}

def read_int(prompt):
    """Ask until the user enters an integer."""
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("Please enter an integer.")

def read_float(prompt):
    """Ask until the user enters a number."""
    while True:
        try:
            return float(input(prompt))
        except ValueError:
            print("Please enter a number.")

def read_coefficients(kind, max_derece, min_derece=0):
    """Read one coefficient per degree, from the highest degree down."""
    return [read_float("Enter the coefficient of the degree-{} term {}:".format(i, kind))
            for i in range(max_derece, min_derece - 1, -1)]

max_derece = read_int("Enter the highest degree of h(x):")
fonksiyonlar["fonksiyon_h"] = read_coefficients("of h(x)", max_derece)

yes_kök = input("Type YES if h(x) contains a radical term, otherwise press Enter.").upper()
if yes_kök == "YES":
    kök_derece = read_float("Enter the exponent of the radical as a float (e.g. 0.5):")
    kök_max = read_int("Enter the highest degree inside the radical:")
    kök_min = read_int("Enter the lowest degree inside the radical:")
    fonksiyonlar["kök"] = read_coefficients("inside the radical", kök_max, kök_min)

yes_kesir = input("Type YES if h(x) contains a rational term, otherwise press Enter.").upper()
if yes_kesir == "YES":
    pay_max = read_int("Enter the highest degree of the numerator:")
    pay_min = read_int("Enter the lowest degree of the numerator:")
    fonksiyonlar["pay"] = read_coefficients("of the numerator", pay_max, pay_min)
    payda_max = read_int("Enter the highest degree of the denominator:")
    payda_min = read_int("Enter the lowest degree of the denominator:")
    fonksiyonlar["payda"] = read_coefficients("of the denominator", payda_max, payda_min)

start_x = read_float("Enter the starting value:")
epsilon = read_float("Enter the epsilon value:")

def function(fonksiyon, x, max_derece):
    """Evaluate the polynomial with coefficients `fonksiyon` at x."""
    total = 0
    for j, coef in enumerate(fonksiyon):
        total += coef * (x ** (max_derece - j))
    return total

def türev(fonksiyon, x, max_derece):
    """Evaluate the derivative of that polynomial at x."""
    total = 0
    for j, coef in enumerate(fonksiyon):
        total += coef * (max_derece - j) * (x ** (max_derece - j - 1))
    return total

def h(x):
    """The iteration function h(x)."""
    value = function(fonksiyonlar["fonksiyon_h"], x, max_derece)
    if yes_kök == "YES":
        value += function(fonksiyonlar["kök"], x, kök_max) ** kök_derece
    if yes_kesir == "YES":
        value += function(fonksiyonlar["pay"], x, pay_max) / function(fonksiyonlar["payda"], x, payda_max)
    return value

def h_türev(x):
    """The derivative h'(x), used for the convergence test."""
    d = türev(fonksiyonlar["fonksiyon_h"], x, max_derece)
    if yes_kök == "YES":
        # chain rule: d/dx f(x)**p = p * f'(x) * f(x)**(p - 1)
        d += (kök_derece * türev(fonksiyonlar["kök"], x, kök_max)
              * function(fonksiyonlar["kök"], x, kök_max) ** (kök_derece - 1))
    if yes_kesir == "YES":
        # quotient rule: (p/q)' = (p'q - q'p) / q**2; the original code passed
        # a single malformed argument to türev() here.
        pay = function(fonksiyonlar["pay"], x, pay_max)
        payda = function(fonksiyonlar["payda"], x, payda_max)
        d += (türev(fonksiyonlar["pay"], x, pay_max) * payda
              - türev(fonksiyonlar["payda"], x, payda_max) * pay) / payda ** 2
    return d

# Fixed-point iteration is only guaranteed to converge when |h'(x0)| < 1.
# (The original only computed the derivative when a radical or rational term
# was present, crashing with a NameError for a purely polynomial h(x).)
if abs(h_türev(start_x)) >= 1:
    print("No solution. Try another h(x).")
else:
    sum1 = h(start_x)
    while abs(sum1 - start_x) > epsilon:
        start_x = sum1
        sum1 = h(start_x)
    print("Approximate value of the root:", round(sum1, 2))
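The iteration scheme the script above implements can be demonstrated non-interactively. This is a minimal sketch with a hand-picked h(x) = 0.5x + 1, chosen because |h'(x)| = 0.5 < 1, so the fixed-point iteration provably converges (to x = 2):

```python
# Fixed-point iteration x <- h(x) for h(x) = 0.5*x + 1; fixed point is x = 2.
def h(x):
    return 0.5 * x + 1

x, eps = 0.0, 1e-6
nxt = h(x)
while abs(nxt - x) > eps:   # stop once successive iterates agree within eps
    x = nxt
    nxt = h(x)
print(round(nxt, 2))        # 2.0
```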
| 42.251613 | 139 | 0.61811 | 813 | 6,549 | 4.821648 | 0.105781 | 0.055102 | 0.042857 | 0.072959 | 0.804847 | 0.78699 | 0.757143 | 0.738776 | 0.704592 | 0.657143 | 0 | 0.006881 | 0.245534 | 6,549 | 154 | 140 | 42.525974 | 0.78648 | 0 | 0 | 0.673203 | 0 | 0 | 0.246335 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013072 | false | 0 | 0 | 0 | 0.026144 | 0.104575 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d119888336b71534752af62a523bb318fa2e2e6 | 278 | py | Python | interface/cffi/_cffi_ABI_outbound_generated.py | techtonik/discovery | 8017917487fa78c7defa76610100fc4170680f0f | [
"Unlicense"
] | 1 | 2019-03-26T09:00:07.000Z | 2019-03-26T09:00:07.000Z | interface/cffi/_cffi_ABI_outbound_generated.py | techtonik/discovery | 8017917487fa78c7defa76610100fc4170680f0f | [
"Unlicense"
] | null | null | null | interface/cffi/_cffi_ABI_outbound_generated.py | techtonik/discovery | 8017917487fa78c7defa76610100fc4170680f0f | [
"Unlicense"
] | null | null | null | # auto-generated file
import _cffi_backend
ffi = _cffi_backend.FFI('_cffi_ABI_outbound_generated',
_version = 0x2601,
_types = b'\x00\x00\x04\x0D\x00\x00\x03\x03\x00\x00\x01\x0F\x00\x00\x02\x01\x00\x00\x07\x01',
_globals = (b'\x00\x00\x00\x23printf',0,),
)
| 30.888889 | 98 | 0.690647 | 45 | 278 | 4.022222 | 0.533333 | 0.232044 | 0.154696 | 0.198895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217573 | 0.140288 | 278 | 8 | 99 | 34.75 | 0.539749 | 0.068345 | 0 | 0 | 1 | 0.166667 | 0.522088 | 0.522088 | 0 | 0 | 0.024096 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d2f676d455f502221cc0437f58bd62c4dff2c78 | 21 | py | Python | python_library/StarKiller/StarKiller/eos/__init__.py | yut23/Microphysics | 3c4985213c5e5b1ad2602b0bba2ce164b847361a | [
"BSD-3-Clause"
] | 16 | 2017-08-17T11:12:01.000Z | 2021-06-10T23:11:08.000Z | python_library/StarKiller/StarKiller/eos/__init__.py | Youhichka/Microphysics | 6f28333d40c9e15fdfbb1c4dc208e887fb5549c3 | [
"BSD-3-Clause"
] | 533 | 2017-06-08T13:52:11.000Z | 2022-01-28T16:13:29.000Z | python_library/StarKiller/StarKiller/eos/__init__.py | Youhichka/Microphysics | 6f28333d40c9e15fdfbb1c4dc208e887fb5549c3 | [
"BSD-3-Clause"
] | 34 | 2017-08-16T16:29:20.000Z | 2021-09-09T16:19:15.000Z | from .eos import Eos
| 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d439d265b5ba8300490e11b3b402adb7eff7f45 | 192 | py | Python | Exercicios/ex018.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | Exercicios/ex018.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | Exercicios/ex018.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | from math import cos, sin, tan, radians
n = float(input('Enter an angle: '))
print(f'The cosine is {cos(radians(n)):.3f}, the sine is {sin(radians(n)):.3f} and the tangent is {tan(radians(n)):.3f}')
| 48 | 112 | 0.65625 | 37 | 192 | 3.405405 | 0.621622 | 0.253968 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018072 | 0.135417 | 192 | 3 | 113 | 64 | 0.740964 | 0 | 0 | 0 | 0 | 0.333333 | 0.625 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5d6cec844f2c29d5d83fa2207476247365167a01 | 6,730 | py | Python | tests/components/mailgun/test_init.py | domwillcode/home-assistant | f170c80bea70c939c098b5c88320a1c789858958 | [
"Apache-2.0"
] | 6 | 2020-07-18T16:33:25.000Z | 2021-09-26T09:52:04.000Z | tests/components/mailgun/test_init.py | domwillcode/home-assistant | f170c80bea70c939c098b5c88320a1c789858958 | [
"Apache-2.0"
] | 47 | 2020-07-23T07:13:11.000Z | 2022-03-31T06:01:46.000Z | tests/components/mailgun/test_init.py | klauern/home-assistant-core | c18ba6aec0627e6afb6442c678edb5ff2bb17db6 | [
"Apache-2.0"
] | 5 | 2020-03-29T00:29:13.000Z | 2021-09-06T20:58:40.000Z | """Test the init file of Mailgun."""
import hashlib
import hmac
import pytest
from homeassistant import data_entry_flow
from homeassistant.components import mailgun, webhook
from homeassistant.config import async_process_ha_core_config
from homeassistant.const import CONF_API_KEY, CONF_DOMAIN
from homeassistant.core import callback
from homeassistant.setup import async_setup_component
API_KEY = "abc123"
@pytest.fixture
async def http_client(hass, aiohttp_client):
"""Initialize a Home Assistant Server for testing this module."""
await async_setup_component(hass, webhook.DOMAIN, {})
return await aiohttp_client(hass.http.app)
@pytest.fixture
async def webhook_id_with_api_key(hass):
"""Initialize the Mailgun component and get the webhook_id."""
await async_setup_component(
hass,
mailgun.DOMAIN,
{mailgun.DOMAIN: {CONF_API_KEY: API_KEY, CONF_DOMAIN: "example.com"}},
)
await async_process_ha_core_config(
hass, {"internal_url": "http://example.local:8123"},
)
result = await hass.config_entries.flow.async_init(
"mailgun", context={"source": "user"}
)
assert result["type"] == data_entry_flow.RESULT_TYPE_FORM, result
result = await hass.config_entries.flow.async_configure(result["flow_id"], {})
assert result["type"] == data_entry_flow.RESULT_TYPE_CREATE_ENTRY
return result["result"].data["webhook_id"]
@pytest.fixture
async def webhook_id_without_api_key(hass):
"""Initialize the Mailgun component and get the webhook_id w/o API key."""
await async_setup_component(hass, mailgun.DOMAIN, {})
await async_process_ha_core_config(
hass, {"internal_url": "http://example.local:8123"},
)
result = await hass.config_entries.flow.async_init(
"mailgun", context={"source": "user"}
)
assert result["type"] == data_entry_flow.RESULT_TYPE_FORM, result
result = await hass.config_entries.flow.async_configure(result["flow_id"], {})
assert result["type"] == data_entry_flow.RESULT_TYPE_CREATE_ENTRY
return result["result"].data["webhook_id"]
@pytest.fixture
async def mailgun_events(hass):
"""Return a list of mailgun_events triggered."""
events = []
@callback
def handle_event(event):
"""Handle Mailgun event."""
events.append(event)
hass.bus.async_listen(mailgun.MESSAGE_RECEIVED, handle_event)
return events
async def test_mailgun_webhook_with_missing_signature(
http_client, webhook_id_with_api_key, mailgun_events
):
"""Test that webhook doesn't trigger an event without a signature."""
event_count = len(mailgun_events)
await http_client.post(
f"/api/webhook/{webhook_id_with_api_key}",
json={"hello": "mailgun", "signature": {}},
)
assert len(mailgun_events) == event_count
await http_client.post(
f"/api/webhook/{webhook_id_with_api_key}", json={"hello": "mailgun"}
)
assert len(mailgun_events) == event_count
async def test_mailgun_webhook_with_different_api_key(
http_client, webhook_id_with_api_key, mailgun_events
):
"""Test that webhook doesn't trigger an event with a wrong signature."""
timestamp = "1529006854"
token = "a8ce0edb2dd8301dee6c2405235584e45aa91d1e9f979f3de0"
event_count = len(mailgun_events)
await http_client.post(
f"/api/webhook/{webhook_id_with_api_key}",
json={
"hello": "mailgun",
"signature": {
"signature": hmac.new(
key=b"random_api_key",
msg=bytes(f"{timestamp}{token}", "utf-8"),
digestmod=hashlib.sha256,
).hexdigest(),
"timestamp": timestamp,
"token": token,
},
},
)
assert len(mailgun_events) == event_count
async def test_mailgun_webhook_event_with_correct_api_key(
http_client, webhook_id_with_api_key, mailgun_events
):
"""Test that webhook triggers an event after validating a signature."""
timestamp = "1529006854"
token = "a8ce0edb2dd8301dee6c2405235584e45aa91d1e9f979f3de0"
event_count = len(mailgun_events)
await http_client.post(
f"/api/webhook/{webhook_id_with_api_key}",
json={
"hello": "mailgun",
"signature": {
"signature": hmac.new(
key=bytes(API_KEY, "utf-8"),
msg=bytes(f"{timestamp}{token}", "utf-8"),
digestmod=hashlib.sha256,
).hexdigest(),
"timestamp": timestamp,
"token": token,
},
},
)
assert len(mailgun_events) == event_count + 1
assert mailgun_events[-1].data["webhook_id"] == webhook_id_with_api_key
assert mailgun_events[-1].data["hello"] == "mailgun"
async def test_mailgun_webhook_with_missing_signature_without_api_key(
http_client, webhook_id_without_api_key, mailgun_events
):
"""Test that webhook triggers an event without a signature w/o API key."""
event_count = len(mailgun_events)
await http_client.post(
f"/api/webhook/{webhook_id_without_api_key}",
json={"hello": "mailgun", "signature": {}},
)
assert len(mailgun_events) == event_count + 1
assert mailgun_events[-1].data["webhook_id"] == webhook_id_without_api_key
assert mailgun_events[-1].data["hello"] == "mailgun"
await http_client.post(
f"/api/webhook/{webhook_id_without_api_key}", json={"hello": "mailgun"}
)
assert len(mailgun_events) == event_count + 1
assert mailgun_events[-1].data["webhook_id"] == webhook_id_without_api_key
assert mailgun_events[-1].data["hello"] == "mailgun"
async def test_mailgun_webhook_event_without_an_api_key(
http_client, webhook_id_without_api_key, mailgun_events
):
"""Test that webhook triggers an event if there is no api key."""
timestamp = "1529006854"
token = "a8ce0edb2dd8301dee6c2405235584e45aa91d1e9f979f3de0"
event_count = len(mailgun_events)
await http_client.post(
f"/api/webhook/{webhook_id_without_api_key}",
json={
"hello": "mailgun",
"signature": {
"signature": hmac.new(
key=bytes(API_KEY, "utf-8"),
msg=bytes(f"{timestamp}{token}", "utf-8"),
digestmod=hashlib.sha256,
).hexdigest(),
"timestamp": timestamp,
"token": token,
},
},
)
assert len(mailgun_events) == event_count + 1
assert mailgun_events[-1].data["webhook_id"] == webhook_id_without_api_key
assert mailgun_events[-1].data["hello"] == "mailgun"
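The tests above all construct the Mailgun signature the same way: HMAC-SHA256 over `f"{timestamp}{token}"` keyed with the API key. A minimal standalone sketch of the corresponding verification step (the helper name `verify_mailgun_signature` is illustrative, not part of this test suite):

```python
import hashlib
import hmac

def verify_mailgun_signature(api_key: str, timestamp: str, token: str, signature: str) -> bool:
    """Recompute the HMAC-SHA256 over timestamp+token and compare in constant time."""
    expected = hmac.new(
        key=api_key.encode("utf-8"),
        msg=f"{timestamp}{token}".encode("utf-8"),
        digestmod=hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, signature)

# A signature built with the right key verifies; a tampered token does not.
good = hmac.new(b"secret", b"1529006854sometoken", hashlib.sha256).hexdigest()
assert verify_mailgun_signature("secret", "1529006854", "sometoken", good)
assert not verify_mailgun_signature("secret", "1529006854", "othertoken", good)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking timing information during the comparison.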
# basic_functions_pckg/__init__.py (from jochenruland/some_basic_functions, MIT license)
from .basic_functions_pckg import basic_functions
# django-rtk-green/django_rtk_green/utils/__init__.py (from mnieber/django-graphql-registration, MIT license)
from .create_user import create_user  # noqa: F401
# python/testData/refactoring/move/reformatFromImports/before/src/b.py (from intellij-community, Apache-2.0 license)
from lib import Class1
print(Class1())
# models/cdisn_models.py (from akirato0223/CDISN, Apache-2.0 license)
# cdisn_models.py: a file defining classes and functions for CDISN Ensemble training
# SEE LICENSE STATEMENT AT THE END OF THE FILE
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import pickle as pkl
class CDISNCompatibleStagerNet(nn.Module):
"""
    Example: x_11chan_out = stager_11chan_instance(x_11chan_in)
- stager: orig x shape == torch.Size([1, 3000, 11])
- stager: after first operation == torch.Size([1, 1, 3000, 11])
- stager: after conv1 == torch.Size([1, 11, 3000, 1])
X stager: after permute1 == torch.Size([1, 1, 3000, 11])
X stager: after conv2 == torch.Size([1, 16, 2951, 11])
- stager: after relu/maxpool 1 == torch.Size([1, 16, 227, 11])
X stager: after batchnorm1 == torch.Size([1, 16, 227, 11])
X stager: after conv3 == torch.Size([1, 16, 178, 11])
- stager: after relu/max_pool 2 == torch.Size([1, 16, 13, 11])
X stager: after batchnorm2 == torch.Size([1, 16, 13, 11])
- stager: after flatten1 == torch.Size([1, 2288])
- stager: after dropout 1 == torch.Size([1, 2288])
X stager: after linear1 == torch.Size([1, 100])
Hidden Representation Set(s)
Set 1
torch.Size([1, 1, 3000, 11]) # after permute1
total_params_in_dim_2 = 3000
Set 2
torch.Size([1, 16, 2951, 11]) # after conv2
torch.Size([1, 16, 227, 11]) # after batchnorm1
torch.Size([1, 16, 178, 11]) # after conv3
torch.Size([1, 16, 13, 11]) # after batchnorm2
total_params_in_dim_2 = 3369
Set 3
torch.Size([1, 100]) # after linear 1
total_params_in_dim_1 = 100
"""
def __init__(
self, channels, dropout_rate=0.5, embed_dim=100, num_tandem_nets=2, device="cpu"
):
super(CDISNCompatibleStagerNet, self).__init__()
self.SELF_REFERENCE_INDEX = 0
# sanity check
assert num_tandem_nets > 0
# see https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html#torch.nn.functional.pad
self.conv1_layer = nn.Conv2d(1, channels, (1, channels), stride=(1, 1))
self.conv1_update_layers = nn.ModuleList()
self.conv2_layer = nn.Conv2d(1, 16, (50, 1), stride=(1, 1))
self.conv2_update_pad_tuple = (
0,
0,
24,
25,
0,
0,
0,
0,
) # pad by (0, 0), (24, 25), (0, 0), and (0, 0) in reverse-dimensional order
self.conv2_update_layers = nn.ModuleList()
self.batchnorm1_layer = nn.BatchNorm2d(16)
self.batchnorm1_update_pad_tuple = (
0,
0,
14,
15,
0,
0,
0,
0,
) # pad by (0, 0), (14, 15), (0, 0), and (0, 0)
self.batchnorm1_update_layers = nn.ModuleList()
self.conv3_layer = nn.Conv2d(16, 16, (50, 1), stride=(1, 1))
self.conv3_update_pad_tuple = (
0,
0,
14,
15,
0,
0,
0,
0,
) # pad by (0, 0), (14, 15), (0, 0), and (0, 0)
self.conv3_update_layers = nn.ModuleList()
self.batchnorm2_layer = nn.BatchNorm2d(16)
self.batchnorm2_update_pad_tuple = (
0,
0,
1,
1,
0,
0,
0,
0,
) # pad by (0, 0), (1, 1), (0, 0), and (0, 0)
self.batchnorm2_update_layers = nn.ModuleList()
self.linear1_layer = nn.Linear(208 * channels, embed_dim)
self.linear1_update_layers = nn.ModuleList()
for _ in range(
num_tandem_nets - 1
): # add psi functions at each layer for all tandem networks (excluding self)
self.conv1_update_layers.append(
nn.Conv2d(1, channels, (1, channels), stride=(1, 1))
)
self.conv2_update_layers.append(nn.Conv2d(16, 16, (50, 1), stride=(1, 1)))
self.batchnorm1_update_layers.append(
nn.Conv2d(16, 16, (30, 1), stride=(1, 1))
)
self.conv3_update_layers.append(nn.Conv2d(16, 16, (30, 1), stride=(1, 1)))
self.batchnorm2_update_layers.append(
nn.Conv2d(16, 16, (3, 1), stride=(1, 1))
)
self.linear1_update_layers.append(nn.Linear(embed_dim, embed_dim))
self.dropout_rate = dropout_rate
self.embed_dim = embed_dim
self.num_tandem_nets = num_tandem_nets
self.device = device
self.BATCH_DIM = 0
self.cdisn_layer_output_shapes = [
[None, 1, 3000, 11],
[None, 16, 2951, 11],
[None, 16, 227, 11],
[None, 16, 178, 11],
[None, 16, 13, 11],
[None, 100],
]
pass
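Each `*_update` (psi) layer above pads its input so that the update it produces has the same temporal length as the corresponding phi output it is added to. This can be sanity-checked with the standard convolution output-length formula (a quick arithmetic check using the shapes from the class docstring; `conv_out_len` is an illustrative helper, not part of the module):

```python
def conv_out_len(n, kernel, pad_before=0, pad_after=0, stride=1):
    """1-D output length of a convolution: floor((n + pads - kernel) / stride) + 1."""
    return (n + pad_before + pad_after - kernel) // stride + 1

# conv2_update: kernel 50 over the 2951-sample dim, padded (24, 25) -> length preserved
assert conv_out_len(2951, 50, 24, 25) == 2951
# batchnorm1_update: kernel 30 over the 227-sample dim, padded (14, 15) -> preserved
assert conv_out_len(227, 30, 14, 15) == 227
# conv3_update: kernel 30 over the 178-sample dim, padded (14, 15) -> preserved
assert conv_out_len(178, 30, 14, 15) == 178
# batchnorm2_update: kernel 3 over the 13-sample dim, padded (1, 1) -> preserved
assert conv_out_len(13, 3, 1, 1) == 13
```

This is why each pad tuple's nonzero entries sum to `kernel - 1`: the update convolution then maps the padded activation back to the exact shape of the phi output it is summed with.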
def get_phi_output_for_current_layer(self, z, cdisn_layer_index):
"""
z: tensor of concatenated hidden layer inputs
cdisn_layer_index: current layer in the cdisn network (i.e. first conv layer in each net, 2nd batchnorm, etc)
"""
hiddens = None
# get phi_i output
if cdisn_layer_index == 0:
hiddens = self.conv1_layer(z)
# permute to (batch_num, C, T, 1)
hiddens = hiddens.permute(0, 3, 2, 1)
elif cdisn_layer_index == 1:
hiddens = self.conv2_layer(z)
elif cdisn_layer_index == 2:
hiddens = self.batchnorm1_layer(z)
elif cdisn_layer_index == 3:
hiddens = self.conv3_layer(z)
elif cdisn_layer_index == 4:
hiddens = self.batchnorm2_layer(z)
elif cdisn_layer_index == 5:
hiddens = self.linear1_layer(z)
else:
raise ValueError(
"Unrecognized layer index "
+ str(cdisn_layer_index)
+ " for Stager Net."
)
return hiddens
def get_updates_for_current_hidden_layer(
self, cdisn_layer_index, other_hiddens, curr_pad_tuple=None
):
"""
cdisn_layer_index: current layer in the cdisn network (i.e. first conv layer in each net, 2nd batchnorm, etc)
other_hiddens: output of get_phi_output_for_current_layer corresponding to cdisn_layer_index from other networks in CDISN Ensemble
curr_pad_tuple=None: tuple describing the shape of the padding necessary for current run
"""
# compute updates psi_{i,k,p}(phi_{p,k}) for z
# see https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074
d_i = (
torch.from_numpy(
np.zeros(self.cdisn_layer_output_shapes[cdisn_layer_index])
)
.to(self.device)
.float()
)
d_i.requires_grad = True # see https://pytorch.org/docs/stable/autograd.html
for j, hidden in enumerate(
other_hiddens
): # compute updates for nets besides ith net activation in list of hidden activations
if curr_pad_tuple is not None:
hidden = F.pad(hidden, curr_pad_tuple, "constant", 0)
curr_d = None # will become output of call to curr_psi_functions[j](hidden)
if cdisn_layer_index == 0:
curr_d = self.conv1_update_layers[j](hidden)
curr_d = curr_d.permute(0, 3, 2, 1)
elif cdisn_layer_index == 1:
curr_d = self.conv2_update_layers[j](hidden)
elif cdisn_layer_index == 2:
curr_d = self.batchnorm1_update_layers[j](hidden)
elif cdisn_layer_index == 3:
curr_d = self.conv3_update_layers[j](hidden)
elif cdisn_layer_index == 4:
curr_d = self.batchnorm2_update_layers[j](hidden)
elif cdisn_layer_index == 5:
curr_d = self.linear1_update_layers[j](hidden)
else:
raise ValueError(
"Unrecognized layer index "
+ str(cdisn_layer_index)
+ " for Stager Net."
)
d_i = d_i + curr_d
return d_i
def get_updated_hidden_layer_activations(
self, all_hiddens, cdisn_layer_index, frozen_cdisn_nets, curr_pad_tuple=None
):
for i in range(len(frozen_cdisn_nets) + 1):
if i == self.SELF_REFERENCE_INDEX:
all_hiddens[i] = self.get_phi_output_for_current_layer(
all_hiddens[i], cdisn_layer_index
)
else:
all_hiddens[i] = frozen_cdisn_nets[
i - 1
].embedder.get_phi_output_for_current_layer(
all_hiddens[i], cdisn_layer_index
)
all_updates = []
for i in range(len(frozen_cdisn_nets) + 1):
if i == self.SELF_REFERENCE_INDEX:
all_updates.append(
self.get_updates_for_current_hidden_layer(
cdisn_layer_index,
all_hiddens[:i] + all_hiddens[i + 1 :],
curr_pad_tuple=curr_pad_tuple,
)
)
else:
all_updates.append(
frozen_cdisn_nets[
i - 1
].embedder.get_updates_for_current_hidden_layer(
cdisn_layer_index,
all_hiddens[:i] + all_hiddens[i + 1 :],
curr_pad_tuple=curr_pad_tuple,
)
)
for i, (phi_output, psi_outputs) in enumerate(zip(all_hiddens, all_updates)):
all_hiddens[i] = phi_output + psi_outputs
return all_hiddens
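The method above implements the per-layer CDISN rule: each net's activation is its own transform plus additive updates computed from every other net's activation at the same layer, i.e. h_i = phi_i(z_i) + sum over j != i of psi_{i,j}(phi_j(z_j)). A toy sketch with scalar stand-ins for phi and psi (purely illustrative; `tandem_layer` is not part of the module):

```python
def tandem_layer(inputs, phis, psis):
    """h_i = phi_i(z_i) + sum_{j != i} psis[i][j](phi_j(z_j)), with scalar stand-ins."""
    hiddens = [phi(z) for phi, z in zip(phis, inputs)]
    return [
        h + sum(psis[i][j](hiddens[j]) for j in range(len(hiddens)) if j != i)
        for i, h in enumerate(hiddens)
    ]

phis = [lambda z: 2 * z, lambda z: 3 * z]                  # per-net layer transforms
psis = [[None, lambda h: h + 1], [lambda h: h - 1, None]]  # cross-net update functions
assert tandem_layer([1, 1], phis, psis) == [6, 4]  # [2 + (3 + 1), 3 + (2 - 1)]
```

Note the updates are computed from the phi outputs before any update is applied, matching the two-pass structure (all phi outputs first, then all psi updates) of the method above.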
def forward(self, x, frozen_cdisn_nets):
# input assumed to be of shape (batch_num,window_len,channels)
print("<<< BEGINNING STAGER FORWARD PASS >>>")
print("stager: original input x.size() == ", x.size())
curr_batch_size = x.size()[0]
for i in range(len(self.cdisn_layer_output_shapes)):
self.cdisn_layer_output_shapes[i][self.BATCH_DIM] = curr_batch_size
for j in range(len(frozen_cdisn_nets)):
frozen_cdisn_nets[j].embedder.cdisn_layer_output_shapes[i][
self.BATCH_DIM
] = curr_batch_size
x = torch.unsqueeze(x, 1)
print("stager: after 1st squeeze x.size() == ", x.size())
all_hiddens = self.get_updated_hidden_layer_activations(
[x for _ in range(len(frozen_cdisn_nets) + 1)],
0,
frozen_cdisn_nets,
curr_pad_tuple=None,
)
print("stager: after 1st update all_hiddens[0].size() == ", all_hiddens[0].size())
all_hiddens = self.get_updated_hidden_layer_activations(
all_hiddens,
1,
frozen_cdisn_nets,
curr_pad_tuple=self.conv2_update_pad_tuple,
)
print("stager: after 2nd update all_hiddens[0].size() == ", all_hiddens[0].size())
for i in range(len(frozen_cdisn_nets) + 1):
all_hiddens[i] = F.relu(F.max_pool2d(all_hiddens[i], (13, 1)))
print("stager: after 1st maxpool all_hiddens[0].size() == ", all_hiddens[0].size())
all_hiddens = self.get_updated_hidden_layer_activations(
all_hiddens,
2,
frozen_cdisn_nets,
curr_pad_tuple=self.batchnorm1_update_pad_tuple,
)
print("stager: after 3rd update all_hiddens[0].size() == ", all_hiddens[0].size())
all_hiddens = self.get_updated_hidden_layer_activations(
all_hiddens,
3,
frozen_cdisn_nets,
curr_pad_tuple=self.conv3_update_pad_tuple,
)
print("stager: after 4th update all_hiddens[0].size() == ", all_hiddens[0].size())
for i in range(len(frozen_cdisn_nets) + 1):
all_hiddens[i] = F.relu(F.max_pool2d(all_hiddens[i], (13, 1)))
print("stager: after 2nd maxpool all_hiddens[0].size() == ", all_hiddens[0].size())
all_hiddens = self.get_updated_hidden_layer_activations(
all_hiddens,
4,
frozen_cdisn_nets,
curr_pad_tuple=self.batchnorm2_update_pad_tuple,
)
print("stager: after 5th update all_hiddens[0].size() == ", all_hiddens[0].size())
for i in range(len(frozen_cdisn_nets) + 1):
            all_hiddens[i] = F.dropout(
                all_hiddens[i], p=self.dropout_rate, training=self.training
            )  # pass training flag: F.dropout otherwise applies dropout even in eval mode
all_hiddens[i] = all_hiddens[i].view(
curr_batch_size, -1
) # see https://stackoverflow.com/questions/49643225/whats-the-difference-between-reshape-and-view-in-pytorch
print("stager: after final view all_hiddens[0].size() == ", all_hiddens[0].size())
all_hiddens = self.get_updated_hidden_layer_activations(
all_hiddens, 5, frozen_cdisn_nets, curr_pad_tuple=None
)
print("stager: after final update all_hiddens[0].size() == ", all_hiddens[0].size())
print("stager: END OF FORWARD PASS")
return all_hiddens[self.SELF_REFERENCE_INDEX], all_hiddens
class CDISNCompatibleShallowNet(nn.Module):
"""
x: (batch_size, 1, 600, 21)
x: (batch_size, 40, 576, 21)
x: (batch_size, 40, 576, 1)
x: (batch_size, 40, 34, 1)
x: (batch_size, 1360)
x: (batch_size, 100)
"""
def __init__(
self, channels=21, dropout_rate=0.5, embed_dim=100, num_tandem_nets=2, device="cpu"
):
super(CDISNCompatibleShallowNet, self).__init__()
self.SELF_REFERENCE_INDEX = 0
# sanity check
assert num_tandem_nets > 0
# see https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html#torch.nn.functional.pad
self.conv1_layer = nn.Conv2d(1, 40, (25, 1), stride=(1, 1)) # Temporal Conv
self.conv1_update_pad_tuple = (
0,
0,
12,
12,
0,
0,
0,
0,
) # pad by (0, 0), (12, 12), (0, 0), and (0, 0) in reverse-dimensional order
self.conv1_update_layers = nn.ModuleList()
self.batchnorm1_layer = nn.BatchNorm2d(40)
self.batchnorm1_update_pad_tuple = (
0,
0,
12,
12,
0,
0,
0,
0,
        )  # pad by (0, 0), (12, 12), (0, 0), and (0, 0)
self.batchnorm1_update_layers = nn.ModuleList()
        self.conv2_layer = nn.Conv2d(40, 40, (1, channels), stride=(1, 1))  # Spatial Conv
        self.conv2_update_pad_tuple = (
            int((channels - 1) // 2),
            int(((channels - 1) // 2) + ((channels - 1) % 2)),
            0,
            0,
            0,
            0,
            0,
            0,
        )  # pad the collapsed channel dim from 1 back to channels (reverse-dimensional order) so the (1, channels) update conv outputs width 1
self.conv2_update_layers = nn.ModuleList()
self.avgPool1_layer = nn.AvgPool2d((75, 1), stride=(15, 1)) # Mean Pool
self.avgPool1_update_pad_tuple = (
0,
0,
271,
271,
0,
0,
0,
0,
        )  # pad by (0, 0), (271, 271), (0, 0), and (0, 0)
self.avgPool1_update_layers = nn.ModuleList()
self.linear1_layer = nn.Linear(1360, embed_dim) # Fully Connected
self.linear1_update_layers = nn.ModuleList()
for _ in range(
num_tandem_nets - 1
): # add psi functions at each layer for all tandem networks (excluding self)
self.conv1_update_layers.append(
nn.Conv2d(40, 40, (25, 1), stride=(1, 1))
)
self.batchnorm1_update_layers.append(
nn.Conv2d(40, 40, (25, 1), stride=(1, 1))
)
            self.conv2_update_layers.append(nn.Conv2d(40, 40, (1, channels), stride=(1, 1)))
self.avgPool1_update_layers.append(
nn.Conv2d(40, 40, (75, 1), stride=(15, 1))
)
self.linear1_update_layers.append(nn.Linear(embed_dim, embed_dim))
self.dropout_rate = dropout_rate
self.embed_dim = embed_dim
self.num_tandem_nets = num_tandem_nets
self.device = device
self.BATCH_DIM = 0
self.cdisn_layer_output_shapes = [
[None, 40, 576, channels], # [None, 40, 576, 21],
[None, 40, 576, channels], # [None, 40, 576, 21],
[None, 40, 576, 1],
[None, 40, 34, 1],
[None, 100],
]
pass
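The hard-coded 1360 in `linear1_layer` follows directly from the shapes in the class docstring. A quick check with the conv/pool output-length formula, for the 600-sample, 21-channel windows the docstring assumes (`out_len` is an illustrative helper, not part of the module):

```python
def out_len(n, kernel, stride=1):
    """Output length of an unpadded conv/pool dimension: floor((n - kernel) / stride) + 1."""
    return (n - kernel) // stride + 1

t = out_len(600, 25)           # temporal conv, kernel (25, 1): 600 -> 576
c = out_len(21, 21)            # spatial conv, kernel (1, channels): 21 -> 1
p = out_len(t, 75, stride=15)  # mean pool, kernel (75, 1), stride (15, 1): 576 -> 34
assert (t, c, p) == (576, 1, 34)
assert 40 * p * c == 1360      # flattened features feeding nn.Linear(1360, embed_dim)
```

Note that for a channel count other than 21 the spatial conv still collapses the channel dim to 1, so the 1360 figure holds as long as the 600-sample window length does.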
def unfreeze_psi_update_parameters_for_given_layer(self, k):
params_to_optimize = []
if k == 0:
for j in range(len(self.conv1_update_layers)):
for p in self.conv1_update_layers[j].parameters():
p.requires_grad = True
params_to_optimize.append(p)
elif k == 1:
for j in range(len(self.batchnorm1_update_layers)):
for p in self.batchnorm1_update_layers[j].parameters():
p.requires_grad = True
params_to_optimize.append(p)
elif k == 2:
for j in range(len(self.conv2_update_layers)):
for p in self.conv2_update_layers[j].parameters():
p.requires_grad = True
params_to_optimize.append(p)
elif k == 3:
for j in range(len(self.avgPool1_update_layers)):
for p in self.avgPool1_update_layers[j].parameters():
p.requires_grad = True
params_to_optimize.append(p)
elif k == 4:
for j in range(len(self.linear1_update_layers)):
for p in self.linear1_update_layers[j].parameters():
p.requires_grad = True
params_to_optimize.append(p)
else:
raise ValueError("CDISNCompatibleShallowNet.unfreeze_psi_update_parameters_for_given_layer: CDISNCompatibleShallowNet only has 5 layers, meaning requested layer index k=="+str(k)+" is out-of-bounds.")
return params_to_optimize
def get_phi_output_for_current_layer(self, z, cdisn_layer_index):
"""
z: tensor of concatenated hidden layer inputs
cdisn_layer_index: current layer in the cdisn task model (e.g. first conv layer in each net)
"""
hiddens = None
# get phi_i output
if cdisn_layer_index == 0:
hiddens = self.conv1_layer(z)
elif cdisn_layer_index == 1:
hiddens = self.batchnorm1_layer(z)
elif cdisn_layer_index == 2:
hiddens = self.conv2_layer(z)
elif cdisn_layer_index == 3:
hiddens = self.avgPool1_layer(z)
elif cdisn_layer_index == 4:
hiddens = self.linear1_layer(z)
else:
raise ValueError(
"Unrecognized layer index "
+ str(cdisn_layer_index)
+ " for ShallowNet."
)
return hiddens
def get_updates_for_current_hidden_layer(
self, cdisn_layer_index, other_hiddens, curr_pad_tuple=None
):
"""
cdisn_layer_index: current layer in the cdisn network (e.g., first conv layer in each net)
other_hiddens: output of get_phi_output_for_current_layer corresponding to cdisn_layer_index from other networks in CDISN Ensemble
curr_pad_tuple=None: tuple describing the shape of the padding necessary for current run
"""
# compute updates psi_{i,k,p}(phi_{p,k}) for z
# see https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074
d_i = (
torch.from_numpy(
np.zeros(self.cdisn_layer_output_shapes[cdisn_layer_index])
)
.to(self.device)
.float()
)
d_i.requires_grad = True # see https://pytorch.org/docs/stable/autograd.html
for j, hidden in enumerate(
other_hiddens
): # compute updates for nets besides ith net activation in list of hidden activations
if curr_pad_tuple is not None:
hidden = F.pad(hidden, curr_pad_tuple, "constant", 0)
curr_d = None # will become output of call to curr_psi_functions[j](hidden)
if cdisn_layer_index == 0:
curr_d = self.conv1_update_layers[j](hidden)
elif cdisn_layer_index == 1:
curr_d = self.batchnorm1_update_layers[j](hidden)
elif cdisn_layer_index == 2:
curr_d = self.conv2_update_layers[j](hidden)
elif cdisn_layer_index == 3:
curr_d = self.avgPool1_update_layers[j](hidden)
elif cdisn_layer_index == 4:
curr_d = self.linear1_update_layers[j](hidden)
else:
raise ValueError(
"Unrecognized layer index "
+ str(cdisn_layer_index)
+ " for ShallowNet."
)
d_i = d_i + curr_d
return d_i
def get_updated_hidden_layer_activations(
self, all_hiddens, cdisn_layer_index, frozen_cdisn_nets, curr_pad_tuple=None
):
for i in range(len(frozen_cdisn_nets) + 1):
if i == self.SELF_REFERENCE_INDEX:
all_hiddens[i] = self.get_phi_output_for_current_layer(
all_hiddens[i], cdisn_layer_index
)
else:
all_hiddens[i] = frozen_cdisn_nets[
i - 1
].embedder.get_phi_output_for_current_layer(
all_hiddens[i], cdisn_layer_index
)
all_updates = []
for i in range(len(frozen_cdisn_nets) + 1):
if i == self.SELF_REFERENCE_INDEX:
all_updates.append(
self.get_updates_for_current_hidden_layer(
cdisn_layer_index,
all_hiddens[:i] + all_hiddens[i + 1 :],
curr_pad_tuple=curr_pad_tuple,
)
)
else:
all_updates.append(
frozen_cdisn_nets[
i - 1
].embedder.get_updates_for_current_hidden_layer(
cdisn_layer_index,
all_hiddens[:i] + all_hiddens[i + 1 :],
curr_pad_tuple=curr_pad_tuple,
)
)
for i, (phi_output, psi_outputs) in enumerate(zip(all_hiddens, all_updates)):
all_hiddens[i] = phi_output + psi_outputs
return all_hiddens
    def forward(self, x, frozen_cdisn_nets):
        # input assumed to be of shape (batch_num, window_len, channels)
        curr_batch_size = x.size()[0]
        for i in range(len(self.cdisn_layer_output_shapes)):
            self.cdisn_layer_output_shapes[i][self.BATCH_DIM] = curr_batch_size
            for j in range(len(frozen_cdisn_nets)):
                frozen_cdisn_nets[j].embedder.cdisn_layer_output_shapes[i][
                    self.BATCH_DIM
                ] = curr_batch_size
        x = torch.unsqueeze(x, 1)  # (batch_size, 1, window_len, channels)
        # layer 0: temporal conv update -> (batch_size, 40, 576, channels) for 600-sample windows
        all_hiddens = self.get_updated_hidden_layer_activations(
            [x for _ in range(len(frozen_cdisn_nets) + 1)],
            0,
            frozen_cdisn_nets,
            curr_pad_tuple=self.conv1_update_pad_tuple,
        )
        # layer 1: batchnorm update (shape unchanged)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            1,
            frozen_cdisn_nets,
            curr_pad_tuple=self.batchnorm1_update_pad_tuple,
        )
        # layer 2: spatial conv update -> (batch_size, 40, 576, 1)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            2,
            frozen_cdisn_nets,
            curr_pad_tuple=self.conv2_update_pad_tuple,
        )
        for i in range(len(frozen_cdisn_nets) + 1):
            all_hiddens[i] = torch.square(all_hiddens[i])  # ShallowNet's squaring nonlinearity
        # layer 3: mean-pool update -> (batch_size, 40, 34, 1)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            3,
            frozen_cdisn_nets,
            curr_pad_tuple=self.avgPool1_update_pad_tuple,
        )
        for i in range(len(frozen_cdisn_nets) + 1):
            all_hiddens[i] = F.relu(all_hiddens[i])  # ReLU in place of ShallowNet's canonical log activation
        for i in range(len(frozen_cdisn_nets) + 1):
            all_hiddens[i] = F.dropout(
                all_hiddens[i], p=self.dropout_rate, training=self.training
            )  # pass training flag so dropout is disabled in eval mode
            all_hiddens[i] = all_hiddens[i].view(curr_batch_size, -1)  # flatten -> (batch_size, 1360)
        # layer 4: fully-connected update -> (batch_size, embed_dim)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens, 4, frozen_cdisn_nets, curr_pad_tuple=None
        )
        return all_hiddens[self.SELF_REFERENCE_INDEX], all_hiddens
class CDISNCompatibleShallowNetWithCorrelatedMatching(CDISNCompatibleShallowNet):
"""
x: (batch_size, 1, 600, 21)
x: (batch_size, 40, 576, 21)
x: (batch_size, 40, 576, 1)
x: (batch_size, 40, 34, 1)
x: (batch_size, 1360)
x: (batch_size, 100)
"""
def __init__(
self, channels=21, dropout_rate=0.5, embed_dim=100, num_tandem_nets=2, device="cpu"
):
super().__init__(
channels=channels,
dropout_rate=dropout_rate,
embed_dim=embed_dim,
num_tandem_nets=num_tandem_nets,
device=device
)
pass
    def forward(self, x, adjacent_cdisn_nets):  # overrides CDISNCompatibleShallowNet.forward with per-net inputs
        # input assumed to be a list: [(batch_num, window_len, channels) for _ in range(len(adjacent_cdisn_nets) + 1)]
        curr_batch_size = x[0].size()[0]
        for i in range(len(self.cdisn_layer_output_shapes)):
            self.cdisn_layer_output_shapes[i][self.BATCH_DIM] = curr_batch_size
            for j in range(len(adjacent_cdisn_nets)):
                adjacent_cdisn_nets[j].embedder.cdisn_layer_output_shapes[i][
                    self.BATCH_DIM
                ] = curr_batch_size
        x = [torch.unsqueeze(x_i, 1) for x_i in x]  # each: (batch_size, 1, window_len, channels)
        # layer 0: temporal conv update; each net receives its own (distinct) input
        all_hiddens = self.get_updated_hidden_layer_activations(
            x,
            0,
            adjacent_cdisn_nets,
            curr_pad_tuple=self.conv1_update_pad_tuple,
        )
        # layer 1: batchnorm update (shape unchanged)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            1,
            adjacent_cdisn_nets,
            curr_pad_tuple=self.batchnorm1_update_pad_tuple,
        )
        # layer 2: spatial conv update -> (batch_size, 40, 576, 1)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            2,
            adjacent_cdisn_nets,
            curr_pad_tuple=self.conv2_update_pad_tuple,
        )
        for i in range(len(adjacent_cdisn_nets) + 1):
            all_hiddens[i] = torch.square(all_hiddens[i])  # ShallowNet's squaring nonlinearity
        # layer 3: mean-pool update -> (batch_size, 40, 34, 1)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens,
            3,
            adjacent_cdisn_nets,
            curr_pad_tuple=self.avgPool1_update_pad_tuple,
        )
        for i in range(len(adjacent_cdisn_nets) + 1):
            all_hiddens[i] = F.relu(all_hiddens[i])  # ReLU in place of ShallowNet's canonical log activation
        for i in range(len(adjacent_cdisn_nets) + 1):
            all_hiddens[i] = F.dropout(
                all_hiddens[i], p=self.dropout_rate, training=self.training
            )  # pass training flag so dropout is disabled in eval mode
            all_hiddens[i] = all_hiddens[i].view(curr_batch_size, -1)  # flatten -> (batch_size, 1360)
        # layer 4: fully-connected update -> (batch_size, embed_dim)
        all_hiddens = self.get_updated_hidden_layer_activations(
            all_hiddens, 4, adjacent_cdisn_nets, curr_pad_tuple=None
        )
        return all_hiddens[self.SELF_REFERENCE_INDEX], all_hiddens
# rp net for Relative Positioning Task
class CDISNCompatibleRPNetDecoder(nn.Module):
def __init__(self, embed_dim=100):
super(CDISNCompatibleRPNetDecoder, self).__init__()
self.linear = nn.Linear(embed_dim, 2)
self.embed_dim = embed_dim
def forward(self, x1, x2):
        # the elementwise |x1 - x2| emulates the g_RP aggregation function in Relative Positioning (RP)
out = self.linear(torch.abs(x1 - x2))
return out
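The RP head is just a linear readout of the elementwise |x1 - x2|. Mirrored in plain Python with toy 3-dim embeddings (the helper `rp_logits` and its weights are illustrative, not part of the module):

```python
def rp_logits(x1, x2, weight, bias):
    """Linear layer over |x1 - x2|, mirroring CDISNCompatibleRPNetDecoder.forward."""
    diff = [abs(a - b) for a, b in zip(x1, x2)]
    return [sum(w * d for w, d in zip(row, diff)) + b for row, b in zip(weight, bias)]

# Identical embeddings yield a zero difference vector, so the logits collapse to the biases.
assert rp_logits([0.5, -1.0, 2.0], [0.5, -1.0, 2.0], [[1, 1, 1], [2, 2, 2]], [0.1, -0.1]) == [0.1, -0.1]
```

The absolute difference makes the head symmetric in its two inputs, which is what lets one linear layer score "same temporal context vs. not" regardless of window order.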


# ts net for the Temporal Shuffling (TS) task
class CDISNCompatibleTSNetDecoder(nn.Module):
    def __init__(self, embed_dim=100):
        super(CDISNCompatibleTSNetDecoder, self).__init__()
        self.linear = nn.Linear(2 * embed_dim, 2)
        self.embed_dim = embed_dim

    def forward(self, x1, x2, x3):
        # the concatenated elementwise torch.abs() differences emulate the
        # pair-aggregation function of the TS task (analogous to grp in RP)
        out = self.linear(torch.cat((torch.abs(x1 - x2), torch.abs(x2 - x3)), dim=-1))
        return out


class CDISNCompatibleLinearDecoder(nn.Module):
    def __init__(self, num_classes, embed_dim=100):
        super(CDISNCompatibleLinearDecoder, self).__init__()
        self.linear = nn.Linear(embed_dim, num_classes)
        self.embed_dim = embed_dim

    def forward(self, x):
        out = self.linear(x)
        return out


class FullCDISNTaskModel(nn.Module):
    def __init__(
        self,
        requested_task_id,
        num_tandem_nets,
        channels=22,  # 21,
        num_classes=None,
        dropout_rate=0.5,
        embed_dim=100,
        device="cpu",
        embedder_type="ShallowNet",
    ):
        super(FullCDISNTaskModel, self).__init__()
        self.BATCH_DIM = 0
        self.supported_task_ids = [
            "RP",
            "TS",
            "BehavioralTST",
            "BehavioralFluoxetine",
            "BehavioralTUAB",
        ]
        assert requested_task_id in self.supported_task_ids
        self.requested_task_id = requested_task_id
        self.channels = channels
        self.unfrozen_network_id = None
        self.dropout_rate = dropout_rate
        self.embed_dim = embed_dim
        self.num_classes = num_classes

        if embedder_type == "ShallowNet":
            # used in taskwise and NonAnchored layerwise training
            self.embedder = CDISNCompatibleShallowNet(
                channels=channels,
                dropout_rate=dropout_rate,
                embed_dim=embed_dim,
                num_tandem_nets=num_tandem_nets,
                device=device,
            )
        elif embedder_type == "CorrelatedShallowNet":
            # used in Anchored layerwise training
            self.embedder = CDISNCompatibleShallowNetWithCorrelatedMatching(
                channels=channels,
                dropout_rate=dropout_rate,
                embed_dim=embed_dim,
                num_tandem_nets=num_tandem_nets,
                device=device,
            )
        elif embedder_type == "StagerNet":  # deprecated (used in pre-TUAB experiments)
            self.embedder = CDISNCompatibleStagerNet(
                channels,
                dropout_rate=dropout_rate,
                embed_dim=embed_dim,
                num_tandem_nets=num_tandem_nets,
                device=device,
            )
        else:
            raise NotImplementedError(
                "FullCDISNTaskModel does not currently support the following embedder type: "
                + str(embedder_type)
            )

        if "Behavioral" in requested_task_id:
            self.decoder = CDISNCompatibleLinearDecoder(num_classes, embed_dim=embed_dim)
        elif requested_task_id == "RP":
            self.decoder = CDISNCompatibleRPNetDecoder(embed_dim=embed_dim)
        elif requested_task_id == "TS":
            self.decoder = CDISNCompatibleTSNetDecoder(embed_dim=embed_dim)
        else:
            raise ValueError("Unrecognized task_id == " + str(requested_task_id))

        # putting it all together
        self.forward_functions_by_id = {
            "Behavioral": self.behavioral_forward,
            "BehavioralTUAB": self.behavioral_forward,
            "RP": self.rp_forward,
            "TS": self.ts_forward,
            "AnchoredBTUABRPTS": self.anchoredBRPTS_forward,
            "NonAnchoredBTUABRPTS": self.nonanchoredBRPTS_forward,
        }

    def load_pretrained_upstream_params(self, pretrained_upstream_cdisn_file_path):
        print("load_pretrained_upstream_params: ATTEMPTING TO LOAD WARM-START PARAMS")
        with open(pretrained_upstream_cdisn_file_path, "rb") as infile:
            upstream_cdisn_model_ensemble = pkl.load(infile)
        assert len(upstream_cdisn_model_ensemble) == 1
        self.embedder.load_state_dict(
            upstream_cdisn_model_ensemble[0].embedder.state_dict()
        )  # https://discuss.pytorch.org/t/copying-weights-from-one-net-to-another/1492
        print("load_pretrained_upstream_params: SUCCESSFULLY LOADED WARM-START PARAMS")

    def forward(self, forward_func_id, forward_inputs):
        return self.forward_functions_by_id[forward_func_id](*forward_inputs)

    def behavioral_forward(self, x, frozen_cdisn_nets):
        _, x_embeddings = self.embedder(x, frozen_cdisn_nets)
        out = self.decoder(x_embeddings[self.embedder.SELF_REFERENCE_INDEX])
        return out, [x_embeddings]

    def rp_forward(self, x1, x2, frozen_cdisn_nets):
        _, x1_embeddings = self.embedder(x1, frozen_cdisn_nets)
        _, x2_embeddings = self.embedder(x2, frozen_cdisn_nets)
        out = self.decoder(
            x1_embeddings[self.embedder.SELF_REFERENCE_INDEX],
            x2_embeddings[self.embedder.SELF_REFERENCE_INDEX],
        )
        return out, [x1_embeddings, x2_embeddings]

    def ts_forward(self, x1, x2, x3, frozen_cdisn_nets):
        _, x1_embeddings = self.embedder(x1, frozen_cdisn_nets)
        _, x2_embeddings = self.embedder(x2, frozen_cdisn_nets)
        _, x3_embeddings = self.embedder(x3, frozen_cdisn_nets)
        out = self.decoder(
            x1_embeddings[self.embedder.SELF_REFERENCE_INDEX],
            x2_embeddings[self.embedder.SELF_REFERENCE_INDEX],
            x3_embeddings[self.embedder.SELF_REFERENCE_INDEX],
        )
        return out, [x1_embeddings, x2_embeddings, x3_embeddings]

    def anchoredBRPTS_forward(self, x, requested_tasks, adjacent_cdisn_task_models):
        """
        Inputs:
        - x: list containing an anchor window and one or more of an rp other, ts anchor2, and/or ts other window
        - requested_tasks: a list of task_ids ordered as [self.requested_task_id]+[other_id1, other_id2,...]
        - adjacent_cdisn_task_models: a list of cdisn task models (ordered according to requested_tasks[1:])
        """
        # perform sanity checks
        if len(requested_tasks) == 2:
            if "BehavioralTUAB" in requested_tasks:
                assert requested_tasks[0] == "BehavioralTUAB"
            else:
                assert requested_tasks[0] == "RP"
        elif len(requested_tasks) == 3:
            assert requested_tasks == ["BehavioralTUAB", "RP", "TS"]

        # perform forward computation
        if requested_tasks == ["BehavioralTUAB"]:
            pred_label, embeds = self.behavioral_forward([x[0]], adjacent_cdisn_task_models)
            return [pred_label, None, None], [[embeds], None, None]
        elif requested_tasks == ["RP"]:
            pred_label, embeds = self.rp_forward([x[0]], [x[1]], adjacent_cdisn_task_models)
            return [None, pred_label, None], [None, [embeds], None]
        elif requested_tasks == ["TS"]:
            pred_label, embeds = self.ts_forward([x[0]], [x[1]], [x[2]], adjacent_cdisn_task_models)
            return [None, None, pred_label], [None, None, [embeds]]
        elif requested_tasks == sorted(["BehavioralTUAB", "RP"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            _, embeds = self.embedder(x, adjacent_cdisn_task_models)
            out_behavioral = self.decoder(embeds[self.embedder.SELF_REFERENCE_INDEX])
            # adjacent_cdisn_task_models is ordered per requested_tasks[1:],
            # so the RP model is the first (and only) adjacent model here
            out_rp = adjacent_cdisn_task_models[0].decoder(
                embeds[self.embedder.SELF_REFERENCE_INDEX],
                embeds[1],
            )
            return [out_behavioral, out_rp, None], [[embeds], [embeds], None]
        elif requested_tasks == sorted(["BehavioralTUAB", "TS"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            _, embeds1 = self.embedder([x[0], x[1]], adjacent_cdisn_task_models)
            _, embeds2 = self.embedder([x[0], x[2]], adjacent_cdisn_task_models)
            behavioral_embeds = (
                embeds1[self.embedder.SELF_REFERENCE_INDEX]
                + embeds2[self.embedder.SELF_REFERENCE_INDEX]
            ) / 2.0
            out_behavioral = self.decoder(behavioral_embeds)
            # the TS model is the only adjacent model; behavioral_embeds is
            # already the averaged anchor embedding tensor, so pass it directly
            out_ts = adjacent_cdisn_task_models[0].decoder(
                behavioral_embeds,
                embeds1[1],
                embeds2[1],
            )
            return [out_behavioral, None, out_ts], [[behavioral_embeds], None, [embeds1, embeds2]]
        elif requested_tasks == sorted(["RP", "TS"]):
            assert self.requested_task_id == "RP"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            _, embeds1 = self.embedder([x[0], x[0]], adjacent_cdisn_task_models)
            _, embeds2 = self.embedder([x[1], x[2]], adjacent_cdisn_task_models)
            anchor_embed = (embeds1[self.embedder.SELF_REFERENCE_INDEX] + embeds1[1]) / 2.0
            out_rp = self.decoder(anchor_embed, embeds2[self.embedder.SELF_REFERENCE_INDEX])
            # the TS model is the only adjacent model here
            out_ts = adjacent_cdisn_task_models[0].decoder(
                anchor_embed,
                embeds2[self.embedder.SELF_REFERENCE_INDEX],
                embeds2[1],
            )
            return [None, out_rp, out_ts], [None, [anchor_embed, embeds2], [anchor_embed, embeds2]]
        elif requested_tasks == sorted(["BehavioralTUAB", "RP", "TS"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            _, embeds = self.embedder(x, adjacent_cdisn_task_models)
            out_behavioral = self.decoder(embeds[self.embedder.SELF_REFERENCE_INDEX])
            # adjacent models are ordered per requested_tasks[1:]: RP first, TS second
            out_rp = adjacent_cdisn_task_models[0].decoder(
                embeds[self.embedder.SELF_REFERENCE_INDEX],
                embeds[1],
            )
            out_ts = adjacent_cdisn_task_models[1].decoder(
                embeds[self.embedder.SELF_REFERENCE_INDEX],
                embeds[1],
                embeds[2],
            )
            return [out_behavioral, out_rp, out_ts], [[embeds], [embeds], [embeds]]
        else:
            raise ValueError(
                "anchoredBRPTS_forward: requested_tasks is not sorted properly, leading to unhandled case"
            )

    def nonanchoredBRPTS_forward(self, x, requested_tasks, adjacent_cdisn_task_models):
        # perform sanity checks
        if len(requested_tasks) == 2:
            if "BehavioralTUAB" in requested_tasks:
                assert requested_tasks[0] == "BehavioralTUAB"
            else:
                assert requested_tasks[0] == "RP"
        elif len(requested_tasks) == 3:
            assert requested_tasks == ["BehavioralTUAB", "RP", "TS"]

        # perform forward computation
        if requested_tasks == ["BehavioralTUAB"]:
            pred_label, embeds = self.behavioral_forward(x[0], adjacent_cdisn_task_models)
            return [pred_label, None, None], [[embeds], None, None]
        elif requested_tasks == ["RP"]:
            pred_label, embeds = self.rp_forward(x[0], x[1], adjacent_cdisn_task_models)
            return [None, pred_label, None], [None, [embeds], None]
        elif requested_tasks == ["TS"]:
            pred_label, embeds = self.ts_forward(x[0], x[1], x[2], adjacent_cdisn_task_models)
            return [None, None, pred_label], [None, None, [embeds]]
        elif requested_tasks == sorted(["BehavioralTUAB", "RP"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            behavioral_label, behavioral_embeds = self.behavioral_forward(x[0], adjacent_cdisn_task_models)
            rp_label, rp_embeds = self.rp_forward(x[1], x[2], adjacent_cdisn_task_models)
            return [behavioral_label, rp_label, None], [[behavioral_embeds], [rp_embeds], None]
        elif requested_tasks == sorted(["BehavioralTUAB", "TS"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            behavioral_label, behavioral_embeds = self.behavioral_forward(x[0], adjacent_cdisn_task_models)
            ts_label, ts_embeds = self.ts_forward(x[1], x[2], x[3], adjacent_cdisn_task_models)
            return [behavioral_label, None, ts_label], [[behavioral_embeds], None, [ts_embeds]]
        elif requested_tasks == sorted(["RP", "TS"]):
            assert self.requested_task_id == "RP"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            rp_label, rp_embeds = self.rp_forward(x[0], x[1], adjacent_cdisn_task_models)
            ts_label, ts_embeds = self.ts_forward(x[2], x[3], x[4], adjacent_cdisn_task_models)
            return [None, rp_label, ts_label], [None, [rp_embeds], [ts_embeds]]
        elif requested_tasks == sorted(["BehavioralTUAB", "RP", "TS"]):
            assert self.requested_task_id == "BehavioralTUAB"
            assert self.embedder.SELF_REFERENCE_INDEX == 0
            behavioral_label, behavioral_embeds = self.behavioral_forward(x[0], adjacent_cdisn_task_models)
            rp_label, rp_embeds = self.rp_forward(x[1], x[2], adjacent_cdisn_task_models)
            ts_label, ts_embeds = self.ts_forward(x[3], x[4], x[5], adjacent_cdisn_task_models)
            return [behavioral_label, rp_label, ts_label], [[behavioral_embeds], [rp_embeds], [ts_embeds]]
        else:
            raise ValueError(
                "nonanchoredBRPTS_forward: requested_tasks is not sorted properly, leading to unhandled case"
            )
# ###########################################################################
# Copyright 2021 Zachary C. Brown
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ###########################################################################
| 44.876018 | 212 | 0.597019 | 6,179 | 49,588 | 4.532772 | 0.068134 | 0.071765 | 0.044773 | 0.026778 | 0.833512 | 0.806769 | 0.772958 | 0.752035 | 0.73147 | 0.694552 | 0 | 0.034949 | 0.292006 | 49,588 | 1,104 | 213 | 44.916667 | 0.762818 | 0.251069 | 0 | 0.64365 | 0 | 0 | 0.051456 | 0.010834 | 0 | 0 | 0 | 0 | 0.032059 | 1 | 0.033292 | false | 0.011097 | 0.006165 | 0.001233 | 0.086313 | 0.018496 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53e17d24e770bef709f599d76da5ed028980b150 | 31 | py | Python | version_writer.py | Nepmia/N4-Framework | 84d98f3fe05ca02f938332e5970bca5482ef8ce7 | [
"MIT"
] | null | null | null | version_writer.py | Nepmia/N4-Framework | 84d98f3fe05ca02f938332e5970bca5482ef8ce7 | [
"MIT"
] | null | null | null | version_writer.py | Nepmia/N4-Framework | 84d98f3fe05ca02f938332e5970bca5482ef8ce7 | [
"MIT"
] | null | null | null | from helpers import *
# WIP
| 5.166667 | 21 | 0.645161 | 4 | 31 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.290323 | 31 | 5 | 22 | 6.2 | 0.909091 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53f2451ff8ef5c4a26d304b82291bf4d63dbb055 | 18,249 | py | Python | util/data/gen/sechost.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | util/data/gen/sechost.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | util/data/gen/sechost.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | symbols = []
exports = [{'type': 'function', 'name': 'AuditComputeEffectivePolicyBySid', 'address': '0x7ffb3c57bd10'}, {'type': 'function', 'name': 'AuditEnumerateCategories', 'address': '0x7ffb3c5afdc0'}, {'type': 'function', 'name': 'AuditEnumeratePerUserPolicy', 'address': '0x7ffb3c5aff10'}, {'type': 'function', 'name': 'AuditEnumerateSubCategories', 'address': '0x7ffb3c5affa0'}, {'type': 'function', 'name': 'AuditFree', 'address': '0x7ffb3c56e990'}, {'type': 'function', 'name': 'AuditLookupCategoryNameW', 'address': '0x7ffb3c5b0110'}, {'type': 'function', 'name': 'AuditLookupSubCategoryNameW', 'address': '0x7ffb3c5b0290'}, {'type': 'function', 'name': 'AuditQueryGlobalSaclW', 'address': '0x7ffb3c5b0410'}, {'type': 'function', 'name': 'AuditQueryPerUserPolicy', 'address': '0x7ffb3c57a1f0'}, {'type': 'function', 'name': 'AuditQuerySecurity', 'address': '0x7ffb3c5b0480'}, {'type': 'function', 'name': 'AuditQuerySystemPolicy', 'address': '0x7ffb3c57a340'}, {'type': 'function', 'name': 'AuditSetGlobalSaclW', 'address': '0x7ffb3c5b0550'}, {'type': 'function', 'name': 'AuditSetPerUserPolicy', 'address': '0x7ffb3c5b05c0'}, {'type': 'function', 'name': 'AuditSetSecurity', 'address': '0x7ffb3c5b0670'}, {'type': 'function', 'name': 'AuditSetSystemPolicy', 'address': '0x7ffb3c5b07d0'}, {'type': 'function', 'name': 'BuildSecurityDescriptorForSharingAccess', 'address': '0x7ffb3c579610'}, {'type': 'function', 'name': 'BuildSecurityDescriptorForSharingAccessEx', 'address': '0x7ffb3c578f80'}, {'type': 'function', 'name': 'CapabilityCheck', 'address': '0x7ffb3c56d690'}, {'type': 'function', 'name': 'CapabilityCheckForSingleSessionSku', 'address': '0x7ffb3c5a5290'}, {'type': 'function', 'name': 'ChangeServiceConfig2A', 'address': '0x7ffb3c5a5b60'}, {'type': 'function', 'name': 'ChangeServiceConfig2W', 'address': '0x7ffb3c57c6e0'}, {'type': 'function', 'name': 'ChangeServiceConfigA', 'address': '0x7ffb3c5a5d10'}, {'type': 'function', 'name': 'ChangeServiceConfigW', 'address': 
'0x7ffb3c56e5b0'}, {'type': 'function', 'name': 'CloseServiceHandle', 'address': '0x7ffb3c568460'}, {'type': 'function', 'name': 'CloseTrace', 'address': '0x7ffb3c56ba40'}, {'type': 'function', 'name': 'ControlService', 'address': '0x7ffb3c56e8c0'}, {'type': 'function', 'name': 'ControlServiceExA', 'address': '0x7ffb3c5a5ff0'}, {'type': 'function', 'name': 'ControlServiceExW', 'address': '0x7ffb3c56e2f0'}, {'type': 'function', 'name': 'ControlTraceA', 'address': '0x7ffb3c5ab630'}, {'type': 'function', 'name': 'ControlTraceW', 'address': '0x7ffb3c569280'}, {'type': 'function', 'name': 'ConvertSDToStringSDRootDomainW', 'address': '0x7ffb3c58ca00'}, {'type': 'function', 'name': 'ConvertSecurityDescriptorToStringSecurityDescriptorW', 'address': '0x7ffb3c570ee0'}, {'type': 'function', 'name': 'ConvertSidToStringSidW', 'address': '0x7ffb3c56ecc0'}, {'type': 'function', 'name': 'ConvertStringSDToSDDomainA', 'address': '0x7ffb3c58ca80'}, {'type': 'function', 'name': 'ConvertStringSDToSDDomainW', 'address': '0x7ffb3c58cbc0'}, {'type': 'function', 'name': 'ConvertStringSDToSDRootDomainW', 'address': '0x7ffb3c58cc90'}, {'type': 'function', 'name': 'ConvertStringSecurityDescriptorToSecurityDescriptorW', 'address': '0x7ffb3c570f80'}, {'type': 'function', 'name': 'ConvertStringSidToSidW', 'address': '0x7ffb3c5719b0'}, {'type': 'function', 'name': 'CreateIsolatedProcess', 'address': '0x7ffb3c5c1300'}, {'type': 'function', 'name': 'CreateIsolationContainer', 'address': '0x7ffb3c5c1390'}, {'type': 'function', 'name': 'CreateServiceA', 'address': '0x7ffb3c5a6150'}, {'type': 'function', 'name': 'CreateServiceEx', 'address': '0x7ffb3c5a68b0'}, {'type': 'function', 'name': 'CreateServiceW', 'address': '0x7ffb3c5a6db0'}, {'type': 'function', 'name': 'CredBackupCredentials', 'address': '0x7ffb3c5b0ad0'}, {'type': 'function', 'name': 'CredDeleteA', 'address': '0x7ffb3c5b0c70'}, {'type': 'function', 'name': 'CredDeleteW', 'address': '0x7ffb3c5b0d60'}, {'type': 'function', 'name': 
'CredEncryptAndMarshalBinaryBlob', 'address': '0x7ffb3c5b2110'}, {'type': 'function', 'name': 'CredEnumerateA', 'address': '0x7ffb3c5b0e50'}, {'type': 'function', 'name': 'CredEnumerateW', 'address': '0x7ffb3c56ee00'}, {'type': 'function', 'name': 'CredFindBestCredentialA', 'address': '0x7ffb3c5b0fb0'}, {'type': 'function', 'name': 'CredFindBestCredentialW', 'address': '0x7ffb3c5b10f0'}, {'type': 'function', 'name': 'CredFree', 'address': '0x7ffb3c56e990'}, {'type': 'function', 'name': 'CredGetSessionTypes', 'address': '0x7ffb3c5b1230'}, {'type': 'function', 'name': 'CredGetTargetInfoA', 'address': '0x7ffb3c5b12c0'}, {'type': 'function', 'name': 'CredGetTargetInfoW', 'address': '0x7ffb3c5b1400'}, {'type': 'function', 'name': 'CredIsMarshaledCredentialW', 'address': '0x7ffb3c5b2140'}, {'type': 'function', 'name': 'CredIsProtectedA', 'address': '0x7ffb3c5b2180'}, {'type': 'function', 'name': 'CredIsProtectedW', 'address': '0x7ffb3c579840'}, {'type': 'function', 'name': 'CredMarshalCredentialA', 'address': '0x7ffb3c5b2220'}, {'type': 'function', 'name': 'CredMarshalCredentialW', 'address': '0x7ffb3c57add0'}, {'type': 'function', 'name': 'CredParseUserNameWithType', 'address': '0x7ffb3c579730'}, {'type': 'function', 'name': 'CredProfileLoaded', 'address': '0x7ffb3c5b1540'}, {'type': 'function', 'name': 'CredProfileLoadedEx', 'address': '0x7ffb3c56ec60'}, {'type': 'function', 'name': 'CredProfileUnloaded', 'address': '0x7ffb3c57a020'}, {'type': 'function', 'name': 'CredProtectA', 'address': '0x7ffb3c5b2290'}, {'type': 'function', 'name': 'CredProtectEx', 'address': '0x7ffb3c57ab40'}, {'type': 'function', 'name': 'CredProtectW', 'address': '0x7ffb3c57ab20'}, {'type': 'function', 'name': 'CredReadA', 'address': '0x7ffb3c5b15c0'}, {'type': 'function', 'name': 'CredReadByTokenHandle', 'address': '0x7ffb3c5b1700'}, {'type': 'function', 'name': 'CredReadDomainCredentialsA', 'address': '0x7ffb3c5b1850'}, {'type': 'function', 'name': 'CredReadDomainCredentialsW', 'address': 
'0x7ffb3c5b19b0'}, {'type': 'function', 'name': 'CredReadW', 'address': '0x7ffb3c5b1b20'}, {'type': 'function', 'name': 'CredRestoreCredentials', 'address': '0x7ffb3c5b1c60'}, {'type': 'function', 'name': 'CredUnmarshalCredentialA', 'address': '0x7ffb3c5b2410'}, {'type': 'function', 'name': 'CredUnmarshalCredentialW', 'address': '0x7ffb3c579920'}, {'type': 'function', 'name': 'CredUnprotectA', 'address': '0x7ffb3c5b24c0'}, {'type': 'function', 'name': 'CredUnprotectEx', 'address': '0x7ffb3c57b490'}, {'type': 'function', 'name': 'CredUnprotectW', 'address': '0x7ffb3c5b2660'}, {'type': 'function', 'name': 'CredWriteA', 'address': '0x7ffb3c5b1de0'}, {'type': 'function', 'name': 'CredWriteDomainCredentialsA', 'address': '0x7ffb3c5b1ec0'}, {'type': 'function', 'name': 'CredWriteDomainCredentialsW', 'address': '0x7ffb3c5b1ff0'}, {'type': 'function', 'name': 'CredWriteW', 'address': '0x7ffb3c56e9b0'}, {'type': 'function', 'name': 'CredpConvertCredential', 'address': '0x7ffb3c56ea60'}, {'type': 'function', 'name': 'CredpConvertOneCredentialSize', 'address': '0x7ffb3c570950'}, {'type': 'function', 'name': 'CredpConvertTargetInfo', 'address': '0x7ffb3c5b2680'}, {'type': 'function', 'name': 'CredpDecodeCredential', 'address': '0x7ffb3c5b28e0'}, {'type': 'function', 'name': 'CredpEncodeCredential', 'address': '0x7ffb3c5b2930'}, {'type': 'function', 'name': 'CredpEncodeSecret', 'address': '0x7ffb3c5b29b0'}, {'type': 'function', 'name': 'DeleteIsolationContainer', 'address': '0x7ffb3c5c13f0'}, {'type': 'function', 'name': 'DeleteService', 'address': '0x7ffb3c5a7270'}, {'type': 'function', 'name': 'EnableTraceEx2', 'address': '0x7ffb3c569910'}, {'type': 'function', 'name': 'EnumDependentServicesW', 'address': '0x7ffb3c57a0c0'}, {'type': 'function', 'name': 'EnumServicesStatusExW', 'address': '0x7ffb3c567e10'}, {'type': 'function', 'name': 'EnumerateIdentityProviders', 'address': '0x7ffb3c56ca50'}, {'type': 'function', 'name': 'EnumerateTraceGuidsEx', 'address': '0x7ffb3c56e7a0'}, 
{'type': 'function', 'name': 'EtwQueryRealtimeConsumer', 'address': '0x7ffb3c5aad80'}, {'type': 'function', 'name': 'EventAccessControl', 'address': '0x7ffb3c5abca0'}, {'type': 'function', 'name': 'EventAccessQuery', 'address': '0x7ffb3c5abcf0'}, {'type': 'function', 'name': 'EventAccessRemove', 'address': '0x7ffb3c5abf40'}, {'type': 'function', 'name': 'FreeContainer', 'address': '0x7ffb3c5b34e0'}, {'type': 'function', 'name': 'FreeTransientObjectSecurityDescriptor', 'address': '0x7ffb3c578240'}, {'type': 'function', 'name': 'GetDefaultIdentityProvider', 'address': '0x7ffb3c57bb90'}, {'type': 'function', 'name': 'GetEmbeddedContainerIsolationPolicy', 'address': '0x7ffb3c5b3530'}, {'type': 'function', 'name': 'GetEmbeddedImageMitigationPolicy', 'address': '0x7ffb3c56d810'}, {'type': 'function', 'name': 'GetIdentityProviderInfoByGUID', 'address': '0x7ffb3c57ba00'}, {'type': 'function', 'name': 'GetIdentityProviderInfoByName', 'address': '0x7ffb3c58a030'}, {'type': 'function', 'name': 'GetServiceDirectory', 'address': '0x7ffb3c56e1b0'}, {'type': 'function', 'name': 'GetServiceDisplayNameW', 'address': '0x7ffb3c579be0'}, {'type': 'function', 'name': 'GetServiceKeyNameW', 'address': '0x7ffb3c579ca0'}, {'type': 'function', 'name': 'GetServiceProcessToken', 'address': '0x7ffb3c5a72f0'}, {'type': 'function', 'name': 'GetServiceRegistryStateKey', 'address': '0x7ffb3c56e730'}, {'type': 'function', 'name': 'I_QueryTagInformation', 'address': '0x7ffb3c5674a0'}, {'type': 'function', 'name': 'I_RegisterSvchostNotificationCallback', 'address': '0x7ffb3c56e8a0'}, {'type': 'function', 'name': 'I_ScBroadcastServiceControlMessage', 'address': '0x7ffb3c57bad0'}, {'type': 'function', 'name': 'I_ScIsSecurityProcess', 'address': '0x7ffb3c57c830'}, {'type': 'function', 'name': 'I_ScPnPGetServiceName', 'address': '0x7ffb3c56ab80'}, {'type': 'function', 'name': 'I_ScQueryServiceConfig', 'address': '0x7ffb3c567380'}, {'type': 'function', 'name': 'I_ScRegisterDeviceNotification', 'address': 
'0x7ffb3c56ba60'}, {'type': 'function', 'name': 'I_ScRegisterPreshutdownRestart', 'address': '0x7ffb3c5a7380'}, {'type': 'function', 'name': 'I_ScReparseServiceDatabase', 'address': '0x7ffb3c5a7450'}, {'type': 'function', 'name': 'I_ScRpcBindA', 'address': '0x7ffb3c5a8f50'}, {'type': 'function', 'name': 'I_ScRpcBindW', 'address': '0x7ffb3c57bca0'}, {'type': 'function', 'name': 'I_ScSendPnPMessage', 'address': '0x7ffb3c567710'}, {'type': 'function', 'name': 'I_ScSendTSMessage', 'address': '0x7ffb3c57bad0'}, {'type': 'function', 'name': 'I_ScSetServiceBitsA', 'address': '0x7ffb3c5a59b0'}, {'type': 'function', 'name': 'I_ScSetServiceBitsW', 'address': '0x7ffb3c57c770'}, {'type': 'function', 'name': 'I_ScUnregisterDeviceNotification', 'address': '0x7ffb3c56e500'}, {'type': 'function', 'name': 'I_ScValidatePnPService', 'address': '0x7ffb3c56abd0'}, {'type': 'function', 'name': 'LocalGetConditionForString', 'address': '0x7ffb3c570230'}, {'type': 'function', 'name': 'LocalGetReferencedTokenTypesForCondition', 'address': '0x7ffb3c58e530'}, {'type': 'function', 'name': 'LocalGetStringForCondition', 'address': '0x7ffb3c58f3b0'}, {'type': 'function', 'name': 'LocalRpcBindingCreateWithSecurity', 'address': '0x7ffb3c5a54a0'}, {'type': 'function', 'name': 'LocalRpcBindingSetAuthInfoEx', 'address': '0x7ffb3c5a5650'}, {'type': 'function', 'name': 'LookupAccountNameLocalA', 'address': '0x7ffb3c58a130'}, {'type': 'function', 'name': 'LookupAccountNameLocalW', 'address': '0x7ffb3c574de0'}, {'type': 'function', 'name': 'LookupAccountSidLocalA', 'address': '0x7ffb3c58a2b0'}, {'type': 'function', 'name': 'LookupAccountSidLocalW', 'address': '0x7ffb3c575270'}, {'type': 'function', 'name': 'LsaAddAccountRights', 'address': '0x7ffb3c5ae390'}, {'type': 'function', 'name': 'LsaClose', 'address': '0x7ffb3c56f860'}, {'type': 'function', 'name': 'LsaCreateSecret', 'address': '0x7ffb3c5aef10'}, {'type': 'function', 'name': 'LsaDelete', 'address': '0x7ffb3c5ae5c0'}, {'type': 'function', 'name': 
'LsaEnumerateAccountRights', 'address': '0x7ffb3c57a280'}, {'type': 'function', 'name': 'LsaEnumerateAccountsWithUserRight', 'address': '0x7ffb3c5ae430'}, {'type': 'function', 'name': 'LsaFreeMemory', 'address': '0x7ffb3c56d7f0'}, {'type': 'function', 'name': 'LsaICLookupNames', 'address': '0x7ffb3c56efb0'}, {'type': 'function', 'name': 'LsaICLookupNamesWithCreds', 'address': '0x7ffb3c5ae660'}, {'type': 'function', 'name': 'LsaICLookupSids', 'address': '0x7ffb3c56f620'}, {'type': 'function', 'name': 'LsaICLookupSidsWithCreds', 'address': '0x7ffb3c5ae870'}, {'type': 'function', 'name': 'LsaLookupClose', 'address': '0x7ffb3c575780'}, {'type': 'function', 'name': 'LsaLookupFreeMemory', 'address': '0x7ffb3c56d7f0'}, {'type': 'function', 'name': 'LsaLookupGetDomainInfo', 'address': '0x7ffb3c574d40'}, {'type': 'function', 'name': 'LsaLookupManageSidNameMapping', 'address': '0x7ffb3c56e0a0'}, {'type': 'function', 'name': 'LsaLookupNames2', 'address': '0x7ffb3c56ef40'}, {'type': 'function', 'name': 'LsaLookupOpenLocalPolicy', 'address': '0x7ffb3c5757f0'}, {'type': 'function', 'name': 'LsaLookupSids', 'address': '0x7ffb3c56f470'}, {'type': 'function', 'name': 'LsaLookupSids2', 'address': '0x7ffb3c5aeac0'}, {'type': 'function', 'name': 'LsaLookupTranslateNames', 'address': '0x7ffb3c57bed0'}, {'type': 'function', 'name': 'LsaLookupTranslateSids', 'address': '0x7ffb3c56d8c0'}, {'type': 'function', 'name': 'LsaLookupUserAccountType', 'address': '0x7ffb3c56da20'}, {'type': 'function', 'name': 'LsaOpenPolicy', 'address': '0x7ffb3c56f8f0'}, {'type': 'function', 'name': 'LsaOpenSecret', 'address': '0x7ffb3c5af020'}, {'type': 'function', 'name': 'LsaQueryInformationPolicy', 'address': '0x7ffb3c56f330'}, {'type': 'function', 'name': 'LsaQuerySecret', 'address': '0x7ffb3c5af130'}, {'type': 'function', 'name': 'LsaRemoveAccountRights', 'address': '0x7ffb3c5ae510'}, {'type': 'function', 'name': 'LsaRetrievePrivateData', 'address': '0x7ffb3c57a3c0'}, {'type': 'function', 'name': 
'LsaSetInformationPolicy', 'address': '0x7ffb3c5aead0'}, {'type': 'function', 'name': 'LsaSetSecret', 'address': '0x7ffb3c5af5d0'}, {'type': 'function', 'name': 'LsaStorePrivateData', 'address': '0x7ffb3c5af820'}, {'type': 'function', 'name': 'NotifyServiceStatusChange', 'address': '0x7ffb3c566a70'}, {'type': 'function', 'name': 'NotifyServiceStatusChangeA', 'address': '0x7ffb3c57a000'}, {'type': 'function', 'name': 'NotifyServiceStatusChangeW', 'address': '0x7ffb3c566a70'}, {'type': 'function', 'name': 'OpenSCManagerA', 'address': '0x7ffb3c5681b0'}, {'type': 'function', 'name': 'OpenSCManagerW', 'address': '0x7ffb3c568360'}, {'type': 'function', 'name': 'OpenServiceA', 'address': '0x7ffb3c579f30'}, {'type': 'function', 'name': 'OpenServiceW', 'address': '0x7ffb3c5682e0'}, {'type': 'function', 'name': 'OpenTraceW', 'address': '0x7ffb3c56b010'}, {'type': 'function', 'name': 'ProcessTrace', 'address': '0x7ffb3c56b6e0'}, {'type': 'function', 'name': 'QueryAllTracesA', 'address': '0x7ffb3c5ac160'}, {'type': 'function', 'name': 'QueryAllTracesW', 'address': '0x7ffb3c5613f0'}, {'type': 'function', 'name': 'QueryLocalUserServiceName', 'address': '0x7ffb3c5a7510'}, {'type': 'function', 'name': 'QueryServiceConfig2A', 'address': '0x7ffb3c5a7890'}, {'type': 'function', 'name': 'QueryServiceConfig2W', 'address': '0x7ffb3c567850'}, {'type': 'function', 'name': 'QueryServiceConfigA', 'address': '0x7ffb3c5a7d00'}, {'type': 'function', 'name': 'QueryServiceConfigW', 'address': '0x7ffb3c568090'}, {'type': 'function', 'name': 'QueryServiceDynamicInformation', 'address': '0x7ffb3c5a8330'}, {'type': 'function', 'name': 'QueryServiceObjectSecurity', 'address': '0x7ffb3c5a7e90'}, {'type': 'function', 'name': 'QueryServiceStatus', 'address': '0x7ffb3c567b30'}, {'type': 'function', 'name': 'QueryServiceStatusEx', 'address': '0x7ffb3c568240'}, {'type': 'function', 'name': 'QueryTraceProcessingHandle', 'address': '0x7ffb3c5aae00'}, {'type': 'function', 'name': 
'QueryTransientObjectSecurityDescriptor', 'address': '0x7ffb3c5780a0'}, {'type': 'function', 'name': 'QueryUserServiceName', 'address': '0x7ffb3c567c90'}, {'type': 'function', 'name': 'QueryUserServiceNameForContext', 'address': '0x7ffb3c5a7f70'}, {'type': 'function', 'name': 'RegisterServiceCtrlHandlerA', 'address': '0x7ffb3c5a83d0'}, {'type': 'function', 'name': 'RegisterServiceCtrlHandlerExA', 'address': '0x7ffb3c579df0'}, {'type': 'function', 'name': 'RegisterServiceCtrlHandlerExW', 'address': '0x7ffb3c5654f0'}, {'type': 'function', 'name': 'RegisterServiceCtrlHandlerW', 'address': '0x7ffb3c56e970'}, {'type': 'function', 'name': 'ReleaseIdentityProviderEnumContext', 'address': '0x7ffb3c56d450'}, {'type': 'function', 'name': 'RemoveTraceCallback', 'address': '0x7ffb3c5ab020'}, {'type': 'function', 'name': 'RpcClientCapabilityCheck', 'address': '0x7ffb3c56d5b0'}, {'type': 'function', 'name': 'SetLocalRpcServerInterfaceSecurity', 'address': '0x7ffb3c5a5760'}, {'type': 'function', 'name': 'SetLocalRpcServerProtseqSecurity', 'address': '0x7ffb3c5a5840'}, {'type': 'function', 'name': 'SetServiceObjectSecurity', 'address': '0x7ffb3c57c5d0'}, {'type': 'function', 'name': 'SetServiceStatus', 'address': '0x7ffb3c567b90'}, {'type': 'function', 'name': 'SetTraceCallback', 'address': '0x7ffb3c5ab110'}, {'type': 'function', 'name': 'StartServiceA', 'address': '0x7ffb3c579fa0'}, {'type': 'function', 'name': 'StartServiceCtrlDispatcherA', 'address': '0x7ffb3c5a8440'}, {'type': 'function', 'name': 'StartServiceCtrlDispatcherW', 'address': '0x7ffb3c565a60'}, {'type': 'function', 'name': 'StartServiceW', 'address': '0x7ffb3c565480'}, {'type': 'function', 'name': 'StartTraceA', 'address': '0x7ffb3c5ac170'}, {'type': 'function', 'name': 'StartTraceW', 'address': '0x7ffb3c56a3b0'}, {'type': 'function', 'name': 'StopTraceW', 'address': '0x7ffb3c57bc80'}, {'type': 'function', 'name': 'SubscribeServiceChangeNotifications', 'address': '0x7ffb3c56dfa0'}, {'type': 'function', 'name': 
'TraceQueryInformation', 'address': '0x7ffb3c5ac6a0'}, {'type': 'function', 'name': 'TraceSetInformation', 'address': '0x7ffb3c5aca50'}, {'type': 'function', 'name': 'UnsubscribeServiceChangeNotifications', 'address': '0x7ffb3c56e930'}, {'type': 'function', 'name': 'WaitServiceState', 'address': '0x7ffb3c56dc30'}] | 9,124.5 | 18,236 | 0.691764 | 1,315 | 18,249 | 9.587072 | 0.330798 | 0.2056 | 0.274133 | 0.022924 | 0.023638 | 0.006028 | 0 | 0 | 0 | 0 | 0 | 0.094224 | 0.071237 | 18,249 | 2 | 18,236 | 9,124.5 | 0.649596 | 0 | 0 | 0 | 0 | 0 | 0.690959 | 0.172658 | 0 | 0 | 0.165699 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
99057e1994400afeb65942caf0e3d65fcc497f2e | 35 | py | Python | passenger_wsgi.py | vitwb/SysOcto | 37d9f0ab0750a7f9856bbc545e4416a1493c6a4e | [
"MIT"
] | null | null | null | passenger_wsgi.py | vitwb/SysOcto | 37d9f0ab0750a7f9856bbc545e4416a1493c6a4e | [
"MIT"
] | 1 | 2021-06-10T23:09:57.000Z | 2021-06-10T23:09:57.000Z | passenger_wsgi.py | vitwb/SysOcto | 37d9f0ab0750a7f9856bbc545e4416a1493c6a4e | [
"MIT"
] | null | null | null | from mysite.wsgi import application | 35 | 35 | 0.885714 | 5 | 35 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54cc9292f9e0795a7c1eba64299ea58c668ad3dc | 2,811 | py | Python | tests/integration/questionnaire/test_questionnaire_custom_page_titles.py | petechd/eq-questionnaire-runner | 1c5b182a7f8bc878cfdd767ae080410fa679abd6 | [
"MIT"
] | 3 | 2020-09-28T13:21:21.000Z | 2021-05-05T14:14:51.000Z | tests/integration/questionnaire/test_questionnaire_custom_page_titles.py | petechd/eq-questionnaire-runner | 1c5b182a7f8bc878cfdd767ae080410fa679abd6 | [
"MIT"
] | 402 | 2019-11-06T17:23:03.000Z | 2022-03-31T16:03:35.000Z | tests/integration/questionnaire/test_questionnaire_custom_page_titles.py | petechd/eq-questionnaire-runner | 1c5b182a7f8bc878cfdd767ae080410fa679abd6 | [
"MIT"
] | 10 | 2020-03-03T14:23:27.000Z | 2022-01-31T12:21:21.000Z | from . import QuestionnaireTestCase
class TestQuestionnaireCustomPageTitles(QuestionnaireTestCase):
def test_custom_page_titles(self):
self.launchSurvey("test_custom_page_titles")
self.post()
self.assertEqualPageTitle("Custom page title - Test Custom Page Titles")
self.post({"anyone-else": "Yes"})
self.assertEqualPageTitle("Add person 1 - Test Custom Page Titles")
self.post({"first-name": "Marie", "last-name": "Doe"})
self.post({"anyone-else": "Yes"})
self.assertEqualPageTitle("Add person 2 - Test Custom Page Titles")
self.post({"first-name": "John", "last-name": "Doe"})
self.add_person("Susan", "Doe")
self.post({"anyone-else": "No"})
self.assertEqualPageTitle(
"How Person 1 is related to Person 2 - Test Custom Page Titles"
)
self.post({"relationship-answer": "Husband or Wife"})
self.assertEqualPageTitle(
"How Person 1 is related to Person 3 - Test Custom Page Titles"
)
self.post({"relationship-answer": "Husband or Wife"})
self.assertEqualPageTitle(
"How Person 2 is related to Person 3 - Test Custom Page Titles"
)
self.post({"relationship-answer": "Husband or Wife"})
self.assertEqualPageTitle(
"Custom section summary page title - Test Custom Page Titles"
)
def test_custom_repeating_page_titles(self):
self.launchSurvey("test_custom_page_titles")
self.post()
self.post({"anyone-else": "Yes"})
self.post({"first-name": "Marie", "last-name": "Doe"})
self.add_person("John", "Doe")
self.post({"anyone-else": "No"})
self.post({"relationship-answer": "Husband or Wife"})
self.post()
self.post()
self.assertEqualPageTitle(
"Individual interstitial: Person 1 - Test Custom Page Titles"
)
self.post()
self.assertEqualPageTitle("Proxy question: Person 1 - Test Custom Page Titles")
self.post()
self.assertEqualPageTitle(
"What is your date of birth?: Person 1 - Test Custom Page Titles"
)
self.post()
self.assertEqualPageTitle("Summary: Person 1 - Test Custom Page Titles")
self.post()
self.post()
self.assertEqualPageTitle(
"Individual interstitial: Person 2 - Test Custom Page Titles"
)
self.post()
self.assertEqualPageTitle("Proxy question: Person 2 - Test Custom Page Titles")
self.post()
self.assertEqualPageTitle(
"What is your date of birth?: Person 2 - Test Custom Page Titles"
)
self.post()
self.assertEqualPageTitle("Summary: Person 2 - Test Custom Page Titles")
| 34.703704 | 87 | 0.613305 | 308 | 2,811 | 5.548701 | 0.168831 | 0.112346 | 0.147455 | 0.21065 | 0.909304 | 0.903452 | 0.83031 | 0.803394 | 0.731422 | 0.511995 | 0 | 0.007809 | 0.271078 | 2,811 | 80 | 88 | 35.1375 | 0.826257 | 0 | 0 | 0.52381 | 0 | 0 | 0.404127 | 0.016364 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.031746 | false | 0 | 0.015873 | 0 | 0.063492 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54f0540265687cf10beefb22b6ed840f81138203 | 174 | py | Python | share/lib/python/neuron/neuroml/metadata.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | 203 | 2018-05-03T11:02:11.000Z | 2022-03-31T14:18:31.000Z | share/lib/python/neuron/neuroml/metadata.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | 1,228 | 2018-04-25T09:00:48.000Z | 2022-03-31T21:42:21.000Z | share/lib/python/neuron/neuroml/metadata.py | niltonlk/nrn | 464541abbf72fe58de77b16bf0e1df425a280b89 | [
"BSD-3-Clause"
] | 134 | 2018-04-23T09:14:13.000Z | 2022-03-16T08:57:11.000Z | def notes(self, node):
pass
def properties(self, node):
pass
def property(self, node):
pass
def tag(self, node):
pass
def value(self, node):
pass
| 9.157895 | 27 | 0.609195 | 25 | 174 | 4.24 | 0.36 | 0.377358 | 0.566038 | 0.566038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275862 | 174 | 18 | 28 | 9.666667 | 0.84127 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
070704fa2002e29067f0b6c99c371981b28ad96e | 308 | py | Python | python/pytest_example/test/eric_strings_test.py | holycrap872/til | 97f6b041dad03a2edffb804dc4db090b65b9154f | [
"MIT"
] | 8 | 2015-10-07T02:47:58.000Z | 2018-12-25T16:01:08.000Z | python/pytest_example/test/eric_strings_test.py | holycrap872/til | 97f6b041dad03a2edffb804dc4db090b65b9154f | [
"MIT"
] | null | null | null | python/pytest_example/test/eric_strings_test.py | holycrap872/til | 97f6b041dad03a2edffb804dc4db090b65b9154f | [
"MIT"
] | 1 | 2016-08-25T17:45:40.000Z | 2016-08-25T17:45:40.000Z | import pytest
from pytest_example.eric_strings import *
def test_stringify_array_1():
assert stringify_array(['hi', 'eric']) == "hi eric"
def test_stringify_array_2():
assert stringify_array(['hi']) == "hi"
def test_stringify_array_3():
assert stringify_array(["yes ", "mam"]) == "yes mam"
| 22 | 57 | 0.694805 | 42 | 308 | 4.761905 | 0.404762 | 0.42 | 0.24 | 0.315 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.152597 | 308 | 13 | 58 | 23.692308 | 0.754789 | 0 | 0 | 0 | 0 | 0 | 0.104235 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.375 | true | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
070d25a8cd90569c0e33657a0af0fb6f9cb3b45c | 202 | py | Python | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/s/subprocess_popen_preexec_fn.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 463 | 2015-01-15T08:17:42.000Z | 2022-03-28T15:10:20.000Z | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/s/subprocess_popen_preexec_fn.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 52 | 2015-01-06T02:43:59.000Z | 2022-03-14T11:15:21.000Z | vimfiles/bundle/vim-python/submodules/pylint/tests/functional/s/subprocess_popen_preexec_fn.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 249 | 2015-01-07T22:49:49.000Z | 2022-03-18T02:32:06.000Z | # pylint: disable=disallowed-name,no-value-for-parameter,missing-docstring
import subprocess
def foo():
pass
subprocess.Popen(preexec_fn=foo) # [subprocess-popen-preexec-fn]
subprocess.Popen()
| 16.833333 | 74 | 0.767327 | 26 | 202 | 5.923077 | 0.692308 | 0.292208 | 0.285714 | 0.311688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10396 | 202 | 11 | 75 | 18.363636 | 0.850829 | 0.504951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0.2 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
07254b87342dcf24db1915fcbc4ffe7e54cb2f17 | 35 | py | Python | bowtievisualization/__init__.py | GLopezMUZH/bowtievisualization | d5834f878e5900b4764a2b2963a481278dddec82 | [
"BSD-3-Clause"
] | null | null | null | bowtievisualization/__init__.py | GLopezMUZH/bowtievisualization | d5834f878e5900b4764a2b2963a481278dddec82 | [
"BSD-3-Clause"
] | null | null | null | bowtievisualization/__init__.py | GLopezMUZH/bowtievisualization | d5834f878e5900b4764a2b2963a481278dddec82 | [
"BSD-3-Clause"
] | null | null | null | from .bowTieVisualization import *
| 17.5 | 34 | 0.828571 | 3 | 35 | 9.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07495b7b1f451665d7d108ec0bd7069532464855 | 175 | py | Python | packages/pyright-internal/src/tests/samples/import12.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | [
"MIT"
] | 4,391 | 2019-05-07T01:18:57.000Z | 2022-03-31T20:45:44.000Z | packages/pyright-internal/src/tests/samples/import12.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | [
"MIT"
] | 2,740 | 2019-05-07T03:29:30.000Z | 2022-03-31T12:57:46.000Z | packages/pyright-internal/src/tests/samples/import12.py | sasano8/pyright | e804f324ee5dbd25fd37a258791b3fd944addecd | [
"MIT"
] | 455 | 2019-05-07T12:55:14.000Z | 2022-03-31T17:09:15.000Z | # This sample tests the reportWildcardImportFromLibrary option.
# This should generate a warning or error depending on whether
# strict mode is enabled.
from typing import *
| 29.166667 | 63 | 0.805714 | 23 | 175 | 6.130435 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 175 | 5 | 64 | 35 | 0.959184 | 0.834286 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4ada8c3b1633d8ea215ecd13247ec0ef8a2a2a39 | 29 | py | Python | Scripts/Python/hactoberhello.py | minproton/ProjectMRU | 80e385c0f97ebe2255ecdb8d0db510d8884a5ee0 | [
"CC0-1.0"
] | 9 | 2020-10-06T08:12:28.000Z | 2020-10-24T20:32:46.000Z | Scripts/Python/hactoberhello.py | minproton/ProjectMRU | 80e385c0f97ebe2255ecdb8d0db510d8884a5ee0 | [
"CC0-1.0"
] | 5 | 2020-10-09T04:54:43.000Z | 2020-10-31T06:31:03.000Z | Scripts/Python/hactoberhello.py | minproton/ProjectMRU | 80e385c0f97ebe2255ecdb8d0db510d8884a5ee0 | [
"CC0-1.0"
] | 33 | 2020-10-05T20:02:12.000Z | 2020-10-31T05:05:48.000Z | print("Hello hactoberfest!")
| 14.5 | 28 | 0.758621 | 3 | 29 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 29 | 1 | 29 | 29 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ab19c7487663b8c54fd60acc843957dea39b24b5 | 51 | py | Python | connectfour/__init__.py | amwhalen/connectfour | 4f01bc4a94a04ae729c66c0498fe64b1ce8585f6 | [
"MIT"
] | 1 | 2017-10-12T05:20:02.000Z | 2017-10-12T05:20:02.000Z | connectfour/__init__.py | amwhalen/connectfour | 4f01bc4a94a04ae729c66c0498fe64b1ce8585f6 | [
"MIT"
] | null | null | null | connectfour/__init__.py | amwhalen/connectfour | 4f01bc4a94a04ae729c66c0498fe64b1ce8585f6 | [
"MIT"
] | null | null | null | import board
import board_factory
import exceptions | 17 | 20 | 0.901961 | 7 | 51 | 6.428571 | 0.571429 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098039 | 51 | 3 | 21 | 17 | 0.978261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab26c684b5250a979620bc28cf858d8f3e1794fb | 41 | py | Python | crowdsource/client/__init__.py | texodus/crowdsource | 60d222ef3e9ad6b35b54c103e66e647908014e7a | [
"Apache-2.0"
] | null | null | null | crowdsource/client/__init__.py | texodus/crowdsource | 60d222ef3e9ad6b35b54c103e66e647908014e7a | [
"Apache-2.0"
] | 46 | 2017-09-30T04:01:00.000Z | 2021-12-12T20:26:10.000Z | crowdsource/client/__init__.py | texodus/crowdsource | 60d222ef3e9ad6b35b54c103e66e647908014e7a | [
"Apache-2.0"
] | 1 | 2019-11-12T00:53:31.000Z | 2019-11-12T00:53:31.000Z | from .client import Client # noqa: F401
| 20.5 | 40 | 0.731707 | 6 | 41 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.195122 | 41 | 1 | 41 | 41 | 0.818182 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab332685ec1d71a3946dad65e6ecc9a3c785deef | 218 | py | Python | core/admin.py | SM2015/orchid | 41ec024a6faaa52a7568f4430b52173a3eb98667 | [
"MIT"
] | null | null | null | core/admin.py | SM2015/orchid | 41ec024a6faaa52a7568f4430b52173a3eb98667 | [
"MIT"
] | null | null | null | core/admin.py | SM2015/orchid | 41ec024a6faaa52a7568f4430b52173a3eb98667 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.contrib import admin
import core.models as cm
admin.site.register(cm.Location)
admin.site.register(cm.Indicator)
admin.site.register(cm.Image)
admin.site.register(cm.Score) | 27.25 | 33 | 0.821101 | 35 | 218 | 5.114286 | 0.485714 | 0.201117 | 0.379888 | 0.424581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073395 | 218 | 8 | 34 | 27.25 | 0.886139 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
db570765f3293aaa321758d525be73b36ec41e73 | 37 | py | Python | networkx/testing/__init__.py | armando1793/networkx | 48326e1761c08d7a073aec53f7a644baf2249ef6 | [
"BSD-3-Clause"
] | 445 | 2019-01-26T13:50:26.000Z | 2022-03-18T05:17:38.000Z | networkx/testing/__init__.py | armando1793/networkx | 48326e1761c08d7a073aec53f7a644baf2249ef6 | [
"BSD-3-Clause"
] | 242 | 2019-01-29T15:48:27.000Z | 2022-03-31T22:09:21.000Z | networkx/testing/__init__.py | armando1793/networkx | 48326e1761c08d7a073aec53f7a644baf2249ef6 | [
"BSD-3-Clause"
] | 136 | 2018-01-09T22:52:06.000Z | 2022-02-24T13:26:18.000Z | from networkx.testing.utils import *
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db598addfa51058a702942e0d641cabb6b88bb0e | 7,768 | py | Python | backend/test/bot/modules/auth.py | MuChin708/Tosurnament | 13e493a5ac3f7b0af94d7f509d3711b9fa6e94c1 | [
"MIT"
] | null | null | null | backend/test/bot/modules/auth.py | MuChin708/Tosurnament | 13e493a5ac3f7b0af94d7f509d3711b9fa6e94c1 | [
"MIT"
] | null | null | null | backend/test/bot/modules/auth.py | MuChin708/Tosurnament | 13e493a5ac3f7b0af94d7f509d3711b9fa6e94c1 | [
"MIT"
] | null | null | null | """
All tests concerning the Tosurnament auth module.
'link' generates a code to set on the osu profile.
'auth' finalizes the linking process, meaning the discord and osu! account are associated.
"""
import pytest
from bot.modules import module as base
from bot.modules import auth
from common.databases.user import User
import test.resources.mock.tosurnament as tosurnament_mock
MODULE_TO_TEST = "bot.modules.auth"
OSU_MODULE = MODULE_TO_TEST + ".osu"
REQUESTS_MODULE = MODULE_TO_TEST + ".requests"
OS_MODULE = MODULE_TO_TEST + ".os"
CODE_URANDOM = b"\xe8!(r\xbd\xc6\x00\\\x825\x9f\xc9\xdbm>A"
CODE_ASCII = "6CEocr3GAFyCNZ_J220-QQ"
@pytest.mark.asyncio
async def test_link_already_verified():
"""Links the user but they are already verified."""
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=True))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(auth.UserAlreadyVerified):
await cog.link(
cog,
tosurnament_mock.CtxMock(mock_bot),
osu_name=tosurnament_mock.OSU_USER_NAME,
)
@pytest.mark.asyncio
async def test_link_user_not_found(mocker):
"""Links the user but the osu name/id is not found."""
mock_osu = mocker.patch(OSU_MODULE)
mock_osu.get_user.return_value = None
mock_bot = tosurnament_mock.BotMock()
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(base.UserNotFound):
await cog.link(
cog,
tosurnament_mock.CtxMock(mock_bot),
osu_name=tosurnament_mock.OSU_USER_NAME,
)
@pytest.mark.asyncio
async def test_link(mocker):
"""Links the user."""
mocker.patch(OSU_MODULE, tosurnament_mock.OsuMock())
mocker.patch(OS_MODULE + ".urandom", mocker.Mock(return_value=CODE_URANDOM))
mock_author = tosurnament_mock.UserMock()
mock_bot = tosurnament_mock.BotMock()
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
await cog.link(cog, tosurnament_mock.CtxMock(mock_bot, mock_author), osu_name=tosurnament_mock.OSU_USER_NAME)
mock_bot.session.add.assert_called_once_with(
tosurnament_mock.Matcher(
User(
osu_id=tosurnament_mock.OSU_USER_ID,
code=CODE_ASCII,
osu_name=tosurnament_mock.OSU_USER_NAME,
discord_id_snowflake=tosurnament_mock.USER_ID,
)
)
)
assert mock_bot.session.update.call_count == 0
cog.send_reply.assert_called_once_with(
mocker.ANY, "success", CODE_ASCII, channel=tosurnament_mock.ChannelMock(mock_author.DM_CHANNEL_ID)
)
@pytest.mark.asyncio
async def test_link_regenerate_code(mocker):
"""Links the user, and since they already tried to link, regenerate the linking code."""
mocker.patch(OSU_MODULE, tosurnament_mock.OsuMock())
mocker.patch(OS_MODULE + ".urandom", mocker.Mock(return_value=CODE_URANDOM))
mock_author = tosurnament_mock.UserMock()
await mock_author.create_dm()
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(
User(
discord_id=tosurnament_mock.USER_ID,
osu_id=tosurnament_mock.OSU_USER_ID,
verified=False,
code="test",
osu_name="",
)
)
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
await cog.link(cog, tosurnament_mock.CtxMock(mock_bot, mock_author), osu_name=tosurnament_mock.OSU_USER_NAME)
assert mock_bot.session.add.call_count == 0
user_matcher = tosurnament_mock.Matcher(
User(
osu_id=tosurnament_mock.OSU_USER_ID,
verified=False,
code=CODE_ASCII,
osu_name=tosurnament_mock.OSU_USER_NAME,
)
)
mock_bot.session.update.assert_called_once_with(user_matcher)
cog.send_reply.assert_called_once_with(
mocker.ANY, "success", CODE_ASCII, channel=tosurnament_mock.ChannelMock(mock_author.DM_CHANNEL_ID)
)
@pytest.mark.asyncio
async def test_auth_not_linked():
"""Tries to authenticate the user but they didn't link their osu! account yet."""
mock_bot = tosurnament_mock.BotMock()
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(base.UserNotLinked):
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot))
@pytest.mark.asyncio
async def test_auth_already_verified():
"""Tries to authenticate the user but they are already verified."""
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=True))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(auth.UserAlreadyVerified):
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot))
@pytest.mark.asyncio
async def test_auth_osu_find_user_web_page(mocker):
"""Tries to authenticate the user but the osu! website is down (or another error)."""
mock_requests_get = mocker.Mock()
mock_requests_get.status_code = 404
mock_requests = mocker.patch(REQUESTS_MODULE)
mock_requests.get.return_value = mock_requests_get
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=False))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(base.OsuError):
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot))
@pytest.mark.asyncio
async def test_auth_osu_location_not_found(mocker):
"""Tries to authenticate the user but the location of the user on the osu! website is not found."""
mock_requests_get = mocker.Mock()
mock_requests_get.status_code = 200
mock_requests_get.text = "random text"
mock_requests = mocker.patch(REQUESTS_MODULE)
mock_requests.get.return_value = mock_requests_get
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=False))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(base.OsuError):
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot))
@pytest.mark.asyncio
async def test_auth_wrong_code(mocker):
"""Tries to authenticate the user but the code in location is wrong."""
mock_requests_get = mocker.Mock()
mock_requests_get.status_code = 200
mock_requests_get.text = 'location":"'
mock_requests = mocker.patch(REQUESTS_MODULE)
mock_requests.get.return_value = mock_requests_get
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=False, code=CODE_ASCII))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
with pytest.raises(auth.WrongCodeError):
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot))
@pytest.mark.asyncio
async def test_auth(mocker):
"""Authenticates the user."""
mock_requests_get = mocker.Mock()
mock_requests_get.status_code = 200
mock_requests_get.text = 'location":"' + CODE_ASCII
mock_requests = mocker.patch(REQUESTS_MODULE)
mock_requests.get.return_value = mock_requests_get
mock_author = tosurnament_mock.UserMock()
mock_bot = tosurnament_mock.BotMock()
mock_bot.session.add_stub(User(discord_id=tosurnament_mock.USER_ID, verified=False, code=CODE_ASCII))
cog = tosurnament_mock.mock_cog(auth.get_class(mock_bot))
await cog.auth(cog, tosurnament_mock.CtxMock(mock_bot, mock_author))
mock_bot.session.update.assert_called_once_with(tosurnament_mock.Matcher(User(verified=True, code=CODE_ASCII)))
cog.send_reply.assert_called_with(
mocker.ANY, "success", channel=tosurnament_mock.ChannelMock(mock_author.DM_CHANNEL_ID)
)
| 38.455446 | 115 | 0.734037 | 1,081 | 7,768 | 4.968548 | 0.12951 | 0.164774 | 0.067027 | 0.040961 | 0.780488 | 0.772482 | 0.772482 | 0.755167 | 0.711227 | 0.69447 | 0 | 0.004334 | 0.168254 | 7,768 | 201 | 116 | 38.646766 | 0.826962 | 0.024588 | 0 | 0.615894 | 1 | 0 | 0.024411 | 0.0091 | 0 | 0 | 0 | 0 | 0.05298 | 1 | 0 | false | 0 | 0.033113 | 0 | 0.033113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db5e3a0a4636ee427f70abdce8a0abdc3c1f52d9 | 1,921 | py | Python | tests/test_base_crawler.py | dulanyuexiayinghuo/ProfessorFinder | b9edeec3491a515960342c603704a1d7afa70bfc | [
"MIT"
] | 2 | 2021-07-19T13:44:02.000Z | 2021-07-24T12:14:45.000Z | tests/test_base_crawler.py | dulanyuexiayinghuo/ProfessorFinder | b9edeec3491a515960342c603704a1d7afa70bfc | [
"MIT"
] | 2 | 2021-07-19T13:58:02.000Z | 2021-07-24T01:46:30.000Z | tests/test_base_crawler.py | dulanyuexiayinghuo/ProfessorFinder | b9edeec3491a515960342c603704a1d7afa70bfc | [
"MIT"
] | 2 | 2021-07-19T13:43:42.000Z | 2021-07-21T10:46:33.000Z | import unittest
from ProfessorFinder.content_crawler.base_crawler import WebCrawler
class BaseCrawlerTestCase(unittest.TestCase):
def setUp(self):
self.test_crawler1 = WebCrawler('https://www.sem.tsinghua.edu.cn/tesearch/jssearch.html', test=True)
self.test_crawler2 = WebCrawler('http://www.arch.tsinghua.edu.cn/column/rw', test=True)
def test_urlparse(self):
pass
def test_internal_convert(self):
self.assertEqual(self.test_crawler1._internal_link_convert('/upload_files/image/1572091053113_0B.png'),
'https://www.sem.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
self.assertEqual(self.test_crawler1._internal_link_convert(
'www.sem.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png'),
'https://www.sem.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
self.assertEqual(self.test_crawler1._internal_link_convert(
'https://www.sem.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png'),
'https://www.sem.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
self.assertEqual(self.test_crawler2._internal_link_convert('/upload_files/image/1572091053113_0B.png'),
'http://www.arch.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
self.assertEqual(self.test_crawler2._internal_link_convert(
'www.arch.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png'),
'http://www.arch.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
self.assertEqual(self.test_crawler2._internal_link_convert(
'http://www.arch.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png'),
'http://www.arch.tsinghua.edu.cn/upload_files/image/1572091053113_0B.png')
def tearDown(self):
pass
if __name__ == '__main__':
unittest.main()
| 50.552632 | 111 | 0.709006 | 243 | 1,921 | 5.345679 | 0.193416 | 0.101617 | 0.120092 | 0.267898 | 0.759815 | 0.759815 | 0.722864 | 0.722864 | 0.698999 | 0.684373 | 0 | 0.109521 | 0.163457 | 1,921 | 37 | 112 | 51.918919 | 0.698818 | 0 | 0 | 0.413793 | 0 | 0 | 0.457054 | 0.107756 | 0 | 0 | 0 | 0 | 0.206897 | 1 | 0.137931 | false | 0.068966 | 0.068966 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
db665b333f986f029244a140a0f482ec8002e45c | 32 | py | Python | causalinference/__init__.py | nickmvincent/ugc-val-est | b5cceda14ef5830f1befaddfccfd90a694c9677a | [
"MIT"
] | 2 | 2019-11-13T19:56:05.000Z | 2020-09-05T03:21:14.000Z | causalinference/__init__.py | nickmvincent/ugc-val-est | b5cceda14ef5830f1befaddfccfd90a694c9677a | [
"MIT"
] | 6 | 2018-03-02T16:49:20.000Z | 2021-06-10T18:55:02.000Z | causalinference/__init__.py | nickmvincent/ugc-val-est | b5cceda14ef5830f1befaddfccfd90a694c9677a | [
"MIT"
] | null | null | null | from .causal import CausalModel
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbc7adaef9b76ab491dc8020cee37644d7992c55 | 134 | py | Python | aoc/solver.py | witchtrash/aoc2021 | 31b02a4c7a6e5c2756a128499857f996099050a2 | [
"MIT"
] | 1 | 2021-12-02T16:34:05.000Z | 2021-12-02T16:34:05.000Z | aoc/solver.py | witchtrash/aoc2021 | 31b02a4c7a6e5c2756a128499857f996099050a2 | [
"MIT"
] | null | null | null | aoc/solver.py | witchtrash/aoc2021 | 31b02a4c7a6e5c2756a128499857f996099050a2 | [
"MIT"
] | 1 | 2021-12-02T02:40:29.000Z | 2021-12-02T02:40:29.000Z | from typing import Protocol
class Solver(Protocol):
def run(self) -> str:
pass
def test(self) -> str:
pass
| 13.4 | 27 | 0.58209 | 17 | 134 | 4.588235 | 0.705882 | 0.179487 | 0.282051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.320896 | 134 | 9 | 28 | 14.888889 | 0.857143 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
91a839d4808680f9afc22ad0c0fcced964c3e3d0 | 166 | py | Python | myapps/contratos/admin.py | informatorio2020com07/grupo3 | f3c11c39008cd35ab5004a5b7599cfdaadd2fd6b | [
"MIT"
] | 1 | 2020-09-08T10:13:59.000Z | 2020-09-08T10:13:59.000Z | myapps/contratos/admin.py | informatorio2020com07/grupo3 | f3c11c39008cd35ab5004a5b7599cfdaadd2fd6b | [
"MIT"
] | 1 | 2020-09-22T07:53:34.000Z | 2020-10-03T00:53:23.000Z | myapps/contratos/admin.py | informatorio2020com07/grupo3 | f3c11c39008cd35ab5004a5b7599cfdaadd2fd6b | [
"MIT"
] | 2 | 2020-09-16T01:17:50.000Z | 2020-10-01T23:55:13.000Z | from django.contrib import admin
# Register your models here.
from .models import Contrato #, Trabajo
admin.site.register(Contrato)
# admin.site.register(Trabajo)
| 18.444444 | 39 | 0.783133 | 22 | 166 | 5.909091 | 0.545455 | 0.138462 | 0.261538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126506 | 166 | 8 | 40 | 20.75 | 0.896552 | 0.385542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91b7819e18b8ddf59575920efa963893063a34cb | 11,862 | py | Python | tests/components/humidifier/test_device_action.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 6 | 2016-11-25T06:36:27.000Z | 2021-11-16T11:20:23.000Z | tests/components/humidifier/test_device_action.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 56 | 2020-08-03T07:30:54.000Z | 2022-03-31T06:02:04.000Z | tests/components/humidifier/test_device_action.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 3 | 2016-10-03T20:14:06.000Z | 2019-04-19T15:56:56.000Z | """The tests for Humidifier device actions."""
import pytest
import voluptuous_serialize
import homeassistant.components.automation as automation
from homeassistant.components.humidifier import DOMAIN, const, device_action
from homeassistant.const import STATE_ON
from homeassistant.helpers import config_validation as cv, device_registry
from homeassistant.setup import async_setup_component
from tests.common import (
MockConfigEntry,
assert_lists_same,
async_get_device_automations,
async_mock_service,
mock_device_registry,
mock_registry,
)
@pytest.fixture
def device_reg(hass):
"""Return an empty, loaded, registry."""
return mock_device_registry(hass)
@pytest.fixture
def entity_reg(hass):
"""Return an empty, loaded, registry."""
return mock_registry(hass)
async def test_get_actions(hass, device_reg, entity_reg):
"""Test we get the expected actions from a humidifier."""
config_entry = MockConfigEntry(domain="test", data={})
config_entry.add_to_hass(hass)
device_entry = device_reg.async_get_or_create(
config_entry_id=config_entry.entry_id,
connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
entity_reg.async_get_or_create(DOMAIN, "test", "5678", device_id=device_entry.id)
hass.states.async_set("humidifier.test_5678", STATE_ON, {})
hass.states.async_set(
"humidifier.test_5678", "attributes", {"supported_features": 1}
)
expected_actions = [
{
"domain": DOMAIN,
"type": "turn_on",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "turn_off",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "toggle",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "set_humidity",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "set_mode",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
]
actions = await async_get_device_automations(hass, "action", device_entry.id)
assert_lists_same(actions, expected_actions)
async def test_get_action_no_modes(hass, device_reg, entity_reg):
    """Test we get the expected actions from a humidifier with no mode support."""
config_entry = MockConfigEntry(domain="test", data={})
config_entry.add_to_hass(hass)
device_entry = device_reg.async_get_or_create(
config_entry_id=config_entry.entry_id,
connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
entity_reg.async_get_or_create(DOMAIN, "test", "5678", device_id=device_entry.id)
    hass.states.async_set(
        "humidifier.test_5678", STATE_ON, {"supported_features": 0}
    )
expected_actions = [
{
"domain": DOMAIN,
"type": "turn_on",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "turn_off",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "toggle",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "set_humidity",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
]
actions = await async_get_device_automations(hass, "action", device_entry.id)
assert_lists_same(actions, expected_actions)
async def test_get_action_no_state(hass, device_reg, entity_reg):
    """Test we get the expected actions from a humidifier with no state set."""
config_entry = MockConfigEntry(domain="test", data={})
config_entry.add_to_hass(hass)
device_entry = device_reg.async_get_or_create(
config_entry_id=config_entry.entry_id,
connections={(device_registry.CONNECTION_NETWORK_MAC, "12:34:56:AB:CD:EF")},
)
entity_reg.async_get_or_create(DOMAIN, "test", "5678", device_id=device_entry.id)
expected_actions = [
{
"domain": DOMAIN,
"type": "turn_on",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "turn_off",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "toggle",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
{
"domain": DOMAIN,
"type": "set_humidity",
"device_id": device_entry.id,
"entity_id": "humidifier.test_5678",
},
]
actions = await async_get_device_automations(hass, "action", device_entry.id)
assert_lists_same(actions, expected_actions)
async def test_action(hass):
"""Test for actions."""
hass.states.async_set(
"humidifier.entity",
STATE_ON,
{const.ATTR_AVAILABLE_MODES: [const.MODE_HOME, const.MODE_AWAY]},
)
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: [
{
"trigger": {
"platform": "event",
"event_type": "test_event_turn_off",
},
"action": {
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "turn_off",
},
},
{
"trigger": {
"platform": "event",
"event_type": "test_event_turn_on",
},
"action": {
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "turn_on",
},
},
{
"trigger": {"platform": "event", "event_type": "test_event_toggle"},
"action": {
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "toggle",
},
},
{
"trigger": {
"platform": "event",
"event_type": "test_event_set_humidity",
},
"action": {
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "set_humidity",
"humidity": 35,
},
},
{
"trigger": {
"platform": "event",
"event_type": "test_event_set_mode",
},
"action": {
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "set_mode",
"mode": const.MODE_AWAY,
},
},
]
},
)
set_humidity_calls = async_mock_service(hass, "humidifier", "set_humidity")
set_mode_calls = async_mock_service(hass, "humidifier", "set_mode")
turn_on_calls = async_mock_service(hass, "humidifier", "turn_on")
turn_off_calls = async_mock_service(hass, "humidifier", "turn_off")
toggle_calls = async_mock_service(hass, "humidifier", "toggle")
assert len(set_humidity_calls) == 0
assert len(set_mode_calls) == 0
assert len(turn_on_calls) == 0
assert len(turn_off_calls) == 0
assert len(toggle_calls) == 0
hass.bus.async_fire("test_event_set_humidity")
await hass.async_block_till_done()
assert len(set_humidity_calls) == 1
assert len(set_mode_calls) == 0
assert len(turn_on_calls) == 0
assert len(turn_off_calls) == 0
assert len(toggle_calls) == 0
hass.bus.async_fire("test_event_set_mode")
await hass.async_block_till_done()
assert len(set_humidity_calls) == 1
assert len(set_mode_calls) == 1
assert len(turn_on_calls) == 0
assert len(turn_off_calls) == 0
assert len(toggle_calls) == 0
hass.bus.async_fire("test_event_turn_off")
await hass.async_block_till_done()
assert len(set_humidity_calls) == 1
assert len(set_mode_calls) == 1
assert len(turn_on_calls) == 0
assert len(turn_off_calls) == 1
assert len(toggle_calls) == 0
hass.bus.async_fire("test_event_turn_on")
await hass.async_block_till_done()
assert len(set_humidity_calls) == 1
assert len(set_mode_calls) == 1
assert len(turn_on_calls) == 1
assert len(turn_off_calls) == 1
assert len(toggle_calls) == 0
hass.bus.async_fire("test_event_toggle")
await hass.async_block_till_done()
assert len(set_humidity_calls) == 1
assert len(set_mode_calls) == 1
assert len(turn_on_calls) == 1
assert len(turn_off_calls) == 1
assert len(toggle_calls) == 1
async def test_capabilities(hass):
"""Test getting capabilities."""
    # Test capabilities without state
capabilities = await device_action.async_get_action_capabilities(
hass,
{
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "set_mode",
},
)
assert capabilities and "extra_fields" in capabilities
assert voluptuous_serialize.convert(
capabilities["extra_fields"], custom_serializer=cv.custom_serializer
) == [{"name": "mode", "options": [], "required": True, "type": "select"}]
# Set state
hass.states.async_set(
"humidifier.entity",
STATE_ON,
{const.ATTR_AVAILABLE_MODES: [const.MODE_HOME, const.MODE_AWAY]},
)
# Set humidity
capabilities = await device_action.async_get_action_capabilities(
hass,
{
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "set_humidity",
},
)
assert capabilities and "extra_fields" in capabilities
assert voluptuous_serialize.convert(
capabilities["extra_fields"], custom_serializer=cv.custom_serializer
) == [{"name": "humidity", "required": True, "type": "integer"}]
# Set mode
capabilities = await device_action.async_get_action_capabilities(
hass,
{
"domain": DOMAIN,
"device_id": "abcdefgh",
"entity_id": "humidifier.entity",
"type": "set_mode",
},
)
assert capabilities and "extra_fields" in capabilities
assert voluptuous_serialize.convert(
capabilities["extra_fields"], custom_serializer=cv.custom_serializer
) == [
{
"name": "mode",
"options": [("home", "home"), ("away", "away")],
"required": True,
"type": "select",
}
]
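The tests above compare expected and actual device actions with `assert_lists_same` from `tests.common`. A minimal sketch of such an order-insensitive comparison helper (hypothetical, not Home Assistant's actual implementation) could look like:

```python
def assert_lists_same(a, b):
    # Order-insensitive equality for lists of dicts: dicts are unhashable,
    # so we cannot simply compare sets of items.
    assert len(a) == len(b), f"length mismatch: {len(a)} != {len(b)}"
    for item in a:
        assert item in b, f"unexpected item: {item}"

# The same actions in a different order still compare equal.
assert_lists_same(
    [{"type": "turn_on"}, {"type": "turn_off"}],
    [{"type": "turn_off"}, {"type": "turn_on"}],
)
```

This mirrors how the tests stay robust against `async_get_device_automations` returning actions in an arbitrary order.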
| 33.041783 | 88 | 0.55682 | 1,230 | 11,862 | 5.045528 | 0.095122 | 0.043506 | 0.060909 | 0.048985 | 0.844183 | 0.834515 | 0.828875 | 0.797937 | 0.771189 | 0.757009 | 0 | 0.016453 | 0.323639 | 11,862 | 358 | 89 | 33.134078 | 0.757073 | 0.014922 | 0 | 0.597444 | 0 | 0 | 0.188724 | 0.004021 | 0 | 0 | 0 | 0 | 0.13099 | 1 | 0.00639 | false | 0 | 0.025559 | 0 | 0.038339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91bf5289229e37655d5298eeb7249bb56d7fe2b2 | 30 | py | Python | python/src/ecole/scip.py | Sandbergo/ecole | 2bab4d6a66e5d1932870f4cecbdc989b8fd17546 | [
"BSD-3-Clause"
] | 234 | 2019-11-22T15:50:52.000Z | 2022-03-28T15:03:02.000Z | python/src/ecole/scip.py | Sandbergo/ecole | 2bab4d6a66e5d1932870f4cecbdc989b8fd17546 | [
"BSD-3-Clause"
] | 179 | 2019-12-04T19:19:04.000Z | 2022-03-24T14:20:01.000Z | python/src/ecole/scip.py | Sandbergo/ecole | 2bab4d6a66e5d1932870f4cecbdc989b8fd17546 | [
"BSD-3-Clause"
] | 47 | 2020-01-29T19:48:24.000Z | 2022-03-31T08:42:09.000Z | from ecole.core.scip import *
| 15 | 29 | 0.766667 | 5 | 30 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
726ab307830e2857175a11229eeb80e094caae6d | 96 | py | Python | VetDataHub/views.py | nkoroi/KVDH | 622a2331325de113b95791b74c1f74383dcbd7f1 | [
"MIT"
] | null | null | null | VetDataHub/views.py | nkoroi/KVDH | 622a2331325de113b95791b74c1f74383dcbd7f1 | [
"MIT"
] | null | null | null | VetDataHub/views.py | nkoroi/KVDH | 622a2331325de113b95791b74c1f74383dcbd7f1 | [
"MIT"
] | null | null | null | from django.shortcuts import render
def home(request):
return render(request, 'vdh/home.html') | 24 | 40 | 0.78125 | 14 | 96 | 5.357143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 96 | 4 | 40 | 24 | 0.872093 | 0 | 0 | 0 | 0 | 0 | 0.134021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f4089d0d12a93f9fbbd60372d9472d966195eb15 | 22 | py | Python | drawing/__init__.py | akyruu/blender-cartography-addon | 4f34b029d9b6a72619227ab3ceaed9393506934e | [
"Apache-2.0"
] | null | null | null | drawing/__init__.py | akyruu/blender-cartography-addon | 4f34b029d9b6a72619227ab3ceaed9393506934e | [
"Apache-2.0"
] | null | null | null | drawing/__init__.py | akyruu/blender-cartography-addon | 4f34b029d9b6a72619227ab3ceaed9393506934e | [
"Apache-2.0"
] | null | null | null | from .drawer import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f419a0e243ef758d73daf21a40471338d7a22f20 | 17,663 | py | Python | tests/functional/test_main.py | luk-kop/ec2-tags-actions | 5d6bb654d77d67f8b64df16aba7ef34c491de8f4 | [
"MIT"
] | null | null | null | tests/functional/test_main.py | luk-kop/ec2-tags-actions | 5d6bb654d77d67f8b64df16aba7ef34c491de8f4 | [
"MIT"
] | null | null | null | tests/functional/test_main.py | luk-kop/ec2-tags-actions | 5d6bb654d77d67f8b64df16aba7ef34c491de8f4 | [
"MIT"
] | null | null | null | from ec2_tags import main
def test_main_single_instance_no_tags_stop(ec2_instance):
"""
GIVEN Single instance without assigned tag.
WHEN main() is called.
THEN The stop action has been performed on given instance.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_tags=True)
assert ec2_instance.state['Name'] == 'stopped'
def test_main_single_instance_no_tags_terminate(ec2_instance):
"""
GIVEN Single instance without assigned tag.
WHEN main() is called.
THEN The terminate action has been performed on given instance.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_tags=True)
assert ec2_instance.state['Name'] == 'terminated'
def test_main_single_instance_no_tags_list(ec2_instance):
"""
GIVEN Single instance without assigned tag.
WHEN main() is called.
THEN Only list action has been performed. Instances are still running.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='list', ec2_no_tags=True)
assert ec2_instance.state['Name'] == 'running'
def test_main_single_instance_no_tags_stop_no_action(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tag.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_tags=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_no_tags_terminate_no_action(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tag.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_tags=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_no_name_tag_stop(ec2_instance):
"""
GIVEN Single instance without assigned Name tag.
WHEN main() is called.
THEN The stop action has been performed on given instance.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_name_tag=True)
assert ec2_instance.state['Name'] == 'stopped'
def test_main_single_instance_no_name_tag_terminate(ec2_instance):
"""
GIVEN Single instance without assigned Name tag.
WHEN main() is called.
    THEN The terminate action has been performed on given instance.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_name_tag=True)
assert ec2_instance.state['Name'] == 'terminated'
def test_main_single_instance_no_name_tag_list(ec2_instance):
"""
GIVEN Single instance without assigned Name tag.
WHEN main() is called.
THEN Only list action has been performed. Instances are still running.
"""
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='list', ec2_no_name_tag=True)
assert ec2_instance.state['Name'] == 'running'
def test_main_single_instance_specified_ec2_tag_stop(ec2_instance_with_tag):
"""
GIVEN Single instance with specified tag assigned.
WHEN main() is called.
THEN The stop action has been performed on given instance.
"""
    ec2_instance_with_tag.start()
    ec2_tag_wanted = {
        'tag_key': 'Env',
        'tag_value': 'Production'
    }
main(aws_region='eu-west-1', ec2_action='stop', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'stopped'
def test_main_single_instance_specified_ec2_tag_terminate(ec2_instance_with_tag):
"""
GIVEN Single instance with specified tag assigned.
WHEN main() is called.
    THEN The terminate action has been performed on given instance.
"""
ec2_instance_with_tag.start()
ec2_tag_wanted = {
'tag_key': 'Env',
'tag_value': 'Production'
}
main(aws_region='eu-west-1', ec2_action='terminate', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'terminated'
def test_main_single_instance_specified_ec2_tag_list(ec2_instance_with_tag):
"""
GIVEN Single instance with specified tag assigned.
WHEN main() is called.
THEN Only list action has been performed. Instances are still running.
"""
ec2_instance_with_tag.start()
ec2_tag_wanted = {
'tag_key': 'Env',
'tag_value': 'Production'
}
main(aws_region='eu-west-1', ec2_action='list', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_with_other_tag_no_name_tag_stop(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tags other than Name tag.
WHEN main() is called.
THEN The stop action has been performed on given instance.
"""
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'stopped'
def test_main_single_instance_with_other_tag_no_name_tag_terminate(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tags other than Name tag.
WHEN main() is called.
THEN The terminate action has been performed on given instance.
"""
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'terminated'
def test_main_single_instance_with_other_tag_no_name_tag_list(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tags other than Name tag.
WHEN main() is called.
THEN Only list action has been performed. Instances are still running.
"""
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='list', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_with_name_tag_no_name_tag_stop(ec2_resource, ec2_instance_with_tag):
"""
GIVEN Single instance with assigned Name tag and one dummy tag.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
# Add Name tag to instance
ec2_resource.create_tags(
Resources=[ec2_instance_with_tag.id],
Tags=[
{
'Key': 'Name',
'Value': 'Dummy-instance'
}
]
)
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_with_name_tag_no_name_tag_terminate(ec2_resource, ec2_instance_with_tag):
"""
GIVEN Single instance with assigned Name tag and one dummy tag.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
# Add Name tag to instance
ec2_resource.create_tags(
Resources=[ec2_instance_with_tag.id],
Tags=[
{
'Key': 'Name',
'Value': 'Dummy-instance'
}
]
)
ec2_instance_with_tag.start()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_specified_ec2_tag_other_tag_stop(ec2_resource, ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tag other than wanted.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
# Add Name tag to instance
ec2_resource.create_tags(
Resources=[ec2_instance_with_tag.id],
Tags=[
{
'Key': 'Name',
'Value': 'Dummy-instance'
}
]
)
ec2_instance_with_tag.start()
ec2_tag_wanted = {
'tag_key': 'Env',
'tag_value': 'Staging'
}
    main(aws_region='eu-west-1', ec2_action='stop', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_specified_ec2_tag_other_tag_terminate(ec2_resource, ec2_instance_with_tag):
"""
GIVEN Single instance with assigned tag other than wanted.
WHEN main() is called.
THEN The NO action has been performed on given instance.
"""
# Add Name tag to instance
ec2_resource.create_tags(
Resources=[ec2_instance_with_tag.id],
Tags=[
{
                'Key': 'Name',
                'Value': 'Dummy-instance'
            }
        ]
    )
    ec2_instance_with_tag.start()
    ec2_tag_wanted = {
        'tag_key': 'Env',
        'tag_value': 'Staging'
    }
    main(aws_region='eu-west-1', ec2_action='terminate', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_no_name_tag_stop_diff_region(ec2_instance_with_tag):
"""
GIVEN Single instance with not assigned Name tag.
    WHEN main() is called with a different aws_region than the instance's region.
THEN The NO action has been performed on given instance.
"""
# Start instance in other region
ec2_instance_with_tag.start()
main(aws_region='eu-west-2', ec2_action='stop', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_no_name_tag_terminate_diff_region(ec2_instance_with_tag):
"""
GIVEN Single instance with not assigned Name tag.
    WHEN main() is called with a different aws_region than the instance's region.
THEN The NO action has been performed on given instance.
"""
# Start instance in other region
ec2_instance_with_tag.start()
main(aws_region='eu-west-2', ec2_action='terminate', ec2_no_name_tag=True)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_specified_ec2_tag_stop_diff_region(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned specified tag.
    WHEN main() is called with a different aws_region than the instance's region.
THEN The NO action has been performed on given instance.
"""
# Start instance in other region
ec2_instance_with_tag.start()
ec2_tag_wanted = {
'tag_key': 'Env',
'tag_value': 'Production'
}
    main(aws_region='eu-west-2', ec2_action='stop', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_single_instance_specified_ec2_tag_terminate_diff_region(ec2_instance_with_tag):
"""
GIVEN Single instance with assigned specified tag.
    WHEN main() is called with a different aws_region than the instance's region.
THEN The NO action has been performed on given instance.
"""
# Start instance in other region
ec2_instance_with_tag.start()
ec2_tag_wanted = {
'tag_key': 'Env',
'tag_value': 'Production'
}
main(aws_region='eu-west-2', ec2_action='terminate', ec2_tag=ec2_tag_wanted)
assert ec2_instance_with_tag.state['Name'] == 'running'
def test_main_multiple_instances_no_tags_stop(ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple instance without assigned tag.
WHEN main() is called.
    THEN The stop action has been performed on all given instances.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
for ec2_instance in ec2_instances:
ec2_instance.start()
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_tags=True)
for ec2_instance in ec2_instances:
assert ec2_instance.state['Name'] == 'stopped'
def test_main_multiple_instances_stopped_no_tags_terminate(ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple stopped instances without assigned tag.
WHEN main() is called.
    THEN The terminate action has been performed on all given instances.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
for ec2_instance in ec2_instances:
ec2_instance.stop()
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_tags=True)
for ec2_instance in ec2_instances:
assert ec2_instance.state['Name'] == 'terminated'
def test_main_multiple_instances_stopped_no_tags_list(ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple stopped instances without assigned tag.
WHEN main() is called.
THEN Only list action has been performed. Instances are still stopped.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
for ec2_instance in ec2_instances:
ec2_instance.stop()
main(aws_region='eu-west-1', ec2_action='list', ec2_no_tags=True)
for ec2_instance in ec2_instances:
assert ec2_instance.state['Name'] == 'stopped'
def test_main_multiple_instances_no_tags_stop_not_all_instances(ec2_resource, ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple instances without assigned tag.
WHEN main() is called.
    THEN The stop action has been performed on all instances without tags.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
for num, ec2_instance in enumerate(ec2_instances, 1):
# Assign tag only to 1st instance
if num == 1:
# Store instance id for later test
tagged_instance_id = ec2_instance.id
ec2_resource.create_tags(
Resources=[tagged_instance_id],
Tags=[
{
'Key': 'Env',
'Value': 'Production'
}
]
)
ec2_instance.start()
# Stop instances except instance with tagged_instance_id
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_tags=True)
for ec2_instance in ec2_instances:
if ec2_instance.id == tagged_instance_id:
assert ec2_instance.state['Name'] == 'running'
else:
assert ec2_instance.state['Name'] == 'stopped'
def test_main_multiple_instances_no_name_tag_stop(ec2_resource, ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple instances - only two with assigned Name tag.
WHEN main() is called.
    THEN The stop action has been performed on instances without a Name tag.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
tagged_instance_ids = []
for num, ec2_instance in enumerate(ec2_instances, 1):
        # Assign a Name tag only to the 1st and 3rd instances
if num in [1, 3]:
# Store instance id for later test
tagged_instance_ids.append(ec2_instance.id)
ec2_resource.create_tags(
Resources=[ec2_instance.id],
Tags=[
{
'Key': 'Name',
'Value': f'Dummy-instance-{num}'
}
]
)
ec2_instance.start()
    # Stop the instances without a Name tag (only the 2nd instance)
main(aws_region='eu-west-1', ec2_action='stop', ec2_no_name_tag=True)
for ec2_instance in ec2_instances:
if ec2_instance.id in tagged_instance_ids:
assert ec2_instance.state['Name'] == 'running'
else:
assert ec2_instance.state['Name'] == 'stopped'
def test_main_multiple_instances_stopped_no_name_tag_terminate(ec2_resource, ec2_instance_multiple_instances_no_tags):
"""
GIVEN Multiple stopped instances - only two with assigned Name tag.
WHEN main() is called.
    THEN The terminate action has been performed on instances without a Name tag.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
tagged_instance_ids = []
for num, ec2_instance in enumerate(ec2_instances, 1):
        # Assign a Name tag only to the 1st and 3rd instances
if num in [1, 3]:
# Store instance id for later test
tagged_instance_ids.append(ec2_instance.id)
ec2_resource.create_tags(
Resources=[ec2_instance.id],
Tags=[
{
'Key': 'Name',
'Value': f'Dummy-instance-{num}'
}
]
)
ec2_instance.stop()
    # Terminate the instances without a Name tag (only the 2nd instance)
main(aws_region='eu-west-1', ec2_action='terminate', ec2_no_name_tag=True)
for ec2_instance in ec2_instances:
if ec2_instance.id in tagged_instance_ids:
assert ec2_instance.state['Name'] == 'stopped'
else:
assert ec2_instance.state['Name'] == 'terminated'
def test_main_multiple_instances_no_name_tag_list(ec2_resource, ec2_instance_multiple_instances_no_tags):
"""
    GIVEN Multiple instances - only two with assigned Name tag.
WHEN main() is called.
THEN Only list action has been performed. Instances are still running.
"""
ec2_instances = ec2_instance_multiple_instances_no_tags
tagged_instance_ids = []
for num, ec2_instance in enumerate(ec2_instances, 1):
        # Assign a Name tag only to the 1st and 3rd instances
if num in [1, 3]:
# Store instance id for later test
tagged_instance_ids.append(ec2_instance.id)
ec2_resource.create_tags(
Resources=[ec2_instance.id],
Tags=[
{
'Key': 'Name',
'Value': f'Dummy-instance-{num}'
}
]
)
ec2_instance.start()
    # List the instances without a Name tag; no state should change
main(aws_region='eu-west-1', ec2_action='list', ec2_no_name_tag=True)
for ec2_instance in ec2_instances:
assert ec2_instance.state['Name'] == 'running'
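The tests exercise three mutually exclusive selection modes of `main()`: `ec2_no_tags`, `ec2_no_name_tag`, and a specific `ec2_tag`. A hedged sketch of the per-instance predicate those flags imply (hypothetical, not the actual `ec2_tags` implementation) might be:

```python
def instance_matches(tags, no_tags=False, no_name_tag=False, wanted_tag=None):
    """Return True if an instance with the given tags should be acted on.

    tags: list of {'Key': ..., 'Value': ...} dicts, as boto3 returns them.
    Exactly one of the three selection modes is expected to be set.
    """
    if no_tags:
        # Only completely untagged instances match.
        return not tags
    if no_name_tag:
        # Instances lacking a Name tag match, whatever other tags they carry.
        return all(t['Key'] != 'Name' for t in tags)
    if wanted_tag:
        # Only instances carrying the exact requested key/value pair match.
        return any(
            t['Key'] == wanted_tag['tag_key']
            and t['Value'] == wanted_tag['tag_value']
            for t in tags
        )
    return False
```

Each test above asserts the post-action state ('stopped', 'terminated', or unchanged 'running') that this kind of predicate would produce.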
| 36.721414 | 119 | 0.680802 | 2,393 | 17,663 | 4.712077 | 0.03761 | 0.126818 | 0.070504 | 0.084604 | 0.983593 | 0.982086 | 0.98049 | 0.976321 | 0.949095 | 0.942533 | 0 | 0.021683 | 0.229746 | 17,663 | 480 | 120 | 36.797917 | 0.80713 | 0.282964 | 0 | 0.686275 | 0 | 0 | 0.09995 | 0 | 0 | 0 | 0 | 0 | 0.12549 | 1 | 0.113725 | false | 0 | 0.003922 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f423d050f977fd84c171d495af172a04195826de | 46,305 | py | Python | roi_data_layer/roibatchLoader.py | moli1026/regrad | f66c38c00405b22cb746cc3f5c38d2b49f77d854 | [
"MIT"
] | 1 | 2021-11-02T13:12:00.000Z | 2021-11-02T13:12:00.000Z | roi_data_layer/roibatchLoader.py | moli1026/regrad | f66c38c00405b22cb746cc3f5c38d2b49f77d854 | [
"MIT"
] | null | null | null | roi_data_layer/roibatchLoader.py | moli1026/regrad | f66c38c00405b22cb746cc3f5c38d2b49f77d854 | [
"MIT"
] | null | null | null | # --------------------------------------------------------
# Visual Detection: State-of-the-Art
# Copyright: Hanbo Zhang
# Licensed under The MIT License [see LICENSE for details]
# Written by Hanbo Zhang
# based on code from Jiasen Lu, Jianwei Yang, Ross Girshick
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import torch
import torch.utils.data as data
from model.utils.config import cfg
from roi_data_layer.minibatch import *
from model.utils.blob import prep_im_for_blob, image_normalize
import abc
import cv2
import os
from model.utils.augmentations import *
import pdb
class roibatchLoader(data.Dataset):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True, cls_list=None,
augmentation = False):
self._roidb = roidb
self._num_classes = num_classes
# we make the height of image consistent to trim_height, trim_width
self.max_num_box = cfg.MAX_NUM_GT_BOXES
self.max_num_grasp = cfg.MAX_NUM_GT_GRASPS
self.training = training
self.ratio_list = ratio_list
self.ratio_index = ratio_index
self.batch_size = batch_size
self.data_size = len(self.ratio_list)
self.cls_list = cls_list
self.pixel_means = cfg.PIXEL_MEANS if cfg.PRETRAIN_TYPE == "pytorch" else cfg.PIXEL_MEANS_CAFFE
self.pixel_stds = cfg.PIXEL_STDS if cfg.PRETRAIN_TYPE == "pytorch" else np.array([[[1., 1., 1.]]])
self.augmentation = augmentation
if self.augmentation:
self.augImageOnly = None
self.augObjdet = None
@abc.abstractmethod
def _imagePreprocess(self, blob, fix_size):
raise NotImplementedError
@abc.abstractmethod
def __getitem__(self, index):
raise NotImplementedError
def __len__(self):
return len(self._roidb)
class objdetRoibatchLoader(roibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(objdetRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _imagePreprocess(self, blob, fix_size = True):
keep = np.arange(blob['gt_boxes'].shape[0])
if self.augmentation:
if self.augImageOnly is not None: blob['data'] = self.augImageOnly(blob['data'])
if self.augObjdet is not None: blob['data'], blob['gt_boxes'], _, _, _ = \
self.augObjdet(image=blob['data'], boxes=blob['gt_boxes'], boxes_keep=keep)
# choose one predefined size, TODO: support multi-instance batch
random_scale_ind = np.random.randint(0, high=len(cfg.SCALES))
blob['data'], im_scale = prep_im_for_blob(blob['data'], cfg.SCALES[random_scale_ind], cfg.TRAIN.COMMON.MAX_SIZE, fix_size)
# modify bounding boxes according to resize parameters
blob['im_info'][:2] = (blob['data'].shape[0], blob['data'].shape[1])
blob['im_info'][2:4] = (im_scale['y'], im_scale['x'])
blob['gt_boxes'][:, :-1][:, 0::2] *= im_scale['x']
blob['gt_boxes'][:, :-1][:, 1::2] *= im_scale['y']
blob['data'] = image_normalize(blob['data'], mean=self.pixel_means, std=self.pixel_stds)
return blob
def _boxPostProcess(self, gt_boxes):
gt_boxes_padding = torch.FloatTensor(self.max_num_box, 5).zero_()
not_keep = (gt_boxes[:, 0] == gt_boxes[:, 2]) | (gt_boxes[:, 1] == gt_boxes[:, 3])
keep = torch.nonzero(not_keep == 0).view(-1)
num_boxes = min(keep.size(0), self.max_num_box)
keep = keep[:num_boxes]
if keep.numel() != 0:
gt_boxes = gt_boxes[keep]
gt_boxes_padding[:num_boxes, :] = gt_boxes
return gt_boxes_padding, keep
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# get the anchor index for current sample index
# here we set the anchor index to the last one
# sample in this group
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_objdet(minibatch_db)
# preprocess images
blobs = self._imagePreprocess(blobs)
data = torch.from_numpy(blobs['data'].copy())
data = data.permute(2, 0, 1).contiguous()
im_info = torch.from_numpy(blobs['im_info'])
if self.training:
# object detection data
# 4 coordinates (xmin, ymin, xmax, ymax) and 1 label
np.random.shuffle(blobs['gt_boxes'])
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
gt_boxes, keep = self._boxPostProcess(gt_boxes)
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, keep.size(0)
else:
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
return data, im_info, gt_boxes, num_boxes
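`_boxPostProcess` above drops degenerate boxes (zero width or height) and zero-pads the rest to `cfg.MAX_NUM_GT_BOXES` so every sample yields a fixed-size ground-truth tensor. A torch-free sketch of the same logic (illustrative only, operating on plain lists of `[x1, y1, x2, y2, label]`) is:

```python
def pad_boxes(gt_boxes, max_num_box):
    # Drop boxes whose x or y extent collapses to a point, truncate to
    # max_num_box, then zero-pad to a fixed count so batches can be stacked.
    kept = [b for b in gt_boxes if b[0] != b[2] and b[1] != b[3]]
    kept = kept[:max_num_box]
    padded = kept + [[0.0] * 5 for _ in range(max_num_box - len(kept))]
    return padded, len(kept)
```

The returned count plays the role of `num_boxes` in `__getitem__`, letting downstream code ignore the zero padding.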
class graspdetRoibatchLoader(roibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(graspdetRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _imagePreprocess(self, blob, fix_size = True):
keep = np.arange(blob['gt_grasps'].shape[0])
if self.augmentation:
if self.augImageOnly is not None: blob['data'] = self.augImageOnly(blob['data'])
        if self.augObjdet is not None: blob['data'], _, blob['gt_grasps'], _, _ = \
            self.augObjdet(image=blob['data'], grasps=blob['gt_grasps'], grasps_keep=keep)
# choose one predefined size, TODO: support multi-instance batch
random_scale_ind = np.random.randint(0, high=len(cfg.SCALES))
blob['data'], im_scale = prep_im_for_blob(blob['data'], cfg.SCALES[random_scale_ind], cfg.TRAIN.COMMON.MAX_SIZE, fix_size)
blob['im_info'][:2] = (blob['data'].shape[0], blob['data'].shape[1])
blob['im_info'][2:4] = (im_scale['y'], im_scale['x'])
blob['gt_grasps'][:, 0::2] *= im_scale['x']
blob['gt_grasps'][:, 1::2] *= im_scale['y']
blob['data'] = image_normalize(blob['data'], mean=self.pixel_means, std=self.pixel_stds)
return blob
def _graspPostProcess(self, gt_grasps, gt_grasp_inds = None):
gt_grasps_padding = torch.FloatTensor(self.max_num_grasp, 8).zero_()
num_grasps = min(gt_grasps.size(0), self.max_num_grasp)
gt_grasps_padding[:num_grasps, :] = gt_grasps[:num_grasps]
if gt_grasp_inds is not None:
gt_grasp_inds_padding = torch.LongTensor(self.max_num_grasp).zero_()
gt_grasp_inds_padding[:num_grasps] = gt_grasp_inds[:num_grasps]
return gt_grasps_padding, num_grasps, gt_grasp_inds_padding
return gt_grasps_padding, num_grasps
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_graspdet(minibatch_db)
blobs = self._imagePreprocess(blobs)
data = torch.from_numpy(blobs['data'].copy())
data = data.permute(2, 0, 1).contiguous()
im_info = torch.from_numpy(blobs['im_info'])
if self.training:
np.random.shuffle(blobs['gt_grasps'])
gt_grasps = torch.from_numpy(blobs['gt_grasps'])
gt_grasps, num_grasps = self._graspPostProcess(gt_grasps)
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_grasps, num_grasps
else:
gt_grasps = torch.FloatTensor([1, 1, 1, 1, 1, 1, 1, 1])
num_grasps = 0
return data, im_info, gt_grasps, num_grasps
class vmrdetRoibatchLoader(objdetRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(vmrdetRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _imagePreprocess(self, blob, fix_size=True):
keep = np.arange(blob['gt_boxes'].shape[0])
if self.augmentation:
if self.augImageOnly is not None: blob['data'] = self.augImageOnly(blob['data'])
if self.augObjdet is not None: blob['data'], blob['gt_boxes'], _, keep, _ = \
self.augObjdet(image=blob['data'], boxes=blob['gt_boxes'], boxes_keep=keep)
# choose one predefined size, TODO: support multi-instance batch
random_scale_ind = np.random.randint(0, high=len(cfg.SCALES))
blob['data'], im_scale = prep_im_for_blob(blob['data'], cfg.SCALES[random_scale_ind], cfg.TRAIN.COMMON.MAX_SIZE, fix_size)
# modify bounding boxes according to resize parameters
blob['im_info'][:2] = (blob['data'].shape[0], blob['data'].shape[1])
blob['im_info'][2:4] = (im_scale['y'], im_scale['x'])
blob['gt_boxes'][:, :-1][:, 0::2] *= im_scale['x']
blob['gt_boxes'][:, :-1][:, 1::2] *= im_scale['y']
blob['data'] = image_normalize(blob['data'], mean=self.pixel_means, std=self.pixel_stds)
blob['node_inds'] = blob['node_inds'][keep]
blob['parent_lists'] = [blob['parent_lists'][p_ind] for p_ind in list(keep)]
blob['child_lists'] = [blob['child_lists'][c_ind] for c_ind in list(keep)]
return blob
def _genRelMat(self, obj_list, node_inds, child_lists, parent_lists):
num_boxes = obj_list.size(0)
rel_mat = torch.FloatTensor(self.max_num_box, self.max_num_box).zero_()
# get relationship matrix
for o1 in range(num_boxes):
for o2 in range(num_boxes):
ind_o1 = node_inds[obj_list[o1].item()]
ind_o2 = node_inds[obj_list[o2].item()]
if ind_o2 == ind_o1 or rel_mat[o1, o2].item() != 0:
continue
o1_children = child_lists[obj_list[o1].item()]
o1_fathers = parent_lists[obj_list[o1].item()]
if ind_o2 in o1_children:
# o1 is o2's father
rel_mat[o1, o2] = cfg.VMRN.FATHER
elif ind_o2 in o1_fathers:
# o1 is o2's child
rel_mat[o1, o2] = cfg.VMRN.CHILD
else:
# o1 and o2 have no relationship
rel_mat[o1, o2] = cfg.VMRN.NOREL
return rel_mat
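The relationship-matrix construction in `_genRelMat` can be illustrated standalone. This is a hypothetical sketch: `FATHER`/`CHILD`/`NOREL` stand in for the `cfg.VMRN.*` constants, and `obj_order[k]` plays the role of `obj_list` (the original position of the k-th shuffled box).

```python
import numpy as np

FATHER, CHILD, NOREL = 1, 2, 3   # stand-ins for the cfg.VMRN constants

def gen_rel_mat(obj_order, node_inds, child_lists, parent_lists, max_num_box=5):
    # pairwise relation matrix for the (shuffled) box order;
    # node_inds maps original positions to graph node ids
    rel = np.zeros((max_num_box, max_num_box), dtype=np.float32)
    for a in range(len(obj_order)):
        for b in range(len(obj_order)):
            na, nb = node_inds[obj_order[a]], node_inds[obj_order[b]]
            if na == nb:
                continue
            if nb in child_lists[obj_order[a]]:
                rel[a, b] = FATHER      # a is b's parent
            elif nb in parent_lists[obj_order[a]]:
                rel[a, b] = CHILD       # a is b's child
            else:
                rel[a, b] = NOREL
    return rel

# node 10 is the parent of node 11; node 12 is unrelated
node_inds = [10, 11, 12]
child_lists = [[11], [], []]
parent_lists = [[], [10], []]
rel = gen_rel_mat([0, 1, 2], node_inds, child_lists, parent_lists)
```

Because every ordered pair is visited exactly once over a freshly zeroed matrix, the asymmetric labels (`FATHER` one way, `CHILD` the other) come out consistent.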
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# map the sample index through ratio_index (training groups samples by aspect ratio)
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_vmrdet(minibatch_db)
# preprocess images
blobs = self._imagePreprocess(blobs)
data = torch.from_numpy(blobs['data'].copy())
data = data.permute(2, 0, 1).contiguous()
im_info = torch.from_numpy(blobs['im_info'])
if self.training:
# object detection data
# 4 coordinates (xmin, ymin, xmax, ymax) and 1 label
shuffle_inds = list(range(blobs['gt_boxes'].shape[0]))
np.random.shuffle(shuffle_inds)
shuffle_inds = torch.LongTensor(shuffle_inds)
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
gt_boxes = gt_boxes[shuffle_inds]
gt_boxes, keep = self._boxPostProcess(gt_boxes)
shuffle_inds = shuffle_inds[keep]
rel_mat = self._genRelMat(shuffle_inds, blobs['node_inds'], blobs['child_lists'], blobs['parent_lists'])
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, keep.size(0), rel_mat
else:
if cfg.TRAIN.COMMON.USE_ODLOSS:
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
else:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
num_boxes = gt_boxes.shape[0]
rel_mat = torch.FloatTensor([0])
return data, im_info, gt_boxes, num_boxes, rel_mat
class mulInSizeRoibatchLoader(roibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True, cls_list=None, augmentation = False):
super(mulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
# given the ratio_list, we want to make the ratio same for each batch.
self.ratio_list_batch = torch.FloatTensor(self.data_size).zero_()
num_batch = int(np.ceil(len(ratio_index) / batch_size))
for i in range(num_batch):
left_idx = i * batch_size
right_idx = min((i + 1) * batch_size - 1, self.data_size - 1)
if ratio_list[right_idx] < 1:
# for ratio < 1, we preserve the leftmost in each batch.
target_ratio = ratio_list[left_idx]
elif ratio_list[left_idx] > 1:
# for ratio > 1, we preserve the rightmost in each batch.
target_ratio = ratio_list[right_idx]
else:
# for ratio cross 1, we make it to be 1.
target_ratio = 1
self.ratio_list_batch[left_idx:(right_idx + 1)] = target_ratio
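The per-batch target-ratio selection above can be sketched as a small pure function. This is an illustrative, hypothetical rewrite of the constructor loop (assuming, as the loop does, that `ratio_list` is sorted ascending):

```python
import numpy as np

def batch_target_ratios(ratio_list, batch_size):
    # For each batch of sorted aspect ratios, pick one target ratio:
    # the extreme ratio when the batch is all-tall or all-wide, else 1.
    n = len(ratio_list)
    targets = np.zeros(n, dtype=np.float32)
    for i in range(int(np.ceil(n / batch_size))):
        left = i * batch_size
        right = min((i + 1) * batch_size - 1, n - 1)
        if ratio_list[right] < 1:        # all images taller than wide
            target = ratio_list[left]
        elif ratio_list[left] > 1:       # all images wider than tall
            target = ratio_list[right]
        else:                            # ratios straddle 1
            target = 1.0
        targets[left:right + 1] = target
    return targets

ratios = [0.5, 0.8, 1.2, 2.0]            # sorted ascending
targets = batch_target_ratios(ratios, 2)
```

Picking the extreme ratio per batch means every image in the batch can be padded (never stretched) to the shared shape.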
def _cropImage(self, data, gt_boxes, target_ratio):
data_height, data_width = data.size(0), data.size(1)
x_s, y_s = 0, 0
if target_ratio < 1:
# this means that data_width << data_height, we need to crop the
# data_height
min_y = int(torch.min(gt_boxes[:, :-1][:, 1::2]))
max_y = int(torch.max(gt_boxes[:, :-1][:, 1::2]))
trim_size = int(np.floor(data_width / target_ratio))
if trim_size > data_height:
trim_size = data_height
box_region = max_y - min_y + 1
if min_y > 0:
if (box_region - trim_size) < 0:
y_s_min = max(max_y - trim_size, 0)
y_s_max = min(min_y, data_height - trim_size)
if y_s_min == y_s_max:
y_s = y_s_min
else:
y_s = np.random.choice(range(y_s_min, y_s_max))
else:
y_s_add = int((box_region - trim_size) / 2)
if y_s_add == 0:
y_s = min_y
else:
y_s = np.random.choice(range(min_y, min_y + y_s_add))
elif min_y < 0:
raise RuntimeError("gt boxes contain a negative y coordinate")
# crop the image
data = data[y_s:(y_s + trim_size), :, :]
else:
# this means that data_width >> data_height, we need to crop the
# data_width
min_x = int(torch.min(gt_boxes[:, :-1][:, 0::2]))
max_x = int(torch.max(gt_boxes[:, :-1][:, 0::2]))
trim_size = int(np.ceil(data_height * target_ratio))
if trim_size > data_width:
trim_size = data_width
box_region = max_x - min_x + 1
if min_x > 0:
if (box_region - trim_size) < 0:
x_s_min = max(max_x - trim_size, 0)
x_s_max = min(min_x, data_width - trim_size)
if x_s_min == x_s_max:
x_s = x_s_min
else:
x_s = np.random.choice(range(x_s_min, x_s_max))
else:
x_s_add = int((box_region - trim_size) / 2)
if x_s_add == 0:
x_s = min_x
else:
x_s = np.random.choice(range(min_x, min_x + x_s_add))
elif min_x < 0:
raise RuntimeError("gt boxes contain a negative x coordinate")
# crop the image
data = data[:, x_s:(x_s + trim_size), :]
return data, (x_s, y_s)
def _paddingImage(self, data, im_info, target_ratio):
data_height, data_width = data.size(0), data.size(1)
if target_ratio < 1:
# this means that data_width < data_height
padding_data = torch.FloatTensor(int(np.ceil(data_width / target_ratio)), \
data_width, 3).zero_()
padding_data[:data_height, :, :] = data
im_info[0] = padding_data.size(0)
elif target_ratio > 1:
# this means that data_width > data_height
padding_data = torch.FloatTensor(data_height, \
int(np.ceil(data_height * target_ratio)), 3).zero_()
padding_data[:, :data_width, :] = data
im_info[1] = padding_data.size(1)
else:
trim_size = min(data_height, data_width)
padding_data = data[:trim_size, :trim_size, :]
im_info[0] = trim_size
im_info[1] = trim_size
return padding_data, im_info
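The zero-padding step in `_paddingImage` reduces to a small shape computation. A hypothetical NumPy sketch (HxWx3 arrays instead of tensors; `pad_to_ratio` is illustrative, not the class method):

```python
import numpy as np

def pad_to_ratio(img, target_ratio):
    # Zero-pad an HxWx3 image so that W/H matches target_ratio:
    # ratio < 1 pads the height, ratio > 1 pads the width,
    # ratio == 1 crops to a square instead.
    h, w = img.shape[:2]
    if target_ratio < 1:
        out = np.zeros((int(np.ceil(w / target_ratio)), w, 3), dtype=img.dtype)
        out[:h] = img
    elif target_ratio > 1:
        out = np.zeros((h, int(np.ceil(h * target_ratio)), 3), dtype=img.dtype)
        out[:, :w] = img
    else:
        s = min(h, w)
        out = img[:s, :s]
    return out

img = np.ones((40, 20, 3), dtype=np.float32)   # tall image, ratio 0.5
padded = pad_to_ratio(img, 0.4)
```

The original content lands in the top-left corner, with zeros filling the remainder, matching how `im_info` is updated to the padded size above.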
@abc.abstractmethod
def __getitem__(self, index):
raise NotImplementedError
class objdetMulInSizeRoibatchLoader(objdetRoibatchLoader, mulInSizeRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(objdetMulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _cropBox(self, data, coord_s, gt_boxes):
# shift y coordinates of gt_boxes
gt_boxes[:, :(gt_boxes.size(1) - 1)][:, 1::2] -= float(coord_s[1])
# clamp gt bounding boxes to the cropped image
gt_boxes[:, :(gt_boxes.size(1) - 1)][:, 1::2].clamp_(0, data.size(0) - 1)
# shift x coordinates of gt_boxes
gt_boxes[:, :(gt_boxes.size(1) - 1)][:, 0::2] -= float(coord_s[0])
# clamp gt bounding boxes to the cropped image
gt_boxes[:, :(gt_boxes.size(1) - 1)][:, 0::2].clamp_(0, data.size(1) - 1)
return gt_boxes
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# get the anchor index for current sample index
# here we set the anchor index to the last one
# sample in this group
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_objdet(minibatch_db)
# preprocess images
blobs = self._imagePreprocess(blobs, False)
data = torch.from_numpy(blobs['data'].copy())
im_info = torch.from_numpy(blobs['im_info'])
# randomly shuffle the ground-truth boxes
data_height, data_width = data.size(0), data.size(1)
if self.training:
np.random.shuffle(blobs['gt_boxes'])
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
# if batch_size > 1, all images need to be processed to have the same size
if self.batch_size > 1:
ratio = self.ratio_list_batch[index]
# if the image need to crop, crop to the target size.
coord_s = (0, 0)
# TODO: currently no crop is applied since the target ratio is equal to the original ratio.
if self._roidb[index_ratio]['need_crop']:
data, coord_s = self._cropImage(data, gt_boxes, ratio)
# based on the ratio, padding the image.
data, im_info = self._paddingImage(data, im_info, ratio)
# crop boxes according to the cropped image
gt_boxes = self._cropBox(data, coord_s, gt_boxes)
gt_boxes, keep = self._boxPostProcess(gt_boxes)
# permute data to (C, H, W) for downstream processing
data = data.permute(2, 0, 1).contiguous()
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, keep.size(0)
else:
data = data.permute(2, 0, 1).contiguous()
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
return data, im_info, gt_boxes, num_boxes
class graspMulInSizeRoibatchLoader(graspdetRoibatchLoader, mulInSizeRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation=False):
super(graspMulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _cropGrasp(self, data, coord_s, gt_grasps, gt_grasp_inds = None):
# shift y coordinates of gt_grasps
gt_grasps[:, 1::2] -= float(coord_s[1])
# shift x coordinates of gt_grasps
gt_grasps[:, 0::2] -= float(coord_s[0])
# filter out illegal grasps. TWO OPTIONS:
# 1) filter out all grasps that have any vertices out of the range of the image.
# keep = (((gt_grasps[:, 0::2] > 0) & (gt_grasps[:, 0::2] < data.size(1))).sum(1) == 4) & \
# (((gt_grasps[:, 1::2] > 0) & (gt_grasps[:, 1::2] < data.size(0))).sum(1) == 4)
# 2) filter out all grasps whose centers are out of the range of the image.
gc_x = gt_grasps[:, 0::2].sum(1) / 4
gc_y = gt_grasps[:, 1::2].sum(1) / 4
keep = (gc_x > 0) & (gc_x < data.size(1)) & (gc_y > 0) & (gc_y < data.size(0))
gt_grasps = gt_grasps[keep]
if gt_grasp_inds is not None:
gt_grasp_inds = gt_grasp_inds[keep]
return gt_grasps, keep, gt_grasp_inds
return gt_grasps, keep
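The center-based grasp filtering above (option 2) can be shown standalone. A hypothetical NumPy sketch, assuming the same 8-value `[x1, y1, ..., x4, y4]` vertex layout as `gt_grasps`:

```python
import numpy as np

def shift_and_filter_grasps(grasps, crop_xy, height, width):
    # Shift 4-vertex grasp rectangles by the crop offset and keep only
    # grasps whose center stays inside the cropped image.
    g = grasps.copy()
    g[:, 0::2] -= crop_xy[0]
    g[:, 1::2] -= crop_xy[1]
    cx = g[:, 0::2].mean(axis=1)   # center x = average of the 4 x coordinates
    cy = g[:, 1::2].mean(axis=1)
    keep = (cx > 0) & (cx < width) & (cy > 0) & (cy < height)
    return g[keep], keep

grasps = np.array([[12, 12, 22, 12, 22, 22, 12, 22],
                   [2, 2, 6, 2, 6, 6, 2, 6]], dtype=np.float32)
kept, keep = shift_and_filter_grasps(grasps, crop_xy=(10, 10), height=50, width=50)
```

The second grasp's center falls outside the crop after the shift, so it is dropped — the gentler of the two filtering options described in the comments above.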
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# map the sample index through ratio_index (training groups samples by aspect ratio)
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_graspdet(minibatch_db)
blobs = self._imagePreprocess(blobs, False)
data = torch.from_numpy(blobs['data'].copy())
im_info = torch.from_numpy(blobs['im_info'])
# randomly shuffle the ground-truth grasps
data_height, data_width = data.size(0), data.size(1)
if self.training:
np.random.shuffle(blobs['gt_grasps'])
gt_grasps = torch.from_numpy(blobs['gt_grasps'])
# if batch_size > 1, all images need to be processed to have the same size
if self.batch_size > 1:
ratio = self.ratio_list_batch[index]
# if the image need to crop, crop to the target size.
coord_s = (0, 0)
if self._roidb[index_ratio]['need_crop']:
data, coord_s = self._cropImage(data, gt_grasps, ratio)
# based on the ratio, padding the image.
data, im_info = self._paddingImage(data, im_info, ratio)
# crop grasps according to the cropped image
gt_grasps, _ = self._cropGrasp(data, coord_s, gt_grasps)
gt_grasps, num_grasps = self._graspPostProcess(gt_grasps)
# permute data to (C, H, W) for downstream processing
data = data.permute(2, 0, 1).contiguous()
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_grasps, num_grasps
else:
data = data.permute(2, 0, 1).contiguous()
gt_grasps = torch.FloatTensor([1, 1, 1, 1, 1, 1, 1, 1])
num_grasps = 0
return data, im_info, gt_grasps, num_grasps
class vmrdetMulInSizeRoibatchLoader(vmrdetRoibatchLoader, objdetMulInSizeRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation=False):
super(vmrdetMulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# map the sample index through ratio_index (training groups samples by aspect ratio)
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_vmrdet(minibatch_db)
# preprocess images
blobs = self._imagePreprocess(blobs, False)
data = torch.from_numpy(blobs['data'].copy())
im_info = torch.from_numpy(blobs['im_info'])
# randomly shuffle the ground-truth boxes
data_height, data_width = data.size(0), data.size(1)
if self.training:
shuffle_inds = list(range(blobs['gt_boxes'].shape[0]))
np.random.shuffle(shuffle_inds)
shuffle_inds = torch.LongTensor(shuffle_inds)
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
gt_boxes = gt_boxes[shuffle_inds]
# if batch_size > 1, all images need to be processed to have the same size
if self.batch_size > 1:
ratio = self.ratio_list_batch[index]
# if the image need to crop, crop to the target size.
coord_s = (0, 0)
if self._roidb[index_ratio]['need_crop']:
data, coord_s = self._cropImage(data, gt_boxes, ratio)
# based on the ratio, padding the image.
data, im_info = self._paddingImage(data, im_info, ratio)
# crop boxes according to the cropped image
gt_boxes = self._cropBox(data, coord_s, gt_boxes)
gt_boxes, keep = self._boxPostProcess(gt_boxes)
shuffle_inds = shuffle_inds[keep]
rel_mat = self._genRelMat(shuffle_inds, blobs['node_inds'], blobs['child_lists'], blobs['parent_lists'])
# permute data to (C, H, W) for downstream processing
data = data.permute(2, 0, 1).contiguous()
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, keep.size(0), rel_mat
else:
data = data.permute(2, 0, 1).contiguous()
if cfg.TRAIN.COMMON.USE_ODLOSS:
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
else:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
num_boxes = gt_boxes.shape[0]
rel_mat = torch.FloatTensor([0])
return data, im_info, gt_boxes, num_boxes, rel_mat
class roigdetMulInSizeRoibatchLoader(graspMulInSizeRoibatchLoader, objdetMulInSizeRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation=False):
super(roigdetMulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _imagePreprocess(self, blob, fix_size = False):
assert not fix_size, "When grasp labels are included, the input image can not be fixed-size."
keep_b = np.arange(blob['gt_boxes'].shape[0])
keep_g = np.arange(blob['gt_grasps'].shape[0])
if self.augmentation:
if self.augImageOnly is not None: blob['data'] = self.augImageOnly(blob['data'])
if self.augObjdet is not None: blob['data'], blob['gt_boxes'], blob['gt_grasps'], keep_b, keep_g = \
self.augObjdet(image=blob['data'], boxes=blob['gt_boxes'], grasps=blob['gt_grasps'],
boxes_keep=keep_b, grasps_keep=keep_g)
# choose one predefined size, TODO: support multi-instance batch
random_scale_ind = np.random.randint(0, high=len(cfg.SCALES))
blob['data'], im_scale = prep_im_for_blob(blob['data'], cfg.SCALES[random_scale_ind], cfg.TRAIN.COMMON.MAX_SIZE, fix_size)
blob['im_info'][:2] = (blob['data'].shape[0], blob['data'].shape[1])
blob['im_info'][2:4] = (im_scale['y'], im_scale['x'])
# modify bounding boxes according to resize parameters
blob['gt_boxes'][:, :-1][:, 0::2] *= im_scale['x']
blob['gt_boxes'][:, :-1][:, 1::2] *= im_scale['y']
blob['gt_grasps'][:, 0::2] *= im_scale['x']
blob['gt_grasps'][:, 1::2] *= im_scale['y']
blob['node_inds'] = blob['node_inds'][keep_b]
blob['gt_grasp_inds'] = blob['gt_grasp_inds'][keep_g]
blob['data'] = image_normalize(blob['data'], mean=self.pixel_means, std=self.pixel_stds)
return blob
def _graspIndsPostProcess(self, grasp_inds, shuffle_inds, node_inds):
grasp_inds_ori = grasp_inds.clone()
order2inds = dict(enumerate(node_inds))
inds2order = dict(zip(order2inds.values(), order2inds.keys()))
shuffle2order = dict(enumerate(shuffle_inds))
order2shuffle = dict(zip(shuffle2order.values(), shuffle2order.keys()))
# make box indices begin with 1
for key in order2shuffle.keys():
order2shuffle[key] += 1
for ind in node_inds:
grasp_inds[grasp_inds_ori == float(ind)] = float(order2shuffle[inds2order[ind]])
return grasp_inds
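The index remapping in `_graspIndsPostProcess` chains two lookups: node id to original box position, then original position to post-shuffle slot (1-based). A hypothetical plain-Python sketch of that chain:

```python
def remap_grasp_inds(grasp_inds, shuffle_inds, node_inds):
    # node_inds[k] is the node id of the box originally at position k;
    # shuffle_inds[j] is the original position now placed at slot j.
    # Output indices are 1-based, matching the "+ 1" in the method above.
    node_to_orig = {n: k for k, n in enumerate(node_inds)}
    orig_to_slot = {orig: j + 1 for j, orig in enumerate(shuffle_inds)}
    return [orig_to_slot[node_to_orig[n]] for n in grasp_inds]

# two boxes with node ids [7, 8], swapped by the shuffle [1, 0]:
# grasps owned by node 8 now point at slot 1, node 7 at slot 2
remapped = remap_grasp_inds([7, 8, 8], shuffle_inds=[1, 0], node_inds=[7, 8])
```

Keeping grasp-to-box ownership consistent after shuffling is what lets the grasp branch and the detection branch train on the same sample.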
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# map the sample index through ratio_index (training groups samples by aspect ratio)
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_roigdet(minibatch_db)
blobs = self._imagePreprocess(blobs)
data = torch.from_numpy(blobs['data'].copy())
im_info = torch.from_numpy(blobs['im_info'])
# randomly shuffle the ground-truth boxes and grasps
data_height, data_width = data.size(0), data.size(1)
if self.training:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
gt_grasps = torch.from_numpy(blobs['gt_grasps'])
gt_grasp_inds = torch.from_numpy(blobs['gt_grasp_inds'])
# shuffle boxes
shuffle_inds_b = list(range(blobs['gt_boxes'].shape[0]))
np.random.shuffle(shuffle_inds_b)
shuffle_inds_b = torch.LongTensor(shuffle_inds_b)
gt_boxes = gt_boxes[shuffle_inds_b]
gt_grasp_inds = self._graspIndsPostProcess(gt_grasp_inds, shuffle_inds_b.data.numpy(), blobs['node_inds'])
# shuffle grasps
shuffle_inds_g = list(range(blobs['gt_grasps'].shape[0]))
np.random.shuffle(shuffle_inds_g)
shuffle_inds_g = torch.LongTensor(shuffle_inds_g)
gt_grasps = gt_grasps[shuffle_inds_g]
gt_grasp_inds = gt_grasp_inds[shuffle_inds_g]
# if batch_size > 1, all images need to be processed to have the same size
if self.batch_size > 1:
ratio = self.ratio_list_batch[index]
# if the image need to crop, crop to the target size.
coord_s = (0, 0)
if self._roidb[index_ratio]['need_crop']:
# here image cropping is according to both gt_boxes and gt_grasps
data, coord_s = self._cropImage(data, torch.cat((gt_grasps, gt_boxes), dim=-1), ratio)
# based on the ratio, padding the image.
data, im_info = self._paddingImage(data, im_info, ratio)
# crop boxes and grasps according to the cropped image
gt_boxes = self._cropBox(data, coord_s, gt_boxes)
gt_grasps, _, gt_grasp_inds = self._cropGrasp(data, coord_s, gt_grasps, gt_grasp_inds)
gt_boxes, keep = self._boxPostProcess(gt_boxes)
gt_grasps, num_grasps, gt_grasp_inds = self._graspPostProcess(gt_grasps, gt_grasp_inds)
# permute data to (C, H, W) for downstream processing
data = data.permute(2, 0, 1).contiguous()
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, gt_grasps, keep.size(0), num_grasps, gt_grasp_inds
else:
data = data.permute(2, 0, 1).contiguous()
if cfg.TRAIN.COMMON.USE_ODLOSS:
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
else:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
num_boxes = gt_boxes.shape[0]
gt_grasps = torch.FloatTensor([1, 1, 1, 1, 1, 1, 1, 1])
gt_grasp_inds = torch.LongTensor([0])
num_grasps = 0
return data, im_info, gt_boxes, gt_grasps, num_boxes, num_grasps, gt_grasp_inds
class allInOneMulInSizeRoibatchLoader(roigdetMulInSizeRoibatchLoader, vmrdetMulInSizeRoibatchLoader):
__metaclass__ = abc.ABCMeta
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation= False):
super(allInOneMulInSizeRoibatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes, training,
cls_list, augmentation)
def _imagePreprocess(self, blob, fix_size = False):
assert not fix_size, "When grasp labels are included, the input image can not be fixed-size."
keep_b = np.arange(blob['gt_boxes'].shape[0])
keep_g = np.arange(blob['gt_grasps'].shape[0])
if self.augmentation:
if self.augImageOnly is not None: blob['data'] = self.augImageOnly(blob['data'])
if self.augObjdet is not None: blob['data'], blob['gt_boxes'], blob['gt_grasps'], keep_b, keep_g = \
self.augObjdet(image=blob['data'], boxes=blob['gt_boxes'], grasps=blob['gt_grasps'],
boxes_keep=keep_b, grasps_keep=keep_g)
# choose one predefined size, TODO: support multi-instance batch
random_scale_ind = np.random.randint(0, high=len(cfg.SCALES))
blob['data'], im_scale = prep_im_for_blob(blob['data'], cfg.SCALES[random_scale_ind], cfg.TRAIN.COMMON.MAX_SIZE, fix_size)
blob['im_info'][:2] = (blob['data'].shape[0], blob['data'].shape[1])
blob['im_info'][2:4] = (im_scale['y'], im_scale['x'])
# modify bounding boxes according to resize parameters
blob['gt_boxes'][:, :-1][:, 0::2] *= im_scale['x']
blob['gt_boxes'][:, :-1][:, 1::2] *= im_scale['y']
blob['gt_grasps'][:, 0::2] *= im_scale['x']
blob['gt_grasps'][:, 1::2] *= im_scale['y']
blob['gt_grasp_inds'] = blob['gt_grasp_inds'][keep_g]
blob['data'] = image_normalize(blob['data'], mean=self.pixel_means, std=self.pixel_stds)
blob['node_inds'] = blob['node_inds'][keep_b]
blob['parent_lists'] = [blob['parent_lists'][p_ind] for p_ind in list(keep_b)]
blob['child_lists'] = [blob['child_lists'][c_ind] for c_ind in list(keep_b)]
return blob
def __getitem__(self, index):
if self.training:
index_ratio = int(self.ratio_index[index])
else:
index_ratio = index
# map the sample index through ratio_index (training groups samples by aspect ratio)
minibatch_db = self._roidb[index_ratio]
blobs = get_minibatch_allinone(minibatch_db)
blobs = self._imagePreprocess(blobs)
data = torch.from_numpy(blobs['data'].copy())
im_info = torch.from_numpy(blobs['im_info'])
# randomly shuffle the ground-truth boxes and grasps
data_height, data_width = data.size(0), data.size(1)
if self.training:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
gt_grasps = torch.from_numpy(blobs['gt_grasps'])
gt_grasp_inds = torch.from_numpy(blobs['gt_grasp_inds'])
# shuffle boxes
shuffle_inds_b = list(range(blobs['gt_boxes'].shape[0]))
np.random.shuffle(shuffle_inds_b)
shuffle_inds_b = torch.LongTensor(shuffle_inds_b)
gt_boxes = gt_boxes[shuffle_inds_b]
gt_grasp_inds = self._graspIndsPostProcess(gt_grasp_inds, shuffle_inds_b.data.numpy(), blobs['node_inds'])
# shuffle grasps
shuffle_inds_g = list(range(blobs['gt_grasps'].shape[0]))
np.random.shuffle(shuffle_inds_g)
shuffle_inds_g = torch.LongTensor(shuffle_inds_g)
gt_grasps = gt_grasps[shuffle_inds_g]
gt_grasp_inds = gt_grasp_inds[shuffle_inds_g]
# if batch_size > 1, all images need to be processed to have the same size
if self.batch_size > 1:
ratio = self.ratio_list_batch[index]
# if the image need to crop, crop to the target size.
coord_s = (0, 0)
if self._roidb[index_ratio]['need_crop']:
# here image cropping is according to both gt_boxes and gt_grasps
data, coord_s = self._cropImage(data, torch.cat((gt_grasps, gt_boxes), dim=-1), ratio)
# based on the ratio, padding the image.
data, im_info = self._paddingImage(data, im_info, ratio)
# crop boxes and grasps according to the cropped image
gt_boxes = self._cropBox(data, coord_s, gt_boxes)
gt_grasps, _, gt_grasp_inds = self._cropGrasp(data, coord_s, gt_grasps, gt_grasp_inds)
gt_boxes, keep = self._boxPostProcess(gt_boxes)
gt_grasps, num_grasps, gt_grasp_inds = self._graspPostProcess(gt_grasps, gt_grasp_inds)
shuffle_inds_b = shuffle_inds_b[keep]
rel_mat = self._genRelMat(shuffle_inds_b, blobs['node_inds'], blobs['child_lists'], blobs['parent_lists'])
# permute data to (C, H, W) for downstream processing
data = data.permute(2, 0, 1).contiguous()
assert data.size(1) == im_info[0] and data.size(2) == im_info[1]
return data, im_info, gt_boxes, gt_grasps, keep.size(0), num_grasps, rel_mat, gt_grasp_inds
else:
data = data.permute(2, 0, 1).contiguous()
if cfg.TRAIN.COMMON.USE_ODLOSS:
gt_boxes = torch.FloatTensor([1, 1, 1, 1, 1])
num_boxes = 0
else:
gt_boxes = torch.from_numpy(blobs['gt_boxes'])
num_boxes = gt_boxes.shape[0]
gt_grasps = torch.FloatTensor([1, 1, 1, 1, 1, 1, 1, 1])
gt_grasp_inds = torch.LongTensor([0])
num_grasps = 0
rel_mat = torch.FloatTensor([0])
return data, im_info, gt_boxes, gt_grasps, num_boxes, num_grasps, rel_mat, gt_grasp_inds
class ssdbatchLoader(objdetRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(ssdbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train SSD without any augmentation.")
else:
self.augImageOnly = ComposeImageOnly([
ConvertToFloats(),
PhotometricDistort(),
])
self.augObjdet = Compose([
RandomMirror(),
Expand(mean = self.pixel_means * 255. if cfg.PRETRAIN_TYPE == "pytorch" else self.pixel_means),
RandomSampleCrop(),
])
class fcgnbatchLoader(graspdetRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(fcgnbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train FCGN without any augmentation.")
else:
self.augImageOnly = ComposeImageOnly([
ConvertToFloats(),
PhotometricDistort(),
])
self.augmGraspdet = Compose([
RandomRotate(),
RandomMirror(),
RandomCropKeepBoxes(keep_shape=True),
])
class svmrnbatchLoader(vmrdetRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(svmrnbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train S-VMRN without any augmentation.")
else:
self.augImageOnly = ComposeImageOnly([
ConvertToFloats(),
PhotometricDistort(),
])
self.augObjdet = Compose([
RandomMirror(),
Expand(mean= self.pixel_means * 255. if cfg.PRETRAIN_TYPE == "pytorch" else self.pixel_means),
# TODO: allow to damage bounding boxes while prevent deleting them when doing random crop
RandomCropKeepBoxes(),
])
class fasterrcnnbatchLoader(objdetMulInSizeRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = True):
super(fasterrcnnbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train Faster-RCNN without flipped images.")
else:
self.augObjdet = Compose([
RandomMirror(),
])
class fvmrnbatchLoader(vmrdetMulInSizeRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(fvmrnbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train F-VMRN without any augmentation.")
else:
# self.augImageOnly = ComposeImageOnly([
# ConvertToFloats(),
# PhotometricDistort(),
# ])
self.augObjdet = Compose([
RandomMirror(),
# TODO: allow to damage bounding boxes while prevent deleting them when doing random crop
# RandomCropKeepBoxes(keep_shape=True),
# RandomCropKeepBoxes(),
# Expand(mean=self.pixel_means * 255. if cfg.PRETRAIN_TYPE == "pytorch" else self.pixel_means, keep_size=True),
])
class roignbatchLoader(roigdetMulInSizeRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(roignbatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train ROI-GN without any augmentation.")
else:
self.augImageOnly = ComposeImageOnly([
ConvertToFloats(),
PhotometricDistort(),
])
self.augObjdet = Compose([
RandomMirror(),
# TODO: allow to damage bounding boxes while prevent deleting them when doing random crop
RandomCropKeepBoxes(keep_shape=True),
Expand(mean = self.pixel_means * 255. if cfg.PRETRAIN_TYPE == "pytorch" else self.pixel_means, keep_size=True),
])
class fallinonebatchLoader(allInOneMulInSizeRoibatchLoader):
def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True,
cls_list=None, augmentation = False):
super(fallinonebatchLoader, self).__init__(roidb, ratio_list, ratio_index, batch_size, num_classes,
training, cls_list, augmentation)
if not self.augmentation and self.training:
warnings.warn("You are going to train the all-in-one network without any augmentation.")
else:
self.augImageOnly = ComposeImageOnly([
ConvertToFloats(),
PhotometricDistort(),
])
self.augObjdet = Compose([
RandomMirror(),
# TODO: allow to damage bounding boxes while prevent deleting them when doing random crop
# RandomCropKeepBoxes(keep_shape=True),
RandomCropKeepBoxes(),
Expand(mean = self.pixel_means * 255. if cfg.PRETRAIN_TYPE == "pytorch" else self.pixel_means, keep_size=True),
])
f45398ada80ce3472eeb1d415d59a0c20fc0b3f8 | 178 | py | Python | montepython_tree/montepython/likelihoods/eft_withbao_lowzNGC/__init__.py | zhaoruiyang98/pybird | 13e1c090bb51ba44a4228f379046de8a7280a088 | [
"MIT"
] | 13 | 2020-03-19T02:25:13.000Z | 2022-03-06T13:19:19.000Z | montepython_tree/montepython/likelihoods/eft_withbao_lowzNGC/__init__.py | zhaoruiyang98/pybird | 13e1c090bb51ba44a4228f379046de8a7280a088 | [
"MIT"
] | 3 | 2021-05-24T05:48:10.000Z | 2021-10-18T10:37:47.000Z | montepython_tree/montepython/likelihoods/eft_withbao_lowzNGC/__init__.py | zhaoruiyang98/pybird | 13e1c090bb51ba44a4228f379046de8a7280a088 | [
"MIT"
] | 14 | 2020-04-13T00:46:29.000Z | 2021-10-10T16:05:27.000Z | import os
import numpy as np
from montepython.likelihood_class import Likelihood_bird
from scipy.optimize import fsolve
class eft_withbao_lowzNGC(Likelihood_bird):
    pass
| 22.25 | 56 | 0.825843 | 25 | 178 | 5.68 | 0.68 | 0.197183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146067 | 178 | 7 | 57 | 25.428571 | 0.934211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.666667 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f476dbbc2cedc5e3852c4a74513c2369818a5e80 | 6,460 | py | Python | ldaptor/test/test_entry_diff.py | jaraco/ldaptor | 4e2b67798b75e0c2b364fc0d9fdf327b0f0ddca4 | [
"MIT"
] | null | null | null | ldaptor/test/test_entry_diff.py | jaraco/ldaptor | 4e2b67798b75e0c2b364fc0d9fdf327b0f0ddca4 | [
"MIT"
] | null | null | null | ldaptor/test/test_entry_diff.py | jaraco/ldaptor | 4e2b67798b75e0c2b364fc0d9fdf327b0f0ddca4 | [
"MIT"
] | 1 | 2018-10-17T18:43:59.000Z | 2018-10-17T18:43:59.000Z | """
Test cases for ldaptor.diff
"""
from twisted.trial import unittest
from ldaptor import delta, entry
class TestDiffEntry(unittest.TestCase):
    def testEqual(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        result = a.diff(b)
        self.assertEqual(result, None)

    def testAdd_New_OneType_OneValue(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={'foo': ['bar'], 'baz': ['quux']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Add('baz', ['quux'])]))

    def testAdd_New_OneType_ManyValues(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={'foo': ['bar'],
                                            'baz': ['quux', 'thud', 'foo']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Add('baz', ['quux', 'thud', 'foo'])]))

    def testAdd_New_ManyTypes(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={'foo': ['bar'],
                                            'baz': ['quux'],
                                            'bang': ['thud']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [
                delta.Add('bang', ['thud']),
                delta.Add('baz', ['quux']),
            ]))

    def testAdd_Existing_OneType_OneValue(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar', 'quux']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Add('foo', ['quux'])]))

    def testAdd_Existing_OneType_ManyValues(self):
        a = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={'foo': ['bar', 'quux', 'thud', 'foo']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Add('foo', ['quux', 'thud', 'foo'])]))

    def testAdd_NewAndExisting_ManyTypes(self):
        a = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={
                                    'foo': ['bar'],
                                    'baz': ['quux'],
                                })
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={
                                    'foo': ['bar', 'thud', 'bang'],
                                    'baz': ['quux', 'bar', 'stump'],
                                    'bang': ['thud', 'barble'],
                                })
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [
                delta.Add('bang', ['thud', 'barble']),
                delta.Add('baz', ['bar', 'stump']),
                delta.Add('foo', ['thud', 'bang']),
            ]))

    def testDelete_All_OneType(self):
        a = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={
                                    'foo': ['bar'],
                                    'baz': ['quux', 'thud'],
                                })
        b = entry.BaseLDAPEntry(dn='dc=foo', attributes={'foo': ['bar']})
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Delete('baz', ['quux', 'thud'])]))

    def testDelete_Some_OneType(self):
        a = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={
                                    'foo': ['bar'],
                                    'baz': ['quux', 'thud'],
                                })
        b = entry.BaseLDAPEntry(dn='dc=foo',
                                attributes={
                                    'foo': ['bar'],
                                    'baz': ['thud'],
                                })
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('dc=foo', [delta.Delete('baz', ['quux'])]))

    def testComplex(self):
        a = entry.BaseLDAPEntry(
            dn='cn=Paula Jensen,ou=Product Development,dc=airius,dc=com',
            attributes={
                'description': ['Something'],
                'telephonenumber': ['+123 456'],
                'facsimiletelephonenumber': ['+1 408 555 9876'],
            })
        b = entry.BaseLDAPEntry(
            dn='cn=Paula Jensen,ou=Product Development,dc=airius,dc=com',
            attributes={
                'postalAddress': ['123 Anystreet $ Sunnyvale, CA $ 94086'],
                'telephonenumber': ['+1 408 555 1234', '+1 408 555 5678'],
            })
        result = a.diff(b)
        self.assertEqual(
            result,
            delta.ModifyOp('cn=Paula Jensen,ou=Product Development,dc=airius,dc=com', [
                delta.Add('postalAddress', ['123 Anystreet $ Sunnyvale, CA $ 94086']),
                delta.Delete('description', ['Something']),
                delta.Delete('facsimiletelephonenumber', ['+1 408 555 9876']),
                delta.Add('telephonenumber', ['+1 408 555 1234', '+1 408 555 5678']),
                delta.Delete('telephonenumber', ['+123 456']),
            ]))
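The tests above pin down the diff semantics: values present only in `b` become `Add` operations and values present only in `a` become `Delete` operations, computed per attribute. A standalone sketch of that rule over plain dicts — `diff_attrs` is a hypothetical helper, not ldaptor's API, and it orders results alphabetically by attribute for determinism, which may not match ldaptor's exact ordering:

```python
def diff_attrs(a, b):
    """Plain-dict sketch of the Add/Delete diff semantics asserted above."""
    ops = []
    for key in sorted(set(a) | set(b)):
        old = a.get(key, [])
        new = b.get(key, [])
        added = [v for v in new if v not in old]      # values only in b -> Add
        removed = [v for v in old if v not in new]    # values only in a -> Delete
        if added:
            ops.append(('add', key, added))
        if removed:
            ops.append(('delete', key, removed))
    return ops


# Mirrors testDelete_Some_OneType: only 'quux' disappears from 'baz'.
ops = diff_attrs({'foo': ['bar'], 'baz': ['quux', 'thud']},
                 {'foo': ['bar'], 'baz': ['thud']})
```

An attribute that changes in both directions (like `telephonenumber` in `testComplex`) yields both an add and a delete entry.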
| 35.108696 | 99 | 0.39644 | 521 | 6,460 | 4.877159 | 0.143954 | 0.051161 | 0.157418 | 0.155844 | 0.830382 | 0.777253 | 0.731208 | 0.731208 | 0.731208 | 0.667454 | 0 | 0.026509 | 0.451084 | 6,460 | 183 | 100 | 35.300546 | 0.690073 | 0.00418 | 0 | 0.686391 | 0 | 0 | 0.158444 | 0.020545 | 0 | 0 | 0 | 0 | 0.059172 | 1 | 0.059172 | false | 0 | 0.011834 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be3ed7a9972a216876812f4ea060a1da929410f4 | 43 | py | Python | Python/Python For Absolute Beginner/46 If __name__ == __main__ usage & necessity.py | omkarsutar1255/Python-Data | 169d0c54b23d9dd5a7f1aea41ab385121c3b3c63 | [
"CC-BY-3.0"
] | null | null | null | Python/Python For Absolute Beginner/46 If __name__ == __main__ usage & necessity.py | omkarsutar1255/Python-Data | 169d0c54b23d9dd5a7f1aea41ab385121c3b3c63 | [
"CC-BY-3.0"
] | null | null | null | Python/Python For Absolute Beginner/46 If __name__ == __main__ usage & necessity.py | omkarsutar1255/Python-Data | 169d0c54b23d9dd5a7f1aea41ab385121c3b3c63 | [
"CC-BY-3.0"
] | null | null | null | import mainfile
print(mainfile.add(5, 3))
| 10.75 | 25 | 0.744186 | 7 | 43 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.116279 | 43 | 3 | 26 | 14.333333 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
be9cfbb04012289415b3ac1b12247d2b7c5671dc | 167 | py | Python | shtk/test/__init__.py | jroose/shtk | caba1babe49399f34a7be8ab820a380e346d1515 | [
"BSD-3-Clause"
] | 24 | 2021-02-02T09:22:53.000Z | 2021-09-13T00:12:13.000Z | shtk/test/__init__.py | jroose/shtk | caba1babe49399f34a7be8ab820a380e346d1515 | [
"BSD-3-Clause"
] | 15 | 2021-02-02T03:00:35.000Z | 2022-02-20T22:48:30.000Z | shtk/test/__init__.py | jroose/shtk | caba1babe49399f34a7be8ab820a380e346d1515 | [
"BSD-3-Clause"
] | 1 | 2021-02-02T11:49:53.000Z | 2021-02-02T11:49:53.000Z | from . import util
from . import PipelineNode
from . import Stream
from . import StreamFactory
from . import PipelineNodeFactory
from . import Job
from . import Shell
| 20.875 | 33 | 0.790419 | 21 | 167 | 6.285714 | 0.428571 | 0.530303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167665 | 167 | 7 | 34 | 23.857143 | 0.94964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe4fbc9262053ecb36ad86c9ec20df889ac033dd | 49 | py | Python | dingus/tree/__init__.py | ricott1/dingus | ef0edd9fff164f54171b354714e600f410a3bbe9 | [
"MIT"
] | null | null | null | dingus/tree/__init__.py | ricott1/dingus | ef0edd9fff164f54171b354714e600f410a3bbe9 | [
"MIT"
] | null | null | null | dingus/tree/__init__.py | ricott1/dingus | ef0edd9fff164f54171b354714e600f410a3bbe9 | [
"MIT"
] | null | null | null | from .sparse_merkle_tree import SparseMerkleTree
| 24.5 | 48 | 0.897959 | 6 | 49 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feb03dd961e78df977c4f495465f0e7d3a3fb25f | 136 | py | Python | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | from allennlp.evaluation.evaluator import Evaluator, SimpleEvaluator
from allennlp.evaluation.serializers.serializers import Serializer
| 45.333333 | 68 | 0.889706 | 14 | 136 | 8.642857 | 0.571429 | 0.198347 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066176 | 136 | 2 | 69 | 68 | 0.952756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feb3224b32aad9f71a8c436dfff6d9165bc1eb85 | 161 | py | Python | tests/mocks/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | null | null | null | tests/mocks/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | 1 | 2021-10-09T20:42:03.000Z | 2021-10-09T20:42:03.000Z | tests/mocks/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | null | null | null | """Inicializacao de mocks"""
from .mock_users import mock_user
from .user_repository_spy import UserRepositorySpy
from .password_hash_spy import PasswordHashSpy
| 32.2 | 50 | 0.850932 | 21 | 161 | 6.238095 | 0.666667 | 0.137405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093168 | 161 | 4 | 51 | 40.25 | 0.89726 | 0.136646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
2290f110e542a7b5e064c894b66534e8c232f719 | 37,872 | py | Python | instances/passenger_demand/pas-20210421-2109-int4e-1/20.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int4e-1/20.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int4e-1/20.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 891
passenger_arriving = (
(2, 3, 4, 0, 0, 0, 0, 4, 2, 0, 0, 0), # 0
(1, 0, 0, 1, 0, 0, 2, 0, 2, 1, 0, 0), # 1
(2, 1, 2, 0, 1, 0, 2, 5, 1, 0, 0, 0), # 2
(1, 3, 3, 0, 1, 0, 0, 4, 3, 1, 2, 0), # 3
(1, 2, 1, 0, 2, 0, 0, 3, 2, 1, 2, 0), # 4
(0, 1, 1, 0, 1, 0, 1, 1, 2, 0, 1, 0), # 5
(0, 2, 1, 2, 2, 0, 0, 2, 2, 0, 0, 0), # 6
(2, 2, 1, 2, 0, 0, 1, 4, 3, 0, 0, 0), # 7
(2, 2, 1, 2, 1, 0, 2, 3, 0, 0, 1, 0), # 8
(2, 0, 3, 0, 0, 0, 3, 1, 1, 0, 1, 0), # 9
(1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 2, 0), # 10
(2, 2, 2, 2, 1, 0, 4, 2, 2, 1, 1, 0), # 11
(1, 0, 4, 2, 1, 0, 4, 4, 2, 3, 0, 0), # 12
(0, 0, 3, 3, 0, 0, 1, 4, 3, 0, 0, 0), # 13
(0, 1, 4, 3, 0, 0, 1, 3, 1, 1, 3, 0), # 14
(0, 2, 2, 0, 0, 0, 0, 5, 4, 1, 1, 0), # 15
(2, 5, 2, 1, 1, 0, 1, 3, 0, 1, 0, 0), # 16
(0, 4, 1, 1, 0, 0, 1, 2, 3, 3, 0, 0), # 17
(0, 2, 1, 2, 1, 0, 0, 3, 3, 1, 0, 0), # 18
(1, 1, 4, 3, 1, 0, 3, 5, 0, 2, 1, 0), # 19
(3, 1, 0, 2, 1, 0, 1, 2, 1, 1, 1, 0), # 20
(1, 3, 2, 3, 1, 0, 3, 2, 0, 0, 0, 0), # 21
(1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0), # 22
(1, 2, 2, 0, 2, 0, 2, 3, 2, 1, 1, 0), # 23
(0, 1, 2, 1, 0, 0, 2, 3, 3, 4, 0, 0), # 24
(2, 0, 4, 2, 0, 0, 4, 2, 3, 3, 1, 0), # 25
(2, 2, 1, 1, 2, 0, 1, 1, 4, 0, 0, 0), # 26
(3, 1, 1, 2, 2, 0, 2, 3, 2, 1, 1, 0), # 27
(1, 1, 3, 1, 0, 0, 1, 2, 4, 3, 0, 0), # 28
(1, 5, 1, 2, 1, 0, 1, 2, 1, 1, 0, 0), # 29
(0, 2, 2, 0, 0, 0, 2, 1, 0, 0, 0, 0), # 30
(0, 1, 0, 1, 0, 0, 1, 5, 1, 3, 2, 0), # 31
(1, 5, 4, 2, 0, 0, 4, 3, 1, 1, 0, 0), # 32
(3, 2, 3, 0, 1, 0, 2, 4, 0, 0, 0, 0), # 33
(1, 3, 2, 0, 0, 0, 0, 2, 0, 0, 2, 0), # 34
(1, 0, 1, 0, 0, 0, 0, 3, 0, 3, 1, 0), # 35
(0, 2, 2, 2, 2, 0, 1, 2, 1, 1, 1, 0), # 36
(2, 2, 1, 1, 1, 0, 4, 3, 4, 4, 3, 0), # 37
(1, 3, 3, 1, 0, 0, 1, 2, 2, 5, 0, 0), # 38
(1, 1, 3, 0, 1, 0, 4, 3, 1, 2, 0, 0), # 39
(2, 2, 2, 0, 1, 0, 3, 3, 3, 2, 0, 0), # 40
(1, 5, 3, 0, 2, 0, 1, 5, 1, 0, 0, 0), # 41
(2, 4, 2, 1, 0, 0, 3, 4, 1, 1, 0, 0), # 42
(2, 4, 4, 0, 0, 0, 0, 5, 2, 0, 0, 0), # 43
(1, 2, 3, 2, 0, 0, 1, 2, 1, 2, 2, 0), # 44
(1, 7, 2, 1, 0, 0, 0, 0, 1, 3, 1, 0), # 45
(1, 0, 2, 2, 1, 0, 1, 2, 2, 0, 0, 0), # 46
(4, 1, 0, 0, 0, 0, 0, 3, 2, 2, 1, 0), # 47
(0, 3, 4, 2, 0, 0, 2, 1, 0, 1, 1, 0), # 48
(2, 7, 2, 1, 1, 0, 4, 4, 1, 2, 0, 0), # 49
(1, 5, 1, 3, 1, 0, 1, 1, 3, 1, 1, 0), # 50
(0, 4, 2, 0, 2, 0, 6, 1, 0, 0, 0, 0), # 51
(1, 2, 3, 1, 0, 0, 2, 0, 2, 2, 0, 0), # 52
(1, 2, 0, 3, 1, 0, 1, 5, 0, 0, 0, 0), # 53
(1, 2, 3, 2, 1, 0, 2, 0, 5, 0, 2, 0), # 54
(0, 1, 1, 0, 1, 0, 2, 5, 2, 2, 0, 0), # 55
(1, 2, 0, 1, 0, 0, 1, 5, 1, 0, 2, 0), # 56
(2, 2, 3, 2, 0, 0, 1, 2, 2, 1, 0, 0), # 57
(2, 1, 4, 2, 0, 0, 3, 2, 1, 3, 1, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(1.0598606233538193, 2.7185842803030305, 3.19769440874036, 2.534510869565217, 2.8572115384615384, 1.902717391304348), # 0
(1.06980880401912, 2.7488166394851294, 3.214966215009998, 2.548625754830918, 2.8786266025641023, 1.9020688556763288), # 1
(1.0796433908886935, 2.7786005611672278, 3.2318280491288203, 2.5624299516908216, 2.8995897435897438, 1.9014004830917874), # 2
(1.089356035996488, 2.8079039062500004, 3.248267593187661, 2.5759116847826085, 2.9200817307692315, 1.900712364130435), # 3
(1.0989383913764512, 2.8366945356341193, 3.264272529277349, 2.5890591787439616, 2.940083333333334, 1.9000045893719806), # 4
(1.1083821090625305, 2.8649403102202586, 3.2798305394887173, 2.6018606582125603, 2.9595753205128204, 1.8992772493961352), # 5
(1.1176788410886744, 2.892609090909091, 3.2949293059125973, 2.6143043478260872, 2.9785384615384616, 1.8985304347826089), # 6
(1.1268202394888305, 2.9196687386012905, 3.3095565106398173, 2.6263784722222225, 2.9969535256410262, 1.8977642361111113), # 7
(1.1357979562969467, 2.9460871141975304, 3.3236998357612118, 2.6380712560386472, 3.014801282051282, 1.8969787439613528), # 8
(1.144603643546971, 2.971832078598485, 3.33734696336761, 2.6493709239130436, 3.0320625, 1.8961740489130436), # 9
(1.1532289532728515, 2.9968714927048263, 3.350485575549843, 2.6602657004830923, 3.048717948717949, 1.8953502415458936), # 10
(1.1616655375085359, 3.0211732174172283, 3.363103354398743, 2.670743810386474, 3.0647483974358973, 1.8945074124396137), # 11
(1.169905048287972, 3.0447051136363634, 3.3751879820051416, 2.6807934782608696, 3.0801346153846154, 1.893645652173913), # 12
(1.1779391376451076, 3.0674350422629075, 3.3867271404598682, 2.6904029287439615, 3.0948573717948724, 1.8927650513285024), # 13
(1.1857594576138915, 3.0893308641975317, 3.3977085118537556, 2.6995603864734297, 3.1088974358974357, 1.8918657004830917), # 14
(1.1933576602282703, 3.1103604403409086, 3.4081197782776345, 2.7082540760869565, 3.122235576923077, 1.8909476902173914), # 15
(1.200725397522193, 3.1304916315937152, 3.4179486218223367, 2.7164722222222224, 3.134852564102564, 1.8900111111111113), # 16
(1.207854321529607, 3.1496922988566216, 3.427182724578692, 2.7242030495169085, 3.146729166666667, 1.8890560537439616), # 17
(1.2147360842844601, 3.167930303030303, 3.4358097686375326, 2.731434782608696, 3.1578461538461546, 1.8880826086956521), # 18
(1.2213623378207004, 3.185173505015432, 3.443817436089689, 2.7381556461352656, 3.168184294871795, 1.8870908665458939), # 19
(1.227724734172276, 3.201389765712682, 3.451193409025992, 2.7443538647343, 3.177724358974359, 1.886080917874396), # 20
(1.2338149253731345, 3.2165469460227265, 3.457925369537276, 2.7500176630434785, 3.186447115384615, 1.8850528532608697), # 21
(1.2396245634572236, 3.2306129068462406, 3.4640009997143673, 2.755135265700483, 3.194333333333333, 1.8840067632850244), # 22
(1.2451453004584918, 3.243555509083895, 3.469407981648101, 2.7596948973429956, 3.201363782051282, 1.8829427385265705), # 23
(1.2503687884108867, 3.2553426136363637, 3.4741339974293055, 2.7636847826086957, 3.207519230769231, 1.8818608695652175), # 24
(1.2552866793483561, 3.265942081404321, 3.478166729148815, 2.767093146135266, 3.2127804487179485, 1.8807612469806765), # 25
(1.2598906253048483, 3.2753217732884394, 3.481493858897458, 2.769908212560386, 3.217128205128205, 1.879643961352657), # 26
(1.2641722783143108, 3.2834495501893937, 3.4841030687660663, 2.7721182065217396, 3.22054326923077, 1.8785091032608698), # 27
(1.2681232904106918, 3.2902932730078565, 3.485982040845473, 2.773711352657005, 3.22300641025641, 1.8773567632850243), # 28
(1.2717353136279388, 3.295820802644501, 3.4871184572265066, 2.774675875603865, 3.2244983974358976, 1.876187032004831), # 29
(1.2750000000000001, 3.3000000000000003, 3.4875000000000003, 2.7750000000000004, 3.225, 1.875), # 30
(1.2780548113810744, 3.3034715198863633, 3.487213979468599, 2.774941462418301, 3.2248174645390075, 1.8733505039147094), # 31
(1.2810436700767265, 3.3068971590909095, 3.486364009661836, 2.774766993464052, 3.224273758865248, 1.8708099033816428), # 32
(1.2839679187979538, 3.310276491477273, 3.48496222826087, 2.7744783088235296, 3.223374734042553, 1.867403073463268), # 33
(1.2868289002557545, 3.3136090909090914, 3.48302077294686, 2.774077124183007, 3.222126241134752, 1.8631548892220557), # 34
(1.2896279571611253, 3.3168945312499996, 3.4805517814009663, 2.7735651552287583, 3.220534131205674, 1.8580902257204732), # 35
(1.292366432225064, 3.320132386363637, 3.4775673913043477, 2.772944117647059, 3.2186042553191494, 1.8522339580209897), # 36
(1.2950456681585678, 3.3233222301136367, 3.4740797403381642, 2.772215727124183, 3.216342464539007, 1.8456109611860736), # 37
(1.2976670076726342, 3.3264636363636364, 3.470100966183575, 2.771381699346405, 3.213754609929078, 1.838246110278194), # 38
(1.300231793478261, 3.3295561789772727, 3.465643206521739, 2.77044375, 3.2108465425531914, 1.83016428035982), # 39
(1.302741368286445, 3.332599431818182, 3.4607185990338167, 2.769403594771242, 3.207624113475177, 1.8213903464934198), # 40
(1.3051970748081843, 3.3355929687499994, 3.4553392814009665, 2.7682629493464055, 3.2040931737588654, 1.811949183741463), # 41
(1.3076002557544757, 3.3385363636363645, 3.449517391304348, 2.7670235294117647, 3.2002595744680855, 1.8018656671664168), # 42
(1.3099522538363173, 3.341429190340909, 3.4432650664251208, 2.765687050653595, 3.1961291666666667, 1.7911646718307512), # 43
(1.312254411764706, 3.344271022727273, 3.4365944444444447, 2.76425522875817, 3.19170780141844, 1.7798710727969347), # 44
(1.3145080722506393, 3.347061434659091, 3.429517663043479, 2.7627297794117647, 3.1870013297872344, 1.7680097451274364), # 45
(1.3167145780051153, 3.3498000000000006, 3.422046859903382, 2.7611124183006535, 3.18201560283688, 1.7556055638847246), # 46
(1.3188752717391306, 3.3524862926136367, 3.414194172705314, 2.7594048611111113, 3.1767564716312062, 1.7426834041312678), # 47
(1.320991496163683, 3.355119886363637, 3.4059717391304347, 2.7576088235294116, 3.1712297872340427, 1.7292681409295356), # 48
(1.3230645939897698, 3.3577003551136357, 3.397391696859904, 2.7557260212418306, 3.16544140070922, 1.7153846493419955), # 49
(1.3250959079283888, 3.360227272727272, 3.3884661835748795, 2.753758169934641, 3.1593971631205675, 1.7010578044311178), # 50
(1.3270867806905373, 3.362700213068182, 3.379207336956522, 2.751706985294118, 3.1531029255319147, 1.6863124812593704), # 51
(1.3290385549872124, 3.36511875, 3.36962729468599, 2.749574183006536, 3.1465645390070924, 1.6711735548892221), # 52
(1.3309525735294119, 3.367482457386364, 3.3597381944444447, 2.74736147875817, 3.1397878546099296, 1.6556659003831418), # 53
(1.3328301790281332, 3.369790909090909, 3.349552173913043, 2.7450705882352944, 3.1327787234042557, 1.6398143928035982), # 54
(1.3346727141943733, 3.3720436789772728, 3.3390813707729468, 2.742703227124183, 3.125542996453901, 1.6236439072130602), # 55
(1.3364815217391304, 3.3742403409090906, 3.3283379227053143, 2.7402611111111113, 3.1180865248226954, 1.6071793186739964), # 56
(1.3382579443734017, 3.3763804687500008, 3.3173339673913045, 2.737745955882353, 3.110415159574468, 1.5904455022488755), # 57
(1.3400033248081842, 3.378463636363636, 3.306081642512077, 2.735159477124183, 3.1025347517730495, 1.5734673330001667), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(2, 3, 4, 0, 0, 0, 0, 4, 2, 0, 0, 0), # 0
(3, 3, 4, 1, 0, 0, 2, 4, 4, 1, 0, 0), # 1
(5, 4, 6, 1, 1, 0, 4, 9, 5, 1, 0, 0), # 2
(6, 7, 9, 1, 2, 0, 4, 13, 8, 2, 2, 0), # 3
(7, 9, 10, 1, 4, 0, 4, 16, 10, 3, 4, 0), # 4
(7, 10, 11, 1, 5, 0, 5, 17, 12, 3, 5, 0), # 5
(7, 12, 12, 3, 7, 0, 5, 19, 14, 3, 5, 0), # 6
(9, 14, 13, 5, 7, 0, 6, 23, 17, 3, 5, 0), # 7
(11, 16, 14, 7, 8, 0, 8, 26, 17, 3, 6, 0), # 8
(13, 16, 17, 7, 8, 0, 11, 27, 18, 3, 7, 0), # 9
(14, 17, 18, 10, 8, 0, 12, 27, 19, 4, 9, 0), # 10
(16, 19, 20, 12, 9, 0, 16, 29, 21, 5, 10, 0), # 11
(17, 19, 24, 14, 10, 0, 20, 33, 23, 8, 10, 0), # 12
(17, 19, 27, 17, 10, 0, 21, 37, 26, 8, 10, 0), # 13
(17, 20, 31, 20, 10, 0, 22, 40, 27, 9, 13, 0), # 14
(17, 22, 33, 20, 10, 0, 22, 45, 31, 10, 14, 0), # 15
(19, 27, 35, 21, 11, 0, 23, 48, 31, 11, 14, 0), # 16
(19, 31, 36, 22, 11, 0, 24, 50, 34, 14, 14, 0), # 17
(19, 33, 37, 24, 12, 0, 24, 53, 37, 15, 14, 0), # 18
(20, 34, 41, 27, 13, 0, 27, 58, 37, 17, 15, 0), # 19
(23, 35, 41, 29, 14, 0, 28, 60, 38, 18, 16, 0), # 20
(24, 38, 43, 32, 15, 0, 31, 62, 38, 18, 16, 0), # 21
(25, 38, 44, 32, 16, 0, 32, 63, 38, 19, 16, 0), # 22
(26, 40, 46, 32, 18, 0, 34, 66, 40, 20, 17, 0), # 23
(26, 41, 48, 33, 18, 0, 36, 69, 43, 24, 17, 0), # 24
(28, 41, 52, 35, 18, 0, 40, 71, 46, 27, 18, 0), # 25
(30, 43, 53, 36, 20, 0, 41, 72, 50, 27, 18, 0), # 26
(33, 44, 54, 38, 22, 0, 43, 75, 52, 28, 19, 0), # 27
(34, 45, 57, 39, 22, 0, 44, 77, 56, 31, 19, 0), # 28
(35, 50, 58, 41, 23, 0, 45, 79, 57, 32, 19, 0), # 29
(35, 52, 60, 41, 23, 0, 47, 80, 57, 32, 19, 0), # 30
(35, 53, 60, 42, 23, 0, 48, 85, 58, 35, 21, 0), # 31
(36, 58, 64, 44, 23, 0, 52, 88, 59, 36, 21, 0), # 32
(39, 60, 67, 44, 24, 0, 54, 92, 59, 36, 21, 0), # 33
(40, 63, 69, 44, 24, 0, 54, 94, 59, 36, 23, 0), # 34
(41, 63, 70, 44, 24, 0, 54, 97, 59, 39, 24, 0), # 35
(41, 65, 72, 46, 26, 0, 55, 99, 60, 40, 25, 0), # 36
(43, 67, 73, 47, 27, 0, 59, 102, 64, 44, 28, 0), # 37
(44, 70, 76, 48, 27, 0, 60, 104, 66, 49, 28, 0), # 38
(45, 71, 79, 48, 28, 0, 64, 107, 67, 51, 28, 0), # 39
(47, 73, 81, 48, 29, 0, 67, 110, 70, 53, 28, 0), # 40
(48, 78, 84, 48, 31, 0, 68, 115, 71, 53, 28, 0), # 41
(50, 82, 86, 49, 31, 0, 71, 119, 72, 54, 28, 0), # 42
(52, 86, 90, 49, 31, 0, 71, 124, 74, 54, 28, 0), # 43
(53, 88, 93, 51, 31, 0, 72, 126, 75, 56, 30, 0), # 44
(54, 95, 95, 52, 31, 0, 72, 126, 76, 59, 31, 0), # 45
(55, 95, 97, 54, 32, 0, 73, 128, 78, 59, 31, 0), # 46
(59, 96, 97, 54, 32, 0, 73, 131, 80, 61, 32, 0), # 47
(59, 99, 101, 56, 32, 0, 75, 132, 80, 62, 33, 0), # 48
(61, 106, 103, 57, 33, 0, 79, 136, 81, 64, 33, 0), # 49
(62, 111, 104, 60, 34, 0, 80, 137, 84, 65, 34, 0), # 50
(62, 115, 106, 60, 36, 0, 86, 138, 84, 65, 34, 0), # 51
(63, 117, 109, 61, 36, 0, 88, 138, 86, 67, 34, 0), # 52
(64, 119, 109, 64, 37, 0, 89, 143, 86, 67, 34, 0), # 53
(65, 121, 112, 66, 38, 0, 91, 143, 91, 67, 36, 0), # 54
(65, 122, 113, 66, 39, 0, 93, 148, 93, 69, 36, 0), # 55
(66, 124, 113, 67, 39, 0, 94, 153, 94, 69, 38, 0), # 56
(68, 126, 116, 69, 39, 0, 95, 155, 96, 70, 38, 0), # 57
(70, 127, 120, 71, 39, 0, 98, 157, 97, 73, 39, 0), # 58
(70, 127, 120, 71, 39, 0, 98, 157, 97, 73, 39, 0), # 59
)
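`passenger_arriving_acc` is the running column-wise sum of `passenger_arriving` (e.g. the first columns of rows 0–2 are (2, 3, 4) → accumulated (2,), (3,), (5,)). A small sketch of computing such an accumulated table, using toy rows rather than the full tables above:

```python
def accumulate(rows):
    """Running column-wise sums: acc[t][j] == sum(rows[i][j] for i <= t)."""
    acc = []
    totals = [0] * len(rows[0])
    for row in rows:
        totals = [t + v for t, v in zip(totals, row)]
        acc.append(tuple(totals))
    return tuple(acc)


# Toy data matching the first three columns of rows 0-2 above.
arriving = ((2, 3, 4), (1, 0, 0), (2, 1, 2))
acc = accumulate(arriving)  # ((2, 3, 4), (3, 3, 4), (5, 4, 6))
```

Since the final `passenger_arriving` row is all zeros, the last two rows of the accumulated table are identical, as they are in the data above.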
passenger_arriving_rate = (
(1.0598606233538193, 2.174867424242424, 1.918616645244216, 1.0138043478260867, 0.5714423076923076, 0.0, 1.902717391304348, 2.2857692307692306, 1.5207065217391302, 1.2790777634961439, 0.543716856060606, 0.0), # 0
(1.06980880401912, 2.1990533115881035, 1.9289797290059987, 1.019450301932367, 0.5757253205128204, 0.0, 1.9020688556763288, 2.3029012820512818, 1.5291754528985508, 1.2859864860039991, 0.5497633278970259, 0.0), # 1
(1.0796433908886935, 2.222880448933782, 1.9390968294772921, 1.0249719806763284, 0.5799179487179487, 0.0, 1.9014004830917874, 2.3196717948717946, 1.5374579710144929, 1.292731219651528, 0.5557201122334455, 0.0), # 2
(1.089356035996488, 2.246323125, 1.9489605559125964, 1.0303646739130432, 0.5840163461538462, 0.0, 1.900712364130435, 2.336065384615385, 1.545547010869565, 1.2993070372750644, 0.56158078125, 0.0), # 3
(1.0989383913764512, 2.269355628507295, 1.9585635175664093, 1.0356236714975846, 0.5880166666666667, 0.0, 1.9000045893719806, 2.352066666666667, 1.553435507246377, 1.3057090117109396, 0.5673389071268238, 0.0), # 4
(1.1083821090625305, 2.291952248176207, 1.9678983236932304, 1.0407442632850241, 0.591915064102564, 0.0, 1.8992772493961352, 2.367660256410256, 1.5611163949275362, 1.311932215795487, 0.5729880620440517, 0.0), # 5
(1.1176788410886744, 2.3140872727272725, 1.9769575835475584, 1.0457217391304348, 0.5957076923076923, 0.0, 1.8985304347826089, 2.382830769230769, 1.5685826086956522, 1.317971722365039, 0.5785218181818181, 0.0), # 6
(1.1268202394888305, 2.3357349908810323, 1.9857339063838904, 1.0505513888888889, 0.5993907051282052, 0.0, 1.8977642361111113, 2.397562820512821, 1.5758270833333334, 1.323822604255927, 0.5839337477202581, 0.0), # 7
(1.1357979562969467, 2.3568696913580243, 1.994219901456727, 1.0552285024154588, 0.6029602564102564, 0.0, 1.8969787439613528, 2.4118410256410256, 1.5828427536231884, 1.3294799343044845, 0.5892174228395061, 0.0), # 8
(1.144603643546971, 2.377465662878788, 2.002408178020566, 1.0597483695652175, 0.6064124999999999, 0.0, 1.8961740489130436, 2.4256499999999996, 1.5896225543478262, 1.3349387853470438, 0.594366415719697, 0.0), # 9
(1.1532289532728515, 2.397497194163861, 2.0102913453299056, 1.0641062801932368, 0.6097435897435898, 0.0, 1.8953502415458936, 2.438974358974359, 1.5961594202898552, 1.340194230219937, 0.5993742985409652, 0.0), # 10
(1.1616655375085359, 2.4169385739337823, 2.0178620126392457, 1.0682975241545893, 0.6129496794871794, 0.0, 1.8945074124396137, 2.4517987179487175, 1.6024462862318842, 1.345241341759497, 0.6042346434834456, 0.0), # 11
(1.169905048287972, 2.4357640909090907, 2.0251127892030847, 1.0723173913043478, 0.6160269230769231, 0.0, 1.893645652173913, 2.4641076923076923, 1.6084760869565218, 1.3500751928020565, 0.6089410227272727, 0.0), # 12
(1.1779391376451076, 2.4539480338103257, 2.0320362842759208, 1.0761611714975845, 0.6189714743589745, 0.0, 1.8927650513285024, 2.475885897435898, 1.6142417572463768, 1.3546908561839472, 0.6134870084525814, 0.0), # 13
(1.1857594576138915, 2.471464691358025, 2.0386251071122534, 1.0798241545893719, 0.6217794871794871, 0.0, 1.8918657004830917, 2.4871179487179482, 1.6197362318840578, 1.3590834047415021, 0.6178661728395063, 0.0), # 14
(1.1933576602282703, 2.488288352272727, 2.0448718669665804, 1.0833016304347824, 0.6244471153846154, 0.0, 1.8909476902173914, 2.4977884615384616, 1.6249524456521738, 1.3632479113110536, 0.6220720880681817, 0.0), # 15
(1.200725397522193, 2.504393305274972, 2.050769173093402, 1.0865888888888888, 0.6269705128205127, 0.0, 1.8900111111111113, 2.507882051282051, 1.6298833333333334, 1.3671794487289346, 0.626098326318743, 0.0), # 16
(1.207854321529607, 2.519753839085297, 2.056309634747215, 1.0896812198067634, 0.6293458333333333, 0.0, 1.8890560537439616, 2.517383333333333, 1.6345218297101451, 1.3708730898314767, 0.6299384597713242, 0.0), # 17
(1.2147360842844601, 2.5343442424242424, 2.0614858611825193, 1.0925739130434784, 0.6315692307692309, 0.0, 1.8880826086956521, 2.5262769230769235, 1.6388608695652176, 1.3743239074550129, 0.6335860606060606, 0.0), # 18
(1.2213623378207004, 2.5481388040123454, 2.0662904616538134, 1.095262258454106, 0.633636858974359, 0.0, 1.8870908665458939, 2.534547435897436, 1.6428933876811593, 1.3775269744358754, 0.6370347010030863, 0.0), # 19
(1.227724734172276, 2.5611118125701453, 2.0707160454155953, 1.0977415458937199, 0.6355448717948717, 0.0, 1.886080917874396, 2.542179487179487, 1.64661231884058, 1.3804773636103966, 0.6402779531425363, 0.0), # 20
(1.2338149253731345, 2.573237556818181, 2.0747552217223655, 1.1000070652173912, 0.6372894230769229, 0.0, 1.8850528532608697, 2.5491576923076917, 1.650010597826087, 1.38317014781491, 0.6433093892045453, 0.0), # 21
(1.2396245634572236, 2.584490325476992, 2.0784005998286204, 1.1020541062801932, 0.6388666666666665, 0.0, 1.8840067632850244, 2.555466666666666, 1.6530811594202899, 1.3856003998857467, 0.646122581369248, 0.0), # 22
(1.2451453004584918, 2.5948444072671157, 2.0816447889888603, 1.103877958937198, 0.6402727564102564, 0.0, 1.8829427385265705, 2.5610910256410255, 1.6558169384057972, 1.3877631926592402, 0.6487111018167789, 0.0), # 23
(1.2503687884108867, 2.6042740909090907, 2.084480398457583, 1.105473913043478, 0.6415038461538461, 0.0, 1.8818608695652175, 2.5660153846153846, 1.6582108695652173, 1.389653598971722, 0.6510685227272727, 0.0), # 24
(1.2552866793483561, 2.6127536651234564, 2.086900037489289, 1.1068372584541064, 0.6425560897435897, 0.0, 1.8807612469806765, 2.5702243589743587, 1.6602558876811597, 1.3912666916595258, 0.6531884162808641, 0.0), # 25
(1.2598906253048483, 2.6202574186307515, 2.0888963153384745, 1.1079632850241543, 0.6434256410256409, 0.0, 1.879643961352657, 2.5737025641025637, 1.6619449275362317, 1.392597543558983, 0.6550643546576879, 0.0), # 26
(1.2641722783143108, 2.6267596401515148, 2.0904618412596396, 1.1088472826086957, 0.6441086538461539, 0.0, 1.8785091032608698, 2.5764346153846156, 1.6632709239130437, 1.3936412275064265, 0.6566899100378787, 0.0), # 27
(1.2681232904106918, 2.632234618406285, 2.091589224507284, 1.1094845410628018, 0.644601282051282, 0.0, 1.8773567632850243, 2.578405128205128, 1.664226811594203, 1.3943928163381891, 0.6580586546015712, 0.0), # 28
(1.2717353136279388, 2.6366566421156006, 2.092271074335904, 1.109870350241546, 0.6448996794871795, 0.0, 1.876187032004831, 2.579598717948718, 1.664805525362319, 1.3948473828906025, 0.6591641605289001, 0.0), # 29
(1.2750000000000001, 2.64, 2.0925000000000002, 1.11, 0.645, 0.0, 1.875, 2.58, 1.6650000000000003, 1.395, 0.66, 0.0), # 30
(1.2780548113810744, 2.6427772159090903, 2.0923283876811594, 1.1099765849673204, 0.6449634929078014, 0.0, 1.8733505039147094, 2.5798539716312057, 1.6649648774509807, 1.3948855917874394, 0.6606943039772726, 0.0), # 31
(1.2810436700767265, 2.6455177272727273, 2.091818405797101, 1.1099067973856207, 0.6448547517730496, 0.0, 1.8708099033816428, 2.5794190070921985, 1.664860196078431, 1.3945456038647341, 0.6613794318181818, 0.0), # 32
(1.2839679187979538, 2.6482211931818185, 2.090977336956522, 1.1097913235294117, 0.6446749468085106, 0.0, 1.867403073463268, 2.5786997872340423, 1.6646869852941177, 1.393984891304348, 0.6620552982954546, 0.0), # 33
(1.2868289002557545, 2.6508872727272728, 2.089812463768116, 1.1096308496732028, 0.6444252482269504, 0.0, 1.8631548892220557, 2.5777009929078014, 1.6644462745098043, 1.393208309178744, 0.6627218181818182, 0.0), # 34
(1.2896279571611253, 2.6535156249999994, 2.0883310688405796, 1.1094260620915033, 0.6441068262411348, 0.0, 1.8580902257204732, 2.576427304964539, 1.664139093137255, 1.3922207125603865, 0.6633789062499998, 0.0), # 35
(1.292366432225064, 2.6561059090909094, 2.0865404347826084, 1.1091776470588235, 0.6437208510638298, 0.0, 1.8522339580209897, 2.5748834042553193, 1.6637664705882353, 1.391026956521739, 0.6640264772727273, 0.0), # 36
(1.2950456681585678, 2.6586577840909094, 2.0844478442028986, 1.1088862908496733, 0.6432684929078013, 0.0, 1.8456109611860736, 2.573073971631205, 1.66332943627451, 1.3896318961352656, 0.6646644460227273, 0.0), # 37
(1.2976670076726342, 2.661170909090909, 2.082060579710145, 1.1085526797385619, 0.6427509219858156, 0.0, 1.838246110278194, 2.5710036879432625, 1.662829019607843, 1.3880403864734299, 0.6652927272727273, 0.0), # 38
(1.300231793478261, 2.663644943181818, 2.079385923913043, 1.1081775, 0.6421693085106382, 0.0, 1.83016428035982, 2.568677234042553, 1.66226625, 1.3862572826086954, 0.6659112357954545, 0.0), # 39
(1.302741368286445, 2.6660795454545454, 2.0764311594202898, 1.1077614379084968, 0.6415248226950353, 0.0, 1.8213903464934198, 2.566099290780141, 1.6616421568627453, 1.3842874396135265, 0.6665198863636363, 0.0), # 40
(1.3051970748081843, 2.6684743749999993, 2.07320356884058, 1.107305179738562, 0.640818634751773, 0.0, 1.811949183741463, 2.563274539007092, 1.6609577696078432, 1.3821357125603864, 0.6671185937499998, 0.0), # 41
(1.3076002557544757, 2.6708290909090913, 2.0697104347826087, 1.1068094117647058, 0.640051914893617, 0.0, 1.8018656671664168, 2.560207659574468, 1.6602141176470588, 1.3798069565217392, 0.6677072727272728, 0.0), # 42
(1.3099522538363173, 2.673143352272727, 2.0659590398550725, 1.1062748202614379, 0.6392258333333333, 0.0, 1.7911646718307512, 2.556903333333333, 1.6594122303921568, 1.377306026570048, 0.6682858380681818, 0.0), # 43
(1.312254411764706, 2.675416818181818, 2.0619566666666667, 1.105702091503268, 0.6383415602836879, 0.0, 1.7798710727969347, 2.5533662411347517, 1.658553137254902, 1.3746377777777778, 0.6688542045454545, 0.0), # 44
(1.3145080722506393, 2.6776491477272724, 2.0577105978260875, 1.1050919117647058, 0.6374002659574468, 0.0, 1.7680097451274364, 2.5496010638297872, 1.6576378676470587, 1.3718070652173915, 0.6694122869318181, 0.0), # 45
(1.3167145780051153, 2.67984, 2.053228115942029, 1.1044449673202612, 0.6364031205673759, 0.0, 1.7556055638847246, 2.5456124822695037, 1.656667450980392, 1.3688187439613528, 0.66996, 0.0), # 46
(1.3188752717391306, 2.681989034090909, 2.0485165036231883, 1.1037619444444444, 0.6353512943262412, 0.0, 1.7426834041312678, 2.5414051773049646, 1.6556429166666666, 1.3656776690821253, 0.6704972585227272, 0.0), # 47
(1.320991496163683, 2.6840959090909093, 2.043583043478261, 1.1030435294117646, 0.6342459574468085, 0.0, 1.7292681409295356, 2.536983829787234, 1.654565294117647, 1.3623886956521738, 0.6710239772727273, 0.0), # 48
(1.3230645939897698, 2.6861602840909082, 2.0384350181159423, 1.1022904084967322, 0.6330882801418439, 0.0, 1.7153846493419955, 2.5323531205673757, 1.6534356127450984, 1.3589566787439613, 0.6715400710227271, 0.0), # 49
(1.3250959079283888, 2.688181818181817, 2.0330797101449276, 1.1015032679738563, 0.6318794326241135, 0.0, 1.7010578044311178, 2.527517730496454, 1.6522549019607846, 1.3553864734299517, 0.6720454545454543, 0.0), # 50
(1.3270867806905373, 2.6901601704545453, 2.027524402173913, 1.1006827941176471, 0.6306205851063829, 0.0, 1.6863124812593704, 2.5224823404255314, 1.6510241911764707, 1.3516829347826087, 0.6725400426136363, 0.0), # 51
(1.3290385549872124, 2.6920949999999997, 2.021776376811594, 1.0998296732026143, 0.6293129078014185, 0.0, 1.6711735548892221, 2.517251631205674, 1.6497445098039216, 1.347850917874396, 0.6730237499999999, 0.0), # 52
(1.3309525735294119, 2.693985965909091, 2.0158429166666667, 1.098944591503268, 0.6279575709219859, 0.0, 1.6556659003831418, 2.5118302836879436, 1.648416887254902, 1.3438952777777777, 0.6734964914772728, 0.0), # 53
(1.3328301790281332, 2.6958327272727267, 2.009731304347826, 1.0980282352941177, 0.6265557446808511, 0.0, 1.6398143928035982, 2.5062229787234043, 1.6470423529411766, 1.3398208695652172, 0.6739581818181817, 0.0), # 54
(1.3346727141943733, 2.697634943181818, 2.003448822463768, 1.0970812908496732, 0.6251085992907801, 0.0, 1.6236439072130602, 2.5004343971631204, 1.64562193627451, 1.3356325483091787, 0.6744087357954545, 0.0), # 55
(1.3364815217391304, 2.6993922727272723, 1.9970027536231885, 1.0961044444444443, 0.623617304964539, 0.0, 1.6071793186739964, 2.494469219858156, 1.6441566666666667, 1.3313351690821256, 0.6748480681818181, 0.0), # 56
(1.3382579443734017, 2.7011043750000003, 1.9904003804347825, 1.0950983823529412, 0.6220830319148936, 0.0, 1.5904455022488755, 2.4883321276595742, 1.642647573529412, 1.3269335869565217, 0.6752760937500001, 0.0), # 57
(1.3400033248081842, 2.7027709090909084, 1.983648985507246, 1.094063790849673, 0.6205069503546099, 0.0, 1.5734673330001667, 2.4820278014184396, 1.6410956862745099, 1.3224326570048308, 0.6756927272727271, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
19, # 1
)
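# A minimal sketch (an assumption: the child streams were produced with
# numpy's SeedSequence.spawn recipe from the page linked above) showing how
# the stored entropy and child indices rebuild the same child generators:

```python
import numpy as np

entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 19)

# spawn enough children, then keep only the recorded indices
ss = np.random.SeedSequence(entropy)
children = ss.spawn(max(child_seed_index) + 1)
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

# Each child SeedSequence keeps the parent entropy and gets spawn_key (i,),
# which is what makes the streams reproducible across runs.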
| 113.050746 | 218 | 0.728982 | 5,147 | 37,872 | 5.361764 | 0.191179 | 0.313078 | 0.247853 | 0.469616 | 0.344313 | 0.335435 | 0.331014 | 0.328659 | 0.327499 | 0.327499 | 0 | 0.818934 | 0.119191 | 37,872 | 334 | 219 | 113.389222 | 0.008364 | 0.031976 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2297ca055c7fc60d89b00e5bcffc1615082f847a | 31 | py | Python | HelloWorld.py | Vinutha2905/Python_RestAPI | 4c185d37d32c3b5f00154f4be1b4ad0d2fab6d66 | [
"MIT"
] | null | null | null | HelloWorld.py | Vinutha2905/Python_RestAPI | 4c185d37d32c3b5f00154f4be1b4ad0d2fab6d66 | [
"MIT"
] | null | null | null | HelloWorld.py | Vinutha2905/Python_RestAPI | 4c185d37d32c3b5f00154f4be1b4ad0d2fab6d66 | [
"MIT"
] | null | null | null | print("Hello World from Dell")
| 15.5 | 30 | 0.741935 | 5 | 31 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
22b0b5f94b71906ed863f84fbffb2978dd6d27d0 | 154 | py | Python | week3/5_class_basics/ex1.py | skku-overflow/python-2020-2 | def09d9a8ff32ee085edaa5eca89ccc03c29af2a | [
"Apache-2.0"
] | null | null | null | week3/5_class_basics/ex1.py | skku-overflow/python-2020-2 | def09d9a8ff32ee085edaa5eca89ccc03c29af2a | [
"Apache-2.0"
] | null | null | null | week3/5_class_basics/ex1.py | skku-overflow/python-2020-2 | def09d9a8ff32ee085edaa5eca89ccc03c29af2a | [
"Apache-2.0"
] | null | null | null | # C: int main { }
# Python: no { } (blocks are delimited by indentation)
def empty():
    pass  # 'pass' is a no-op placeholder for an empty body

def unwanted():
    # a nested def can itself serve as the enclosing function's body
    def my_fn():
        pass

def unwanted1():
    pass
    def my_fn2():
        pass
# File: neighbors/data/__init__.py (repo: smoh/neighbors, license: MIT)
from .datasets import *
from .mock import *
#!/usr/bin/python3
# File: geone/covModel.py (repo: pjuda/geone, license: BSD-4-Clause-UC)
# -*- coding: utf-8 -*-
"""
Python module: 'covModel.py'
authors: Julien Straubhaar and Philippe Renard
date: 2018-2020
Module for:
- definition of (classic) covariance / variogram models in 1D, 2D, and 3D
(omni-directional or anisotropic)
- covariance / variogram analysis and fitting
- ordinary kriging
- cross-validation (leave-one-out (loo))
"""
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.optimize import curve_fit
from scipy import stats
import pyvista as pv
import copy
from geone import img
from geone import imgplot as imgplt
from geone import imgplot3d as imgplt3
# ============================================================================
# Definition of 1D elementary covariance models:
# - nugget, spherical, exponential, gaussian, cubic,
# - power (non-stationary)
# ============================================================================
# ----------------------------------------------------------------------------
def cov_nug(h, w=1.0):
"""
1D-nugget covariance model:
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:return: (1-dimensional array or float) evaluation of the model at h
"""
return (w * np.asarray(h==0., dtype=float))
def cov_sph(h, w=1.0, r=1.0):
"""
    1D-spherical covariance model:
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:param r: (float >0): range
:return: (1-dimensional array or float) evaluation of the model at h
"""
t = np.minimum(np.abs(h)/r, 1.) # "parallel or element-wise minimum"
return (w * (1 - 0.5 * t * (3. - t**2))) # w * (1 - 3/2 * t + 1/2 * t^3)
def cov_exp(h, w=1.0, r=1.0):
"""
    1D-exponential covariance model:
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:param r: (float >0): range
:return: (1-dimensional array or float) evaluation of the model at h
"""
return (w * np.exp(-3. * np.abs(h)/r)) # w * exp(-3*|h|/r)
def cov_gau(h, w=1.0, r=1.0):
"""
    1D-gaussian covariance model:
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:param r: (float >0): range
:return: (1-dimensional array or float) evaluation of the model at h
"""
return (w * np.exp(-3. * (h/r)**2)) # w * exp(-3*(h/r)^2)
def cov_cub(h, w=1.0, r=1.0):
"""
    1D-cubic covariance model:
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:param r: (float >0): range
:return: (1-dimensional array or float) evaluation of the model at h
"""
t = np.minimum(np.abs(h)/r, 1.) # "parallel or element-wise minimum"
t2 = t**2
return (w * (1 + t2 * (-7. + t * (8.75 + t2 * (-3.5 + 0.75 * t2))))) # w * (1 - 7 * t^2 + 35/4 * t^3 - 7/2 * t^5 + 3/4 * t^7)
def cov_pow(h, w=1.0, r=1.0, s=1.0):
"""
    1D-power covariance model (non-stationary):
:param h: (1-dimensional array or float): lag(s)
:param w: (float >0): weight (sill)
:param r: (float >0): range
:param s: (float btw 0 and 2): power
:return: (1-dimensional array or float) evaluation of the model at h
"""
return (w * (1. - (h/r)**s))
# ----------------------------------------------------------------------------
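# Quick sanity check of the elementary models above (standalone sketch;
# cov_sph is restated here so the snippet runs on its own): at h=0 a
# stationary model returns the sill w, and the spherical model vanishes
# at and beyond the range r.

```python
import numpy as np

def cov_sph(h, w=1.0, r=1.0):
    # same formula as above: w * (1 - 3/2*t + 1/2*t^3), with t = min(|h|/r, 1)
    t = np.minimum(np.abs(h) / r, 1.0)
    return w * (1.0 - 0.5 * t * (3.0 - t**2))

h = np.array([0.0, 5.0, 10.0, 20.0])
c = cov_sph(h, w=2.0, r=10.0)
# c[0] = 2.0 (the sill); c[2] = c[3] = 0.0 (at and beyond the range)
```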
# ============================================================================
# Definition of class for covariance models in 1D, 2D, 3D, as combination
# of elementary models and accounting for anisotropy and rotation
# ============================================================================
# ----------------------------------------------------------------------------
class CovModel1D (object):
"""
Defines a covariance model in 1D:
elem: (sequence of 2-tuple) an entry (t, d) of the sequence
corresponds to an elementary model with:
t: (string) the type, could be
'nugget', 'spherical', 'exponential', 'gaussian',
'cubic', 'power'
d: (dict) dictionary of required parameters to be
passed to the elementary model,
e.g.
(t, d) = ('power', {w:2.0, r:1.5, s:1.7})
the final model is the sum of the elementary models
name: (string) name of the model
Example: to define a covariance model (1D) that is a combination of
2 elementary structures:
- gaussian with a contribution of 10. and a range of 100.,
- nugget of (contribution) 0.5
>>> cov_model = CovModel1D(
elem=[
('gaussian', {'w':10., 'r':100.}), # elementary contribution
('nugget', {'w':0.5}) # elementary contribution
], name='gau+nug') # name is not necessary
"""
def __init__(self,
elem=[],
name=""):
self.elem = elem
self.name = name
def __repr__(self):
s = "Covariance model 1D: (Name = {})\n".format(self.name)
nelem = len(self.elem)
s = s + " {} elementary contribution(s)\n".format(nelem)
for i, el in enumerate(self.elem):
s = s + " Elementary contribution {}: type : {}\n".format(i, el[0])
s = s + " parameters:"
nparam = len(el[1])
for j, (k, val) in enumerate(el[1].items()):
s = s + " {} = {}".format(k, val)
if j < nparam - 1:
s = s + ","
if i < nelem - 1:
s = s + "\n"
return s
def sill(self):
"""Returns the sill."""
return sum([d['w'] for t, d in self.elem if 'w' in d])
def r(self):
"""Returns the range (max)."""
r = 0.
for t, d in self.elem:
if 'r' in d:
r = max(r, d['r'])
return r
def func(self):
"""
Returns the covariance model function f(h) where:
h: (1-dimensional array or float) 1D-lag(s)
f(h): (1-dimensional array) evaluation of the model at h
        note that the result is cast to a 1-dimensional array
"""
def f(h):
h = np.array(h).reshape(-1) # cast to 1-dimensional array if needed
s = np.zeros(len(h))
for t, d in self.elem:
if t == 'nugget':
s = s + cov_nug(h, **d)
elif t == 'spherical':
s = s + cov_sph(h, **d)
elif t == 'exponential':
s = s + cov_exp(h, **d)
elif t == 'gaussian':
s = s + cov_gau(h, **d)
elif t == 'cubic':
s = s + cov_cub(h, **d)
elif t == 'power':
s = s + cov_pow(h, **d)
return s
return f
def vario_func(self):
"""
        Returns the variogram model function f(h) where:
h: (1-dimensional array or float) 1D-lag(s)
f(h): (1-dimensional array) evaluation of the model at h
        note that the result is cast to a 1-dimensional array
"""
def f(h):
h = np.array(h).reshape(-1) # cast to 1-dimensional array if needed
s = np.zeros(len(h))
for t, d in self.elem:
if t == 'nugget':
s = s + d['w'] - cov_nug(h, **d)
elif t == 'spherical':
s = s + d['w'] - cov_sph(h, **d)
elif t == 'exponential':
s = s + d['w'] - cov_exp(h, **d)
elif t == 'gaussian':
s = s + d['w'] - cov_gau(h, **d)
elif t == 'cubic':
s = s + d['w'] - cov_cub(h, **d)
elif t == 'power':
s = s + d['w'] - cov_pow(h, **d)
return s
return f
def plot_model(self, vario=False, hmin=0, hmax=None, npts=500,
grid=True, show_xlabel=True, show_ylabel=True, **kwargs):
"""
Plot covariance or variogram function (in current figure axis).
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param hmin, hmax: (float) function is plotted for h in interval [hmin, hmax]
hmax=None for default: 1.2 * range max
:param npts: (int) number of points used in interval [hmin, hmax]
:param grid: (bool) indicates if a grid is plotted (True by default)
:param show_xlabel, show_ylabel:
(bool) indicates if (default) label for x axis (resp. y axis)
is displayed
:kwargs: keyword arguments passed to the funtion plt.plot
"""
# In kwargs:
# - add default 'label' if not given
if 'label' not in kwargs.keys():
if vario:
kwargs['label'] = 'vario func'
else:
kwargs['label'] = 'cov func'
# Set hmax if needed
if hmax is None:
hmax = 1.2*self.r()
h = np.linspace(0, hmax, npts)
if vario:
g = self.vario_func()(h)
else:
g = self.func()(h)
plt.plot(h, g, **kwargs)
if show_xlabel:
plt.xlabel('h')
if show_ylabel:
if vario:
plt.ylabel(r'$\gamma(h)$')
else:
plt.ylabel(r'$cov(h)$')
if grid:
plt.grid(True)
# ----------------------------------------------------------------------------
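# For stationary contributions, the vario_func of CovModel1D above is just
# sill - cov; a standalone sketch (gaussian + nugget restated locally) of
# that identity:

```python
import numpy as np

def cov_gau(h, w=1.0, r=1.0):
    return w * np.exp(-3.0 * (np.asarray(h) / r)**2)

def cov_nug(h, w=1.0):
    return w * np.asarray(h == 0.0, dtype=float)

w_gau, r_gau, w_nug = 10.0, 100.0, 0.5
sill = w_gau + w_nug
h = np.linspace(0.0, 300.0, 7)
cov = cov_gau(h, w=w_gau, r=r_gau) + cov_nug(h, w=w_nug)
vario = sill - cov   # gamma(0) = 0, gamma(h) -> sill for large h
```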
# ----------------------------------------------------------------------------
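# The CovModel2D class below evaluates lags in the rotated frame Ox'y' via
# np.dot(h, m), with m from its mrot() method; a standalone sketch (matrix
# restated locally) of that convention for alpha = 90 deg:

```python
import numpy as np

def mrot(alpha):
    # matrix for changing coordinates from Ox'y' (model axes) to Oxy
    a = alpha * np.pi / 180.0
    ca, sa = np.cos(a), np.sin(a)
    return np.array([[ca, sa], [-sa, ca]])

m = mrot(90.0)
h = np.array([1.0, 0.0])   # lag along the x axis in the Oxy system
hnew = h @ m               # the same lag expressed in the model frame Ox'y'
# hnew is (0.0, 1.0): with alpha = 90 deg, x-lags are measured against
# the y' range of the model
```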
class CovModel2D (object):
"""
Defines a covariance model in 2D:
elem: (sequence of 2-tuple) an entry (t, d) of the sequence
corresponds to an elementary model with:
t: (string) the type, could be
'nugget','spherical','exponential', 'gaussian',
'cubic', 'power'
d: (dict) dictionary of required parameters to be
passed to the elementary model, excepting
the parameter 'r' which must be given here
as a sequence of range along each axis
e.g.
(t, d) = ('power', {w:2.0, r:[1.5, 2.5], s:1.7})
the final model is the sum of the elementary models
alpha: (float) azimuth angle in degrees:
the system Ox'y', supporting the axes of the model (ranges),
is obtained from the system Oxy by applying a rotation of
angle -alpha.
The 2x2 matrix m for changing the coordinate system from
Ox'y' to Oxy is:
+ +
| cos(alpha) sin(alpha)|
m = | -sin(alpha) cos(alpha)|
+ +
name: (string) name of the model
Example: to define a covariance model (2D) that is a combination of
2 elementary structures:
- gaussian with a contribution of 10. and ranges of 150. and 50.,
along axis x' and axis y' resp. defined by the angle alpha=-30.
(see above)
- nugget of (contribution) 0.5
>>> cov_model = CovModel2D(elem=[
('gaussian', {'w':10., 'r':[150, 50]}), # elementary contribution
('nugget', {'w':0.5}) # elementary contribution
], alpha=-30., name='')
"""
def __init__(self,
elem=[],
alpha=0.,
name=""):
self.elem = elem
self.alpha = alpha
self.name = name
def __repr__(self):
s = "Covariance model 2D: (Name = {})\n".format(self.name)
nelem = len(self.elem)
s = s + " {} elementary contribution(s)\n".format(nelem)
for i, el in enumerate(self.elem):
s = s + " Elementary contribution {}: type : {}\n".format(i, el[0])
s = s + " parameters:"
nparam = len(el[1])
for j, (k, val) in enumerate(el[1].items()):
s = s + " {} = {}".format(k, val)
if j < nparam - 1:
s = s + ","
# if i < nelem - 1:
# s = s + "\n"
s = s + "\n"
s = s + " Angle: alpha = {} deg.\n".format(self.alpha)
s = s + " i.e.: the system Ox'y', supporting the axes of the model (ranges),\n"
s = s + " is obtained from the system Oxy by applying a rotation of\n"
s = s + " angle -alpha."
return s
def sill(self):
"""Returns the sill."""
return sum([d['w'] for t, d in self.elem if 'w' in d])
def mrot(self):
"""Returns the 2x2 matrix m for changing the coordinate system from Ox'y'
to Oxy, where Ox' and Oy' are the axes supporting the ranges of the model."""
a = self.alpha * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
return (np.array([[ca, sa], [-sa, ca]]))
def r12(self):
"""Returns the range (max) along each axis in the new coordinate system
        (corresponding to the axes of the ellipse supporting the covariance model).
"""
r = [0., 0.]
for t, d in self.elem:
if 'r' in d:
r = np.maximum(r, d['r']) # element-wise maximum
return r
def rxy(self):
"""Returns the range (max) along each axis in the original coordinate
system.
"""
r12 = self.r12()
m = np.abs(self.mrot())
return np.maximum(r12[0] * m[:,0], r12[1] * m[:,1]) # element-wise maximum
def func(self):
"""
Returns the covariance model function f(h) where:
h: (2-dimensional array of dim n x 2, or
1-dimensional array of dim 2) 2D-lag(s)
f(h): (1-dimensional array of dim n) evaluation of the model at h
"""
def f(h):
h = np.array(h).reshape(-1,2) # cast to 2-dimensional array with 2 columns if needed
if self.alpha != 0:
hnew = np.dot(h,self.mrot()).reshape(-1,2)
else:
hnew = h.reshape(-1,2)
s = np.zeros(hnew.shape[0])
for t, d in self.elem:
# new dictionary from d (remove 'r' key)
dnew = {key:val for key, val in d.items() if key != 'r'}
if t == 'nugget':
s = s + cov_nug(np.sum(hnew != 0, axis=1), **dnew)
elif t == 'spherical':
s = s + cov_sph(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'exponential':
s = s + cov_exp(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'gaussian':
s = s + cov_gau(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'cubic':
s = s + cov_cub(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'power':
s = s + cov_pow(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
return s
return f
def vario_func(self):
"""
Returns the variogram model function f(h) where:
h: (2-dimensional array of dim n x 2, or
1-dimensional array of dim 2) 2D-lag(s)
f(h): (1-dimensional array of dim n) evaluation of the model at h
"""
def f(h):
h = np.array(h).reshape(-1,2) # cast to 2-dimensional array with 2 columns if needed
if self.alpha != 0:
hnew = np.dot(h,self.mrot()).reshape(-1,2)
else:
hnew = h.reshape(-1,2)
s = np.zeros(hnew.shape[0])
for t, d in self.elem:
# new dictionary from d (remove 'r' key)
dnew = {key:val for key, val in d.items() if key != 'r'}
if t == 'nugget':
s = s + d['w'] - cov_nug(np.sum(hnew != 0, axis=1), **dnew)
elif t == 'spherical':
s = s + d['w'] - cov_sph(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'exponential':
s = s + d['w'] - cov_exp(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'gaussian':
s = s + d['w'] - cov_gau(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'cubic':
s = s + d['w'] - cov_cub(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'power':
s = s + d['w'] - cov_pow(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
return s
return f
def plot_mrot(self, color0='red', color1='green'):
"""
Plot system Oxy and Ox'y' (in current figure axis).
:param color0, color1: colors for main axes x', y'
"""
mrot = self.mrot()
# Plot system Oxy and Ox'y'
# This:
plt.arrow(*[0,0], *[0.9,0], color='k', head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *[0,0.9], color='k', head_width=0.05, head_length=0.1)
plt.text(*[1,0], "x", c='k', ha='left', va='top')
plt.text(*[0,1], "y", c='k', ha='left', va='top')
plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
plt.text(*mrot[:,0], "x'", c=color0 , ha='right', va='bottom')
plt.text(*mrot[:,1], "y'", c=color1 , ha='right', va='bottom')
plt.text(0, 0, "O", c='k', ha='right', va='top')
plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
plt.gca().set_aspect('equal')
plt.axis('off')
# # Or that:
# plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
# plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
# plt.text(*mrot[:,0], "x'", c=color0, ha='right', va='bottom')
# plt.text(*mrot[:,1], "y'", c=color1, ha='right', va='bottom')
# plt.xlabel('x')
# plt.ylabel('y')
# plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
# plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
# plt.gca().set_aspect('equal')
# plt.gca().spines['left'].set_position('zero')
# plt.gca().spines['right'].set_color('none')
# plt.gca().spines['bottom'].set_position('zero')
# plt.gca().spines['top'].set_color('none')
def plot_model(self, vario=False, plot_map=True, plot_curves=True,
cmap='terrain', color0='red', color1='green',
extent=None, ncell=(201, 201),
h1min=0, h1max=None, h2min=0, h2max=None, n1=500, n2=500,
grid=True, show_xlabel=True, show_ylabel=True, show_suptitle=True, figsize=None):
"""
Plot covariance or variogram function
- map of the function, and / or
- curves along axis x' and axis y' (where Ox' and Oy' are the axes supporting the ranges of the model)
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param plot_map, plot_curves:
(bool) indicates what is plotted:
- plot_map is True and plot_curves is True :
plot map and curves along axis x' and axis y' in a new 1x2 figure
- plot_map is True and plot_curves is False:
plot map in current figure axis
- plot_map is False and plot_curves is True :
plot curves along axis x' and axis y' in current figure axis
- plot_map is False and plot_curves is False:
nothing is done
:param cmap: color map
:param color0, color1: colors for curves along axis x' and along axis y' resp.
:param extent: (hxmin, hxmax, hymin, hymax): 4 floats defining the domain of the map.
None for default
:param ncell: (nx, ny): 2 ints defining the number of the cells in the map (nx x ny)
:param h1min, h1max: function is plotted along x' for h in interval [h1min, h1max] (default h1max if None)
:param h2min, h2max: function is plotted along y' for h in interval [h2min, h2max] (default h2max if None)
:param n1, n2: number of points in interval [h1min, h1max] and [h2min, h2max] resp.
:param show_xlabel, show_ylabel:
(bool) indicates if label for x axis (resp. y axis)
is displayed (True by default), for curves plot
:param show_suptitle:
(bool) indicates if suptitle is displayed (True by default),
in case map and curves are plotted (1x2 figure)
:param grid: (bool) indicates if a grid is plotted (True by default) for curves plot
:param figsize: (tuple of 2 ints) size of the figure, used if a new 1x2 figure is created
(i.e. if plot_map and plot_curves are set to True)
"""
if not plot_map and not plot_curves:
return
# Set hr to 1.2 * max of ranges, used as default in extent and h1max, h2max below
r = max(self.r12())
hr = 1.2 * r
# Rotation matrix
mrot = self.mrot()
if plot_map:
# Set extent if needed
if extent is None:
extent = [-hr, hr, -hr, hr]
hxmin, hxmax, hymin, hymax = extent
# Evaluate function on 2D mesh
nx, ny = ncell
sx, sy = (hxmax - hxmin) / nx, (hymax - hymin) / ny
ox, oy = hxmin, hymin
hx = ox + sx * (0.5 + np.arange(nx))
hy = oy + sy * (0.5 + np.arange(ny))
hhx, hhy = np.meshgrid(hx, hy)
hh = np.hstack((hhx.reshape(-1,1), hhy.reshape(-1,1))) # 2D-lags: (n, 2) array
if vario:
gg = self.vario_func()(hh).reshape(ny, nx)
else:
gg = self.func()(hh).reshape(ny, nx)
# Set image (Img class)
im = img.Img(nx=nx, ny=ny, nz=1, sx=sx, sy=sy, sz=1.0, ox=ox, oy=oy, oz=0.0, nv=1, val=gg)
if plot_curves:
# Set h1max, h2max if needed
if h1max is None:
h1max = hr
if h2max is None:
h2max = hr
# Evaluate function along axis x'
h1 = np.linspace(h1min, h1max, n1)
hh1 = np.hstack((h1.reshape(-1,1), np.zeros((len(h1),1)))) # ((n1,2) array) 2D-lags along x' expressed in system Ox'y'
if vario:
g1 = self.vario_func()(hh1.dot(mrot.T)) # hh1.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
else:
g1 = self.func()(hh1.dot(mrot.T)) # hh1.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
# Evaluate function along axis y'
h2 = np.linspace(h2min, h2max, n2)
hh2 = np.hstack((np.zeros((len(h2),1)), h2.reshape(-1,1))) # ((n2,2) array) 2D-lags along y' expressed in system Ox'y'
if vario:
g2 = self.vario_func()(hh2.dot(mrot.T)) # hh2.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
else:
g2 = self.func()(hh2.dot(mrot.T)) # hh2.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
# Plot...
if plot_map and plot_curves:
# Figure (new)
fig, ax = plt.subplots(1,2, figsize=figsize)
plt.sca(ax[0])
if plot_map:
# Plot map and system Ox'y'
# ... map
imgplt.drawImage2D(im, cmap=cmap)
# ... system Ox'y'
hm1 = 0.9*min(hxmax, hymax)
hm2 = 0.9*max(hxmax, hymax)
plt.arrow(*[0,0], *(hm2*mrot[:,0]), color=color0)#, head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *(hm2*mrot[:,1]), color=color1)#, head_width=0.05, head_length=0.1)
plt.text(*(hm1*mrot[:,0]), "x'", c=color0, ha='right', va='bottom')
plt.text(*(hm1*mrot[:,1]), "y'", c=color1, ha='right', va='bottom')
# plt.text(0, 0, "O", c='k', ha='right', va='top')
# plt.gca().set_aspect('equal')
plt.xlabel("x")
plt.ylabel("y")
if plot_map and plot_curves:
plt.sca(ax[1])
if plot_curves:
# Plot curve along x'
plt.plot(h1, g1, '-', c=color0, label="along x'")
# Plot curve along y'
plt.plot(h2, g2, '-', c=color1, label="along y'")
if show_xlabel:
plt.xlabel('h')
if show_ylabel:
if vario:
plt.ylabel(r'$\gamma(h)$')
else:
plt.ylabel(r'$cov(h)$')
plt.legend()
if grid:
plt.grid(True)
if plot_map and plot_curves and show_suptitle:
if vario:
s = ['Model (vario): alpha={}'.format(self.alpha)] + ['{}'.format(el) for el in self.elem]
else:
s = ['Model (cov): alpha={}'.format(self.alpha)] + ['{}'.format(el) for el in self.elem]
plt.suptitle('\n'.join(s))
# plt.show()
def plot_model_one_curve(self, main_axis=1, vario=False, hmin=0, hmax=None, npts=500,
grid=True, show_xlabel=True, show_ylabel=True, **kwargs):
"""
Plot covariance or variogram curve along one main axis (in current figure axis).
:param main_axis: (int) 1 or 2:
1: plot curve along x',
2: plot curve along y'
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param hmin, hmax: (float) function is plotted for h in interval [hmin, hmax]
hmax=None for default: 1.2 * range max
:param npts: (int) number of points used in interval [hmin, hmax]
:param grid: (bool) indicates if a grid is plotted (True by default)
:param show_xlabel, show_ylabel:
(bool) indicates if label for x axis (resp. y axis)
is displayed (True by default)
:kwargs: keyword arguments passed to the function plt.plot
"""
if main_axis not in (1, 2):
print('ERROR: main_axis not valid (should be 1 or 2)')
return
# In kwargs:
# - add default 'label' if not given
if 'label' not in kwargs.keys():
if vario:
kwargs['label'] = 'vario func'
else:
kwargs['label'] = 'cov func'
# Set hmax if needed
if hmax is None:
hmax = 1.2*self.r12()[main_axis-1]
# Rotation matrix
mrot = self.mrot()
# Evaluate function along selected axis
h = np.linspace(hmin, hmax, npts)
if main_axis == 1:
hh = np.hstack((h.reshape(-1,1), np.zeros((len(h),1)))) # ((npts,2) array) 2D-lags along x' expressed in system Ox'y'
else:
hh = np.hstack((np.zeros((len(h),1)), h.reshape(-1,1))) # ((npts,2) array) 2D-lags along y' expressed in system Ox'y'
if vario:
g = self.vario_func()(hh.dot(mrot.T)) # hh.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
else:
g = self.func()(hh.dot(mrot.T)) # hh.dot(mrot.T): 2D-lags in system Oxy (what is taken by the function)
plt.plot(h, g, **kwargs)
if show_xlabel:
plt.xlabel('h')
if show_ylabel:
if vario:
plt.ylabel(r'$\gamma(h)$')
else:
plt.ylabel(r'$cov(h)$')
if grid:
plt.grid(True)
# ----------------------------------------------------------------------------
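# The rotation convention used by CovModel2D.mrot() can be checked in
# isolation. The sketch below is standalone (it assumes only numpy and does
# not import this module): it rebuilds the 2x2 matrix and verifies that it is
# orthonormal and that its columns are the unit vectors of axes x' and y'
# expressed in the Oxy system.

```python
import numpy as np

def mrot2d(alpha_deg):
    # Same formula as CovModel2D.mrot(): columns are the unit vectors of
    # axes x' and y' expressed in the Oxy system (rotation of angle -alpha)
    a = alpha_deg * np.pi / 180.
    ca, sa = np.cos(a), np.sin(a)
    return np.array([[ca, sa], [-sa, ca]])

m = mrot2d(-30.)
assert np.allclose(m.T @ m, np.eye(2))  # orthonormal
# a unit lag along x' maps back to the Oxy system via m @ h'
h_xy = m @ np.array([1.0, 0.0])
assert np.allclose(h_xy, [np.sqrt(3)/2, 0.5])
```

# This is also why plot_model evaluates curves with hh1.dot(mrot.T): lags
# given in the Ox'y' system must be re-expressed in Oxy before being passed
# to the model function.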
# ----------------------------------------------------------------------------
class CovModel3D (object):
"""
Defines a covariance model in 3D:
elem: (sequence of 2-tuple) an entry (t, d) of the sequence
corresponds to an elementary model with:
t: (string) the type, could be
'nugget','spherical','exponential', 'gaussian',
'cubic', 'power'
d: (dict) dictionary of required parameters to be
passed to the elementary model, excepting
the parameter 'r' which must be given here
as a sequence of range along each axis
e.g.
(t, d) = ('power', {'w':2.0, 'r':[1.5, 2.5, 3.0], 's':1.7})
the final model is the sum of the elementary models
alpha, beta, gamma:
(floats) azimuth, dip and plunge angles in degrees:
the system Ox'''y'''z''', supporting the axes of the model
(ranges), is obtained from the system Oxyz as follows:
Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'
Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''
Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''
The 3x3 matrix m for changing the coordinate system from
Ox'''y'''z''' to Oxyz is:
    |  ca * cc + sa * sb * sc,  sa * cb,  - ca * sc + sa * sb * cc |
m = |- sa * cc + ca * sb * sc,  ca * cb,    sa * sc + ca * sb * cc |
    |  cb * sc,                - sb,        cb * cc                |
where
ca = cos(alpha), cb = cos(beta), cc = cos(gamma),
sa = sin(alpha), sb = sin(beta), sc = sin(gamma)
name: (string) name of the model
Example: to define a covariance model (3D) that is a combination of
2 elementary structures:
- gaussian with a contribution of 8.5 and ranges of 40., 20. and 10.,
along axis x''', axis y''' and axis z''' resp. defined by the angles
alpha=-30., beta=-40., and gamma=20. (see above)
- nugget of (contribution) 0.5
>>> cov_model = CovModel3D(elem=[
('gaussian', {'w':8.5, 'r':[40, 20, 10]}), # elementary contribution
('nugget', {'w':0.5}) # elementary contribution
], alpha=-30., beta=-40., gamma=20., name='')
"""
def __init__(self,
elem=[],
alpha=0., beta=0., gamma=0.,
name=""):
self.elem = elem
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.name = name
def __repr__(self):
s = "Covariance model 3D: (Name = {})\n".format(self.name)
nelem = len(self.elem)
s = s + " {} elementary contribution(s)\n".format(nelem)
for i, el in enumerate(self.elem):
s = s + " Elementary contribution {}: type : {}\n".format(i, el[0])
s = s + " parameters:"
nparam = len(el[1])
for j, (k, val) in enumerate(el[1].items()):
s = s + " {} = {}".format(k, val)
if j < nparam - 1:
s = s + ","
# if i < nelem - 1:
# s = s + "\n"
s = s + "\n"
s = s + " Angles: alpha = {} deg., beta = {} deg., gamma = {} deg.\n".format(self.alpha, self.beta, self.gamma)
s = s + " i.e.: the system Ox'''y''''z''', supporting the axes of the model (ranges),\n"
s = s + " is obtained from the system Oxyz as follows:\n"
s = s + " Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'\n"
s = s + " Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''\n"
s = s + " Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''"
return s
def sill(self):
"""Returns the sill."""
return sum([d['w'] for t, d in self.elem if 'w' in d])
def mrot(self):
"""Returns the 3x3 matrix m for changing the coordinate system from
Ox'''y'''z''' to Oxyz, where Ox''', Oy''', Oz''' are the axes supporting
the ranges of the model."""
a = self.alpha * np.pi/180.
b = self.beta * np.pi/180.
c = self.gamma * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
cb, sb = np.cos(b), np.sin(b)
cc, sc = np.cos(c), np.sin(c)
return np.array([[ ca * cc + sa * sb * sc, sa * cb, - ca * sc + sa * sb * cc],
[- sa * cc + ca * sb * sc, ca * cb, sa * sc + ca * sb * cc],
[ cb * sc, - sb, cb * cc ]])
def r123(self):
"""Returns the range (max) along each axis in the new coordinate system
(corresponding the axes of the ellipse supporting the covariance model).
"""
r = [0., 0., 0.]
for t, d in self.elem:
if 'r' in d:
r = np.maximum(r, d['r']) # element-wise maximum
return r
def rxyz(self):
"""Returns the range (max) along each axis in the original coordinate
system.
"""
r123 = self.r123()
m = np.abs(self.mrot())
return np.maximum.reduce([r123[0] * m[:,0], r123[1] * m[:,1], r123[2] * m[:,2]]) # element-wise maximum (note: np.maximum takes only two input arrays, a third positional argument would be interpreted as 'out')
def func(self):
"""
Returns the covariance model function f(h) where:
h: (2-dimensional array of dim n x 3, or
1-dimensional array of dim 3) 3D-lag(s)
f(h): (1-dimensional array of dim n) evaluation of the model at h
"""
def f(h):
h = np.array(h).reshape(-1,3) # cast to 2-dimensional array with 3 columns if needed
if self.alpha != 0 or self.beta != 0 or self.gamma != 0:
hnew = np.dot(h,self.mrot()).reshape(-1,3)
else:
hnew = h.reshape(-1,3)
s = np.zeros(hnew.shape[0])
for t, d in self.elem:
# new dictionary from d (remove 'r' key)
dnew = {key:val for key, val in d.items() if key != 'r'}
if t == 'nugget':
s = s + cov_nug(np.sum(hnew != 0, axis=1), **dnew)
elif t == 'spherical':
s = s + cov_sph(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'exponential':
s = s + cov_exp(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'gaussian':
s = s + cov_gau(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'cubic':
s = s + cov_cub(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'power':
s = s + cov_pow(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
return s
return f
def vario_func(self):
"""
Returns the variogram model function f(h) where:
h: (2-dimensional array of dim n x 3, or
1-dimensional array of dim 3) 3D-lag(s)
f(h): (1-dimensional array of dim n) evaluation of the model at h
"""
def f(h):
h = np.array(h).reshape(-1,3) # cast to 2-dimensional array with 3 columns if needed
if self.alpha != 0 or self.beta != 0 or self.gamma != 0:
hnew = np.dot(h,self.mrot()).reshape(-1,3)
else:
hnew = h.reshape(-1,3)
s = np.zeros(hnew.shape[0])
for t, d in self.elem:
# new dictionary from d (remove 'r' key)
dnew = {key:val for key, val in d.items() if key != 'r'}
if t == 'nugget':
s = s + d['w'] - cov_nug(np.sum(hnew != 0, axis=1), **dnew)
elif t == 'spherical':
s = s + d['w'] - cov_sph(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'exponential':
s = s + d['w'] - cov_exp(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'gaussian':
s = s + d['w'] - cov_gau(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'cubic':
s = s + d['w'] - cov_cub(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
elif t == 'power':
s = s + d['w'] - cov_pow(np.sqrt(np.sum((hnew/d['r'])**2, axis=1)), **dnew)
return s
return f
def plot_mrot(self, color0='red', color1='green', color2='blue', set_3d_subplot=True, figsize=None):
"""
Plot system Oxyz and Ox'''y'''z''' (in a new figure, or in the current figure axis depending on set_3d_subplot).
:param color0, color1, color2: colors for main axes x''', y''', z'''
:param set_3d_subplot:
(bool)
- True: a new figure is created with one axis "projection='3d'"
- False: the plot is done in the current figure axis assumed to be set
as "projection='3d'"
(this allows plotting in a figure with multiple axes)
:param figsize: (tuple of 2 ints) size of the figure, not used if set_3d_subplot is False
"""
mrot = self.mrot()
if set_3d_subplot:
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(projection='3d')
else:
ax = plt.gca()
# Plot system Oxyz and Ox'''y'''z'''
# This:
ax.plot([0,1], [0,0], [0,0], color='k')
ax.plot([0,0], [0,1], [0,0], color='k')
ax.plot([0,0], [0,0], [0,1], color='k')
ax.plot([0, mrot[0,0]], [0, mrot[1,0]], [0, mrot[2,0]], color=color0, label="x'''")
ax.plot([0, mrot[0,1]], [0, mrot[1,1]], [0, mrot[2,1]], color=color1, label="y'''")
ax.plot([0, mrot[0,2]], [0, mrot[1,2]], [0, mrot[2,2]], color=color2, label="z'''")
ax.set_xticks([0,1])
ax.set_yticks([0,1])
ax.set_zticks([0,1])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.legend()
# plt.sca(ax)
# plt.title("System Ox'''y'''z'''")
# plt.show()
def plot_model3d_volume(self, plotter=None, vario=False,
color0='red', color1='green', color2='blue',
extent=None, ncell=(101, 101, 101), **kwargs):
"""
Plot covariance or variogram function in 3D (using the function drawImage3D_volume
from geone.imgplot3d (based on pyvista)).
:param plotter: (pyvista plotter)
if given: add element to the plotter, a further call
to plotter.show() will be required to show the plot
if None (default): a plotter is created and the plot
is shown
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param color0, color1, color2: colors for main axes x''', y''', z'''
:param extent: (hxmin, hxmax, hymin, hymax, hzmin, hzmax): 6 floats defining the domain of the plot.
None for default
:param ncell: (nx, ny, nz): 3 ints defining the number of the cells in the plot (nx x ny x nz)
:param kwargs: keyword arguments passed to the function drawImage3D_volume from geone.imgplot3d
(cmap, ...)
"""
# Set extent if needed
r = max(self.r123())
hr = 1.1 * r
if extent is None:
extent = [-hr, hr, -hr, hr, -hr, hr]
hxmin, hxmax, hymin, hymax, hzmin, hzmax = extent
# Rotation matrix
mrot = self.mrot()
# Evaluate function on 3D mesh
nx, ny, nz = ncell
sx, sy, sz = (hxmax - hxmin) / nx, (hymax - hymin) / ny, (hzmax - hzmin) / nz
ox, oy, oz = hxmin, hymin, hzmin
hx = ox + sx * (0.5 + np.arange(nx))
hy = oy + sy * (0.5 + np.arange(ny))
hz = oz + sz * (0.5 + np.arange(nz))
hhz, hhy, hhx = np.meshgrid(hz, hy, hx, indexing='ij')
hh = np.hstack((hhx.reshape(-1,1), hhy.reshape(-1,1), hhz.reshape(-1,1))) # 3D-lags: (n, 3) array
if vario:
gg = self.vario_func()(hh).reshape(nz, ny, nx)
else:
gg = self.func()(hh).reshape(nz, ny, nx)
# Set image (Img class)
im = img.Img(nx=nx, ny=ny, nz=nz, sx=sx, sy=sy, sz=sz, ox=ox, oy=oy, oz=oz, nv=1, val=gg)
# In kwargs (for imgplt3d.drawImage3D_slice):
# - set color map 'cmap'
# - set 'show_bounds' to True
# - set 'scalar_bar_kwargs'
if 'cmap' not in kwargs.keys():
kwargs['cmap'] = 'terrain'
if 'show_bounds' not in kwargs.keys():
kwargs['show_bounds'] = True
if 'scalar_bar_kwargs' not in kwargs.keys():
if vario:
title='vario'
else:
title='cov'
kwargs['scalar_bar_kwargs'] = {'vertical':True, 'title':title, 'title_font_size':16, 'label_font_size':12}
# Set plotter if not given
plotter_show = False
if plotter is None:
plotter = pv.Plotter()
plotter_show = True
# plot slices in 3D
imgplt3.drawImage3D_volume(im, plotter=plotter, **kwargs)
# add main axis x''' (cyl1), y''' (cyl2), z''' (cyl3)
height = min(hxmax-hxmin, hymax-hymin, hzmax-hzmin)
radius = 0.005*height
cyl1 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,0], radius=radius, height=height, resolution=100, capping=True)
cyl2 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,1], radius=radius, height=height, resolution=100, capping=True)
cyl3 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,2], radius=radius, height=height, resolution=100, capping=True)
plotter.add_mesh(cyl1, color=color0)
plotter.add_mesh(cyl2, color=color1)
plotter.add_mesh(cyl3, color=color2)
if plotter_show:
plotter.show()
def plot_model3d_slice(self, plotter=None, vario=False,
color0='red', color1='green', color2='blue',
extent=None, ncell=(101, 101, 101), **kwargs):
"""
Plot covariance or variogram function in 3D (slices in a 3D volume, using the function
drawImage3D_slice from geone.imgplot3d (based on pyvista)).
:param plotter: (pyvista plotter)
if given: add element to the plotter, a further call
to plotter.show() will be required to show the plot
if None (default): a plotter is created and the plot
is shown
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param color0, color1, color2: colors for main axes x''', y''', z'''
:param extent: (hxmin, hxmax, hymin, hymax, hzmin, hzmax): 6 floats defining the domain of the plot.
None for default
:param ncell: (nx, ny, nz): 3 ints defining the number of the cells in the plot (nx x ny x nz)
:param kwargs: keyword arguments passed to the function drawImage3D_slice from geone.imgplot3d
(cmap, ...)
"""
# Set extent if needed
r = max(self.r123())
hr = 1.2 * r
if extent is None:
extent = [-hr, hr, -hr, hr, -hr, hr]
hxmin, hxmax, hymin, hymax, hzmin, hzmax = extent
# Rotation matrix
mrot = self.mrot()
# Evaluate function on 3D mesh
nx, ny, nz = ncell
sx, sy, sz = (hxmax - hxmin) / nx, (hymax - hymin) / ny, (hzmax - hzmin) / nz
ox, oy, oz = hxmin, hymin, hzmin
hx = ox + sx * (0.5 + np.arange(nx))
hy = oy + sy * (0.5 + np.arange(ny))
hz = oz + sz * (0.5 + np.arange(nz))
hhz, hhy, hhx = np.meshgrid(hz, hy, hx, indexing='ij')
hh = np.hstack((hhx.reshape(-1,1), hhy.reshape(-1,1), hhz.reshape(-1,1))) # 3D-lags: (n, 3) array
if vario:
gg = self.vario_func()(hh).reshape(nz, ny, nx)
else:
gg = self.func()(hh).reshape(nz, ny, nx)
# Set image (Img class)
im = img.Img(nx=nx, ny=ny, nz=nz, sx=sx, sy=sy, sz=sz, ox=ox, oy=oy, oz=oz, nv=1, val=gg)
# In kwargs (for imgplt3d.drawImage3D_slice):
# - add 'slice_normal_custom' (orthogonal to axes x''', y''', z''' and going through origin) if not given
# - set color map 'cmap'
# - set 'show_bounds' to True
# - set 'scalar_bar_kwargs'
if 'slice_normal_custom' not in kwargs.keys():
kwargs['slice_normal_custom'] = [[mrot[:,0], (0,0,0)], [mrot[:,1], (0,0,0)], [mrot[:,2], (0,0,0)]]
if 'cmap' not in kwargs.keys():
kwargs['cmap'] = 'terrain'
if 'show_bounds' not in kwargs.keys():
kwargs['show_bounds'] = True
if 'scalar_bar_kwargs' not in kwargs.keys():
if vario:
title='vario'
else:
title='cov'
kwargs['scalar_bar_kwargs'] = {'vertical':True, 'title':title, 'title_font_size':16, 'label_font_size':12}
# Set plotter if not given
plotter_show = False
if plotter is None:
plotter = pv.Plotter()
plotter_show = True
# plot slices in 3D
imgplt3.drawImage3D_slice(im, plotter=plotter, **kwargs)
# add main axis x''' (cyl1), y''' (cyl2), z''' (cyl3)
height = min(hxmax-hxmin, hymax-hymin, hzmax-hzmin)
radius = 0.005*height
cyl1 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,0], radius=radius, height=height, resolution=100, capping=True)
cyl2 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,1], radius=radius, height=height, resolution=100, capping=True)
cyl3 = pv.Cylinder(center=(0.0, 0.0, 0.0), direction=mrot[:,2], radius=radius, height=height, resolution=100, capping=True)
plotter.add_mesh(cyl1, color=color0)
plotter.add_mesh(cyl2, color=color1)
plotter.add_mesh(cyl3, color=color2)
if plotter_show:
plotter.show()
def plot_model_curves(self, plotter=None, vario=False,
color0='red', color1='green', color2='blue',
h1min=0, h1max=None, h2min=0, h2max=None, h3min=0, h3max=None,
n1=500, n2=500, n3=500, grid=True, show_xlabel=True, show_ylabel=True):
"""
Plot covariance or variogram function along the main axes x''', y''', z''' (in current figure axis).
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param color0, color1, color2: colors for curves along main axes x''', y''', z'''
:param h1min, h1max: function is plotted along x''' for h in interval [h1min, h1max] (default h1max if None)
:param h2min, h2max: function is plotted along y''' for h in interval [h2min, h2max] (default h2max if None)
:param h3min, h3max: function is plotted along z''' for h in interval [h3min, h3max] (default h3max if None)
:param n1, n2, n3: number of points in interval [h1min, h1max], [h2min, h2max] and [h3min, h3max] resp.
:param show_xlabel, show_ylabel:
(bool) indicates if label for x axis (resp. y axis) is displayed (True by default)
:param grid: (bool) indicates if a grid is plotted (True by default)
"""
# Set h1max, h2max, h3max if needed
r = max(self.r123())
hr = 1.2 * r
# Set h1max, h2max if needed
if h1max is None:
h1max = hr
if h2max is None:
h2max = hr
if h3max is None:
h3max = hr
# Rotation matrix
mrot = self.mrot()
# Evaluate function along axis x'''
h1 = np.linspace(h1min, h1max, n1)
hh1 = np.hstack((h1.reshape(-1,1), np.zeros((len(h1),1)), np.zeros((len(h1),1)))) # ((n1,3) array) 3D-lags along x''' expressed in system Ox'''y'''z'''
if vario:
g1 = self.vario_func()(hh1.dot(mrot.T)) # hh1.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
else:
g1 = self.func()(hh1.dot(mrot.T)) # hh1.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
# Evaluate function along axis y'''
h2 = np.linspace(h2min, h2max, n2)
hh2 = np.hstack((np.zeros((len(h2),1)), h2.reshape(-1,1), np.zeros((len(h2),1)))) # ((n2,3) array) 3D-lags along y''' expressed in system Ox'''y'''z'''
if vario:
g2 = self.vario_func()(hh2.dot(mrot.T)) # hh2.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
else:
g2 = self.func()(hh2.dot(mrot.T)) # hh2.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
# Evaluate function along axis z'''
h3 = np.linspace(h3min, h3max, n3)
hh3 = np.hstack((np.zeros((len(h3),1)), np.zeros((len(h3),1)), h3.reshape(-1,1))) # ((n3,3) array) 3D-lags along z''' expressed in system Ox'''y'''z'''
if vario:
g3 = self.vario_func()(hh3.dot(mrot.T)) # hh3.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
else:
g3 = self.func()(hh3.dot(mrot.T)) # hh3.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
# Plot curve along x'''
plt.plot(h1, g1, '-', c=color0, label="along x'''")
# Plot curve along y'''
plt.plot(h2, g2, '-', c=color1, label="along y'''")
# Plot curve along z'''
plt.plot(h3, g3, '-', c=color2, label="along z'''")
if show_xlabel:
plt.xlabel('h')
if show_ylabel:
if vario:
plt.ylabel(r'$\gamma(h)$')
else:
plt.ylabel(r'$cov(h)$')
plt.legend()
if grid:
plt.grid(True)
def plot_model_one_curve(self, main_axis=1, vario=False, hmin=0, hmax=None, npts=500,
grid=True, show_xlabel=True, show_ylabel=True, **kwargs):
"""
Plot covariance or variogram curve along one main axis (in current figure axis).
:param main_axis: (int) 1, 2 or 3:
1: plot curve along x''',
2: plot curve along y''',
3: plot curve along z'''
:param vario: (bool)
- if False: plot covariance function
- if True: plot variogram function
:param hmin, hmax: (float) function is plotted for h in interval [hmin, hmax]
hmax=None for default: 1.2 * range max
:param npts: (int) number of points used in interval [hmin, hmax]
:param grid: (bool) indicates if a grid is plotted (True by default)
:param show_xlabel, show_ylabel:
(bool) indicates if label for x axis (resp. y axis)
is displayed (True by default)
:kwargs: keyword arguments passed to the function plt.plot
"""
if main_axis not in (1, 2, 3):
print('ERROR: main_axis not valid (should be 1, 2 or 3)')
return
# In kwargs:
# - add default 'label' if not given
if 'label' not in kwargs.keys():
if vario:
kwargs['label'] = 'vario func'
else:
kwargs['label'] = 'cov func'
# Set hmax if needed
if hmax is None:
hmax = 1.2*self.r123()[main_axis-1]
# Rotation matrix
mrot = self.mrot()
# Evaluate function along selected axis
h = np.linspace(hmin, hmax, npts)
if main_axis == 1:
hh = np.hstack((h.reshape(-1,1), np.zeros((len(h),1)), np.zeros((len(h),1)))) # ((npts,3) array) 3D-lags along x''' expressed in system Ox'''y'''z'''
elif main_axis == 2:
hh = np.hstack((np.zeros((len(h),1)), h.reshape(-1,1), np.zeros((len(h),1)))) # ((npts,3) array) 3D-lags along y''' expressed in system Ox'''y'''z'''
else:
hh = np.hstack((np.zeros((len(h),1)), np.zeros((len(h),1)), h.reshape(-1,1))) # ((npts,3) array) 3D-lags along z''' expressed in system Ox'''y'''z'''
if vario:
g = self.vario_func()(hh.dot(mrot.T)) # hh.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
else:
g = self.func()(hh.dot(mrot.T)) # hh.dot(mrot.T): 3D-lags in system Oxyz (what is taken by the function)
plt.plot(h, g, **kwargs)
if show_xlabel:
plt.xlabel('h')
if show_ylabel:
if vario:
plt.ylabel(r'$\gamma(h)$')
else:
plt.ylabel(r'$cov(h)$')
if grid:
plt.grid(True)
# ----------------------------------------------------------------------------
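# The composed rotation returned by CovModel3D.mrot() should always be a
# proper rotation matrix. The sketch below is standalone (numpy only, this
# module is not imported): it rebuilds the same 3x3 matrix and checks
# orthonormality and determinant 1 for the angles used in the class example.

```python
import numpy as np

def mrot3d(alpha_deg, beta_deg, gamma_deg):
    # Same composition as CovModel3D.mrot(): rotations of angles
    # -alpha (around Oz), -beta (around Ox'), -gamma (around Oy'')
    a, b, c = [x * np.pi / 180. for x in (alpha_deg, beta_deg, gamma_deg)]
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    return np.array([[ ca*cc + sa*sb*sc, sa*cb, -ca*sc + sa*sb*cc],
                     [-sa*cc + ca*sb*sc, ca*cb,  sa*sc + ca*sb*cc],
                     [ cb*sc,           -sb,     cb*cc]])

m3 = mrot3d(-30., -40., 20.)
assert np.allclose(m3.T @ m3, np.eye(3))   # orthonormal
assert np.isclose(np.linalg.det(m3), 1.0)  # proper rotation (no reflection)
```

# Because m3 is orthonormal, m3.T is its inverse, which is why the plotting
# methods use hh.dot(mrot.T) to convert lags from Ox'''y'''z''' back to Oxyz.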
# ============================================================================
# Definition of function to convert covariance models
# ============================================================================
# ----------------------------------------------------------------------------
def covModel1D_to_covModel2D(cov_model_1d):
"""
Converts a covariance model in 1D to an omni-directional covariance model in 2D.
:param cov_model_1d: (CovModel1D class) covariance model in 1D
:return cov_model_2d: (CovModel2D class) covariance model in 2D (omni-directional,
defined from cov_model_1d)
"""
cov_model_2d = CovModel2D()
cov_model_2d.elem = copy.deepcopy(cov_model_1d.elem)
for el in cov_model_2d.elem:
for k, val in el[1].items():
if k == 'r':
el[1]['r'] = [val, val]
return cov_model_2d
# ----------------------------------------------------------------------------
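# The range expansion performed by covModel1D_to_covModel2D can be
# illustrated on a bare elem list. Standalone sketch: the tuples below mimic
# the (type, parameter-dict) format used by the CovModel classes, without
# importing this module.

```python
import copy

# (type, parameter-dict) entries in the format used by CovModel1D
elem_1d = [('spherical', {'w': 2.0, 'r': 10.0}), ('nugget', {'w': 0.5})]

# same expansion as in covModel1D_to_covModel2D: scalar range -> [r, r]
elem_2d = copy.deepcopy(elem_1d)
for t, d in elem_2d:
    if 'r' in d:
        d['r'] = [d['r'], d['r']]

assert elem_2d[0][1]['r'] == [10.0, 10.0]
assert elem_1d[0][1]['r'] == 10.0  # deepcopy leaves the 1D model untouched
```

# The deepcopy matters: mutating a shared elem list in place would silently
# alter the source model as well.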
# ----------------------------------------------------------------------------
def covModel1D_to_covModel3D(cov_model_1d):
"""
Converts a covariance model in 1D to an omni-directional covariance model in 3D.
:param cov_model_1d: (CovModel1D class) covariance model in 1D
:return cov_model_3d: (CovModel3D class) covariance model in 3D (omni-directional,
defined from cov_model_1d)
"""
cov_model_3d = CovModel3D()
cov_model_3d.elem = copy.deepcopy(cov_model_1d.elem)
for el in cov_model_3d.elem:
for k, val in el[1].items():
if k == 'r':
el[1]['r'] = [val, val, val]
return cov_model_3d
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
def covModel2D_to_covModel3D(cov_model_2d, alpha=0., beta=0., gamma=0.):
"""
Converts a covariance model in 2D to a covariance model in 3D, where
the angles alpha, beta, gamma define the system supporting the axes of
the model (ranges) (see CovModel3D class), and where the ranges along
the first two axes are set to the range along the first axis of the
covariance model in 2D, and the range along the third axis is set to
the range along the second axis of the covariance model in 2D.
:param cov_model_2d: (CovModel2D class) covariance model in 2D
(attribute cov_model_2d.alpha will be ignored)
:param alpha, beta, gamma:
(floats) angles in degrees defining the system supporting
the axes of the covariance model in 3D (ranges)
:return cov_model_3d: (CovModel3D class) covariance model in 3D
"""
cov_model_3d = CovModel3D()
cov_model_3d.elem = copy.deepcopy(cov_model_2d.elem)
cov_model_3d.alpha = alpha
cov_model_3d.beta = beta
cov_model_3d.gamma = gamma
for el in cov_model_3d.elem:
for k, val in el[1].items():
if k == 'r':
el[1]['r'] = [val[0], val[0], val[1]]
return cov_model_3d
# ----------------------------------------------------------------------------
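The 2D-to-3D range mapping can be sketched the same way, again on a plain elem-style list with illustrative values: the first two 3D ranges take the first 2D range, the third takes the second:

```python
import copy

# Hypothetical 2D model element with ranges [r along x', r along y'],
# mirroring the structure of CovModel2D.elem
elem_2d = [('spherical', {'w': 1.5, 'r': [100.0, 20.0]})]

# 2D -> 3D conversion of ranges: [r0, r1] -> [r0, r0, r1]
elem_3d = copy.deepcopy(elem_2d)
for el in elem_3d:
    for k, val in el[1].items():
        if k == 'r':
            el[1]['r'] = [val[0], val[0], val[1]]

print(elem_3d[0][1]['r'])  # [100.0, 100.0, 20.0]
```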
# ============================================================================
# Basic functions for plotting variogram cloud and experimental variogram (1D)
# ============================================================================
# ----------------------------------------------------------------------------
def plot_variogramCloud1D(h, g, npair, grid=True, **kwargs):
"""
Plot a variogram cloud (1D) (in current figure axis).
:param h, g: (1-dimensional array of shape (npair,)) coordinates of the points
of the variogram cloud.
:param npair: (int) number of points (pairs of data points considered) in the variogram cloud.
:param grid: (bool) indicates if a grid is plotted (True by default)
:kwargs: keyword arguments passed to the function plt.plot
"""
# In kwargs:
# - add default 'label' if not given
# - set default 'marker' if not given
# - set default 'linestyle' (or 'ls') if not given
if 'label' not in kwargs.keys():
kwargs['label'] = 'vario cloud'
if 'marker' not in kwargs.keys():
kwargs['marker'] = '.'
if 'linestyle' not in kwargs.keys() and 'ls' not in kwargs.keys():
kwargs['linestyle'] = 'none'
plt.plot(h, g, **kwargs)
plt.xlabel('h')
plt.ylabel(r'$1/2(Z(x)-Z(x+h))^2$')
if grid:
plt.grid(True)
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
def plot_variogramExp1D(hexp, gexp, cexp, show_count=True, grid=True, **kwargs):
"""
Plot an experimental variogram (1D) (in current figure axis).
:param hexp, gexp: (1-dimensional array of floats of same length) coordinates of
the points of the experimental variogram
:param cexp: (1-dimensional array of ints of same length as hexp, gexp)
numbers of points from the variogram cloud in each class
:param show_count: (bool) indicates if counters (cexp) are shown on plot
:param grid: (bool) indicates if a grid is plotted (True by default)
:kwargs: keyword arguments passed to the function plt.plot
"""
# In kwargs:
# - add default 'label' if not given
# - set default 'marker' if not given
# - set default 'linestyle' (or 'ls') if not given
if 'label' not in kwargs.keys():
kwargs['label'] = 'vario exp.'
if 'marker' not in kwargs.keys():
kwargs['marker'] = '.'
if 'linestyle' not in kwargs.keys() and 'ls' not in kwargs.keys():
kwargs['linestyle'] = 'dashed'
plt.plot(hexp, gexp, **kwargs)
if show_count:
for i, c in enumerate(cexp):
if c > 0:
plt.text(hexp[i], gexp[i], str(c), ha='left', va='top')
plt.xlabel('h')
plt.ylabel(r'$1/2(Z(x)-Z(x+h))^2$')
if grid:
plt.grid(True)
# ----------------------------------------------------------------------------
# ============================================================================
# Functions for variogram cloud, experimental variogram,
# and covariance model fitting (1D)
# ============================================================================
# ----------------------------------------------------------------------------
def variogramCloud1D(x, v, hmax=np.nan, make_plot=True, grid=True, **kwargs):
"""
Computes the omni-directional variogram cloud for a data set in 1D, 2D or 3D.
- the pair of the i-th and j-th data points gives the following
point in the variogram cloud:
(h(i,j), g(i,j)) = (||x(i)-x(j)||, 0.5 * (v(i)-v(j))^2)
where x(i) and x(j) are the coordinates of the i-th and j-th data points
and v(i) and v(j) the values at these points
(v(i)=Z(x(i)), where Z is the considered variable).
:param x: (2-dimensional array of shape (n, d)) coordinates
of the data points (n: number of points, d: dimension)
Note: for data in 1D, it can be a 1-dimensional array of shape (n,)
:param v: (1-dimensional array of shape (n,)) values at data points
:param hmax: (float or nan) maximal distance between a pair of data points for
being integrated in the variogram cloud.
:param make_plot:
(bool) if True: the plot of the variogram cloud is done (in current figure axis)
:param grid: (bool) indicates if a grid is plotted (used if make_plot is True)
:kwargs: keyword arguments passed to the function plot_variogramCloud1D (used if make_plot is True)
:return: (h, g, npair), where
h, g are two 1-dimensional arrays of floats of same length containing
the coordinates of the points in the variogram cloud
npair is an int, the number of points (pairs of data points considered)
in the variogram cloud
"""
# Get dimension (d) from x
if np.asarray(x).ndim == 1:
# x is a 1-dimensional array
x = np.asarray(x).reshape(-1, 1)
d = 1
else:
# x is a 2-dimensional array
d = x.shape[1]
# Number of data points
n = x.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return (None, None, None)
if np.isnan(hmax):
# consider all pairs of points
npair = int(0.5*(n-1)*n)
h = np.zeros(npair)
g = np.zeros(npair)
j = 0
for i in range(n-1):
jj = n-1-i
h[j:(j+jj)]= np.sqrt(np.sum((x[i,:] - x[(i+1):, :])**2, axis=1))
g[j:(j+jj)]= 0.5*(v[i] - v[(i+1):])**2
j = j+jj
else:
# consider only pairs of points with a distance less than or equal to hmax
hmax2 = hmax**2
h, g = [], []
npair = 0
for i in range(n-1):
htmp = np.sum((x[i,:] - x[(i+1):, :])**2, axis=1)
ind = np.where(htmp <= hmax2)[0]
h.append(np.sqrt(htmp[ind]))
g.append(0.5*(v[i] - v[i+1+ind])**2)
npair = npair + len(ind)
if npair > 0:
h = np.hstack(h)
g = np.hstack(g)
if make_plot:
plot_variogramCloud1D(h, g, npair, grid=grid, **kwargs)
plt.title('Variogram cloud ({} pts)'.format(npair))
return (h, g, npair)
# ----------------------------------------------------------------------------
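The all-pairs computation above (the hmax=nan branch) can be sketched as a small standalone function, using numpy only and hypothetical data; np.triu_indices replaces the explicit loop over i:

```python
import numpy as np

def vario_cloud(x, v):
    """All-pairs variogram cloud: h(i,j) = ||x(i)-x(j)||, g(i,j) = 0.5*(v(i)-v(j))^2."""
    v = np.asarray(v, dtype=float)
    x = np.asarray(x, dtype=float).reshape(len(v), -1)
    i, j = np.triu_indices(len(v), k=1)            # all pairs i < j
    h = np.sqrt(np.sum((x[i] - x[j])**2, axis=1))  # lag distances
    g = 0.5 * (v[i] - v[j])**2                     # semi-squared increments
    return h, g

# Hypothetical 1D data: 3 points -> 3 pairs in the cloud
h, g = vario_cloud([0.0, 1.0, 3.0], [0.0, 2.0, 1.0])
print(h)  # [1. 3. 2.]
print(g)  # [2.  0.5 0.5]
```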
# ----------------------------------------------------------------------------
def variogramExp1D(x, v, hmax=np.nan, ncla=10, cla_center=None, cla_length=None, variogramCloud=None, make_plot=True, show_count=True, grid=True, **kwargs):
"""
Computes the experimental omni-directional variogram for a data set in 1D, 2D or 3D.
The mean point in each class is retrieved from the variogram cloud (returned by
the function variogramCloud1D).
:param x: (2-dimensional array of shape (n, d)) coordinates
of the data points (n: number of points, d: dimension)
Note: for data in 1D, it can be a 1-dimensional array of shape (n,)
:param v: (1-dimensional array of shape (n,)) values at data points
:param hmax: (float or nan) maximal distance between a pair of data points for
being integrated in the variogram cloud.
:param ncla: (int) number of classes:
the parameter is used if cla_center is not specified (None),
in that situation ncla classes are considered and the class centers are set to
cla_center[i] = (i+0.5)*l, i=0,...,ncla-1
with l = H / ncla, H being the max of the distance between the two points of
the considered pairs (in the variogram cloud).
:param cla_center: (sequence of floats) center of each class; if specified (not None),
then the parameter ncla is not used.
:param cla_length: (None, or float or sequence of floats) length of each class
- if not specified (None): the length of every class is set to the
minimum difference between two successive class centers (np.inf if only one class)
- if float: the length of every class is set to the specified number
- if a sequence, its length should be equal to the number of classes (length of
cla_center (or ncla))
Finally, the i-th class is determined by its center cla_center[i] and its
length cla_length[i], and corresponds to the interval
]cla_center[i]-cla_length[i]/2, cla_center[i]+cla_length[i]/2]
along h (lag) axis
:param variogramCloud:
(tuple of length 3) (h, g, npair): variogram cloud (returned by the function variogramCloud1D
(npair is not used))
If given (not None): this variogram cloud is used (not computed, then x, v, hmax are not used)
:param make_plot:
(bool) if True: the plot of the experimental variogram is done (in current figure axis)
:param show_count: (bool) indicates if counters (cexp) are shown on plot (used if make_plot is True)
:param grid: (bool) indicates if a grid is plotted (used if make_plot is True)
:kwargs: keyword arguments passed to the function plot_variogramExp1D (used if make_plot is True)
:return: (hexp, gexp, cexp), where
- hexp, gexp are two 1-dimensional arrays of floats of same length containing
the coordinates of the points of the experimental variogram, and
- cexp is a 1-dimensional array of ints of same length as hexp and gexp, containing
the number of points from the variogram cloud in each class
"""
# Compute variogram cloud if needed (npair won't be used)
if variogramCloud is None:
h, g, npair = variogramCloud1D(x, v, hmax=hmax, make_plot=False)
else:
h, g, npair = variogramCloud
if npair == 0:
print('No point in the variogram cloud (nothing is done).')
return (None, None, None)
# Set classes
if cla_center is not None:
cla_center = np.asarray(cla_center, dtype='float').reshape(-1)
ncla = len(cla_center)
else:
length = np.max(h) / ncla
cla_center = (np.arange(ncla, dtype='float') + 0.5) * length
if cla_length is not None:
cla_length = np.asarray(cla_length, dtype='float').reshape(-1)
if len(cla_length) == 1:
cla_length = np.repeat(cla_length, ncla)
elif len(cla_length) != ncla:
print("ERROR: 'cla_length' not valid")
return (None, None, None)
else:
if ncla == 1:
cla_length = np.array([np.inf], dtype='float')
else:
cla_length = np.repeat(np.min(np.diff(cla_center)), ncla)
# Compute experimental variogram
hexp = np.nan * np.ones(ncla)
gexp = np.nan * np.ones(ncla)
cexp = np.zeros(ncla, dtype='int')
for i, (c, l) in enumerate(zip(cla_center, cla_length)):
d = 0.5*l
ind = np.all((h > c-d , h <= c+d), axis=0)
hexp[i] = np.mean(h[ind])
gexp[i] = np.mean(g[ind])
cexp[i] = np.sum(ind)
if make_plot:
plot_variogramExp1D(hexp, gexp, cexp, show_count=show_count, grid=grid, **kwargs)
plt.title('Experimental variogram')
return (hexp, gexp, cexp)
# ----------------------------------------------------------------------------
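The class-averaging step above can be sketched standalone (numpy only, illustrative data); as in variogramExp1D, the i-th class is the interval ]c - l/2, c + l/2] around its center c, except that empty classes are skipped here instead of producing nan with a runtime warning:

```python
import numpy as np

def vario_exp(h, g, ncla=2):
    """Bin a variogram cloud (h, g) into ncla equal-length classes and average per class."""
    h, g = np.asarray(h, dtype=float), np.asarray(g, dtype=float)
    length = np.max(h) / ncla
    centers = (np.arange(ncla) + 0.5) * length
    hexp, gexp = np.full(ncla, np.nan), np.full(ncla, np.nan)
    cexp = np.zeros(ncla, dtype=int)
    for i, c in enumerate(centers):
        ind = (h > c - 0.5*length) & (h <= c + 0.5*length)
        if ind.any():  # skip empty classes
            hexp[i], gexp[i], cexp[i] = h[ind].mean(), g[ind].mean(), ind.sum()
    return hexp, gexp, cexp

# Hypothetical cloud of 4 points -> 2 classes of 2 points each
h = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, 1.5, 2.0])
hexp, gexp, cexp = vario_exp(h, g, ncla=2)
print(hexp, gexp, cexp)  # [1.5 3.5] [0.75 1.75] [2 2]
```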
# ----------------------------------------------------------------------------
def covModel1D_fit(x, v, cov_model, hmax=np.nan, variogramCloud=None, make_plot=True, **kwargs):
"""
Fits a covariance model in 1D, used for data in 1D or as omni-directional model
for data in 2D or 3D.
The parameter 'cov_model' is a covariance model in 1D (CovModel1D class) with
the parameters to fit set to nan. For example, with
cov_model = CovModel1D(elem=[
('gaussian', {'w':np.nan, 'r':np.nan}), # elementary contribution
('nugget', {'w':np.nan}) # elementary contribution
])
it will fit the weight and range of the gaussian elementary contribution,
and the nugget (weight of the nugget contribution).
:param x: (2-dimensional array of shape (n, d)) coordinates
of the data points (n: number of points, d: dimension)
Note: for data in 1D, it can be a 1-dimensional array of shape (n,)
:param v: (1-dimensional array of shape (n,)) values at data points
:param cov_model: (CovModel1D class) covariance model in 1D with parameters to fit set to nan
(see above)
:param hmax: (float or nan) maximal distance between a pair of data points for
being integrated in the variogram cloud.
:param variogramCloud:
(tuple of length 3) (h, g, npair): variogram cloud (returned by the function variogramCloud1D
(npair is not used)).
If given (not None): this variogram cloud is used (not computed, then x, v, hmax are not used)
:param make_plot:
(bool) if True: the plot of the optimized variogram is done (in current figure axis)
:kwargs: keyword arguments passed to the function curve_fit() from scipy.optimize
e.g.: p0=<array of initial parameters> (see doc of curve_fit), with
an array of floats of length equal to the number of parameters to fit,
considered in the order of appearance in the definition of cov_model;
bounds=(<array of lower bounds>, <array of upper bounds>)
:return: (cov_model_opt, popt) with:
- cov_model_opt: (CovModel1D class) optimized covariance model
- popt: (sequence of floats) vector of optimized parameters
returned by curve_fit
"""
# Check dimension of cov_model and set if used as omni-directional model
if cov_model.__class__.__name__ != 'CovModel1D':
print("ERROR: 'cov_model' is incompatible with dimension (1D)")
return (None, None)
# Work on a (deep) copy of cov_model
cov_model_opt = copy.deepcopy(cov_model)
# Get index of element and key of parameters to fit
ielem_to_fit=[]
key_to_fit=[]
for i, el in enumerate(cov_model_opt.elem):
for k, val in el[1].items():
if np.isnan(val):
ielem_to_fit.append(i)
key_to_fit.append(k)
nparam = len(ielem_to_fit)
if nparam == 0:
print('No parameter to fit!')
return (cov_model_opt, np.array([]))
# Compute variogram cloud if needed (npair won't be used)
if variogramCloud is None:
h, g, npair = variogramCloud1D(x, v, hmax=hmax, make_plot=False) # npair won't be used
else:
h, g, npair = variogramCloud
if npair == 0:
print('No point to fit!')
return (cov_model_opt, np.nan * np.ones(nparam))
# Defines a function that returns a covariance model in 1D, given a vector of parameters
# (for the parameters to fit)
def cov_model_param(ielem, key, p):
"""
:param ielem: (sequence of ints of length m) indexes
:param key: (sequence of strings of length m) keys
:param p: (sequence of floats of length m) parameters of covariance model
:return: cov_model_opt with
cov_model_opt.elem[ielem[i]][1][key[i]] set to p[i], i=0,..., m-1
"""
for i, (iel, k) in enumerate(zip(ielem, key)):
cov_model_opt.elem[iel][1][k] = p[i]
return cov_model_opt
# Defines the function to optimize in a format compatible with curve_fit from scipy.optimize
def func(d, *p):
"""
Function whose p is the vector of parameters to optimize.
:param d: (tuple of length 3) with
- d[0] = x, coordinates of the data points (see above)
- d[1] = ielem, sequence of indexes of length m
- d[2] = keys, sequence of strings (keys) of length m
:param p: vector of parameters of length m, for the covariance model (variables to fit identified with d)
:return: variogram function of the corresponding covariance model evaluated at x
"""
return cov_model_param(d[1], d[2], p).vario_func()(d[0])
# Optimize parameters with curve_fit: initial vector of parameters (p0) must be given
# because the number of parameters to fit cannot be inferred from the signature of func
bounds = None
if 'bounds' in kwargs.keys():
bounds = kwargs['bounds']
if 'p0' not in kwargs.keys():
# add default p0 in kwargs
p0 = np.ones(nparam)
if bounds is not None:
# adjust p0 to given bounds
for i in range(nparam):
if np.isinf(bounds[0][i]):
if np.isinf(bounds[1][i]):
p0[i] = 1.
else:
p0[i] = bounds[1][i]
elif np.isinf(bounds[1][i]):
p0[i] = bounds[0][i]
else:
p0[i] = 0.5*(bounds[0][i]+bounds[1][i])
kwargs['p0'] = p0
else:
if len(kwargs['p0']) != nparam:
print("ERROR: length of 'p0' not compatible")
return (None, None)
# Fit with curve_fit
popt, pcov = curve_fit(func, (h, ielem_to_fit, key_to_fit), g, **kwargs)
# Retrieve the optimized covariance model (in cov_model_opt)
cov_model_opt = cov_model_param(ielem_to_fit, key_to_fit, popt)
if make_plot:
cov_model_opt.plot_model(vario=True, hmax=np.max(h), label='vario opt.')
s = ['Vario opt.:'] + ['{}'.format(el) for el in cov_model_opt.elem]
# plt.title(textwrap.TextWrapper(width=50).fill(s))
plt.title('\n'.join(s))
return (cov_model_opt, popt)
# ----------------------------------------------------------------------------
# ============================================================================
# Functions for variogram cloud, experimental variogram,
# and covariance model fitting (2D)
# ============================================================================
# ----------------------------------------------------------------------------
def variogramCloud2D(x, v, alpha=0.0, tol_dist=10.0, tol_angle=45.0, hmax=(np.nan, np.nan),
make_plot=True, color0='red', color1='green', figsize=None):
"""
Computes two directional variogram clouds for a data set in 2D:
- one along axis x',
- one along axis y',
where the system Ox'y' is obtained from the (usual) system Oxy by applying a rotation of
angle -alpha (see parameter alpha below).
:param x: (2-dimensional array of shape (n, 2)) 2D-coordinates in system Oxy of data points
:param v: (1-dimensional array of shape (n,)) values at data points
:param alpha: (float) angle in degrees:
the system Ox'y', supporting the principal axes along which the variograms
are computed, is obtained from the system Oxy by applying a rotation of
angle -alpha.
The 2x2 matrix m for changing the coordinate system from
Ox'y' to Oxy is:
+ +
| cos(alpha) sin(alpha)|
m = | -sin(alpha) cos(alpha)|
+ +
:param tol_dist, tol_angle: (float) tolerances (tol_dist: distance, tol_angle: angle in degrees)
used to determine which pairs of points are integrated in the variogram clouds.
A pair of points (x(i), x(j)) is in the directional variogram cloud along
axis x' (resp. y') iff, given the lag vector h = x(i) - x(j),
- the distance from the end of vector h issued from origin to that axis
is less than or equal to tol_dist and,
- the angle between h and that axis is less than or equal to tol_angle
:param hmax: (sequence of 2 floats (or nan)): maximal distance between a pair of data points for
being integrated in the directional variogram cloud along axis x' and axis y' resp.
:param make_plot:
(bool) if True: the plot of the variogram clouds is done (in a new 2x2 figure)
:param color0, color1: colors for variogram cloud along axis x' and along axis y' resp. (used if make_plot is True)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:return: ((h0, g0, npair0), (h1, g1, npair1)), where
- (h0, g0, npair0) is the directional variogram cloud along the axis x'
(h0, g0 are two 1-dimensional arrays of same length containing
the coordinates of the points in the variogram cloud, and
npair0 is an int, the number of points (pairs of data points considered)
in the variogram cloud)
- (h1, g1, npair1) is the directional variogram cloud along the axis y'
(same type of object as for axis x')
"""
# Number of data points
n = x.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return ((None, None, None), (None, None, None))
# Rotation matrix
a = alpha * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
mrot = np.array([[ca, sa], [-sa, ca]])
# Coordinates of data points in the new system Ox'y'
xnew = x.dot(mrot)
# Tolerance for distance to origin
hmax = list(hmax) # for assignment of its components
for i in (0, 1):
if np.isnan(hmax[i]):
hmax[i] = np.inf
# Tolerance for slope computed from tol_angle
tol_s = np.tan(tol_angle*np.pi/180)
eps = 1.e-8 # close to zero
# Compute variogram clouds
h0, g0, h1, g1 = [], [], [], []
for i in range(n-1):
for j in range(i+1, n):
h = xnew[i,:] - xnew[j,:]
habs = np.fabs(h)
if habs[0] < eps or (habs[0] <= hmax[0] and habs[1] <= tol_dist and habs[1]/habs[0] <= tol_s):
# Directional variogram along x' contains pair of points (i,j)
h0.append(habs[0]) # projection along x'
g0.append(0.5*(v[i]-v[j])**2)
if habs[1] < eps or (habs[1] <= hmax[1] and habs[0] <= tol_dist and habs[0]/habs[1] <= tol_s):
# Directional variogram along y' contains pair of points (i,j)
h1.append(habs[1]) # projection along y'
g1.append(0.5*(v[i]-v[j])**2)
h0 = np.asarray(h0)
g0 = np.asarray(g0)
npair0 = len(h0)
h1 = np.asarray(h1)
g1 = np.asarray(g1)
npair1 = len(h1)
if make_plot:
fig, ax = plt.subplots(2,2, figsize=figsize)
plt.sca(ax[0,0])
# Plot system Oxy and Ox'y'
# This:
plt.arrow(*[0,0], *[0.9,0], color='k', head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *[0,0.9], color='k', head_width=0.05, head_length=0.1)
plt.text(*[1,0], "x", c='k', ha='left', va='top')
plt.text(*[0,1], "y", c='k', ha='left', va='top')
plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
plt.text(*mrot[:,0], "x'", c=color0, ha='right', va='bottom')
plt.text(*mrot[:,1], "y'", c=color1, ha='right', va='bottom')
plt.text(0, 0, "O", c='k', ha='right', va='top')
plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
plt.gca().set_aspect('equal')
plt.axis('off')
# # Or that:
# plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
# plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
# plt.text(*mrot[:,0], "x'", c=color0, ha='right', va='bottom')
# plt.text(*mrot[:,1], "y'", c=color1, ha='right', va='bottom')
# plt.xlabel('x')
# plt.ylabel('y')
# plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
# plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
# plt.gca().set_aspect('equal')
# plt.gca().spines['left'].set_position('zero')
# plt.gca().spines['right'].set_color('none')
# plt.gca().spines['bottom'].set_position('zero')
# plt.gca().spines['top'].set_color('none')
# plt.title("Vario cloud: alpha= {} deg.\ntol_dist ={}deg. / tol_angle ={}deg.".format(alpha, tol_dist, tol_angle))
plt.sca(ax[0,1])
# Plot both variogram clouds
plot_variogramCloud1D(h0, g0, npair0, c=color0, alpha=0.5, label="along x'")
plot_variogramCloud1D(h1, g1, npair1, c=color1, alpha=0.5, label="along y'")
plt.legend()
#plt.title('Total #points = {}'.format(npair0 + npair1))
plt.sca(ax[1,0])
# Plot variogram cloud along x'
plot_variogramCloud1D(h0, g0, npair0, c=color0)
plt.title("along x' ({} pts)".format(npair0))
plt.sca(ax[1,1])
# Plot variogram cloud along y'
plot_variogramCloud1D(h1, g1, npair1, c=color1)
plt.title("along y' ({} pts)".format(npair1))
plt.suptitle("Vario cloud: alpha={}deg.\ntol_dist={} / tol_angle={}deg.".format(alpha, tol_dist, tol_angle))
# plt.show()
return ((h0, g0, npair0), (h1, g1, npair1))
# ----------------------------------------------------------------------------
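The pair-selection rule above (eps, tol_dist and the slope tolerance tan(tol_angle)) can be sketched as a standalone predicate for the x' direction, using the same rotation convention as the function; the lag vectors tested are hypothetical:

```python
import numpy as np

def in_cloud_along_xprime(lag, alpha, tol_dist, tol_angle, hmax=np.inf, eps=1.e-8):
    """Does a 2D lag vector (in system Oxy) qualify for the cloud along axis x'?"""
    a = np.radians(alpha)
    ca, sa = np.cos(a), np.sin(a)
    mrot = np.array([[ca, sa], [-sa, ca]])
    habs = np.fabs(lag @ mrot)                # |coordinates of the lag in Ox'y'|
    tol_s = np.tan(np.radians(tol_angle))     # slope tolerance
    return habs[0] < eps or (habs[0] <= hmax
                             and habs[1] <= tol_dist
                             and habs[1] / habs[0] <= tol_s)

# With alpha=0 (Ox'y' = Oxy), tol_dist=2, tol_angle=20 deg:
print(in_cloud_along_xprime(np.array([10.0, 1.0]), 0.0, 2.0, 20.0))  # True
print(in_cloud_along_xprime(np.array([10.0, 5.0]), 0.0, 2.0, 20.0))  # False (fails tol_dist)
```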
# ----------------------------------------------------------------------------
def variogramExp2D(x, v, alpha=0.0, tol_dist=10.0, tol_angle=45.0, hmax=(np.nan, np.nan),
ncla=(10, 10), cla_center=(None, None), cla_length=(None, None),
variogramCloud=None, make_plot=True, color0='red', color1='green', figsize=None):
"""
Computes two directional experimental variograms for a data set in 2D:
- one along axis x',
- one along axis y',
where the system Ox'y' is obtained from the (usual) system Oxy by applying a rotation of
angle -alpha (see parameter alpha below).
The mean point in each class is retrieved from the two directional variogram clouds
(returned by the function variogramCloud2D).
:param x: (2-dimensional array of shape (n, 2)) 2D-coordinates in system Oxy of data points
:param v: (1-dimensional array of shape (n,)) values at data points
:param alpha: (float) angle in degrees:
the system Ox'y', supporting the principal axes along which the variograms
are computed, is obtained from the system Oxy by applying a rotation of
angle -alpha.
The 2x2 matrix m for changing the coordinate system from
Ox'y' to Oxy is:
+ +
| cos(alpha) sin(alpha)|
m = | -sin(alpha) cos(alpha)|
+ +
:param tol_dist, tol_angle: (float) tolerances (tol_dist: distance, tol_angle: angle in degrees)
used to determine which pairs of points are integrated in the variogram clouds.
A pair of points (x(i), x(j)) is in the directional variogram cloud along
axis x' (resp. y') iff, given the lag vector h = x(i) - x(j),
- the distance from the end of vector h issued from origin to that axis
is less than or equal to tol_dist and,
- the angle between h and that axis is less than or equal to tol_angle
:param hmax: (sequence of 2 floats (or nan)): maximal distance between a pair of data points for
being integrated in the directional variogram cloud along axis x' and axis y' resp.
:param ncla: (sequence of 2 ints) ncla[0], ncla[1]: number of classes
for experimental variogram along axis x' (direction 0) and axis y' (direction 1) resp.
For direction j:
the parameter ncla[j] is used if cla_center[j] is not specified (None),
in that situation ncla[j] classes are considered and the class centers are set to
cla_center[j][i] = (i+0.5)*l, i=0,...,ncla[j]-1
with l = H / ncla[j], H being the max of the distance between the two points of
the considered pairs (in the variogram cloud of direction j).
:param cla_center: (sequence of 2 sequences of floats) cla_center[0], cla_center[1]: center of each class
for experimental variogram along axis x' (direction 0) and axis y' (direction 1) resp.
For direction j:
if cla_center[j] is specified (not None), then the parameter ncla[j] is not used.
:param cla_length: (sequence of length 2 of: None, or float or sequence of floats) cla_length[0], cla_length[1]:
length of each class
for experimental variogram along axis x' (direction 0) and axis y' (direction 1) resp.
For direction j:
- if cla_length[j] not specified (None): the length of every class is set to the
minimum difference between two successive class centers (np.inf if only one class)
- if float: the length of every class is set to the specified number
- if a sequence, its length should be equal to the number of classes (length of
cla_center[j] (or ncla[j]))
Finally, the i-th class is determined by its center cla_center[j][i] and its
length cla_length[j][i], and corresponds to the interval
]cla_center[j][i]-cla_length[j][i]/2, cla_center[j][i]+cla_length[j][i]/2]
along h (lag) axis
:param variogramCloud:
(sequence of 2 tuples of length 3, or None) If given: ((h0, g0, npair0), (h1, g1, npair1)):
variogram clouds (returned by the function variogramCloud2D (npair0, npair1 are not used))
along axis x' (direction 0) and axis y' (direction 1) resp., then
x, v, alpha, tol_dist, tol_angle, hmax are not used
(but alpha, tol_dist, tol_angle are used in plot if make_plot is True)
:param make_plot:
(bool) if True: the plot of the experimental variograms is done (in a new 2x2 figure)
:param color0, color1: colors for experimental variogram along axis x' and along axis y' resp. (used if make_plot is True)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:return: ((hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1)), where
- (hexp0, gexp0, cexp0) is the output for the experimental variogram along axis x':
- hexp0, gexp0: are two 1-dimensional arrays of floats of same length containing
the coordinates of the points of the experimental variogram along axis x', and
- cexp0 is a 1-dimensional array of ints of same length as hexp0 and gexp0, containing
the number of points from the variogram cloud in each class
- (hexp1, gexp1, cexp1) is the output for the experimental variogram along axis y'
"""
# Compute variogram clouds if needed
if variogramCloud is None:
vc = variogramCloud2D(x, v, alpha=alpha, tol_dist=tol_dist, tol_angle=tol_angle, hmax=hmax, make_plot=False)
else:
vc = variogramCloud
# -> vc[0] = (h0, g0, npair0) and vc[1] = (h1, g1, npair1)
# Compute variogram experimental in each direction (using function variogramExp1D)
ve = [None, None]
for j in (0, 1):
ve[j] = variogramExp1D(None, None, hmax=np.nan, ncla=ncla[j], cla_center=cla_center[j], cla_length=cla_length[j], variogramCloud=vc[j], make_plot=False)
(hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1) = ve
if make_plot:
# Rotation matrix
a = alpha * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
mrot = np.array([[ca, sa], [-sa, ca]])
fig, ax = plt.subplots(2,2, figsize=figsize)
plt.sca(ax[0,0])
# Plot system Oxy and Ox'y'
# This:
plt.arrow(*[0,0], *[0.9,0], color='k', head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *[0,0.9], color='k', head_width=0.05, head_length=0.1)
plt.text(*[1,0], "x", c='k', ha='left', va='top')
plt.text(*[0,1], "y", c='k', ha='left', va='top')
plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
plt.text(*mrot[:,0], "x'", c=color0, ha='right', va='bottom')
plt.text(*mrot[:,1], "y'", c=color1, ha='right', va='bottom')
plt.text(0, 0, "O", c='k', ha='right', va='top')
plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
plt.gca().set_aspect('equal')
plt.axis('off')
# # Or that:
# plt.arrow(*[0,0], *(0.9*mrot[:,0]), color=color0, head_width=0.05, head_length=0.1)
# plt.arrow(*[0,0], *(0.9*mrot[:,1]), color=color1, head_width=0.05, head_length=0.1)
# plt.text(*mrot[:,0], "x'", c=color0, ha='right', va='bottom')
# plt.text(*mrot[:,1], "y'", c=color1, ha='right', va='bottom')
# plt.xlabel('x')
# plt.ylabel('y')
# plt.xlim(min(min(mrot[0,:]), 0)-0.1, max(max(mrot[0,:]), 1)+0.1)
# plt.ylim(min(min(mrot[1,:]), 0)-0.1, max(max(mrot[1,:]), 1)+0.1)
# plt.gca().set_aspect('equal')
# plt.gca().spines['left'].set_position('zero')
# plt.gca().spines['right'].set_color('none')
# plt.gca().spines['bottom'].set_position('zero')
# plt.gca().spines['top'].set_color('none')
plt.sca(ax[0,1])
# Plot variogram exp along x' and along y'
plot_variogramExp1D(hexp0, gexp0, cexp0, show_count=False, c=color0, alpha=0.5, label="along x'")
plot_variogramExp1D(hexp1, gexp1, cexp1, show_count=False, c=color1, alpha=0.5, label="along y'")
plt.legend()
plt.sca(ax[1,0])
# Plot variogram exp along x'
plot_variogramExp1D(hexp0, gexp0, cexp0, color=color0)
plt.title("along x'")
plt.sca(ax[1,1])
# Plot variogram exp along y'
plot_variogramExp1D(hexp1, gexp1, cexp1, color=color1)
plt.title("along y'")
plt.suptitle("Vario exp.: alpha={}deg.\ntol_dist={} / tol_angle={}deg.".format(alpha, tol_dist, tol_angle))
# plt.show()
return ((hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1))
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
def variogramExp2D_rose(x, v, r_max=np.nan, r_ncla=10, phi_ncla=12, set_polar_subplot=True, figsize=None, **kwargs):
"""
Shows an experimental variogram for a data set in 2D in the form of a
rose plot, i.e. the lag vectors between the pairs of data points are divided
into classes according to length (radius) and angle from the x-axis, counted
counter-clockwise (warning: opposite sense to the angle convention used in the
definition of a covariance model in 2D).
:param x: (2-dimensional array of shape (n, 2)) 2D-coordinates in system Oxy of data points
:param v: (1-dimensional array of shape (n,)) values at data points
:param r_max: (float or nan) maximal radius, i.e. maximal length of 2D-lag vector between a pair
of data points for being integrated in the variogram rose plot.
:param r_ncla: (int) number of classes for radius
:param phi_ncla: (int) number of classes for angle for half of the whole disk:
on the whole disk, there will be 2*phi_ncla classes
:param set_polar_subplot:
(bool)
- True: a new figure is created with one axis "projection='polar'"
- False: the plot is done in the current figure axis assumed to be set
as "projection='polar'"
(this allows to plot in a figure with multiple axes)
:param figsize: (tuple of 2 ints) size of the figure, not used if set_polar_subplot is False
:kwargs: keyword arguments passed to the function plt.pcolormesh
(cmap, ...)
"""
# Number of data points
n = x.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return
# Compute lag vector (h) and gamma value (g) for pairs of points with distance less than or equal to r_max
if np.isnan(r_max):
# consider all pairs of points
npair = int(0.5*(n-1)*n)
h = np.zeros((npair, 2))
g = np.zeros(npair)
j = 0
for i in range(n-1):
jj = n-1-i
h[j:(j+jj),:]= x[(i+1):, :] - x[i,:]
g[j:(j+jj)]= 0.5*(v[i] - v[(i+1):])**2
j = j+jj
else:
# consider only pairs of points with a distance less than or equal to r_max
r_max2 = r_max**2
h, g = [], []
npair = 0
for i in range(n-1):
htmp = x[(i+1):, :] - x[i,:] # 2-dimensional array (n-1-i) x dim
ind = np.where(np.sum(htmp**2, axis=1) <= r_max2)[0]
h.append(htmp[ind])
g.append(0.5*(v[i] - v[i+1+ind])**2)
npair = npair + len(ind)
if npair > 0:
h = np.vstack(h)
g = np.hstack(g)
# Compute r, phi (radius and angle in complex plane) for each lag vector
r = np.sqrt(np.sum(h*h, axis=1))
phi = np.array([np.arctan2(hh[1], hh[0]) for hh in h])
# or: phi = np.array([np.angle(complex(*hh)) for hh in h])  # (np.complex is removed in recent NumPy)
# ... set each angle phi in [-np.pi/2, np.pi/2[ (symmetry of variogram)
pi_half = 0.5*np.pi
np.putmask(phi, phi < -pi_half, phi + np.pi)
np.putmask(phi, phi >= pi_half, phi - np.pi)
# Set classes for r and phi
if np.isnan(r_max):
r_max = np.max(r)
r_cla = np.linspace(0., r_max, r_ncla+1)
phi_cla = np.linspace(-pi_half, pi_half, phi_ncla+1)
# Compute rose map
gg = np.nan * np.ones((phi_ncla, r_ncla)) # initialize gamma values
for ip in range(phi_ncla):
pind = np.all((phi >= phi_cla[ip], phi < phi_cla[ip+1]), axis=0)
for ir in range(r_ncla):
rind = np.all((r >= r_cla[ir], r < r_cla[ir+1]), axis=0)
gg[ip, ir] = np.mean(g[np.all((pind, rind), axis=0)])
gg = np.vstack((gg, gg))
rr, pp = np.meshgrid(r_cla, np.hstack((phi_cla[:-1],phi_cla+np.pi)))
# Set default color map to 'terrain' if not given in kwargs
if 'cmap' not in kwargs.keys():
kwargs['cmap'] = 'terrain' #'nipy_spectral'
if set_polar_subplot:
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(projection='polar')
plt.pcolormesh(pp, rr, gg, **kwargs)
plt.colorbar()
plt.title('Vario rose (gamma value)')
plt.grid()
# plt.show()
# ----------------------------------------------------------------------------
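The angle folding used in variogramExp2D_rose relies on the symmetry gamma(h) = gamma(-h): opposite lag vectors belong to the same angular class, so every angle is mapped into [-pi/2, pi/2). A minimal standalone sketch of that step:

```python
import numpy as np

# Fold lag-vector angles into [-pi/2, pi/2), as done in variogramExp2D_rose:
# the variogram is symmetric, so opposite lag vectors share an angular class.
h = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [-1.0, 1.0]])
phi = np.arctan2(h[:, 1], h[:, 0])   # angle of each lag vector in (-pi, pi]
pi_half = 0.5 * np.pi
np.putmask(phi, phi < -pi_half, phi + np.pi)   # fold angles below -pi/2 up
np.putmask(phi, phi >= pi_half, phi - np.pi)   # fold angles at/above +pi/2 down
```

Opposite vectors such as (1, 0) and (-1, 0) end up with the same folded angle.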
# ----------------------------------------------------------------------------
def covModel2D_fit(x, v, cov_model, hmax=np.nan, make_plot=True, figsize=None, **kwargs):
"""
Fits a covariance model in 2D (for data in 2D).
The parameter 'cov_model' is a covariance model in 2D (CovModel2D class) with
the parameters to fit set to nan (a nan replaces a float). For example, with
cov_model = CovModel2D(elem=[
('gaussian', {'w':np.nan, 'r':[np.nan, np.nan]}), # elementary contribution
('nugget', {'w':np.nan}) # elementary contribution
], alpha=np.nan, name='')
it will fit the weight and ranges of the gaussian elementary contribution,
the nugget (weight of the nugget contribution), and the angle alpha.
:param x: (2-dimensional array of shape (n, 2)) coordinates
of the data points (n: number of points)
:param v: (1-dimensional array of shape (n,)) values at data points
:param cov_model: (CovModel2D class) covariance model in 2D with parameters to fit set to nan
(see above)
:param hmax: (float or nan) maximal distance between a pair of data points for
being integrated in the variogram cloud.
:param make_plot:
(bool) if True: the plot of the optimized variogram is done (in a new 1x2 figure)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:kwargs: keyword arguments passed to the function curve_fit() from scipy.optimize
e.g.: p0=<array of initial parameters> (see doc of curve_fit), with
an array of floats of length equal to the number of parameters to fit,
considered in the order of appearance in the definition of cov_model;
bounds=(<array of lower bounds>, <array of upper bounds>)
:return: (cov_model_opt, popt) with:
- cov_model_opt: (covModel2D class) optimized covariance model
- popt: (sequence of floats) vector of optimized parameters
returned by curve_fit
"""
# Check dimension of cov_model and set if used as omni-directional model
if cov_model.__class__.__name__ != 'CovModel2D':
print("ERROR: 'cov_model' is incompatible with dimension (2D)")
return (None, None)
# Work on a (deep) copy of cov_model
cov_model_opt = copy.deepcopy(cov_model)
# Get index of element, key of parameters and index of range to fit
ielem_to_fit=[]
key_to_fit=[]
ir_to_fit=[] # if key is equal to 'r' (range), set the index of the range to fit, otherwise set np.nan
for i, el in enumerate(cov_model_opt.elem):
for k, val in el[1].items():
if k == 'r':
for j in (0, 1):
if np.isnan(val[j]):
ielem_to_fit.append(i)
key_to_fit.append(k)
ir_to_fit.append(j)
elif np.isnan(val):
ielem_to_fit.append(i)
key_to_fit.append(k)
ir_to_fit.append(np.nan)
# Must the angle alpha be fitted?
alpha_to_fit = np.isnan(cov_model_opt.alpha)
nparam = len(ielem_to_fit) + int(alpha_to_fit)
if nparam == 0:
print('No parameter to fit!')
return (cov_model_opt, np.array([]))
# Compute lag vector (h) and gamma value (g) for pair of points with distance less than or equal to hmax
n = x.shape[0] # number of points
if np.isnan(hmax):
# consider all pairs of points
npair = int(0.5*(n-1)*n)
h = np.zeros((npair, 2))
g = np.zeros(npair)
j = 0
for i in range(n-1):
jj = n-1-i
h[j:(j+jj),:]= x[(i+1):, :] - x[i,:]
g[j:(j+jj)]= 0.5*(v[i] - v[(i+1):])**2
j = j+jj
else:
# consider only pairs of points with a distance less than or equal to hmax
hmax2 = hmax**2
h, g = [], []
npair = 0
for i in range(n-1):
htmp = x[(i+1):, :] - x[i,:] # 2-dimensional array (n-1-i) x dim
ind = np.where(np.sum(htmp**2, axis=1) <= hmax2)[0]
h.append(htmp[ind])
g.append(0.5*(v[i] - v[i+1+ind])**2)
npair = npair + len(ind)
if npair > 0:
h = np.vstack(h)
g = np.hstack(g)
if npair == 0:
print('No point to fit!')
return (cov_model_opt, np.nan * np.ones(nparam))
# Defines a function that returns a covariance model in 2D, given a vector of parameters
# (for the parameters to fit)
def cov_model_param(ielem, key, ir, alpha_given, p):
"""
:param ielem: (sequence of ints of length m) indexes
:param key: (sequence of strings of length m) keys
:param ir: (sequence of ints of length m) index of ranges (nan if not for a range)
:param alpha_given: (bool) indicates if alpha is given (in last component of vector p)
:param p: (sequence of floats of length m) parameters of covariance model
:return: cov_model_opt with parameters identified by ielem, key, ir set to values given by p
"""
for i, (iel, k, j) in enumerate(zip(ielem, key, ir)):
if k == 'r':
cov_model_opt.elem[iel][1]['r'][j] = p[i]
else:
cov_model_opt.elem[iel][1][k] = p[i]
if alpha_given:
cov_model_opt.alpha = p[-1]
return cov_model_opt
# Defines the function to optimize in a format compatible with curve_fit from scipy.optimize
def func(d, *p):
"""
Function to optimize; p is the vector of parameters.
:param d: (tuple of length 5) with
- d[0] = h, lag vector for pair of data points (see above)
- d[1] = ielem, sequence of indexes of length m
- d[2] = keys, sequence of strings (keys) of length m
- d[3] = ir, sequence of indexes of ranges (nan if not for a range) of length m
- d[4] = alpha_to_set, bool indicating if alpha is given (in last component of vector p)
:param p: vector of parameters of length m, for the covariance model (variables to fit identified with d)
:return: variogram function of the corresponding covariance model evaluated at the lag vectors d[0]
"""
return cov_model_param(d[1], d[2], d[3], d[4], p).vario_func()(d[0])
# Optimize parameters with curve_fit: an initial vector of parameters (p0) must be given
# because the number of parameters to fit cannot be inferred from the signature of func
bounds = None
if 'bounds' in kwargs.keys():
bounds = kwargs['bounds']
if 'p0' not in kwargs.keys():
# add default p0 in kwargs
p0 = np.ones(nparam)
if bounds is not None:
# adjust p0 to given bounds
for i in range(nparam):
if np.isinf(bounds[0][i]):
if np.isinf(bounds[1][i]):
p0[i] = 1.
else:
p0[i] = bounds[1][i]
elif np.isinf(bounds[1][i]):
p0[i] = bounds[0][i]
else:
p0[i] = 0.5*(bounds[0][i]+bounds[1][i])
kwargs['p0'] = p0
else:
if len(kwargs['p0']) != nparam:
print("ERROR: length of 'p0' not compatible")
return (None, None)
# Fit with curve_fit
popt, pcov = curve_fit(func, (h, ielem_to_fit, key_to_fit, ir_to_fit, alpha_to_fit), g, **kwargs)
# Retrieve the optimized covariance model (in cov_model_opt)
cov_model_opt = cov_model_param(ielem_to_fit, key_to_fit, ir_to_fit, alpha_to_fit, popt)
if make_plot:
cov_model_opt.plot_model(vario=True, figsize=figsize)
# suptitle already in function cov_model_opt.plot_model...
# s = ['Vario opt.: alpha={}'.format(cov_model_opt.alpha)] + ['{}'.format(el) for el in cov_model_opt.elem]
# # plt.suptitle(textwrap.TextWrapper(width=50).fill(s))
# plt.suptitle('\n'.join(s))
return (cov_model_opt, popt)
# ----------------------------------------------------------------------------
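When p0 is not supplied, covModel2D_fit derives a default initial guess from the bounds. A standalone sketch of that logic (the helper name is illustrative, not part of this module):

```python
import numpy as np

def default_p0_from_bounds(nparam, bounds=None):
    # Default initial guess per parameter: 1 if unbounded on both sides,
    # the finite bound if only one side is finite, and the midpoint when
    # both bounds are finite (mirrors the logic in covModel2D_fit).
    p0 = np.ones(nparam)
    if bounds is not None:
        lo, hi = bounds
        for i in range(nparam):
            if np.isinf(lo[i]):
                p0[i] = 1. if np.isinf(hi[i]) else hi[i]
            elif np.isinf(hi[i]):
                p0[i] = lo[i]
            else:
                p0[i] = 0.5 * (lo[i] + hi[i])
    return p0
```
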
# ============================================================================
# Functions for variogram cloud, experimental variogram,
# and covariance model fitting (3D)
# ============================================================================
# ----------------------------------------------------------------------------
def variogramCloud3D(x, v, alpha=0.0, beta=0.0, gamma=0.0, tol_dist=10.0, tol_angle=45.0, hmax=(np.nan, np.nan, np.nan),
make_plot=True, color0='red', color1='green', color2='blue', figsize=None):
"""
Computes three directional variogram clouds for a data set in 3D:
- one along axis x''',
- one along axis y''',
- one along axis z''',
where the system Ox'''y'''z''' is obtained from the (usual) system Oxyz as follows:
Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'
Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''
Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''
:param x: (2-dimensional array of shape (n, 3)) 3D-coordinates in system Oxyz of data points
:param v: (1-dimensional array of shape (n,)) values at data points
:param alpha, beta, gamma:
(floats) angle in degrees:
the system Ox'''y'''z''', supporting the axis of each variogram cloud,
is obtained from the system Oxyz as follows:
Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'
Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''
Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''
The 3x3 matrix m for changing the coordinate system from
Ox'''y'''z''' to Oxyz is:
+ +
| ca * cc + sa * sb * sc, sa * cb, - ca * sc + sa * sb * cc|
m = |- sa * cc + ca * sb * sc, ca * cb, sa * sc + ca * sb * cc|
| cb * sc, - sb, cb * cc|
+ +
where
ca = cos(alpha), cb = cos(beta), cc = cos(gamma),
sa = sin(alpha), sb = sin(beta), sc = sin(gamma)
:param tol_dist, tol_angle: (float) tolerances (tol_dist: distance, tol_angle: angle in degrees)
used to determine which pairs of points are integrated in the variogram clouds.
A pair of points (x(i), x(j)) is in the directional variogram cloud along
axis x''' (resp. y''' and z''') iff, given the lag vector h = x(i) - x(j),
- the distance from the end of vector h issued from origin to that axis
is less than or equal to tol_dist and,
- the angle between h and that axis is less than or equal to tol_angle
:param hmax: (sequence of 3 floats (or nan)): maximal distance between a pair of data points for
being integrated in the directional variogram cloud along axis x''', axis y'''
and axis z''' resp.
:param make_plot:
(bool) if True: the plot of the variogram clouds is done (in a new 2x2 figure)
:param color0, color1, color2:
colors for variogram cloud along axis x''', along axis y''', and along axis z''' resp.
(used if make_plot is True)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:return: ((h0, g0, npair0), (h1, g1, npair1), (h2, g2, npair2)), where
- (h0, g0, npair0) is the directional variogram cloud along the axis x'''
(h0, g0 are two 1-dimensional arrays of the same length containing
the coordinates of the points in the variogram cloud, and
npair0 is an int, the number of points (pairs of data points considered)
in the variogram cloud)
- (h1, g1, npair1) is the directional variogram cloud along the axis y'''
(same type of object as for axis x''')
- (h2, g2, npair2) is the directional variogram cloud along the axis z'''
(same type of object as for axis x''')
"""
# Number of data points
n = x.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return ((None, None, None), (None, None, None), (None, None, None))
# Rotation matrix
a = alpha * np.pi/180.
b = beta * np.pi/180.
c = gamma * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
cb, sb = np.cos(b), np.sin(b)
cc, sc = np.cos(c), np.sin(c)
mrot = np.array([[ ca * cc + sa * sb * sc, sa * cb, - ca * sc + sa * sb * cc],
[- sa * cc + ca * sb * sc, ca * cb, sa * sc + ca * sb * cc],
[ cb * sc, - sb, cb * cc ]])
# Coordinates of data points in the new system Ox'''y'''z'''
xnew = x.dot(mrot)
# Tolerance for distance to origin
hmax = list(hmax) # for assignment of its components
for i in (0, 1, 2):
if np.isnan(hmax[i]):
hmax[i] = np.inf
# Tolerance for slope computed from tol_angle
tol_s = np.tan(tol_angle*np.pi/180)
eps = 1.e-8 # close to zero
# Compute variogram clouds
h0, g0, h1, g1, h2, g2 = [], [], [], [], [], []
for i in range(n-1):
for j in range(i+1, n):
h = xnew[i,:] - xnew[j,:]
habs = np.fabs(h)
# di: distance to axis i (in new system)
d0 = np.sqrt((h[1]**2 + h[2]**2))
d1 = np.sqrt((h[0]**2 + h[2]**2))
d2 = np.sqrt((h[0]**2 + h[1]**2))
if habs[0] < eps or (habs[0] <= hmax[0] and d0 <= tol_dist and d0/habs[0] <= tol_s):
# Directional variogram along x''' contains pair of points (i,j)
h0.append(habs[0]) # projection along x'''
g0.append(0.5*(v[i]-v[j])**2)
if habs[1] < eps or (habs[1] <= hmax[1] and d1 <= tol_dist and d1/habs[1] <= tol_s):
# Directional variogram along y''' contains pair of points (i,j)
h1.append(habs[1]) # projection along y'''
g1.append(0.5*(v[i]-v[j])**2)
if habs[2] < eps or (habs[2] <= hmax[2] and d2 <= tol_dist and d2/habs[2] <= tol_s):
# Directional variogram along z''' contains pair of points (i,j)
h2.append(habs[2]) # projection along z'''
g2.append(0.5*(v[i]-v[j])**2)
h0 = np.asarray(h0)
g0 = np.asarray(g0)
npair0 = len(h0)
h1 = np.asarray(h1)
g1 = np.asarray(g1)
npair1 = len(h1)
h2 = np.asarray(h2)
g2 = np.asarray(g2)
npair2 = len(h2)
if make_plot:
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(2,2,1, projection='3d')
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
# Plot systems Oxyz and Ox'''y'''z'''
ax1.plot([0,1], [0,0], [0,0], color='k')
ax1.plot([0,0], [0,1], [0,0], color='k')
ax1.plot([0,0], [0,0], [0,1], color='k')
ax1.plot([0, mrot[0,0]], [0, mrot[1,0]], [0, mrot[2,0]], color=color0, label="x'''")
ax1.plot([0, mrot[0,1]], [0, mrot[1,1]], [0, mrot[2,1]], color=color1, label="y'''")
ax1.plot([0, mrot[0,2]], [0, mrot[1,2]], [0, mrot[2,2]], color=color2, label="z'''")
ax1.set_xticks([0,1])
ax1.set_yticks([0,1])
ax1.set_zticks([0,1])
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
ax1.legend()
plt.sca(ax1)
plt.title("System Ox'''y'''z'''")
plt.sca(ax2)
# Plot variogram cloud along x'''
plot_variogramCloud1D(h0, g0, npair0, c=color0)
plt.title("along x''' ({} pts)".format(npair0))
plt.sca(ax3)
# Plot variogram cloud along y'''
plot_variogramCloud1D(h1, g1, npair1, c=color1)
plt.title("along y''' ({} pts)".format(npair1))
plt.sca(ax4)
# Plot variogram cloud along z'''
plot_variogramCloud1D(h2, g2, npair2, c=color2)
plt.title("along z''' ({} pts)".format(npair2))
plt.suptitle("Vario cloud: alpha={}deg. beta={}deg. gamma={}deg.\ntol_dist={} / tol_angle={}deg.".format(alpha, beta, gamma, tol_dist, tol_angle))
# plt.show()
return ((h0, g0, npair0), (h1, g1, npair1), (h2, g2, npair2))
# ----------------------------------------------------------------------------
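The rotation matrix built in variogramCloud3D can be sanity-checked on its own: as a composition of rotations it must be orthonormal, and it must reduce to the identity for zero angles. A standalone sketch (function name is illustrative):

```python
import numpy as np

def rotation_matrix_3d(alpha, beta, gamma):
    # Matrix for changing coordinates from system Ox'''y'''z''' to Oxyz,
    # as written in variogramCloud3D (angles in degrees).
    a, b, c = np.deg2rad([alpha, beta, gamma])
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    return np.array([[ ca*cc + sa*sb*sc, sa*cb, -ca*sc + sa*sb*cc],
                     [-sa*cc + ca*sb*sc, ca*cb,  sa*sc + ca*sb*cc],
                     [ cb*sc,           -sb,     cb*cc           ]])

m = rotation_matrix_3d(30., 10., 20.)
# Orthonormality: m.T @ m should be the 3x3 identity.
```
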
# ----------------------------------------------------------------------------
def variogramExp3D(x, v, alpha=0.0, beta=0.0, gamma=0.0, tol_dist=10.0, tol_angle=45.0, hmax=(np.nan, np.nan, np.nan),
ncla=(10, 10, 10), cla_center=(None, None, None), cla_length=(None, None, None),
variogramCloud=None, make_plot=True, color0='red', color1='green', color2='blue', figsize=None):
"""
Computes three directional experimental variograms for a data set in 3D:
- one along axis x''',
- one along axis y''',
- one along axis z''',
where the system Ox'''y'''z''' is obtained from the (usual) system Oxyz as follows:
Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'
Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''
Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''
The mean point in each class is retrieved from the three directional variogram clouds
(returned by the function variogramCloud3D).
:param x: (2-dimensional array of shape (n, 3)) 3D-coordinates in system Oxyz of data points
:param v: (1-dimensional array of shape (n,)) values at data points
:param alpha, beta, gamma:
(floats) angle in degrees:
the system Ox'''y'''z''', supporting the axis of each variogram cloud,
is obtained from the system Oxyz as follows:
Oxyz -- rotation of angle -alpha around Oz --> Ox'y'z'
Ox'y'z' -- rotation of angle -beta around Ox' --> Ox''y''z''
Ox''y''z''-- rotation of angle -gamma around Oy''--> Ox'''y'''z'''
The 3x3 matrix m for changing the coordinate system from
Ox'''y'''z''' to Oxyz is:
+ +
| ca * cc + sa * sb * sc, sa * cb, - ca * sc + sa * sb * cc|
m = |- sa * cc + ca * sb * sc, ca * cb, sa * sc + ca * sb * cc|
| cb * sc, - sb, cb * cc|
+ +
where
ca = cos(alpha), cb = cos(beta), cc = cos(gamma),
sa = sin(alpha), sb = sin(beta), sc = sin(gamma)
:param tol_dist, tol_angle: (float) tolerances (tol_dist: distance, tol_angle: angle in degrees)
used to determine which pairs of points are integrated in the variogram clouds.
A pair of points (x(i), x(j)) is in the directional variogram cloud along
axis x''' (resp. y''' and z''') iff, given the lag vector h = x(i) - x(j),
- the distance from the end of vector h issued from origin to that axis
is less than or equal to tol_dist and,
- the angle between h and that axis is less than or equal to tol_angle
:param hmax: (sequence of 3 floats (or nan)): maximal distance between a pair of data points for
being integrated in the directional variogram cloud along axis x''', axis y'''
and axis z''' resp.
:param ncla: (sequence of 3 ints) ncla[0], ncla[1], ncla[2]: number of classes
for experimental variogram along axis x''' (direction 0), axis y''' (direction 1)
and axis z''' (direction 2) resp.
For direction j:
the parameter ncla[j] is used if cla_center[j] is not specified (None),
in that situation ncla[j] classes are considered and the class centers are set to
cla_center[j][i] = (i+0.5)*l, i=0,...,ncla[j]-1
with l = H / ncla[j], H being the max of the distance between the two points of
the considered pairs (in the variogram cloud of direction j).
:param cla_center: (sequence of 3 sequences of floats) cla_center[0], cla_center[1], cla_center[2]: center
of each class for experimental variogram along axis x''' (direction 0), axis y''' (direction 1)
and axis z''' (direction 2) resp.
For direction j:
if cla_center[j] is specified (not None), then the parameter ncla[j] is not used.
:param cla_length: (sequence of length 3 of: None, or float or sequence of floats) cla_length[0], cla_length[1], cla_length[2]:
length of each class
for experimental variogram along axis x''' (direction 0), axis y''' (direction 1)
and axis z''' (direction 2) resp.
For direction j:
- if cla_length[j] not specified (None): the length of every class is set to the
minimum of difference between two successive class centers (np.inf if one class)
- if float: the length of every class is set to the specified number
- if a sequence, its length should be equal to the number of classes (length of
cla_center[j] (or ncla[j]))
Finally, the i-th class is determined by its center cla_center[j][i] and its
length cla_length[j][i], and corresponds to the interval
]cla_center[j][i]-cla_length[j][i]/2, cla_center[j][i]+cla_length[j][i]/2]
along h (lag) axis
:param variogramCloud:
(sequence of 3 tuples of length 3, or None) If given: ((h0, g0, npair0), (h1, g1, npair1), (h2, g2, npair2)):
variogram clouds (returned by the function variogramCloud3D (npair0, npair1, npair2 are not used))
along axis x''' (direction 0), axis y''' (direction 1) and axis z''' (direction 2) resp., then
x, v, alpha, beta, gamma, tol_dist, tol_angle, hmax are not used
(but alpha, beta, gamma, tol_dist, tol_angle are used in plot if make_plot is True)
:param make_plot:
(bool) if True: the plot of the experimental variograms is done (in a new 2x3 figure)
:param color0, color1, color2:
colors for experimental variogram along axis x''', along axis y''', and along axis z''' resp.
(used if make_plot is True)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:return: ((hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1), (hexp2, gexp2, cexp2)), where
- (hexp0, gexp0, cexp0) is the output for the experimental variogram along axis x''':
- hexp0, gexp0: are two 1-dimensional arrays of floats of same length containing
the coordinates of the points of the experimental variogram along axis x''', and
- cexp0 is a 1-dimensional array of ints of same length as hexp0 and gexp0, containing
the number of points from the variogram cloud in each class
- (hexp1, gexp1, cexp1) is the output for the experimental variogram along axis y'''
- (hexp2, gexp2, cexp2) is the output for the experimental variogram along axis z'''
"""
# Compute variogram clouds if needed
if variogramCloud is None:
vc = variogramCloud3D(x, v, alpha=alpha, beta=beta, gamma=gamma, tol_dist=tol_dist, tol_angle=tol_angle, hmax=hmax, make_plot=False)
else:
vc = variogramCloud
# -> vc[0] = (h0, g0, npair0) and vc[1] = (h1, g1, npair1) and vc[2] = (h2, g2, npair2)
# Compute variogram experimental in each direction (using function variogramExp1D)
ve = [None, None, None]
for j in (0, 1, 2):
ve[j] = variogramExp1D(None, None, hmax=np.nan, ncla=ncla[j], cla_center=cla_center[j], cla_length=cla_length[j], variogramCloud=vc[j], make_plot=False)
(hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1), (hexp2, gexp2, cexp2) = ve
if make_plot:
# Rotation matrix
a = alpha * np.pi/180.
b = beta * np.pi/180.
c = gamma * np.pi/180.
ca, sa = np.cos(a), np.sin(a)
cb, sb = np.cos(b), np.sin(b)
cc, sc = np.cos(c), np.sin(c)
mrot = np.array([[ ca * cc + sa * sb * sc, sa * cb, - ca * sc + sa * sb * cc],
[- sa * cc + ca * sb * sc, ca * cb, sa * sc + ca * sb * cc],
[ cb * sc, - sb, cb * cc ]])
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(2,3,1, projection='3d')
# subplot(2,3,2) is empty
ax2 = fig.add_subplot(2,3,3)
ax3 = fig.add_subplot(2,3,4)
ax4 = fig.add_subplot(2,3,5)
ax5 = fig.add_subplot(2,3,6)
# Plot systems Oxyz and Ox'''y'''z'''
ax1.plot([0,1], [0,0], [0,0], color='k')
ax1.plot([0,0], [0,1], [0,0], color='k')
ax1.plot([0,0], [0,0], [0,1], color='k')
ax1.plot([0, mrot[0,0]], [0, mrot[1,0]], [0, mrot[2,0]], color=color0, label="x'''")
ax1.plot([0, mrot[0,1]], [0, mrot[1,1]], [0, mrot[2,1]], color=color1, label="y'''")
ax1.plot([0, mrot[0,2]], [0, mrot[1,2]], [0, mrot[2,2]], color=color2, label="z'''")
ax1.set_xticks([0,1])
ax1.set_yticks([0,1])
ax1.set_zticks([0,1])
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
ax1.legend()
plt.sca(ax1)
plt.title("System Ox'''y'''z'''")
plt.sca(ax2)
# Plot variogram exp along x''', along y''' and along z'''
plot_variogramExp1D(hexp0, gexp0, cexp0, show_count=False, c=color0, alpha=0.5, label="along x'''")
plot_variogramExp1D(hexp1, gexp1, cexp1, show_count=False, c=color1, alpha=0.5, label="along y'''")
plot_variogramExp1D(hexp2, gexp2, cexp2, show_count=False, c=color2, alpha=0.5, label="along z'''")
plt.legend()
plt.sca(ax3)
# Plot variogram exp along x'''
plot_variogramExp1D(hexp0, gexp0, cexp0, c=color0)
plt.title("along x'''")
plt.sca(ax4)
# Plot variogram exp along y'''
plot_variogramExp1D(hexp1, gexp1, cexp1, c=color1)
plt.title("along y'''")
plt.sca(ax5)
# Plot variogram exp along z'''
plot_variogramExp1D(hexp2, gexp2, cexp2, c=color2)
plt.title("along z'''")
plt.suptitle("Vario exp.: alpha={}deg. beta={}deg. gamma={}deg.\ntol_dist={} / tol_angle={}deg.".format(alpha, beta, gamma, tol_dist, tol_angle))
# plt.show()
return ((hexp0, gexp0, cexp0), (hexp1, gexp1, cexp1), (hexp2, gexp2, cexp2))
# ----------------------------------------------------------------------------
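The double loops that build the lag vectors h and semivariances g in the fitting functions can also be written with np.triu_indices. A vectorized sketch (helper name illustrative, not part of this module):

```python
import numpy as np

def variogram_cloud_pairs(x, v):
    # All pairwise lag vectors h = x[j] - x[i] (j > i) and semivariances
    # g = 0.5*(v[i] - v[j])**2, as built by the loops in covModel3D_fit.
    n = x.shape[0]
    i, j = np.triu_indices(n, k=1)  # indices of all pairs with j > i
    h = x[j] - x[i]
    g = 0.5 * (v[i] - v[j])**2
    return h, g
```
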
# ----------------------------------------------------------------------------
def covModel3D_fit(x, v, cov_model, hmax=np.nan, make_plot=True, **kwargs):
"""
Fits a covariance model in 3D (for data in 3D).
The parameter 'cov_model' is a covariance model in 3D (CovModel3D class) with
the parameters to fit set to nan (a nan replaces a float). For example, with
cov_model = CovModel3D(elem=[
('gaussian', {'w':np.nan, 'r':[np.nan, np.nan, np.nan]}), # elementary contribution
('nugget', {'w':np.nan}) # elementary contribution
], alpha=np.nan, beta=np.nan, gamma=np.nan, name='')
it will fit the weight and ranges of the gaussian elementary contribution,
the nugget (weight of the nugget contribution), and the angles alpha, beta, gamma.
:param x: (2-dimensional array of shape (n, 3)) 3D-coordinates
of the data points (n: number of points)
:param v: (1-dimensional array of shape (n,)) values at data points
:param cov_model: (CovModel3D class) covariance model in 3D with parameters to fit set to nan
(see above)
:param hmax: (float or nan) maximal distance between a pair of data points for
being integrated in the variogram cloud.
:param make_plot:
(bool) if True: the plot of the optimized variogram is done (in a new 1x2 figure)
:kwargs: keyword arguments passed to the function curve_fit() from scipy.optimize
e.g.: p0=<array of initial parameters> (see doc of curve_fit), with
an array of floats of length equal to the number of parameters to fit,
considered in the order of appearance in the definition of cov_model;
bounds=(<array of lower bounds>, <array of upper bounds>)
:return: (cov_model_opt, popt) with:
- cov_model_opt: (covModel3D class) optimized covariance model
- popt: (sequence of floats) vector of optimized parameters
returned by curve_fit
"""
# Check dimension of cov_model and set if used as omni-directional model
if cov_model.__class__.__name__ != 'CovModel3D':
print("ERROR: 'cov_model' is incompatible with dimension (3D)")
return (None, None)
# Work on a (deep) copy of cov_model
cov_model_opt = copy.deepcopy(cov_model)
# Get index of element, key of parameters and index of range to fit
ielem_to_fit=[]
key_to_fit=[]
ir_to_fit=[] # if key is equal to 'r' (range), set the index of the range to fit, otherwise set np.nan
for i, el in enumerate(cov_model_opt.elem):
for k, val in el[1].items():
if k == 'r':
for j in (0, 1, 2):
if np.isnan(val[j]):
ielem_to_fit.append(i)
key_to_fit.append(k)
ir_to_fit.append(j)
elif np.isnan(val):
ielem_to_fit.append(i)
key_to_fit.append(k)
ir_to_fit.append(np.nan)
# Must the angles alpha, beta, gamma be fitted?
alpha_to_fit = np.isnan(cov_model_opt.alpha)
beta_to_fit = np.isnan(cov_model_opt.beta)
gamma_to_fit = np.isnan(cov_model_opt.gamma)
nparam = len(ielem_to_fit) + int(alpha_to_fit) + int(beta_to_fit) + int(gamma_to_fit)
if nparam == 0:
print('No parameter to fit!')
return (cov_model_opt, np.array([]))
# Compute lag vector (h) and gamma value (g) for pair of points with distance less than or equal to hmax
n = x.shape[0] # number of points
if np.isnan(hmax):
# consider all pairs of points
npair = int(0.5*(n-1)*n)
h = np.zeros((npair, 3))
g = np.zeros(npair)
j = 0
for i in range(n-1):
jj = n-1-i
h[j:(j+jj),:]= x[(i+1):, :] - x[i,:]
g[j:(j+jj)]= 0.5*(v[i] - v[(i+1):])**2
j = j+jj
else:
# consider only pairs of points with a distance less than or equal to hmax
hmax2 = hmax**2
h, g = [], []
npair = 0
for i in range(n-1):
htmp = x[(i+1):, :] - x[i,:] # 2-dimensional array (n-1-i) x dim
ind = np.where(np.sum(htmp**2, axis=1) <= hmax2)[0]
h.append(htmp[ind])
g.append(0.5*(v[i] - v[i+1+ind])**2)
npair = npair + len(ind)
if npair > 0:
h = np.vstack(h)
g = np.hstack(g)
if npair == 0:
print('No point to fit!')
return (cov_model_opt, np.nan * np.ones(nparam))
# Defines a function that returns a covariance model in 3D, given a vector of parameters
# (for the parameters to fit)
def cov_model_param(ielem, key, ir, alpha_given, beta_given, gamma_given, p):
"""
:param ielem: (sequence of ints of length m) indexes
:param key: (sequence of strings of length m) keys
:param ir: (sequence of ints of length m) index of ranges (nan if not for a range)
:param alpha_given: (bool) indicates if alpha is given (at the end of vector p)
:param beta_given: (bool) indicates if beta is given (at the end of vector p)
:param gamma_given: (bool) indicates if gamma is given (at the end of vector p)
:param p: (sequence of floats of length m) parameters of covariance model
:return: cov_model_opt with parameters identified by ielem, key, ir set to values given by p
"""
for i, (iel, k, j) in enumerate(zip(ielem, key, ir)):
if k == 'r':
cov_model_opt.elem[iel][1]['r'][j] = p[i]
else:
cov_model_opt.elem[iel][1][k] = p[i]
if alpha_given:
cov_model_opt.alpha = p[-1-int(beta_given)-int(gamma_given)]
if beta_given:
cov_model_opt.beta = p[-1-int(gamma_given)]
if gamma_given:
cov_model_opt.gamma = p[-1]
return cov_model_opt
# Defines the function to optimize in a format compatible with curve_fit from scipy.optimize
def func(d, *p):
"""
Function to optimize; p is the vector of parameters.
:param d: (tuple of length 7) with
- d[0] = h, lag vector for pair of data points (see above)
- d[1] = ielem, sequence of indexes of length m
- d[2] = keys, sequence of strings (keys) of length m
- d[3] = ir, sequence of indexes of ranges (nan if not for a range) of length m
- d[4] = alpha_to_set, bool indicating if alpha is given (at the end of vector p)
- d[5] = beta_to_set, bool indicating if beta is given (at the end of vector p)
- d[6] = gamma_to_set, bool indicating if gamma is given (at the end of vector p)
:param p: vector of parameters of length m, for the covariance model (variables to fit identified with d)
:return: variogram function of the corresponding covariance model evaluated at the lag vectors d[0]
"""
return cov_model_param(d[1], d[2], d[3], d[4], d[5], d[6], p).vario_func()(d[0])
# Optimize parameters with curve_fit: an initial vector of parameters (p0) must be given
# because the number of parameters to fit cannot be inferred from the signature of func
bounds = None
if 'bounds' in kwargs.keys():
bounds = kwargs['bounds']
if 'p0' not in kwargs.keys():
# add default p0 in kwargs
p0 = np.ones(nparam)
if bounds is not None:
# adjust p0 to given bounds
for i in range(nparam):
if np.isinf(bounds[0][i]):
if np.isinf(bounds[1][i]):
p0[i] = 1.
else:
p0[i] = bounds[1][i]
elif np.isinf(bounds[1][i]):
p0[i] = bounds[0][i]
else:
p0[i] = 0.5*(bounds[0][i]+bounds[1][i])
kwargs['p0'] = p0
else:
if len(kwargs['p0']) != nparam:
print("ERROR: length of 'p0' not compatible")
return (None, None)
# Fit with curve_fit
popt, pcov = curve_fit(func, (h, ielem_to_fit, key_to_fit, ir_to_fit, alpha_to_fit, beta_to_fit, gamma_to_fit), g, **kwargs)
# Retrieve the optimized covariance model (in cov_model_opt)
cov_model_opt = cov_model_param(ielem_to_fit, key_to_fit, ir_to_fit, alpha_to_fit, beta_to_fit, gamma_to_fit, popt)
if make_plot:
# plt.suptitle(textwrap.TextWrapper(width=50).fill(s))
s = ['Vario opt.: alpha={}, beta={}, gamma={}'.format(cov_model_opt.alpha, cov_model_opt.beta, cov_model_opt.gamma)] + ['{}'.format(el) for el in cov_model_opt.elem]
cov_model_opt.plot_model3d_volume(vario=True, text='\n'.join(s), text_kwargs={'font_size':12})
return (cov_model_opt, popt)
# ----------------------------------------------------------------------------
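The p0-defaulting logic above (midpoint of finite bounds, the single finite bound when only one side is finite, 1 when both are infinite) can be isolated as a small helper. This is an illustrative sketch, not part of the module; the name `default_p0` is made up:

```python
import numpy as np

def default_p0(bounds, nparam):
    # Default initial parameter vector for curve_fit, adjusted to bounds:
    # - both bounds finite   -> midpoint
    # - one bound finite     -> that bound
    # - both bounds infinite -> 1
    p0 = np.ones(nparam)
    if bounds is None:
        return p0
    lo = np.asarray(bounds[0], dtype=float)
    up = np.asarray(bounds[1], dtype=float)
    for i in range(nparam):
        if np.isinf(lo[i]):
            p0[i] = 1.0 if np.isinf(up[i]) else up[i]
        elif np.isinf(up[i]):
            p0[i] = lo[i]
        else:
            p0[i] = 0.5 * (lo[i] + up[i])
    return p0
```

Such a default only seeds the optimizer; a problem-specific p0 (e.g. from an experimental variogram) usually converges faster.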
# ============================================================================
# Ordinary kriging and cross validation by leave-one-out (loo)
# ============================================================================
# ----------------------------------------------------------------------------
def ordinary_kriging(x, v, xu, cov_model):
"""
Ordinary kriging - interpolates at locations xu the values v measured at locations x.
The given covariance model should be either:
- in the same dimension as the locations x, xu
- or in 1D, in which case it is used as an omni-directional
covariance model (see below).
:param x: (2-dimensional array of shape (n, d)) coordinates
of the data points (n: number of points, d: dimension)
Note: for data in 1D, it can be a 1-dimensional array of shape (n,)
:param v: (1-dimensional array of shape (n,)) values at data points
:param xu: (2-dimensional array of shape (nu, d)) coordinates
of the points where the interpolation has to be done
(nu: number of points, d: dimension same as for x),
called unknown points
Note: for data in 1D, it can be a 1-dimensional array of shape (nu,)
:param cov_model: covariance model:
- in same dimension as dimension of points (d), i.e.:
- CovModel1D class if data in 1D (d=1)
- CovModel2D class if data in 2D (d=2)
- CovModel3D class if data in 3D (d=3)
- or CovModel1D whatever dimension of points (d):
- used as an omni-directional covariance model
:return: (vu, vu_std) with:
vu: (1-dimensional array of shape (nu,)) kriged values (estimates) at points xu
vu_std: (1-dimensional array of shape (nu,)) kriged standard deviation at points xu
"""
# Get dimension (d) from x
if np.asarray(x).ndim == 1:
# x is a 1-dimensional array
x = np.asarray(x).reshape(-1, 1)
d = 1
else:
# x is a 2-dimensional array
d = x.shape[1]
# Get dimension (du) from xu
if np.asarray(xu).ndim == 1:
# xu is a 1-dimensional array
xu = np.asarray(xu).reshape(-1, 1)
du = 1
else:
# xu is a 2-dimensional array
du = xu.shape[1]
# Check dimension of x and xu
if d != du:
print("ERROR: 'x' and 'xu' do not have same dimension")
return (None, None)
# Check dimension of cov_model and set if used as omni-directional model
if cov_model.__class__.__name__ != 'CovModel{}D'.format(d):
if cov_model.__class__.__name__ == 'CovModel1D':
omni_dir = True
else:
print("ERROR: 'cov_model' is incompatible with dimension of points")
return (None, None)
else:
omni_dir = False
# Number of data points
n = x.shape[0]
# Number of unknown points
nu = xu.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return (None, None)
# Covariance function
cov_func = cov_model.func() # covariance function
if omni_dir:
# covariance model in 1D is used
cov0 = cov_func(0.) # covariance function at origin (lag=0)
else:
cov0 = cov_func(np.zeros(d)) # covariance function at origin (lag=0)
# Fill matrix of ordinary kriging system (matOK)
nOK = n+1 # order of the matrix
matOK = np.ones((nOK, nOK))
for i in range(n-1):
# lag between x[i] and x[j], j=i+1, ..., n-1
h = x[(i+1):] - x[i]
if omni_dir:
# compute norm of lag
h = np.sqrt(np.sum(h**2, axis=1))
cov_h = cov_func(h)
matOK[i, (i+1):-1] = cov_h
matOK[(i+1):-1, i] = cov_h
matOK[i,i] = cov0
matOK[-2,-2] = cov0
matOK[-1,-1] = 0.0
# Right hand side of the ordinary kriging system (b):
# b is a matrix of dimension nOK x nu
b = np.ones((nOK, nu))
for i in range(n):
# lag between x[i] and every xu
h = xu - x[i]
if omni_dir:
# compute norm of lag
h = np.sqrt(np.sum(h**2, axis=1))
b[i,:] = cov_func(h)
# Solve the kriging system
w = np.linalg.solve(matOK,b) # w: matrix of dimension nOK x nu
# Kriged values at unknown points
vu = v.dot(w[:-1,:])
# Kriged standard deviation at unknown points
vu_std = np.sqrt(np.maximum(0, cov0 - np.array([np.dot(w[:,i], b[:,i]) for i in range(nu)])))
return (vu, vu_std)
# ----------------------------------------------------------------------------
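The system assembled above — covariance block bordered by a row and column of ones, a zero in the corner, the last unknown being the Lagrange multiplier enforcing that the weights sum to 1 — can be sketched in plain numpy for the 1D case. `exp_cov` and `ok_1d` below are illustrative stand-ins, not the module's models; the sketch also shows that ordinary kriging is an exact interpolator (zero standard deviation at a data point):

```python
import numpy as np

def exp_cov(h, w=1.0, r=1.0):
    # Illustrative exponential covariance: w * exp(-3|h|/r)
    return w * np.exp(-3.0 * np.abs(h) / r)

def ok_1d(x, v, xu, cov=exp_cov):
    n, nu = len(x), len(xu)
    A = np.ones((n + 1, n + 1))               # bordered kriging matrix
    A[:n, :n] = cov(x[:, None] - x[None, :])  # covariance block
    A[-1, -1] = 0.0                           # corner for the Lagrange multiplier
    b = np.ones((n + 1, nu))
    b[:n, :] = cov(x[:, None] - xu[None, :])  # right-hand sides, one per target
    w_ = np.linalg.solve(A, b)                # weights (+ multiplier) per target
    vu = v @ w_[:-1, :]                       # kriged estimates
    var = cov(0.0) - np.einsum('ij,ij->j', w_, b)
    return vu, np.sqrt(np.maximum(var, 0.0))  # clip tiny negative round-off

x = np.array([0.0, 1.0, 2.5])
v = np.array([1.0, 2.0, 0.5])
vu, vu_std = ok_1d(x, v, np.array([1.0, 1.7]))
```

At xu = 1.0 (a data location) the estimate reproduces the datum with zero standard deviation; at xu = 1.7 the standard deviation is strictly positive.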
# ----------------------------------------------------------------------------
def cross_valid_loo_ok(x, v, cov_model, confidence=0.05, make_plot=True, figsize=None):
"""
Cross-validation of a covariance model by leave-one-out error, based on ordinary kriging.
The given covariance model should be either:
- in the same dimension as the locations x
- or in 1D, in which case it is used as an omni-directional covariance model
Two statistical tests are performed:
(1) normal law test on the mean of the normalized errors:
the mean of the normalized errors times the square root of n-1
should approximately follow a N(0,1) law (CLT)
(2) Chi2 test on the sum of squares of the normalized errors:
the sum of squares of the normalized errors should follow a
Chi2 law with n-1 degrees of freedom,
n being the number of data points.
Test (1) passes if the obtained value is within the central interval
covering the 1-confidence part of the N(0,1) distribution; test (2)
passes if the obtained value does not exceed the (1-confidence)-quantile
of the Chi2 distribution (by default: confidence is set to 5%).
:param x: (2-dimensional array of shape (n, d)) coordinates
of the data points (n: number of points, d: dimension)
Note: for data in 1D, it can be a 1-dimensional array of shape (n,)
:param v: (1-dimensional array of shape (n,)) values at data points
:param cov_model: covariance model:
- in same dimension as dimension of points (d), i.e.:
- CovModel1D class if data in 1D (d=1)
- CovModel2D class if data in 2D (d=2)
- CovModel3D class if data in 3D (d=3)
- or CovModel1D whatever dimension of points (d):
- used as an omni-directional covariance model
:param confidence: (float) in [0,1], risk level used to set the limits in the two statistical tests
(see above)
:param make_plot:
(bool) if True: a plot is done (in a new 1x2 figure)
:param figsize: (tuple of 2 ints) size of the figure (used if make_plot is True)
:return: (valid_code1, valid_code2), a tuple of 2 bools:
valid_code1: True if test (1) passed with success, False otherwise
valid_code2: True if test (2) passed with success, False otherwise
"""
# Get dimension (d) from x
if np.asarray(x).ndim == 1:
# x is a 1-dimensional array
x = np.asarray(x).reshape(-1, 1)
d = 1
else:
# x is a 2-dimensional array
d = x.shape[1]
# Check dimension of cov_model and set if used as omni-directional model
if cov_model.__class__.__name__ != 'CovModel{}D'.format(d):
if cov_model.__class__.__name__ == 'CovModel1D':
omni_dir = True
else:
print("ERROR: 'cov_model' is incompatible with dimension of points")
return (None, None)
else:
omni_dir = False
# Number of data points
n = x.shape[0]
# Check length of v
if len(v) != n:
print("ERROR: length of 'v' is not valid")
return (None, None)
# Leave-one-out (loo) cross validation
v_est, v_std = np.zeros(n), np.zeros(n)
ind = np.arange(n)
for i in range(n):
indx = np.delete(ind, i)
v_est[i], v_std[i] = ordinary_kriging(x[indx], v[indx], np.array(x[i]).reshape(-1, d), cov_model)
# Normalized error
err = (v_est - v) / v_std
# Each err[i] should approximately follow a N(0,1) law, and the set of err[i] has about n-1 degrees of freedom, so:
# (1) sqrt(n-1)*mean(err) follows approximately a law N(0,1) (CLT)
# (2) sum(err^2) follows a law Chi2 with n-1 degrees of freedom
me = np.mean(err)
s2 = np.sum(err**2)
t = np.sqrt(n-1)*me
tlim = stats.norm.ppf(1.-0.5*confidence)
if np.abs(t) > tlim:
print("Model does not pass test for mean of normalized errors!")
print(" Mean of normalized errors times sqrt(n-1) = {}, not within interval +/-{}".format(t, tlim))
valid_code1 = False
else:
valid_code1 = True
s2lim = stats.chi2.ppf(1.-confidence, df=n-1)
if s2 > s2lim:
print("Model does not pass test for sum of squares of normalized errors (chi2)!")
print(" Sum of squares of normalized error = {}, above limit: {}".format(s2, s2lim))
valid_code2 = False
else:
valid_code2 = True
if make_plot:
fig, ax = plt.subplots(1,2, figsize=figsize)
plt.sca(ax[0])
plt.plot(v_est, v, 'o')
tmp = [np.min(v_est), np.max(v_est)]
plt.plot(tmp, tmp, ls='dashed')
plt.xlabel('Estimation Z*(x)')
plt.ylabel('True value Z(x)')
plt.title('Cross plot Z(x) vs Z*(x)')
plt.sca(ax[1])
plt.hist(err, density=True)
plt.xlabel(r'Normalized error $(Z*(x)-Z(x))/\sigma*(x)$')
# plt.show()
return (valid_code1, valid_code2)
# ----------------------------------------------------------------------------
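The two acceptance criteria used above can be stated on their own. The sketch below assumes scipy is available (the module itself uses `scipy.stats`) and mirrors the limits computed in the function: a two-sided central N(0,1) interval for the scaled mean, and a one-sided Chi2 upper limit for the sum of squares:

```python
import numpy as np
from scipy import stats

def loo_tests(err, confidence=0.05):
    # err: normalized leave-one-out errors, (v_est - v) / v_std
    err = np.asarray(err, dtype=float)
    n = len(err)
    # (1) two-sided: sqrt(n-1) * mean(err) within the central
    #     1-confidence interval of N(0,1)
    t = np.sqrt(n - 1) * np.mean(err)
    test1 = bool(np.abs(t) <= stats.norm.ppf(1.0 - 0.5 * confidence))
    # (2) one-sided: sum(err^2) below the (1-confidence)-quantile
    #     of Chi2 with n-1 degrees of freedom
    s2 = np.sum(err ** 2)
    test2 = bool(s2 <= stats.chi2.ppf(1.0 - confidence, df=n - 1))
    return test1, test2
```

Small normalized errors pass both tests; errors far larger than their kriging standard deviations fail both, flagging an inconsistent covariance model.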
# ============================================================================
if __name__ == "__main__":
print("Module 'geone.covModel' example:")
########## 1D case ##########
# Define covariance model
cov_model = CovModel1D(elem=[
('gaussian', {'w':5., 'r':100}), # elementary contribution
('nugget', {'w':1.}) # elementary contribution
], name='model-1D example')
# Plot covariance and variogram functions on same plot
cov_model.plot_model(label='cov', show_ylabel=False)
cov_model.plot_model(vario=True, label='vario', show_ylabel=False)
# plt.ylabel('') # remove label for y-axis
plt.legend()
plt.title(cov_model.name)
# Set custom axes (through the origin)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
#ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.show()
# ########## 2D case ##########
# Define covariance model
cov_model = CovModel2D(elem=[
('gaussian', {'w':8.5, 'r':[150, 40]}), # elementary contribution
('nugget', {'w':0.5}) # elementary contribution
], alpha=-30., name='model-2D example')
# Plot covariance function (in a new 1x2 figure, without suptitle)
cov_model.plot_model(show_suptitle=False)
plt.show()
# Plot variogram function (in a new 1x2 figure)
cov_model.plot_model(vario=True)
plt.show()
########## 3D case ##########
# Define covariance model
cov_model = CovModel3D(elem=[
('gaussian', {'w':8.5, 'r':[40, 20, 10]}), # elementary contribution
('nugget', {'w':0.5}) # elementary contribution
], alpha=-30., beta=-40., gamma=20., name='model-3D example')
# Plot covariance function
# ... volume (3D)
cov_model.plot_model3d_volume()
# ... slice in 3D block
cov_model.plot_model3d_slice()
# ... curves along each main axis
cov_model.plot_model_curves()
plt.show()
# Plot variogram function
# ... volume (3D)
cov_model.plot_model3d_volume(vario=True)
# ... slice in 3D block
cov_model.plot_model3d_slice(vario=True)
# ... curves along each main axis
cov_model.plot_model_curves(vario=True)
plt.show()
a = input("Press enter to continue...")
| 45.655257 | 173 | 0.519065 | 20,605 | 149,384 | 3.705848 | 0.040136 | 0.016553 | 0.003025 | 0.00846 | 0.865451 | 0.835252 | 0.805733 | 0.783693 | 0.769104 | 0.745492 | 0 | 0.029752 | 0.327699 | 149,384 | 3,271 | 174 | 45.669214 | 0.730561 | 0.53169 | 0 | 0.68476 | 0 | 0.002784 | 0.074909 | 0.001937 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045929 | false | 0.001392 | 0.006959 | 0 | 0.10856 | 0.020877 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a6243653ac0ac6312a75575c0412e305c022e3e | 49 | py | Python | regression_test_utils/__init__.py | JivanAmara/test_utils | f077083ebdd8cbcd626ef98994c582cf585fde14 | [
"BSD-3-Clause"
] | null | null | null | regression_test_utils/__init__.py | JivanAmara/test_utils | f077083ebdd8cbcd626ef98994c582cf585fde14 | [
"BSD-3-Clause"
] | null | null | null | regression_test_utils/__init__.py | JivanAmara/test_utils | f077083ebdd8cbcd626ef98994c582cf585fde14 | [
"BSD-3-Clause"
] | null | null | null | from regression_test_utils import log_test_case
| 16.333333 | 47 | 0.897959 | 8 | 49 | 5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 2 | 48 | 24.5 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4364ba3a7bba0131a77cddf3b2530d534f2f1996 | 161 | py | Python | daisy/persistence/__init__.py | rhoadesScholar/daisy | 78cdd2ed0d67647a6602fb53cc952214450f3753 | [
"MIT"
] | null | null | null | daisy/persistence/__init__.py | rhoadesScholar/daisy | 78cdd2ed0d67647a6602fb53cc952214450f3753 | [
"MIT"
] | null | null | null | daisy/persistence/__init__.py | rhoadesScholar/daisy | 78cdd2ed0d67647a6602fb53cc952214450f3753 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from .mongodb_graph_provider import MongoDbGraphProvider # noqa
from .file_graph_provider import FileGraphProvider # noqa
| 40.25 | 63 | 0.875776 | 19 | 161 | 6.947368 | 0.578947 | 0.19697 | 0.287879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099379 | 161 | 3 | 64 | 53.666667 | 0.910345 | 0.055901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
437eda3d9eec4766611cc0bd6dbc3db782258f9c | 6,116 | py | Python | src/lin_my/s3_bullet_checker_eval.py | yifan-you-37/omnihang | c80b699b2cf2cf3422201cc8c3fa572d0e01d5a2 | [
"MIT"
] | 1 | 2022-01-16T20:24:09.000Z | 2022-01-16T20:24:09.000Z | src/lin_my/s3_bullet_checker_eval.py | yifan-you-37/omnihang | c80b699b2cf2cf3422201cc8c3fa572d0e01d5a2 | [
"MIT"
] | null | null | null | src/lin_my/s3_bullet_checker_eval.py | yifan-you-37/omnihang | c80b699b2cf2cf3422201cc8c3fa572d0e01d5a2 | [
"MIT"
] | 1 | 2022-03-16T03:14:37.000Z | 2022-03-16T03:14:37.000Z | import sys
import numpy as np
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(BASE_DIR, '../utils'))
import bullet_client as bc
from coord_helper import *
from data_helper import *
import time
import imageio
def check_one_pose_simple(p, hook_world_pos, hook_bullet_id, object_bullet_id, ori_transl, ori_quat, hook_urdf, object_urdf, fcl_hook_model, fcl_object_model):
failure = False
ori_object_pos = ori_transl
non_contact_count = 0
for i in range(100):
if i == 0:
start_time = time.time()
else:
ssecond = time.time() - start_time
# if ssecond > 60:
# failure = True
# break
#check overlap of model with hook
# if i == 0:
# L1 = p.getContactPoints(hook_bullet_id, object_bullet_id, linkIndexA=0, linkIndexB=-1)
# # print("Length of zero hook",len(L1))
# if len(L1) > 0:
# for tmp in L1:
# if tmp[8] < -0.003:
# failure = True
# print('penetration problem')
# break
# # print("finished geting closesetpoints")
# if failure:
# break
p.stepSimulation()
if 1:
hook_AABB = p.getAABB(hook_bullet_id,0)
object_AABB = p.getAABB(object_bullet_id)
if (hook_AABB[0][0] > object_AABB[1][0] or hook_AABB[1][0] < object_AABB[0][0]) \
or (hook_AABB[0][1] > object_AABB[1][1] or hook_AABB[1][1] < object_AABB[0][1]) \
or (hook_AABB[0][2] > object_AABB[1][2] or hook_AABB[1][2] < object_AABB[0][2]):
failure = True
break
if ((i+1)%1 == 0 and i < 10) or (i%5 == 0 and i >= 10):
object_pos_world, object_quat = p.getBasePositionAndOrientation(object_bullet_id)
object_pos = object_pos_world - hook_world_pos
CP_List = p.getContactPoints(bodyA=hook_bullet_id, bodyB=object_bullet_id, linkIndexA=0,linkIndexB=-1)
if len(CP_List) > 0:
for tmp in CP_List:
if tmp[8] < -0.003:
failure = True
break
if failure:
break
# if object center too low or object too far away
if object_pos_world[2] < 0.2 or np.linalg.norm(object_pos_world) > 5:
failure = True
break
# if touches ground
if check_object_touches_ground(object_bullet_id, p):
failure = True
break
# too much change in pos
if ori_object_pos[2] - object_pos[2] > 0.6:
failure = True
break
ssecond = time.time() - start_time
object_pos_world, object_quat = p.getBasePositionAndOrientation(object_bullet_id)
object_pos = object_pos_world - hook_world_pos
return (not failure), np.append(object_pos, np.array(object_quat))
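The early exit above declares failure as soon as the hook and object AABBs (pybullet's `getAABB` returns a pair `(min_xyz, max_xyz)`) are disjoint on any axis. The check itself is simulator-independent and can be sketched standalone; `aabb_overlap` is an illustrative name, not part of this module:

```python
def aabb_overlap(a, b):
    # a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)), as returned by getAABB.
    # Boxes overlap iff their projections overlap on every axis; touching
    # faces count as overlap, matching the strict comparisons used above.
    return all(a[0][k] <= b[1][k] and b[0][k] <= a[1][k] for k in range(3))
```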
# def check_one_pose_simple(p, hook_world_pos, hook_bullet_id, object_bullet_id, ori_transl, ori_quat, hook_urdf, object_urdf, fcl_hook_model, fcl_object_model):
# failure = False
# ori_object_pos = ori_transl
# non_contact_count = 0
# thres = 0.01
# ratio = 1
# for i in range(100):
# thres = (thres - 0.01) * (100 - i) / 100.0 + 0.01
# #print("step in simualtion",i,"thres",thres)
# if i == 0:
# #time.sleep(3)
# start_time = time.time()
# else:
# ssecond = time.time() - start_time
# #print("time spend",ssecond)
# if ssecond > 60:
# failure = True
# break
# #check overlap of model with hook
# if i == 0:
# #L1 = p.getContactPoints(hook_bullet_id, object_bullet_id, linkIndexA=-1, linkIndexB=-1)
# L1 = p.getContactPoints(hook_bullet_id, object_bullet_id, linkIndexA=0, linkIndexB=-1)
# # print("Length of zero hook",len(L1))
# if len(L1) > 0:
# for tmp in L1:
# if len(tmp) >= 8 and tmp[8] < -thres * ratio:
# failure = True
# print('penetration problem')
# break
# # print("finished geting closesetpoints")
# if failure:
# break
# p.stepSimulation()
# #p.changeDynamics(object_bullet_id, -1, contactStiffness=1.0+i, contactDamping=0.01)
# # print(p, p._client, hook_bullet_id, object_bullet_id)
# if 1:
# hook_AABB = p.getAABB(hook_bullet_id,0)
# object_AABB = p.getAABB(object_bullet_id)
# if (hook_AABB[0][0] > object_AABB[1][0] or hook_AABB[1][0] < object_AABB[0][0]) \
# or (hook_AABB[0][1] > object_AABB[1][1] or hook_AABB[1][1] < object_AABB[0][1]) \
# or (hook_AABB[0][2] > object_AABB[1][2] or hook_AABB[1][2] < object_AABB[0][2]):
# failure = True
# break
# if ((i+1)%1 == 0 and i < 10) or (i%5 == 0 and i >= 10):
# object_pos_world, object_quat = p.getBasePositionAndOrientation(object_bullet_id)
# object_pos = object_pos_world - hook_world_pos
# CP_List = p.getContactPoints(bodyA=hook_bullet_id, bodyB=object_bullet_id, linkIndexA=0,linkIndexB=-1)
# # print("getHookClosest len",len(CP_List))
# if len(CP_List) > 1:
# cp_distance = [tmp[8] for tmp in CP_List]
# # print("cp_distance",np.min(np.array(cp_distance)),np.array(cp_distance).shape)
# # print(np.array(cp_distance))
# failure = True
# if len(cp_distance) < 8 or i < 50:
# failure = False
# else:
# for c in cp_distance:
# if c > 0:
# failure = False
# break
# if failure:
# # print("all negative!!!!")
# break
# count_cp = 0
# if len(CP_List) > 0:
# for tmp in CP_List:
# if len(tmp) >= 8 and tmp[8] < 0.0:
# count_cp += 1.0
# if tmp[8] < -thres * ratio:
# failure = True
# # print('penetration problem')
# break
# if failure:
# break
# # if object center too low or object too far away
# if object_pos_world[2] < 0.2 or np.linalg.norm(object_pos_world) > 5:
# #print('pos problem')
# failure = True
# break
# # if touches ground
# if check_object_touches_ground(object_bullet_id, p):
# #print('touches problem')
# failure = True
# break
# # too much change in pos
# if ori_object_pos[2] - object_pos[2] > 0.6:
# #print('too much change problem')
# failure = True
# break
# # too much change in quat
# # print(i)
# ssecond = time.time() - start_time
# # print("time spend",ssecond)
# object_pos_world, object_quat = p.getBasePositionAndOrientation(object_bullet_id)
# object_pos = object_pos_world - hook_world_pos
# return (not failure), np.append(object_pos, np.array(object_quat))
| 32.359788 | 161 | 0.6483 | 942 | 6,116 | 3.988323 | 0.133758 | 0.057493 | 0.063348 | 0.028746 | 0.811286 | 0.793718 | 0.786798 | 0.773223 | 0.759915 | 0.742348 | 0 | 0.036565 | 0.217462 | 6,116 | 188 | 162 | 32.531915 | 0.748433 | 0.629496 | 0 | 0.326923 | 0 | 0 | 0.003766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019231 | false | 0 | 0.153846 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4380ae372cc630e3b7d475d5f960e23d7ceb4e7d | 338 | py | Python | sublimetext/FSharp/lib/fs.py | mrward/fsharpbinding | 9b454474f0a90af6384645504801a8230176cfc0 | [
"Apache-2.0"
] | null | null | null | sublimetext/FSharp/lib/fs.py | mrward/fsharpbinding | 9b454474f0a90af6384645504801a8230176cfc0 | [
"Apache-2.0"
] | null | null | null | sublimetext/FSharp/lib/fs.py | mrward/fsharpbinding | 9b454474f0a90af6384645504801a8230176cfc0 | [
"Apache-2.0"
] | 2 | 2017-09-11T00:06:08.000Z | 2019-02-10T14:43:06.000Z | def is_fsharp_file(fname):
return any((is_fsharp_code(fname),
is_fsharp_project(fname)))
def is_fsharp_code(fname):
return fname.endswith(('.fs', '.fsx', '.fsi'))
def is_fsharp_script(fname):
return fname.endswith(('.fsscript', '.fsx'))
def is_fsharp_project(fname):
return fname.endswith('.fsproj')
| 21.125 | 50 | 0.66568 | 45 | 338 | 4.733333 | 0.355556 | 0.225352 | 0.206573 | 0.338028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171598 | 338 | 15 | 51 | 22.533333 | 0.760714 | 0 | 0 | 0 | 0 | 0 | 0.091716 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0 | 0 | 0.444444 | 0.888889 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
43d6d9055771b9ba5671836f1e0950915d077673 | 30,037 | py | Python | toolium/test/test_config_driver.py | lmcalvo/toolium | 98025ccbb0726bb009968779971e92166f8cc6ea | [
"Apache-2.0"
] | 94 | 2016-02-15T11:32:36.000Z | 2022-02-14T12:31:42.000Z | toolium/test/test_config_driver.py | lmcalvo/toolium | 98025ccbb0726bb009968779971e92166f8cc6ea | [
"Apache-2.0"
] | 225 | 2016-03-18T16:14:21.000Z | 2022-03-30T10:21:26.000Z | toolium/test/test_config_driver.py | lmcalvo/toolium | 98025ccbb0726bb009968779971e92166f8cc6ea | [
"Apache-2.0"
] | 65 | 2016-05-12T13:23:56.000Z | 2022-02-16T08:33:18.000Z | # -*- coding: utf-8 -*-
u"""
Copyright 2016 Telefónica Investigación y Desarrollo, S.A.U.
This file is part of Toolium.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import mock
import pytest
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.options import Options
from toolium.config_driver import ConfigDriver
from toolium.config_parser import ExtendedConfigParser
from toolium.driver_wrappers_pool import DriverWrappersPool
@pytest.fixture
def config():
config_parser = ExtendedConfigParser()
config_parser.add_section('Server')
config_parser.add_section('Driver')
return config_parser
@pytest.fixture
def utils():
utils = mock.MagicMock()
utils.get_driver_name.return_value = 'firefox'
return utils
def test_create_driver_local_not_configured(config, utils):
config.set('Driver', 'type', 'firefox')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver = lambda: 'local driver mock'
config_driver._create_remote_driver = lambda: 'remote driver mock'
driver = config_driver.create_driver()
assert driver == 'local driver mock'
def test_create_driver_local(config, utils):
config.set('Server', 'enabled', 'false')
config.set('Driver', 'type', 'firefox')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver = lambda: 'local driver mock'
config_driver._create_remote_driver = lambda: 'remote driver mock'
driver = config_driver.create_driver()
assert driver == 'local driver mock'
def test_create_driver_remote(config, utils):
config.set('Server', 'enabled', 'true')
config.set('Driver', 'type', 'firefox')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver = lambda: 'local driver mock'
config_driver._create_remote_driver = lambda: 'remote driver mock'
driver = config_driver.create_driver()
assert driver == 'remote driver mock'
@mock.patch('toolium.config_driver.FirefoxOptions')
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_firefox(webdriver_mock, options, config, utils):
config.set('Driver', 'type', 'firefox')
config.add_section('Capabilities')
config.set('Capabilities', 'marionette', 'false')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_firefox_profile = lambda: 'firefox profile'
DriverWrappersPool.output_directory = ''
config_driver._create_local_driver()
expected_capabilities = DesiredCapabilities.FIREFOX.copy()
expected_capabilities['marionette'] = False
webdriver_mock.Firefox.assert_called_once_with(capabilities=expected_capabilities,
firefox_profile='firefox profile', executable_path=None,
firefox_options=options(), log_path='geckodriver.log')
@mock.patch('toolium.config_driver.FirefoxOptions')
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_firefox_gecko(webdriver_mock, options, config, utils):
config.set('Driver', 'type', 'firefox')
config.add_section('Capabilities')
config.set('Capabilities', 'marionette', 'true')
config.set('Driver', 'gecko_driver_path', '/tmp/driver')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_firefox_profile = lambda: 'firefox profile'
DriverWrappersPool.output_directory = ''
config_driver._create_local_driver()
expected_capabilities = DesiredCapabilities.FIREFOX.copy()
expected_capabilities['marionette'] = True
webdriver_mock.Firefox.assert_called_once_with(capabilities=expected_capabilities,
firefox_profile='firefox profile', executable_path='/tmp/driver',
firefox_options=options(), log_path='geckodriver.log')
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_firefox_binary(webdriver_mock, config, utils):
config.set('Driver', 'type', 'firefox')
config.add_section('Capabilities')
config.set('Capabilities', 'marionette', 'false')
config.add_section('Firefox')
config.set('Firefox', 'binary', '/tmp/firefox')
utils.get_driver_name.return_value = 'firefox'
config_driver = ConfigDriver(config, utils)
config_driver._create_firefox_profile = lambda: 'firefox profile'
DriverWrappersPool.output_directory = ''
config_driver._create_local_driver()
# Check that firefox options contain the firefox binary
args, kwargs = webdriver_mock.Firefox.call_args
firefox_options = kwargs['firefox_options']
assert isinstance(firefox_options, Options)
if isinstance(firefox_options.binary, str):
assert firefox_options.binary == '/tmp/firefox' # Selenium 2
else:
assert firefox_options.binary._start_cmd == '/tmp/firefox' # Selenium 3
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_chrome(webdriver_mock, config, utils):
config.set('Driver', 'type', 'chrome')
config.set('Driver', 'chrome_driver_path', '/tmp/driver')
utils.get_driver_name.return_value = 'chrome'
config_driver = ConfigDriver(config, utils)
config_driver._create_chrome_options = lambda: 'chrome options'
config_driver._add_chrome_options_to_capabilities = lambda x: None
config_driver._create_local_driver()
webdriver_mock.Chrome.assert_called_once_with('/tmp/driver', desired_capabilities=DesiredCapabilities.CHROME)
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_chrome_multiple_options(webdriver_mock, config, utils):
# From goog:chromeOptions in Capabilities section
options_from_capabilities = {
'excludeSwitches': ['enable-automation'], 'useAutomationExtension': False,
'prefs': {'download.default_directory': '/this_value_will_be_overwritten',
'download.prompt_for_download': False}
}
# From ChromePreferences, ChromeMobileEmulation, ChromeArguments and Chrome sections
options_from_sections = {
'prefs': {'download.default_directory': '/tmp'},
'mobileEmulation': {'deviceName': 'Google Nexus 5'},
'args': ['user-data-dir=C:\\Users\\USERNAME\\AppData\\Local\\Google\\Chrome\\User Data'],
'binary': '/usr/local/chrome_beta/chrome'
}
# Merged chrome options
final_chrome_options = {
'excludeSwitches': ['enable-automation'], 'useAutomationExtension': False,
'prefs': {'download.default_directory': '/tmp', 'download.prompt_for_download': False},
'mobileEmulation': {'deviceName': 'Google Nexus 5'},
'args': ['user-data-dir=C:\\Users\\USERNAME\\AppData\\Local\\Google\\Chrome\\User Data'],
'binary': '/usr/local/chrome_beta/chrome'
}
config.set('Driver', 'type', 'chrome')
config.set('Driver', 'chrome_driver_path', '/tmp/driver')
config.add_section('Capabilities')
config.set('Capabilities', 'goog:chromeOptions', str(options_from_capabilities))
utils.get_driver_name.return_value = 'chrome'
config_driver = ConfigDriver(config, utils)
# Chrome options mock
chrome_options = mock.MagicMock()
chrome_options.to_capabilities.return_value = {'goog:chromeOptions': options_from_sections}
config_driver._create_chrome_options = mock.MagicMock(return_value=chrome_options)
config_driver._create_local_driver()
capabilities = DesiredCapabilities.CHROME.copy()
capabilities['goog:chromeOptions'] = final_chrome_options
webdriver_mock.Chrome.assert_called_once_with('/tmp/driver', desired_capabilities=capabilities)
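The merge this test expects — options from the config sections override capability-derived ones key by key, except `prefs`, whose sub-dict is merged rather than replaced — can be sketched as a hypothetical helper (`merge_chrome_options` is illustrative, not Toolium's actual API):

```python
def merge_chrome_options(from_capabilities, from_sections):
    # Section-derived options take precedence; the 'prefs' dict is merged
    # entry by entry instead of being replaced wholesale.
    merged = dict(from_capabilities)
    for key, value in from_sections.items():
        if key == 'prefs':
            prefs = dict(merged.get('prefs', {}))
            prefs.update(value)
            merged['prefs'] = prefs
        else:
            merged[key] = value
    return merged
```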
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_safari(webdriver_mock, config, utils):
config.set('Driver', 'type', 'safari')
utils.get_driver_name.return_value = 'safari'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver()
webdriver_mock.Safari.assert_called_once_with(desired_capabilities=DesiredCapabilities.SAFARI)
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_opera(webdriver_mock, config, utils):
config.set('Driver', 'type', 'opera')
config.set('Driver', 'opera_driver_path', '/tmp/driver')
utils.get_driver_name.return_value = 'opera'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver()
webdriver_mock.Opera.assert_called_once_with(desired_capabilities=DesiredCapabilities.OPERA,
executable_path='/tmp/driver')
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_iexplore(webdriver_mock, config, utils):
config.set('Driver', 'type', 'iexplore')
config.set('Driver', 'explorer_driver_path', '/tmp/driver')
utils.get_driver_name.return_value = 'iexplore'
config_driver = ConfigDriver(config, utils)
config_driver._create_local_driver()
webdriver_mock.Ie.assert_called_once_with('/tmp/driver', capabilities=DesiredCapabilities.INTERNETEXPLORER)

@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_edge(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'edge')
    config.set('Driver', 'edge_driver_path', '/tmp/driver')
    utils.get_driver_name.return_value = 'edge'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_local_driver()
    webdriver_mock.Edge.assert_called_once_with('/tmp/driver', capabilities=DesiredCapabilities.EDGE)


@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_phantomjs(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'phantomjs')
    config.set('Driver', 'phantomjs_driver_path', '/tmp/driver')
    utils.get_driver_name.return_value = 'phantomjs'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_local_driver()
    webdriver_mock.PhantomJS.assert_called_once_with(desired_capabilities=DesiredCapabilities.PHANTOMJS,
                                                     executable_path='/tmp/driver')


def test_create_local_driver_android(config, utils):
    config.set('Driver', 'type', 'android')
    utils.get_driver_name.return_value = 'android'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver = lambda: 'remote driver mock'
    driver = config_driver._create_local_driver()
    assert driver == 'remote driver mock'


def test_create_local_driver_ios(config, utils):
    config.set('Driver', 'type', 'ios')
    utils.get_driver_name.return_value = 'ios'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver = lambda: 'remote driver mock'
    driver = config_driver._create_local_driver()
    assert driver == 'remote driver mock'


def test_create_local_driver_iphone(config, utils):
    config.set('Driver', 'type', 'iphone')
    utils.get_driver_name.return_value = 'iphone'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver = lambda: 'remote driver mock'
    driver = config_driver._create_local_driver()
    assert driver == 'remote driver mock'


def test_create_local_driver_unknown_driver(config, utils):
    config.set('Driver', 'type', 'unknown')
    utils.get_driver_name.return_value = 'unknown'
    config_driver = ConfigDriver(config, utils)
    with pytest.raises(Exception) as excinfo:
        config_driver._create_local_driver()
    assert 'Unknown driver unknown' == str(excinfo.value)

@mock.patch('toolium.config_driver.FirefoxOptions')
@mock.patch('toolium.config_driver.webdriver')
def test_create_local_driver_capabilities(webdriver_mock, options, config, utils):
    config.set('Driver', 'type', 'firefox')
    config.add_section('Capabilities')
    config.set('Capabilities', 'marionette', 'false')
    config.set('Capabilities', 'version', '45')
    utils.get_driver_name.return_value = 'firefox'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_firefox_profile = lambda: 'firefox profile'
    DriverWrappersPool.output_directory = ''
    config_driver._create_local_driver()
    expected_capabilities = DesiredCapabilities.FIREFOX.copy()
    expected_capabilities['marionette'] = False
    expected_capabilities['version'] = '45'
    webdriver_mock.Firefox.assert_called_once_with(capabilities=expected_capabilities,
                                                   firefox_profile='firefox profile', executable_path=None,
                                                   firefox_options=options(), log_path='geckodriver.log')

@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_firefox(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'firefox')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'firefox'
    config_driver = ConfigDriver(config, utils)

    # Firefox profile mock
    class ProfileMock(object):
        encoded = 'encoded profile'

    config_driver._create_firefox_profile = mock.MagicMock(return_value=ProfileMock())
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.FIREFOX.copy()
    capabilities['firefox_profile'] = 'encoded profile'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_chrome(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'chrome')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'chrome'
    config_driver = ConfigDriver(config, utils)
    # Chrome options mock
    chrome_options = mock.MagicMock()
    chrome_options.to_capabilities.return_value = {'goog:chromeOptions': 'chrome options'}
    config_driver._create_chrome_options = mock.MagicMock(return_value=chrome_options)
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.CHROME.copy()
    capabilities['goog:chromeOptions'] = 'chrome options'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_chrome_multiple_options(webdriver_mock, config, utils):
    # From goog:chromeOptions in Capabilities section
    options_from_capabilities = {
        'excludeSwitches': ['enable-automation'], 'useAutomationExtension': False,
        'prefs': {'download.default_directory': '/this_value_will_be_overwritten',
                  'download.prompt_for_download': False}
    }
    # From ChromePreferences, ChromeMobileEmulation, ChromeArguments and Chrome sections
    options_from_sections = {
        'prefs': {'download.default_directory': '/tmp'},
        'mobileEmulation': {'deviceName': 'Google Nexus 5'},
        'args': ['user-data-dir=C:\\Users\\USERNAME\\AppData\\Local\\Google\\Chrome\\User Data'],
        'binary': '/usr/local/chrome_beta/chrome'
    }
    # Merged chrome options
    final_chrome_options = {
        'excludeSwitches': ['enable-automation'], 'useAutomationExtension': False,
        'prefs': {'download.default_directory': '/tmp', 'download.prompt_for_download': False},
        'mobileEmulation': {'deviceName': 'Google Nexus 5'},
        'args': ['user-data-dir=C:\\Users\\USERNAME\\AppData\\Local\\Google\\Chrome\\User Data'],
        'binary': '/usr/local/chrome_beta/chrome'
    }
    config.set('Driver', 'type', 'chrome')
    config.add_section('Capabilities')
    config.set('Capabilities', 'goog:chromeOptions', str(options_from_capabilities))
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'chrome'
    config_driver = ConfigDriver(config, utils)
    # Chrome options mock
    chrome_options = mock.MagicMock()
    chrome_options.to_capabilities.return_value = {'goog:chromeOptions': options_from_sections}
    config_driver._create_chrome_options = mock.MagicMock(return_value=chrome_options)
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.CHROME.copy()
    capabilities['goog:chromeOptions'] = final_chrome_options
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_chrome_old_selenium(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'chrome')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'chrome'
    config_driver = ConfigDriver(config, utils)
    # Chrome options mock
    chrome_options = mock.MagicMock()
    chrome_options.to_capabilities.return_value = {'chromeOptions': 'chrome options'}
    config_driver._create_chrome_options = mock.MagicMock(return_value=chrome_options)
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.CHROME.copy()
    capabilities['chromeOptions'] = 'chrome options'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)

@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_safari(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'safari')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'safari'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=DesiredCapabilities.SAFARI)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_opera(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'opera')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'opera'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    # copy() avoids mutating the shared DesiredCapabilities.OPERA dict across tests
    capabilities = DesiredCapabilities.OPERA.copy()
    capabilities['opera.autostart'] = True
    capabilities['opera.arguments'] = '-fullscreen'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)

@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_iexplore(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'iexplore')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'iexplore'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=DesiredCapabilities.INTERNETEXPLORER)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_edge(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'edge')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'edge'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=DesiredCapabilities.EDGE)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_phantomjs(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'phantomjs')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'phantomjs'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=DesiredCapabilities.PHANTOMJS)

@mock.patch('toolium.config_driver.appiumdriver')
def test_create_remote_driver_android(appiumdriver_mock, config, utils):
    config.set('Driver', 'type', 'android')
    config.add_section('AppiumCapabilities')
    config.set('AppiumCapabilities', 'automationName', 'Appium')
    config.set('AppiumCapabilities', 'platformName', 'Android')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'android'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    capabilities = {'automationName': 'Appium', 'platformName': 'Android'}
    appiumdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                     desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.appiumdriver')
def test_create_remote_driver_ios(appiumdriver_mock, config, utils):
    config.set('Driver', 'type', 'ios')
    config.add_section('AppiumCapabilities')
    config.set('AppiumCapabilities', 'automationName', 'Appium')
    config.set('AppiumCapabilities', 'platformName', 'iOS')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'ios'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    capabilities = {'automationName': 'Appium', 'platformName': 'iOS'}
    appiumdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                     desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.appiumdriver')
def test_create_remote_driver_iphone(appiumdriver_mock, config):
    config.set('Driver', 'type', 'iphone')
    config.add_section('AppiumCapabilities')
    config.set('AppiumCapabilities', 'automationName', 'Appium')
    config.set('AppiumCapabilities', 'platformName', 'iOS')
    server_url = 'http://10.20.30.40:5555'
    utils = mock.MagicMock()
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'iphone'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    capabilities = {'automationName': 'Appium', 'platformName': 'iOS'}
    appiumdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                     desired_capabilities=capabilities)

@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_version_platform(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'iexplore-11-on-WIN10')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'iexplore'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    # copy() avoids mutating the shared DesiredCapabilities dict across tests
    capabilities = DesiredCapabilities.INTERNETEXPLORER.copy()
    capabilities['version'] = '11'
    capabilities['platform'] = 'WIN10'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)

@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_version(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'iexplore-11')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'iexplore'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.INTERNETEXPLORER.copy()
    capabilities['version'] = '11'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)


@mock.patch('toolium.config_driver.webdriver')
def test_create_remote_driver_capabilities(webdriver_mock, config, utils):
    config.set('Driver', 'type', 'iexplore-11')
    config.add_section('Capabilities')
    config.set('Capabilities', 'version', '11')
    server_url = 'http://10.20.30.40:5555'
    utils.get_server_url.return_value = server_url
    utils.get_driver_name.return_value = 'iexplore'
    config_driver = ConfigDriver(config, utils)
    config_driver._create_remote_driver()
    capabilities = DesiredCapabilities.INTERNETEXPLORER.copy()
    capabilities['version'] = '11'
    webdriver_mock.Remote.assert_called_once_with(command_executor='%s/wd/hub' % server_url,
                                                  desired_capabilities=capabilities)

def test_convert_property_type_true(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = 'True'
    assert config_driver._convert_property_type(value) is True


def test_convert_property_type_false(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = 'False'
    assert config_driver._convert_property_type(value) is False


def test_convert_property_type_dict(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = "{'a': 5}"
    assert config_driver._convert_property_type(value) == {'a': 5}


def test_convert_property_type_int(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = '5'
    assert config_driver._convert_property_type(value) == 5


def test_convert_property_type_str(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = 'string'
    assert config_driver._convert_property_type(value) == value


def test_convert_property_type_list(config, utils):
    config_driver = ConfigDriver(config, utils)
    value = "[1, 2, 3]"
    assert config_driver._convert_property_type(value) == [1, 2, 3]
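# The six tests above pin down the behaviour of _convert_property_type: config strings
# become bools, ints, dicts or lists, and anything unparseable stays a string. A minimal
# sketch of such a converter (hypothetical -- toolium's actual implementation may differ)
# can lean on ast.literal_eval:

```python
import ast


def convert_property_type(value):
    """Convert a config string to bool, int, dict or list; fall back to str.

    Hypothetical sketch of the behaviour exercised by the tests above.
    """
    if value in ('true', 'True'):
        return True
    if value in ('false', 'False'):
        return False
    try:
        # Safely parses Python literals: numbers, dicts, lists, tuples, ...
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        # Not a literal: keep the raw string
        return value
```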

@mock.patch('toolium.config_driver.webdriver')
def test_create_firefox_profile(webdriver_mock, config, utils):
    config.add_section('Firefox')
    config.set('Firefox', 'profile', '/tmp')
    config.add_section('FirefoxPreferences')
    config.set('FirefoxPreferences', 'browser.download.folderList', '2')
    config.add_section('FirefoxExtensions')
    config.set('FirefoxExtensions', 'firebug', 'resources/firebug-3.0.0-beta.3.xpi')
    config_driver = ConfigDriver(config, utils)
    config_driver._create_firefox_profile()
    webdriver_mock.FirefoxProfile.assert_called_once_with(profile_directory='/tmp')
    webdriver_mock.FirefoxProfile().set_preference.assert_called_once_with('browser.download.folderList', 2)
    webdriver_mock.FirefoxProfile().update_preferences.assert_called_once_with()
    webdriver_mock.FirefoxProfile().add_extension.assert_called_once_with('resources/firebug-3.0.0-beta.3.xpi')


def test_add_firefox_arguments(config, utils):
    config.add_section('FirefoxArguments')
    config.set('FirefoxArguments', '-private', '')
    config_driver = ConfigDriver(config, utils)
    firefox_options = Options()
    config_driver._add_firefox_arguments(firefox_options)
    assert firefox_options.arguments == ['-private']


@mock.patch('toolium.config_driver.webdriver')
def test_create_chrome_options(webdriver_mock, config, utils):
    config.add_section('ChromePreferences')
    config.set('ChromePreferences', 'download.default_directory', '/tmp')
    config.add_section('ChromeMobileEmulation')
    config.set('ChromeMobileEmulation', 'deviceName', 'Google Nexus 5')
    config.add_section('ChromeArguments')
    config.set('ChromeArguments', 'lang', 'es')
    config_driver = ConfigDriver(config, utils)
    config_driver._create_chrome_options()
    webdriver_mock.ChromeOptions.assert_called_once_with()
    webdriver_mock.ChromeOptions().add_experimental_option.assert_has_calls(
        [mock.call('prefs', {'download.default_directory': '/tmp'}),
         mock.call('mobileEmulation', {'deviceName': 'Google Nexus 5'})]
    )
    webdriver_mock.ChromeOptions().add_argument.assert_called_once_with('lang=es')


@mock.patch('toolium.config_driver.webdriver')
def test_create_chrome_options_headless(webdriver_mock, config, utils):
    config.set('Driver', 'headless', 'true')
    config_driver = ConfigDriver(config, utils)
    config_driver._create_chrome_options()
    webdriver_mock.ChromeOptions.assert_called_once_with()
    if os.name == 'nt':
        webdriver_mock.ChromeOptions().add_argument.assert_has_calls([mock.call('--headless'),
                                                                      mock.call('--disable-gpu')])
    else:
        webdriver_mock.ChromeOptions().add_argument.assert_called_once_with('--headless')


# ---------------------------------------------------------------------------
# File: tests/test_all.py (repo: thulio/dict-to-csv, MIT license)
# ---------------------------------------------------------------------------
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

import unittest

from dict_to_csv import extract_header, transform


class TestExtractKeys(unittest.TestCase):
    def test_simple_data(self):
        data = [
            {"key_1": "value 1", "key_2": "value 2"},
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(extract_header(data), ["key_1", "key_2"])

    def test_nested_data(self):
        data = [
            {
                "customer": {
                    "name": "John",
                    "address": {"street": "Street 1", "number": "42"},
                },
                "product": {"sku": "1", "price": 9.99},
            },
            {
                "customer": {
                    "name": "Bob",
                    "address": {"street": "Street 2", "number": "314"},
                },
                "product": {"sku": "2", "price": 15.00},
            },
        ]
        self.assertEqual(
            extract_header(data),
            [
                "customer.address.number",
                "customer.address.street",
                "customer.name",
                "product.price",
                "product.sku",
            ],
        )

    def test_interrupt_if_keys_dont_change(self):
        data = [{"key": "value"} for _ in range(100)]
        self.assertEqual(extract_header(data), ["key"])
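# test_nested_data above implies that extract_header flattens nested dicts into sorted,
# dot-separated key paths. A sketch of that flattening (an assumption about the library's
# internals, not its actual code):

```python
def flatten_keys(d, prefix=""):
    """Recursively collect dot-separated key paths from a nested dict."""
    keys = []
    for key, value in d.items():
        path = prefix + "." + key if prefix else key
        if isinstance(value, dict):
            keys.extend(flatten_keys(value, path))
        else:
            keys.append(path)
    return keys


# The sorted union of paths over all rows gives the CSV header
rows = [{"customer": {"name": "John", "address": {"number": "42"}}, "product": {"sku": "1"}}]
header = sorted({k for row in rows for k in flatten_keys(row)})
```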

class TestTransform(unittest.TestCase):
    def test_simple_data(self):
        data = [
            {"key_1": "value 1", "key_2": "value 2"},
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(
            transform(data), "key_1,key_2\nvalue 1,value 2\nvalue 3,value 4\n"
        )

    def test_non_ascii_data(self):
        data = [{"ã": "joão", "key_2": "value 2"}, {"ã": "value 3", "key_2": "value 4"}]
        self.assertEqual(transform(data), "key_2,ã\nvalue 2,joão\nvalue 4,value 3\n")

    def test_nested_data(self):
        data = [
            {
                "customer": {
                    "name": "John",
                    "address": {"street": "Street 1", "number": "42"},
                },
                "product": {"sku": "1", "price": 9.99},
            },
            {
                "customer": {
                    "name": "Bob",
                    "address": {"street": "Street 2", "number": "314"},
                },
                "product": {"sku": "2", "price": 15.00},
            },
        ]
        self.assertEqual(
            transform(data),
            "customer.address.number,customer.address.street,customer.name,product.price,product.sku\n42,Street 1,John,9.99,1\n314,Street 2,Bob,15.0,2\n",
        )

    def test_simple_data_missing_key_first(self):
        data = [
            {
                "key_1": "value 1",
            },
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(transform(data), "key_1,key_2\nvalue 1,\nvalue 3,value 4\n")

    def test_simple_data_missing_key_other(self):
        data = [{"key_1": "value 1", "key_2": "value 2"}, {"key_2": "value 4"}]
        self.assertEqual(transform(data), "key_1,key_2\nvalue 1,value 2\n,value 4\n")

    def test_nested_data_missing_key_first(self):
        data = [
            {
                "customer": {
                    "name": "John",
                    "address": {"street": "Street 1", "number": "42"},
                },
                "product": {"price": 9.99},
            },
            {
                "customer": {
                    "name": "Bob",
                    "address": {"street": "Street 2", "number": "314"},
                },
                "product": {"sku": "2", "price": 15.00},
            },
        ]
        self.assertEqual(
            transform(data),
            "customer.address.number,customer.address.street,customer.name,product.price,product.sku\n42,Street 1,John,9.99,\n314,Street 2,Bob,15.0,2\n",
        )

    def test_nested_data_missing_key_other(self):
        data = [
            {
                "customer": {
                    "name": "John",
                    "address": {"street": "Street 1", "number": "42"},
                },
                "product": {"sku": "1", "price": 9.99},
            },
            {
                "customer": {
                    "name": "Bob",
                    "address": {"street": "Street 2", "number": "314"},
                },
                "product": {"price": 15.00},
            },
        ]
        self.assertEqual(
            transform(data),
            "customer.address.number,customer.address.street,customer.name,product.price,product.sku\n42,Street 1,John,9.99,1\n314,Street 2,Bob,15.0,\n",
        )

    def test_simple_data_without_header(self):
        data = [
            {"key_1": "value 1", "key_2": "value 2"},
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(
            transform(data, include_headers=False), "value 1,value 2\nvalue 3,value 4\n"
        )

    def test_use_given_keys(self):
        data = [
            {"key_1": "value 1", "key_2": "value 2"},
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(transform(data, keys=["key_1"]), "key_1\nvalue 1\nvalue 3\n")

    def test_use_invalid_given_keys(self):
        data = [
            {"key_1": "value 1", "key_2": "value 2"},
            {"key_1": "value 3", "key_2": "value 4"},
        ]
        self.assertEqual(transform(data, keys=["key_9"]), "key_9\n\n\n")


# ---------------------------------------------------------------------------
# File: melp/clustering/old/plots.py (repo: maximilianKoeper/melp, MIT license)
# ---------------------------------------------------------------------------
import ROOT
import numpy as np

import melp
from melp import Detector
from melp.clustering.misc import *
import melp.clustering.spatial_cluster as sclump
import melp.clustering.three_frame_cluster as clump_3
import melp.clustering.time_cluster as tclump


# -------------------------------------------------
def compare_to_primary(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector: melp.Detector, mask_type, number_of_frames=None, rec_type=None, cluster_type=None):
    frac_corr_frame = []
    frac_corr_clusters_frame = []
    frac_uncorr_frame = []
    total_hits_counter = []
    cluster_hits_counter = 0
    tot_corr_counter = 0
    tot_uncorr_counter = 0

    # set frame number
    if number_of_frames is None:
        frames_to_analyze = ttree_mu3e.GetEntries()
    else:
        frames_to_analyze = number_of_frames

    for frame in range(frames_to_analyze):
        ttree_mu3e.GetEntry(frame)
        # printing status info
        if frame % 5000 == 0:
            print("Progress: ", np.round(frame / frames_to_analyze * 100), " %", "of ", frames_to_analyze, " frames", end='\r')
        # count total hits
        total_hits_frame = ttree_mu3e.Ntilehit  # len(ttree_mu3e.tilehit_tile)
        total_hits_counter.append(total_hits_frame)
        # set counters
        corr_counter = 0
        uncorr_counter = 0
        # get primaries
        primaries_frame = get_mc_primary_for_hit_frame(ttree_mu3e)
        primaries_frame_arr = []
        for key in primaries_frame.keys():
            primaries_frame_arr.append([key, primaries_frame[key]])  # [hit tile, primary for tile hit]
        # get clusters
        clusters_with_primaries = sclump.build_cluster_with_truth_primary(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, frame, mask_type, rec_type, cluster_type)
        cluster_primaries_arr = []
        cluster_master_primaries_arr = []
        for key in clusters_with_primaries.keys():
            # cluster_master_primaries_arr.append(key)
            cluster_master_primaries_arr.append(clusters_with_primaries[key][0])
            cluster_primaries_arr.append(clusters_with_primaries[key])
        # count hits in clusters
        cluster_hits_counter_tmp = 0
        for key in clusters_with_primaries.keys():
            cluster_hits_counter_tmp += len(clusters_with_primaries[key])
        cluster_hits_counter += cluster_hits_counter_tmp
        # comparison of hits within each cluster
        for j in range(len(cluster_primaries_arr)):  # loop over all clusters in frame
            for k in range(len(cluster_primaries_arr[j])):  # loop over all primaries in cluster
                if cluster_primaries_arr[j][k] == cluster_master_primaries_arr[j]:  # primary matches the cluster master's primary
                    corr_counter += 1
                else:
                    uncorr_counter += 1
        # comparison of different clusters
        new_corr_cluster_flags = []
        old_corr_cluster_flags = []
        checked_primaries = []
        for i in range(len(cluster_primaries_arr)):
            master_primary = cluster_primaries_arr[i][0]
            if master_primary not in checked_primaries:
                number_of_primaries = cluster_primaries_arr[i].count(master_primary)
                checked_primaries.append(master_primary)
            else:
                continue
            for j in range(len(cluster_primaries_arr)):
                number_of_primaries_comp = 0
                if j != i and j not in new_corr_cluster_flags:
                    for k in range(len(cluster_primaries_arr[j])):
                        if cluster_primaries_arr[j][k] == master_primary:
                            number_of_primaries_comp += 1
                    if number_of_primaries_comp == 0:
                        # master primary of cluster i isn't found in cluster j: do nothing
                        continue
                    elif number_of_primaries_comp <= number_of_primaries:
                        # cluster i holds at least as many correctly identified constituents,
                        # so count cluster j's matches as wrongly identified
                        # TODO: maybe split into < and = and decide for the correct cluster either
                        # via the smallest timestamp or by the amount of wrong hits in the cluster
                        corr_counter -= number_of_primaries_comp
                        uncorr_counter += number_of_primaries_comp
                    else:
                        # cluster j has more correct primaries: flag it as the correct cluster
                        # and move cluster i's matches to the incorrect counter
                        corr_counter -= number_of_primaries
                        uncorr_counter += number_of_primaries
                        new_corr_cluster_flags.append(j)
                        old_corr_cluster_flags.append(i)
        # second pass over the clusters that lost their flag in the first pass
        checked_primaries_2 = []
        old_corr_cluster_flags_check = []
        for i in old_corr_cluster_flags:
            master_primary = cluster_primaries_arr[i][0]
            if master_primary not in checked_primaries_2:
                number_of_primaries = cluster_primaries_arr[i].count(master_primary)
                checked_primaries_2.append(master_primary)
            else:
                continue
            for j in range(len(cluster_primaries_arr)):
                number_of_primaries_comp = 0
                if j != i and j not in new_corr_cluster_flags:
                    for k in range(len(cluster_primaries_arr[j])):
                        if cluster_primaries_arr[j][k] == master_primary:
                            number_of_primaries_comp += 1
                    if number_of_primaries_comp == 0:
                        # master primary of cluster i isn't found in cluster j: do nothing
                        continue
                    elif number_of_primaries_comp <= number_of_primaries:
                        corr_counter -= number_of_primaries_comp
                        uncorr_counter += number_of_primaries_comp
                    else:
                        corr_counter -= number_of_primaries
                        uncorr_counter += number_of_primaries
                        new_corr_cluster_flags.append(j)
                        old_corr_cluster_flags.append(i)
                        old_corr_cluster_flags_check.append(i)
        if len(old_corr_cluster_flags_check) != 0:
            print("Found a sneaky bastard")
        # add to total corr and uncorr counters
        tot_corr_counter += corr_counter
        tot_uncorr_counter += uncorr_counter
        if cluster_hits_counter_tmp != 0:
            frac_corr_clusters_frame.append(corr_counter / cluster_hits_counter_tmp)
            frac_uncorr_frame.append(uncorr_counter / cluster_hits_counter_tmp)
        if total_hits_frame != 0:
            frac_corr_frame.append(corr_counter / total_hits_frame)

    print("Progress: 100 %", "of ", frames_to_analyze, " frames")
    print("Number of analyzed frames: ", len(total_hits_counter), "Number of correct counter fractions: ", len(frac_corr_frame))
    print("Total #hits in frames/#hits in clusters = ", np.sum(total_hits_counter) / cluster_hits_counter)
    print("Correctly associated out of all hits: ", tot_corr_counter / (np.sum(total_hits_counter) / 100), "%")
    print("Correctly associated out of all hits in clusters: ", tot_corr_counter / (cluster_hits_counter / 100), "%")
    print("Incorrectly associated out of all hits: ", tot_uncorr_counter / (np.sum(total_hits_counter) / 100), "%")
    print("Incorrectly associated out of all hits in clusters: ", tot_uncorr_counter / (cluster_hits_counter / 100), "%")
    return frac_corr_frame, frac_corr_clusters_frame, frac_uncorr_frame, tot_corr_counter
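# The two-pass bookkeeping above credits, for each truth primary, only the cluster holding
# the most matching hits, and books every other hit as wrongly associated. A simplified
# one-pass illustration of that idea (an assumed distillation for clarity, not melp's
# actual code -- it skips the second-pass re-check of demoted clusters):

```python
def score_clusters(cluster_primaries):
    """Count (correct, wrong) hit associations for a list of clusters.

    Each cluster is a list of truth primaries; its first entry is the
    cluster master's primary. Per primary, only the single best cluster
    keeps credit for its matching hits.
    """
    per_cluster = []
    best = {}  # primary -> most matching hits found in any single cluster
    for hits in cluster_primaries:
        master = hits[0]
        n_match = hits.count(master)
        per_cluster.append((master, n_match, len(hits)))
        best[master] = max(best.get(master, 0), n_match)

    correct = 0
    wrong = 0
    credited = set()  # each primary is credited to at most one cluster
    for master, n_match, size in per_cluster:
        if n_match == best[master] and master not in credited:
            credited.add(master)
            correct += n_match
            wrong += size - n_match
        else:
            wrong += size
    return correct, wrong
```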
#----------------------------------------------------
# compares the tids of hits in a cluster to that of the cluster master. Returns the fractions
# (correctly associated hits)/(all hits in clusters) and (incorrectly associated hits)/(all hits in clusters)
def compare_to_tid(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector: melp.Detector, mask_type, number_of_frames = None, rec_type = None):
frac_corr_frame = []
frac_uncorr_frame = []
frac_corr_clusters_frame = []
total_hits_counter = 0
cluster_hits_counter = 0
tot_corr_counter = 0
tot_uncorr_counter = 0
#set frame number
if number_of_frames == None:
frames_to_analyze = ttree_mu3e.GetEntries()
else:
frames_to_analyze = number_of_frames
for frame in range(frames_to_analyze):
ttree_mu3e.GetEntry(frame)
#Printing status info
if frame % 5000 == 0:
print("Progress: ", np.round(frame / frames_to_analyze * 100), " %","of ", frames_to_analyze, " frames", end='\r')
#count total hits
total_hits_frame = len(ttree_mu3e.tilehit_tile)
total_hits_counter += total_hits_frame
#set counters
corr_counter = 0
uncorr_counter = 0
#get primaries
tids_frame = get_tid_frame(ttree_mu3e, ttree_mu3e_mc)
tids_frame_arr = []
for key in tids_frame.keys():
tids_frame_arr.append([key,tids_frame[key]]) #[hit tile, tid for tile hit]
#get clusters
clusters_with_tids = sclump.build_cluster_with_truth_tid(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, frame, mask_type, rec_type)
cluster_tids_arr = []
cluster_master_tids_arr = []
for key in clusters_with_tids.keys():
cluster_master_tids_arr.append(key)
cluster_tids_arr.append(clusters_with_tids[key])
#count hits in clusters
"""
cluster_hits_counter_tmp = 0
for key in clusters_with_tids.keys():
cluster_hits_counter_tmp += len(clusters_with_tids[key])
# cluster_hits_counter_tmp +=1
# for i in clusters_with_tids[key]:
# cluster_hits_counter_tmp +=1
cluster_hits_counter += cluster_hits_counter_tmp
"""
#count hits in clusters
cluster_hits_counter_tmp = 0
for key in clusters_with_tids.keys():
cluster_hits_counter_tmp += len(clusters_with_tids[key])
cluster_hits_counter += cluster_hits_counter_tmp
#comparison
for j in range(len(cluster_master_tids_arr)): #loop over all clusters in frame
for k in range(len(cluster_tids_arr[j])): #loop over all tids in cluster
if cluster_tids_arr[j][k] == cluster_master_tids_arr[j]: #if tid in cluster = tid of cluster master
corr_counter += 1
else:
uncorr_counter += 1
#add #master tiles to corr_counter
#corr_counter += len(cluster_master_tids_arr)
if (corr_counter + uncorr_counter) != cluster_hits_counter_tmp:
print("error: counters don't match",(corr_counter + uncorr_counter), cluster_hits_counter_tmp)
#add to total corr and uncorr counters
tot_corr_counter += corr_counter
tot_uncorr_counter += uncorr_counter
#calculate fractions
if corr_counter != 0:
frac_corr_clusters_frame.append(corr_counter/cluster_hits_counter_tmp)
frac_corr_frame.append(corr_counter/total_hits_frame)
if uncorr_counter != 0:
frac_uncorr_frame.append(uncorr_counter/cluster_hits_counter_tmp)
print("Progress: 100 %","of ", frames_to_analyze, " frames")
print("Total #hits in frames/#hits in clusters = ", total_hits_counter/cluster_hits_counter)
print("Correctly associated out of all hits: ", tot_corr_counter/(total_hits_counter/100),"%")
print("Correctly associated out of all hits in clusters: ", tot_corr_counter/(cluster_hits_counter/100),"%")
print("Incorrectly associated out of all hits: ", tot_uncorr_counter/(total_hits_counter/100),"%")
print("Incorrectly associated out of all hits in clusters: ", tot_uncorr_counter/(cluster_hits_counter/100),"%")
return frac_corr_frame, frac_corr_clusters_frame, frac_uncorr_frame, tot_corr_counter
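The comparison loop above reduces to counting, per cluster, how many hit tids match the tid of that cluster's master tile. A minimal self-contained sketch (the function name and sample data are illustrative, not part of melp):

```python
def count_tid_matches(clusters_with_tids):
    """Count hits whose truth tid matches (corr) or differs from (uncorr)
    the truth tid of their cluster master."""
    corr, uncorr = 0, 0
    for master_tid, hit_tids in clusters_with_tids.items():
        for tid in hit_tids:
            if tid == master_tid:
                corr += 1
            else:
                uncorr += 1
    return corr, uncorr

# two of three hits in cluster 1 match, the single hit in cluster 5 matches
print(count_tid_matches({1: [1, 1, 2], 5: [5]}))  # (3, 1)
```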
#-----------------------------------------------------
#returns fraction of number of hits in cluster and total number of hits
def get_hits_not_in_cluster(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector: melp.Detector, mask_type, number_of_frames = None, rec_type = None, cluster_type = None):
#set frame number
if number_of_frames is None:
frames_to_analyze = ttree_mu3e.GetEntries()
else:
frames_to_analyze = number_of_frames
#set counters
total_hits_counter = []
cluster_hits_counter = []
frac_not_in_cluster = []
#counting
for frame in range(frames_to_analyze):
ttree_mu3e.GetEntry(frame)
#Printing status info
if frame % 5000 == 0:
print("Progress: ", np.round(frame / frames_to_analyze * 100), " %","of ", frames_to_analyze, " frames", end='\r')
#count total hits
tot_hits_frame = len(ttree_mu3e.tilehit_tile)
total_hits_counter.append(tot_hits_frame)
#count hits in clusters
if cluster_type is None:
clusters_frame = sclump.build_clusters_in_masks(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector,frame, mask_type, rec_type)
elif cluster_type == "time":
__ , clusters_frame = tclump.time_clustering_frame(ttree_mu3e, printing = None)
#clusters_frame = sclump.build_clusters_in_masks(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, frame, mask_type, rec_type)
cluster_hits_counter_tmp = 0
for key in clusters_frame.keys():
cluster_hits_counter_tmp += len(clusters_frame[key])
cluster_hits_counter.append(cluster_hits_counter_tmp)
#calculate fraction
if cluster_hits_counter_tmp != 0:
frac_not_in_cluster.append((tot_hits_frame-cluster_hits_counter_tmp)/tot_hits_frame)
print("Progress: 100 %","of ", frames_to_analyze, " frames")
print("Not associated hits out of all hits: ",(np.sum(total_hits_counter)-np.sum(cluster_hits_counter))/(np.sum(total_hits_counter)/100) , "%")
return frac_not_in_cluster
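The per-frame fraction appended above is simply (total hits - clustered hits) / total hits. A standalone sketch of that bookkeeping (illustrative names, not from melp):

```python
def frac_not_in_cluster(total_hits, cluster_hits):
    """Fraction of hits in a frame that were not absorbed into any cluster."""
    if total_hits == 0:
        raise ValueError("empty frame")
    return (total_hits - cluster_hits) / total_hits

print(frac_not_in_cluster(10, 7))  # 0.3
```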
#-----------------------------------------------------
#returns fraction of number of hits in cluster and total number of hits
def get_hits_not_in_cluster_3_frame(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector: melp.Detector, mask_type, number_of_frames = None, rec_type = None):
#set frame number
if number_of_frames is None:
frames_to_analyze = ttree_mu3e.GetEntries()
else:
frames_to_analyze = number_of_frames
#set counters
total_hits_counter = []
cluster_hits_counter = []
frac_not_in_cluster = []
#############################
over_counter = 0
###############################
#get total hits
hits_all_frames ,hits_all_frames_counter_after = clump_3.del_double_hits_in_3_frame_cluster(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, mask_type, number_of_frames, rec_type)
#counting
for frame in np.arange(2, frames_to_analyze-2, 1):
ttree_mu3e.GetEntry(frame)
#Printing status info
if frame % 2000 == 0:
print("Progress: ", np.round(frame / frames_to_analyze * 100), " %","of ", frames_to_analyze, " frames", end='\r')
#count total hits
tot_hits_frame = len(hits_all_frames[frame])
total_hits_counter.append(tot_hits_frame)
#count hits in clusters and
#remove double hits in clusters like in check_for_mult_hit_tiles_diff_frame
clusters_frame = clump_3.build_clusters_in_masks_with_neighbours(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, frame, mask_type, rec_type)
cluster_hits_counter_tmp = 0
double_hit_counter_tmp = 0
for key in clusters_frame.keys():
cluster = clusters_frame[key]
cluster_hits_counter_tmp += len(cluster)
for hit1 in cluster:
for hit2 in cluster:
if hit1[0] == hit2[0] and hit1[1] != hit2[1]:
double_hit_counter_tmp += 1
#correct for moved cluster hits
for hit in cluster:
if hit[1] != frame:
tot_hits_frame += 1
cluster_hits_counter.append(cluster_hits_counter_tmp - double_hit_counter_tmp)
##################################
if (cluster_hits_counter_tmp - double_hit_counter_tmp) > tot_hits_frame:
over_counter += (cluster_hits_counter_tmp - double_hit_counter_tmp)- tot_hits_frame
# print(frame, (cluster_hits_counter_tmp - double_hit_counter_tmp)- tot_hits_frame)
################################
#calculate fraction
if tot_hits_frame != 0:
frac_not_in_cluster.append((tot_hits_frame - (cluster_hits_counter_tmp - double_hit_counter_tmp))/tot_hits_frame)
if np.sum(total_hits_counter) != hits_all_frames_counter_after:
print("ERROR: Total hit counters don't match", np.sum(total_hits_counter), hits_all_frames_counter_after)
print("Progress: 100 %","of ", frames_to_analyze, " frames")
print("Not associated hits out of all hits: ", (np.sum(total_hits_counter) - np.sum(cluster_hits_counter))/(np.sum(total_hits_counter)/100), "%")
##########################
print("over-counted cluster hits:", over_counter)
##########################
return frac_not_in_cluster
#---------------------------------------------------------
def compare_to_primary_3_frames(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector: melp.Detector, mask_type, number_of_frames = None, rec_type = None):
frac_corr_frame = []
frac_corr_clusters_frame = []
frac_uncorr_frame = []
total_hits_counter = []
cluster_hits_counter = 0
tot_corr_counter = 0
tot_uncorr_counter = 0
#set frame number
if number_of_frames is None:
frames_to_analyze = ttree_mu3e.GetEntries()
else:
frames_to_analyze = number_of_frames
#get total hits
hits_all_frames ,hits_all_frames_counter_after = clump_3.del_double_hits_in_3_frame_cluster(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, mask_type, number_of_frames, rec_type)
for frame in np.arange(2, frames_to_analyze-2, 1):
ttree_mu3e.GetEntry(frame)
#Printing status info
if frame % 5000 == 0:
print("Progress: ", np.round(frame / frames_to_analyze * 100), " %","of ", frames_to_analyze, " frames", end='\r')
#count total hits
tot_hits_frame = len(hits_all_frames[frame])
total_hits_counter.append(tot_hits_frame)
#set counters
corr_counter = 0
uncorr_counter = 0
#get primaries
primaries_frame_0 = get_mc_primary_for_hit_frame(ttree_mu3e)
ttree_mu3e.GetEntry(frame+1)
primaries_frame_plus = get_mc_primary_for_hit_frame(ttree_mu3e)
ttree_mu3e.GetEntry(frame-1)
primaries_frame_minus = get_mc_primary_for_hit_frame(ttree_mu3e)
ttree_mu3e.GetEntry(frame)
primaries_frame_arr_0 = []
for key in primaries_frame_0.keys():
primaries_frame_arr_0.append([key,primaries_frame_0[key]]) #[hit tile, primary for tile hit]
primaries_frame_arr_plus = []
for key in primaries_frame_plus.keys():
primaries_frame_arr_plus.append([key,primaries_frame_plus[key]]) #[hit tile, primary for tile hit]
primaries_frame_arr_minus = []
for key in primaries_frame_minus.keys():
primaries_frame_arr_minus.append([key,primaries_frame_minus[key]]) #[hit tile, primary for tile hit]
#get clusters
clusters_with_primaries = clump_3.build_cluster_with_truth_primary_3_frame(ttree_mu3e, ttree_mu3e_mc, ttree_sensor, ttree_tiles, mu3e_detector, frame, mask_type, rec_type)
cluster_primaries_arr = []
cluster_master_primaries_arr = []
for key in clusters_with_primaries.keys():
cluster_master_primaries_arr.append(key)
cluster_primaries_arr.append(clusters_with_primaries[key])
#count hits in clusters
cluster_hits_counter_tmp = 0
for key in clusters_with_primaries.keys():
cluster_hits_counter_tmp += len(clusters_with_primaries[key])
cluster_hits_counter += cluster_hits_counter_tmp
#comparison
for j in range(len(cluster_primaries_arr)): #loop over all clusters in frame
for k in range(len(cluster_primaries_arr[j])): #loop over all primaries in cluster
if cluster_primaries_arr[j][k] == cluster_master_primaries_arr[j]: #if primary in cluster = primary of cluster master
corr_counter += 1
else:
uncorr_counter += 1
#add to total corr and uncorr counters
tot_corr_counter += corr_counter
tot_uncorr_counter += uncorr_counter
if cluster_hits_counter_tmp != 0:
frac_corr_clusters_frame.append(corr_counter/cluster_hits_counter_tmp)
if tot_hits_frame != 0:
frac_corr_frame.append(corr_counter/tot_hits_frame)
if cluster_hits_counter_tmp != 0:
frac_uncorr_frame.append(uncorr_counter/cluster_hits_counter_tmp)
print("Progress: 100 %","of ", frames_to_analyze, " frames")
print("Total #hits in frames/#hits in clusters = ", np.sum(total_hits_counter)/cluster_hits_counter)
print("Correctly associated out of all hits: ", tot_corr_counter/(np.sum(total_hits_counter)/100),"%")
print("Correctly associated out of all hits in clusters: ", tot_corr_counter/(cluster_hits_counter/100),"%")
print("Incorrectly associated out of all hits: ", tot_uncorr_counter/(np.sum(total_hits_counter)/100),"%")
print("Incorrectly associated out of all hits in clusters: ", tot_uncorr_counter/(cluster_hits_counter/100),"%")
return frac_corr_frame, frac_corr_clusters_frame, frac_uncorr_frame, tot_corr_counter
# django_wma/admin.py (rohithasrk/django-wma)
from django.contrib import admin
from .models import Container, WaterQuality, WaterQuantity
admin.site.register(Container)
admin.site.register(WaterQuantity)
admin.site.register(WaterQuality)
# dask_ml/decomposition/__init__.py (GueroudjiAmal/dask-ml)
from .incremental_pca import IncrementalPCA # noqa
from .in_situ_incremental_pca import InSituIncrementalPCA
from .pca import PCA # noqa
from .truncated_svd import TruncatedSVD # noqa
# app/asistencias/__init__.py (originaltebas/chmembers)
from flask import Blueprint
asistencias = Blueprint('asistencias', __name__)
from . import views
#!/usr/bin/env python3
# tests/python/extensions/pybind/cuda/geometry/rdr2geo.py (isce-framework/isce3)
import os
from osgeo import gdal
import numpy as np
import iscetest
import isce3
from pybind_nisar.products.readers import SLC
def test_run():
'''
check if topo runs
'''
# prepare Rdr2Geo init params
h5_path = os.path.join(iscetest.data, "envisat.h5")
radargrid = isce3.product.RadarGridParameters(h5_path)
slc = SLC(hdf5file=h5_path)
orbit = slc.getOrbit()
doppler = slc.getDopplerCentroid()
ellipsoid = isce3.core.Ellipsoid()
# init Rdr2Geo class
rdr2geo_obj = isce3.cuda.geometry.Rdr2Geo(radargrid, orbit,
ellipsoid, doppler)
# load test DEM
dem_raster = isce3.io.Raster(os.path.join(iscetest.data, "srtm_cropped.tif"))
# run
rdr2geo_obj.topo(dem_raster, ".")
def test_run_raster_layers():
'''
check if topo runs
'''
# prepare Rdr2Geo init params
h5_path = os.path.join(iscetest.data, "envisat.h5")
radargrid = isce3.product.RadarGridParameters(h5_path)
slc = SLC(hdf5file=h5_path)
orbit = slc.getOrbit()
doppler = slc.getDopplerCentroid()
ellipsoid = isce3.core.Ellipsoid()
# init Rdr2Geo class
rdr2geo_obj = isce3.cuda.geometry.Rdr2Geo(radargrid, orbit,
ellipsoid, doppler)
# load test DEM
dem_raster = isce3.io.Raster(os.path.join(iscetest.data, "srtm_cropped.tif"))
x_raster = isce3.io.Raster("x.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float64, 'ENVI')
y_raster = isce3.io.Raster("y.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float64, 'ENVI')
height_raster = isce3.io.Raster("z.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float64, 'ENVI')
incidence_angle_raster = isce3.io.Raster("inc.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
heading_angle_raster = isce3.io.Raster("hgd.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
local_incidence_angle_raster = isce3.io.Raster("localInc.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
local_Psi_raster = isce3.io.Raster("localPsi.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
simulated_amplitude_raster = isce3.io.Raster("simamp.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
shadow_layover_raster = isce3.io.Raster("mask.rdr", radargrid.width,
radargrid.length, 1, gdal.GDT_Float32, 'ENVI')
# run
rdr2geo_obj.topo(dem_raster, x_raster,
y_raster, height_raster,
incidence_angle_raster,
heading_angle_raster,
local_incidence_angle_raster,
local_Psi_raster,
simulated_amplitude_raster, shadow_layover_raster)
topo_raster = isce3.io.Raster(
"topo_layers.vrt", raster_list=[x_raster,
y_raster, height_raster,
incidence_angle_raster,
heading_angle_raster,
local_incidence_angle_raster,
local_Psi_raster,
simulated_amplitude_raster])
def test_validate():
'''
validate generated results
'''
# load generated topo raster
test_ds = gdal.Open("topo.vrt", gdal.GA_ReadOnly)
# load reference topo raster
ref_ds = gdal.Open(os.path.join(iscetest.data, "topo/topo.vrt"),
gdal.GA_ReadOnly)
# define tolerances
tols = [1.0e-5, 1.0e-5, 0.15, 1.0e-4, 1.0e-4, 0.02, 0.02]
# loop thru bands and check tolerances
for i_band in range(ref_ds.RasterCount):
# retrieve test and ref arrays for current band
test_arr = test_ds.GetRasterBand(i_band+1).ReadAsArray()
ref_arr = ref_ds.GetRasterBand(i_band+1).ReadAsArray()
# calculate mean of absolute error and mask anything > 5.0
err = np.abs(test_arr - ref_arr)
err = np.ma.masked_array(err, mask=err > 5.0)
mean_err = np.mean(err)
# check if tolerances met
assert( mean_err < tols[i_band]), f"band {i_band} mean err fail"
def test_layers_validate():
'''
validate generated results
'''
# load generated topo raster
test_ds = gdal.Open("topo_layers.vrt", gdal.GA_ReadOnly)
# load reference topo raster
ref_ds = gdal.Open(os.path.join(iscetest.data, "topo/topo.vrt"),
gdal.GA_ReadOnly)
# define tolerances
tols = [1.0e-5, 1.0e-5, 0.15, 1.0e-4, 1.0e-4, 0.02, 0.02]
# loop thru bands and check tolerances
for i_band in range(ref_ds.RasterCount):
# retrieve test and ref arrays for current band
test_arr = test_ds.GetRasterBand(i_band+1).ReadAsArray()
ref_arr = ref_ds.GetRasterBand(i_band+1).ReadAsArray()
# calculate mean of absolute error and mask anything > 5.0
err = np.abs(test_arr - ref_arr)
err = np.ma.masked_array(err, mask=err > 5.0)
mean_err = np.mean(err)
# check if tolerances met
assert( mean_err < tols[i_band]), f"band {i_band} mean err fail"
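The tolerance check used by both validators boils down to a masked mean absolute error: gross outliers (error above 5.0) are excluded before averaging. A standalone NumPy sketch with toy arrays:

```python
import numpy as np

test_arr = np.array([1.0, 2.0, 3.0, 100.0])
ref_arr = np.array([1.1, 2.0, 2.9, 0.0])

err = np.abs(test_arr - ref_arr)
err = np.ma.masked_array(err, mask=err > 5.0)  # drop the 100.0 outlier
mean_err = np.mean(err)                        # mean of [0.1, 0.0, 0.1]
print(mean_err)
```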
if __name__ == "__main__":
test_run()
test_validate()
# mindquantum/framework/operations.py (mindspore-ai/mindquantum)
# -*- coding: utf-8 -*-
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Mindspore quantum simulator operator."""
import numpy as np
import mindspore as ms
from mindspore import context
from mindspore.ops import operations as P
import mindspore.nn as nn
from mindspore.ops.primitive import constexpr
from mindquantum.simulator import GradOpsWrapper
@constexpr
def check_enc_input_shape(data, x, enc_len):
if not isinstance(data, ms.Tensor):
raise TypeError(f"Encoder parameter requires a Tensor but get {type(data)}")
if len(x) != 2 or x[1] != enc_len:
raise ValueError(f'Encoder data requires a two dimension Tensor with second' +
f' dimension should be {enc_len}, but get shape {x}')
@constexpr
def check_ans_input_shape(data, x, ans_len):
if not isinstance(data, ms.Tensor):
raise TypeError(f"Ansatz parameter requires a Tensor but get {type(data)}")
if len(x) != 1 or x[0] != ans_len:
raise ValueError(f'Ansatz data requires a one dimension Tensor with shape {ans_len} ' + f'but get {x}')
def _mode_check(cell):
    # Reconstructed helper: it is referenced below but missing from this excerpt.
    # MindQuantum operators only support PYNATIVE_MODE.
    if context.get_context('mode') != context.PYNATIVE_MODE:
        raise RuntimeError(f"{type(cell).__name__} is supported in PYNATIVE_MODE only.")


def _check_grad_ops(expectation_with_grad):
    # Reconstructed helper: make sure we got a grad ops object from the simulator.
    if not isinstance(expectation_with_grad, GradOpsWrapper):
        raise TypeError(f"expectation_with_grad requires a GradOpsWrapper, "
                        f"but get {type(expectation_with_grad)}")


class MQOps(nn.Cell):
"""
MindQuantum operator that gets the expectation of a hamiltonian on a quantum
state evaluated by a parameterized quantum circuit (PQC). This PQC should contain
an encoder circuit and an ansatz circuit. This op is supported in `PYNATIVE_MODE` only.
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receive encoder data and
ansatz data and return the expectation value and gradient value of parameters
respect to expectation.
Inputs:
- **enc_data** (Tensor) - Tensor of encoder data with shape :math:`(N, M)` that
you want to encode into quantum state, where :math:`N` means the batch size
and :math:`M` means the number of encoder parameters.
- **ans_data** (Tensor) - Tensor with shape :math:`N` for ansatz circuit,
where :math:`N` means the number of ansatz parameters.
Outputs:
Tensor, The expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQOps
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> enc = Circuit().ry('a', 0)
>>> ans = Circuit().h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, enc+ans,
... encoder_params_name=['a'],
... ansatz_params_name=['b'])
>>> enc_data = np.array([[0.1]])
>>> ans_data = np.array([0.2])
>>> f, g_enc, g_ans = grad_ops(enc_data, ans_data)
>>> f
array([[0.0978434+0.j]])
>>> net = MQOps(grad_ops)
>>> f_ms = net(ms.Tensor(enc_data), ms.Tensor(ans_data))
>>> f_ms
Tensor(shape=[1, 1], dtype=Float32, value=
[[ 9.78433937e-02]])
"""
def __init__(self, expectation_with_grad):
super(MQOps, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, enc_data, ans_data):
check_enc_input_shape(enc_data, self.shape_ops(enc_data), len(self.expectation_with_grad.encoder_params_name))
check_ans_input_shape(ans_data, self.shape_ops(ans_data), len(self.expectation_with_grad.ansatz_params_name))
enc_data = enc_data.asnumpy()
ans_data = ans_data.asnumpy()
f, g_enc, g_ans = self.expectation_with_grad(enc_data, ans_data)
f = ms.Tensor(np.real(f), dtype=ms.float32)
self.g_enc = np.real(g_enc)
self.g_ans = np.real(g_ans)
return f
def bprop(self, enc_data, ans_data, out, dout):
dout = dout.asnumpy()
enc_grad = np.einsum('smp,sm->sp', self.g_enc, dout)
ans_grad = np.einsum('smp,sm->p', self.g_ans, dout)
return ms.Tensor(enc_grad, dtype=ms.float32), ms.Tensor(ans_grad, dtype=ms.float32)
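The `bprop` contraction above can be checked with plain NumPy: for `s` samples, `m` hamiltonians and `p` encoder parameters, `'smp,sm->sp'` applies the chain rule per sample, summing the incoming gradient over the hamiltonian axis (a shape-only sketch, no MindSpore needed):

```python
import numpy as np

s, m, p = 4, 2, 3              # batch size, hamiltonians, encoder parameters
g_enc = np.ones((s, m, p))     # d(expectation)/d(param) for every sample
dout = np.full((s, m), 0.5)    # incoming gradient from the loss

enc_grad = np.einsum('smp,sm->sp', g_enc, dout)
print(enc_grad.shape)          # (4, 3); every entry sums 0.5 over m=2 -> 1.0
```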
class MQN2Ops(nn.Cell):
r"""
MindQuantum operator that gets the square of the absolute value of the expectation of a hamiltonian
on a quantum state evaluated by a parameterized quantum circuit (PQC). This PQC should
contain an encoder circuit and an ansatz circuit. This op is supported in `PYNATIVE_MODE` only.
.. math::
O = \left|\left<\varphi\right| U^\dagger_l H U_r\left|\psi\right>\right|^2
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receive encoder data and
ansatz data and return the square of absolute value of expectation value and
gradient value of parameters respect to expectation.
Inputs:
- **enc_data** (Tensor) - Tensor of encoder data with shape :math:`(N, M)` that
you want to encode into quantum state, where :math:`N` means the batch size
and :math:`M` means the number of encoder parameters.
- **ans_data** (Tensor) - Tensor with shape :math:`N` for ansatz circuit,
where :math:`N` means the number of ansatz parameters.
Outputs:
Tensor, The square of absolute value of expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQN2Ops
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> enc = Circuit().ry('a', 0)
>>> ans = Circuit().h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, enc+ans,
... encoder_params_name=['a'],
... ansatz_params_name=['b'])
>>> enc_data = np.array([[0.1]])
>>> ans_data = np.array([0.2])
>>> f, g_enc, g_ans = grad_ops(enc_data, ans_data)
>>> np.abs(f) ** 2
array([[0.00957333]])
>>> net = MQN2Ops(grad_ops)
>>> f_ms = net(ms.Tensor(enc_data), ms.Tensor(ans_data))
>>> f_ms
Tensor(shape=[1, 1], dtype=Float32, value=
[[ 9.57333017e-03]])
"""
def __init__(self, expectation_with_grad):
super(MQN2Ops, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, enc_data, ans_data):
check_enc_input_shape(enc_data, self.shape_ops(enc_data), len(self.expectation_with_grad.encoder_params_name))
check_ans_input_shape(ans_data, self.shape_ops(ans_data), len(self.expectation_with_grad.ansatz_params_name))
enc_data = enc_data.asnumpy()
ans_data = ans_data.asnumpy()
f, g_enc, g_ans = self.expectation_with_grad(enc_data, ans_data)
self.f = f
f = ms.Tensor(np.abs(f)**2, dtype=ms.float32)
self.g_enc = g_enc
self.g_ans = g_ans
return f
def bprop(self, enc_data, ans_data, out, dout):
dout = dout.asnumpy()
enc_grad = 2 * np.real(np.einsum('smp,sm,sm->sp', self.g_enc, dout, np.conj(self.f)))
ans_grad = 2 * np.real(np.einsum('smp,sm,sm->p', self.g_ans, dout, np.conj(self.f)))
return ms.Tensor(enc_grad, dtype=ms.float32), ms.Tensor(ans_grad, dtype=ms.float32)
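The factor of 2 and the conjugate in the `bprop` above come from the chain rule for a squared modulus: for complex f(θ), d|f|²/dθ = 2·Re(conj(f)·df/dθ). A quick numerical check with a toy complex function (not a quantum expectation):

```python
import numpy as np

theta = 0.3
f = lambda t: np.exp(1j * t) * (1 + t)                       # toy complex "expectation"
df = lambda t: 1j * np.exp(1j * t) * (1 + t) + np.exp(1j * t)  # its derivative

analytic = 2 * np.real(np.conj(f(theta)) * df(theta))        # chain rule, as in bprop
eps = 1e-6
numeric = (abs(f(theta + eps))**2 - abs(f(theta - eps))**2) / (2 * eps)
print(analytic, numeric)  # both close to d/dt (1+t)^2 = 2.6
```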
class MQAnsatzOnlyOps(nn.Cell):
r"""
MindQuantum operator that gets the expectation of a hamiltonian
on a quantum state evaluated by a parameterized quantum circuit (PQC). This PQC should
contain an ansatz circuit only. This op is supported in `PYNATIVE_MODE` only.
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receive encoder data and
ansatz data and return the expectation value and gradient value of parameters
respect to expectation.
Inputs:
- **ans_data** (Tensor) - Tensor with shape :math:`N` for ansatz circuit,
where :math:`N` means the number of ansatz parameters.
Outputs:
Tensor, The expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQAnsatzOnlyOps
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> circ = Circuit().ry('a', 0).h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, circ)
>>> data = np.array([0.1, 0.2])
>>> f, g = grad_ops(data)
>>> f
array([[0.0978434+0.j]])
>>> net = MQAnsatzOnlyOps(grad_ops)
>>> f_ms = net(ms.Tensor(data))
>>> f_ms
Tensor(shape=[1], dtype=Float32, value= [ 9.78433937e-02])
"""
def __init__(self, expectation_with_grad):
super(MQAnsatzOnlyOps, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, x):
check_ans_input_shape(x, self.shape_ops(x), len(self.expectation_with_grad.ansatz_params_name))
x = x.asnumpy()
f, g = self.expectation_with_grad(x)
f = ms.Tensor(np.real(f[0]), dtype=ms.float32)
self.g = np.real(g[0])
return f
def bprop(self, x, out, dout):
dout = dout.asnumpy()
grad = dout @ self.g
return ms.Tensor(grad, dtype=ms.float32)
class MQN2AnsatzOnlyOps(nn.Cell):
r"""
MindQuantum operator that gets the square of the absolute value of the expectation of a hamiltonian
on a quantum state evaluated by a parameterized quantum circuit (PQC). This PQC should
contain an ansatz circuit only. This op is supported in `PYNATIVE_MODE` only.
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receive encoder data and
ansatz data and return the square of absolute value of expectation value and
gradient value of parameters respect to expectation.
Inputs:
- **ans_data** (Tensor) - Tensor with shape :math:`N` for ansatz circuit,
where :math:`N` means the number of ansatz parameters.
Outputs:
Tensor, The square of absolute value of expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQN2AnsatzOnlyOps
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> circ = Circuit().ry('a', 0).h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, circ)
>>> data = np.array([0.1, 0.2])
>>> f, g = grad_ops(data)
>>> np.abs(f) ** 2
array([[0.00957333]])
>>> net = MQN2AnsatzOnlyOps(grad_ops)
>>> f_ms = net(ms.Tensor(data))
>>> f_ms
Tensor(shape=[1], dtype=Float32, value= [ 9.57333017e-03])
"""
def __init__(self, expectation_with_grad):
super(MQN2AnsatzOnlyOps, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, x):
check_ans_input_shape(x, self.shape_ops(x), len(self.expectation_with_grad.ansatz_params_name))
x = x.asnumpy()
f, g = self.expectation_with_grad(x)
self.f = f[0]
f = ms.Tensor(np.abs(f[0])**2, dtype=ms.float32)
self.g = g[0]
return f
def bprop(self, x, out, dout):
dout = dout.asnumpy()
grad = 2 * np.real(np.einsum('m,m,mp->p', np.conj(self.f), dout, self.g))
return ms.Tensor(grad, dtype=ms.float32)
class MQEncoderOnlyOps(nn.Cell):
r"""
MindQuantum operator that gets the expectation of a hamiltonian
on a quantum state evaluated by a parameterized quantum circuit (PQC). This PQC should
contain an encoder circuit only. This op is supported in `PYNATIVE_MODE` only.
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receive encoder data and
ansatz data and return the expectation value and gradient value of parameters
respect to expectation.
Inputs:
- **enc_data** (Tensor) - Tensor of encoder data with shape :math:`(N, M)` that
you want to encode into quantum state, where :math:`N` means the batch size
and :math:`M` means the number of encoder parameters.
Outputs:
Tensor, The expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQEncoderOnlyOps
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> circ = Circuit().ry('a', 0).h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, circ, encoder_params_name=circ.params_name)
>>> data = np.array([[0.1, 0.2], [0.3, 0.4]])
>>> f, g = grad_ops(data)
>>> f
array([[0.0978434 +0.j],
[0.27219214+0.j]])
>>> net = MQEncoderOnlyOps(grad_ops)
>>> f_ms = net(ms.Tensor(data))
>>> f_ms
Tensor(shape=[2, 1], dtype=Float32, value=
[[ 9.78433937e-02],
[ 2.72192121e-01]])
"""
def __init__(self, expectation_with_grad):
super(MQEncoderOnlyOps, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, x):
check_enc_input_shape(x, self.shape_ops(x), len(self.expectation_with_grad.encoder_params_name))
x = x.asnumpy()
f, g = self.expectation_with_grad(x)
f = ms.Tensor(np.real(f), dtype=ms.float32)
self.g = np.real(g)
return f
def bprop(self, x, out, dout):
dout = dout.asnumpy()
grad = np.einsum('smp,sm->sp', self.g, dout)
return ms.Tensor(grad, dtype=ms.float32)
class MQN2EncoderOnlyOps(nn.Cell):
r"""
MindQuantum operator that gets the square of the absolute value of the expectation of a hamiltonian
on a quantum state evaluated by a parameterized quantum circuit (PQC). This PQC should
contain an encoder circuit only. This op supports `PYNATIVE_MODE` only.
Args:
expectation_with_grad (GradOpsWrapper): a grad ops that receives encoder data
and returns the square of the absolute value of the expectation value and the
gradient value of the expectation with respect to the parameters.
Inputs:
- **enc_data** (Tensor) - Tensor of encoder data with shape :math:`(N, M)` that
you want to encode into quantum state, where :math:`N` means the batch size
and :math:`M` means the number of encoder parameters.
Outputs:
Tensor, the square of the absolute value of the expectation value of the hamiltonian.
Supported Platforms:
``GPU``, ``CPU``
Examples:
>>> import numpy as np
>>> from mindquantum import Circuit, Hamiltonian, QubitOperator
>>> from mindquantum import Simulator, MQN2EncoderOnlyOps
>>> import mindspore as ms
>>> ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")
>>> circ = Circuit().ry('a', 0).h(0).rx('b', 0)
>>> ham = Hamiltonian(QubitOperator('Z0'))
>>> sim = Simulator('projectq', 1)
>>> grad_ops = sim.get_expectation_with_grad(ham, circ, encoder_params_name=circ.params_name)
>>> data = np.array([[0.1, 0.2], [0.3, 0.4]])
>>> f, g = grad_ops(data)
>>> np.abs(f) ** 2
array([[0.00957333],
[0.07408856]])
>>> net = MQN2EncoderOnlyOps(grad_ops)
>>> f_ms = net(ms.Tensor(data))
>>> f_ms
Tensor(shape=[2, 1], dtype=Float32, value=
[[ 9.57333017e-03],
[ 7.40885586e-02]])
"""
def __init__(self, expectation_with_grad):
super(MQN2EncoderOnlyOps, self).__init__()
_mode_check(self)
_check_grad_ops(expectation_with_grad)
self.expectation_with_grad = expectation_with_grad
self.shape_ops = P.Shape()
def extend_repr(self):
return self.expectation_with_grad.str
def construct(self, x):
check_enc_input_shape(x, self.shape_ops(x), len(self.expectation_with_grad.encoder_params_name))
x = x.asnumpy()
f, g = self.expectation_with_grad(x)
self.f = f
f = ms.Tensor(np.abs(f)**2, dtype=ms.float32)
self.g = g
return f
def bprop(self, x, out, dout):
dout = dout.asnumpy()
grad = 2 * np.real(np.einsum('smp,sm,sm->sp', self.g, dout, np.conj(self.f)))
return ms.Tensor(grad, dtype=ms.float32)
def _mode_check(self):
if context.get_context('mode') != context.PYNATIVE_MODE:
raise RuntimeError(f'{self.__class__} supports `PYNATIVE_MODE` only. Run the commands below to set the context:\n\
import mindspore as ms\n\
ms.context.set_context(mode=ms.context.PYNATIVE_MODE, device_target="CPU")')
def _check_grad_ops(expectation_with_grad):
if not isinstance(expectation_with_grad, GradOpsWrapper):
raise TypeError(f'expectation_with_grad requires a GradOpsWrapper, but got {type(expectation_with_grad)}')
| 40.873684 | 118 | 0.631522 | 2,619 | 19,415 | 4.5147 | 0.088583 | 0.076116 | 0.096414 | 0.062246 | 0.855125 | 0.848275 | 0.840663 | 0.832629 | 0.814783 | 0.810301 | 0 | 0.021504 | 0.247901 | 19,415 | 474 | 119 | 40.959916 | 0.788248 | 0.564873 | 0 | 0.634146 | 0 | 0.006098 | 0.061516 | 0.006672 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0 | 0.04878 | 0.036585 | 0.365854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71470115c07fb8c0399135c3105fe2ab36dd6a2f | 114 | py | Python | CADMium/functionals/TW_paramke.py | wasserman-group/CADMium | 0d0e16f8965e0a3acfea9ea6ba5dd0d51f1bdc55 | [
"BSD-3-Clause"
] | null | null | null | CADMium/functionals/TW_paramke.py | wasserman-group/CADMium | 0d0e16f8965e0a3acfea9ea6ba5dd0d51f1bdc55 | [
"BSD-3-Clause"
] | 1 | 2021-05-12T17:24:27.000Z | 2021-05-12T17:24:27.000Z | CADMium/functionals/TW_paramke.py | VHchavez/CADMium | 39f3bd63ca69502a80c677855da72f9e691b57e2 | [
"BSD-3-Clause"
] | 2 | 2020-10-07T20:48:56.000Z | 2021-04-22T19:06:18.000Z | """
TW_paramke.py
"""
def TW_paramke(s, k):
F = 1 + k[0] - k[0] / (1 + k[1] / k[0] * s ** 2)
return F | 12.666667 | 54 | 0.412281 | 23 | 114 | 1.956522 | 0.478261 | 0.133333 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.324561 | 114 | 9 | 55 | 12.666667 | 0.493506 | 0.114035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
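For context, this is the PBE-like enhancement-factor form F(s) = 1 + kappa - kappa / (1 + mu * s^2 / kappa), which satisfies F(0) = 1 and saturates at 1 + kappa for large s. A hedged usage sketch; the (kappa, mu) pair below is illustrative, not CADMium's defaults:

```python
import numpy as np

# Enhancement factor F(s) = 1 + kappa - kappa / (1 + mu * s**2 / kappa),
# evaluated elementwise on an array of reduced gradients s.
def tw_enhancement(s, k):
    kappa, mu = k
    return 1 + kappa - kappa / (1 + mu * s ** 2 / kappa)

s = np.array([0.0, 0.5, 1.0, 5.0])    # illustrative reduced-gradient values
k = (0.804, 0.21951)                  # hypothetical (kappa, mu); PBE-like, not CADMium defaults
F = tw_enhancement(s, k)
# F starts at 1 for s = 0 and increases monotonically toward 1 + kappa
```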
718edd0a1a2c021dcf6294e32dc7b2b28f2ad0fc | 197 | py | Python | parsely/resource.py | gengo/parsely | 5126c2223bcc12023e7d94471d83551f562632f5 | [
"MIT"
] | null | null | null | parsely/resource.py | gengo/parsely | 5126c2223bcc12023e7d94471d83551f562632f5 | [
"MIT"
] | null | null | null | parsely/resource.py | gengo/parsely | 5126c2223bcc12023e7d94471d83551f562632f5 | [
"MIT"
] | null | null | null | import os.path
resource_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'resources')
def resource_filename(filename: str) -> str:
return os.path.join(resource_dir, filename)
| 21.888889 | 84 | 0.746193 | 29 | 197 | 4.827586 | 0.482759 | 0.214286 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111675 | 197 | 8 | 85 | 24.625 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.045685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
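The helper above anchors file names under a package-local `resources` directory. A self-contained sketch of the same pattern, using a stand-in base directory since `__file__` depends on where the module lives; the `data.json` name is purely illustrative:

```python
import os.path

# Same pattern as parsely's resource_filename: join a fixed resource directory
# with a caller-supplied file name. base_dir stands in for os.path.dirname(__file__).
base_dir = os.path.abspath('.')
resource_dir = os.path.join(base_dir, 'resources')

def resource_filename(filename: str) -> str:
    return os.path.join(resource_dir, filename)

path = resource_filename('data.json')   # 'data.json' is illustrative
```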
7197541b6894f659076c56eac2276ece55305d9c | 37,554 | py | Python | pybind/nos/v6_0_2c/interface/tengigabitethernet/switchport/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v6_0_2c/interface/tengigabitethernet/switchport/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v6_0_2c/interface/tengigabitethernet/switchport/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import mode
import port_security
import access
import access_mac_vlan_classification
import trunk_private_vlan_classification
import access_mac_group_vlan_classification
import trunk
import private_vlan
import access_mac_rspan_vlan_classification
import access_mac_group_rspan_vlan_classification
class switchport(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-interface - based on the path /interface/tengigabitethernet/switchport. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: The L2 switching characteristics of an interface.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__mode','__port_security','__access','__access_mac_vlan_classification','__trunk_private_vlan_classification','__access_mac_group_vlan_classification','__trunk','__private_vlan','__access_mac_rspan_vlan_classification','__access_mac_group_rspan_vlan_classification',)
_yang_name = 'switchport'
_rest_name = 'switchport'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__trunk_private_vlan_classification = YANGDynClass(base=trunk_private_vlan_classification.trunk_private_vlan_classification, is_container='container', presence=False, yang_name="trunk-private-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'ctag-pvlan-classification-phy-config'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__private_vlan = YANGDynClass(base=private_vlan.private_vlan, is_container='container', presence=False, yang_name="private-vlan", rest_name="private-vlan", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Private-Vlan Configuration'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__access_mac_vlan_classification = YANGDynClass(base=access_mac_vlan_classification.access_mac_vlan_classification, is_container='container', presence=False, yang_name="access-mac-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'gvlan-access-port-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__access = YANGDynClass(base=access.access, is_container='container', presence=False, yang_name="access", rest_name="access", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as Access', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__access_mac_group_vlan_classification = YANGDynClass(base=access_mac_group_vlan_classification.access_mac_group_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'mac-group-vlan-classification-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__port_security = YANGDynClass(base=port_security.port_security, is_container='container', presence=True, yang_name="port-security", rest_name="port-security", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Enable port-security feature', u'callpoint': u'interface_portsecurity'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__access_mac_group_rspan_vlan_classification = YANGDynClass(base=access_mac_group_rspan_vlan_classification.access_mac_group_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__mode = YANGDynClass(base=mode.mode, is_container='container', presence=False, yang_name="mode", rest_name="mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set mode of the Layer2 interface', u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__trunk = YANGDynClass(base=trunk.trunk, is_container='container', presence=False, yang_name="trunk", rest_name="trunk", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as trunk', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
self.__access_mac_rspan_vlan_classification = YANGDynClass(base=access_mac_rspan_vlan_classification.access_mac_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'interface', u'tengigabitethernet', u'switchport']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'interface', u'TenGigabitEthernet', u'switchport']
def _get_mode(self):
"""
Getter method for mode, mapped from YANG variable /interface/tengigabitethernet/switchport/mode (container)
YANG Description: The mode of the Layer2 interface.
"""
return self.__mode
def _set_mode(self, v, load=False):
"""
Setter method for mode, mapped from YANG variable /interface/tengigabitethernet/switchport/mode (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_mode is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mode() directly.
YANG Description: The mode of the Layer2 interface.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=mode.mode, is_container='container', presence=False, yang_name="mode", rest_name="mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set mode of the Layer2 interface', u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """mode must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=mode.mode, is_container='container', presence=False, yang_name="mode", rest_name="mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set mode of the Layer2 interface', u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
self.__mode = t
if hasattr(self, '_set'):
self._set()
def _unset_mode(self):
self.__mode = YANGDynClass(base=mode.mode, is_container='container', presence=False, yang_name="mode", rest_name="mode", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set mode of the Layer2 interface', u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
def _get_port_security(self):
"""
Getter method for port_security, mapped from YANG variable /interface/tengigabitethernet/switchport/port_security (container)
YANG Description: Enable port-security feature
"""
return self.__port_security
def _set_port_security(self, v, load=False):
"""
Setter method for port_security, mapped from YANG variable /interface/tengigabitethernet/switchport/port_security (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_port_security is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_port_security() directly.
YANG Description: Enable port-security feature
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=port_security.port_security, is_container='container', presence=True, yang_name="port-security", rest_name="port-security", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Enable port-security feature', u'callpoint': u'interface_portsecurity'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """port_security must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=port_security.port_security, is_container='container', presence=True, yang_name="port-security", rest_name="port-security", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Enable port-security feature', u'callpoint': u'interface_portsecurity'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
self.__port_security = t
if hasattr(self, '_set'):
self._set()
def _unset_port_security(self):
self.__port_security = YANGDynClass(base=port_security.port_security, is_container='container', presence=True, yang_name="port-security", rest_name="port-security", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Enable port-security feature', u'callpoint': u'interface_portsecurity'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
def _get_access(self):
"""
Getter method for access, mapped from YANG variable /interface/tengigabitethernet/switchport/access (container)
YANG Description: The access layer characteristics of this
interface.
"""
return self.__access
def _set_access(self, v, load=False):
"""
Setter method for access, mapped from YANG variable /interface/tengigabitethernet/switchport/access (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_access is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_access() directly.
YANG Description: The access layer characteristics of this
interface.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=access.access, is_container='container', presence=False, yang_name="access", rest_name="access", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as Access', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """access must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=access.access, is_container='container', presence=False, yang_name="access", rest_name="access", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as Access', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
self.__access = t
if hasattr(self, '_set'):
self._set()
def _unset_access(self):
self.__access = YANGDynClass(base=access.access, is_container='container', presence=False, yang_name="access", rest_name="access", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as Access', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
def _get_access_mac_vlan_classification(self):
"""
Getter method for access_mac_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_vlan_classification (container)
"""
return self.__access_mac_vlan_classification
def _set_access_mac_vlan_classification(self, v, load=False):
"""
Setter method for access_mac_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_vlan_classification (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_access_mac_vlan_classification is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_access_mac_vlan_classification() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=access_mac_vlan_classification.access_mac_vlan_classification, is_container='container', presence=False, yang_name="access-mac-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'gvlan-access-port-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """access_mac_vlan_classification must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=access_mac_vlan_classification.access_mac_vlan_classification, is_container='container', presence=False, yang_name="access-mac-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'gvlan-access-port-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
self.__access_mac_vlan_classification = t
if hasattr(self, '_set'):
self._set()
def _unset_access_mac_vlan_classification(self):
self.__access_mac_vlan_classification = YANGDynClass(base=access_mac_vlan_classification.access_mac_vlan_classification, is_container='container', presence=False, yang_name="access-mac-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'gvlan-access-port-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
def _get_trunk_private_vlan_classification(self):
"""
Getter method for trunk_private_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/trunk_private_vlan_classification (container)
"""
return self.__trunk_private_vlan_classification
def _set_trunk_private_vlan_classification(self, v, load=False):
"""
Setter method for trunk_private_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/trunk_private_vlan_classification (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_trunk_private_vlan_classification is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_trunk_private_vlan_classification() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=trunk_private_vlan_classification.trunk_private_vlan_classification, is_container='container', presence=False, yang_name="trunk-private-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'ctag-pvlan-classification-phy-config'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """trunk_private_vlan_classification must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=trunk_private_vlan_classification.trunk_private_vlan_classification, is_container='container', presence=False, yang_name="trunk-private-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'ctag-pvlan-classification-phy-config'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
self.__trunk_private_vlan_classification = t
if hasattr(self, '_set'):
self._set()
def _unset_trunk_private_vlan_classification(self):
self.__trunk_private_vlan_classification = YANGDynClass(base=trunk_private_vlan_classification.trunk_private_vlan_classification, is_container='container', presence=False, yang_name="trunk-private-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'ctag-pvlan-classification-phy-config'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
def _get_access_mac_group_vlan_classification(self):
"""
Getter method for access_mac_group_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_group_vlan_classification (container)
"""
return self.__access_mac_group_vlan_classification
def _set_access_mac_group_vlan_classification(self, v, load=False):
"""
Setter method for access_mac_group_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_group_vlan_classification (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_access_mac_group_vlan_classification is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_access_mac_group_vlan_classification() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=access_mac_group_vlan_classification.access_mac_group_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'mac-group-vlan-classification-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """access_mac_group_vlan_classification must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=access_mac_group_vlan_classification.access_mac_group_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'mac-group-vlan-classification-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
})
    self.__access_mac_group_vlan_classification = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_access_mac_group_vlan_classification(self):
    self.__access_mac_group_vlan_classification = YANGDynClass(base=access_mac_group_vlan_classification.access_mac_group_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'callpoint': u'mac-group-vlan-classification-config-phy'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)

  def _get_trunk(self):
    """
    Getter method for trunk, mapped from YANG variable /interface/tengigabitethernet/switchport/trunk (container)

    YANG Description: The trunking characteristics of this interface.
    """
    return self.__trunk

  def _set_trunk(self, v, load=False):
    """
    Setter method for trunk, mapped from YANG variable /interface/tengigabitethernet/switchport/trunk (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_trunk is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_trunk() directly.

    YANG Description: The trunking characteristics of this interface.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=trunk.trunk, is_container='container', presence=False, yang_name="trunk", rest_name="trunk", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as trunk', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """trunk must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=trunk.trunk, is_container='container', presence=False, yang_name="trunk", rest_name="trunk", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as trunk', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
        })

    self.__trunk = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_trunk(self):
    self.__trunk = YANGDynClass(base=trunk.trunk, is_container='container', presence=False, yang_name="trunk", rest_name="trunk", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set the Layer2 interface as trunk', u'cli-incomplete-no': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
  def _get_private_vlan(self):
    """
    Getter method for private_vlan, mapped from YANG variable /interface/tengigabitethernet/switchport/private_vlan (container)

    YANG Description: Set Private-Vlan Configuration
    """
    return self.__private_vlan

  def _set_private_vlan(self, v, load=False):
    """
    Setter method for private_vlan, mapped from YANG variable /interface/tengigabitethernet/switchport/private_vlan (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_private_vlan is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_private_vlan() directly.

    YANG Description: Set Private-Vlan Configuration
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=private_vlan.private_vlan, is_container='container', presence=False, yang_name="private-vlan", rest_name="private-vlan", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Private-Vlan Configuration'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """private_vlan must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=private_vlan.private_vlan, is_container='container', presence=False, yang_name="private-vlan", rest_name="private-vlan", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Private-Vlan Configuration'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
        })

    self.__private_vlan = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_private_vlan(self):
    self.__private_vlan = YANGDynClass(base=private_vlan.private_vlan, is_container='container', presence=False, yang_name="private-vlan", rest_name="private-vlan", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set Private-Vlan Configuration'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
  def _get_access_mac_rspan_vlan_classification(self):
    """
    Getter method for access_mac_rspan_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_rspan_vlan_classification (container)
    """
    return self.__access_mac_rspan_vlan_classification

  def _set_access_mac_rspan_vlan_classification(self, v, load=False):
    """
    Setter method for access_mac_rspan_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_rspan_vlan_classification (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_access_mac_rspan_vlan_classification is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_access_mac_rspan_vlan_classification() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=access_mac_rspan_vlan_classification.access_mac_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """access_mac_rspan_vlan_classification must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=access_mac_rspan_vlan_classification.access_mac_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
        })

    self.__access_mac_rspan_vlan_classification = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_access_mac_rspan_vlan_classification(self):
    self.__access_mac_rspan_vlan_classification = YANGDynClass(base=access_mac_rspan_vlan_classification.access_mac_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
  def _get_access_mac_group_rspan_vlan_classification(self):
    """
    Getter method for access_mac_group_rspan_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_group_rspan_vlan_classification (container)
    """
    return self.__access_mac_group_rspan_vlan_classification

  def _set_access_mac_group_rspan_vlan_classification(self, v, load=False):
    """
    Setter method for access_mac_group_rspan_vlan_classification, mapped from YANG variable /interface/tengigabitethernet/switchport/access_mac_group_rspan_vlan_classification (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_access_mac_group_rspan_vlan_classification is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_access_mac_group_rspan_vlan_classification() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=access_mac_group_rspan_vlan_classification.access_mac_group_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """access_mac_group_rspan_vlan_classification must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=access_mac_group_rspan_vlan_classification.access_mac_group_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)""",
        })

    self.__access_mac_group_rspan_vlan_classification = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_access_mac_group_rspan_vlan_classification(self):
    self.__access_mac_group_rspan_vlan_classification = YANGDynClass(base=access_mac_group_rspan_vlan_classification.access_mac_group_rspan_vlan_classification, is_container='container', presence=False, yang_name="access-mac-group-rspan-vlan-classification", rest_name="", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='container', is_config=True)

  mode = __builtin__.property(_get_mode, _set_mode)
  port_security = __builtin__.property(_get_port_security, _set_port_security)
  access = __builtin__.property(_get_access, _set_access)
  access_mac_vlan_classification = __builtin__.property(_get_access_mac_vlan_classification, _set_access_mac_vlan_classification)
  trunk_private_vlan_classification = __builtin__.property(_get_trunk_private_vlan_classification, _set_trunk_private_vlan_classification)
  access_mac_group_vlan_classification = __builtin__.property(_get_access_mac_group_vlan_classification, _set_access_mac_group_vlan_classification)
  trunk = __builtin__.property(_get_trunk, _set_trunk)
  private_vlan = __builtin__.property(_get_private_vlan, _set_private_vlan)
  access_mac_rspan_vlan_classification = __builtin__.property(_get_access_mac_rspan_vlan_classification, _set_access_mac_rspan_vlan_classification)
  access_mac_group_rspan_vlan_classification = __builtin__.property(_get_access_mac_group_rspan_vlan_classification, _set_access_mac_group_rspan_vlan_classification)

  _pyangbind_elements = {'mode': mode, 'port_security': port_security, 'access': access, 'access_mac_vlan_classification': access_mac_vlan_classification, 'trunk_private_vlan_classification': trunk_private_vlan_classification, 'access_mac_group_vlan_classification': access_mac_group_vlan_classification, 'trunk': trunk, 'private_vlan': private_vlan, 'access_mac_rspan_vlan_classification': access_mac_rspan_vlan_classification, 'access_mac_group_rspan_vlan_classification': access_mac_group_rspan_vlan_classification, }


# arguments/__init__.py (qychen13/ClusterAlignReID, MIT License)
from .arguments_train import ArgumentsTrainVal
from .arguments_test import ArgumentsTest


# -*- coding: utf-8 -*-
# tests/test_analyzers.py (rwitzel/python-deequ, Apache License 2.0)
import unittest

import pytest
from pyspark.sql import Row

from pydeequ import PyDeequSession
from pydeequ.analyzers import (
    AnalyzerContext,
    ApproxCountDistinct,
    ApproxQuantile,
    ApproxQuantiles,
    Completeness,
    Compliance,
    Correlation,
    CountDistinct,
    DataType,
    Distinctness,
    Entropy,
    Histogram,
    KLLParameters,
    KLLSketch,
    Maximum,
    MaxLength,
    Mean,
    Minimum,
    MinLength,
    MutualInformation,
    PatternMatch,
    Size,
    StandardDeviation,
    Sum,
    Uniqueness,
    UniqueValueRatio,
)
from tests.conftest import setup_pyspark


class TestAnalyzers(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.spark = setup_pyspark().appName("test-analyzers-local").getOrCreate()
        # cls.AnalysisRunner = AnalysisRunner(cls.spark)
        cls.pydeequ_session = PyDeequSession(cls.spark)
        cls.AnalysisRunner = cls.pydeequ_session.createAnalysisRunner()
        cls.sc = cls.spark.sparkContext
        cls.df = cls.sc.parallelize(
            [Row(a="foo", b=1, c=5, d=1), Row(a="bar", b=2, c=6, d=3), Row(a="baz", b=3, c=None, d=1)]
        ).toDF()

    @classmethod
    def tearDownClass(cls):
        cls.spark.sparkContext._gateway.shutdown_callback_server()
        cls.spark.stop()
    def ApproxCountDistinct(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(ApproxCountDistinct(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def ApproxQuantile(self, column, quantile, where=None):
        relativeError: float = 0.01
        result = (
            self.AnalysisRunner.onData(self.df)
            .addAnalyzer(ApproxQuantile(column, quantile, relativeError, where))
            .run()
        )
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def ApproxQuantiles(self, column, quantiles):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(ApproxQuantiles(column, quantiles)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Completeness(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Completeness(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()
    def Compliance(self, instance, predicate, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Compliance(instance, predicate, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Correlation(self, column1, column2, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Correlation(column1, column2, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        AnalyzerContext.successMetricsAsJson(self.spark, result)
        return result_df.select("value").collect()

    def CountDistinct(self, columns):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(CountDistinct(columns)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Datatype(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(DataType(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Distinctness(self, columns, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Distinctness(columns, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()
    def Entropy(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Entropy(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Histogram(self, column, binningUdf=None, maxDetailBins: int = None, where: str = None):
        result = (
            self.AnalysisRunner.onData(self.df).addAnalyzer(Histogram(column, binningUdf, maxDetailBins, where)).run()
        )
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def test_KLLSketch(self):
        result = (
            self.AnalysisRunner.onData(self.df).addAnalyzer(KLLSketch("b", KLLParameters(self.spark, 2, 0.64, 2))).run()
        )
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_df.show()
        return result_df.select("value").collect()

    def Histogram_maxBins(self, column, binningUdf=None, maxDetailBins: int = None, where: str = None):
        result = (
            self.AnalysisRunner.onData(self.df).addAnalyzer(Histogram(column, binningUdf, maxDetailBins, where)).run()
        )
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()
    def Maximum(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Maximum(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def MaxLength(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(MaxLength(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Mean(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Mean(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Minimum(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Minimum(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def MinLength(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(MinLength(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def MutualInformation(self, columns, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(MutualInformation(columns, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def StandardDeviation(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(StandardDeviation(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def Sum(self, column, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Sum(column, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()
    def test_ApproxCountDistinct(self):
        self.assertEqual(self.ApproxCountDistinct("b"), [Row(value=3)])
        self.assertEqual(self.ApproxCountDistinct("c"), [Row(value=2)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_approxCountDistinct(self):
        self.assertEqual(self.ApproxCountDistinct("b"), [Row(value=2)])

    def test_ApproxQuantile(self):
        self.assertEqual(self.ApproxQuantile("b", 0.5), [Row(value=2.0)])
        self.assertEqual(self.ApproxQuantile("c", 0.5), [Row(value=5.0)])
        self.assertEqual(self.ApproxQuantile("b", 0.25), [Row(value=1.0)])

    def Uniqueness(self, columns, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Uniqueness(columns, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    def UniqueValueRatio(self, columns, where=None):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(UniqueValueRatio(columns, where)).run()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        return result_df.select("value").collect()

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_approxQuantiles(self):
        self.assertEqual(self.ApproxQuantiles("b", [0.2, 0.5, 0.73]), [Row(value=1.5), Row(value=2.0), Row(value=3.0)])

    def test_ApproxQuantiles(self):
        self.assertEqual(self.ApproxQuantiles("b", [0.25, 0.5, 0.75]), [Row(value=1.0), Row(value=2.0), Row(value=3.0)])
        self.assertEqual(self.ApproxQuantiles("c", [0.25, 0.5, 0.75]), [Row(value=5.0), Row(value=5.0), Row(value=6.0)])
    def test_Completeness(self):
        self.assertEqual(self.Completeness("b"), [Row(value=1.0)])
        self.assertEqual(self.Completeness("c"), [Row(value=2 / 3)])
        self.assertEqual(self.Completeness("a"), [Row(value=1)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Completeness(self):
        self.assertEqual(self.Completeness("c"), [Row(value=1.0)])

    def test_Compliance(self):
        self.assertEqual(self.Compliance("top b", "b >= 2"), [Row(value=2 / 3)])
        self.assertEqual(self.Compliance("c", "c >= 2"), [Row(value=2 / 3)])
        self.assertEqual(self.Compliance("b, e value", "b >=2 AND d >= 2"), [Row(value=1 / 3)])
        self.assertEqual(self.Compliance("find a", 'a = "foo"'), [Row(value=1 / 3)])

    def test_Correlation(self):
        self.assertEqual(self.Correlation("b", "c"), [Row(value=1.0)])
        self.assertEqual(self.Correlation("b", "d"), [Row(value=0.0)])
        self.assertEqual(self.Correlation("b", "a"), [])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Correlation(self):
        self.assertEqual(self.Correlation("b", "c"), [Row(value=-1.0)])

    def test_CountDistinct(self):
        self.assertEqual(self.CountDistinct("b"), [Row(value=3.0)])
        self.assertEqual(self.CountDistinct(["b", "c"]), [Row(value=3.0)])
        self.assertEqual(self.CountDistinct(["b", "d"]), [Row(value=3.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_CountDistinct(self):
        self.assertEqual(self.CountDistinct("b"), [Row(value=1.0)])

    def test_DataType(self):
        self.assertEqual(
            self.Datatype("b"),
            [
                Row(value=5.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=3.0),
                Row(value=1.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
            ],
        )
        self.assertEqual(
            self.Datatype("c"),
            [
                Row(value=5.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=2.0),
                Row(value=0.6666666666666666),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=0.0),
                Row(value=0.0),
            ],
        )

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Datatype(self):
        self.assertEqual(
            self.Datatype("c"),
            [
                Row(value=3.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=0.0),
                Row(value=2.0),
                Row(value=0.6666666666666666),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=0.0),
                Row(value=0.0),
            ],
        )
    def test_Distinctness(self):
        self.assertEqual(self.Distinctness("b"), [Row(value=1.0)])
        self.assertEqual(self.Distinctness(["b", "c"]), [Row(value=1.0)])
        self.assertEqual(self.Distinctness(["b", "d"]), [Row(value=1.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Distinctness(self):
        self.assertEqual(self.Distinctness("b"), [Row(value=0)])

    def test_Entropy(self):
        self.assertEqual(self.Entropy("b"), [Row(value=1.0986122886681096)])
        self.assertEqual(self.Entropy("a"), [Row(value=1.0986122886681096)])
        self.assertEqual(self.Entropy("c"), [Row(value=0.6931471805599453)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Entropy(self):
        self.assertEqual(self.Entropy("b"), [Row(value=0)])

    def test_Histogram(self):
        self.assertEqual(
            self.Histogram("b"),
            [
                Row(value=3.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )
        self.assertEqual(
            self.Histogram("c"),
            [
                Row(value=3.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Histogram(self):
        self.assertEqual(
            self.Histogram("b"),
            [
                Row(value=2.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )

    def test_Histogram_maxBins(self):
        self.assertEqual(
            self.Histogram_maxBins("b", maxDetailBins=2),
            [
                Row(value=3.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )
        self.assertEqual(
            self.Histogram_maxBins("c", maxDetailBins=2),
            [
                Row(value=3.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Histogram_maxBins(self):
        self.assertEqual(
            self.Histogram_maxBins("b"),
            [
                Row(value=2.0),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
                Row(value=1.0),
                Row(value=0.3333333333333333),
            ],
        )
    def test_Maximum(self):
        self.assertEqual(self.Maximum("b"), [Row(value=3.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Maximum(self):
        self.assertEqual(self.Maximum("c"), [Row(value=3.0)])

    def test_MaxLength(self):
        self.assertEqual(self.MaxLength("a"), [Row(value=3.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_MaxLength(self):
        self.assertEqual(self.MaxLength("b"), [Row(value=3.0)])

    def test_Mean(self):
        self.assertEqual(self.Mean("b"), [Row(value=2.0)])
        self.assertEqual(self.Mean("c"), [Row(value=11 / 2)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Mean(self):
        self.assertEqual(self.Mean("b"), [Row(value=3.0)])

    def test_Minimum(self):
        self.assertEqual(self.Minimum("b"), [Row(value=1.0)])
        self.assertEqual(self.Minimum("c"), [Row(value=5.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Minimum(self):
        self.assertEqual(self.Minimum("a"), [Row(value=3.0)])
        self.assertEqual(self.Minimum("b"), [Row(value=3.0)])

    def test_MinLength(self):
        self.assertEqual(self.MinLength("a"), [Row(value=3.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_MinLength(self):
        self.assertEqual(self.MinLength("a"), [])

    def test_MutualInformation(self):
        self.assertEqual(self.MutualInformation(["b", "c"]), [Row(value=0.7324081924454064)])
        self.assertEqual(self.MutualInformation(["b", "d"]), [Row(value=0.6365141682948128)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_MutualInformation(self):
        self.assertEqual(self.MutualInformation(["b", "d"]), [])

    # TODO: Revisit when PatternMatch class is sorted out
    def test_PatternMatch(self):
        result = (
            self.AnalysisRunner.onData(self.df).addAnalyzer(PatternMatch(column="a", pattern_regex="ba(r|z)")).run()
        )
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_json = AnalyzerContext.successMetricsAsJson(self.spark, result)
        df_from_json = self.spark.read.json(self.sc.parallelize([result_json]))
        self.assertEqual(df_from_json.select("value").collect(), result_df.select("value").collect())
        self.assertEqual(result_df.select("value").collect(), [Row(value=0.0)])

    def test_Size(self):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Size()).run()
        # result_df = result.select('value').collect()
        result_df = AnalyzerContext.successMetricsAsDataFrame(self.spark, result)
        result_df_row = result_df.select("value").collect()
        self.assertEqual(result_df_row, [Row(value=3.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Size(self):
        result = self.AnalysisRunner.onData(self.df).addAnalyzer(Size()).run()
        result_df = result.select("value").collect()
        self.assertEqual(result_df, [Row(value=4.0)])

    def test_StandardDeviation(self):
        self.assertEqual(self.StandardDeviation("b"), [Row(value=0.816496580927726)])
        self.assertEqual(self.StandardDeviation("c"), [Row(value=0.5)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_StandardDeviation(self):
        self.assertEqual(self.StandardDeviation("c"), [Row(value=0.8)])

    def test_Sum(self):
        self.assertEqual(self.Sum("b"), [Row(value=6.0)])
        self.assertEqual(self.Sum("c"), [Row(value=11.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Sum(self):
        self.assertEqual(self.Sum("b"), [Row(value=3.0)])

    def test_Uniqueness(self):
        self.assertEqual(self.Uniqueness(["b", "c"]), [Row(value=1.0)])
        self.assertEqual(self.Uniqueness(["b", "d"]), [Row(value=1.0)])
        self.assertEqual(self.Uniqueness(["a", "a"]), [Row(value=1.0)])

    @pytest.mark.xfail(reason="@unittest.expectedFailure")
    def test_fail_Uniqueness(self):
        self.assertEqual(self.Uniqueness(["a", "a"]), [])

    def test_UniqueValueRatio(self):
self.assertEqual(self.UniqueValueRatio(["b", "d"]), [Row(value=1.0)])
self.assertEqual(self.UniqueValueRatio(["b"]), [Row(value=1.0)])
@pytest.mark.xfail(reason="@unittest.expectedFailure")
def test_fail_UniqueValueRatio(self):
self.assertEqual(self.UniqueValueRatio(["a", "a"]), [])
if __name__ == "__main__":
unittest.main()
# recon/__init__.py (sdnjyxr/deepcassi, Unlicense)
import recon.model
# tests/python/relay/test_op_qnn_conv2d.py (optima2005/incubator-tvm, Apache-2.0)
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import tvm
import numpy as np
from tvm import relay
from tvm.relay import transform
from tvm.relay.testing import run_infer_type
from tvm.contrib import graph_runtime
from tvm.relay.testing.temp_op_attr import TempOpAttr
# We use the llvm target for testing functionality. `llvm` points to an
# older-generation Intel machine that legalizes to a simple lowering. The
# legalization is therefore overridden so that it can be skipped, and the
# QNNCanonicalizeOps lowering is used for the testing.
def legalize_qnn_conv2d(attrs, inputs, types):
return None
def get_ref_func(data,
kernel,
input_zero_point,
kernel_zero_point,
input_scale,
kernel_scale,
kernel_size,
padding,
strides,
dilation,
data_layout,
kernel_layout,
out_dtype,
groups,
channels=None):
casted_data = relay.op.cast(data, "int32")
casted_kernel = relay.op.cast(kernel, "int32")
shifted_data = relay.op.subtract(casted_data,
relay.const(input_zero_point, "int32"))
shifted_kernel = relay.op.subtract(casted_kernel,
relay.const(kernel_zero_point, "int32"))
func = relay.op.nn.conv2d(shifted_data,
shifted_kernel,
padding=padding,
strides=strides,
dilation=dilation,
groups=groups,
channels=channels,
kernel_size=kernel_size,
out_dtype=out_dtype,
data_layout=data_layout,
kernel_layout=kernel_layout)
func = relay.Function(relay.analysis.free_vars(func), func)
return func
def get_qnn_func(data,
kernel,
input_zero_point,
kernel_zero_point,
input_scale,
kernel_scale,
kernel_size,
padding,
strides,
dilation,
data_layout,
kernel_layout,
out_dtype,
channels,
groups):
func = relay.qnn.op.conv2d(
data, kernel,
input_zero_point=relay.const(input_zero_point, 'int32'),
kernel_zero_point=relay.const(kernel_zero_point, 'int32'),
input_scale=relay.const(input_scale, 'float32'),
kernel_scale=relay.const(kernel_scale, 'float32'),
kernel_size=kernel_size,
strides=strides,
dilation=dilation,
padding=padding,
out_dtype=out_dtype,
groups=groups,
channels=channels,
data_layout=data_layout,
kernel_layout=kernel_layout)
mod = relay.Function(relay.analysis.free_vars(func), func)
mod = tvm.IRModule.from_expr(mod)
return mod
def get_funcs(data_shape,
data_dtype,
kernel_shape,
kernel_dtype,
input_zero_point,
kernel_zero_point,
input_scale,
kernel_scale,
kernel_size,
padding,
strides,
dilation,
data_layout,
kernel_layout,
out_dtype,
groups=1,
channels=None):
data = relay.var("data", shape=data_shape,
dtype=data_dtype)
kernel = relay.var("kernel", shape=kernel_shape,
dtype=kernel_dtype)
ref_func = get_ref_func(data,
kernel,
input_zero_point,
kernel_zero_point,
input_scale,
kernel_scale,
kernel_size,
padding,
strides,
dilation,
data_layout,
kernel_layout,
out_dtype,
groups,
channels)
ref_func = run_infer_type(ref_func)
ref_func = tvm.IRModule.from_expr(ref_func)
qnn_func = get_qnn_func(data,
kernel,
input_zero_point,
kernel_zero_point,
input_scale,
kernel_scale,
kernel_size,
padding,
strides,
dilation,
data_layout,
kernel_layout,
out_dtype,
channels,
groups)
return (ref_func, qnn_func)
def verify(ref_func, qnn_func, data_shape, data_dtype, kernel_shape,
kernel_dtype):
def get_inputs(data_shape, data_dtype, kernel_shape, kernel_dtype):
# Keeping inputs multiple of 4 because of a bug in Average Pool2d
# https://discuss.tvm.ai/t/pool2d-gives-bad-output-for-integer-inputs/3377
low = -128
high = 127
if data_dtype == "uint8":
low = 0
high = 255
golden_data = np.random.randint(low=low, high=high,
size=data_shape).astype(data_dtype)
low = -128
high = 127
if kernel_dtype == "uint8":
low = 0
high = 255
golden_weight = np.random.randint(low=low, high=high,
size=kernel_shape).astype(kernel_dtype)
return (golden_data, golden_weight)
def get_output(func, golden_inputs):
with relay.build_config(opt_level=2):
golden_data, golden_weight = golden_inputs
params = {'kernel': golden_weight}
graph, lib, params = relay.build(func, "llvm", params=params)
mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
mod.set_input("data", golden_data)
mod.set_input(**params)
mod.run()
res = mod.get_output(0).asnumpy()
return res
golden_inputs = get_inputs(data_shape, data_dtype,
kernel_shape, kernel_dtype)
golden_output = get_output(ref_func, golden_inputs)
qnn_output = get_output(qnn_func, golden_inputs)
np.testing.assert_equal(qnn_output, golden_output)
def test_no_zero_point():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 1, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 1, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=0,
kernel_zero_point=0,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# int8 input
data_shape = (2, 1, 2, 4)
data_dtype = 'int8'
kernel_shape = (3, 1, 2, 2)
kernel_dtype = 'int8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=0,
kernel_zero_point=0,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_kernel_zero_point():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=0,
kernel_zero_point=1,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# int8 input
data_shape = (2, 1, 2, 4)
data_dtype = 'int8'
kernel_shape = (3, 1, 2, 2)
kernel_dtype = 'int8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=0,
kernel_zero_point=5,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_input_zero_point():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=0,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# int8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'int8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'int8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=0,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_both_zero_point():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# int8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'int8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'int8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_layout():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 2, 4, 4) # NHWC
data_dtype = 'uint8'
kernel_shape = (2, 2, 4, 3) # HWIO
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NHWC",
kernel_layout="HWIO",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# NHWC and HWOI layout. Used in depthwise conv.
data_shape = (2, 2, 4, 3) # NHWC
data_dtype = 'uint8'
kernel_shape = (2, 2, 3, 1) # HWOI
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
groups=3,
data_layout="NHWC",
kernel_layout="HWOI",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_padding():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (1, 4, 2, 2)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=8,
kernel_zero_point=5,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(1, 1),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# Try different layout
data_shape = (2, 2, 4, 4) # NHWC
data_dtype = 'uint8'
kernel_shape = (2, 2, 4, 3) # HWIO
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=8,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(1, 1),
strides=(1, 1),
dilation=(1, 1),
data_layout="NHWC",
kernel_layout="HWIO",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_dilation():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# Non-zero kernel point - fall back to simpler lowering.
data_shape = (2, 4, 4, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(2, 2),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# Zero kernel point
data_shape = (2, 4, 4, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=0,
kernel_zero_point=0,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 1),
dilation=(2, 2),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_const_folding():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
data_shape = (2, 4, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 2, 2)
kernel_dtype = 'uint8'
golden_weight = np.random.randint(low=0, high=255,
size=kernel_shape).astype(kernel_dtype)
data = relay.var("data", shape=data_shape,
dtype=data_dtype)
kernel = relay.const(golden_weight)
qnn_func = get_qnn_func(data,
kernel,
input_zero_point=8,
kernel_zero_point=3,
kernel_size=(2, 2),
input_scale=1.0,
kernel_scale=1.0,
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32",
channels=kernel_shape[0],
groups=1)
folded_mod = transform.FoldConstant()(qnn_func)
folded_func = folded_mod["main"]
assert "reshape" not in folded_func.astext()
def test_kernel_size_1x1():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 4, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 4, 1, 1)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(1, 1),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
assert 'avg_pool2d' not in qnn_func.astext()
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_tflite_large_irregular():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (1, 1024, 1, 1)
data_dtype = 'uint8'
kernel_shape = (1001, 1024, 1, 1)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=127,
kernel_zero_point=127,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(1, 1),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
golden_data = np.full(data_shape, 127).astype('uint8')
golden_weight = np.full(kernel_shape, 127).astype('uint8')
with relay.build_config(opt_level=2):
params = {'kernel': golden_weight}
graph, lib, params = relay.build(qnn_func, "llvm", params=params)
mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
mod.set_input("data", golden_data)
mod.set_input(**params)
mod.run()
qnn_output = mod.get_output(0).asnumpy()
golden_output = np.full((1, 1001, 1, 1), 0).astype('uint8')
np.testing.assert_equal(qnn_output, golden_output)
def test_tflite_output_multiplier_greater_than_one():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (2, 1, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 1, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_scale=1.0,
kernel_scale=1.0,
input_zero_point=128,
kernel_zero_point=128,
kernel_size=(2, 2),
padding=(0, 0),
strides=(2, 2),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
golden_data = 128 + np.array((1, 1, 1, 1,
2, 2, 2, 2,
1, 2, 3, 4,
1, 2, 3, 4)).reshape(data_shape).astype('uint8')
golden_weight = 128 + np.array((1, 2, 3, 4,
-1, 1, -1, 1,
-1, -1, 1, 1)).reshape(kernel_shape)
golden_weight = golden_weight.astype('uint8')
with relay.build_config(opt_level=2):
params = {'kernel': golden_weight}
graph, lib, params = relay.build(qnn_func, "llvm", params=params)
mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
mod.set_input("data", golden_data)
mod.set_input(**params)
mod.run()
qnn_output = mod.get_output(0).asnumpy()
golden_output = np.array((17, 17,
0, 0,
2, 2,
16, 36,
2, 2,
0, 0)).reshape(2, 3, 1, 2)
np.testing.assert_equal(qnn_output, golden_output)
def test_tflite_anistropic_strides():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input
data_shape = (1, 1, 3, 6)
data_dtype = 'uint8'
kernel_shape = (1, 1, 2, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=127,
kernel_zero_point=127,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(2, 2),
padding=(0, 0),
strides=(1, 3),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
golden_data = np.array((133, 131, 129, 125, 123, 121,
135, 133, 131, 123, 121, 119,
137, 135, 133, 121, 119, 117)).reshape(data_shape)
golden_data = golden_data.astype('uint8')
golden_weight = np.array((129, 131, 133, 135)).reshape(kernel_shape)
golden_weight = golden_weight.astype('uint8')
with relay.build_config(opt_level=2):
params = {'kernel': golden_weight}
graph, lib, params = relay.build(qnn_func, "llvm", params=params)
mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
mod.set_input("data", golden_data)
mod.set_input(**params)
mod.run()
qnn_output = mod.get_output(0).asnumpy()
golden_output = np.array((124, -92, 164, -132)).reshape(1, 1, 2, 2)
np.testing.assert_equal(qnn_output, golden_output)
def test_broadcast_layout():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# Test broadcast support for NHWC layout.
data_shape = (1, 229, 229, 3) # NHWC
data_dtype = 'uint8'
kernel_shape = (7, 7, 3, 64) # HWIO
kernel_dtype = 'int8'
_, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=8,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(7, 7),
padding=(1, 1),
strides=(1, 1),
dilation=(1, 1),
data_layout="NHWC",
kernel_layout="HWIO",
out_dtype="int32")
func = qnn_func['main'].body
bias = relay.var("bias", shape=(64,), dtype="int32")
bias2 = relay.var("bias2", shape=(1, 225, 225, 1), dtype="int32")
# Check broadcast support on both lhs and rhs
func = relay.add(func, bias2)
func = relay.add(bias2, func)
func = relay.add(bias, func)
func = relay.add(func, bias)
func = relay.Function(relay.analysis.free_vars(func), func)
mod = tvm.IRModule.from_expr(func)
with relay.build_config(opt_level=3):
graph, lib, params = relay.build(mod, "llvm -mcpu=skylake-avx512")
def test_depthwise_depth_multiplier():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
# uint8 input, NCHW and OIHW
# Depthwise multiplier = 1
data_shape = (2, 4, 16, 16)
data_dtype = 'uint8'
kernel_shape = (4, 1, 3, 3)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(3, 3),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32",
groups=4)
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# Depthwise multiplier = 2
data_shape = (10, 4, 16, 16)
data_dtype = 'uint8'
kernel_shape = (4, 2, 3, 3)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(3, 3),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32",
groups=4,
channels=8)
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# uint8 input, NHWC and HWOI
# Depthwise multiplier = 1
data_shape = (2, 16, 16, 4)
data_dtype = 'uint8'
kernel_shape = (3, 3, 4, 1)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(3, 3),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NHWC",
kernel_layout="HWOI",
out_dtype="int32",
groups=4)
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
# Depthwise multiplier = 2
data_shape = (2, 16, 16, 4)
data_dtype = 'uint8'
kernel_shape = (3, 3, 4, 2)
kernel_dtype = 'uint8'
ref_func, qnn_func = get_funcs(data_shape=data_shape,
data_dtype=data_dtype,
kernel_shape=kernel_shape,
kernel_dtype=kernel_dtype,
input_zero_point=5,
kernel_zero_point=3,
input_scale=1.0,
kernel_scale=1.0,
kernel_size=(3, 3),
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NHWC",
kernel_layout="HWOI",
out_dtype="int32",
groups=4,
channels=8)
verify(ref_func, qnn_func, data_shape, data_dtype,
kernel_shape, kernel_dtype)
def test_per_channel_kernel_scale():
with TempOpAttr("qnn.conv2d", "FTVMQnnLegalize", legalize_qnn_conv2d):
data_shape = (2, 1, 2, 4)
data_dtype = 'uint8'
kernel_shape = (3, 1, 2, 2)
kernel_dtype = 'uint8'
data = relay.var("data", shape=data_shape,
dtype=data_dtype)
kernel = relay.var("kernel", shape=kernel_shape,
dtype=kernel_dtype)
kernel_scales = [2, 2, 2]
kernel_scales = relay.const(np.array(kernel_scales).astype('float32'))
func = relay.qnn.op.conv2d(
data, kernel,
input_zero_point=relay.const(0, 'int32'),
kernel_zero_point=relay.const(0, 'int32'),
input_scale=relay.const(2.0, 'float32'),
kernel_scale=kernel_scales,
kernel_size=(2, 2),
channels=kernel_shape[0],
padding=(0, 0),
strides=(1, 1),
dilation=(1, 1),
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32")
mod = relay.Function(relay.analysis.free_vars(func), func)
mod = tvm.IRModule.from_expr(mod)
if __name__ == "__main__":
test_no_zero_point()
test_input_zero_point()
test_kernel_zero_point()
test_both_zero_point()
test_layout()
test_padding()
test_dilation()
test_const_folding()
test_kernel_size_1x1()
test_tflite_large_irregular()
test_broadcast_layout()
test_tflite_output_multiplier_greater_than_one()
test_tflite_anistropic_strides()
test_depthwise_depth_multiplier()
test_per_channel_kernel_scale()
#!/usr/bin/env python
# test/projectfile/parser/test_api.py (tiborsimon/p, MIT)
# -*- coding: utf-8 -*-
from unittest import TestCase
try:
import mock
except ImportError:
from unittest import mock
try:
import __builtin__
builtin_module = '__builtin__'
except ImportError:
builtin_module = 'builtins'
from test.helpers import *
from projects.projectfile import error
from projects.projectfile.parser import state
from projects.projectfile import parser
class LinesProcessing(TestCase):
def test__single_command_no_dependencies(self):
lines = [
'from v1.2.3',
'',
'command:',
' echo "hello"'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'pre': ['echo "hello"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_no_dependencies_more_commands(self):
lines = [
'from v1.2.3',
'',
'command:',
' echo "hello"',
' cd ~',
' make html'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'pre': ['echo "hello"', 'cd ~', 'make html']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_with_dependencies(self):
lines = [
'from v1.2.3',
'',
'command: [a, b]',
' echo "hello"'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'dependencies': ['a', 'b'],
'pre': ['echo "hello"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__more_commands_with_no_dependencies(self):
lines = [
'from v1.2.3',
'',
'command:',
' echo "hello"',
'command2:',
' echo "vmi"'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'pre': ['echo "hello"']
},
'command2': {
'pre': ['echo "vmi"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_with_only_post(self):
lines = [
'from v1.2.3',
'',
'command: [a, b]',
' ===',
' echo "hello"'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'dependencies': ['a', 'b'],
'post': ['echo "hello"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_with_pre_and_post(self):
lines = [
'from v1.2.3',
'',
'command: [a, b]',
' echo "pre"',
' ===',
' echo "post"'
]
expected = {
'min-version': (1, 2, 3),
'commands': {
'command': {
'dependencies': ['a', 'b'],
'post': ['echo "post"'],
'pre': ['echo "pre"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_with_variable(self):
lines = [
'from v1.2.3',
'',
'a = 42',
'command: [a, b]',
' echo "pre"'
]
expected = {
'min-version': (1, 2, 3),
'variables': {'a': '42'},
'commands': {
'command': {
'dependencies': ['a', 'b'],
'pre': ['echo "pre"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)
def test__single_command_with_variables(self):
lines = [
'from v1.2.3',
'',
'a = 42',
'b = 54',
'command: [a, b]',
' echo "pre"'
]
expected = {
'min-version': (1, 2, 3),
'variables': {
'a': '42',
'b': '54'
},
'commands': {
'command': {
'dependencies': ['a', 'b'],
'pre': ['echo "pre"']
}
}
}
result = parser._parse_lines(lines)
self.assertEqual(expected, result)

    def test__main_comment(self):
        lines = [
            'from v1.2.3',
            '',
            '"""',
            'This is the main description',
            '"""',
            'command:',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is the main description',
            'commands': {
                'command': {
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__main_comment_indentation_gets_ignored(self):
        lines = [
            'from v1.2.3',
            '',
            ' """',
            ' This is the main description',
            ' """',
            'command:',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is the main description',
            'commands': {
                'command': {
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__main_comment__inserting_line_break(self):
        lines = [
            'from v1.2.3',
            '',
            '"""',
            'This is the main description',
            '',
            'after break',
            '"""',
            'command:',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is the main description\n\nafter break',
            'commands': {
                'command': {
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__main_comment__appending_lines(self):
        lines = [
            'from v1.2.3',
            '',
            '"""',
            'This is the main description',
            'after break',
            '"""',
            'command:',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is the main description after break',
            'commands': {
                'command': {
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__command_comment(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' """',
            ' This is the command description',
            ' """',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'commands': {
                'command': {
                    'description': 'This is the command description',
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__command_comment_indentation_gets_ignored_1(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            '"""',
            'This is the command description',
            '"""',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'commands': {
                'command': {
                    'description': 'This is the command description',
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__command_comment_indentation_gets_ignored_2(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' """',
            ' This is the command description',
            ' """',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'commands': {
                'command': {
                    'description': 'This is the command description',
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__command_comment__inserting_line_break(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' """',
            ' This is the command description',
            ' ',
            ' vmi',
            ' """',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'commands': {
                'command': {
                    'description': 'This is the command description\n\nvmi',
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__command_comment__lines_appended_nicely(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' """',
            ' This is the command description',
            ' vmi',
            ' """',
            ' echo "pre"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'commands': {
                'command': {
                    'description': 'This is the command description vmi',
                    'pre': ['echo "pre"']
                }
            }
        }
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__full_parsing(self):
        lines = [
            'from v1.2.3',
            '"""',
            'This is a test..',
            '"""',
            'a = 42',
            'b = 45',
            '',
            'command|com|c:',
            ' """',
            ' This is the command description.',
            ' vmi',
            ' """',
            ' echo "pre"',
            ' ===',
            ' echo "post"',
            '',
            'other_command|oth|oo|o: [command]',
            ' """',
            ' Another command..',
            ' """',
            ' echo "other"',
            ' echo "something"',
            ' ===',
            ' echo "post2"'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is a test..',
            'variables': {
                'a': '42',
                'b': '45'
            },
            'commands': {
                'command': {
                    'alternatives': ['com', 'c'],
                    'description': 'This is the command description. vmi',
                    'pre': ['echo "pre"'],
                    'post': ['echo "post"']
                },
                'com': {
                    'alias': 'command'
                },
                'c': {
                    'alias': 'command'
                },
                'other_command': {
                    'alternatives': ['oth', 'oo', 'o'],
                    'dependencies': ['command'],
                    'description': 'Another command..',
                    'pre': ['echo "other"', 'echo "something"'],
                    'post': ['echo "post2"']
                },
                'oth': {
                    'alias': 'other_command'
                },
                'oo': {
                    'alias': 'other_command'
                },
                'o': {
                    'alias': 'other_command'
                }
            }
        }
        self.maxDiff = None
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__full_parsing_with_comments_1(self):
        lines = [
            'from v1.2.3#comment',
            '"""#comment',
            'This is a test..#comment',
            '"""#comment',
            'a = 42#comment',
            'b = 45#comment',
            '#comment',
            'command|com|c:#comment',
            ' """#comment',
            ' This is the command description.#comment',
            ' vmi#comment',
            ' """#comment',
            ' echo "pre"#comment',
            ' ===#comment',
            ' echo "post"#comment',
            '#comment',
            'other_command|oth|oo|o: [command]#comment',
            ' """#comment',
            ' Another command..#comment',
            ' """#comment',
            ' echo "other"#comment',
            ' echo "something"#comment',
            ' ===#comment',
            ' echo "post2"#comment',
            '#comment'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is a test..#comment',
            'variables': {
                'a': '42',
                'b': '45'
            },
            'commands': {
                'command': {
                    'alternatives': ['com', 'c'],
                    'description': 'This is the command description.#comment vmi#comment',
                    'pre': ['echo "pre"'],
                    'post': ['echo "post"']
                },
                'com': {
                    'alias': 'command'
                },
                'c': {
                    'alias': 'command'
                },
                'other_command': {
                    'alternatives': ['oth', 'oo', 'o'],
                    'dependencies': ['command'],
                    'description': 'Another command..#comment',
                    'pre': ['echo "other"', 'echo "something"'],
                    'post': ['echo "post2"']
                },
                'oth': {
                    'alias': 'other_command'
                },
                'oo': {
                    'alias': 'other_command'
                },
                'o': {
                    'alias': 'other_command'
                }
            }
        }
        self.maxDiff = None
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)

    def test__full_parsing_with_comments_2(self):
        lines = [
            'from v1.2.3 #comment',
            '""" #comment',
            'This is a test.. #comment',
            '""" #comment',
            'a = 42 #comment',
            'b = 45 #comment',
            ' #comment',
            'command|com|c: #comment',
            ' """ #comment',
            ' This is the command description. #comment',
            ' vmi #comment',
            ' """ #comment',
            ' echo "pre" #comment',
            ' === #comment',
            ' echo "post" #comment',
            ' #comment',
            'other_command|oth|oo|o: [command] #comment',
            ' """ #comment',
            ' Another command.. #comment',
            ' """ #comment',
            ' echo "other" #comment',
            ' echo "something" #comment',
            ' === #comment',
            ' echo "post2" #comment',
            ' #comment'
        ]
        expected = {
            'min-version': (1, 2, 3),
            'description': 'This is a test.. #comment',
            'variables': {
                'a': '42',
                'b': '45'
            },
            'commands': {
                'command': {
                    'alternatives': ['com', 'c'],
                    'description': 'This is the command description. #comment vmi #comment',
                    'pre': ['echo "pre"'],
                    'post': ['echo "post"']
                },
                'com': {
                    'alias': 'command'
                },
                'c': {
                    'alias': 'command'
                },
                'other_command': {
                    'alternatives': ['oth', 'oo', 'o'],
                    'dependencies': ['command'],
                    'description': 'Another command.. #comment',
                    'pre': ['echo "other"', 'echo "something"'],
                    'post': ['echo "post2"']
                },
                'oth': {
                    'alias': 'other_command'
                },
                'oo': {
                    'alias': 'other_command'
                },
                'o': {
                    'alias': 'other_command'
                }
            }
        }
        self.maxDiff = None
        result = parser._parse_lines(lines)
        self.assertEqual(expected, result)


class LineProcessingExceptionWrapping(TestCase):

    @mock.patch.object(state, 'start')
    def test__line_numbers_prepended_to_exception_message(self, mock_state):
        error_message = 'Some error'
        mock_state.side_effect = SyntaxError(error_message)
        expected_error = {
            'line': 1,
            'error': error_message
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines([''])
        assert_exception(self, cm, error.ProjectfileError, expected_error)


class ParserErrorCases(TestCase):

    def test__unexpected_comment_delimiter_1(self):
        lines = [
            'from v1.2.3',
            '',
            'a = 42',
            'b = 54',
            '"""'
        ]
        expected_error = {
            'line': 5,
            'error': error.COMMENT_DELIMITER_UNEXPECTED_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__unexpected_comment_delimiter_2(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' cat file',
            ' """'
        ]
        expected_error = {
            'line': 5,
            'error': error.COMMENT_DELIMITER_UNEXPECTED_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__unexpected_comment_delimiter_3(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' cat file',
            ' ===',
            ' cat file',
            ' """'
        ]
        expected_error = {
            'line': 7,
            'error': error.COMMENT_DELIMITER_UNEXPECTED_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__unexpected_command_delimiter(self):
        lines = [
            'from v1.2.3',
            '',
            'command:',
            ' cat file',
            ' ===',
            ' cat file',
            ' ==='
        ]
        expected_error = {
            'line': 7,
            'error': error.COMMAND_DELIMITER_UNEXPECTED_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__version_indentation_error(self):
        lines = [
            ' from v1.2.3'
        ]
        expected_error = {
            'line': 1,
            'error': error.VERSION_INDENTATION_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__invalid_version_format_error(self):
        lines = [
            'from v12.3'
        ]
        expected_error = {
            'line': 1,
            'error': error.VERSION_FORMAT_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__version_missing_error(self):
        lines = [
            'variable = 4'
        ]
        expected_error = {
            'line': 1,
            'error': error.VERSION_MISSING_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__variable_indentation_error(self):
        lines = [
            'from v1.2.3',
            ' variable = 4'
        ]
        expected_error = {
            'line': 2,
            'error': error.VARIABLE_INDENTATION_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__variable_quote_before_error(self):
        lines = [
            'from v1.2.3',
            'variable = 4"'
        ]
        expected_error = {
            'line': 2,
            'error': error.VARIABLE_QUOTE_BEFORE_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__variable_quote_after_error(self):
        lines = [
            'from v1.2.3',
            'variable = "4'
        ]
        expected_error = {
            'line': 2,
            'error': error.VARIABLE_QUOTE_AFTER_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test__variable_wrong_comment_placement(self):
        lines = [
            'from v1.2.3',
            'variable = #4'
        ]
        expected_error = {
            'line': 2,
            'error': error.VARIABLE_SYNTAX_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_header_indentation_error(self):
        lines = [
            'from v1.2.3',
            ' command:'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_INDENTATION_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_missing_colon_error(self):
        lines = [
            'from v1.2.3',
            'command'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_MISSING_COLON_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_invalid_colon_error(self):
        lines = [
            'from v1.2.3',
            'command:vmi'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_COLON_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_invalid_alternative_error(self):
        lines = [
            'from v1.2.3',
            'command|ffd|:'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_INVALID_ALTERNATIVE
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_empty_dependency_list_error(self):
        lines = [
            'from v1.2.3',
            'command: []'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_EMPTY_DEPENDENCY_LIST
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_invalid_dependency_list_error(self):
        lines = [
            'from v1.2.3',
            'command: [vmi,]'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_INVALID_DEPENDENCY_LIST
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_syntax_error(self):
        lines = [
            'from v1.2.3',
            'command: : [vmi,]'
        ]
        expected_error = {
            'line': 2,
            'error': error.COMMAND_HEADER_SYNTAX_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)

    def test_command_unexpected_unindented_line_error(self):
        lines = [
            'from v1.2.3',
            'command:',
            'vmi'
        ]
        expected_error = {
            'line': 3,
            'error': error.COMMAND_HEADER_UNEXPECTED_UNINDENTED_ERROR
        }
        with self.assertRaises(Exception) as cm:
            parser._parse_lines(lines)
        assert_exception(self, cm, error.ProjectfileError, expected_error)
| 30.765306 | 95 | 0.415736 | 2,126 | 27,135 | 5.10254 | 0.067262 | 0.010509 | 0.058997 | 0.075498 | 0.900719 | 0.894727 | 0.886707 | 0.882651 | 0.859698 | 0.838219 | 0 | 0.017397 | 0.459812 | 27,135 | 881 | 96 | 30.800227 | 0.722677 | 0.001548 | 0 | 0.687124 | 0 | 0 | 0.206601 | 0.003472 | 0 | 0 | 0 | 0 | 0.072202 | 1 | 0.048135 | false | 0 | 0.012034 | 0 | 0.063779 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c470a49de9d387d8b037e3f4380dafd7305f1b0 | 97 | py | Python | applications/gestiune/controllers/loguri.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | applications/gestiune/controllers/loguri.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | applications/gestiune/controllers/loguri.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf8
# try something like
def index():
return dict(message="hello from loguri.py")
| 19.4 | 47 | 0.701031 | 14 | 97 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.175258 | 97 | 4 | 48 | 24.25 | 0.8375 | 0.319588 | 0 | 0 | 0 | 0 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1c723a891f680f3c9bef907da5085cf7f8c8a562 | 135 | py | Python | pulotu/tests/test_functional.py | blurks/pulotu | 621460f3d4dbe05367ed4814b95d192df348cb72 | [
"Apache-2.0"
] | null | null | null | pulotu/tests/test_functional.py | blurks/pulotu | 621460f3d4dbe05367ed4814b95d192df348cb72 | [
"Apache-2.0"
] | 1 | 2021-11-19T16:50:11.000Z | 2021-11-19T16:55:17.000Z | pulotu/tests/test_functional.py | blurks/pulotu | 621460f3d4dbe05367ed4814b95d192df348cb72 | [
"Apache-2.0"
] | 1 | 2021-11-22T13:28:14.000Z | 2021-11-22T13:28:14.000Z | def test_home(app):
    app.get_html('/', status=200)
    app.get_html('/about', status=200)
    app.get_html('/glossary', status=200)
| 27 | 41 | 0.651852 | 21 | 135 | 4 | 0.47619 | 0.214286 | 0.357143 | 0.357143 | 0.452381 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078261 | 0.148148 | 135 | 4 | 42 | 33.75 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0.118519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c74537452a88536d1c522714805610fb28ad0635 | 95 | py | Python | automl_workflow/data_augmentor.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | 3 | 2020-12-15T02:40:43.000Z | 2021-01-14T02:32:13.000Z | automl_workflow/data_augmentor.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | null | null | null | automl_workflow/data_augmentor.py | zichuan-scott-xu/automl-workflow | d108e55da943775953b9f1801311a86ac07e58a0 | [
"Apache-2.0"
] | 4 | 2021-01-07T05:41:38.000Z | 2021-04-07T08:02:22.000Z | from automl_workflow.api import DataAugmentor
class MyDataAugmentor(DataAugmentor):
    pass | 15.833333 | 45 | 0.821053 | 10 | 95 | 7.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136842 | 95 | 6 | 46 | 15.833333 | 0.939024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6
c74ad6cd49c8e96ea19da9a93fdf26b767d0a7fb | 44 | py | Python | workon/templatetags/workon_google.py | dalou/django-workon | ef63c0a81c00ef560ed693e435cf3825f5170126 | [
"BSD-3-Clause"
] | null | null | null | workon/templatetags/workon_google.py | dalou/django-workon | ef63c0a81c00ef560ed693e435cf3825f5170126 | [
"BSD-3-Clause"
] | null | null | null | workon/templatetags/workon_google.py | dalou/django-workon | ef63c0a81c00ef560ed693e435cf3825f5170126 | [
"BSD-3-Clause"
] | null | null | null |
from ..contrib.google.templatetags import * | 22 | 43 | 0.795455 | 5 | 44 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 2 | 43 | 22 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7767e42e4cb67f83fc3434befe27155672b3c23 | 22 | py | Python | elliot/evaluation/metrics/rating/rmse/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 175 | 2021-03-04T15:46:25.000Z | 2022-03-31T05:56:58.000Z | elliot/evaluation/metrics/rating/rmse/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 15 | 2021-03-06T17:53:56.000Z | 2022-03-24T17:02:07.000Z | elliot/evaluation/metrics/rating/rmse/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 39 | 2021-03-04T15:46:26.000Z | 2022-03-09T15:37:12.000Z | from .rmse import RMSE | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7deebcfc027d4463c4bdcba6da483001d88cdf9 | 139 | py | Python | cd/modules/voice/cogs/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | 1 | 2022-03-20T00:53:35.000Z | 2022-03-20T00:53:35.000Z | cd/modules/voice/cogs/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | 1 | 2022-03-23T18:38:52.000Z | 2022-03-23T22:24:53.000Z | cd/modules/voice/cogs/__init__.py | Axelware/cd-bot | d9b704d50b86ea25238242ae67c93e447b24636e | [
"MIT"
] | null | null | null | # Future
from __future__ import annotations
# Local
from .effects import *
from .play import *
from .player import *
from .queue import *
| 15.444444 | 34 | 0.748201 | 18 | 139 | 5.555556 | 0.5 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179856 | 139 | 8 | 35 | 17.375 | 0.877193 | 0.086331 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7e18f0a7cb676ff301cd25c3dd2e27a28f0e211 | 50 | py | Python | keycloak_admin_aio/_resources/client_scopes/by_id/scope_mappings/realm/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | 12 | 2021-11-08T18:03:09.000Z | 2022-03-17T16:34:06.000Z | keycloak_admin_aio/_resources/client_scopes/by_id/scope_mappings/realm/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | null | null | null | keycloak_admin_aio/_resources/client_scopes/by_id/scope_mappings/realm/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | 1 | 2021-11-14T13:55:30.000Z | 2021-11-14T13:55:30.000Z | from .realm import ClientScopesScopeMappingsRealm
| 25 | 49 | 0.9 | 4 | 50 | 11.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.978261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7ec5c72d104b74bf7f280e1c424930b33158ea1 | 34 | py | Python | jsobj/__init__.py | gkovacs/jsobj | 5996c3a83b927c588b598e6032db507b195af1c5 | [
"MIT"
] | null | null | null | jsobj/__init__.py | gkovacs/jsobj | 5996c3a83b927c588b598e6032db507b195af1c5 | [
"MIT"
] | null | null | null | jsobj/__init__.py | gkovacs/jsobj | 5996c3a83b927c588b598e6032db507b195af1c5 | [
"MIT"
] | null | null | null | from .jsobj import Object # noqa
| 17 | 33 | 0.735294 | 5 | 34 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205882 | 34 | 1 | 34 | 34 | 0.925926 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7f5911a22d70245a42f9b6648dbf3da36fbdc01 | 72,903 | py | Python | cryosat_toolkit/read_cryosat_L1b.py | Sibada/read-cryosat-2 | 3267a0bb52857feb142a67cbb0e352160415c28f | [
"MIT"
] | null | null | null | cryosat_toolkit/read_cryosat_L1b.py | Sibada/read-cryosat-2 | 3267a0bb52857feb142a67cbb0e352160415c28f | [
"MIT"
] | null | null | null | cryosat_toolkit/read_cryosat_L1b.py | Sibada/read-cryosat-2 | 3267a0bb52857feb142a67cbb0e352160415c28f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
u"""
read_cryosat_L1b.py
Written by Tyler Sutterley (10/2018)
Reads CryoSat Level-1b data products from baselines A, B and C
Supported CryoSat Modes: LRM, SAR, SARin, FDM, SID, GDR
INPUTS:
full_filename: full path of CryoSat .DBL file
OUTPUTS:
Location: Time and Orbit Group
Data: Measurements Group
Geometry: External Corrections Group
Waveform_1Hz: Average Waveforms Group
Waveform_20Hz: Waveforms Group (with SAR/SARIN Beam Behavior Parameters)
METADATA: MPH, SPH and DSD Header data
UPDATE HISTORY:
Updated 10/2018: updated header read functions for python3
Updated 05/2016: using __future__ print and division functions
Written 03/2016
"""
from __future__ import print_function
from __future__ import division
import os
import re
import numpy as np
#-- PURPOSE: Initiate L1b MDS variables for CryoSat Baselines A and B
def cryosat_baseline_AB(fid, n_records, MODE):
    n_SARIN_RW = 512
    n_SAR_RW = 128
    n_LRM_RW = 128
    n_blocks = 20
    n_BeamBehaviourParams = 50

    #-- CryoSat-2 Time and Orbit Group
    L1b_location_parameters = {}
    #-- Time: day part
    L1b_location_parameters['Day'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Time: second part
    L1b_location_parameters['Second'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
    #-- Time: microsecond part
    L1b_location_parameters['Micsec'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
    #-- USO correction factor
    L1b_location_parameters['USO_Corr'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
    #-- Mode ID
    L1b_location_parameters['Mode_ID'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Source sequence counter
    L1b_location_parameters['SSC'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Instrument configuration
    L1b_location_parameters['Inst_config'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
    #-- Record Counter
    L1b_location_parameters['Rec_Count'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
    #-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_location_parameters['Lat'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_location_parameters['Lon'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Alt: packed units (mm, 1e-3 m)
    #-- Altitude of COG above reference ellipsoid (interpolated value)
    L1b_location_parameters['Alt'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Instantaneous altitude rate derived from orbit: packed units (mm/s, 1e-3 m/s)
    L1b_location_parameters['Alt_rate'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Satellite velocity vector. In ITRF: packed units (mm/s, 1e-3 m/s)
    #-- ITRF= International Terrestrial Reference Frame
    L1b_location_parameters['Sat_velocity'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
    #-- Real beam direction vector. In CRF: packed units (micro-m, 1e-6 m)
    #-- CRF= CryoSat Reference Frame.
    L1b_location_parameters['Real_beam'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
    #-- Interferometric baseline vector. In CRF: packed units (micro-m, 1e-6 m)
    L1b_location_parameters['Baseline'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
    #-- Measurement Confidence Data Flags
    #-- Generally the MCD flags indicate problems when set
    #-- If MCD is 0 then no problems or non-nominal conditions were detected
    #-- Serious errors are indicated by setting bit 31
    L1b_location_parameters['MCD'] = np.zeros((n_records,n_blocks),dtype=np.uint32)

    #-- CryoSat-2 Measurement Group
    #-- Derived from instrument measurement parameters
    L1b_measurements = {}
    #-- Window Delay reference (two-way) corrected for instrument delays
    L1b_measurements['TD'] = np.zeros((n_records,n_blocks),dtype=np.int64)
    #-- H0 Initial Height Word from telemetry
    L1b_measurements['H_0'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- COR2 Height Rate: on-board tracker height rate over the radar cycle
    L1b_measurements['COR2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Coarse Range Word (LAI) derived from telemetry
    L1b_measurements['LAI'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Fine Range Word (FAI) derived from telemetry
    L1b_measurements['FAI'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Automatic Gain Control Channel 1: AGC gain applied on Rx channel 1.
    #-- Gain calibration corrections are applied (Sum of AGC stages 1 and 2
    #-- plus the corresponding corrections) (dB/100)
    L1b_measurements['AGC_CH1'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Automatic Gain Control Channel 2: AGC gain applied on Rx channel 2.
    #-- Gain calibration corrections are applied (dB/100)
    L1b_measurements['AGC_CH2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Total Fixed Gain On Channel 1: gain applied by the RF unit. (dB/100)
    L1b_measurements['TR_gain_CH1'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Total Fixed Gain On Channel 2: gain applied by the RF unit. (dB/100)
    L1b_measurements['TR_gain_CH2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Transmit Power in microWatts
    L1b_measurements['TX_Power'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Doppler range correction: Radial component (mm)
    #-- computed for the component of satellite velocity in the nadir direction
    L1b_measurements['Doppler_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Instrument Range Correction: transmit-receive antenna (mm)
    #-- Calibration correction to range on channel 1 computed from CAL1.
    L1b_measurements['TR_inst_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Instrument Range Correction: receive-only antenna (mm)
    #-- Calibration correction to range on channel 2 computed from CAL1.
    L1b_measurements['R_inst_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Instrument Gain Correction: transmit-receive antenna (dB/100)
    #-- Calibration correction to gain on channel 1 computed from CAL1
    L1b_measurements['TR_inst_gain'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Instrument Gain Correction: receive-only (dB/100)
    #-- Calibration correction to gain on channel 2 computed from CAL1
    L1b_measurements['R_inst_gain'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Internal Phase Correction (microradians)
    L1b_measurements['Internal_phase'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- External Phase Correction (microradians)
    L1b_measurements['External_phase'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Noise Power measurement (dB/100): converted from telemetry units to be
    #-- the noise floor of FBR measurement echoes.
    #-- Set to -9999.99 when the telemetry contains zero.
    L1b_measurements['Noise_power'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Phase slope correction (microradians)
    #-- Computed from the CAL-4 packets during the azimuth impulse response
    #-- amplitude (SARIN only). Set from the latest available CAL-4 packet.
    L1b_measurements['Phase_slope'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    L1b_measurements['Spares1'] = np.zeros((n_records,n_blocks,4),dtype=np.int8)

    #-- CryoSat-2 External Corrections Group
    L1b_geo_corrections = {}
    #-- Dry Tropospheric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['dryTrop'] = np.zeros((n_records),dtype=np.int32)
    #-- Wet Tropospheric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['wetTrop'] = np.zeros((n_records),dtype=np.int32)
    #-- Inverse Barometric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['InvBar'] = np.zeros((n_records),dtype=np.int32)
    #-- Delta Inverse Barometric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['DAC'] = np.zeros((n_records),dtype=np.int32)
    #-- GIM Ionospheric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['Iono_GIM'] = np.zeros((n_records),dtype=np.int32)
    #-- Model Ionospheric Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['Iono_model'] = np.zeros((n_records),dtype=np.int32)
    #-- Ocean tide Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['ocTideElv'] = np.zeros((n_records),dtype=np.int32)
    #-- Long period equilibrium ocean tide Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['lpeTideElv'] = np.zeros((n_records),dtype=np.int32)
    #-- Ocean loading tide Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['olTideElv'] = np.zeros((n_records),dtype=np.int32)
    #-- Solid Earth tide Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['seTideElv'] = np.zeros((n_records),dtype=np.int32)
    #-- Geocentric Polar tide Correction packed units (mm, 1e-3 m)
    L1b_geo_corrections['gpTideElv'] = np.zeros((n_records),dtype=np.int32)
    #-- Surface Type: enumerated key to classify surface at nadir
    #-- 0 = Open Ocean
    #-- 1 = Closed Sea
    #-- 2 = Continental Ice
    #-- 3 = Land
    L1b_geo_corrections['Surf_type'] = np.zeros((n_records),dtype=np.uint32)
    L1b_geo_corrections['Spare1'] = np.zeros((n_records,4),dtype=np.int8)
    #-- Corrections Status Flag
    L1b_geo_corrections['Corr_status'] = np.zeros((n_records),dtype=np.uint32)
    #-- Correction Error Flag
    L1b_geo_corrections['Corr_error'] = np.zeros((n_records),dtype=np.uint32)
    L1b_geo_corrections['Spare2'] = np.zeros((n_records,4),dtype=np.int8)

    #-- CryoSat-2 Average Waveforms Groups
    #-- Low-Resolution Mode
    L1b_1Hz_LRM_waveform = {}
    #-- Data Record Time (MDSR Time Stamp)
    L1b_1Hz_LRM_waveform['Day_1Hz'] = np.zeros((n_records),dtype=np.int32)
    L1b_1Hz_LRM_waveform['Sec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
    L1b_1Hz_LRM_waveform['Micsec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
    #-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_1Hz_LRM_waveform['Lat_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_1Hz_LRM_waveform['Lon_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Alt: packed units (mm, 1e-3 m)
    #-- Altitude of COG above reference ellipsoid (interpolated value)
    L1b_1Hz_LRM_waveform['Alt_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Window Delay (two-way) corrected for instrument delays
    L1b_1Hz_LRM_waveform['TD_1Hz'] = np.zeros((n_records),dtype=np.int64)
    #-- 1 Hz Averaged Power Echo Waveform
    L1b_1Hz_LRM_waveform['Waveform'] = np.zeros((n_records,n_LRM_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_1Hz_LRM_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
    #-- Echo Scale Power (a power of 2)
    L1b_1Hz_LRM_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
    #-- Number of echoes averaged
    L1b_1Hz_LRM_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
    L1b_1Hz_LRM_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)

    #-- SAR Mode
    L1b_1Hz_SAR_waveform = {}
    #-- Data Record Time (MDSR Time Stamp)
    L1b_1Hz_SAR_waveform['Day_1Hz'] = np.zeros((n_records),dtype=np.int32)
    L1b_1Hz_SAR_waveform['Sec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
    L1b_1Hz_SAR_waveform['Micsec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
    #-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_1Hz_SAR_waveform['Lat_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_1Hz_SAR_waveform['Lon_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Alt: packed units (mm, 1e-3 m)
    #-- Altitude of COG above reference ellipsoid (interpolated value)
    L1b_1Hz_SAR_waveform['Alt_1Hz'] = np.zeros((n_records),dtype=np.int32)
    #-- Window Delay (two-way) corrected for instrument delays
    L1b_1Hz_SAR_waveform['TD_1Hz'] = np.zeros((n_records),dtype=np.int64)
    #-- 1 Hz Averaged Power Echo Waveform
    L1b_1Hz_SAR_waveform['Waveform'] = np.zeros((n_records,n_SAR_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_1Hz_SAR_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
    #-- Echo Scale Power (a power of 2)
    L1b_1Hz_SAR_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Number of echoes averaged
L1b_1Hz_SAR_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
L1b_1Hz_SAR_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)
#-- SARIN Mode
#-- Same as the LRM/SAR groups but the waveform array is 512 bins instead of
#-- 128 and the number of echoes averaged is different.
L1b_1Hz_SARIN_waveform = {}
#-- Data Record Time (MDSR Time Stamp)
L1b_1Hz_SARIN_waveform['Day'] = np.zeros((n_records),dtype=np.int32)
L1b_1Hz_SARIN_waveform['Sec'] = np.zeros((n_records),dtype=np.uint32)
L1b_1Hz_SARIN_waveform['Micsec'] = np.zeros((n_records),dtype=np.uint32)
#-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_1Hz_SARIN_waveform['Lat'] = np.zeros((n_records),dtype=np.int32)
#-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_1Hz_SARIN_waveform['Lon'] = np.zeros((n_records),dtype=np.int32)
#-- Alt: packed units (mm, 1e-3 m)
#-- Altitude of COG above reference ellipsoid (interpolated value)
L1b_1Hz_SARIN_waveform['Alt'] = np.zeros((n_records),dtype=np.int32)
#-- Window Delay (two-way) corrected for instrument delays
L1b_1Hz_SARIN_waveform['TD'] = np.zeros((n_records),dtype=np.int64)
#-- 1 Hz Averaged Power Echo Waveform
L1b_1Hz_SARIN_waveform['Waveform'] = np.zeros((n_records,n_SARIN_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_1Hz_SARIN_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Echo Scale Power (a power of 2)
L1b_1Hz_SARIN_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Number of echoes averaged
L1b_1Hz_SARIN_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
L1b_1Hz_SARIN_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)
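#-- The waveform counts are stored compactly with two per-record scale factors.
#-- A sketch of the scaling implied by the comments above (scaled power =
#-- counts * linear factor * 2**power exponent); the exact convention should be
#-- confirmed against the ESA L1b product documentation:

```python
import numpy as np

def waveform_to_watts(counts, linear_multiplier, power2_multiplier):
    """Scale a raw echo waveform to watts (sketch of the scaling implied above).

    Each record carries a linear scale factor and a power-of-2 exponent;
    the scaled power is counts * linear * 2**power2.
    """
    return counts.astype(np.float64)*np.float64(linear_multiplier)* \
        np.float64(2.0)**power2_multiplier

#-- synthetic 4-sample waveform with hypothetical scale factors
counts = np.array([0, 1, 2, 65535], dtype=np.uint16)
watts = waveform_to_watts(counts, linear_multiplier=3, power2_multiplier=-2)
```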
#-- CryoSat-2 Waveforms Groups
#-- Beam Behavior Parameters
L1b_Beam_Behavior = {}
#-- Standard Deviation of Gaussian fit to range integrated stack power.
L1b_Beam_Behavior['SD'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Stack Center: Mean of Gaussian fit to range integrated stack power.
L1b_Beam_Behavior['Center'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Stack amplitude parameter scaled in dB/100.
L1b_Beam_Behavior['Amplitude'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- 3rd moment: providing the degree of asymmetry of the range integrated
#-- stack power distribution.
L1b_Beam_Behavior['Skewness'] = np.zeros((n_records,n_blocks),dtype=np.int16)
#-- 4th moment: Measure of peakiness of range integrated stack power distribution.
L1b_Beam_Behavior['Kurtosis'] = np.zeros((n_records,n_blocks),dtype=np.int16)
L1b_Beam_Behavior['Spare'] = np.zeros((n_records,n_blocks,n_BeamBehaviourParams-5),dtype=np.int16)
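#-- The Skewness and Kurtosis fields above are the standardized 3rd and 4th
#-- moments of the range integrated stack power. A minimal sketch of the
#-- underlying statistics (synthetic data; not the on-board computation):

```python
import numpy as np

def standardized_moments(power):
    """Standardized 3rd and 4th moments of a power distribution.

    The 3rd moment (skewness) measures asymmetry and the 4th (kurtosis)
    measures peakiness, matching the Beam Behavior comments above.
    """
    power = np.asarray(power, dtype=np.float64)
    mu = power.mean()
    sigma = power.std()
    skew = np.mean(((power - mu)/sigma)**3)
    kurt = np.mean(((power - mu)/sigma)**4)
    return (skew, kurt)

#-- a symmetric distribution has zero skewness
skew, kurt = standardized_moments([1.0, 2.0, 3.0, 4.0, 5.0])
```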
#-- Low-Resolution Mode
L1b_LRM_waveform = {}
#-- Averaged Power Echo Waveform [128]
L1b_LRM_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_LRM_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_LRM_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Echo Scale Power (a power of 2 to scale echo to Watts)
L1b_LRM_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Number of echoes averaged
L1b_LRM_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
L1b_LRM_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- SAR Mode
L1b_SAR_waveform = {}
#-- Averaged Power Echo Waveform [128]
L1b_SAR_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_SAR_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_SAR_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Echo Scale Power (a power of 2 to scale echo to Watts)
L1b_SAR_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Number of echoes averaged
L1b_SAR_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
L1b_SAR_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Beam behaviour parameters
L1b_SAR_waveform['Beam'] = L1b_Beam_Behavior
#-- SARIN Mode
L1b_SARIN_waveform = {}
#-- Averaged Power Echo Waveform [512]
L1b_SARIN_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_SARIN_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_SARIN_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Echo Scale Power (a power of 2 to scale echo to Watts)
L1b_SARIN_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Number of echoes averaged
L1b_SARIN_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
L1b_SARIN_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Beam behaviour parameters (same dict object as the SAR group; only one mode is read per file)
L1b_SARIN_waveform['Beam'] = L1b_Beam_Behavior
#-- Coherence [512]: packed units (1/1000)
L1b_SARIN_waveform['Coherence'] = np.zeros((n_records,n_blocks,n_SARIN_RW),dtype=np.int16)
#-- Phase Difference [512]: packed units (microradians)
L1b_SARIN_waveform['Phase_diff'] = np.zeros((n_records,n_blocks,n_SARIN_RW),dtype=np.int32)
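#-- The read loop below walks the binary MDS with np.fromfile using big-endian
#-- dtypes ('>i4', '>u2', ...), which advance the file position with each call.
#-- A minimal round-trip demonstration on a synthetic record (the field layout
#-- here is illustrative, not the actual MDS record):

```python
import tempfile
import numpy as np

with tempfile.TemporaryFile() as tmp:
    #-- write a synthetic big-endian record: lat/lon as int32 followed by
    #-- a 4-sample uint16 waveform, mirroring the '>i4' and '>u2' reads below
    tmp.write(np.array([451234567, -1234567], dtype='>i4').tobytes())
    tmp.write(np.array([10, 20, 30, 40], dtype='>u2').tobytes())
    tmp.seek(0)
    #-- np.fromfile advances the file position, so sequential calls
    #-- step through the record fields in order
    lat = np.fromfile(tmp, dtype='>i4', count=1)[0]
    lon = np.fromfile(tmp, dtype='>i4', count=1)[0]
    waveform = np.fromfile(tmp, dtype='>u2', count=4)
```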
#-- for each record in the CryoSat file
for r in range(n_records):
    #-- CryoSat-2 Time and Orbit Group
    for b in range(n_blocks):
        L1b_location_parameters['Day'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_location_parameters['Second'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_location_parameters['Micsec'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_location_parameters['USO_Corr'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_location_parameters['Mode_ID'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
        L1b_location_parameters['SSC'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
        L1b_location_parameters['Inst_config'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_location_parameters['Rec_Count'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_location_parameters['Lat'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_location_parameters['Lon'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_location_parameters['Alt'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_location_parameters['Alt_rate'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_location_parameters['Sat_velocity'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
        L1b_location_parameters['Real_beam'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
        L1b_location_parameters['Baseline'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
        L1b_location_parameters['MCD'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
    #-- CryoSat-2 Measurement Group
    #-- Derived from instrument measurement parameters
    for b in range(n_blocks):
        L1b_measurements['TD'][r,b] = np.fromfile(fid,dtype='>i8',count=1)
        L1b_measurements['H_0'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['COR2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['LAI'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['FAI'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['AGC_CH1'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['AGC_CH2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['TR_gain_CH1'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['TR_gain_CH2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['TX_Power'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['Doppler_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['TR_inst_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['R_inst_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['TR_inst_gain'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['R_inst_gain'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['Internal_phase'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['External_phase'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['Noise_power'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['Phase_slope'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_measurements['Spares1'][r,b,:] = np.fromfile(fid,dtype='>i1',count=4)
    #-- CryoSat-2 External Corrections Group
    L1b_geo_corrections['dryTrop'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['wetTrop'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['InvBar'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['DAC'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['Iono_GIM'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['Iono_model'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['ocTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['lpeTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['olTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['seTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['gpTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
    L1b_geo_corrections['Surf_type'][r] = np.fromfile(fid,dtype='>u4',count=1)
    L1b_geo_corrections['Spare1'][r,:] = np.fromfile(fid,dtype='>i1',count=4)
    L1b_geo_corrections['Corr_status'][r] = np.fromfile(fid,dtype='>u4',count=1)
    L1b_geo_corrections['Corr_error'][r] = np.fromfile(fid,dtype='>u4',count=1)
    L1b_geo_corrections['Spare2'][r,:] = np.fromfile(fid,dtype='>i1',count=4)
    #-- CryoSat-2 Average Waveforms Groups
    if (MODE == 'LRM'):
        #-- Low-Resolution Mode
        L1b_1Hz_LRM_waveform['Day_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['Sec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_LRM_waveform['Micsec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_LRM_waveform['Lat_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['Lon_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['Alt_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['TD_1Hz'][r] = np.fromfile(fid,dtype='>i8',count=1)
        L1b_1Hz_LRM_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_LRM_RW)
        L1b_1Hz_LRM_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_LRM_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
        L1b_1Hz_LRM_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
    elif (MODE == 'SAR'):
        #-- SAR Mode
        L1b_1Hz_SAR_waveform['Day_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['Sec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_SAR_waveform['Micsec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_SAR_waveform['Lat_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['Lon_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['Alt_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['TD_1Hz'][r] = np.fromfile(fid,dtype='>i8',count=1)
        L1b_1Hz_SAR_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_SAR_RW)
        L1b_1Hz_SAR_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SAR_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
        L1b_1Hz_SAR_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
    elif (MODE == 'SIN'):
        #-- SARIN Mode
        L1b_1Hz_SARIN_waveform['Day'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['Sec'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_SARIN_waveform['Micsec'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_1Hz_SARIN_waveform['Lat'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['Lon'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['Alt'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['TD'][r] = np.fromfile(fid,dtype='>i8',count=1)
        L1b_1Hz_SARIN_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_SARIN_RW)
        L1b_1Hz_SARIN_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_1Hz_SARIN_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
        L1b_1Hz_SARIN_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
    #-- CryoSat-2 Waveforms Groups
    if (MODE == 'LRM'):
        #-- Low-Resolution Mode
        for b in range(n_blocks):
            L1b_LRM_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_LRM_RW)
            L1b_LRM_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_LRM_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_LRM_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_LRM_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
    elif (MODE == 'SAR'):
        #-- SAR Mode
        for b in range(n_blocks):
            L1b_SAR_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_SAR_RW)
            L1b_SAR_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_SAR_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_SAR_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SAR_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SAR_waveform['Beam']['SD'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SAR_waveform['Beam']['Center'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SAR_waveform['Beam']['Amplitude'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SAR_waveform['Beam']['Skewness'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
            L1b_SAR_waveform['Beam']['Kurtosis'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
            L1b_SAR_waveform['Beam']['Spare'][r,b,:] = np.fromfile(fid,dtype='>i2',count=(n_BeamBehaviourParams-5))
    elif (MODE == 'SIN'):
        #-- SARIN Mode
        for b in range(n_blocks):
            L1b_SARIN_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_SARIN_RW)
            L1b_SARIN_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_SARIN_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_SARIN_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SARIN_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SARIN_waveform['Beam']['SD'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SARIN_waveform['Beam']['Center'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SARIN_waveform['Beam']['Amplitude'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_SARIN_waveform['Beam']['Skewness'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
            L1b_SARIN_waveform['Beam']['Kurtosis'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
            L1b_SARIN_waveform['Beam']['Spare'][r,b,:] = np.fromfile(fid,dtype='>i2',count=(n_BeamBehaviourParams-5))
            L1b_SARIN_waveform['Coherence'][r,b,:] = np.fromfile(fid,dtype='>i2',count=n_SARIN_RW)
            L1b_SARIN_waveform['Phase_diff'][r,b,:] = np.fromfile(fid,dtype='>i4',count=n_SARIN_RW)
#-- Bind all the bits of the l1b_mds together into a single dictionary
CS_l1b_mds = {}
CS_l1b_mds['Location'] = L1b_location_parameters
CS_l1b_mds['Data'] = L1b_measurements
CS_l1b_mds['Geometry'] = L1b_geo_corrections
if (MODE == 'LRM'):
CS_l1b_mds['Waveform_1Hz'] = L1b_1Hz_LRM_waveform
CS_l1b_mds['Waveform_20Hz'] = L1b_LRM_waveform
elif (MODE == 'SAR'):
CS_l1b_mds['Waveform_1Hz'] = L1b_1Hz_SAR_waveform
CS_l1b_mds['Waveform_20Hz'] = L1b_SAR_waveform
elif (MODE == 'SIN'):
CS_l1b_mds['Waveform_1Hz'] = L1b_1Hz_SARIN_waveform
CS_l1b_mds['Waveform_20Hz'] = L1b_SARIN_waveform
#-- return the output dictionary
return CS_l1b_mds
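#-- The SARIN interferometric arrays read above are packed: coherence in units
#-- of 1/1000 (dimensionless, 0 to 1) and phase difference in microradians.
#-- A minimal unpacking sketch (the helper name and sample values are
#-- hypothetical; the scalings are those stated in the group comments):

```python
import numpy as np

def unpack_sarin_interferometry(coherence_packed, phase_packed):
    """Unpack SARIN interferometric arrays to physical units.

    Coherence is packed in units of 1/1000 and the phase difference
    in microradians, per the waveform group comments.
    """
    coherence = np.asarray(coherence_packed, dtype=np.float64)*1e-3
    phase_rad = np.asarray(phase_packed, dtype=np.float64)*1e-6
    return (coherence, phase_rad)

#-- synthetic packed samples
coh, phase = unpack_sarin_interferometry([0, 500, 1000], [3141593, -3141593, 0])
```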
#-- PURPOSE: Initiate L1b MDS variables for CryoSat Baseline C
def cryosat_baseline_C(fid, n_records, MODE):
n_SARIN_BC_RW = 1024
n_SARIN_RW = 512
n_SAR_BC_RW = 256
n_SAR_RW = 128
n_LRM_RW = 128
n_blocks = 20
n_BeamBehaviourParams = 50
#-- CryoSat-2 Time and Orbit Group
L1b_c_location_parameters = {}
#-- Time: day part
L1b_c_location_parameters['Day'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Time: second part
L1b_c_location_parameters['Second'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
#-- Time: microsecond part
L1b_c_location_parameters['Micsec'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
#-- USO correction factor
L1b_c_location_parameters['USO_Corr'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
#-- Mode ID
L1b_c_location_parameters['Mode_ID'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Source sequence counter
L1b_c_location_parameters['SSC'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
#-- Instrument configuration
L1b_c_location_parameters['Inst_config'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
#-- Record Counter
L1b_c_location_parameters['Rec_Count'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
#-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_location_parameters['Lat'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_location_parameters['Lon'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Alt: packed units (mm, 1e-3 m)
#-- Altitude of COG above reference ellipsoid (interpolated value)
L1b_c_location_parameters['Alt'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Instantaneous altitude rate derived from orbit: packed units (mm/s, 1e-3 m/s)
L1b_c_location_parameters['Alt_rate'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Satellite velocity vector. In ITRF: packed units (mm/s, 1e-3 m/s)
#-- ITRF= International Terrestrial Reference Frame
L1b_c_location_parameters['Sat_velocity'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
#-- Real beam direction vector (unit vector). In CRF: packed units (1e-6, dimensionless)
#-- CRF= CryoSat Reference Frame.
L1b_c_location_parameters['Real_beam'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
#-- Interferometric baseline vector. In CRF: packed units (micro-m, 1e-6 m)
L1b_c_location_parameters['Baseline'] = np.zeros((n_records,n_blocks,3),dtype=np.int32)
#-- Star Tracker ID
L1b_c_location_parameters['ST_ID'] = np.zeros((n_records,n_blocks),dtype=np.int16)
#-- Antenna Bench Roll Angle (Derived from star trackers)
#-- packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_location_parameters['Roll'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Antenna Bench Pitch Angle (Derived from star trackers)
#-- packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_location_parameters['Pitch'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Antenna Bench Yaw Angle (Derived from star trackers)
#-- packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_location_parameters['Yaw'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Measurement Confidence Data Flags
#-- Generally the MCD flags indicate problems when set
#-- If MCD is 0 then no problems or non-nominal conditions were detected
#-- Serious errors are indicated by setting bit 31
L1b_c_location_parameters['MCD'] = np.zeros((n_records,n_blocks),dtype=np.uint32)
L1b_c_location_parameters['Spares'] = np.zeros((n_records,n_blocks,2),dtype=np.int16)
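#-- The attitude angles above are packed as 0.1 micro-degree (1e-7 degrees)
#-- and the MCD word flags serious errors in bit 31. A minimal decoding sketch
#-- (the helper name and sample values are hypothetical; scalings and the bit
#-- position are those stated in the comments above):

```python
import numpy as np

def decode_attitude(loc, r, b):
    """Decode packed attitude angles and the MCD serious-error bit.

    Roll/Pitch/Yaw are packed as 0.1 micro-degree (1e-7 degrees); a set
    bit 31 in the MCD word indicates a serious error.
    """
    roll = loc['Roll'][r,b]*1e-7
    pitch = loc['Pitch'][r,b]*1e-7
    yaw = loc['Yaw'][r,b]*1e-7
    serious_error = bool(loc['MCD'][r,b] & np.uint32(1 << 31))
    return (roll, pitch, yaw, serious_error)

#-- synthetic single-block record with bit 31 set in the MCD word
loc = {'Roll': np.array([[1500000]]), 'Pitch': np.array([[-2500000]]),
    'Yaw': np.array([[0]]), 'MCD': np.array([[1 << 31]], dtype=np.uint32)}
roll, pitch, yaw, serious = decode_attitude(loc, 0, 0)
```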
#-- CryoSat-2 Measurement Group
#-- Derived from instrument measurement parameters
L1b_c_measurements = {}
#-- Window Delay reference (two-way) corrected for instrument delays
L1b_c_measurements['TD'] = np.zeros((n_records,n_blocks),dtype=np.int64)
#-- H0 Initial Height Word from telemetry
L1b_c_measurements['H_0'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- COR2 Height Rate: on-board tracker height rate over the radar cycle
L1b_c_measurements['COR2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Coarse Range Word (LAI) derived from telemetry
L1b_c_measurements['LAI'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Fine Range Word (FAI) derived from telemetry
L1b_c_measurements['FAI'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Automatic Gain Control Channel 1: AGC gain applied on Rx channel 1.
#-- Gain calibration corrections are applied (Sum of AGC stages 1 and 2
#-- plus the corresponding corrections) (dB/100)
L1b_c_measurements['AGC_CH1'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Automatic Gain Control Channel 2: AGC gain applied on Rx channel 2.
#-- Gain calibration corrections are applied (dB/100)
L1b_c_measurements['AGC_CH2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Total Fixed Gain On Channel 1: gain applied by the RF unit. (dB/100)
L1b_c_measurements['TR_gain_CH1'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Total Fixed Gain On Channel 2: gain applied by the RF unit. (dB/100)
L1b_c_measurements['TR_gain_CH2'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Transmit Power in microWatts
L1b_c_measurements['TX_Power'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Doppler range correction: Radial component (mm)
#-- computed for the component of satellite velocity in the nadir direction
L1b_c_measurements['Doppler_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Instrument Range Correction: transmit-receive antenna (mm)
#-- Calibration correction to range on channel 1 computed from CAL1.
L1b_c_measurements['TR_inst_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Instrument Range Correction: receive-only antenna (mm)
#-- Calibration correction to range on channel 2 computed from CAL1.
L1b_c_measurements['R_inst_range'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Instrument Gain Correction: transmit-receive antenna (dB/100)
#-- Calibration correction to gain on channel 1 computed from CAL1
L1b_c_measurements['TR_inst_gain'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Instrument Gain Correction: receive-only (dB/100)
#-- Calibration correction to gain on channel 2 computed from CAL1
L1b_c_measurements['R_inst_gain'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Internal Phase Correction (microradians)
L1b_c_measurements['Internal_phase'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- External Phase Correction (microradians)
L1b_c_measurements['External_phase'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Noise Power measurement (dB/100)
L1b_c_measurements['Noise_power'] = np.zeros((n_records,n_blocks),dtype=np.int32)
#-- Phase slope correction (microradians)
#-- Computed from the CAL-4 packets during the azimuth impulse response
#-- amplitude (SARIN only). Set from the latest available CAL-4 packet.
L1b_c_measurements['Phase_slope'] = np.zeros((n_records,n_blocks),dtype=np.int32)
L1b_c_measurements['Spares1'] = np.zeros((n_records,n_blocks,4),dtype=np.int8)
#-- CryoSat-2 External Corrections Group
L1b_c_geo_corrections = {}
#-- Dry Tropospheric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['dryTrop'] = np.zeros((n_records),dtype=np.int32)
#-- Wet Tropospheric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['wetTrop'] = np.zeros((n_records),dtype=np.int32)
#-- Inverse Barometric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['InvBar'] = np.zeros((n_records),dtype=np.int32)
#-- Delta Inverse Barometric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['DAC'] = np.zeros((n_records),dtype=np.int32)
#-- GIM Ionospheric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['Iono_GIM'] = np.zeros((n_records),dtype=np.int32)
#-- Model Ionospheric Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['Iono_model'] = np.zeros((n_records),dtype=np.int32)
#-- Ocean tide Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['ocTideElv'] = np.zeros((n_records),dtype=np.int32)
#-- Long period equilibrium ocean tide Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['lpeTideElv'] = np.zeros((n_records),dtype=np.int32)
#-- Ocean loading tide Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['olTideElv'] = np.zeros((n_records),dtype=np.int32)
#-- Solid Earth tide Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['seTideElv'] = np.zeros((n_records),dtype=np.int32)
#-- Geocentric Polar tide Correction packed units (mm, 1e-3 m)
L1b_c_geo_corrections['gpTideElv'] = np.zeros((n_records),dtype=np.int32)
#-- Surface Type: enumerated key to classify surface at nadir
#-- 0 = Open Ocean
#-- 1 = Closed Sea
#-- 2 = Continental Ice
#-- 3 = Land
L1b_c_geo_corrections['Surf_type'] = np.zeros((n_records),dtype=np.uint32)
L1b_c_geo_corrections['Spare1'] = np.zeros((n_records,4),dtype=np.int8)
#-- Corrections Status Flag
L1b_c_geo_corrections['Corr_status'] = np.zeros((n_records),dtype=np.uint32)
#-- Correction Error Flag
L1b_c_geo_corrections['Corr_error'] = np.zeros((n_records),dtype=np.uint32)
L1b_c_geo_corrections['Spare2'] = np.zeros((n_records,4),dtype=np.int8)
#-- CryoSat-2 Average Waveforms Groups
#-- Low-Resolution Mode
L1b_c_1Hz_LRM_waveform = {}
#-- Data Record Time (MDSR Time Stamp)
L1b_c_1Hz_LRM_waveform['Day_1Hz'] = np.zeros((n_records),dtype=np.int32)
L1b_c_1Hz_LRM_waveform['Sec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
L1b_c_1Hz_LRM_waveform['Micsec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
#-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_1Hz_LRM_waveform['Lat_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_1Hz_LRM_waveform['Lon_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Alt: packed units (mm, 1e-3 m)
#-- Altitude of COG above reference ellipsoid (interpolated value)
L1b_c_1Hz_LRM_waveform['Alt_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Window Delay (two-way) corrected for instrument delays
L1b_c_1Hz_LRM_waveform['TD_1Hz'] = np.zeros((n_records),dtype=np.int64)
#-- 1 Hz Averaged Power Echo Waveform
L1b_c_1Hz_LRM_waveform['Waveform'] = np.zeros((n_records,n_LRM_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_c_1Hz_LRM_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Echo Scale Power (a power of 2 to scale echo to Watts)
L1b_c_1Hz_LRM_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Number of echoes averaged
L1b_c_1Hz_LRM_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
L1b_c_1Hz_LRM_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)
#-- SAR Mode
L1b_c_1Hz_SAR_waveform = {}
#-- Data Record Time (MDSR Time Stamp)
L1b_c_1Hz_SAR_waveform['Day_1Hz'] = np.zeros((n_records),dtype=np.int32)
L1b_c_1Hz_SAR_waveform['Sec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
L1b_c_1Hz_SAR_waveform['Micsec_1Hz'] = np.zeros((n_records),dtype=np.uint32)
#-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_1Hz_SAR_waveform['Lat_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_1Hz_SAR_waveform['Lon_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Alt: packed units (mm, 1e-3 m)
#-- Altitude of COG above reference ellipsoid (interpolated value)
L1b_c_1Hz_SAR_waveform['Alt_1Hz'] = np.zeros((n_records),dtype=np.int32)
#-- Window Delay (two-way) corrected for instrument delays
L1b_c_1Hz_SAR_waveform['TD_1Hz'] = np.zeros((n_records),dtype=np.int64)
#-- 1 Hz Averaged Power Echo Waveform
L1b_c_1Hz_SAR_waveform['Waveform'] = np.zeros((n_records,n_SAR_RW),dtype=np.uint16)
#-- Echo Scale Factor (to scale echo to watts)
L1b_c_1Hz_SAR_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Echo Scale Power (a power of 2 to scale echo to Watts)
L1b_c_1Hz_SAR_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
#-- Number of echoes averaged
L1b_c_1Hz_SAR_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
L1b_c_1Hz_SAR_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)
#-- SARIN Mode
#-- Same as the LRM/SAR groups but the waveform array is 512 bins instead of
#-- 128 and the number of echoes averaged is different.
L1b_c_1Hz_SARIN_waveform = {}
#-- Data Record Time (MDSR Time Stamp)
L1b_c_1Hz_SARIN_waveform['Day'] = np.zeros((n_records),dtype=np.int32)
L1b_c_1Hz_SARIN_waveform['Sec'] = np.zeros((n_records),dtype=np.uint32)
L1b_c_1Hz_SARIN_waveform['Micsec'] = np.zeros((n_records),dtype=np.uint32)
#-- Lat: packed units (0.1 micro-degree, 1e-7 degrees)
L1b_c_1Hz_SARIN_waveform['Lat'] = np.zeros((n_records),dtype=np.int32)
    #-- Lon: packed units (0.1 micro-degree, 1e-7 degrees)
    L1b_c_1Hz_SARIN_waveform['Lon'] = np.zeros((n_records),dtype=np.int32)
    #-- Alt: packed units (mm, 1e-3 m)
    #-- Altitude of COG above reference ellipsoid (interpolated value)
    L1b_c_1Hz_SARIN_waveform['Alt'] = np.zeros((n_records),dtype=np.int32)
    #-- Window Delay (two-way) corrected for instrument delays
    L1b_c_1Hz_SARIN_waveform['TD'] = np.zeros((n_records),dtype=np.int64)
    #-- 1 Hz Averaged Power Echo Waveform
    L1b_c_1Hz_SARIN_waveform['Waveform'] = np.zeros((n_records,n_SARIN_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_c_1Hz_SARIN_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
    #-- Echo Scale Power (a power of 2 to scale echo to Watts)
    L1b_c_1Hz_SARIN_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records),dtype=np.int32)
    #-- Number of echoes averaged
    L1b_c_1Hz_SARIN_waveform['N_avg_echoes'] = np.zeros((n_records),dtype=np.uint16)
    L1b_c_1Hz_SARIN_waveform['Flags'] = np.zeros((n_records),dtype=np.uint16)
    #-- CryoSat-2 Waveforms Groups
    #-- Beam Behavior Parameters
    L1b_c_Beam_Behavior = {}
    #-- Standard Deviation of Gaussian fit to range integrated stack power.
    L1b_c_Beam_Behavior['SD'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Stack Center: Mean of Gaussian fit to range integrated stack power.
    L1b_c_Beam_Behavior['Center'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Stack amplitude parameter scaled in dB/100.
    L1b_c_Beam_Behavior['Amplitude'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- 3rd moment: providing the degree of asymmetry of the range integrated
    #-- stack power distribution.
    L1b_c_Beam_Behavior['Skewness'] = np.zeros((n_records,n_blocks),dtype=np.int16)
    #-- 4th moment: Measure of peakiness of range integrated stack power distribution.
    L1b_c_Beam_Behavior['Kurtosis'] = np.zeros((n_records,n_blocks),dtype=np.int16)
    #-- Standard deviation as a function of boresight angle (microradians)
    L1b_c_Beam_Behavior['SD_boresight_angle'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Stack Center angle as a function of boresight angle (microradians)
    L1b_c_Beam_Behavior['Center_boresight_angle'] = np.zeros((n_records,n_blocks),dtype=np.int16)
    L1b_c_Beam_Behavior['Spare'] = np.zeros((n_records,n_blocks,n_BeamBehaviourParams-7),dtype=np.int16)
    #-- Low-Resolution Mode
    L1b_c_LRM_waveform = {}
    #-- Averaged Power Echo Waveform [128]
    L1b_c_LRM_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_LRM_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_c_LRM_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Echo Scale Power (a power of 2)
    L1b_c_LRM_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Number of echoes averaged
    L1b_c_LRM_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    L1b_c_LRM_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- SAR Mode
    L1b_c_SAR_waveform = {}
    #-- Averaged Power Echo Waveform [256]
    L1b_c_SAR_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_SAR_BC_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_c_SAR_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Echo Scale Power (a power of 2)
    L1b_c_SAR_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Number of echoes averaged
    L1b_c_SAR_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    L1b_c_SAR_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Beam behaviour parameters
    L1b_c_SAR_waveform['Beam'] = L1b_c_Beam_Behavior
    #-- SARIN Mode
    L1b_c_SARIN_waveform = {}
    #-- Averaged Power Echo Waveform [1024]
    L1b_c_SARIN_waveform['Waveform'] = np.zeros((n_records,n_blocks,n_SARIN_BC_RW),dtype=np.uint16)
    #-- Echo Scale Factor (to scale echo to watts)
    L1b_c_SARIN_waveform['Linear_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Echo Scale Power (a power of 2)
    L1b_c_SARIN_waveform['Power2_Wfm_Multiplier'] = np.zeros((n_records,n_blocks),dtype=np.int32)
    #-- Number of echoes averaged
    L1b_c_SARIN_waveform['N_avg_echoes'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    L1b_c_SARIN_waveform['Flags'] = np.zeros((n_records,n_blocks),dtype=np.uint16)
    #-- Beam behaviour parameters
    L1b_c_SARIN_waveform['Beam'] = L1b_c_Beam_Behavior
    #-- Coherence [1024]: packed units (1/1000)
    L1b_c_SARIN_waveform['Coherence'] = np.zeros((n_records,n_blocks,n_SARIN_BC_RW),dtype=np.int16)
    #-- Phase Difference [1024]: packed units (microradians)
    L1b_c_SARIN_waveform['Phase_diff'] = np.zeros((n_records,n_blocks,n_SARIN_BC_RW),dtype=np.int32)
    #-- for each record in the CryoSat file
    for r in range(n_records):
        #-- CryoSat-2 Time and Orbit Group
        for b in range(n_blocks):
            L1b_c_location_parameters['Day'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Second'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['Micsec'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['USO_Corr'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['Mode_ID'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_c_location_parameters['SSC'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_c_location_parameters['Inst_config'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['Rec_Count'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['Lat'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Lon'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Alt'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Alt_rate'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Sat_velocity'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
            L1b_c_location_parameters['Real_beam'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
            L1b_c_location_parameters['Baseline'][r,b,:] = np.fromfile(fid,dtype='>i4',count=3)
            L1b_c_location_parameters['ST_ID'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
            L1b_c_location_parameters['Roll'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Pitch'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['Yaw'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_location_parameters['MCD'][r,b] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_location_parameters['Spares'][r,b,:] = np.fromfile(fid,dtype='>i2',count=2)
        #-- CryoSat-2 Measurement Group
        #-- Derived from instrument measurement parameters
        for b in range(n_blocks):
            L1b_c_measurements['TD'][r,b] = np.fromfile(fid,dtype='>i8',count=1)
            L1b_c_measurements['H_0'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['COR2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['LAI'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['FAI'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['AGC_CH1'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['AGC_CH2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['TR_gain_CH1'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['TR_gain_CH2'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['TX_Power'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['Doppler_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['TR_inst_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['R_inst_range'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['TR_inst_gain'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['R_inst_gain'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['Internal_phase'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['External_phase'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['Noise_power'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['Phase_slope'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_measurements['Spares1'][r,b,:] = np.fromfile(fid,dtype='>i1',count=4)
        #-- CryoSat-2 External Corrections Group
        L1b_c_geo_corrections['dryTrop'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['wetTrop'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['InvBar'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['DAC'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['Iono_GIM'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['Iono_model'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['ocTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['lpeTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['olTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['seTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['gpTideElv'][r] = np.fromfile(fid,dtype='>i4',count=1)
        L1b_c_geo_corrections['Surf_type'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_c_geo_corrections['Spare1'][r,:] = np.fromfile(fid,dtype='>i1',count=4)
        L1b_c_geo_corrections['Corr_status'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_c_geo_corrections['Corr_error'][r] = np.fromfile(fid,dtype='>u4',count=1)
        L1b_c_geo_corrections['Spare2'][r,:] = np.fromfile(fid,dtype='>i1',count=4)
        #-- CryoSat-2 Average Waveforms Groups
        if (MODE == 'LRM'):
            #-- Low-Resolution Mode
            L1b_c_1Hz_LRM_waveform['Day_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['Sec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_LRM_waveform['Micsec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_LRM_waveform['Lat_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['Lon_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['Alt_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['TD_1Hz'][r] = np.fromfile(fid,dtype='>i8',count=1)
            L1b_c_1Hz_LRM_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_LRM_RW)
            L1b_c_1Hz_LRM_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_LRM_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_c_1Hz_LRM_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
        elif (MODE == 'SAR'):
            #-- SAR Mode
            L1b_c_1Hz_SAR_waveform['Day_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['Sec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_SAR_waveform['Micsec_1Hz'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_SAR_waveform['Lat_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['Lon_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['Alt_1Hz'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['TD_1Hz'][r] = np.fromfile(fid,dtype='>i8',count=1)
            L1b_c_1Hz_SAR_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_SAR_RW)
            L1b_c_1Hz_SAR_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SAR_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_c_1Hz_SAR_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
        elif (MODE == 'SIN'):
            #-- SARIN Mode
            L1b_c_1Hz_SARIN_waveform['Day'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['Sec'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_SARIN_waveform['Micsec'][r] = np.fromfile(fid,dtype='>u4',count=1)
            L1b_c_1Hz_SARIN_waveform['Lat'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['Lon'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['Alt'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['TD'][r] = np.fromfile(fid,dtype='>i8',count=1)
            L1b_c_1Hz_SARIN_waveform['Waveform'][r,:] = np.fromfile(fid,dtype='>u2',count=n_SARIN_RW)
            L1b_c_1Hz_SARIN_waveform['Linear_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['Power2_Wfm_Multiplier'][r] = np.fromfile(fid,dtype='>i4',count=1)
            L1b_c_1Hz_SARIN_waveform['N_avg_echoes'][r] = np.fromfile(fid,dtype='>u2',count=1)
            L1b_c_1Hz_SARIN_waveform['Flags'][r] = np.fromfile(fid,dtype='>u2',count=1)
        #-- CryoSat-2 Waveforms Groups
        if (MODE == 'LRM'):
            #-- Low-Resolution Mode
            for b in range(n_blocks):
                L1b_c_LRM_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_LRM_RW)
                L1b_c_LRM_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_LRM_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_LRM_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_LRM_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
        elif (MODE == 'SAR'):
            #-- SAR Mode
            for b in range(n_blocks):
                L1b_c_SAR_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_SAR_BC_RW)
                L1b_c_SAR_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_SAR_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_SAR_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SAR_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SAR_waveform['Beam']['SD'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SAR_waveform['Beam']['Center'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SAR_waveform['Beam']['Amplitude'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SAR_waveform['Beam']['Skewness'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SAR_waveform['Beam']['Kurtosis'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SAR_waveform['Beam']['SD_boresight_angle'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SAR_waveform['Beam']['Center_boresight_angle'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SAR_waveform['Beam']['Spare'][r,b,:] = np.fromfile(fid,dtype='>i2',count=(n_BeamBehaviourParams-7))
        elif (MODE == 'SIN'):
            #-- SARIN Mode
            for b in range(n_blocks):
                L1b_c_SARIN_waveform['Waveform'][r,b,:] = np.fromfile(fid,dtype='>u2',count=n_SARIN_BC_RW)
                L1b_c_SARIN_waveform['Linear_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_SARIN_waveform['Power2_Wfm_Multiplier'][r,b] = np.fromfile(fid,dtype='>i4',count=1)
                L1b_c_SARIN_waveform['N_avg_echoes'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SARIN_waveform['Flags'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SARIN_waveform['Beam']['SD'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SARIN_waveform['Beam']['Center'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SARIN_waveform['Beam']['Amplitude'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SARIN_waveform['Beam']['Skewness'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SARIN_waveform['Beam']['Kurtosis'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SARIN_waveform['Beam']['SD_boresight_angle'][r,b] = np.fromfile(fid,dtype='>u2',count=1)
                L1b_c_SARIN_waveform['Beam']['Center_boresight_angle'][r,b] = np.fromfile(fid,dtype='>i2',count=1)
                L1b_c_SARIN_waveform['Beam']['Spare'][r,b,:] = np.fromfile(fid,dtype='>i2',count=(n_BeamBehaviourParams-7))
                L1b_c_SARIN_waveform['Coherence'][r,b,:] = np.fromfile(fid,dtype='>i2',count=n_SARIN_BC_RW)
                L1b_c_SARIN_waveform['Phase_diff'][r,b,:] = np.fromfile(fid,dtype='>i4',count=n_SARIN_BC_RW)
    #-- Bind all the bits of the l1b_mds together into a single dictionary
    CS_l1b_c_mds = {}
    CS_l1b_c_mds['Location'] = L1b_c_location_parameters
    CS_l1b_c_mds['Data'] = L1b_c_measurements
    CS_l1b_c_mds['Geometry'] = L1b_c_geo_corrections
    if (MODE == 'LRM'):
        CS_l1b_c_mds['Waveform_1Hz'] = L1b_c_1Hz_LRM_waveform
        CS_l1b_c_mds['Waveform_20Hz'] = L1b_c_LRM_waveform
    elif (MODE == 'SAR'):
        CS_l1b_c_mds['Waveform_1Hz'] = L1b_c_1Hz_SAR_waveform
        CS_l1b_c_mds['Waveform_20Hz'] = L1b_c_SAR_waveform
    elif (MODE == 'SIN'):
        CS_l1b_c_mds['Waveform_1Hz'] = L1b_c_1Hz_SARIN_waveform
        CS_l1b_c_mds['Waveform_20Hz'] = L1b_c_SARIN_waveform
    #-- return the output dictionary
    return CS_l1b_c_mds
#-- PURPOSE: Read ASCII Main Product Header (MPH) block from an ESA PDS file
def read_MPH(full_filename):
    #-- read input data file
    with open(full_filename, 'rb') as fid:
        file_contents = fid.read().splitlines()
    #-- Define constant values associated with PDS file formats
    #-- number of text lines in standard MPH
    n_MPH_lines = 41
    #-- check that first line of header matches PRODUCT
    if not bool(re.match(b'PRODUCT\=\"(.*)(?=\")',file_contents[0])):
        raise IOError('File does not start with a valid PDS MPH')
    #-- read MPH header text
    s_MPH_fields = {}
    for i in range(n_MPH_lines):
        #-- use regular expression operators to read headers
        if bool(re.match(b'(.*?)\=\"(.*)(?=\")',file_contents[i])):
            #-- data fields within quotes
            field,value=re.findall(b'(.*?)\=\"(.*)(?=\")',file_contents[i]).pop()
            s_MPH_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
        elif bool(re.match(b'(.*?)\=(.*)',file_contents[i])):
            #-- data fields without quotes
            field,value=re.findall(b'(.*?)\=(.*)',file_contents[i]).pop()
            s_MPH_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
    #-- Return block name array to calling function
    return s_MPH_fields
#-- PURPOSE: Read ASCII Specific Product Header (SPH) block from a PDS file
def read_SPH(full_filename,j_sph_size):
    #-- read input data file
    with open(full_filename, 'rb') as fid:
        file_contents = fid.read().splitlines()
    #-- Define constant values associated with PDS file formats
    #-- number of text lines in standard MPH
    n_MPH_lines = 41
    #-- compile regular expression operator for reading headers
    rx = re.compile(b'(.*?)\=\"?(.*)',re.VERBOSE)
    #-- check first line of header matches SPH_DESCRIPTOR
    if not bool(re.match(b'SPH\_DESCRIPTOR\=',file_contents[n_MPH_lines+1])):
        raise IOError('File does not have a valid PDS DSD')
    #-- read SPH header text (no binary control characters)
    s_SPH_lines = [li for li in file_contents[n_MPH_lines+1:] if rx.match(li)
        and not re.search(b'[^\x20-\x7e]+',li)]
    #-- extract SPH header text
    s_SPH_fields = {}
    c = 0
    while (c < len(s_SPH_lines)):
        #-- check if line is within DS_NAME portion of SPH header
        if bool(re.match(b'DS_NAME',s_SPH_lines[c])):
            #-- add dictionary for DS_NAME
            field,value=re.findall(b'(.*?)\=\"(.*)(?=\")',s_SPH_lines[c]).pop()
            key = value.decode('utf-8').rstrip()
            s_SPH_fields[key] = {}
            for line in s_SPH_lines[c+1:c+7]:
                if bool(re.match(b'(.*?)\=\"(.*)(?=\")',line)):
                    #-- data fields within quotes
                    dsfield,dsvalue=re.findall(b'(.*?)\=\"(.*)(?=\")',line).pop()
                    s_SPH_fields[key][dsfield.decode('utf-8')] = dsvalue.decode('utf-8').rstrip()
                elif bool(re.match(b'(.*?)\=(.*)',line)):
                    #-- data fields without quotes
                    dsfield,dsvalue=re.findall(b'(.*?)\=(.*)',line).pop()
                    s_SPH_fields[key][dsfield.decode('utf-8')] = dsvalue.decode('utf-8').rstrip()
            #-- add 6 to counter to go to next entry
            c += 6
        #-- use regular expression operators to read headers
        elif bool(re.match(b'(.*?)\=\"(.*)(?=\")',s_SPH_lines[c])):
            #-- data fields within quotes
            field,value=re.findall(b'(.*?)\=\"(.*)(?=\")',s_SPH_lines[c]).pop()
            s_SPH_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
        elif bool(re.match(b'(.*?)\=(.*)',s_SPH_lines[c])):
            #-- data fields without quotes
            field,value=re.findall(b'(.*?)\=(.*)',s_SPH_lines[c]).pop()
            s_SPH_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
        #-- add 1 to counter to go to next line
        c += 1
    #-- Return block name array to calling function
    return s_SPH_fields
#-- PURPOSE: Read ASCII Data Set Descriptors (DSD) block from a PDS file
def read_DSD(full_filename, DS_TYPE=None):
    #-- read input data file
    with open(full_filename, 'rb') as fid:
        file_contents = fid.read().splitlines()
    #-- Define constant values associated with PDS file formats
    #-- number of text lines in standard MPH
    n_MPH_lines = 41
    #-- number of text lines in a DSD header
    n_DSD_lines = 8
    #-- Level-1b CryoSat DS_NAMES within files
    regex_patterns = []
    if (DS_TYPE == 'CS_L1B'):
        regex_patterns.append(b'DS_NAME\="SIR_L1B_LRM[\s+]*"')
        regex_patterns.append(b'DS_NAME\="SIR_L1B_SAR[\s+]*"')
        regex_patterns.append(b'DS_NAME\="SIR_L1B_SARIN[\s+]*"')
    elif (DS_TYPE == 'SIR_L1B_FDM'):
        regex_patterns.append(b'DS_NAME\="SIR_L1B_FDM[\s+]*"')
    #-- find the DSD starting line within the SPH header
    c = 0
    Flag = False
    while ((Flag is False) and (c < len(regex_patterns))):
        #-- find indice within
        indice = [i for i,line in enumerate(file_contents[n_MPH_lines+1:]) if
            re.search(regex_patterns[c],line)]
        if indice:
            Flag = True
        else:
            c+=1
    #-- check that valid indice was found within header
    if not indice:
        raise IOError('Can not find correct DSD field')
    #-- extract s_DSD_fields info
    DSD_START = n_MPH_lines + indice[0] + 1
    s_DSD_fields = {}
    for i in range(DSD_START,DSD_START+n_DSD_lines):
        #-- use regular expression operators to read headers
        if bool(re.match(b'(.*?)\=\"(.*)(?=\")',file_contents[i])):
            #-- data fields within quotes
            field,value=re.findall(b'(.*?)\=\"(.*)(?=\")',file_contents[i]).pop()
            s_DSD_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
        elif bool(re.match(b'(.*?)\=(.*)',file_contents[i])):
            #-- data fields without quotes
            field,value=re.findall(b'(.*?)\=(.*)',file_contents[i]).pop()
            s_DSD_fields[field.decode('utf-8')] = value.decode('utf-8').rstrip()
    #-- Return block name array to calling function
    return s_DSD_fields
#-- PURPOSE: read CryoSat Level-1b data
def read_cryosat_L1b(full_filename, VERBOSE=False):
    #-- file basename and file extension of input file
    fileBasename,fileExtension=os.path.splitext(os.path.basename(full_filename))
    #-- CryoSat file class
    #-- OFFL (Off Line Processing/Systematic)
    #-- NRT_ (Near Real Time)
    #-- RPRO (ReProcessing)
    #-- TEST (Testing)
    #-- TIxx (Stand alone IPF1 testing)
    #-- LTA_ (Long Term Archive)
    regex_class = 'OFFL|NRT_|RPRO|TEST|TIxx|LTA_'
    #-- CryoSat mission products
    #-- SIR1SAR_FR: Level 1 FBR SAR Mode (Rx1 Channel)
    #-- SIR2SAR_FR: Level 1 FBR SAR Mode (Rx2 Channel)
    #-- SIR_SIN_FR: Level 1 FBR SARin Mode
    #-- SIR_LRM_1B: Level-1 Product Low Rate Mode
    #-- SIR_FDM_1B: Level-1 Product Fast Delivery Marine Mode
    #-- SIR_SAR_1B: Level-1 SAR Mode
    #-- SIR_SIN_1B: Level-1 SARin Mode
    #-- SIR1LRC11B: Level-1 CAL1 Low Rate Mode (Rx1 Channel)
    #-- SIR2LRC11B: Level-1 CAL1 Low Rate Mode (Rx2 Channel)
    #-- SIR1SAC11B: Level-1 CAL1 SAR Mode (Rx1 Channel)
    #-- SIR2SAC11B: Level-1 CAL1 SAR Mode (Rx2 Channel)
    #-- SIR_SIC11B: Level-1 CAL1 SARin Mode
    #-- SIR_SICC1B: Level-1 CAL1 SARIN Exotic Data
    #-- SIR1SAC21B: Level-1 CAL2 SAR Mode (Rx1 Channel)
    #-- SIR2SAC21B: Level-1 CAL2 SAR Mode (Rx2 Channel)
    #-- SIR1SIC21B: Level-1 CAL2 SARin Mode (Rx1 Channel)
    #-- SIR2SIC21B: Level-1 CAL2 SARin Mode (Rx1 Channel)
    #-- SIR1LRM_0M: LRM and TRK Monitoring Data from Rx 1 Channel
    #-- SIR2LRM_0M: LRM and TRK Monitoring Data from Rx 2 Channel
    #-- SIR1SAR_0M: SAR Monitoring Data from Rx 1 Channel
    #-- SIR2SAR_0M: SAR Monitoring Data from Rx 1 Channel
    #-- SIR_SIN_0M: SARIN Monitoring Data
    #-- SIR_SIC40M: CAL4 Monitoring Data
    regex_products = ('SIR1SAR_FR|SIR2SAR_FR|SIR_SIN_FR|SIR_LRM_1B|SIR_FDM_1B|'
        'SIR_SAR_1B|SIR_SIN_1B|SIR1LRC11B|SIR2LRC11B|SIR1SAC11B|SIR2SAC11B|'
        'SIR_SIC11B|SIR_SICC1B|SIR1SAC21B|SIR2SAC21B|SIR1SIC21B|SIR2SIC21B|'
        'SIR1LRM_0M|SIR2LRM_0M|SIR1SAR_0M|SIR2SAR_0M|SIR_SIN_0M|SIR_SIC40M')
    #-- CRYOSAT LEVEL-1b PRODUCTS NAMING RULES
    #-- Mission Identifier
    #-- File Class
    #-- File Product
    #-- Validity Start Date and Time
    #-- Validity Stop Date and Time
    #-- Baseline Identifier
    #-- Version Number
    regex_pattern = '(.*?)_({0})_({1})_(\d+T?\d+)_(\d+T?\d+)_(.*?)(\d+)'.format(
        regex_class, regex_products)
    rx = re.compile(regex_pattern, re.VERBOSE)
    #-- extract file information from filename
    MI,CLASS,PRODUCT,START,STOP,BASELINE,VERSION=rx.findall(fileBasename).pop()
    #-- Extract Date information
    start_yr,start_mon,start_day=np.array([START[:4],START[4:6],START[6:8]],dtype=np.uint16)
    start_hh,start_mm,start_ss=np.array([START[-6:-4],START[-4:-2],START[-2:]],dtype=np.uint8)
    stop_yr,stop_mon,stop_day=np.array([STOP[:4],STOP[4:6],STOP[6:8]],dtype=np.uint16)
    stop_hh,stop_mm,stop_ss=np.array([STOP[-6:-4],STOP[-4:-2],STOP[-2:]],dtype=np.uint8)
    #-- CryoSat-2 Mode record sizes
    i_size_timestamp = 12
    n_SARIN_BC_RW = 1024
    n_SARIN_RW = 512
    n_SAR_BC_RW = 256
    n_SAR_RW = 125
    n_LRM_RW = 128
    n_blocks = 20
    n_BeamBehaviourParams = 50
    #-- check baseline from file to set i_record_size and allocation function
    if (BASELINE == 'C'):
        #-- calculate total record sizes of each dataset group
        i_size_timegroup = i_size_timestamp + 4 + 2*2 + 6*4 + 3*3*4 + 3*2 + 4*4
        i_size_measuregroup = 8 + 4*17 + 8
        i_size_external_corr = 4*13 + 12
        i_size_1Hz_LRM = i_size_timestamp + 3*4 + 8 + n_LRM_RW*2 + 2*4 + 2*2
        i_size_1Hz_SAR = i_size_timestamp + 4*3 + 8 + n_SAR_RW*2 + 4 + 4 + 2 + 2
        i_size_1Hz_SARIN = i_size_timestamp + 4*3 + 8 + n_SARIN_RW*2 + 4 + 4 + 2 + 2
        i_size_LRM_waveform = n_LRM_RW*2 + 4 + 4 + 2 + 2
        i_size_SAR_waveform = n_SAR_BC_RW*2 + 4 + 4 + 2 + 2 + n_BeamBehaviourParams*2
        i_size_SARIN_waveform = n_SARIN_BC_RW*2 + 4 + 4 + 2 + 2 + n_SARIN_BC_RW*2 + \
            n_SARIN_BC_RW*4 + n_BeamBehaviourParams*2
        #-- Low-Resolution Mode Record Size
        i_record_size_LRM_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_LRM_waveform) + i_size_external_corr + \
            i_size_1Hz_LRM
        #-- SAR Mode Record Size
        i_record_size_SAR_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_SAR_waveform) + i_size_external_corr + \
            i_size_1Hz_SAR
        #-- SARIN Mode Record Size
        i_record_size_SARIN_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_SARIN_waveform) + i_size_external_corr + \
            i_size_1Hz_SARIN
        #-- set read function for Baseline C
        read_cryosat_variables = cryosat_baseline_C
    else:
        #-- calculate total record sizes of each dataset group
        i_size_timegroup = i_size_timestamp + 4 + 2*2+ 6*4 + 3*3*4 + 4
        i_size_measuregroup = 8 + 4*17 + 8
        i_size_external_corr = 4*13 + 12
        i_size_1Hz_LRM = i_size_timestamp + 3*4 + 8 + n_LRM_RW*2 + 2*4 + 2*2
        i_size_1Hz_SAR = i_size_timestamp + 4*3 + 8 + n_SAR_RW*2 + 4 + 4 + 2 + 2
        i_size_1Hz_SARIN = i_size_timestamp + 4*3 + 8 + n_SARIN_RW*2 + 4 + 4 + 2 + 2
        i_size_LRM_waveform = n_LRM_RW*2 + 4 + 4 + 2 + 2
        i_size_SAR_waveform = n_SAR_RW*2 + 4 + 4 + 2 + 2 + n_BeamBehaviourParams*2
        i_size_SARIN_waveform = n_SARIN_RW*2 + 4 + 4 + 2 + 2 + n_SARIN_RW*2 + \
            n_SARIN_RW*4 + n_BeamBehaviourParams*2
        #-- Low-Resolution Mode Record Size
        i_record_size_LRM_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_LRM_waveform) + i_size_external_corr + \
            i_size_1Hz_LRM
        #-- SAR Mode Record Size
        i_record_size_SAR_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_SAR_waveform) + i_size_external_corr + \
            i_size_1Hz_SAR
        #-- SARIN Mode Record Size
        i_record_size_SARIN_L1b = n_blocks * (i_size_timegroup + \
            i_size_measuregroup + i_size_SARIN_waveform) + i_size_external_corr + \
            i_size_1Hz_SARIN
        #-- set read function for Baselines A and B
        read_cryosat_variables = cryosat_baseline_AB
    #-- get dataset MODE from PRODUCT portion of file name
    #-- set record sizes and DS_TYPE for read_DSD function
    MODE = re.findall('(LRM|FDM|SAR|SIN)', PRODUCT).pop()
    if (MODE == 'LRM'):
        i_record_size = i_record_size_LRM_L1b
        DS_TYPE = 'CS_L1B'
    elif (MODE == 'FDM'):
        i_record_size = i_record_size_FDM_L1b
        DS_TYPE = 'SIR_L1B_FDM'
    elif (MODE == 'SAR'):
        i_record_size = i_record_size_SAR_L1b
        DS_TYPE = 'CS_L1B'
    elif (MODE == 'SIN'):
        i_record_size = i_record_size_SARIN_L1b
        DS_TYPE = 'CS_L1B'
    #-- read the input file to get file information
    fid = os.open(os.path.expanduser(full_filename),os.O_RDONLY)
    file_info = os.fstat(fid)
    os.close(fid)
    #-- num DSRs from SPH
    j_num_DSR = np.int32(file_info.st_size//i_record_size)
    #-- print file information
    if VERBOSE:
        print(fileBasename)
        print('{0:d} {1:d} {2:d}'.format(j_num_DSR,file_info.st_size,i_record_size))
        #-- Check if MPH/SPH/DSD headers
        if (j_num_DSR*i_record_size == file_info.st_size):
            print('No Header on file')
            print('The number of DSRs is: {0:d}'.format(j_num_DSR))
        else:
            print('Header on file')
    #-- Check if MPH/SPH/DSD headers
    if (j_num_DSR*i_record_size != file_info.st_size):
        #-- If there are MPH/SPH/DSD headers
        s_MPH_fields = read_MPH(full_filename)
        j_sph_size = np.int32(re.findall('[-+]?\d+',s_MPH_fields['SPH_SIZE']).pop())
        s_SPH_fields = read_SPH(full_filename, j_sph_size)
        #-- extract information from DSD fields
        s_DSD_fields = read_DSD(full_filename, DS_TYPE=DS_TYPE)
        #-- extract DS_OFFSET
        j_DS_start = np.int32(re.findall('[-+]?\d+',s_DSD_fields['DS_OFFSET']).pop())
        #-- extract number of DSR in the file
        j_num_DSR = np.int32(re.findall('[-+]?\d+',s_DSD_fields['NUM_DSR']).pop())
        #-- check the record size
        j_DSR_size = np.int32(re.findall('[-+]?\d+',s_DSD_fields['DSR_SIZE']).pop())
        #-- minimum size is start of the read plus number of records to read
        j_check_size = j_DS_start + (j_DSR_size*j_num_DSR)
        if VERBOSE:
            print('The offset of the DSD is: {0:d} bytes'.format(j_DS_start))
            print('The number of DSRs is {0:d}'.format(j_num_DSR))
            print('The size of the DSR is {0:d}'.format(j_DSR_size))
        #-- check if invalid file size
        if (j_check_size > file_info.st_size):
            raise IOError('File size error')
        #-- extract binary data from input CryoSat data file (skip headers)
        fid = open(full_filename, 'rb')
        cryosat_header = fid.read(j_DS_start)
        #-- iterate through CryoSat file and fill output variables
        CS_L1b_mds = read_cryosat_variables(fid, j_num_DSR, MODE)
        #-- add headers to output dictionary as METADATA
        CS_L1b_mds['METADATA'] = {}
        CS_L1b_mds['METADATA']['MPH'] = s_MPH_fields
        CS_L1b_mds['METADATA']['SPH'] = s_SPH_fields
        CS_L1b_mds['METADATA']['DSD'] = s_DSD_fields
        #-- close the input CryoSat binary file
        fid.close()
    else:
        #-- If there are not MPH/SPH/DSD headers
        #-- extract binary data from input CryoSat data file
        fid = open(full_filename, 'rb')
        #-- iterate through CryoSat file and fill output variables
        CS_L1b_mds = read_cryosat_variables(fid, j_num_DSR, MODE)
        #-- close the input CryoSat binary file
        fid.close()
    #-- return the data and headers
    return CS_L1b_mds
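#-- Illustrative sanity check (not part of the original module): recomputes the
#-- Baseline-C Low-Resolution Mode record size from the same group-size
#-- arithmetic used in read_cryosat_L1b, with the constants restated so the
#-- snippet is self-contained. Under these definitions a Baseline-C LRM data
#-- set record (DSR) works out to 9444 bytes.
if __name__ == '__main__':
    i_size_timestamp = 12
    n_LRM_RW = 128
    n_blocks = 20
    #-- time/orbit group, measurement group and external corrections group
    i_size_timegroup = i_size_timestamp + 4 + 2*2 + 6*4 + 3*3*4 + 3*2 + 4*4
    i_size_measuregroup = 8 + 4*17 + 8
    i_size_external_corr = 4*13 + 12
    #-- 1 Hz averaged waveform group and 20 Hz waveform group
    i_size_1Hz_LRM = i_size_timestamp + 3*4 + 8 + n_LRM_RW*2 + 2*4 + 2*2
    i_size_LRM_waveform = n_LRM_RW*2 + 4 + 4 + 2 + 2
    #-- per-record total: 20 blocks of time+measurement+waveform groups
    #-- plus the once-per-record corrections and 1 Hz groups
    i_record_size_LRM_L1b = n_blocks * (i_size_timegroup +
        i_size_measuregroup + i_size_LRM_waveform) + \
        i_size_external_corr + i_size_1Hz_LRM
    print(i_record_size_LRM_L1b)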
# File: gphoto2_webui/app/views.py (repo: daniego/gphoto2_webui, MIT license)
from django.shortcuts import render, get_object_or_404

def homepage(request):
    return render(request, 'index.html')
# File: vfp2py/VisualFoxpro9Lexer.py (repo: wroldwiedbwe/vfp2py, MIT license)
# Generated from VisualFoxpro9.g4 by ANTLR 4.8
from antlr4 import *
from io import StringIO
from typing.io import TextIO
import sys
def serializedATN():
    with StringIO() as buf:
        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2\u013a")
        buf.write("\u0c93\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7")
        buf.write("\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r")
        buf.write("\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23")
        buf.write("\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30")
        buf.write("\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36")
        buf.write("\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%")
        buf.write("\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.")
        buf.write("\t.\4/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64")
        buf.write("\t\64\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:")
        buf.write("\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\t")
        buf.write("C\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH\4I\tI\4J\tJ\4K\tK\4L\t")
        buf.write("L\4M\tM\4N\tN\4O\tO\4P\tP\4Q\tQ\4R\tR\4S\tS\4T\tT\4U\t")
        buf.write("U\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\tZ\4[\t[\4\\\t\\\4]\t]\4")
        buf.write("^\t^\4_\t_\4`\t`\4a\ta\4b\tb\4c\tc\4d\td\4e\te\4f\tf\4")
        buf.write("g\tg\4h\th\4i\ti\4j\tj\4k\tk\4l\tl\4m\tm\4n\tn\4o\to\4")
        buf.write("p\tp\4q\tq\4r\tr\4s\ts\4t\tt\4u\tu\4v\tv\4w\tw\4x\tx\4")
        buf.write("y\ty\4z\tz\4{\t{\4|\t|\4}\t}\4~\t~\4\177\t\177\4\u0080")
        buf.write("\t\u0080\4\u0081\t\u0081\4\u0082\t\u0082\4\u0083\t\u0083")
        buf.write("\4\u0084\t\u0084\4\u0085\t\u0085\4\u0086\t\u0086\4\u0087")
        buf.write("\t\u0087\4\u0088\t\u0088\4\u0089\t\u0089\4\u008a\t\u008a")
        buf.write("\4\u008b\t\u008b\4\u008c\t\u008c\4\u008d\t\u008d\4\u008e")
        buf.write("\t\u008e\4\u008f\t\u008f\4\u0090\t\u0090\4\u0091\t\u0091")
        buf.write("\4\u0092\t\u0092\4\u0093\t\u0093\4\u0094\t\u0094\4\u0095")
        buf.write("\t\u0095\4\u0096\t\u0096\4\u0097\t\u0097\4\u0098\t\u0098")
        buf.write("\4\u0099\t\u0099\4\u009a\t\u009a\4\u009b\t\u009b\4\u009c")
        buf.write("\t\u009c\4\u009d\t\u009d\4\u009e\t\u009e\4\u009f\t\u009f")
        buf.write("\4\u00a0\t\u00a0\4\u00a1\t\u00a1\4\u00a2\t\u00a2\4\u00a3")
        buf.write("\t\u00a3\4\u00a4\t\u00a4\4\u00a5\t\u00a5\4\u00a6\t\u00a6")
        buf.write("\4\u00a7\t\u00a7\4\u00a8\t\u00a8\4\u00a9\t\u00a9\4\u00aa")
        buf.write("\t\u00aa\4\u00ab\t\u00ab\4\u00ac\t\u00ac\4\u00ad\t\u00ad")
        buf.write("\4\u00ae\t\u00ae\4\u00af\t\u00af\4\u00b0\t\u00b0\4\u00b1")
        buf.write("\t\u00b1\4\u00b2\t\u00b2\4\u00b3\t\u00b3\4\u00b4\t\u00b4")
        buf.write("\4\u00b5\t\u00b5\4\u00b6\t\u00b6\4\u00b7\t\u00b7\4\u00b8")
        buf.write("\t\u00b8\4\u00b9\t\u00b9\4\u00ba\t\u00ba\4\u00bb\t\u00bb")
        buf.write("\4\u00bc\t\u00bc\4\u00bd\t\u00bd\4\u00be\t\u00be\4\u00bf")
        buf.write("\t\u00bf\4\u00c0\t\u00c0\4\u00c1\t\u00c1\4\u00c2\t\u00c2")
        buf.write("\4\u00c3\t\u00c3\4\u00c4\t\u00c4\4\u00c5\t\u00c5\4\u00c6")
        buf.write("\t\u00c6\4\u00c7\t\u00c7\4\u00c8\t\u00c8\4\u00c9\t\u00c9")
        buf.write("\4\u00ca\t\u00ca\4\u00cb\t\u00cb\4\u00cc\t\u00cc\4\u00cd")
        buf.write("\t\u00cd\4\u00ce\t\u00ce\4\u00cf\t\u00cf\4\u00d0\t\u00d0")
        buf.write("\4\u00d1\t\u00d1\4\u00d2\t\u00d2\4\u00d3\t\u00d3\4\u00d4")
        buf.write("\t\u00d4\4\u00d5\t\u00d5\4\u00d6\t\u00d6\4\u00d7\t\u00d7")
        buf.write("\4\u00d8\t\u00d8\4\u00d9\t\u00d9\4\u00da\t\u00da\4\u00db")
        buf.write("\t\u00db\4\u00dc\t\u00dc\4\u00dd\t\u00dd\4\u00de\t\u00de")
        buf.write("\4\u00df\t\u00df\4\u00e0\t\u00e0\4\u00e1\t\u00e1\4\u00e2")
        buf.write("\t\u00e2\4\u00e3\t\u00e3\4\u00e4\t\u00e4\4\u00e5\t\u00e5")
        buf.write("\4\u00e6\t\u00e6\4\u00e7\t\u00e7\4\u00e8\t\u00e8\4\u00e9")
        buf.write("\t\u00e9\4\u00ea\t\u00ea\4\u00eb\t\u00eb\4\u00ec\t\u00ec")
        buf.write("\4\u00ed\t\u00ed\4\u00ee\t\u00ee\4\u00ef\t\u00ef\4\u00f0")
        buf.write("\t\u00f0\4\u00f1\t\u00f1\4\u00f2\t\u00f2\4\u00f3\t\u00f3")
        buf.write("\4\u00f4\t\u00f4\4\u00f5\t\u00f5\4\u00f6\t\u00f6\4\u00f7")
        buf.write("\t\u00f7\4\u00f8\t\u00f8\4\u00f9\t\u00f9\4\u00fa\t\u00fa")
        buf.write("\4\u00fb\t\u00fb\4\u00fc\t\u00fc\4\u00fd\t\u00fd\4\u00fe")
        buf.write("\t\u00fe\4\u00ff\t\u00ff\4\u0100\t\u0100\4\u0101\t\u0101")
buf.write("\4\u0102\t\u0102\4\u0103\t\u0103\4\u0104\t\u0104\4\u0105")
buf.write("\t\u0105\4\u0106\t\u0106\4\u0107\t\u0107\4\u0108\t\u0108")
buf.write("\4\u0109\t\u0109\4\u010a\t\u010a\4\u010b\t\u010b\4\u010c")
buf.write("\t\u010c\4\u010d\t\u010d\4\u010e\t\u010e\4\u010f\t\u010f")
buf.write("\4\u0110\t\u0110\4\u0111\t\u0111\4\u0112\t\u0112\4\u0113")
buf.write("\t\u0113\4\u0114\t\u0114\4\u0115\t\u0115\4\u0116\t\u0116")
buf.write("\4\u0117\t\u0117\4\u0118\t\u0118\4\u0119\t\u0119\4\u011a")
buf.write("\t\u011a\4\u011b\t\u011b\4\u011c\t\u011c\4\u011d\t\u011d")
buf.write("\4\u011e\t\u011e\4\u011f\t\u011f\4\u0120\t\u0120\4\u0121")
buf.write("\t\u0121\4\u0122\t\u0122\4\u0123\t\u0123\4\u0124\t\u0124")
buf.write("\4\u0125\t\u0125\4\u0126\t\u0126\4\u0127\t\u0127\4\u0128")
buf.write("\t\u0128\4\u0129\t\u0129\4\u012a\t\u012a\4\u012b\t\u012b")
buf.write("\4\u012c\t\u012c\4\u012d\t\u012d\4\u012e\t\u012e\4\u012f")
buf.write("\t\u012f\4\u0130\t\u0130\4\u0131\t\u0131\4\u0132\t\u0132")
buf.write("\4\u0133\t\u0133\4\u0134\t\u0134\4\u0135\t\u0135\4\u0136")
buf.write("\t\u0136\4\u0137\t\u0137\4\u0138\t\u0138\4\u0139\t\u0139")
buf.write("\4\u013a\t\u013a\4\u013b\t\u013b\4\u013c\t\u013c\4\u013d")
buf.write("\t\u013d\4\u013e\t\u013e\4\u013f\t\u013f\4\u0140\t\u0140")
buf.write("\4\u0141\t\u0141\4\u0142\t\u0142\4\u0143\t\u0143\4\u0144")
buf.write("\t\u0144\4\u0145\t\u0145\4\u0146\t\u0146\4\u0147\t\u0147")
buf.write("\4\u0148\t\u0148\4\u0149\t\u0149\4\u014a\t\u014a\4\u014b")
buf.write("\t\u014b\4\u014c\t\u014c\4\u014d\t\u014d\4\u014e\t\u014e")
buf.write("\4\u014f\t\u014f\4\u0150\t\u0150\4\u0151\t\u0151\4\u0152")
buf.write("\t\u0152\4\u0153\t\u0153\4\u0154\t\u0154\4\u0155\t\u0155")
buf.write("\4\u0156\t\u0156\4\u0157\t\u0157\3\2\3\2\3\3\7\3\u02b3")
buf.write("\n\3\f\3\16\3\u02b6\13\3\3\3\5\3\u02b9\n\3\3\3\6\3\u02bc")
buf.write("\n\3\r\3\16\3\u02bd\3\3\3\3\5\3\u02c2\n\3\3\3\7\3\u02c5")
buf.write("\n\3\f\3\16\3\u02c8\13\3\5\3\u02ca\n\3\3\3\6\3\u02cd\n")
buf.write("\3\r\3\16\3\u02ce\3\3\3\3\3\3\3\3\3\3\7\3\u02d6\n\3\f")
buf.write("\3\16\3\u02d9\13\3\5\3\u02db\n\3\3\4\3\4\3\4\7\4\u02e0")
buf.write("\n\4\f\4\16\4\u02e3\13\4\3\5\3\5\3\6\3\6\3\7\3\7\3\b\3")
buf.write("\b\3\t\3\t\3\n\3\n\3\13\3\13\3\f\3\f\3\r\3\r\3\16\3\16")
buf.write("\3\17\3\17\3\20\3\20\3\21\3\21\3\22\3\22\3\23\3\23\3\24")
buf.write("\3\24\3\25\3\25\3\26\3\26\3\27\3\27\3\30\3\30\3\30\3\31")
buf.write("\3\31\3\31\3\31\5\31\u0312\n\31\3\32\3\32\3\32\3\32\5")
buf.write("\32\u0318\n\32\3\33\3\33\3\33\3\33\5\33\u031e\n\33\3\34")
buf.write("\3\34\3\35\3\35\3\36\3\36\3\37\3\37\3 \3 \3!\3!\3\"\3")
buf.write("\"\3#\3#\3$\3$\3%\3%\3%\3%\7%\u0336\n%\f%\16%\u0339\13")
buf.write("%\3%\3%\7%\u033d\n%\f%\16%\u0340\13%\3%\3%\3%\3%\7%\u0346")
buf.write("\n%\f%\16%\u0349\13%\3%\5%\u034c\n%\3%\3%\3&\3&\7&\u0352")
buf.write("\n&\f&\16&\u0355\13&\3&\3&\3&\3&\3\'\7\'\u035c\n\'\f\'")
buf.write("\16\'\u035f\13\'\3\'\3\'\3\'\7\'\u0364\n\'\f\'\16\'\u0367")
buf.write("\13\'\3(\3(\3(\3(\3(\3(\3(\3(\5(\u0371\n(\5(\u0373\n(")
buf.write("\5(\u0375\n(\5(\u0377\n(\3)\3)\3)\3)\3*\3*\3*\3*\3*\3")
buf.write("*\3*\3*\3*\3+\3+\3+\3+\3+\3+\3,\3,\3,\3,\3,\3,\3-\3-\3")
buf.write("-\3-\3.\3.\3.\3.\3.\3.\3/\3/\3/\3/\3/\3/\3/\3/\3/\3/\3")
buf.write("\60\3\60\3\60\3\60\3\61\3\61\3\61\3\61\3\61\3\61\5\61")
buf.write("\u03b0\n\61\5\61\u03b2\n\61\3\62\3\62\3\62\3\62\3\62\5")
buf.write("\62\u03b9\n\62\3\63\3\63\3\63\3\64\3\64\3\64\3\64\3\64")
buf.write("\3\64\3\64\3\64\3\64\3\64\3\65\3\65\3\65\3\65\3\65\3\65")
buf.write("\5\65\u03ce\n\65\5\65\u03d0\n\65\3\66\3\66\3\66\3\66\3")
buf.write("\66\3\66\3\66\3\66\3\67\3\67\3\67\38\38\38\38\39\39\3")
buf.write("9\39\39\39\39\3:\3:\3:\3:\3:\3;\3;\3;\3;\3;\5;\u03f2\n")
buf.write(";\3<\3<\3<\3<\5<\u03f8\n<\3=\3=\3=\3=\3=\3=\3=\3=\3=\3")
buf.write("=\3=\3=\5=\u0406\n=\3>\3>\3>\3>\3>\3>\5>\u040e\n>\5>\u0410")
buf.write("\n>\3?\3?\3?\3@\3@\3@\3@\3@\3@\3@\3@\3@\3@\3A\3A\3A\3")
buf.write("A\3A\3B\3B\3B\3B\3B\3C\3C\3C\3C\3C\5C\u042e\nC\3D\3D\3")
buf.write("D\3D\3D\3D\3D\5D\u0437\nD\5D\u0439\nD\5D\u043b\nD\3E\3")
buf.write("E\3E\3E\3E\3E\3E\3E\5E\u0445\nE\5E\u0447\nE\3F\3F\3F\3")
buf.write("F\3F\3F\3G\3G\3G\3G\3G\3G\3G\3G\3G\3H\3H\3H\3H\3H\5H\u045d")
buf.write("\nH\3I\3I\3I\3I\3I\3I\3J\3J\3J\3J\3J\5J\u046a\nJ\3K\3")
buf.write("K\3K\3K\3K\3K\3K\3K\3K\3K\3K\3L\3L\3L\3L\3L\3L\3M\3M\3")
buf.write("M\3M\3M\3M\3M\3N\3N\3N\3N\3N\3N\3N\3N\3O\3O\3O\3O\3O\3")
buf.write("O\3O\3O\3P\3P\3P\3P\3P\3P\3P\3P\3P\3P\3P\3Q\3Q\3Q\3Q\3")
buf.write("Q\3Q\3Q\5Q\u04a6\nQ\5Q\u04a8\nQ\5Q\u04aa\nQ\3R\3R\3R\3")
buf.write("R\3R\3R\3R\3R\3S\3S\3S\3S\3S\3S\3S\3S\5S\u04bc\nS\5S\u04be")
buf.write("\nS\5S\u04c0\nS\5S\u04c2\nS\3T\3T\3T\3T\3T\3U\3U\3U\3")
buf.write("U\3U\5U\u04ce\nU\3V\3V\3V\3V\3V\3V\5V\u04d6\nV\5V\u04d8")
buf.write("\nV\3W\3W\3W\3W\3W\3W\5W\u04e0\nW\5W\u04e2\nW\3X\3X\3")
buf.write("X\3X\3X\3X\3X\3X\3X\5X\u04ed\nX\5X\u04ef\nX\3Y\3Y\3Y\3")
buf.write("Y\3Y\3Y\3Y\3Y\3Y\3Y\3Y\3Y\3Z\3Z\3Z\3Z\3Z\3[\3[\3[\3[\3")
buf.write("\\\3\\\3\\\3\\\3]\3]\3]\3]\3]\3]\3]\3]\3]\3]\5]\u0514")
buf.write("\n]\5]\u0516\n]\5]\u0518\n]\5]\u051a\n]\5]\u051c\n]\5")
buf.write("]\u051e\n]\3^\3^\3^\3^\3^\5^\u0525\n^\3_\3_\3_\3_\3_\3")
buf.write("_\3_\3_\5_\u052f\n_\5_\u0531\n_\3`\3`\3`\3`\3`\3`\3`\5")
buf.write("`\u053a\n`\5`\u053c\n`\5`\u053e\n`\3a\3a\3a\3a\3a\3a\3")
buf.write("a\3a\3b\3b\3b\3b\3b\3b\5b\u054e\nb\5b\u0550\nb\3c\3c\3")
buf.write("c\3c\3c\3c\5c\u0558\nc\5c\u055a\nc\3d\3d\3d\3d\3d\3d\3")
buf.write("d\3d\3e\3e\3e\3e\3e\3e\3e\3e\3e\3e\3e\3f\3f\3f\3f\3f\3")
buf.write("f\3f\3f\3f\5f\u0578\nf\5f\u057a\nf\5f\u057c\nf\5f\u057e")
buf.write("\nf\5f\u0580\nf\3g\3g\3g\3g\3g\3g\3g\3g\3g\3h\3h\3h\3")
buf.write("h\3h\3i\3i\3i\3j\3j\3j\3j\3j\3j\3j\3j\5j\u059b\nj\5j\u059d")
buf.write("\nj\5j\u059f\nj\5j\u05a1\nj\3k\3k\3k\3k\3k\3l\3l\3l\3")
buf.write("l\3l\3m\3m\3m\3m\3m\3n\3n\3n\3n\3n\3o\3o\3o\3o\3o\3o\3")
buf.write("o\3o\3p\3p\3p\3p\3p\3p\3p\5p\u05c6\np\5p\u05c8\np\5p\u05ca")
buf.write("\np\3q\3q\3q\3q\3q\3q\3q\3q\3q\5q\u05d5\nq\5q\u05d7\n")
buf.write("q\5q\u05d9\nq\5q\u05db\nq\3r\3r\3r\3r\3r\3r\3s\3s\3s\3")
buf.write("s\3s\3s\5s\u05e9\ns\3t\3t\3t\3t\3t\5t\u05f0\nt\3u\3u\3")
buf.write("u\3u\3u\3u\3u\5u\u05f9\nu\5u\u05fb\nu\5u\u05fd\nu\3u\3")
buf.write("u\3u\3u\3u\3u\3u\5u\u0606\nu\5u\u0608\nu\5u\u060a\nu\5")
buf.write("u\u060c\nu\5u\u060e\nu\3v\3v\3v\3v\3v\3v\3v\5v\u0617\n")
buf.write("v\5v\u0619\nv\5v\u061b\nv\3w\3w\3w\3w\3w\3w\3w\5w\u0624")
buf.write("\nw\5w\u0626\nw\5w\u0628\nw\3x\3x\3x\3x\3x\3x\5x\u0630")
buf.write("\nx\3y\3y\3y\3y\3y\3y\3y\5y\u0639\ny\5y\u063b\ny\5y\u063d")
buf.write("\ny\3z\3z\3z\3z\3z\5z\u0644\nz\3{\3{\3{\3{\3{\5{\u064b")
buf.write("\n{\3|\3|\3|\3|\3|\3|\3|\3}\3}\3}\3}\3}\3}\3}\3~\3~\3")
buf.write("~\3~\3~\5~\u0660\n~\3\177\3\177\3\177\3\177\3\177\3\177")
buf.write("\3\177\3\u0080\3\u0080\3\u0080\3\u0080\3\u0080\3\u0080")
buf.write("\3\u0080\3\u0080\3\u0080\3\u0080\5\u0080\u0673\n\u0080")
buf.write("\3\u0081\3\u0081\3\u0081\3\u0081\3\u0081\3\u0081\3\u0081")
buf.write("\3\u0081\3\u0081\3\u0082\3\u0082\3\u0082\3\u0082\3\u0082")
buf.write("\3\u0082\3\u0082\3\u0082\5\u0082\u0686\n\u0082\5\u0082")
buf.write("\u0688\n\u0082\5\u0082\u068a\n\u0082\5\u0082\u068c\n\u0082")
buf.write("\3\u0083\3\u0083\3\u0083\3\u0083\3\u0083\3\u0083\3\u0083")
buf.write("\3\u0084\3\u0084\3\u0084\3\u0084\3\u0084\5\u0084\u069a")
buf.write("\n\u0084\3\u0085\3\u0085\3\u0085\3\u0085\3\u0085\3\u0086")
buf.write("\3\u0086\3\u0086\3\u0086\3\u0086\3\u0086\5\u0086\u06a7")
buf.write("\n\u0086\5\u0086\u06a9\n\u0086\3\u0087\3\u0087\3\u0087")
buf.write("\3\u0087\3\u0087\3\u0087\3\u0087\5\u0087\u06b2\n\u0087")
buf.write("\5\u0087\u06b4\n\u0087\5\u0087\u06b6\n\u0087\3\u0088\3")
buf.write("\u0088\3\u0088\3\u0088\3\u0088\3\u0088\3\u0089\3\u0089")
buf.write("\3\u0089\3\u0089\3\u0089\3\u008a\3\u008a\3\u008a\3\u008a")
buf.write("\3\u008b\3\u008b\3\u008b\3\u008b\3\u008b\3\u008b\3\u008c")
buf.write("\3\u008c\3\u008c\3\u008c\3\u008c\3\u008d\3\u008d\3\u008d")
buf.write("\3\u008d\3\u008d\3\u008d\3\u008d\3\u008d\3\u008d\3\u008d")
buf.write("\3\u008e\3\u008e\3\u008e\3\u008e\3\u008e\3\u008e\3\u008e")
buf.write("\3\u008e\3\u008f\3\u008f\3\u008f\3\u008f\3\u008f\3\u0090")
buf.write("\3\u0090\3\u0090\3\u0090\3\u0090\3\u0091\3\u0091\3\u0091")
buf.write("\3\u0091\3\u0091\3\u0091\5\u0091\u06f4\n\u0091\5\u0091")
buf.write("\u06f6\n\u0091\3\u0092\3\u0092\3\u0092\3\u0092\3\u0092")
buf.write("\3\u0093\3\u0093\3\u0093\3\u0093\3\u0093\5\u0093\u0702")
buf.write("\n\u0093\3\u0094\3\u0094\3\u0094\3\u0094\3\u0094\3\u0095")
buf.write("\3\u0095\3\u0095\3\u0095\3\u0095\3\u0096\3\u0096\3\u0096")
buf.write("\3\u0096\3\u0096\3\u0097\3\u0097\3\u0097\3\u0098\3\u0098")
buf.write("\3\u0098\3\u0098\3\u0098\3\u0098\3\u0099\3\u0099\3\u0099")
buf.write("\3\u009a\3\u009a\3\u009a\3\u009a\3\u009a\3\u009a\3\u009a")
buf.write("\3\u009a\3\u009b\3\u009b\3\u009b\3\u009b\3\u009b\3\u009b")
buf.write("\3\u009c\3\u009c\3\u009c\3\u009c\3\u009c\3\u009c\3\u009c")
buf.write("\3\u009c\3\u009d\3\u009d\3\u009d\3\u009d\3\u009d\3\u009d")
buf.write("\5\u009d\u073b\n\u009d\5\u009d\u073d\n\u009d\3\u009e\3")
buf.write("\u009e\3\u009e\3\u009e\3\u009e\3\u009f\3\u009f\3\u009f")
buf.write("\3\u009f\3\u009f\3\u00a0\3\u00a0\3\u00a0\3\u00a0\3\u00a1")
buf.write("\3\u00a1\3\u00a1\3\u00a1\3\u00a1\3\u00a1\3\u00a1\3\u00a1")
buf.write("\5\u00a1\u0755\n\u00a1\5\u00a1\u0757\n\u00a1\5\u00a1\u0759")
buf.write("\n\u00a1\5\u00a1\u075b\n\u00a1\3\u00a2\3\u00a2\3\u00a2")
buf.write("\3\u00a2\3\u00a2\3\u00a2\3\u00a3\3\u00a3\3\u00a3\3\u00a3")
buf.write("\3\u00a3\3\u00a3\3\u00a3\3\u00a3\3\u00a4\3\u00a4\3\u00a4")
buf.write("\3\u00a4\3\u00a4\3\u00a5\3\u00a5\3\u00a5\3\u00a5\3\u00a5")
buf.write("\3\u00a6\3\u00a6\3\u00a6\3\u00a6\3\u00a6\3\u00a6\3\u00a6")
buf.write("\3\u00a7\3\u00a7\3\u00a7\3\u00a7\3\u00a7\3\u00a8\3\u00a8")
buf.write("\3\u00a8\3\u00a8\3\u00a8\3\u00a8\5\u00a8\u0787\n\u00a8")
buf.write("\5\u00a8\u0789\n\u00a8\3\u00a9\3\u00a9\3\u00a9\3\u00a9")
buf.write("\3\u00a9\3\u00a9\3\u00a9\3\u00aa\3\u00aa\3\u00aa\3\u00aa")
buf.write("\3\u00aa\3\u00aa\3\u00aa\3\u00ab\3\u00ab\3\u00ab\3\u00ab")
buf.write("\3\u00ab\3\u00ac\3\u00ac\3\u00ac\3\u00ac\3\u00ac\3\u00ac")
buf.write("\3\u00ac\3\u00ad\3\u00ad\3\u00ad\3\u00ad\3\u00ae\3\u00ae")
buf.write("\3\u00ae\3\u00ae\3\u00ae\3\u00af\3\u00af\3\u00af\3\u00af")
buf.write("\3\u00af\3\u00af\3\u00af\3\u00b0\3\u00b0\3\u00b0\3\u00b0")
buf.write("\3\u00b0\3\u00b0\3\u00b0\3\u00b0\3\u00b0\3\u00b0\3\u00b1")
buf.write("\3\u00b1\3\u00b1\3\u00b1\3\u00b1\3\u00b1\3\u00b1\3\u00b2")
buf.write("\3\u00b2\3\u00b2\3\u00b2\3\u00b2\3\u00b3\3\u00b3\3\u00b3")
buf.write("\3\u00b3\3\u00b3\3\u00b3\3\u00b4\3\u00b4\3\u00b4\3\u00b4")
buf.write("\3\u00b4\3\u00b4\3\u00b4\5\u00b4\u07d8\n\u00b4\5\u00b4")
buf.write("\u07da\n\u00b4\5\u00b4\u07dc\n\u00b4\3\u00b5\3\u00b5\3")
buf.write("\u00b5\3\u00b5\3\u00b6\3\u00b6\3\u00b6\3\u00b6\3\u00b6")
buf.write("\5\u00b6\u07e7\n\u00b6\3\u00b6\3\u00b6\3\u00b6\5\u00b6")
buf.write("\u07ec\n\u00b6\3\u00b7\3\u00b7\3\u00b7\3\u00b7\3\u00b7")
buf.write("\3\u00b7\5\u00b7\u07f4\n\u00b7\5\u00b7\u07f6\n\u00b7\3")
buf.write("\u00b8\3\u00b8\3\u00b8\3\u00b8\3\u00b8\3\u00b8\3\u00b8")
buf.write("\3\u00b8\3\u00b8\3\u00b8\3\u00b8\3\u00b9\3\u00b9\3\u00b9")
buf.write("\3\u00b9\3\u00b9\3\u00ba\3\u00ba\3\u00ba\3\u00ba\3\u00ba")
buf.write("\3\u00bb\3\u00bb\3\u00bb\3\u00bb\3\u00bb\3\u00bb\3\u00bb")
buf.write("\3\u00bb\3\u00bb\3\u00bb\3\u00bc\3\u00bc\3\u00bc\3\u00bc")
buf.write("\3\u00bc\3\u00bd\3\u00bd\3\u00bd\3\u00bd\3\u00bd\3\u00bd")
buf.write("\3\u00bd\3\u00bd\3\u00be\3\u00be\3\u00be\3\u00be\3\u00be")
buf.write("\3\u00be\3\u00be\3\u00be\3\u00be\3\u00be\3\u00bf\3\u00bf")
buf.write("\3\u00bf\3\u00bf\3\u00c0\3\u00c0\3\u00c0\3\u00c0\3\u00c0")
buf.write("\3\u00c0\3\u00c0\3\u00c0\3\u00c1\3\u00c1\3\u00c1\3\u00c1")
buf.write("\3\u00c1\3\u00c1\3\u00c1\3\u00c1\3\u00c1\3\u00c2\3\u00c2")
buf.write("\3\u00c2\3\u00c2\3\u00c2\3\u00c2\3\u00c2\3\u00c3\3\u00c3")
buf.write("\3\u00c3\3\u00c3\3\u00c3\3\u00c3\3\u00c3\3\u00c3\3\u00c3")
buf.write("\3\u00c3\3\u00c3\3\u00c4\3\u00c4\3\u00c4\3\u00c4\3\u00c4")
buf.write("\3\u00c4\3\u00c4\3\u00c4\3\u00c4\3\u00c5\3\u00c5\3\u00c5")
buf.write("\3\u00c5\3\u00c5\3\u00c6\3\u00c6\3\u00c6\3\u00c6\3\u00c6")
buf.write("\3\u00c6\3\u00c6\3\u00c7\3\u00c7\3\u00c7\3\u00c7\3\u00c7")
buf.write("\3\u00c7\3\u00c7\3\u00c8\3\u00c8\3\u00c8\3\u00c8\3\u00c9")
buf.write("\3\u00c9\3\u00c9\3\u00c9\3\u00c9\3\u00ca\3\u00ca\3\u00ca")
buf.write("\3\u00ca\3\u00ca\3\u00ca\3\u00ca\3\u00cb\3\u00cb\3\u00cb")
buf.write("\3\u00cb\3\u00cb\3\u00cb\3\u00cb\3\u00cb\3\u00cb\3\u00cc")
buf.write("\3\u00cc\3\u00cc\3\u00cc\3\u00cc\3\u00cc\3\u00cc\3\u00cd")
buf.write("\3\u00cd\3\u00cd\3\u00cd\3\u00cd\3\u00ce\3\u00ce\3\u00ce")
buf.write("\3\u00ce\3\u00ce\3\u00ce\3\u00ce\3\u00cf\3\u00cf\3\u00cf")
buf.write("\3\u00cf\3\u00cf\3\u00cf\3\u00cf\3\u00d0\3\u00d0\3\u00d0")
buf.write("\3\u00d1\3\u00d1\3\u00d1\3\u00d1\3\u00d2\3\u00d2\3\u00d2")
buf.write("\3\u00d3\3\u00d3\3\u00d3\3\u00d4\3\u00d4\3\u00d4\3\u00d4")
buf.write("\3\u00d4\3\u00d4\3\u00d5\3\u00d5\3\u00d5\3\u00d5\3\u00d6")
buf.write("\3\u00d6\3\u00d6\3\u00d6\3\u00d7\3\u00d7\3\u00d7\3\u00d7")
buf.write("\3\u00d8\3\u00d8\3\u00d8\3\u00d8\3\u00d8\3\u00d8\3\u00d8")
buf.write("\3\u00d8\3\u00d8\5\u00d8\u08cc\n\u00d8\5\u00d8\u08ce\n")
buf.write("\u00d8\5\u00d8\u08d0\n\u00d8\5\u00d8\u08d2\n\u00d8\5\u00d8")
buf.write("\u08d4\n\u00d8\3\u00d9\3\u00d9\3\u00d9\3\u00d9\3\u00d9")
buf.write("\3\u00da\3\u00da\3\u00da\3\u00da\3\u00db\3\u00db\3\u00db")
buf.write("\3\u00db\3\u00db\3\u00db\3\u00db\3\u00db\3\u00db\3\u00db")
buf.write("\3\u00db\5\u00db\u08ea\n\u00db\5\u00db\u08ec\n\u00db\5")
buf.write("\u00db\u08ee\n\u00db\5\u00db\u08f0\n\u00db\5\u00db\u08f2")
buf.write("\n\u00db\5\u00db\u08f4\n\u00db\5\u00db\u08f6\n\u00db\3")
buf.write("\u00db\3\u00db\3\u00db\3\u00db\3\u00db\3\u00db\3\u00db")
buf.write("\3\u00db\3\u00db\3\u00db\5\u00db\u0902\n\u00db\5\u00db")
buf.write("\u0904\n\u00db\5\u00db\u0906\n\u00db\5\u00db\u0908\n\u00db")
buf.write("\5\u00db\u090a\n\u00db\5\u00db\u090c\n\u00db\5\u00db\u090e")
buf.write("\n\u00db\3\u00dc\3\u00dc\3\u00dc\3\u00dc\3\u00dc\3\u00dc")
buf.write("\3\u00dd\3\u00dd\3\u00dd\3\u00dd\3\u00de\3\u00de\3\u00de")
buf.write("\3\u00de\3\u00de\3\u00de\5\u00de\u0920\n\u00de\3\u00df")
buf.write("\3\u00df\3\u00df\3\u00df\3\u00df\3\u00e0\3\u00e0\3\u00e0")
buf.write("\3\u00e0\3\u00e0\3\u00e0\3\u00e0\5\u00e0\u092e\n\u00e0")
buf.write("\5\u00e0\u0930\n\u00e0\5\u00e0\u0932\n\u00e0\3\u00e1\3")
buf.write("\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1")
buf.write("\3\u00e1\5\u00e1\u093d\n\u00e1\5\u00e1\u093f\n\u00e1\5")
buf.write("\u00e1\u0941\n\u00e1\5\u00e1\u0943\n\u00e1\5\u00e1\u0945")
buf.write("\n\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1\3\u00e1")
buf.write("\3\u00e1\3\u00e1\5\u00e1\u094f\n\u00e1\5\u00e1\u0951\n")
buf.write("\u00e1\5\u00e1\u0953\n\u00e1\5\u00e1\u0955\n\u00e1\5\u00e1")
buf.write("\u0957\n\u00e1\3\u00e2\3\u00e2\3\u00e2\3\u00e2\3\u00e2")
buf.write("\3\u00e2\3\u00e2\3\u00e2\3\u00e3\3\u00e3\3\u00e3\3\u00e3")
buf.write("\3\u00e3\3\u00e3\5\u00e3\u0967\n\u00e3\5\u00e3\u0969\n")
buf.write("\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3")
buf.write("\3\u00e3\5\u00e3\u0972\n\u00e3\5\u00e3\u0974\n\u00e3\5")
buf.write("\u00e3\u0976\n\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3")
buf.write("\u00e3\3\u00e3\5\u00e3\u097e\n\u00e3\5\u00e3\u0980\n\u00e3")
buf.write("\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3")
buf.write("\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3")
buf.write("\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3\3\u00e3")
buf.write("\3\u00e3\3\u00e3\3\u00e3\5\u00e3\u099a\n\u00e3\5\u00e3")
buf.write("\u099c\n\u00e3\5\u00e3\u099e\n\u00e3\5\u00e3\u09a0\n\u00e3")
buf.write("\5\u00e3\u09a2\n\u00e3\5\u00e3\u09a4\n\u00e3\3\u00e4\3")
buf.write("\u00e4\3\u00e4\3\u00e4\3\u00e4\3\u00e4\3\u00e4\3\u00e5")
buf.write("\3\u00e5\3\u00e5\3\u00e5\3\u00e5\3\u00e6\3\u00e6\3\u00e6")
buf.write("\3\u00e6\3\u00e6\3\u00e7\3\u00e7\3\u00e7\3\u00e7\3\u00e7")
buf.write("\3\u00e7\5\u00e7\u09bd\n\u00e7\5\u00e7\u09bf\n\u00e7\3")
buf.write("\u00e8\3\u00e8\3\u00e8\3\u00e8\3\u00e8\3\u00e8\3\u00e8")
buf.write("\3\u00e9\3\u00e9\3\u00e9\3\u00e9\3\u00e9\3\u00e9\3\u00e9")
buf.write("\3\u00e9\3\u00ea\3\u00ea\3\u00ea\3\u00ea\3\u00ea\3\u00ea")
buf.write("\3\u00ea\3\u00ea\3\u00ea\3\u00ea\3\u00eb\3\u00eb\3\u00eb")
buf.write("\3\u00eb\3\u00eb\3\u00eb\3\u00eb\3\u00eb\3\u00ec\3\u00ec")
buf.write("\3\u00ec\3\u00ec\3\u00ec\3\u00ec\3\u00ec\5\u00ec\u09e9")
buf.write("\n\u00ec\5\u00ec\u09eb\n\u00ec\5\u00ec\u09ed\n\u00ec\3")
buf.write("\u00ed\3\u00ed\3\u00ed\3\u00ed\3\u00ed\3\u00ed\3\u00ed")
buf.write("\3\u00ed\3\u00ed\3\u00ee\3\u00ee\3\u00ee\3\u00ee\3\u00ee")
buf.write("\3\u00ee\3\u00ee\3\u00ee\3\u00ee\3\u00ef\3\u00ef\3\u00ef")
buf.write("\3\u00ef\3\u00ef\3\u00ef\3\u00ef\5\u00ef\u0a08\n\u00ef")
buf.write("\5\u00ef\u0a0a\n\u00ef\5\u00ef\u0a0c\n\u00ef\3\u00f0\3")
buf.write("\u00f0\3\u00f0\3\u00f0\3\u00f0\3\u00f0\5\u00f0\u0a14\n")
buf.write("\u00f0\5\u00f0\u0a16\n\u00f0\3\u00f1\3\u00f1\3\u00f1\3")
buf.write("\u00f1\3\u00f1\3\u00f1\3\u00f1\5\u00f1\u0a1f\n\u00f1\5")
buf.write("\u00f1\u0a21\n\u00f1\5\u00f1\u0a23\n\u00f1\3\u00f2\3\u00f2")
buf.write("\3\u00f2\3\u00f2\3\u00f2\3\u00f2\3\u00f2\3\u00f3\3\u00f3")
buf.write("\3\u00f3\3\u00f3\3\u00f3\3\u00f3\3\u00f3\3\u00f3\3\u00f3")
buf.write("\3\u00f3\3\u00f4\3\u00f4\3\u00f4\3\u00f4\3\u00f4\3\u00f5")
buf.write("\3\u00f5\3\u00f5\3\u00f5\3\u00f5\3\u00f5\3\u00f5\5\u00f5")
buf.write("\u0a42\n\u00f5\5\u00f5\u0a44\n\u00f5\5\u00f5\u0a46\n\u00f5")
buf.write("\3\u00f6\3\u00f6\3\u00f6\3\u00f6\3\u00f6\5\u00f6\u0a4d")
buf.write("\n\u00f6\3\u00f7\3\u00f7\3\u00f7\3\u00f7\3\u00f7\3\u00f7")
buf.write("\5\u00f7\u0a55\n\u00f7\5\u00f7\u0a57\n\u00f7\3\u00f8\3")
buf.write("\u00f8\3\u00f8\3\u00f8\3\u00f8\5\u00f8\u0a5e\n\u00f8\3")
buf.write("\u00f8\3\u00f8\3\u00f8\5\u00f8\u0a63\n\u00f8\3\u00f9\3")
buf.write("\u00f9\3\u00f9\3\u00f9\3\u00f9\3\u00f9\3\u00f9\3\u00f9")
buf.write("\3\u00f9\3\u00fa\3\u00fa\3\u00fa\3\u00fa\3\u00fb\3\u00fb")
buf.write("\3\u00fb\3\u00fb\3\u00fb\3\u00fb\5\u00fb\u0a78\n\u00fb")
buf.write("\5\u00fb\u0a7a\n\u00fb\3\u00fc\3\u00fc\3\u00fc\3\u00fc")
buf.write("\3\u00fc\3\u00fd\3\u00fd\3\u00fd\3\u00fd\3\u00fd\3\u00fe")
buf.write("\3\u00fe\3\u00fe\3\u00fe\3\u00ff\3\u00ff\3\u00ff\3\u00ff")
buf.write("\3\u00ff\3\u0100\3\u0100\3\u0100\3\u0100\3\u0100\3\u0100")
buf.write("\3\u0100\5\u0100\u0a96\n\u0100\5\u0100\u0a98\n\u0100\5")
buf.write("\u0100\u0a9a\n\u0100\3\u0101\3\u0101\3\u0101\3\u0101\3")
buf.write("\u0101\3\u0101\3\u0101\3\u0102\3\u0102\3\u0102\3\u0102")
buf.write("\3\u0102\3\u0102\3\u0102\3\u0102\3\u0102\5\u0102\u0aac")
buf.write("\n\u0102\5\u0102\u0aae\n\u0102\5\u0102\u0ab0\n\u0102\5")
buf.write("\u0102\u0ab2\n\u0102\5\u0102\u0ab4\n\u0102\3\u0102\3\u0102")
buf.write("\3\u0102\3\u0102\3\u0102\3\u0102\5\u0102\u0abc\n\u0102")
buf.write("\5\u0102\u0abe\n\u0102\3\u0102\3\u0102\3\u0102\3\u0102")
buf.write("\3\u0102\3\u0102\5\u0102\u0ac6\n\u0102\5\u0102\u0ac8\n")
buf.write("\u0102\3\u0102\3\u0102\3\u0102\3\u0102\3\u0102\3\u0102")
buf.write("\3\u0102\5\u0102\u0ad1\n\u0102\5\u0102\u0ad3\n\u0102\5")
buf.write("\u0102\u0ad5\n\u0102\3\u0102\3\u0102\3\u0102\3\u0102\3")
buf.write("\u0102\3\u0102\5\u0102\u0add\n\u0102\3\u0103\3\u0103\3")
buf.write("\u0103\3\u0103\3\u0103\3\u0103\5\u0103\u0ae5\n\u0103\5")
buf.write("\u0103\u0ae7\n\u0103\3\u0104\3\u0104\3\u0104\3\u0104\3")
buf.write("\u0104\3\u0105\3\u0105\3\u0105\3\u0105\3\u0105\3\u0105")
buf.write("\5\u0105\u0af4\n\u0105\5\u0105\u0af6\n\u0105\3\u0106\3")
buf.write("\u0106\3\u0106\3\u0106\3\u0106\3\u0106\3\u0106\3\u0106")
buf.write("\3\u0106\3\u0106\3\u0107\3\u0107\3\u0107\3\u0107\3\u0108")
buf.write("\3\u0108\3\u0108\3\u0108\3\u0108\3\u0108\3\u0108\3\u0109")
buf.write("\3\u0109\3\u0109\3\u0109\3\u0109\3\u0109\5\u0109\u0b13")
buf.write("\n\u0109\3\u010a\3\u010a\3\u010a\3\u010a\3\u010a\3\u010b")
buf.write("\3\u010b\3\u010b\3\u010b\3\u010b\3\u010b\3\u010b\3\u010b")
buf.write("\3\u010b\3\u010c\3\u010c\3\u010c\3\u010c\3\u010c\3\u010d")
buf.write("\3\u010d\3\u010d\3\u010d\3\u010d\3\u010e\3\u010e\3\u010e")
buf.write("\3\u010e\3\u010e\3\u010f\3\u010f\3\u010f\3\u010f\3\u010f")
buf.write("\3\u010f\3\u010f\3\u0110\3\u0110\3\u0110\3\u0110\3\u0110")
buf.write("\3\u0111\3\u0111\3\u0111\3\u0111\3\u0111\5\u0111\u0b43")
buf.write("\n\u0111\3\u0112\3\u0112\3\u0112\3\u0112\3\u0112\3\u0112")
buf.write("\3\u0112\3\u0112\3\u0112\5\u0112\u0b4e\n\u0112\5\u0112")
buf.write("\u0b50\n\u0112\5\u0112\u0b52\n\u0112\5\u0112\u0b54\n\u0112")
buf.write("\5\u0112\u0b56\n\u0112\3\u0113\3\u0113\3\u0113\3\u0113")
buf.write("\3\u0113\3\u0113\3\u0114\3\u0114\3\u0114\3\u0114\3\u0115")
buf.write("\3\u0115\3\u0115\3\u0115\3\u0115\3\u0115\3\u0115\3\u0115")
buf.write("\3\u0116\3\u0116\3\u0116\3\u0116\3\u0116\3\u0116\3\u0116")
buf.write("\3\u0117\3\u0117\3\u0117\3\u0117\3\u0117\3\u0117\5\u0117")
buf.write("\u0b77\n\u0117\5\u0117\u0b79\n\u0117\3\u0118\3\u0118\3")
buf.write("\u0118\3\u0118\3\u0118\3\u0118\3\u0118\3\u0118\3\u0118")
buf.write("\3\u0118\3\u0118\3\u0118\3\u0119\3\u0119\3\u0119\3\u0119")
buf.write("\3\u011a\3\u011a\3\u011a\3\u011a\3\u011a\3\u011b\3\u011b")
buf.write("\3\u011b\3\u011b\3\u011b\3\u011c\3\u011c\3\u011c\3\u011c")
buf.write("\3\u011c\3\u011c\3\u011c\3\u011d\3\u011d\3\u011d\3\u011d")
buf.write("\3\u011d\3\u011e\3\u011e\3\u011e\3\u011e\3\u011e\5\u011e")
buf.write("\u0ba6\n\u011e\3\u011f\3\u011f\3\u011f\3\u011f\3\u011f")
buf.write("\3\u011f\3\u011f\5\u011f\u0baf\n\u011f\5\u011f\u0bb1\n")
buf.write("\u011f\5\u011f\u0bb3\n\u011f\3\u0120\3\u0120\3\u0120\3")
buf.write("\u0120\3\u0120\3\u0120\3\u0121\3\u0121\3\u0121\3\u0122")
buf.write("\3\u0122\3\u0122\3\u0122\3\u0123\3\u0123\3\u0123\3\u0123")
buf.write("\3\u0124\3\u0124\3\u0124\3\u0124\3\u0124\3\u0125\3\u0125")
buf.write("\3\u0125\3\u0125\3\u0125\3\u0125\3\u0125\3\u0125\3\u0125")
buf.write("\3\u0125\3\u0126\3\u0126\3\u0126\3\u0126\3\u0126\3\u0126")
buf.write("\3\u0126\3\u0126\3\u0126\3\u0127\3\u0127\3\u0127\3\u0127")
buf.write("\3\u0127\3\u0127\3\u0127\3\u0127\3\u0127\5\u0127\u0be7")
buf.write("\n\u0127\3\u0128\3\u0128\3\u0128\3\u0128\3\u0128\3\u0128")
buf.write("\3\u0128\3\u0129\3\u0129\3\u0129\3\u0129\3\u0129\3\u0129")
buf.write("\5\u0129\u0bf6\n\u0129\5\u0129\u0bf8\n\u0129\3\u012a\3")
buf.write("\u012a\3\u012a\3\u012a\3\u012a\3\u012a\5\u012a\u0c00\n")
buf.write("\u012a\5\u012a\u0c02\n\u012a\3\u012b\3\u012b\3\u012b\3")
buf.write("\u012b\3\u012c\3\u012c\3\u012c\3\u012c\3\u012c\3\u012c")
buf.write("\3\u012d\3\u012d\3\u012d\3\u012d\3\u012d\3\u012d\3\u012d")
buf.write("\3\u012e\3\u012e\3\u012e\3\u012e\3\u012e\3\u012f\3\u012f")
buf.write("\3\u012f\3\u012f\3\u012f\3\u0130\3\u0130\3\u0130\3\u0130")
buf.write("\3\u0130\3\u0130\3\u0131\3\u0131\3\u0131\3\u0131\3\u0131")
buf.write("\3\u0131\3\u0132\3\u0132\3\u0132\3\u0132\3\u0132\3\u0132")
buf.write("\3\u0132\5\u0132\u0c32\n\u0132\5\u0132\u0c34\n\u0132\5")
buf.write("\u0132\u0c36\n\u0132\3\u0133\3\u0133\3\u0133\3\u0133\3")
buf.write("\u0133\3\u0134\3\u0134\3\u0134\3\u0134\3\u0135\3\u0135")
buf.write("\3\u0135\3\u0135\3\u0135\3\u0136\3\u0136\7\u0136\u0c48")
buf.write("\n\u0136\f\u0136\16\u0136\u0c4b\13\u0136\3\u0137\3\u0137")
buf.write("\3\u0138\3\u0138\3\u0138\3\u0138\3\u0139\3\u0139\3\u013a")
buf.write("\3\u013a\3\u013b\3\u013b\3\u013c\3\u013c\3\u013d\3\u013d")
buf.write("\3\u013e\3\u013e\3\u013f\3\u013f\3\u0140\3\u0140\3\u0141")
buf.write("\3\u0141\3\u0142\3\u0142\3\u0143\3\u0143\3\u0144\3\u0144")
buf.write("\3\u0145\3\u0145\3\u0146\3\u0146\3\u0147\3\u0147\3\u0148")
buf.write("\3\u0148\3\u0149\3\u0149\3\u014a\3\u014a\3\u014b\3\u014b")
buf.write("\3\u014c\3\u014c\3\u014d\3\u014d\3\u014e\3\u014e\3\u014f")
buf.write("\3\u014f\3\u0150\3\u0150\3\u0151\3\u0151\3\u0152\3\u0152")
buf.write("\3\u0153\3\u0153\3\u0154\3\u0154\3\u0155\3\u0155\3\u0156")
buf.write("\5\u0156\u0c8e\n\u0156\3\u0157\3\u0157\5\u0157\u0c92\n")
buf.write("\u0157\2\2\u0158\3\3\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23")
buf.write("\13\25\f\27\r\31\16\33\17\35\20\37\21!\22#\23%\24\'\25")
buf.write(")\26+\27-\30/\31\61\32\63\33\65\34\67\359\36;\37= ?!A")
buf.write("\"C#E$G%I&K\'M(O)Q*S+U,W-Y.[/]\60_\61a\62c\63e\64g\65")
buf.write("i\66k\67m8o9q:s;u<w=y>{?}@\177A\u0081B\u0083C\u0085D\u0087")
buf.write("E\u0089F\u008bG\u008dH\u008fI\u0091J\u0093K\u0095L\u0097")
buf.write("M\u0099N\u009bO\u009dP\u009fQ\u00a1R\u00a3S\u00a5T\u00a7")
buf.write("U\u00a9V\u00abW\u00adX\u00afY\u00b1Z\u00b3[\u00b5\\\u00b7")
buf.write("]\u00b9^\u00bb_\u00bd`\u00bfa\u00c1b\u00c3c\u00c5d\u00c7")
buf.write("e\u00c9f\u00cbg\u00cdh\u00cfi\u00d1j\u00d3k\u00d5l\u00d7")
buf.write("m\u00d9n\u00dbo\u00ddp\u00dfq\u00e1r\u00e3s\u00e5t\u00e7")
buf.write("u\u00e9v\u00ebw\u00edx\u00efy\u00f1z\u00f3{\u00f5|\u00f7")
buf.write("}\u00f9~\u00fb\177\u00fd\u0080\u00ff\u0081\u0101\u0082")
buf.write("\u0103\u0083\u0105\u0084\u0107\u0085\u0109\u0086\u010b")
buf.write("\u0087\u010d\u0088\u010f\u0089\u0111\u008a\u0113\u008b")
buf.write("\u0115\u008c\u0117\u008d\u0119\u008e\u011b\u008f\u011d")
buf.write("\u0090\u011f\u0091\u0121\u0092\u0123\u0093\u0125\u0094")
buf.write("\u0127\u0095\u0129\u0096\u012b\u0097\u012d\u0098\u012f")
buf.write("\u0099\u0131\u009a\u0133\u009b\u0135\u009c\u0137\u009d")
buf.write("\u0139\u009e\u013b\u009f\u013d\u00a0\u013f\u00a1\u0141")
buf.write("\u00a2\u0143\u00a3\u0145\u00a4\u0147\u00a5\u0149\u00a6")
buf.write("\u014b\u00a7\u014d\u00a8\u014f\u00a9\u0151\u00aa\u0153")
buf.write("\u00ab\u0155\u00ac\u0157\u00ad\u0159\u00ae\u015b\u00af")
buf.write("\u015d\u00b0\u015f\u00b1\u0161\u00b2\u0163\u00b3\u0165")
buf.write("\u00b4\u0167\u00b5\u0169\u00b6\u016b\u00b7\u016d\u00b8")
buf.write("\u016f\u00b9\u0171\u00ba\u0173\u00bb\u0175\u00bc\u0177")
buf.write("\u00bd\u0179\u00be\u017b\u00bf\u017d\u00c0\u017f\u00c1")
buf.write("\u0181\u00c2\u0183\u00c3\u0185\u00c4\u0187\u00c5\u0189")
buf.write("\u00c6\u018b\u00c7\u018d\u00c8\u018f\u00c9\u0191\u00ca")
buf.write("\u0193\u00cb\u0195\u00cc\u0197\u00cd\u0199\u00ce\u019b")
buf.write("\u00cf\u019d\u00d0\u019f\u00d1\u01a1\u00d2\u01a3\u00d3")
buf.write("\u01a5\u00d4\u01a7\u00d5\u01a9\u00d6\u01ab\u00d7\u01ad")
buf.write("\u00d8\u01af\u00d9\u01b1\u00da\u01b3\u00db\u01b5\u00dc")
buf.write("\u01b7\u00dd\u01b9\u00de\u01bb\u00df\u01bd\u00e0\u01bf")
buf.write("\u00e1\u01c1\u00e2\u01c3\u00e3\u01c5\u00e4\u01c7\u00e5")
buf.write("\u01c9\u00e6\u01cb\u00e7\u01cd\u00e8\u01cf\u00e9\u01d1")
buf.write("\u00ea\u01d3\u00eb\u01d5\u00ec\u01d7\u00ed\u01d9\u00ee")
buf.write("\u01db\u00ef\u01dd\u00f0\u01df\u00f1\u01e1\u00f2\u01e3")
buf.write("\u00f3\u01e5\u00f4\u01e7\u00f5\u01e9\u00f6\u01eb\u00f7")
buf.write("\u01ed\u00f8\u01ef\u00f9\u01f1\u00fa\u01f3\u00fb\u01f5")
buf.write("\u00fc\u01f7\u00fd\u01f9\u00fe\u01fb\u00ff\u01fd\u0100")
buf.write("\u01ff\u0101\u0201\u0102\u0203\u0103\u0205\u0104\u0207")
buf.write("\u0105\u0209\u0106\u020b\u0107\u020d\u0108\u020f\u0109")
buf.write("\u0211\u010a\u0213\u010b\u0215\u010c\u0217\u010d\u0219")
buf.write("\u010e\u021b\u010f\u021d\u0110\u021f\u0111\u0221\u0112")
buf.write("\u0223\u0113\u0225\u0114\u0227\u0115\u0229\u0116\u022b")
buf.write("\u0117\u022d\u0118\u022f\u0119\u0231\u011a\u0233\u011b")
buf.write("\u0235\u011c\u0237\u011d\u0239\u011e\u023b\u011f\u023d")
buf.write("\u0120\u023f\u0121\u0241\u0122\u0243\u0123\u0245\u0124")
buf.write("\u0247\u0125\u0249\u0126\u024b\u0127\u024d\u0128\u024f")
buf.write("\u0129\u0251\u012a\u0253\u012b\u0255\u012c\u0257\u012d")
buf.write("\u0259\u012e\u025b\u012f\u025d\u0130\u025f\u0131\u0261")
buf.write("\u0132\u0263\u0133\u0265\u0134\u0267\u0135\u0269\u0136")
buf.write("\u026b\u0137\u026d\u0138\u026f\u0139\u0271\u013a\u0273")
buf.write("\2\u0275\2\u0277\2\u0279\2\u027b\2\u027d\2\u027f\2\u0281")
buf.write("\2\u0283\2\u0285\2\u0287\2\u0289\2\u028b\2\u028d\2\u028f")
buf.write("\2\u0291\2\u0293\2\u0295\2\u0297\2\u0299\2\u029b\2\u029d")
buf.write("\2\u029f\2\u02a1\2\u02a3\2\u02a5\2\u02a7\2\u02a9\2\u02ab")
buf.write("\2\u02ad\2\3\2$\4\2--//\3\2\f\f\4\2\f\f((\5\2\13\13\17")
buf.write("\17\"\"\4\2CCcc\4\2DDdd\4\2EEee\4\2FFff\4\2GGgg\4\2HH")
buf.write("hh\4\2IIii\4\2JJjj\4\2KKkk\4\2LLll\4\2MMmm\4\2NNnn\4\2")
buf.write("OOoo\4\2PPpp\4\2QQqq\4\2RRrr\4\2SSss\4\2TTtt\4\2UUuu\4")
buf.write("\2VVvv\4\2WWww\4\2XXxx\4\2YYyy\4\2ZZzz\4\2[[{{\4\2\\\\")
buf.write("||\3\2\62;\5\2\62;CHch\u0129\2C\\aac|\u00ac\u00ac\u00b7")
buf.write("\u00b7\u00bc\u00bc\u00c2\u00d8\u00da\u00f8\u00fa\u0243")
buf.write("\u0252\u02c3\u02c8\u02d3\u02e2\u02e6\u02f0\u02f0\u037c")
buf.write("\u037c\u0388\u0388\u038a\u038c\u038e\u038e\u0390\u03a3")
buf.write("\u03a5\u03d0\u03d2\u03f7\u03f9\u0483\u048c\u04d0\u04d2")
buf.write("\u04fb\u0502\u0511\u0533\u0558\u055b\u055b\u0563\u0589")
buf.write("\u05d2\u05ec\u05f2\u05f4\u0623\u063c\u0642\u064c\u0670")
buf.write("\u0671\u0673\u06d5\u06d7\u06d7\u06e7\u06e8\u06f0\u06f1")
buf.write("\u06fc\u06fe\u0701\u0701\u0712\u0712\u0714\u0731\u074f")
buf.write("\u076f\u0782\u07a7\u07b3\u07b3\u0906\u093b\u093f\u093f")
buf.write("\u0952\u0952\u095a\u0963\u097f\u097f\u0987\u098e\u0991")
buf.write("\u0992\u0995\u09aa\u09ac\u09b2\u09b4\u09b4\u09b8\u09bb")
buf.write("\u09bf\u09bf\u09d0\u09d0\u09de\u09df\u09e1\u09e3\u09f2")
buf.write("\u09f3\u0a07\u0a0c\u0a11\u0a12\u0a15\u0a2a\u0a2c\u0a32")
buf.write("\u0a34\u0a35\u0a37\u0a38\u0a3a\u0a3b\u0a5b\u0a5e\u0a60")
buf.write("\u0a60\u0a74\u0a76\u0a87\u0a8f\u0a91\u0a93\u0a95\u0aaa")
buf.write("\u0aac\u0ab2\u0ab4\u0ab5\u0ab7\u0abb\u0abf\u0abf\u0ad2")
buf.write("\u0ad2\u0ae2\u0ae3\u0b07\u0b0e\u0b11\u0b12\u0b15\u0b2a")
buf.write("\u0b2c\u0b32\u0b34\u0b35\u0b37\u0b3b\u0b3f\u0b3f\u0b5e")
buf.write("\u0b5f\u0b61\u0b63\u0b73\u0b73\u0b85\u0b85\u0b87\u0b8c")
buf.write("\u0b90\u0b92\u0b94\u0b97\u0b9b\u0b9c\u0b9e\u0b9e\u0ba0")
buf.write("\u0ba1\u0ba5\u0ba6\u0baa\u0bac\u0bb0\u0bbb\u0c07\u0c0e")
buf.write("\u0c10\u0c12\u0c14\u0c2a\u0c2c\u0c35\u0c37\u0c3b\u0c62")
buf.write("\u0c63\u0c87\u0c8e\u0c90\u0c92\u0c94\u0caa\u0cac\u0cb5")
buf.write("\u0cb7\u0cbb\u0cbf\u0cbf\u0ce0\u0ce0\u0ce2\u0ce3\u0d07")
buf.write("\u0d0e\u0d10\u0d12\u0d14\u0d2a\u0d2c\u0d3b\u0d62\u0d63")
buf.write("\u0d87\u0d98\u0d9c\u0db3\u0db5\u0dbd\u0dbf\u0dbf\u0dc2")
buf.write("\u0dc8\u0e03\u0e32\u0e34\u0e35\u0e42\u0e48\u0e83\u0e84")
buf.write("\u0e86\u0e86\u0e89\u0e8a\u0e8c\u0e8c\u0e8f\u0e8f\u0e96")
buf.write("\u0e99\u0e9b\u0ea1\u0ea3\u0ea5\u0ea7\u0ea7\u0ea9\u0ea9")
buf.write("\u0eac\u0ead\u0eaf\u0eb2\u0eb4\u0eb5\u0ebf\u0ebf\u0ec2")
buf.write("\u0ec6\u0ec8\u0ec8\u0ede\u0edf\u0f02\u0f02\u0f42\u0f49")
buf.write("\u0f4b\u0f6c\u0f8a\u0f8d\u1002\u1023\u1025\u1029\u102b")
buf.write("\u102c\u1052\u1057\u10a2\u10c7\u10d2\u10fc\u10fe\u10fe")
buf.write("\u1102\u115b\u1161\u11a4\u11aa\u11fb\u1202\u124a\u124c")
buf.write("\u124f\u1252\u1258\u125a\u125a\u125c\u125f\u1262\u128a")
buf.write("\u128c\u128f\u1292\u12b2\u12b4\u12b7\u12ba\u12c0\u12c2")
buf.write("\u12c2\u12c4\u12c7\u12ca\u12d8\u12da\u1312\u1314\u1317")
buf.write("\u131a\u135c\u1382\u1391\u13a2\u13f6\u1403\u166e\u1671")
buf.write("\u1678\u1683\u169c\u16a2\u16ec\u16f0\u16f2\u1702\u170e")
buf.write("\u1710\u1713\u1722\u1733\u1742\u1753\u1762\u176e\u1770")
buf.write("\u1772\u1782\u17b5\u17d9\u17d9\u17de\u17de\u1822\u1879")
buf.write("\u1882\u18aa\u1902\u191e\u1952\u196f\u1972\u1976\u1982")
buf.write("\u19ab\u19c3\u19c9\u1a02\u1a18\u1d02\u1dc1\u1e02\u1e9d")
buf.write("\u1ea2\u1efb\u1f02\u1f17\u1f1a\u1f1f\u1f22\u1f47\u1f4a")
buf.write("\u1f4f\u1f52\u1f59\u1f5b\u1f5b\u1f5d\u1f5d\u1f5f\u1f5f")
buf.write("\u1f61\u1f7f\u1f82\u1fb6\u1fb8\u1fbe\u1fc0\u1fc0\u1fc4")
buf.write("\u1fc6\u1fc8\u1fce\u1fd2\u1fd5\u1fd8\u1fdd\u1fe2\u1fee")
buf.write("\u1ff4\u1ff6\u1ff8\u1ffe\u2073\u2073\u2081\u2081\u2092")
buf.write("\u2096\u2104\u2104\u2109\u2109\u210c\u2115\u2117\u2117")
buf.write("\u211a\u211f\u2126\u2126\u2128\u2128\u212a\u212a\u212c")
buf.write("\u2133\u2135\u213b\u213e\u2141\u2147\u214b\u2162\u2185")
buf.write("\u2c02\u2c30\u2c32\u2c60\u2c82\u2ce6\u2d02\u2d27\u2d32")
buf.write("\u2d67\u2d71\u2d71\u2d82\u2d98\u2da2\u2da8\u2daa\u2db0")
buf.write("\u2db2\u2db8\u2dba\u2dc0\u2dc2\u2dc8\u2dca\u2dd0\u2dd2")
buf.write("\u2dd8\u2dda\u2de0\u3007\u3009\u3023\u302b\u3033\u3037")
buf.write("\u303a\u303e\u3043\u3098\u309d\u30a1\u30a3\u30fc\u30fe")
buf.write("\u3101\u3107\u312e\u3133\u3190\u31a2\u31b9\u31f2\u3201")
buf.write("\u3402\u4db7\u4e02\u9fbd\ua002\ua48e\ua802\ua803\ua805")
buf.write("\ua807\ua809\ua80c\ua80e\ua824\uac02\ud7a5\uf902\ufa2f")
buf.write("\ufa32\ufa6c\ufa72\ufadb\ufb02\ufb08\ufb15\ufb19\ufb1f")
buf.write("\ufb1f\ufb21\ufb2a\ufb2c\ufb38\ufb3a\ufb3e\ufb40\ufb40")
buf.write("\ufb42\ufb43\ufb45\ufb46\ufb48\ufbb3\ufbd5\ufd3f\ufd52")
buf.write("\ufd91\ufd94\ufdc9\ufdf2\ufdfd\ufe72\ufe76\ufe78\ufefe")
buf.write("\uff23\uff3c\uff43\uff5c\uff68\uffc0\uffc4\uffc9\uffcc")
buf.write("\uffd1\uffd4\uffd9\uffdc\uffde\u0096\2\62;\u0302\u0371")
buf.write("\u0485\u0488\u0593\u05bb\u05bd\u05bf\u05c1\u05c1\u05c3")
buf.write("\u05c4\u05c6\u05c7\u05c9\u05c9\u0612\u0617\u064d\u0660")
buf.write("\u0662\u066b\u0672\u0672\u06d8\u06de\u06e1\u06e6\u06e9")
buf.write("\u06ea\u06ec\u06ef\u06f2\u06fb\u0713\u0713\u0732\u074c")
buf.write("\u07a8\u07b2\u0903\u0905\u093e\u093e\u0940\u094f\u0953")
buf.write("\u0956\u0964\u0965\u0968\u0971\u0983\u0985\u09be\u09be")
buf.write("\u09c0\u09c6\u09c9\u09ca\u09cd\u09cf\u09d9\u09d9\u09e4")
buf.write("\u09e5\u09e8\u09f1\u0a03\u0a05\u0a3e\u0a3e\u0a40\u0a44")
buf.write("\u0a49\u0a4a\u0a4d\u0a4f\u0a68\u0a73\u0a83\u0a85\u0abe")
buf.write("\u0abe\u0ac0\u0ac7\u0ac9\u0acb\u0acd\u0acf\u0ae4\u0ae5")
buf.write("\u0ae8\u0af1\u0b03\u0b05\u0b3e\u0b3e\u0b40\u0b45\u0b49")
buf.write("\u0b4a\u0b4d\u0b4f\u0b58\u0b59\u0b68\u0b71\u0b84\u0b84")
buf.write("\u0bc0\u0bc4\u0bc8\u0bca\u0bcc\u0bcf\u0bd9\u0bd9\u0be8")
buf.write("\u0bf1\u0c03\u0c05\u0c40\u0c46\u0c48\u0c4a\u0c4c\u0c4f")
buf.write("\u0c57\u0c58\u0c68\u0c71\u0c84\u0c85\u0cbe\u0cbe\u0cc0")
buf.write("\u0cc6\u0cc8\u0cca\u0ccc\u0ccf\u0cd7\u0cd8\u0ce8\u0cf1")
buf.write("\u0d04\u0d05\u0d40\u0d45\u0d48\u0d4a\u0d4c\u0d4f\u0d59")
buf.write("\u0d59\u0d68\u0d71\u0d84\u0d85\u0dcc\u0dcc\u0dd1\u0dd6")
buf.write("\u0dd8\u0dd8\u0dda\u0de1\u0df4\u0df5\u0e33\u0e33\u0e36")
buf.write("\u0e3c\u0e49\u0e50\u0e52\u0e5b\u0eb3\u0eb3\u0eb6\u0ebb")
buf.write("\u0ebd\u0ebe\u0eca\u0ecf\u0ed2\u0edb\u0f1a\u0f1b\u0f22")
buf.write("\u0f2b\u0f37\u0f37\u0f39\u0f39\u0f3b\u0f3b\u0f40\u0f41")
buf.write("\u0f73\u0f86\u0f88\u0f89\u0f92\u0f99\u0f9b\u0fbe\u0fc8")
buf.write("\u0fc8\u102e\u1034\u1038\u103b\u1042\u104b\u1058\u105b")
buf.write("\u1361\u1361\u136b\u1373\u1714\u1716\u1734\u1736\u1754")
buf.write("\u1755\u1774\u1775\u17b8\u17d5\u17df\u17df\u17e2\u17eb")
buf.write("\u180d\u180f\u1812\u181b\u18ab\u18ab\u1922\u192d\u1932")
buf.write("\u193d\u1948\u1951\u19b2\u19c2\u19ca\u19cb\u19d2\u19db")
buf.write("\u1a19\u1a1d\u1dc2\u1dc5\u2041\u2042\u2056\u2056\u20d2")
buf.write("\u20de\u20e3\u20e3\u20e7\u20ed\u302c\u3031\u309b\u309c")
buf.write("\ua804\ua804\ua808\ua808\ua80d\ua80d\ua825\ua829\ufb20")
buf.write("\ufb20\ufe02\ufe11\ufe22\ufe25\ufe35\ufe36\ufe4f\ufe51")
buf.write("\uff12\uff1b\uff41\uff41\2\u0d7b\2\3\3\2\2\2\2\5\3\2\2")
buf.write("\2\2\7\3\2\2\2\2\t\3\2\2\2\2\13\3\2\2\2\2\r\3\2\2\2\2")
buf.write("\17\3\2\2\2\2\21\3\2\2\2\2\23\3\2\2\2\2\25\3\2\2\2\2\27")
buf.write("\3\2\2\2\2\31\3\2\2\2\2\33\3\2\2\2\2\35\3\2\2\2\2\37\3")
buf.write("\2\2\2\2!\3\2\2\2\2#\3\2\2\2\2%\3\2\2\2\2\'\3\2\2\2\2")
buf.write(")\3\2\2\2\2+\3\2\2\2\2-\3\2\2\2\2/\3\2\2\2\2\61\3\2\2")
buf.write("\2\2\63\3\2\2\2\2\65\3\2\2\2\2\67\3\2\2\2\29\3\2\2\2\2")
buf.write(";\3\2\2\2\2=\3\2\2\2\2?\3\2\2\2\2A\3\2\2\2\2C\3\2\2\2")
buf.write("\2E\3\2\2\2\2G\3\2\2\2\2I\3\2\2\2\2K\3\2\2\2\2M\3\2\2")
buf.write("\2\2O\3\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2U\3\2\2\2\2W\3\2")
buf.write("\2\2\2Y\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2\2_\3\2\2\2\2a\3")
buf.write("\2\2\2\2c\3\2\2\2\2e\3\2\2\2\2g\3\2\2\2\2i\3\2\2\2\2k")
buf.write("\3\2\2\2\2m\3\2\2\2\2o\3\2\2\2\2q\3\2\2\2\2s\3\2\2\2\2")
buf.write("u\3\2\2\2\2w\3\2\2\2\2y\3\2\2\2\2{\3\2\2\2\2}\3\2\2\2")
buf.write("\2\177\3\2\2\2\2\u0081\3\2\2\2\2\u0083\3\2\2\2\2\u0085")
buf.write("\3\2\2\2\2\u0087\3\2\2\2\2\u0089\3\2\2\2\2\u008b\3\2\2")
buf.write("\2\2\u008d\3\2\2\2\2\u008f\3\2\2\2\2\u0091\3\2\2\2\2\u0093")
buf.write("\3\2\2\2\2\u0095\3\2\2\2\2\u0097\3\2\2\2\2\u0099\3\2\2")
buf.write("\2\2\u009b\3\2\2\2\2\u009d\3\2\2\2\2\u009f\3\2\2\2\2\u00a1")
buf.write("\3\2\2\2\2\u00a3\3\2\2\2\2\u00a5\3\2\2\2\2\u00a7\3\2\2")
buf.write("\2\2\u00a9\3\2\2\2\2\u00ab\3\2\2\2\2\u00ad\3\2\2\2\2\u00af")
buf.write("\3\2\2\2\2\u00b1\3\2\2\2\2\u00b3\3\2\2\2\2\u00b5\3\2\2")
buf.write("\2\2\u00b7\3\2\2\2\2\u00b9\3\2\2\2\2\u00bb\3\2\2\2\2\u00bd")
buf.write("\3\2\2\2\2\u00bf\3\2\2\2\2\u00c1\3\2\2\2\2\u00c3\3\2\2")
buf.write("\2\2\u00c5\3\2\2\2\2\u00c7\3\2\2\2\2\u00c9\3\2\2\2\2\u00cb")
buf.write("\3\2\2\2\2\u00cd\3\2\2\2\2\u00cf\3\2\2\2\2\u00d1\3\2\2")
buf.write("\2\2\u00d3\3\2\2\2\2\u00d5\3\2\2\2\2\u00d7\3\2\2\2\2\u00d9")
buf.write("\3\2\2\2\2\u00db\3\2\2\2\2\u00dd\3\2\2\2\2\u00df\3\2\2")
buf.write("\2\2\u00e1\3\2\2\2\2\u00e3\3\2\2\2\2\u00e5\3\2\2\2\2\u00e7")
buf.write("\3\2\2\2\2\u00e9\3\2\2\2\2\u00eb\3\2\2\2\2\u00ed\3\2\2")
buf.write("\2\2\u00ef\3\2\2\2\2\u00f1\3\2\2\2\2\u00f3\3\2\2\2\2\u00f5")
buf.write("\3\2\2\2\2\u00f7\3\2\2\2\2\u00f9\3\2\2\2\2\u00fb\3\2\2")
buf.write("\2\2\u00fd\3\2\2\2\2\u00ff\3\2\2\2\2\u0101\3\2\2\2\2\u0103")
buf.write("\3\2\2\2\2\u0105\3\2\2\2\2\u0107\3\2\2\2\2\u0109\3\2\2")
buf.write("\2\2\u010b\3\2\2\2\2\u010d\3\2\2\2\2\u010f\3\2\2\2\2\u0111")
buf.write("\3\2\2\2\2\u0113\3\2\2\2\2\u0115\3\2\2\2\2\u0117\3\2\2")
buf.write("\2\2\u0119\3\2\2\2\2\u011b\3\2\2\2\2\u011d\3\2\2\2\2\u011f")
buf.write("\3\2\2\2\2\u0121\3\2\2\2\2\u0123\3\2\2\2\2\u0125\3\2\2")
buf.write("\2\2\u0127\3\2\2\2\2\u0129\3\2\2\2\2\u012b\3\2\2\2\2\u012d")
buf.write("\3\2\2\2\2\u012f\3\2\2\2\2\u0131\3\2\2\2\2\u0133\3\2\2")
buf.write("\2\2\u0135\3\2\2\2\2\u0137\3\2\2\2\2\u0139\3\2\2\2\2\u013b")
buf.write("\3\2\2\2\2\u013d\3\2\2\2\2\u013f\3\2\2\2\2\u0141\3\2\2")
buf.write("\2\2\u0143\3\2\2\2\2\u0145\3\2\2\2\2\u0147\3\2\2\2\2\u0149")
buf.write("\3\2\2\2\2\u014b\3\2\2\2\2\u014d\3\2\2\2\2\u014f\3\2\2")
buf.write("\2\2\u0151\3\2\2\2\2\u0153\3\2\2\2\2\u0155\3\2\2\2\2\u0157")
buf.write("\3\2\2\2\2\u0159\3\2\2\2\2\u015b\3\2\2\2\2\u015d\3\2\2")
buf.write("\2\2\u015f\3\2\2\2\2\u0161\3\2\2\2\2\u0163\3\2\2\2\2\u0165")
buf.write("\3\2\2\2\2\u0167\3\2\2\2\2\u0169\3\2\2\2\2\u016b\3\2\2")
buf.write("\2\2\u016d\3\2\2\2\2\u016f\3\2\2\2\2\u0171\3\2\2\2\2\u0173")
buf.write("\3\2\2\2\2\u0175\3\2\2\2\2\u0177\3\2\2\2\2\u0179\3\2\2")
buf.write("\2\2\u017b\3\2\2\2\2\u017d\3\2\2\2\2\u017f\3\2\2\2\2\u0181")
buf.write("\3\2\2\2\2\u0183\3\2\2\2\2\u0185\3\2\2\2\2\u0187\3\2\2")
buf.write("\2\2\u0189\3\2\2\2\2\u018b\3\2\2\2\2\u018d\3\2\2\2\2\u018f")
buf.write("\3\2\2\2\2\u0191\3\2\2\2\2\u0193\3\2\2\2\2\u0195\3\2\2")
buf.write("\2\2\u0197\3\2\2\2\2\u0199\3\2\2\2\2\u019b\3\2\2\2\2\u019d")
buf.write("\3\2\2\2\2\u019f\3\2\2\2\2\u01a1\3\2\2\2\2\u01a3\3\2\2")
buf.write("\2\2\u01a5\3\2\2\2\2\u01a7\3\2\2\2\2\u01a9\3\2\2\2\2\u01ab")
buf.write("\3\2\2\2\2\u01ad\3\2\2\2\2\u01af\3\2\2\2\2\u01b1\3\2\2")
buf.write("\2\2\u01b3\3\2\2\2\2\u01b5\3\2\2\2\2\u01b7\3\2\2\2\2\u01b9")
buf.write("\3\2\2\2\2\u01bb\3\2\2\2\2\u01bd\3\2\2\2\2\u01bf\3\2\2")
buf.write("\2\2\u01c1\3\2\2\2\2\u01c3\3\2\2\2\2\u01c5\3\2\2\2\2\u01c7")
buf.write("\3\2\2\2\2\u01c9\3\2\2\2\2\u01cb\3\2\2\2\2\u01cd\3\2\2")
buf.write("\2\2\u01cf\3\2\2\2\2\u01d1\3\2\2\2\2\u01d3\3\2\2\2\2\u01d5")
buf.write("\3\2\2\2\2\u01d7\3\2\2\2\2\u01d9\3\2\2\2\2\u01db\3\2\2")
buf.write("\2\2\u01dd\3\2\2\2\2\u01df\3\2\2\2\2\u01e1\3\2\2\2\2\u01e3")
buf.write("\3\2\2\2\2\u01e5\3\2\2\2\2\u01e7\3\2\2\2\2\u01e9\3\2\2")
buf.write("\2\2\u01eb\3\2\2\2\2\u01ed\3\2\2\2\2\u01ef\3\2\2\2\2\u01f1")
buf.write("\3\2\2\2\2\u01f3\3\2\2\2\2\u01f5\3\2\2\2\2\u01f7\3\2\2")
buf.write("\2\2\u01f9\3\2\2\2\2\u01fb\3\2\2\2\2\u01fd\3\2\2\2\2\u01ff")
buf.write("\3\2\2\2\2\u0201\3\2\2\2\2\u0203\3\2\2\2\2\u0205\3\2\2")
buf.write("\2\2\u0207\3\2\2\2\2\u0209\3\2\2\2\2\u020b\3\2\2\2\2\u020d")
buf.write("\3\2\2\2\2\u020f\3\2\2\2\2\u0211\3\2\2\2\2\u0213\3\2\2")
buf.write("\2\2\u0215\3\2\2\2\2\u0217\3\2\2\2\2\u0219\3\2\2\2\2\u021b")
buf.write("\3\2\2\2\2\u021d\3\2\2\2\2\u021f\3\2\2\2\2\u0221\3\2\2")
buf.write("\2\2\u0223\3\2\2\2\2\u0225\3\2\2\2\2\u0227\3\2\2\2\2\u0229")
buf.write("\3\2\2\2\2\u022b\3\2\2\2\2\u022d\3\2\2\2\2\u022f\3\2\2")
buf.write("\2\2\u0231\3\2\2\2\2\u0233\3\2\2\2\2\u0235\3\2\2\2\2\u0237")
buf.write("\3\2\2\2\2\u0239\3\2\2\2\2\u023b\3\2\2\2\2\u023d\3\2\2")
buf.write("\2\2\u023f\3\2\2\2\2\u0241\3\2\2\2\2\u0243\3\2\2\2\2\u0245")
buf.write("\3\2\2\2\2\u0247\3\2\2\2\2\u0249\3\2\2\2\2\u024b\3\2\2")
buf.write("\2\2\u024d\3\2\2\2\2\u024f\3\2\2\2\2\u0251\3\2\2\2\2\u0253")
buf.write("\3\2\2\2\2\u0255\3\2\2\2\2\u0257\3\2\2\2\2\u0259\3\2\2")
buf.write("\2\2\u025b\3\2\2\2\2\u025d\3\2\2\2\2\u025f\3\2\2\2\2\u0261")
buf.write("\3\2\2\2\2\u0263\3\2\2\2\2\u0265\3\2\2\2\2\u0267\3\2\2")
buf.write("\2\2\u0269\3\2\2\2\2\u026b\3\2\2\2\2\u026d\3\2\2\2\2\u026f")
buf.write("\3\2\2\2\2\u0271\3\2\2\2\3\u02af\3\2\2\2\5\u02da\3\2\2")
buf.write("\2\7\u02dc\3\2\2\2\t\u02e4\3\2\2\2\13\u02e6\3\2\2\2\r")
buf.write("\u02e8\3\2\2\2\17\u02ea\3\2\2\2\21\u02ec\3\2\2\2\23\u02ee")
buf.write("\3\2\2\2\25\u02f0\3\2\2\2\27\u02f2\3\2\2\2\31\u02f4\3")
buf.write("\2\2\2\33\u02f6\3\2\2\2\35\u02f8\3\2\2\2\37\u02fa\3\2")
buf.write("\2\2!\u02fc\3\2\2\2#\u02fe\3\2\2\2%\u0300\3\2\2\2\'\u0302")
buf.write("\3\2\2\2)\u0304\3\2\2\2+\u0306\3\2\2\2-\u0308\3\2\2\2")
buf.write("/\u030a\3\2\2\2\61\u0311\3\2\2\2\63\u0317\3\2\2\2\65\u031d")
buf.write("\3\2\2\2\67\u031f\3\2\2\29\u0321\3\2\2\2;\u0323\3\2\2")
buf.write("\2=\u0325\3\2\2\2?\u0327\3\2\2\2A\u0329\3\2\2\2C\u032b")
buf.write("\3\2\2\2E\u032d\3\2\2\2G\u032f\3\2\2\2I\u034b\3\2\2\2")
buf.write("K\u034f\3\2\2\2M\u035d\3\2\2\2O\u0368\3\2\2\2Q\u0378\3")
buf.write("\2\2\2S\u037c\3\2\2\2U\u0385\3\2\2\2W\u038b\3\2\2\2Y\u0391")
buf.write("\3\2\2\2[\u0395\3\2\2\2]\u039b\3\2\2\2_\u03a5\3\2\2\2")
buf.write("a\u03a9\3\2\2\2c\u03b3\3\2\2\2e\u03ba\3\2\2\2g\u03bd\3")
buf.write("\2\2\2i\u03c7\3\2\2\2k\u03d1\3\2\2\2m\u03d9\3\2\2\2o\u03dc")
buf.write("\3\2\2\2q\u03e0\3\2\2\2s\u03e7\3\2\2\2u\u03ec\3\2\2\2")
buf.write("w\u03f7\3\2\2\2y\u0405\3\2\2\2{\u0407\3\2\2\2}\u0411\3")
buf.write("\2\2\2\177\u0414\3\2\2\2\u0081\u041e\3\2\2\2\u0083\u0423")
buf.write("\3\2\2\2\u0085\u0428\3\2\2\2\u0087\u042f\3\2\2\2\u0089")
buf.write("\u0446\3\2\2\2\u008b\u0448\3\2\2\2\u008d\u044e\3\2\2\2")
buf.write("\u008f\u0457\3\2\2\2\u0091\u045e\3\2\2\2\u0093\u0464\3")
buf.write("\2\2\2\u0095\u046b\3\2\2\2\u0097\u0476\3\2\2\2\u0099\u047c")
buf.write("\3\2\2\2\u009b\u0483\3\2\2\2\u009d\u048b\3\2\2\2\u009f")
buf.write("\u0493\3\2\2\2\u00a1\u049e\3\2\2\2\u00a3\u04ab\3\2\2\2")
buf.write("\u00a5\u04b3\3\2\2\2\u00a7\u04c3\3\2\2\2\u00a9\u04c8\3")
buf.write("\2\2\2\u00ab\u04cf\3\2\2\2\u00ad\u04d9\3\2\2\2\u00af\u04e3")
buf.write("\3\2\2\2\u00b1\u04f0\3\2\2\2\u00b3\u04fc\3\2\2\2\u00b5")
buf.write("\u0501\3\2\2\2\u00b7\u0505\3\2\2\2\u00b9\u0509\3\2\2\2")
buf.write("\u00bb\u051f\3\2\2\2\u00bd\u0526\3\2\2\2\u00bf\u0532\3")
buf.write("\2\2\2\u00c1\u053f\3\2\2\2\u00c3\u0547\3\2\2\2\u00c5\u0551")
buf.write("\3\2\2\2\u00c7\u055b\3\2\2\2\u00c9\u0563\3\2\2\2\u00cb")
buf.write("\u056e\3\2\2\2\u00cd\u0581\3\2\2\2\u00cf\u058a\3\2\2\2")
buf.write("\u00d1\u058f\3\2\2\2\u00d3\u0592\3\2\2\2\u00d5\u05a2\3")
buf.write("\2\2\2\u00d7\u05a7\3\2\2\2\u00d9\u05ac\3\2\2\2\u00db\u05b1")
buf.write("\3\2\2\2\u00dd\u05b6\3\2\2\2\u00df\u05be\3\2\2\2\u00e1")
buf.write("\u05cb\3\2\2\2\u00e3\u05dc\3\2\2\2\u00e5\u05e2\3\2\2\2")
buf.write("\u00e7\u05ea\3\2\2\2\u00e9\u060d\3\2\2\2\u00eb\u060f\3")
buf.write("\2\2\2\u00ed\u061c\3\2\2\2\u00ef\u0629\3\2\2\2\u00f1\u0631")
buf.write("\3\2\2\2\u00f3\u063e\3\2\2\2\u00f5\u0645\3\2\2\2\u00f7")
buf.write("\u064c\3\2\2\2\u00f9\u0653\3\2\2\2\u00fb\u065a\3\2\2\2")
buf.write("\u00fd\u0661\3\2\2\2\u00ff\u0668\3\2\2\2\u0101\u0674\3")
buf.write("\2\2\2\u0103\u067d\3\2\2\2\u0105\u068d\3\2\2\2\u0107\u0694")
buf.write("\3\2\2\2\u0109\u069b\3\2\2\2\u010b\u06a0\3\2\2\2\u010d")
buf.write("\u06aa\3\2\2\2\u010f\u06b7\3\2\2\2\u0111\u06bd\3\2\2\2")
buf.write("\u0113\u06c2\3\2\2\2\u0115\u06c6\3\2\2\2\u0117\u06cc\3")
buf.write("\2\2\2\u0119\u06d1\3\2\2\2\u011b\u06db\3\2\2\2\u011d\u06e3")
buf.write("\3\2\2\2\u011f\u06e8\3\2\2\2\u0121\u06ed\3\2\2\2\u0123")
buf.write("\u06f7\3\2\2\2\u0125\u06fc\3\2\2\2\u0127\u0703\3\2\2\2")
buf.write("\u0129\u0708\3\2\2\2\u012b\u070d\3\2\2\2\u012d\u0712\3")
buf.write("\2\2\2\u012f\u0715\3\2\2\2\u0131\u071b\3\2\2\2\u0133\u071e")
buf.write("\3\2\2\2\u0135\u0726\3\2\2\2\u0137\u072c\3\2\2\2\u0139")
buf.write("\u0734\3\2\2\2\u013b\u073e\3\2\2\2\u013d\u0743\3\2\2\2")
buf.write("\u013f\u0748\3\2\2\2\u0141\u074c\3\2\2\2\u0143\u075c\3")
buf.write("\2\2\2\u0145\u0762\3\2\2\2\u0147\u076a\3\2\2\2\u0149\u076f")
buf.write("\3\2\2\2\u014b\u0774\3\2\2\2\u014d\u077b\3\2\2\2\u014f")
buf.write("\u0780\3\2\2\2\u0151\u078a\3\2\2\2\u0153\u0791\3\2\2\2")
buf.write("\u0155\u0798\3\2\2\2\u0157\u079d\3\2\2\2\u0159\u07a4\3")
buf.write("\2\2\2\u015b\u07a8\3\2\2\2\u015d\u07ad\3\2\2\2\u015f\u07b4")
buf.write("\3\2\2\2\u0161\u07be\3\2\2\2\u0163\u07c5\3\2\2\2\u0165")
buf.write("\u07ca\3\2\2\2\u0167\u07d0\3\2\2\2\u0169\u07dd\3\2\2\2")
buf.write("\u016b\u07eb\3\2\2\2\u016d\u07ed\3\2\2\2\u016f\u07f7\3")
buf.write("\2\2\2\u0171\u0802\3\2\2\2\u0173\u0807\3\2\2\2\u0175\u080c")
buf.write("\3\2\2\2\u0177\u0816\3\2\2\2\u0179\u081b\3\2\2\2\u017b")
buf.write("\u0823\3\2\2\2\u017d\u082d\3\2\2\2\u017f\u0831\3\2\2\2")
buf.write("\u0181\u0839\3\2\2\2\u0183\u0842\3\2\2\2\u0185\u0849\3")
buf.write("\2\2\2\u0187\u0854\3\2\2\2\u0189\u085d\3\2\2\2\u018b\u0862")
buf.write("\3\2\2\2\u018d\u0869\3\2\2\2\u018f\u0870\3\2\2\2\u0191")
buf.write("\u0874\3\2\2\2\u0193\u0879\3\2\2\2\u0195\u0880\3\2\2\2")
buf.write("\u0197\u0889\3\2\2\2\u0199\u0890\3\2\2\2\u019b\u0895\3")
buf.write("\2\2\2\u019d\u089c\3\2\2\2\u019f\u08a3\3\2\2\2\u01a1\u08a6")
buf.write("\3\2\2\2\u01a3\u08aa\3\2\2\2\u01a5\u08ad\3\2\2\2\u01a7")
buf.write("\u08b0\3\2\2\2\u01a9\u08b6\3\2\2\2\u01ab\u08ba\3\2\2\2")
buf.write("\u01ad\u08be\3\2\2\2\u01af\u08c2\3\2\2\2\u01b1\u08d5\3")
buf.write("\2\2\2\u01b3\u08da\3\2\2\2\u01b5\u090d\3\2\2\2\u01b7\u090f")
buf.write("\3\2\2\2\u01b9\u0915\3\2\2\2\u01bb\u0919\3\2\2\2\u01bd")
buf.write("\u0921\3\2\2\2\u01bf\u0926\3\2\2\2\u01c1\u0956\3\2\2\2")
buf.write("\u01c3\u0958\3\2\2\2\u01c5\u09a3\3\2\2\2\u01c7\u09a5\3")
buf.write("\2\2\2\u01c9\u09ac\3\2\2\2\u01cb\u09b1\3\2\2\2\u01cd\u09b6")
buf.write("\3\2\2\2\u01cf\u09c0\3\2\2\2\u01d1\u09c7\3\2\2\2\u01d3")
buf.write("\u09cf\3\2\2\2\u01d5\u09d9\3\2\2\2\u01d7\u09e1\3\2\2\2")
buf.write("\u01d9\u09ee\3\2\2\2\u01db\u09f7\3\2\2\2\u01dd\u0a00\3")
buf.write("\2\2\2\u01df\u0a0d\3\2\2\2\u01e1\u0a17\3\2\2\2\u01e3\u0a24")
buf.write("\3\2\2\2\u01e5\u0a2b\3\2\2\2\u01e7\u0a35\3\2\2\2\u01e9")
buf.write("\u0a3a\3\2\2\2\u01eb\u0a47\3\2\2\2\u01ed\u0a4e\3\2\2\2")
buf.write("\u01ef\u0a62\3\2\2\2\u01f1\u0a64\3\2\2\2\u01f3\u0a6d\3")
buf.write("\2\2\2\u01f5\u0a71\3\2\2\2\u01f7\u0a7b\3\2\2\2\u01f9\u0a80")
buf.write("\3\2\2\2\u01fb\u0a85\3\2\2\2\u01fd\u0a89\3\2\2\2\u01ff")
buf.write("\u0a8e\3\2\2\2\u0201\u0a9b\3\2\2\2\u0203\u0adc\3\2\2\2")
buf.write("\u0205\u0ade\3\2\2\2\u0207\u0ae8\3\2\2\2\u0209\u0aed\3")
buf.write("\2\2\2\u020b\u0af7\3\2\2\2\u020d\u0b01\3\2\2\2\u020f\u0b05")
buf.write("\3\2\2\2\u0211\u0b0c\3\2\2\2\u0213\u0b14\3\2\2\2\u0215")
buf.write("\u0b19\3\2\2\2\u0217\u0b22\3\2\2\2\u0219\u0b27\3\2\2\2")
buf.write("\u021b\u0b2c\3\2\2\2\u021d\u0b31\3\2\2\2\u021f\u0b38\3")
buf.write("\2\2\2\u0221\u0b3d\3\2\2\2\u0223\u0b44\3\2\2\2\u0225\u0b57")
buf.write("\3\2\2\2\u0227\u0b5d\3\2\2\2\u0229\u0b61\3\2\2\2\u022b")
buf.write("\u0b69\3\2\2\2\u022d\u0b70\3\2\2\2\u022f\u0b7a\3\2\2\2")
buf.write("\u0231\u0b86\3\2\2\2\u0233\u0b8a\3\2\2\2\u0235\u0b8f\3")
buf.write("\2\2\2\u0237\u0b94\3\2\2\2\u0239\u0b9b\3\2\2\2\u023b\u0ba0")
buf.write("\3\2\2\2\u023d\u0ba7\3\2\2\2\u023f\u0bb4\3\2\2\2\u0241")
buf.write("\u0bba\3\2\2\2\u0243\u0bbd\3\2\2\2\u0245\u0bc1\3\2\2\2")
buf.write("\u0247\u0bc5\3\2\2\2\u0249\u0bca\3\2\2\2\u024b\u0bd4\3")
buf.write("\2\2\2\u024d\u0bdd\3\2\2\2\u024f\u0be8\3\2\2\2\u0251\u0bef")
buf.write("\3\2\2\2\u0253\u0bf9\3\2\2\2\u0255\u0c03\3\2\2\2\u0257")
buf.write("\u0c07\3\2\2\2\u0259\u0c0d\3\2\2\2\u025b\u0c14\3\2\2\2")
buf.write("\u025d\u0c19\3\2\2\2\u025f\u0c1e\3\2\2\2\u0261\u0c24\3")
buf.write("\2\2\2\u0263\u0c2a\3\2\2\2\u0265\u0c37\3\2\2\2\u0267\u0c3c")
buf.write("\3\2\2\2\u0269\u0c40\3\2\2\2\u026b\u0c45\3\2\2\2\u026d")
buf.write("\u0c4c\3\2\2\2\u026f\u0c4e\3\2\2\2\u0271\u0c52\3\2\2\2")
buf.write("\u0273\u0c54\3\2\2\2\u0275\u0c56\3\2\2\2\u0277\u0c58\3")
buf.write("\2\2\2\u0279\u0c5a\3\2\2\2\u027b\u0c5c\3\2\2\2\u027d\u0c5e")
buf.write("\3\2\2\2\u027f\u0c60\3\2\2\2\u0281\u0c62\3\2\2\2\u0283")
buf.write("\u0c64\3\2\2\2\u0285\u0c66\3\2\2\2\u0287\u0c68\3\2\2\2")
buf.write("\u0289\u0c6a\3\2\2\2\u028b\u0c6c\3\2\2\2\u028d\u0c6e\3")
buf.write("\2\2\2\u028f\u0c70\3\2\2\2\u0291\u0c72\3\2\2\2\u0293\u0c74")
buf.write("\3\2\2\2\u0295\u0c76\3\2\2\2\u0297\u0c78\3\2\2\2\u0299")
buf.write("\u0c7a\3\2\2\2\u029b\u0c7c\3\2\2\2\u029d\u0c7e\3\2\2\2")
buf.write("\u029f\u0c80\3\2\2\2\u02a1\u0c82\3\2\2\2\u02a3\u0c84\3")
buf.write("\2\2\2\u02a5\u0c86\3\2\2\2\u02a7\u0c88\3\2\2\2\u02a9\u0c8a")
buf.write("\3\2\2\2\u02ab\u0c8d\3\2\2\2\u02ad\u0c91\3\2\2\2\u02af")
buf.write("\u02b0\7a\2\2\u02b0\4\3\2\2\2\u02b1\u02b3\5\u02a7\u0154")
buf.write("\2\u02b2\u02b1\3\2\2\2\u02b3\u02b6\3\2\2\2\u02b4\u02b2")
buf.write("\3\2\2\2\u02b4\u02b5\3\2\2\2\u02b5\u02b7\3\2\2\2\u02b6")
buf.write("\u02b4\3\2\2\2\u02b7\u02b9\7\60\2\2\u02b8\u02b4\3\2\2")
buf.write("\2\u02b8\u02b9\3\2\2\2\u02b9\u02bb\3\2\2\2\u02ba\u02bc")
buf.write("\5\u02a7\u0154\2\u02bb\u02ba\3\2\2\2\u02bc\u02bd\3\2\2")
buf.write("\2\u02bd\u02bb\3\2\2\2\u02bd\u02be\3\2\2\2\u02be\u02c9")
buf.write("\3\2\2\2\u02bf\u02c1\5\u027b\u013e\2\u02c0\u02c2\t\2\2")
buf.write("\2\u02c1\u02c0\3\2\2\2\u02c1\u02c2\3\2\2\2\u02c2\u02c6")
buf.write("\3\2\2\2\u02c3\u02c5\5\u02a7\u0154\2\u02c4\u02c3\3\2\2")
buf.write("\2\u02c5\u02c8\3\2\2\2\u02c6\u02c4\3\2\2\2\u02c6\u02c7")
buf.write("\3\2\2\2\u02c7\u02ca\3\2\2\2\u02c8\u02c6\3\2\2\2\u02c9")
buf.write("\u02bf\3\2\2\2\u02c9\u02ca\3\2\2\2\u02ca\u02db\3\2\2\2")
buf.write("\u02cb\u02cd\5\u02a7\u0154\2\u02cc\u02cb\3\2\2\2\u02cd")
buf.write("\u02ce\3\2\2\2\u02ce\u02cc\3\2\2\2\u02ce\u02cf\3\2\2\2")
buf.write("\u02cf\u02d0\3\2\2\2\u02d0\u02d1\7\60\2\2\u02d1\u02db")
buf.write("\3\2\2\2\u02d2\u02d3\7\62\2\2\u02d3\u02d7\5\u02a1\u0151")
buf.write("\2\u02d4\u02d6\5\u02a9\u0155\2\u02d5\u02d4\3\2\2\2\u02d6")
buf.write("\u02d9\3\2\2\2\u02d7\u02d5\3\2\2\2\u02d7\u02d8\3\2\2\2")
buf.write("\u02d8\u02db\3\2\2\2\u02d9\u02d7\3\2\2\2\u02da\u02b8\3")
buf.write("\2\2\2\u02da\u02cc\3\2\2\2\u02da\u02d2\3\2\2\2\u02db\6")
buf.write("\3\2\2\2\u02dc\u02dd\7\62\2\2\u02dd\u02e1\5\u0281\u0141")
buf.write("\2\u02de\u02e0\5\u02a9\u0155\2\u02df\u02de\3\2\2\2\u02e0")
buf.write("\u02e3\3\2\2\2\u02e1\u02df\3\2\2\2\u02e1\u02e2\3\2\2\2")
buf.write("\u02e2\b\3\2\2\2\u02e3\u02e1\3\2\2\2\u02e4\u02e5\7=\2")
buf.write("\2\u02e5\n\3\2\2\2\u02e6\u02e7\7(\2\2\u02e7\f\3\2\2\2")
buf.write("\u02e8\u02e9\7B\2\2\u02e9\16\3\2\2\2\u02ea\u02eb\7,\2")
buf.write("\2\u02eb\20\3\2\2\2\u02ec\u02ed\7-\2\2\u02ed\22\3\2\2")
buf.write("\2\u02ee\u02ef\7/\2\2\u02ef\24\3\2\2\2\u02f0\u02f1\7\61")
buf.write("\2\2\u02f1\26\3\2\2\2\u02f2\u02f3\7\60\2\2\u02f3\30\3")
buf.write("\2\2\2\u02f4\u02f5\7]\2\2\u02f5\32\3\2\2\2\u02f6\u02f7")
buf.write("\7_\2\2\u02f7\34\3\2\2\2\u02f8\u02f9\7}\2\2\u02f9\36\3")
buf.write("\2\2\2\u02fa\u02fb\7\177\2\2\u02fb \3\2\2\2\u02fc\u02fd")
buf.write("\7*\2\2\u02fd\"\3\2\2\2\u02fe\u02ff\7+\2\2\u02ff$\3\2")
buf.write("\2\2\u0300\u0301\7^\2\2\u0301&\3\2\2\2\u0302\u0303\7>")
buf.write("\2\2\u0303(\3\2\2\2\u0304\u0305\7@\2\2\u0305*\3\2\2\2")
buf.write("\u0306\u0307\7#\2\2\u0307,\3\2\2\2\u0308\u0309\7%\2\2")
buf.write("\u0309.\3\2\2\2\u030a\u030b\7?\2\2\u030b\u030c\7?\2\2")
buf.write("\u030c\60\3\2\2\2\u030d\u030e\7#\2\2\u030e\u0312\7?\2")
buf.write("\2\u030f\u0310\7>\2\2\u0310\u0312\7@\2\2\u0311\u030d\3")
buf.write("\2\2\2\u0311\u030f\3\2\2\2\u0312\62\3\2\2\2\u0313\u0314")
buf.write("\7@\2\2\u0314\u0318\7?\2\2\u0315\u0316\7?\2\2\u0316\u0318")
buf.write("\7@\2\2\u0317\u0313\3\2\2\2\u0317\u0315\3\2\2\2\u0318")
buf.write("\64\3\2\2\2\u0319\u031a\7>\2\2\u031a\u031e\7?\2\2\u031b")
buf.write("\u031c\7?\2\2\u031c\u031e\7>\2\2\u031d\u0319\3\2\2\2\u031d")
buf.write("\u031b\3\2\2\2\u031e\66\3\2\2\2\u031f\u0320\7\'\2\2\u0320")
buf.write("8\3\2\2\2\u0321\u0322\7?\2\2\u0322:\3\2\2\2\u0323\u0324")
buf.write("\7`\2\2\u0324<\3\2\2\2\u0325\u0326\7.\2\2\u0326>\3\2\2")
buf.write("\2\u0327\u0328\7&\2\2\u0328@\3\2\2\2\u0329\u032a\7<\2")
buf.write("\2\u032aB\3\2\2\2\u032b\u032c\7A\2\2\u032cD\3\2\2\2\u032d")
buf.write("\u032e\7$\2\2\u032eF\3\2\2\2\u032f\u0330\7)\2\2\u0330")
buf.write("H\3\2\2\2\u0331\u0332\7(\2\2\u0332\u0333\7(\2\2\u0333")
buf.write("\u0337\3\2\2\2\u0334\u0336\n\3\2\2\u0335\u0334\3\2\2\2")
buf.write("\u0336\u0339\3\2\2\2\u0337\u0335\3\2\2\2\u0337\u0338\3")
buf.write("\2\2\2\u0338\u034c\3\2\2\2\u0339\u0337\3\2\2\2\u033a\u033e")
buf.write("\7=\2\2\u033b\u033d\5\u026f\u0138\2\u033c\u033b\3\2\2")
buf.write("\2\u033d\u0340\3\2\2\2\u033e\u033c\3\2\2\2\u033e\u033f")
buf.write("\3\2\2\2\u033f\u0341\3\2\2\2\u0340\u033e\3\2\2\2\u0341")
buf.write("\u0342\7(\2\2\u0342\u0343\7(\2\2\u0343\u0347\3\2\2\2\u0344")
buf.write("\u0346\n\3\2\2\u0345\u0344\3\2\2\2\u0346\u0349\3\2\2\2")
buf.write("\u0347\u0345\3\2\2\2\u0347\u0348\3\2\2\2\u0348\u034a\3")
buf.write("\2\2\2\u0349\u0347\3\2\2\2\u034a\u034c\5\u026d\u0137\2")
buf.write("\u034b\u0331\3\2\2\2\u034b\u033a\3\2\2\2\u034c\u034d\3")
buf.write("\2\2\2\u034d\u034e\b%\2\2\u034eJ\3\2\2\2\u034f\u0353\7")
buf.write("=\2\2\u0350\u0352\5\u026f\u0138\2\u0351\u0350\3\2\2\2")
buf.write("\u0352\u0355\3\2\2\2\u0353\u0351\3\2\2\2\u0353\u0354\3")
buf.write("\2\2\2\u0354\u0356\3\2\2\2\u0355\u0353\3\2\2\2\u0356\u0357")
buf.write("\5\u026d\u0137\2\u0357\u0358\3\2\2\2\u0358\u0359\b&\3")
buf.write("\2\u0359L\3\2\2\2\u035a\u035c\n\4\2\2\u035b\u035a\3\2")
buf.write("\2\2\u035c\u035f\3\2\2\2\u035d\u035b\3\2\2\2\u035d\u035e")
buf.write("\3\2\2\2\u035e\u0360\3\2\2\2\u035f\u035d\3\2\2\2\u0360")
buf.write("\u0361\7(\2\2\u0361\u0365\5\u026b\u0136\2\u0362\u0364")
buf.write("\n\3\2\2\u0363\u0362\3\2\2\2\u0364\u0367\3\2\2\2\u0365")
buf.write("\u0363\3\2\2\2\u0365\u0366\3\2\2\2\u0366N\3\2\2\2\u0367")
buf.write("\u0365\3\2\2\2\u0368\u0369\5\u0273\u013a\2\u0369\u036a")
buf.write("\5\u0277\u013c\2\u036a\u036b\5\u0299\u014d\2\u036b\u0376")
buf.write("\5\u0283\u0142\2\u036c\u0374\5\u029d\u014f\2\u036d\u0372")
buf.write("\5\u0273\u013a\2\u036e\u0370\5\u0299\u014d\2\u036f\u0371")
buf.write("\5\u027b\u013e\2\u0370\u036f\3\2\2\2\u0370\u0371\3\2\2")
buf.write("\2\u0371\u0373\3\2\2\2\u0372\u036e\3\2\2\2\u0372\u0373")
buf.write("\3\2\2\2\u0373\u0375\3\2\2\2\u0374\u036d\3\2\2\2\u0374")
buf.write("\u0375\3\2\2\2\u0375\u0377\3\2\2\2\u0376\u036c\3\2\2\2")
buf.write("\u0376\u0377\3\2\2\2\u0377P\3\2\2\2\u0378\u0379\5\u0273")
buf.write("\u013a\2\u0379\u037a\5\u0279\u013d\2\u037a\u037b\5\u0279")
buf.write("\u013d\2\u037bR\3\2\2\2\u037c\u037d\5\u0273\u013a\2\u037d")
buf.write("\u037e\5\u0279\u013d\2\u037e\u037f\5\u0279\u013d\2\u037f")
buf.write("\u0380\5\u0283\u0142\2\u0380\u0381\5\u0299\u014d\2\u0381")
buf.write("\u0382\5\u0283\u0142\2\u0382\u0383\5\u029d\u014f\2\u0383")
buf.write("\u0384\5\u027b\u013e\2\u0384T\3\2\2\2\u0385\u0386\5\u0273")
buf.write("\u013a\2\u0386\u0387\5\u027d\u013f\2\u0387\u0388\5\u0299")
buf.write("\u014d\2\u0388\u0389\5\u027b\u013e\2\u0389\u038a\5\u0295")
buf.write("\u014b\2\u038aV\3\2\2\2\u038b\u038c\5\u0273\u013a\2\u038c")
buf.write("\u038d\5\u0289\u0145\2\u038d\u038e\5\u0283\u0142\2\u038e")
buf.write("\u038f\5\u0273\u013a\2\u038f\u0390\5\u0297\u014c\2\u0390")
buf.write("X\3\2\2\2\u0391\u0392\5\u0273\u013a\2\u0392\u0393\5\u0289")
buf.write("\u0145\2\u0393\u0394\5\u0289\u0145\2\u0394Z\3\2\2\2\u0395")
buf.write("\u0396\5\u0273\u013a\2\u0396\u0397\5\u0289\u0145\2\u0397")
buf.write("\u0398\5\u0299\u014d\2\u0398\u0399\5\u027b\u013e\2\u0399")
buf.write("\u039a\5\u0295\u014b\2\u039a\\\3\2\2\2\u039b\u039c\5\u0273")
buf.write("\u013a\2\u039c\u039d\5\u0289\u0145\2\u039d\u039e\5\u0299")
buf.write("\u014d\2\u039e\u039f\5\u027b\u013e\2\u039f\u03a0\5\u0295")
buf.write("\u014b\2\u03a0\u03a1\5\u028d\u0147\2\u03a1\u03a2\5\u0273")
buf.write("\u013a\2\u03a2\u03a3\5\u0299\u014d\2\u03a3\u03a4\5\u027b")
buf.write("\u013e\2\u03a4^\3\2\2\2\u03a5\u03a6\5\u0273\u013a\2\u03a6")
buf.write("\u03a7\5\u028d\u0147\2\u03a7\u03a8\5\u0279\u013d\2\u03a8")
buf.write("`\3\2\2\2\u03a9\u03aa\5\u0273\u013a\2\u03aa\u03ab\5\u0291")
buf.write("\u0149\2\u03ab\u03ac\5\u0291\u0149\2\u03ac\u03b1\5\u027b")
buf.write("\u013e\2\u03ad\u03af\5\u028d\u0147\2\u03ae\u03b0\5\u0279")
buf.write("\u013d\2\u03af\u03ae\3\2\2\2\u03af\u03b0\3\2\2\2\u03b0")
buf.write("\u03b2\3\2\2\2\u03b1\u03ad\3\2\2\2\u03b1\u03b2\3\2\2\2")
buf.write("\u03b2b\3\2\2\2\u03b3\u03b4\5\u0273\u013a\2\u03b4\u03b5")
buf.write("\5\u0295\u014b\2\u03b5\u03b6\5\u0295\u014b\2\u03b6\u03b8")
buf.write("\5\u0273\u013a\2\u03b7\u03b9\5\u02a3\u0152\2\u03b8\u03b7")
buf.write("\3\2\2\2\u03b8\u03b9\3\2\2\2\u03b9d\3\2\2\2\u03ba\u03bb")
buf.write("\5\u0273\u013a\2\u03bb\u03bc\5\u0297\u014c\2\u03bcf\3")
buf.write("\2\2\2\u03bd\u03be\5\u0273\u013a\2\u03be\u03bf\5\u0297")
buf.write("\u014c\2\u03bf\u03c0\5\u0277\u013c\2\u03c0\u03c1\5\u027b")
buf.write("\u013e\2\u03c1\u03c2\5\u028d\u0147\2\u03c2\u03c3\5\u0279")
buf.write("\u013d\2\u03c3\u03c4\5\u0283\u0142\2\u03c4\u03c5\5\u028d")
buf.write("\u0147\2\u03c5\u03c6\5\u027f\u0140\2\u03c6h\3\2\2\2\u03c7")
buf.write("\u03c8\5\u0273\u013a\2\u03c8\u03c9\5\u0297\u014c\2\u03c9")
buf.write("\u03ca\5\u0297\u014c\2\u03ca\u03cf\5\u027b\u013e\2\u03cb")
buf.write("\u03cd\5\u0295\u014b\2\u03cc\u03ce\5\u0299\u014d\2\u03cd")
buf.write("\u03cc\3\2\2\2\u03cd\u03ce\3\2\2\2\u03ce\u03d0\3\2\2\2")
buf.write("\u03cf\u03cb\3\2\2\2\u03cf\u03d0\3\2\2\2\u03d0j\3\2\2")
buf.write("\2\u03d1\u03d2\5\u0273\u013a\2\u03d2\u03d3\5\u0297\u014c")
buf.write("\2\u03d3\u03d4\5\u0297\u014c\2\u03d4\u03d5\5\u027b\u013e")
buf.write("\2\u03d5\u03d6\5\u0295\u014b\2\u03d6\u03d7\5\u0299\u014d")
buf.write("\2\u03d7\u03d8\5\u0297\u014c\2\u03d8l\3\2\2\2\u03d9\u03da")
buf.write("\5\u0273\u013a\2\u03da\u03db\5\u0299\u014d\2\u03dbn\3")
buf.write("\2\2\2\u03dc\u03dd\5\u0275\u013b\2\u03dd\u03de\5\u0273")
buf.write("\u013a\2\u03de\u03df\5\u0295\u014b\2\u03dfp\3\2\2\2\u03e0")
buf.write("\u03e1\5\u0275\u013b\2\u03e1\u03e2\5\u027b\u013e\2\u03e2")
buf.write("\u03e3\5\u027d\u013f\2\u03e3\u03e4\5\u028f\u0148\2\u03e4")
buf.write("\u03e5\5\u0295\u014b\2\u03e5\u03e6\5\u027b\u013e\2\u03e6")
buf.write("r\3\2\2\2\u03e7\u03e8\5\u0275\u013b\2\u03e8\u03e9\5\u027b")
buf.write("\u013e\2\u03e9\u03ea\5\u0289\u0145\2\u03ea\u03eb\5\u0289")
buf.write("\u0145\2\u03ebt\3\2\2\2\u03ec\u03ed\5\u0275\u013b\2\u03ed")
buf.write("\u03ee\5\u0289\u0145\2\u03ee\u03ef\5\u0273\u013a\2\u03ef")
buf.write("\u03f1\5\u028d\u0147\2\u03f0\u03f2\5\u0287\u0144\2\u03f1")
buf.write("\u03f0\3\2\2\2\u03f1\u03f2\3\2\2\2\u03f2v\3\2\2\2\u03f3")
buf.write("\u03f8\5\u027d\u013f\2\u03f4\u03f8\5\u028d\u0147\2\u03f5")
buf.write("\u03f8\5\u0299\u014d\2\u03f6\u03f8\5\u02a3\u0152\2\u03f7")
buf.write("\u03f3\3\2\2\2\u03f7\u03f4\3\2\2\2\u03f7\u03f5\3\2\2\2")
buf.write("\u03f7\u03f6\3\2\2\2\u03f8x\3\2\2\2\u03f9\u03fa\5\u0275")
buf.write("\u013b\2\u03fa\u03fb\5\u028f\u0148\2\u03fb\u03fc\5\u0299")
buf.write("\u014d\2\u03fc\u03fd\5\u0299\u014d\2\u03fd\u03fe\5\u028f")
buf.write("\u0148\2\u03fe\u03ff\5\u028b\u0146\2\u03ff\u0406\3\2\2")
buf.write("\2\u0400\u0401\5\u0275\u013b\2\u0401\u0402\5\u028f\u0148")
buf.write("\2\u0402\u0403\5\u0299\u014d\2\u0403\u0404\5\u0299\u014d")
buf.write("\2\u0404\u0406\3\2\2\2\u0405\u03f9\3\2\2\2\u0405\u0400")
buf.write("\3\2\2\2\u0406z\3\2\2\2\u0407\u0408\5\u0275\u013b\2\u0408")
buf.write("\u0409\5\u0295\u014b\2\u0409\u040a\5\u028f\u0148\2\u040a")
buf.write("\u040f\5\u029f\u0150\2\u040b\u040d\5\u0297\u014c\2\u040c")
buf.write("\u040e\5\u027b\u013e\2\u040d\u040c\3\2\2\2\u040d\u040e")
buf.write("\3\2\2\2\u040e\u0410\3\2\2\2\u040f\u040b\3\2\2\2\u040f")
buf.write("\u0410\3\2\2\2\u0410|\3\2\2\2\u0411\u0412\5\u0275\u013b")
buf.write("\2\u0412\u0413\5\u02a3\u0152\2\u0413~\3\2\2\2\u0414\u0415")
buf.write("\5\u0277\u013c\2\u0415\u0416\5\u0273\u013a\2\u0416\u0417")
buf.write("\5\u028d\u0147\2\u0417\u0418\5\u0279\u013d\2\u0418\u0419")
buf.write("\5\u0283\u0142\2\u0419\u041a\5\u0279\u013d\2\u041a\u041b")
buf.write("\5\u0273\u013a\2\u041b\u041c\5\u0299\u014d\2\u041c\u041d")
buf.write("\5\u027b\u013e\2\u041d\u0080\3\2\2\2\u041e\u041f\5\u0277")
buf.write("\u013c\2\u041f\u0420\5\u0273\u013a\2\u0420\u0421\5\u0297")
buf.write("\u014c\2\u0421\u0422\5\u027b\u013e\2\u0422\u0082\3\2\2")
buf.write("\2\u0423\u0424\5\u0277\u013c\2\u0424\u0425\5\u0273\u013a")
buf.write("\2\u0425\u0426\5\u0297\u014c\2\u0426\u0427\5\u0299\u014d")
buf.write("\2\u0427\u0084\3\2\2\2\u0428\u0429\5\u0277\u013c\2\u0429")
buf.write("\u042a\5\u0273\u013a\2\u042a\u042b\5\u0299\u014d\2\u042b")
buf.write("\u042d\5\u0277\u013c\2\u042c\u042e\5\u0281\u0141\2\u042d")
buf.write("\u042c\3\2\2\2\u042d\u042e\3\2\2\2\u042e\u0086\3\2\2\2")
buf.write("\u042f\u0430\5\u0277\u013c\2\u0430\u0431\5\u027b\u013e")
buf.write("\2\u0431\u0432\5\u028d\u0147\2\u0432\u043a\5\u0299\u014d")
buf.write("\2\u0433\u0438\5\u029b\u014e\2\u0434\u0436\5\u0295\u014b")
buf.write("\2\u0435\u0437\5\u02a3\u0152\2\u0436\u0435\3\2\2\2\u0436")
buf.write("\u0437\3\2\2\2\u0437\u0439\3\2\2\2\u0438\u0434\3\2\2\2")
buf.write("\u0438\u0439\3\2\2\2\u0439\u043b\3\2\2\2\u043a\u0433\3")
buf.write("\2\2\2\u043a\u043b\3\2\2\2\u043b\u0088\3\2\2\2\u043c\u043d")
buf.write("\5\u0277\u013c\2\u043d\u043e\5\u0279\u013d\2\u043e\u0447")
buf.write("\3\2\2\2\u043f\u0440\5\u0277\u013c\2\u0440\u0441\5\u0281")
buf.write("\u0141\2\u0441\u0442\5\u0279\u013d\2\u0442\u0444\5\u0283")
buf.write("\u0142\2\u0443\u0445\5\u0295\u014b\2\u0444\u0443\3\2\2")
buf.write("\2\u0444\u0445\3\2\2\2\u0445\u0447\3\2\2\2\u0446\u043c")
buf.write("\3\2\2\2\u0446\u043f\3\2\2\2\u0447\u008a\3\2\2\2\u0448")
buf.write("\u0449\5\u0277\u013c\2\u0449\u044a\5\u0289\u0145\2\u044a")
buf.write("\u044b\5\u0273\u013a\2\u044b\u044c\5\u0297\u014c\2\u044c")
buf.write("\u044d\5\u0297\u014c\2\u044d\u008c\3\2\2\2\u044e\u044f")
buf.write("\5\u0277\u013c\2\u044f\u0450\5\u0289\u0145\2\u0450\u0451")
buf.write("\5\u0273\u013a\2\u0451\u0452\5\u0297\u014c\2\u0452\u0453")
buf.write("\5\u0297\u014c\2\u0453\u0454\5\u0289\u0145\2\u0454\u0455")
buf.write("\5\u0283\u0142\2\u0455\u0456\5\u0275\u013b\2\u0456\u008e")
buf.write("\3\2\2\2\u0457\u0458\5\u0277\u013c\2\u0458\u0459\5\u0289")
buf.write("\u0145\2\u0459\u045a\5\u027b\u013e\2\u045a\u045c\5\u0273")
buf.write("\u013a\2\u045b\u045d\5\u0295\u014b\2\u045c\u045b\3\2\2")
buf.write("\2\u045c\u045d\3\2\2\2\u045d\u0090\3\2\2\2\u045e\u045f")
buf.write("\5\u0277\u013c\2\u045f\u0460\5\u0289\u0145\2\u0460\u0461")
buf.write("\5\u028f\u0148\2\u0461\u0462\5\u0277\u013c\2\u0462\u0463")
buf.write("\5\u0287\u0144\2\u0463\u0092\3\2\2\2\u0464\u0465\5\u0277")
buf.write("\u013c\2\u0465\u0466\5\u0289\u0145\2\u0466\u0467\5\u028f")
buf.write("\u0148\2\u0467\u0469\5\u0297\u014c\2\u0468\u046a\5\u027b")
buf.write("\u013e\2\u0469\u0468\3\2\2\2\u0469\u046a\3\2\2\2\u046a")
buf.write("\u0094\3\2\2\2\u046b\u046c\5\u0277\u013c\2\u046c\u046d")
buf.write("\5\u028f\u0148\2\u046d\u046e\5\u0289\u0145\2\u046e\u046f")
buf.write("\5\u0289\u0145\2\u046f\u0470\5\u027b\u013e\2\u0470\u0471")
buf.write("\5\u0277\u013c\2\u0471\u0472\5\u0299\u014d\2\u0472\u0473")
buf.write("\5\u0283\u0142\2\u0473\u0474\5\u028f\u0148\2\u0474\u0475")
buf.write("\5\u028d\u0147\2\u0475\u0096\3\2\2\2\u0476\u0477\5\u0277")
buf.write("\u013c\2\u0477\u0478\5\u028f\u0148\2\u0478\u0479\5\u0289")
buf.write("\u0145\2\u0479\u047a\5\u028f\u0148\2\u047a\u047b\5\u0295")
buf.write("\u014b\2\u047b\u0098\3\2\2\2\u047c\u047d\5\u0277\u013c")
buf.write("\2\u047d\u047e\5\u028f\u0148\2\u047e\u047f\5\u0289\u0145")
buf.write("\2\u047f\u0480\5\u029b\u014e\2\u0480\u0481\5\u028b\u0146")
buf.write("\2\u0481\u0482\5\u028d\u0147\2\u0482\u009a\3\2\2\2\u0483")
buf.write("\u0484\5\u0277\u013c\2\u0484\u0485\5\u028f\u0148\2\u0485")
buf.write("\u0486\5\u028b\u0146\2\u0486\u0487\5\u028b\u0146\2\u0487")
buf.write("\u0488\5\u0273\u013a\2\u0488\u0489\5\u028d\u0147\2\u0489")
buf.write("\u048a\5\u0279\u013d\2\u048a\u009c\3\2\2\2\u048b\u048c")
buf.write("\5\u0277\u013c\2\u048c\u048d\5\u028f\u0148\2\u048d\u048e")
buf.write("\5\u028b\u0146\2\u048e\u048f\5\u0291\u0149\2\u048f\u0490")
buf.write("\5\u0273\u013a\2\u0490\u0491\5\u0277\u013c\2\u0491\u0492")
buf.write("\5\u0299\u014d\2\u0492\u009e\3\2\2\2\u0493\u0494\5\u0277")
buf.write("\u013c\2\u0494\u0495\5\u028f\u0148\2\u0495\u0496\5\u028b")
buf.write("\u0146\2\u0496\u0497\5\u0291\u0149\2\u0497\u0498\5\u0273")
buf.write("\u013a\2\u0498\u0499\5\u0299\u014d\2\u0499\u049a\5\u0283")
buf.write("\u0142\2\u049a\u049b\5\u0275\u013b\2\u049b\u049c\5\u0289")
buf.write("\u0145\2\u049c\u049d\5\u027b\u013e\2\u049d\u00a0\3\2\2")
buf.write("\2\u049e\u049f\5\u0277\u013c\2\u049f\u04a0\5\u028f\u0148")
buf.write("\2\u04a0\u04a1\5\u028b\u0146\2\u04a1\u04a9\5\u0291\u0149")
buf.write("\2\u04a2\u04a7\5\u0283\u0142\2\u04a3\u04a5\5\u0289\u0145")
buf.write("\2\u04a4\u04a6\5\u027b\u013e\2\u04a5\u04a4\3\2\2\2\u04a5")
buf.write("\u04a6\3\2\2\2\u04a6\u04a8\3\2\2\2\u04a7\u04a3\3\2\2\2")
buf.write("\u04a7\u04a8\3\2\2\2\u04a8\u04aa\3\2\2\2\u04a9\u04a2\3")
buf.write("\2\2\2\u04a9\u04aa\3\2\2\2\u04aa\u00a2\3\2\2\2\u04ab\u04ac")
buf.write("\5\u0277\u013c\2\u04ac\u04ad\5\u028f\u0148\2\u04ad\u04ae")
buf.write("\5\u028d\u0147\2\u04ae\u04af\5\u0297\u014c\2\u04af\u04b0")
buf.write("\5\u028f\u0148\2\u04b0\u04b1\5\u0289\u0145\2\u04b1\u04b2")
buf.write("\5\u027b\u013e\2\u04b2\u00a4\3\2\2\2\u04b3\u04b4\5\u0277")
buf.write("\u013c\2\u04b4\u04b5\5\u028f\u0148\2\u04b5\u04b6\5\u028d")
buf.write("\u0147\2\u04b6\u04c1\5\u0299\u014d\2\u04b7\u04bf\5\u0283")
buf.write("\u0142\2\u04b8\u04bd\5\u028d\u0147\2\u04b9\u04bb\5\u029b")
buf.write("\u014e\2\u04ba\u04bc\5\u027b\u013e\2\u04bb\u04ba\3\2\2")
buf.write("\2\u04bb\u04bc\3\2\2\2\u04bc\u04be\3\2\2\2\u04bd\u04b9")
buf.write("\3\2\2\2\u04bd\u04be\3\2\2\2\u04be\u04c0\3\2\2\2\u04bf")
buf.write("\u04b8\3\2\2\2\u04bf\u04c0\3\2\2\2\u04c0\u04c2\3\2\2\2")
buf.write("\u04c1\u04b7\3\2\2\2\u04c1\u04c2\3\2\2\2\u04c2\u00a6\3")
buf.write("\2\2\2\u04c3\u04c4\5\u0277\u013c\2\u04c4\u04c5\5\u028f")
buf.write("\u0148\2\u04c5\u04c6\5\u0291\u0149\2\u04c6\u04c7\5\u02a3")
buf.write("\u0152\2\u04c7\u00a8\3\2\2\2\u04c8\u04c9\5\u0277\u013c")
buf.write("\2\u04c9\u04ca\5\u028f\u0148\2\u04ca\u04cb\5\u029b\u014e")
buf.write("\2\u04cb\u04cd\5\u028d\u0147\2\u04cc\u04ce\5\u0299\u014d")
buf.write("\2\u04cd\u04cc\3\2\2\2\u04cd\u04ce\3\2\2\2\u04ce\u00aa")
buf.write("\3\2\2\2\u04cf\u04d0\5\u0277\u013c\2\u04d0\u04d1\5\u0295")
buf.write("\u014b\2\u04d1\u04d2\5\u027b\u013e\2\u04d2\u04d7\5\u0273")
buf.write("\u013a\2\u04d3\u04d5\5\u0299\u014d\2\u04d4\u04d6\5\u027b")
buf.write("\u013e\2\u04d5\u04d4\3\2\2\2\u04d5\u04d6\3\2\2\2\u04d6")
buf.write("\u04d8\3\2\2\2\u04d7\u04d3\3\2\2\2\u04d7\u04d8\3\2\2\2")
buf.write("\u04d8\u00ac\3\2\2\2\u04d9\u04da\5\u0277\u013c\2\u04da")
buf.write("\u04db\5\u029b\u014e\2\u04db\u04dc\5\u0295\u014b\2\u04dc")
buf.write("\u04e1\5\u0297\u014c\2\u04dd\u04df\5\u028f\u0148\2\u04de")
buf.write("\u04e0\5\u0295\u014b\2\u04df\u04de\3\2\2\2\u04df\u04e0")
buf.write("\3\2\2\2\u04e0\u04e2\3\2\2\2\u04e1\u04dd\3\2\2\2\u04e1")
buf.write("\u04e2\3\2\2\2\u04e2\u00ae\3\2\2\2\u04e3\u04e4\5\u0279")
buf.write("\u013d\2\u04e4\u04e5\5\u0273\u013a\2\u04e5\u04e6\5\u0299")
buf.write("\u014d\2\u04e6\u04ee\5\u0273\u013a\2\u04e7\u04e8\5\u0275")
buf.write("\u013b\2\u04e8\u04e9\5\u0273\u013a\2\u04e9\u04ea\5\u0297")
buf.write("\u014c\2\u04ea\u04ec\5\u027b\u013e\2\u04eb\u04ed\5\u0297")
buf.write("\u014c\2\u04ec\u04eb\3\2\2\2\u04ec\u04ed\3\2\2\2\u04ed")
buf.write("\u04ef\3\2\2\2\u04ee\u04e7\3\2\2\2\u04ee\u04ef\3\2\2\2")
buf.write("\u04ef\u00b0\3\2\2\2\u04f0\u04f1\5\u0279\u013d\2\u04f1")
buf.write("\u04f2\5\u0273\u013a\2\u04f2\u04f3\5\u0299\u014d\2\u04f3")
buf.write("\u04f4\5\u0273\u013a\2\u04f4\u04f5\5\u0297\u014c\2\u04f5")
buf.write("\u04f6\5\u027b\u013e\2\u04f6\u04f7\5\u0297\u014c\2\u04f7")
buf.write("\u04f8\5\u0297\u014c\2\u04f8\u04f9\5\u0283\u0142\2\u04f9")
buf.write("\u04fa\5\u028f\u0148\2\u04fa\u04fb\5\u028d\u0147\2\u04fb")
buf.write("\u00b2\3\2\2\2\u04fc\u04fd\5\u0279\u013d\2\u04fd\u04fe")
buf.write("\5\u0273\u013a\2\u04fe\u04ff\5\u0299\u014d\2\u04ff\u0500")
buf.write("\5\u027b\u013e\2\u0500\u00b4\3\2\2\2\u0501\u0502\5\u0279")
buf.write("\u013d\2\u0502\u0503\5\u0275\u013b\2\u0503\u0504\7\66")
buf.write("\2\2\u0504\u00b6\3\2\2\2\u0505\u0506\5\u0279\u013d\2\u0506")
buf.write("\u0507\5\u0275\u013b\2\u0507\u0508\5\u027d\u013f\2\u0508")
buf.write("\u00b8\3\2\2\2\u0509\u050a\5\u0279\u013d\2\u050a\u050b")
buf.write("\5\u027b\u013e\2\u050b\u050c\5\u0273\u013a\2\u050c\u051d")
buf.write("\5\u0277\u013c\2\u050d\u051b\5\u0299\u014d\2\u050e\u0519")
buf.write("\5\u0283\u0142\2\u050f\u0517\5\u029d\u014f\2\u0510\u0515")
buf.write("\5\u0273\u013a\2\u0511\u0513\5\u0299\u014d\2\u0512\u0514")
buf.write("\5\u027b\u013e\2\u0513\u0512\3\2\2\2\u0513\u0514\3\2\2")
buf.write("\2\u0514\u0516\3\2\2\2\u0515\u0511\3\2\2\2\u0515\u0516")
buf.write("\3\2\2\2\u0516\u0518\3\2\2\2\u0517\u0510\3\2\2\2\u0517")
buf.write("\u0518\3\2\2\2\u0518\u051a\3\2\2\2\u0519\u050f\3\2\2\2")
buf.write("\u0519\u051a\3\2\2\2\u051a\u051c\3\2\2\2\u051b\u050e\3")
buf.write("\2\2\2\u051b\u051c\3\2\2\2\u051c\u051e\3\2\2\2\u051d\u050d")
buf.write("\3\2\2\2\u051d\u051e\3\2\2\2\u051e\u00ba\3\2\2\2\u051f")
buf.write("\u0520\5\u0279\u013d\2\u0520\u0521\5\u027b\u013e\2\u0521")
buf.write("\u0522\5\u0275\u013b\2\u0522\u0524\5\u029b\u014e\2\u0523")
buf.write("\u0525\5\u027f\u0140\2\u0524\u0523\3\2\2\2\u0524\u0525")
buf.write("\3\2\2\2\u0525\u00bc\3\2\2\2\u0526\u0527\5\u0279\u013d")
buf.write("\2\u0527\u0528\5\u027b\u013e\2\u0528\u0529\5\u0275\u013b")
buf.write("\2\u0529\u052a\5\u029b\u014e\2\u052a\u052b\5\u027f\u0140")
buf.write("\2\u052b\u0530\5\u028f\u0148\2\u052c\u052e\5\u029b\u014e")
buf.write("\2\u052d\u052f\5\u0299\u014d\2\u052e\u052d\3\2\2\2\u052e")
buf.write("\u052f\3\2\2\2\u052f\u0531\3\2\2\2\u0530\u052c\3\2\2\2")
buf.write("\u0530\u0531\3\2\2\2\u0531\u00be\3\2\2\2\u0532\u0533\5")
buf.write("\u0279\u013d\2\u0533\u0534\5\u027b\u013e\2\u0534\u0535")
buf.write("\5\u0277\u013c\2\u0535\u053d\5\u0289\u0145\2\u0536\u053b")
buf.write("\5\u0273\u013a\2\u0537\u0539\5\u0295\u014b\2\u0538\u053a")
buf.write("\5\u027b\u013e\2\u0539\u0538\3\2\2\2\u0539\u053a\3\2\2")
buf.write("\2\u053a\u053c\3\2\2\2\u053b\u0537\3\2\2\2\u053b\u053c")
buf.write("\3\2\2\2\u053c\u053e\3\2\2\2\u053d\u0536\3\2\2\2\u053d")
buf.write("\u053e\3\2\2\2\u053e\u00c0\3\2\2\2\u053f\u0540\5\u0279")
buf.write("\u013d\2\u0540\u0541\5\u027b\u013e\2\u0541\u0542\5\u027d")
buf.write("\u013f\2\u0542\u0543\5\u0273\u013a\2\u0543\u0544\5\u029b")
buf.write("\u014e\2\u0544\u0545\5\u0289\u0145\2\u0545\u0546\5\u0299")
buf.write("\u014d\2\u0546\u00c2\3\2\2\2\u0547\u0548\5\u0279\u013d")
buf.write("\2\u0548\u0549\5\u027b\u013e\2\u0549\u054a\5\u027d\u013f")
buf.write("\2\u054a\u054f\5\u0283\u0142\2\u054b\u054d\5\u028d\u0147")
buf.write("\2\u054c\u054e\5\u027b\u013e\2\u054d\u054c\3\2\2\2\u054d")
buf.write("\u054e\3\2\2\2\u054e\u0550\3\2\2\2\u054f\u054b\3\2\2\2")
buf.write("\u054f\u0550\3\2\2\2\u0550\u00c4\3\2\2\2\u0551\u0552\5")
buf.write("\u0279\u013d\2\u0552\u0553\5\u027b\u013e\2\u0553\u0554")
buf.write("\5\u0289\u0145\2\u0554\u0559\5\u027b\u013e\2\u0555\u0557")
buf.write("\5\u0299\u014d\2\u0556\u0558\5\u027b\u013e\2\u0557\u0556")
buf.write("\3\2\2\2\u0557\u0558\3\2\2\2\u0558\u055a\3\2\2\2\u0559")
buf.write("\u0555\3\2\2\2\u0559\u055a\3\2\2\2\u055a\u00c6\3\2\2\2")
buf.write("\u055b\u055c\5\u0279\u013d\2\u055c\u055d\5\u027b\u013e")
buf.write("\2\u055d\u055e\5\u0289\u0145\2\u055e\u055f\5\u027b\u013e")
buf.write("\2\u055f\u0560\5\u0299\u014d\2\u0560\u0561\5\u027b\u013e")
buf.write("\2\u0561\u0562\5\u0279\u013d\2\u0562\u00c8\3\2\2\2\u0563")
buf.write("\u0564\5\u0279\u013d\2\u0564\u0565\5\u027b\u013e\2\u0565")
buf.write("\u0566\5\u0297\u014c\2\u0566\u0567\5\u0277\u013c\2\u0567")
buf.write("\u0568\5\u027b\u013e\2\u0568\u0569\5\u028d\u0147\2\u0569")
buf.write("\u056a\5\u0279\u013d\2\u056a\u056b\5\u0283\u0142\2\u056b")
buf.write("\u056c\5\u028d\u0147\2\u056c\u056d\5\u027f\u0140\2\u056d")
buf.write("\u00ca\3\2\2\2\u056e\u056f\5\u0279\u013d\2\u056f\u0570")
buf.write("\5\u0283\u0142\2\u0570\u0571\5\u028b\u0146\2\u0571\u057f")
buf.write("\5\u027b\u013e\2\u0572\u057d\5\u028d\u0147\2\u0573\u057b")
buf.write("\5\u0297\u014c\2\u0574\u0579\5\u0283\u0142\2\u0575\u0577")
buf.write("\5\u028f\u0148\2\u0576\u0578\5\u028d\u0147\2\u0577\u0576")
buf.write("\3\2\2\2\u0577\u0578\3\2\2\2\u0578\u057a\3\2\2\2\u0579")
buf.write("\u0575\3\2\2\2\u0579\u057a\3\2\2\2\u057a\u057c\3\2\2\2")
buf.write("\u057b\u0574\3\2\2\2\u057b\u057c\3\2\2\2\u057c\u057e\3")
buf.write("\2\2\2\u057d\u0573\3\2\2\2\u057d\u057e\3\2\2\2\u057e\u0580")
buf.write("\3\2\2\2\u057f\u0572\3\2\2\2\u057f\u0580\3\2\2\2\u0580")
buf.write("\u00cc\3\2\2\2\u0581\u0582\5\u0279\u013d\2\u0582\u0583")
buf.write("\5\u0283\u0142\2\u0583\u0584\5\u0297\u014c\2\u0584\u0585")
buf.write("\5\u0299\u014d\2\u0585\u0586\5\u0283\u0142\2\u0586\u0587")
buf.write("\5\u028d\u0147\2\u0587\u0588\5\u0277\u013c\2\u0588\u0589")
buf.write("\5\u0299\u014d\2\u0589\u00ce\3\2\2\2\u058a\u058b\5\u0279")
buf.write("\u013d\2\u058b\u058c\5\u0289\u0145\2\u058c\u058d\5\u0289")
buf.write("\u0145\2\u058d\u058e\5\u0297\u014c\2\u058e\u00d0\3\2\2")
buf.write("\2\u058f\u0590\5\u0279\u013d\2\u0590\u0591\5\u028f\u0148")
buf.write("\2\u0591\u00d2\3\2\2\2\u0592\u0593\5\u0279\u013d\2\u0593")
buf.write("\u0594\5\u028f\u0148\2\u0594\u0595\5\u027b\u013e\2\u0595")
buf.write("\u05a0\5\u029d\u014f\2\u0596\u059e\5\u027b\u013e\2\u0597")
buf.write("\u059c\5\u028d\u0147\2\u0598\u059a\5\u0299\u014d\2\u0599")
buf.write("\u059b\5\u0297\u014c\2\u059a\u0599\3\2\2\2\u059a\u059b")
buf.write("\3\2\2\2\u059b\u059d\3\2\2\2\u059c\u0598\3\2\2\2\u059c")
buf.write("\u059d\3\2\2\2\u059d\u059f\3\2\2\2\u059e\u0597\3\2\2\2")
buf.write("\u059e\u059f\3\2\2\2\u059f\u05a1\3\2\2\2\u05a0\u0596\3")
buf.write("\2\2\2\u05a0\u05a1\3\2\2\2\u05a1\u00d4\3\2\2\2\u05a2\u05a3")
buf.write("\5\u0279\u013d\2\u05a3\u05a4\5\u0295\u014b\2\u05a4\u05a5")
buf.write("\5\u028f\u0148\2\u05a5\u05a6\5\u0291\u0149\2\u05a6\u00d6")
buf.write("\3\2\2\2\u05a7\u05a8\5\u027b\u013e\2\u05a8\u05a9\5\u0273")
buf.write("\u013a\2\u05a9\u05aa\5\u0277\u013c\2\u05aa\u05ab\5\u0281")
buf.write("\u0141\2\u05ab\u00d8\3\2\2\2\u05ac\u05ad\5\u027b\u013e")
buf.write("\2\u05ad\u05ae\5\u0289\u0145\2\u05ae\u05af\5\u0283\u0142")
buf.write("\2\u05af\u05b0\5\u027d\u013f\2\u05b0\u00da\3\2\2\2\u05b1")
buf.write("\u05b2\5\u027b\u013e\2\u05b2\u05b3\5\u0289\u0145\2\u05b3")
buf.write("\u05b4\5\u0297\u014c\2\u05b4\u05b5\5\u027b\u013e\2\u05b5")
buf.write("\u00dc\3\2\2\2\u05b6\u05b7\5\u027b\u013e\2\u05b7\u05b8")
buf.write("\5\u028d\u0147\2\u05b8\u05b9\5\u0277\u013c\2\u05b9\u05ba")
buf.write("\5\u0295\u014b\2\u05ba\u05bb\5\u02a3\u0152\2\u05bb\u05bc")
buf.write("\5\u0291\u0149\2\u05bc\u05bd\5\u0299\u014d\2\u05bd\u00de")
buf.write("\3\2\2\2\u05be\u05bf\5\u027b\u013e\2\u05bf\u05c0\5\u028d")
buf.write("\u0147\2\u05c0\u05c1\5\u0279\u013d\2\u05c1\u05c9\5\u0277")
buf.write("\u013c\2\u05c2\u05c7\5\u0273\u013a\2\u05c3\u05c5\5\u0297")
buf.write("\u014c\2\u05c4\u05c6\5\u027b\u013e\2\u05c5\u05c4\3\2\2")
buf.write("\2\u05c5\u05c6\3\2\2\2\u05c6\u05c8\3\2\2\2\u05c7\u05c3")
buf.write("\3\2\2\2\u05c7\u05c8\3\2\2\2\u05c8\u05ca\3\2\2\2\u05c9")
buf.write("\u05c2\3\2\2\2\u05c9\u05ca\3\2\2\2\u05ca\u00e0\3\2\2\2")
buf.write("\u05cb\u05cc\5\u027b\u013e\2\u05cc\u05cd\5\u028d\u0147")
buf.write("\2\u05cd\u05ce\5\u0279\u013d\2\u05ce\u05cf\5\u0279\u013d")
buf.write("\2\u05cf\u05da\5\u027b\u013e\2\u05d0\u05d8\5\u027d\u013f")
buf.write("\2\u05d1\u05d6\5\u0283\u0142\2\u05d2\u05d4\5\u028d\u0147")
buf.write("\2\u05d3\u05d5\5\u027b\u013e\2\u05d4\u05d3\3\2\2\2\u05d4")
buf.write("\u05d5\3\2\2\2\u05d5\u05d7\3\2\2\2\u05d6\u05d2\3\2\2\2")
buf.write("\u05d6\u05d7\3\2\2\2\u05d7\u05d9\3\2\2\2\u05d8\u05d1\3")
buf.write("\2\2\2\u05d8\u05d9\3\2\2\2\u05d9\u05db\3\2\2\2\u05da\u05d0")
buf.write("\3\2\2\2\u05da\u05db\3\2\2\2\u05db\u00e2\3\2\2\2\u05dc")
buf.write("\u05dd\5\u027b\u013e\2\u05dd\u05de\5\u028d\u0147\2\u05de")
buf.write("\u05df\5\u0279\u013d\2\u05df\u05e0\5\u0279\u013d\2\u05e0")
buf.write("\u05e1\5\u028f\u0148\2\u05e1\u00e4\3\2\2\2\u05e2\u05e3")
buf.write("\5\u027b\u013e\2\u05e3\u05e4\5\u028d\u0147\2\u05e4\u05e5")
buf.write("\5\u0279\u013d\2\u05e5\u05e6\5\u027d\u013f\2\u05e6\u05e8")
buf.write("\5\u028f\u0148\2\u05e7\u05e9\5\u0295\u014b\2\u05e8\u05e7")
buf.write("\3\2\2\2\u05e8\u05e9\3\2\2\2\u05e9\u00e6\3\2\2\2\u05ea")
buf.write("\u05eb\5\u027b\u013e\2\u05eb\u05ec\5\u028d\u0147\2\u05ec")
buf.write("\u05ed\5\u0279\u013d\2\u05ed\u05ef\5\u0283\u0142\2\u05ee")
buf.write("\u05f0\5\u027d\u013f\2\u05ef\u05ee\3\2\2\2\u05ef\u05f0")
buf.write("\3\2\2\2\u05f0\u00e8\3\2\2\2\u05f1\u05f2\5\u027b\u013e")
buf.write("\2\u05f2\u05f3\5\u028d\u0147\2\u05f3\u05f4\5\u0279\u013d")
buf.write("\2\u05f4\u05fc\5\u0291\u0149\2\u05f5\u05fa\5\u0295\u014b")
buf.write("\2\u05f6\u05f8\5\u028f\u0148\2\u05f7\u05f9\5\u0277\u013c")
buf.write("\2\u05f8\u05f7\3\2\2\2\u05f8\u05f9\3\2\2\2\u05f9\u05fb")
buf.write("\3\2\2\2\u05fa\u05f6\3\2\2\2\u05fa\u05fb\3\2\2\2\u05fb")
buf.write("\u05fd\3\2\2\2\u05fc\u05f5\3\2\2\2\u05fc\u05fd\3\2\2\2")
buf.write("\u05fd\u060e\3\2\2\2\u05fe\u05ff\5\u027b\u013e\2\u05ff")
buf.write("\u0600\5\u028d\u0147\2\u0600\u060b\5\u0279\u013d\2\u0601")
buf.write("\u0609\5\u027d\u013f\2\u0602\u0607\5\u029b\u014e\2\u0603")
buf.write("\u0605\5\u028d\u0147\2\u0604\u0606\5\u0277\u013c\2\u0605")
buf.write("\u0604\3\2\2\2\u0605\u0606\3\2\2\2\u0606\u0608\3\2\2\2")
buf.write("\u0607\u0603\3\2\2\2\u0607\u0608\3\2\2\2\u0608\u060a\3")
buf.write("\2\2\2\u0609\u0602\3\2\2\2\u0609\u060a\3\2\2\2\u060a\u060c")
buf.write("\3\2\2\2\u060b\u0601\3\2\2\2\u060b\u060c\3\2\2\2\u060c")
buf.write("\u060e\3\2\2\2\u060d\u05f1\3\2\2\2\u060d\u05fe\3\2\2\2")
buf.write("\u060e\u00ea\3\2\2\2\u060f\u0610\5\u027b\u013e\2\u0610")
buf.write("\u0611\5\u028d\u0147\2\u0611\u0612\5\u0279\u013d\2\u0612")
buf.write("\u061a\5\u0297\u014c\2\u0613\u0618\5\u0277\u013c\2\u0614")
buf.write("\u0616\5\u0273\u013a\2\u0615\u0617\5\u028d\u0147\2\u0616")
buf.write("\u0615\3\2\2\2\u0616\u0617\3\2\2\2\u0617\u0619\3\2\2\2")
buf.write("\u0618\u0614\3\2\2\2\u0618\u0619\3\2\2\2\u0619\u061b\3")
buf.write("\2\2\2\u061a\u0613\3\2\2\2\u061a\u061b\3\2\2\2\u061b\u00ec")
buf.write("\3\2\2\2\u061c\u061d\5\u027b\u013e\2\u061d\u061e\5\u028d")
buf.write("\u0147\2\u061e\u061f\5\u0279\u013d\2\u061f\u0627\5\u0299")
buf.write("\u014d\2\u0620\u0625\5\u027b\u013e\2\u0621\u0623\5\u02a1")
buf.write("\u0151\2\u0622\u0624\5\u0299\u014d\2\u0623\u0622\3\2\2")
buf.write("\2\u0623\u0624\3\2\2\2\u0624\u0626\3\2\2\2\u0625\u0621")
buf.write("\3\2\2\2\u0625\u0626\3\2\2\2\u0626\u0628\3\2\2\2\u0627")
buf.write("\u0620\3\2\2\2\u0627\u0628\3\2\2\2\u0628\u00ee\3\2\2\2")
buf.write("\u0629\u062a\5\u027b\u013e\2\u062a\u062b\5\u028d\u0147")
buf.write("\2\u062b\u062c\5\u0279\u013d\2\u062c\u062d\5\u0299\u014d")
buf.write("\2\u062d\u062f\5\u0295\u014b\2\u062e\u0630\5\u02a3\u0152")
buf.write("\2\u062f\u062e\3\2\2\2\u062f\u0630\3\2\2\2\u0630\u00f0")
buf.write("\3\2\2\2\u0631\u0632\5\u027b\u013e\2\u0632\u0633\5\u028d")
buf.write("\u0147\2\u0633\u0634\5\u0279\u013d\2\u0634\u063c\5\u029f")
buf.write("\u0150\2\u0635\u063a\5\u0283\u0142\2\u0636\u0638\5\u0299")
buf.write("\u014d\2\u0637\u0639\5\u0281\u0141\2\u0638\u0637\3\2\2")
buf.write("\2\u0638\u0639\3\2\2\2\u0639\u063b\3\2\2\2\u063a\u0636")
buf.write("\3\2\2\2\u063a\u063b\3\2\2\2\u063b\u063d\3\2\2\2\u063c")
buf.write("\u0635\3\2\2\2\u063c\u063d\3\2\2\2\u063d\u00f2\3\2\2\2")
buf.write("\u063e\u063f\5\u027b\u013e\2\u063f\u0640\5\u0295\u014b")
buf.write("\2\u0640\u0641\5\u0273\u013a\2\u0641\u0643\5\u0297\u014c")
buf.write("\2\u0642\u0644\5\u027b\u013e\2\u0643\u0642\3\2\2\2\u0643")
buf.write("\u0644\3\2\2\2\u0644\u00f4\3\2\2\2\u0645\u0646\5\u027b")
buf.write("\u013e\2\u0646\u0647\5\u0295\u014b\2\u0647\u0648\5\u0295")
buf.write("\u014b\2\u0648\u064a\5\u028f\u0148\2\u0649\u064b\5\u0295")
buf.write("\u014b\2\u064a\u0649\3\2\2\2\u064a\u064b\3\2\2\2\u064b")
buf.write("\u00f6\3\2\2\2\u064c\u064d\5\u027b\u013e\2\u064d\u064e")
buf.write("\5\u0297\u014c\2\u064e\u064f\5\u0277\u013c\2\u064f\u0650")
buf.write("\5\u0273\u013a\2\u0650\u0651\5\u0291\u0149\2\u0651\u0652")
buf.write("\5\u027b\u013e\2\u0652\u00f8\3\2\2\2\u0653\u0654\5\u027b")
buf.write("\u013e\2\u0654\u0655\5\u029d\u014f\2\u0655\u0656\5\u027b")
buf.write("\u013e\2\u0656\u0657\5\u028d\u0147\2\u0657\u0658\5\u0299")
buf.write("\u014d\2\u0658\u0659\5\u0297\u014c\2\u0659\u00fa\3\2\2")
buf.write("\2\u065a\u065b\5\u027b\u013e\2\u065b\u065c\5\u02a1\u0151")
buf.write("\2\u065c\u065d\5\u0273\u013a\2\u065d\u065f\5\u0277\u013c")
buf.write("\2\u065e\u0660\5\u0299\u014d\2\u065f\u065e\3\2\2\2\u065f")
buf.write("\u0660\3\2\2\2\u0660\u00fc\3\2\2\2\u0661\u0662\5\u027b")
buf.write("\u013e\2\u0662\u0663\5\u02a1\u0151\2\u0663\u0664\5\u0277")
buf.write("\u013c\2\u0664\u0665\5\u027b\u013e\2\u0665\u0666\5\u0291")
buf.write("\u0149\2\u0666\u0667\5\u0299\u014d\2\u0667\u00fe\3\2\2")
buf.write("\2\u0668\u0669\5\u027b\u013e\2\u0669\u066a\5\u02a1\u0151")
buf.write("\2\u066a\u066b\5\u0277\u013c\2\u066b\u0672\5\u0289\u0145")
buf.write("\2\u066c\u066d\5\u029b\u014e\2\u066d\u066e\5\u0297\u014c")
buf.write("\2\u066e\u066f\5\u0283\u0142\2\u066f\u0670\5\u029d\u014f")
buf.write("\2\u0670\u0671\5\u027b\u013e\2\u0671\u0673\3\2\2\2\u0672")
buf.write("\u066c\3\2\2\2\u0672\u0673\3\2\2\2\u0673\u0100\3\2\2\2")
buf.write("\u0674\u0675\5\u027b\u013e\2\u0675\u0676\5\u02a1\u0151")
buf.write("\2\u0676\u0677\5\u0299\u014d\2\u0677\u0678\5\u027b\u013e")
buf.write("\2\u0678\u0679\5\u028d\u0147\2\u0679\u067a\5\u0279\u013d")
buf.write("\2\u067a\u067b\5\u027b\u013e\2\u067b\u067c\5\u0279\u013d")
buf.write("\2\u067c\u0102\3\2\2\2\u067d\u067e\5\u027b\u013e\2\u067e")
buf.write("\u067f\5\u02a1\u0151\2\u067f\u0680\5\u0299\u014d\2\u0680")
buf.write("\u068b\5\u027b\u013e\2\u0681\u0689\5\u0295\u014b\2\u0682")
buf.write("\u0687\5\u028d\u0147\2\u0683\u0685\5\u0273\u013a\2\u0684")
buf.write("\u0686\5\u0289\u0145\2\u0685\u0684\3\2\2\2\u0685\u0686")
buf.write("\3\2\2\2\u0686\u0688\3\2\2\2\u0687\u0683\3\2\2\2\u0687")
buf.write("\u0688\3\2\2\2\u0688\u068a\3\2\2\2\u0689\u0682\3\2\2\2")
buf.write("\u0689\u068a\3\2\2\2\u068a\u068c\3\2\2\2\u068b\u0681\3")
buf.write("\2\2\2\u068b\u068c\3\2\2\2\u068c\u0104\3\2\2\2\u068d\u068e")
buf.write("\5\u027d\u013f\2\u068e\u068f\5\u0283\u0142\2\u068f\u0690")
buf.write("\5\u027b\u013e\2\u0690\u0691\5\u0289\u0145\2\u0691\u0692")
buf.write("\5\u0279\u013d\2\u0692\u0693\5\u0297\u014c\2\u0693\u0106")
buf.write("\3\2\2\2\u0694\u0695\5\u027d\u013f\2\u0695\u0696\5\u0283")
buf.write("\u0142\2\u0696\u0697\5\u0289\u0145\2\u0697\u0699\5\u027b")
buf.write("\u013e\2\u0698\u069a\5\u0297\u014c\2\u0699\u0698\3\2\2")
buf.write("\2\u0699\u069a\3\2\2\2\u069a\u0108\3\2\2\2\u069b\u069c")
buf.write("\5\u027d\u013f\2\u069c\u069d\5\u0283\u0142\2\u069d\u069e")
buf.write("\5\u0289\u0145\2\u069e\u069f\5\u0289\u0145\2\u069f\u010a")
buf.write("\3\2\2\2\u06a0\u06a1\5\u027d\u013f\2\u06a1\u06a2\5\u0283")
buf.write("\u0142\2\u06a2\u06a3\5\u0289\u0145\2\u06a3\u06a8\5\u0299")
buf.write("\u014d\2\u06a4\u06a6\5\u027b\u013e\2\u06a5\u06a7\5\u0295")
buf.write("\u014b\2\u06a6\u06a5\3\2\2\2\u06a6\u06a7\3\2\2\2\u06a7")
buf.write("\u06a9\3\2\2\2\u06a8\u06a4\3\2\2\2\u06a8\u06a9\3\2\2\2")
buf.write("\u06a9\u010c\3\2\2\2\u06aa\u06ab\5\u027d\u013f\2\u06ab")
buf.write("\u06ac\5\u0283\u0142\2\u06ac\u06ad\5\u028d\u0147\2\u06ad")
buf.write("\u06b5\5\u0273\u013a\2\u06ae\u06b3\5\u0289\u0145\2\u06af")
buf.write("\u06b1\5\u0289\u0145\2\u06b0\u06b2\5\u02a3\u0152\2\u06b1")
buf.write("\u06b0\3\2\2\2\u06b1\u06b2\3\2\2\2\u06b2\u06b4\3\2\2\2")
buf.write("\u06b3\u06af\3\2\2\2\u06b3\u06b4\3\2\2\2\u06b4\u06b6\3")
buf.write("\2\2\2\u06b5\u06ae\3\2\2\2\u06b5\u06b6\3\2\2\2\u06b6\u010e")
buf.write("\3\2\2\2\u06b7\u06b8\5\u027d\u013f\2\u06b8\u06b9\5\u0289")
buf.write("\u0145\2\u06b9\u06ba\5\u0273\u013a\2\u06ba\u06bb\5\u027f")
buf.write("\u0140\2\u06bb\u06bc\5\u0297\u014c\2\u06bc\u0110\3\2\2")
buf.write("\2\u06bd\u06be\5\u027d\u013f\2\u06be\u06bf\5\u028f\u0148")
buf.write("\2\u06bf\u06c0\5\u028d\u0147\2\u06c0\u06c1\5\u0299\u014d")
buf.write("\2\u06c1\u0112\3\2\2\2\u06c2\u06c3\5\u027d\u013f\2\u06c3")
buf.write("\u06c4\5\u028f\u0148\2\u06c4\u06c5\5\u0295\u014b\2\u06c5")
buf.write("\u0114\3\2\2\2\u06c6\u06c7\5\u027d\u013f\2\u06c7\u06c8")
buf.write("\5\u028f\u0148\2\u06c8\u06c9\5\u0295\u014b\2\u06c9\u06ca")
buf.write("\5\u0277\u013c\2\u06ca\u06cb\5\u027b\u013e\2\u06cb\u0116")
buf.write("\3\2\2\2\u06cc\u06cd\5\u027d\u013f\2\u06cd\u06ce\5\u028f")
buf.write("\u0148\2\u06ce\u06cf\5\u0295\u014b\2\u06cf\u06d0\5\u028b")
buf.write("\u0146\2\u06d0\u0118\3\2\2\2\u06d1\u06d2\5\u027d\u013f")
buf.write("\2\u06d2\u06d3\5\u028f\u0148\2\u06d3\u06d4\5\u02a1\u0151")
buf.write("\2\u06d4\u06d5\5\u028f\u0148\2\u06d5\u06d6\5\u0275\u013b")
buf.write("\2\u06d6\u06d7\5\u0285\u0143\2\u06d7\u06d8\5\u027b\u013e")
buf.write("\2\u06d8\u06d9\5\u0277\u013c\2\u06d9\u06da\5\u0299\u014d")
buf.write("\2\u06da\u011a\3\2\2\2\u06db\u06dc\5\u027d\u013f\2\u06dc")
buf.write("\u06dd\5\u028f\u0148\2\u06dd\u06de\5\u02a1\u0151\2\u06de")
buf.write("\u06df\5\u0291\u0149\2\u06df\u06e0\5\u0289\u0145\2\u06e0")
buf.write("\u06e1\5\u029b\u014e\2\u06e1\u06e2\5\u0297\u014c\2\u06e2")
buf.write("\u011c\3\2\2\2\u06e3\u06e4\5\u027d\u013f\2\u06e4\u06e5")
buf.write("\5\u0295\u014b\2\u06e5\u06e6\5\u027b\u013e\2\u06e6\u06e7")
buf.write("\5\u027b\u013e\2\u06e7\u011e\3\2\2\2\u06e8\u06e9\5\u027d")
buf.write("\u013f\2\u06e9\u06ea\5\u0295\u014b\2\u06ea\u06eb\5\u028f")
buf.write("\u0148\2\u06eb\u06ec\5\u028b\u0146\2\u06ec\u0120\3\2\2")
buf.write("\2\u06ed\u06ee\5\u027f\u0140\2\u06ee\u06ef\5\u0273\u013a")
buf.write("\2\u06ef\u06f0\5\u0299\u014d\2\u06f0\u06f5\5\u0281\u0141")
buf.write("\2\u06f1\u06f3\5\u027b\u013e\2\u06f2\u06f4\5\u0295\u014b")
buf.write("\2\u06f3\u06f2\3\2\2\2\u06f3\u06f4\3\2\2\2\u06f4\u06f6")
buf.write("\3\2\2\2\u06f5\u06f1\3\2\2\2\u06f5\u06f6\3\2\2\2\u06f6")
buf.write("\u0122\3\2\2\2\u06f7\u06f8\5\u027f\u0140\2\u06f8\u06f9")
buf.write("\5\u027b\u013e\2\u06f9\u06fa\5\u0299\u014d\2\u06fa\u06fb")
buf.write("\5\u0297\u014c\2\u06fb\u0124\3\2\2\2\u06fc\u06fd\5\u027f")
buf.write("\u0140\2\u06fd\u0701\5\u028f\u0148\2\u06fe\u06ff\5\u0299")
buf.write("\u014d\2\u06ff\u0700\5\u028f\u0148\2\u0700\u0702\3\2\2")
buf.write("\2\u0701\u06fe\3\2\2\2\u0701\u0702\3\2\2\2\u0702\u0126")
buf.write("\3\2\2\2\u0703\u0704\5\u0281\u0141\2\u0704\u0705\5\u027b")
buf.write("\u013e\2\u0705\u0706\5\u0289\u0145\2\u0706\u0707\5\u0291")
buf.write("\u0149\2\u0707\u0128\3\2\2\2\u0708\u0709\5\u0281\u0141")
buf.write("\2\u0709\u070a\5\u0283\u0142\2\u070a\u070b\5\u0279\u013d")
buf.write("\2\u070b\u070c\5\u027b\u013e\2\u070c\u012a\3\2\2\2\u070d")
buf.write("\u070e\5\u0283\u0142\2\u070e\u070f\5\u0277\u013c\2\u070f")
buf.write("\u0710\5\u028f\u0148\2\u0710\u0711\5\u028d\u0147\2\u0711")
buf.write("\u012c\3\2\2\2\u0712\u0713\5\u0283\u0142\2\u0713\u0714")
buf.write("\5\u027d\u013f\2\u0714\u012e\3\2\2\2\u0715\u0716\5\u0283")
buf.write("\u0142\2\u0716\u0717\5\u027d\u013f\2\u0717\u0718\5\u0279")
buf.write("\u013d\2\u0718\u0719\5\u027b\u013e\2\u0719\u071a\5\u027d")
buf.write("\u013f\2\u071a\u0130\3\2\2\2\u071b\u071c\5\u0283\u0142")
buf.write("\2\u071c\u071d\5\u028d\u0147\2\u071d\u0132\3\2\2\2\u071e")
buf.write("\u071f\5\u0283\u0142\2\u071f\u0720\5\u028d\u0147\2\u0720")
buf.write("\u0721\5\u0277\u013c\2\u0721\u0722\5\u0289\u0145\2\u0722")
buf.write("\u0723\5\u029b\u014e\2\u0723\u0724\5\u0279\u013d\2\u0724")
buf.write("\u0725\5\u027b\u013e\2\u0725\u0134\3\2\2\2\u0726\u0727")
buf.write("\5\u0283\u0142\2\u0727\u0728\5\u028d\u0147\2\u0728\u0729")
buf.write("\5\u0279\u013d\2\u0729\u072a\5\u027b\u013e\2\u072a\u072b")
buf.write("\5\u02a1\u0151\2\u072b\u0136\3\2\2\2\u072c\u072d\5\u0283")
buf.write("\u0142\2\u072d\u072e\5\u028d\u0147\2\u072e\u072f\5\u0279")
buf.write("\u013d\2\u072f\u0730\5\u027b\u013e\2\u0730\u0731\5\u02a1")
buf.write("\u0151\2\u0731\u0732\5\u027b\u013e\2\u0732\u0733\5\u0297")
buf.write("\u014c\2\u0733\u0138\3\2\2\2\u0734\u0735\5\u0283\u0142")
buf.write("\2\u0735\u0736\5\u028d\u0147\2\u0736\u0737\5\u0297\u014c")
buf.write("\2\u0737\u073c\5\u027b\u013e\2\u0738\u073a\5\u0295\u014b")
buf.write("\2\u0739\u073b\5\u0299\u014d\2\u073a\u0739\3\2\2\2\u073a")
buf.write("\u073b\3\2\2\2\u073b\u073d\3\2\2\2\u073c\u0738\3\2\2\2")
buf.write("\u073c\u073d\3\2\2\2\u073d\u013a\3\2\2\2\u073e\u073f\5")
buf.write("\u0283\u0142\2\u073f\u0740\5\u028d\u0147\2\u0740\u0741")
buf.write("\5\u0299\u014d\2\u0741\u0742\5\u028f\u0148\2\u0742\u013c")
buf.write("\3\2\2\2\u0743\u0744\5\u0285\u0143\2\u0744\u0745\5\u028f")
buf.write("\u0148\2\u0745\u0746\5\u0283\u0142\2\u0746\u0747\5\u028d")
buf.write("\u0147\2\u0747\u013e\3\2\2\2\u0748\u0749\5\u0287\u0144")
buf.write("\2\u0749\u074a\5\u027b\u013e\2\u074a\u074b\5\u02a3\u0152")
buf.write("\2\u074b\u0140\3\2\2\2\u074c\u074d\5\u0287\u0144\2\u074d")
buf.write("\u074e\5\u027b\u013e\2\u074e\u074f\5\u02a3\u0152\2\u074f")
buf.write("\u075a\5\u0275\u013b\2\u0750\u0758\5\u028f\u0148\2\u0751")
buf.write("\u0756\5\u0273\u013a\2\u0752\u0754\5\u0295\u014b\2\u0753")
buf.write("\u0755\5\u0279\u013d\2\u0754\u0753\3\2\2\2\u0754\u0755")
buf.write("\3\2\2\2\u0755\u0757\3\2\2\2\u0756\u0752\3\2\2\2\u0756")
buf.write("\u0757\3\2\2\2\u0757\u0759\3\2\2\2\u0758\u0751\3\2\2\2")
buf.write("\u0758\u0759\3\2\2\2\u0759\u075b\3\2\2\2\u075a\u0750\3")
buf.write("\2\2\2\u075a\u075b\3\2\2\2\u075b\u0142\3\2\2\2\u075c\u075d")
buf.write("\5\u0289\u0145\2\u075d\u075e\5\u0273\u013a\2\u075e\u075f")
buf.write("\5\u0275\u013b\2\u075f\u0760\5\u027b\u013e\2\u0760\u0761")
buf.write("\5\u0289\u0145\2\u0761\u0144\3\2\2\2\u0762\u0763\5\u0289")
buf.write("\u0145\2\u0763\u0764\5\u0283\u0142\2\u0764\u0765\5\u0275")
buf.write("\u013b\2\u0765\u0766\5\u0295\u014b\2\u0766\u0767\5\u0273")
buf.write("\u013a\2\u0767\u0768\5\u0295\u014b\2\u0768\u0769\5\u02a3")
buf.write("\u0152\2\u0769\u0146\3\2\2\2\u076a\u076b\5\u0289\u0145")
buf.write("\2\u076b\u076c\5\u0283\u0142\2\u076c\u076d\5\u0287\u0144")
buf.write("\2\u076d\u076e\5\u027b\u013e\2\u076e\u0148\3\2\2\2\u076f")
buf.write("\u0770\5\u0289\u0145\2\u0770\u0771\5\u0283\u0142\2\u0771")
buf.write("\u0772\5\u028d\u0147\2\u0772\u0773\5\u027b\u013e\2\u0773")
buf.write("\u014a\3\2\2\2\u0774\u0775\5\u0289\u0145\2\u0775\u0776")
buf.write("\5\u0283\u0142\2\u0776\u0777\5\u028d\u0147\2\u0777\u0778")
buf.write("\5\u0287\u0144\2\u0778\u0779\5\u027b\u013e\2\u0779\u077a")
buf.write("\5\u0279\u013d\2\u077a\u014c\3\2\2\2\u077b\u077c\5\u0289")
buf.write("\u0145\2\u077c\u077d\5\u0283\u0142\2\u077d\u077e\5\u0297")
buf.write("\u014c\2\u077e\u077f\5\u0299\u014d\2\u077f\u014e\3\2\2")
buf.write("\2\u0780\u0781\5\u0289\u0145\2\u0781\u0782\5\u028f\u0148")
buf.write("\2\u0782\u0783\5\u0277\u013c\2\u0783\u0788\5\u0273\u013a")
buf.write("\2\u0784\u0786\5\u0299\u014d\2\u0785\u0787\5\u027b\u013e")
buf.write("\2\u0786\u0785\3\2\2\2\u0786\u0787\3\2\2\2\u0787\u0789")
buf.write("\3\2\2\2\u0788\u0784\3\2\2\2\u0788\u0789\3\2\2\2\u0789")
buf.write("\u0150\3\2\2\2\u078a\u078b\5\u028b\u0146\2\u078b\u078c")
buf.write("\5\u0273\u013a\2\u078c\u078d\5\u0277\u013c\2\u078d\u078e")
buf.write("\5\u0295\u014b\2\u078e\u078f\5\u028f\u0148\2\u078f\u0790")
buf.write("\5\u0297\u014c\2\u0790\u0152\3\2\2\2\u0791\u0792\5\u028b")
buf.write("\u0146\2\u0792\u0793\5\u0273\u013a\2\u0793\u0794\5\u0295")
buf.write("\u014b\2\u0794\u0795\5\u027f\u0140\2\u0795\u0796\5\u0283")
buf.write("\u0142\2\u0796\u0797\5\u028d\u0147\2\u0797\u0154\3\2\2")
buf.write("\2\u0798\u0799\5\u028b\u0146\2\u0799\u079a\5\u0273\u013a")
buf.write("\2\u079a\u079b\5\u0295\u014b\2\u079b\u079c\5\u0287\u0144")
buf.write("\2\u079c\u0156\3\2\2\2\u079d\u079e\5\u028b\u0146\2\u079e")
buf.write("\u079f\5\u0273\u013a\2\u079f\u07a0\5\u0297\u014c\2\u07a0")
buf.write("\u07a1\5\u0299\u014d\2\u07a1\u07a2\5\u027b\u013e\2\u07a2")
buf.write("\u07a3\5\u0295\u014b\2\u07a3\u0158\3\2\2\2\u07a4\u07a5")
buf.write("\5\u028b\u0146\2\u07a5\u07a6\5\u0273\u013a\2\u07a6\u07a7")
buf.write("\5\u02a1\u0151\2\u07a7\u015a\3\2\2\2\u07a8\u07a9\5\u028b")
buf.write("\u0146\2\u07a9\u07aa\5\u027b\u013e\2\u07aa\u07ab\5\u028b")
buf.write("\u0146\2\u07ab\u07ac\5\u028f\u0148\2\u07ac\u015c\3\2\2")
buf.write("\2\u07ad\u07ae\5\u028b\u0146\2\u07ae\u07af\5\u027b\u013e")
buf.write("\2\u07af\u07b0\5\u028b\u0146\2\u07b0\u07b1\5\u028f\u0148")
buf.write("\2\u07b1\u07b2\5\u0295\u014b\2\u07b2\u07b3\5\u02a3\u0152")
buf.write("\2\u07b3\u015e\3\2\2\2\u07b4\u07b5\5\u028b\u0146\2\u07b5")
buf.write("\u07b6\5\u027b\u013e\2\u07b6\u07b7\5\u028b\u0146\2\u07b7")
buf.write("\u07b8\5\u028f\u0148\2\u07b8\u07b9\5\u029f\u0150\2\u07b9")
buf.write("\u07ba\5\u0283\u0142\2\u07ba\u07bb\5\u0279\u013d\2\u07bb")
buf.write("\u07bc\5\u0299\u014d\2\u07bc\u07bd\5\u0281\u0141\2\u07bd")
buf.write("\u0160\3\2\2\2\u07be\u07bf\5\u028b\u0146\2\u07bf\u07c0")
buf.write("\5\u027b\u013e\2\u07c0\u07c1\5\u028b\u0146\2\u07c1\u07c2")
buf.write("\5\u029d\u014f\2\u07c2\u07c3\5\u0273\u013a\2\u07c3\u07c4")
buf.write("\5\u0295\u014b\2\u07c4\u0162\3\2\2\2\u07c5\u07c6\5\u028b")
buf.write("\u0146\2\u07c6\u07c7\5\u027b\u013e\2\u07c7\u07c8\5\u028d")
buf.write("\u0147\2\u07c8\u07c9\5\u029b\u014e\2\u07c9\u0164\3\2\2")
buf.write("\2\u07ca\u07cb\5\u028b\u0146\2\u07cb\u07cc\5\u027b\u013e")
buf.write("\2\u07cc\u07cd\5\u028d\u0147\2\u07cd\u07ce\5\u029b\u014e")
buf.write("\2\u07ce\u07cf\5\u0297\u014c\2\u07cf\u0166\3\2\2\2\u07d0")
buf.write("\u07d1\5\u028b\u0146\2\u07d1\u07d2\5\u027b\u013e\2\u07d2")
buf.write("\u07d3\5\u0297\u014c\2\u07d3\u07db\5\u0297\u014c\2\u07d4")
buf.write("\u07d9\5\u0273\u013a\2\u07d5\u07d7\5\u027f\u0140\2\u07d6")
buf.write("\u07d8\5\u027b\u013e\2\u07d7\u07d6\3\2\2\2\u07d7\u07d8")
buf.write("\3\2\2\2\u07d8\u07da\3\2\2\2\u07d9\u07d5\3\2\2\2\u07d9")
buf.write("\u07da\3\2\2\2\u07da\u07dc\3\2\2\2\u07db\u07d4\3\2\2\2")
buf.write("\u07db\u07dc\3\2\2\2\u07dc\u0168\3\2\2\2\u07dd\u07de\5")
buf.write("\u028b\u0146\2\u07de\u07df\5\u0283\u0142\2\u07df\u07e0")
buf.write("\5\u028d\u0147\2\u07e0\u016a\3\2\2\2\u07e1\u07e2\5\u028b")
buf.write("\u0146\2\u07e2\u07e3\5\u0287\u0144\2\u07e3\u07e4\5\u0279")
buf.write("\u013d\2\u07e4\u07e6\5\u0283\u0142\2\u07e5\u07e7\5\u0295")
buf.write("\u014b\2\u07e6\u07e5\3\2\2\2\u07e6\u07e7\3\2\2\2\u07e7")
buf.write("\u07ec\3\2\2\2\u07e8\u07e9\5\u028b\u0146\2\u07e9\u07ea")
buf.write("\5\u0279\u013d\2\u07ea\u07ec\3\2\2\2\u07eb\u07e1\3\2\2")
buf.write("\2\u07eb\u07e8\3\2\2\2\u07ec\u016c\3\2\2\2\u07ed\u07ee")
buf.write("\5\u028b\u0146\2\u07ee\u07ef\5\u028f\u0148\2\u07ef\u07f0")
buf.write("\5\u0279\u013d\2\u07f0\u07f5\5\u0283\u0142\2\u07f1\u07f3")
buf.write("\5\u027d\u013f\2\u07f2\u07f4\5\u02a3\u0152\2\u07f3\u07f2")
buf.write("\3\2\2\2\u07f3\u07f4\3\2\2\2\u07f4\u07f6\3\2\2\2\u07f5")
buf.write("\u07f1\3\2\2\2\u07f5\u07f6\3\2\2\2\u07f6\u016e\3\2\2\2")
buf.write("\u07f7\u07f8\5\u028b\u0146\2\u07f8\u07f9\5\u029b\u014e")
buf.write("\2\u07f9\u07fa\5\u0289\u0145\2\u07fa\u07fb\5\u0299\u014d")
buf.write("\2\u07fb\u07fc\5\u0283\u0142\2\u07fc\u07fd\5\u0289\u0145")
buf.write("\2\u07fd\u07fe\5\u028f\u0148\2\u07fe\u07ff\5\u0277\u013c")
buf.write("\2\u07ff\u0800\5\u0287\u0144\2\u0800\u0801\5\u0297\u014c")
buf.write("\2\u0801\u0170\3\2\2\2\u0802\u0803\5\u028d\u0147\2\u0803")
buf.write("\u0804\5\u0273\u013a\2\u0804\u0805\5\u028b\u0146\2\u0805")
buf.write("\u0806\5\u027b\u013e\2\u0806\u0172\3\2\2\2\u0807\u0808")
buf.write("\5\u028d\u0147\2\u0808\u0809\5\u027b\u013e\2\u0809\u080a")
buf.write("\5\u0273\u013a\2\u080a\u080b\5\u0295\u014b\2\u080b\u0174")
buf.write("\3\2\2\2\u080c\u080d\5\u028d\u0147\2\u080d\u080e\5\u027b")
buf.write("\u013e\2\u080e\u080f\5\u027f\u0140\2\u080f\u0810\5\u028f")
buf.write("\u0148\2\u0810\u0811\5\u0299\u014d\2\u0811\u0812\5\u0283")
buf.write("\u0142\2\u0812\u0813\5\u0273\u013a\2\u0813\u0814\5\u0299")
buf.write("\u014d\2\u0814\u0815\5\u027b\u013e\2\u0815\u0176\3\2\2")
buf.write("\2\u0816\u0817\5\u028d\u0147\2\u0817\u0818\5\u027b\u013e")
buf.write("\2\u0818\u0819\5\u02a1\u0151\2\u0819\u081a\5\u0299\u014d")
buf.write("\2\u081a\u0178\3\2\2\2\u081b\u081c\5\u028d\u0147\2\u081c")
buf.write("\u081d\5\u028f\u0148\2\u081d\u081e\5\u0277\u013c\2\u081e")
buf.write("\u081f\5\u0289\u0145\2\u081f\u0820\5\u027b\u013e\2\u0820")
buf.write("\u0821\5\u0273\u013a\2\u0821\u0822\5\u0295\u014b\2\u0822")
buf.write("\u017a\3\2\2\2\u0823\u0824\5\u028d\u0147\2\u0824\u0825")
buf.write("\5\u028f\u0148\2\u0825\u0826\5\u0277\u013c\2\u0826\u0827")
buf.write("\5\u028f\u0148\2\u0827\u0828\5\u028d\u0147\2\u0828\u0829")
buf.write("\5\u0297\u014c\2\u0829\u082a\5\u028f\u0148\2\u082a\u082b")
buf.write("\5\u0289\u0145\2\u082b\u082c\5\u027b\u013e\2\u082c\u017c")
buf.write("\3\2\2\2\u082d\u082e\5\u028d\u0147\2\u082e\u082f\5\u028f")
buf.write("\u0148\2\u082f\u0830\5\u00bb^\2\u0830\u017e\3\2\2\2\u0831")
buf.write("\u0832\5\u028d\u0147\2\u0832\u0833\5\u028f\u0148\2\u0833")
buf.write("\u0834\5\u027b\u013e\2\u0834\u0835\5\u0285\u0143\2\u0835")
buf.write("\u0836\5\u027b\u013e\2\u0836\u0837\5\u0277\u013c\2\u0837")
buf.write("\u0838\5\u0299\u014d\2\u0838\u0180\3\2\2\2\u0839\u083a")
buf.write("\5\u028d\u0147\2\u083a\u083b\5\u028f\u0148\2\u083b\u083c")
buf.write("\5\u028b\u0146\2\u083c\u083d\5\u0273\u013a\2\u083d\u083e")
buf.write("\5\u0295\u014b\2\u083e\u083f\5\u027f\u0140\2\u083f\u0840")
buf.write("\5\u0283\u0142\2\u0840\u0841\5\u028d\u0147\2\u0841\u0182")
buf.write("\3\2\2\2\u0842\u0843\5\u028d\u0147\2\u0843\u0844\5\u028f")
buf.write("\u0148\2\u0844\u0845\5\u028b\u0146\2\u0845\u0846\5\u027b")
buf.write("\u013e\2\u0846\u0847\5\u028d\u0147\2\u0847\u0848\5\u029b")
buf.write("\u014e\2\u0848\u0184\3\2\2\2\u0849\u084a\5\u028d\u0147")
buf.write("\2\u084a\u084b\5\u028f\u0148\2\u084b\u084c\5\u028f\u0148")
buf.write("\2\u084c\u084d\5\u0291\u0149\2\u084d\u084e\5\u0299\u014d")
buf.write("\2\u084e\u084f\5\u0283\u0142\2\u084f\u0850\5\u028b\u0146")
buf.write("\2\u0850\u0851\5\u0283\u0142\2\u0851\u0852\5\u02a5\u0153")
buf.write("\2\u0852\u0853\5\u027b\u013e\2\u0853\u0186\3\2\2\2\u0854")
buf.write("\u0855\5\u028d\u0147\2\u0855\u0856\5\u028f\u0148\2\u0856")
buf.write("\u0857\5\u0291\u0149\2\u0857\u0858\5\u0295\u014b\2\u0858")
buf.write("\u0859\5\u028f\u0148\2\u0859\u085a\5\u028b\u0146\2\u085a")
buf.write("\u085b\5\u0291\u0149\2\u085b\u085c\5\u0299\u014d\2\u085c")
buf.write("\u0188\3\2\2\2\u085d\u085e\5\u028d\u0147\2\u085e\u085f")
buf.write("\5\u028f\u0148\2\u085f\u0860\5\u0295\u014b\2\u0860\u0861")
buf.write("\5\u028b\u0146\2\u0861\u018a\3\2\2\2\u0862\u0863\5\u028d")
buf.write("\u0147\2\u0863\u0864\5\u028f\u0148\2\u0864\u0865\5\u0297")
buf.write("\u014c\2\u0865\u0866\5\u0273\u013a\2\u0866\u0867\5\u029d")
buf.write("\u014f\2\u0867\u0868\5\u027b\u013e\2\u0868\u018c\3\2\2")
buf.write("\2\u0869\u086a\5\u028d\u0147\2\u086a\u086b\5\u028f\u0148")
buf.write("\2\u086b\u086c\5\u0297\u014c\2\u086c\u086d\5\u0281\u0141")
buf.write("\2\u086d\u086e\5\u028f\u0148\2\u086e\u086f\5\u029f\u0150")
buf.write("\2\u086f\u018e\3\2\2\2\u0870\u0871\5\u028d\u0147\2\u0871")
buf.write("\u0872\5\u028f\u0148\2\u0872\u0873\5\u0299\u014d\2\u0873")
buf.write("\u0190\3\2\2\2\u0874\u0875\5\u028d\u0147\2\u0875\u0876")
buf.write("\5\u028f\u0148\2\u0876\u0877\5\u0299\u014d\2\u0877\u0878")
buf.write("\5\u027b\u013e\2\u0878\u0192\3\2\2\2\u0879\u087a\5\u028d")
buf.write("\u0147\2\u087a\u087b\5\u028f\u0148\2\u087b\u087c\5\u0299")
buf.write("\u014d\2\u087c\u087d\5\u0283\u0142\2\u087d\u087e\5\u027d")
buf.write("\u013f\2\u087e\u087f\5\u02a3\u0152\2\u087f\u0194\3\2\2")
buf.write("\2\u0880\u0881\5\u028d\u0147\2\u0881\u0882\5\u028f\u0148")
buf.write("\2\u0882\u0883\5\u029b\u014e\2\u0883\u0884\5\u0291\u0149")
buf.write("\2\u0884\u0885\5\u0279\u013d\2\u0885\u0886\5\u0273\u013a")
buf.write("\2\u0886\u0887\5\u0299\u014d\2\u0887\u0888\5\u027b\u013e")
buf.write("\2\u0888\u0196\3\2\2\2\u0889\u088a\5\u028d\u0147\2\u088a")
buf.write("\u088b\5\u028f\u0148\2\u088b\u088c\5\u029f\u0150\2\u088c")
buf.write("\u088d\5\u0273\u013a\2\u088d\u088e\5\u0283\u0142\2\u088e")
buf.write("\u088f\5\u0299\u014d\2\u088f\u0198\3\2\2\2\u0890\u0891")
buf.write("\5\u028d\u0147\2\u0891\u0892\5\u029b\u014e\2\u0892\u0893")
buf.write("\5\u0289\u0145\2\u0893\u0894\5\u0289\u0145\2\u0894\u019a")
buf.write("\3\2\2\2\u0895\u0896\5\u028d\u0147\2\u0896\u0897\5\u029b")
buf.write("\u014e\2\u0897\u0898\5\u028b\u0146\2\u0898\u0899\5\u0275")
buf.write("\u013b\2\u0899\u089a\5\u027b\u013e\2\u089a\u089b\5\u0295")
buf.write("\u014b\2\u089b\u019c\3\2\2\2\u089c\u089d\5\u028f\u0148")
buf.write("\2\u089d\u089e\5\u0275\u013b\2\u089e\u089f\5\u0285\u0143")
buf.write("\2\u089f\u08a0\5\u027b\u013e\2\u08a0\u08a1\5\u0277\u013c")
buf.write("\2\u08a1\u08a2\5\u0299\u014d\2\u08a2\u019e\3\2\2\2\u08a3")
buf.write("\u08a4\5\u028f\u0148\2\u08a4\u08a5\5\u027d\u013f\2\u08a5")
buf.write("\u01a0\3\2\2\2\u08a6\u08a7\5\u028f\u0148\2\u08a7\u08a8")
buf.write("\5\u027d\u013f\2\u08a8\u08a9\5\u027d\u013f\2\u08a9\u01a2")
buf.write("\3\2\2\2\u08aa\u08ab\5\u028f\u0148\2\u08ab\u08ac\5\u028d")
buf.write("\u0147\2\u08ac\u01a4\3\2\2\2\u08ad\u08ae\5\u028f\u0148")
buf.write("\2\u08ae\u08af\5\u0295\u014b\2\u08af\u01a6\3\2\2\2\u08b0")
buf.write("\u08b1\5\u028f\u0148\2\u08b1\u08b2\5\u0295\u014b\2\u08b2")
buf.write("\u08b3\5\u0279\u013d\2\u08b3\u08b4\5\u027b\u013e\2\u08b4")
buf.write("\u08b5\5\u0295\u014b\2\u08b5\u01a8\3\2\2\2\u08b6\u08b7")
buf.write("\7\60\2\2\u08b7\u08b8\5_\60\2\u08b8\u08b9\7\60\2\2\u08b9")
buf.write("\u01aa\3\2\2\2\u08ba\u08bb\7\60\2\2\u08bb\u08bc\5\u018f")
buf.write("\u00c8\2\u08bc\u08bd\7\60\2\2\u08bd\u01ac\3\2\2\2\u08be")
buf.write("\u08bf\7\60\2\2\u08bf\u08c0\5\u01a5\u00d3\2\u08c0\u08c1")
buf.write("\7\60\2\2\u08c1\u01ae\3\2\2\2\u08c2\u08c3\5\u028f\u0148")
buf.write("\2\u08c3\u08c4\5\u0299\u014d\2\u08c4\u08c5\5\u0281\u0141")
buf.write("\2\u08c5\u08d3\5\u027b\u013e\2\u08c6\u08d1\5\u0295\u014b")
buf.write("\2\u08c7\u08cf\5\u029f\u0150\2\u08c8\u08cd\5\u0283\u0142")
buf.write("\2\u08c9\u08cb\5\u0297\u014c\2\u08ca\u08cc\5\u027b\u013e")
buf.write("\2\u08cb\u08ca\3\2\2\2\u08cb\u08cc\3\2\2\2\u08cc\u08ce")
buf.write("\3\2\2\2\u08cd\u08c9\3\2\2\2\u08cd\u08ce\3\2\2\2\u08ce")
buf.write("\u08d0\3\2\2\2\u08cf\u08c8\3\2\2\2\u08cf\u08d0\3\2\2\2")
buf.write("\u08d0\u08d2\3\2\2\2\u08d1\u08c7\3\2\2\2\u08d1\u08d2\3")
buf.write("\2\2\2\u08d2\u08d4\3\2\2\2\u08d3\u08c6\3\2\2\2\u08d3\u08d4")
buf.write("\3\2\2\2\u08d4\u01b0\3\2\2\2\u08d5\u08d6\5\u0291\u0149")
buf.write("\2\u08d6\u08d7\5\u0273\u013a\2\u08d7\u08d8\5\u0277\u013c")
buf.write("\2\u08d8\u08d9\5\u0287\u0144\2\u08d9\u01b2\3\2\2\2\u08da")
buf.write("\u08db\5\u0291\u0149\2\u08db\u08dc\5\u0273\u013a\2\u08dc")
buf.write("\u08dd\5\u0279\u013d\2\u08dd\u01b4\3\2\2\2\u08de\u08df")
buf.write("\5\u0289\u0145\2\u08df\u08e0\5\u0291\u0149\2\u08e0\u08e1")
buf.write("\5\u0273\u013a\2\u08e1\u08f5\5\u0295\u014b\2\u08e2\u08f3")
buf.write("\5\u0273\u013a\2\u08e3\u08f1\5\u028b\u0146\2\u08e4\u08ef")
buf.write("\5\u027b\u013e\2\u08e5\u08ed\5\u0299\u014d\2\u08e6\u08eb")
buf.write("\5\u027b\u013e\2\u08e7\u08e9\5\u0295\u014b\2\u08e8\u08ea")
buf.write("\5\u0297\u014c\2\u08e9\u08e8\3\2\2\2\u08e9\u08ea\3\2\2")
buf.write("\2\u08ea\u08ec\3\2\2\2\u08eb\u08e7\3\2\2\2\u08eb\u08ec")
buf.write("\3\2\2\2\u08ec\u08ee\3\2\2\2\u08ed\u08e6\3\2\2\2\u08ed")
buf.write("\u08ee\3\2\2\2\u08ee\u08f0\3\2\2\2\u08ef\u08e5\3\2\2\2")
buf.write("\u08ef\u08f0\3\2\2\2\u08f0\u08f2\3\2\2\2\u08f1\u08e4\3")
buf.write("\2\2\2\u08f1\u08f2\3\2\2\2\u08f2\u08f4\3\2\2\2\u08f3\u08e3")
buf.write("\3\2\2\2\u08f3\u08f4\3\2\2\2\u08f4\u08f6\3\2\2\2\u08f5")
buf.write("\u08e2\3\2\2\2\u08f5\u08f6\3\2\2\2\u08f6\u090e\3\2\2\2")
buf.write("\u08f7\u08f8\5\u0291\u0149\2\u08f8\u08f9\5\u0273\u013a")
buf.write("\2\u08f9\u08fa\5\u0295\u014b\2\u08fa\u090b\5\u0273\u013a")
buf.write("\2\u08fb\u0909\5\u028b\u0146\2\u08fc\u0907\5\u027b\u013e")
buf.write("\2\u08fd\u0905\5\u0299\u014d\2\u08fe\u0903\5\u027b\u013e")
buf.write("\2\u08ff\u0901\5\u0295\u014b\2\u0900\u0902\5\u0297\u014c")
buf.write("\2\u0901\u0900\3\2\2\2\u0901\u0902\3\2\2\2\u0902\u0904")
buf.write("\3\2\2\2\u0903\u08ff\3\2\2\2\u0903\u0904\3\2\2\2\u0904")
buf.write("\u0906\3\2\2\2\u0905\u08fe\3\2\2\2\u0905\u0906\3\2\2\2")
buf.write("\u0906\u0908\3\2\2\2\u0907\u08fd\3\2\2\2\u0907\u0908\3")
buf.write("\2\2\2\u0908\u090a\3\2\2\2\u0909\u08fc\3\2\2\2\u0909\u090a")
buf.write("\3\2\2\2\u090a\u090c\3\2\2\2\u090b\u08fb\3\2\2\2\u090b")
buf.write("\u090c\3\2\2\2\u090c\u090e\3\2\2\2\u090d\u08de\3\2\2\2")
buf.write("\u090d\u08f7\3\2\2\2\u090e\u01b6\3\2\2\2\u090f\u0910\5")
buf.write("\u0291\u0149\2\u0910\u0911\5\u0289\u0145\2\u0911\u0912")
buf.write("\5\u0273\u013a\2\u0912\u0913\5\u0283\u0142\2\u0913\u0914")
buf.write("\5\u028d\u0147\2\u0914\u01b8\3\2\2\2\u0915\u0916\5\u0291")
buf.write("\u0149\2\u0916\u0917\5\u028f\u0148\2\u0917\u0918\5\u0291")
buf.write("\u0149\2\u0918\u01ba\3\2\2\2\u0919\u091a\5\u0291\u0149")
buf.write("\2\u091a\u091b\5\u028f\u0148\2\u091b\u091c\5\u0291\u0149")
buf.write("\2\u091c\u091d\5\u029b\u014e\2\u091d\u091f\5\u0291\u0149")
buf.write("\2\u091e\u0920\5\u0297\u014c\2\u091f\u091e\3\2\2\2\u091f")
buf.write("\u0920\3\2\2\2\u0920\u01bc\3\2\2\2\u0921\u0922\5\u0291")
buf.write("\u0149\2\u0922\u0923\5\u0295\u014b\2\u0923\u0924\5\u027b")
buf.write("\u013e\2\u0924\u0925\5\u0235\u011b\2\u0925\u01be\3\2\2")
buf.write("\2\u0926\u0927\5\u0291\u0149\2\u0927\u0928\5\u0295\u014b")
buf.write("\2\u0928\u0929\5\u0283\u0142\2\u0929\u0931\5\u028d\u0147")
buf.write("\2\u092a\u092f\5\u0299\u014d\2\u092b\u092d\5\u027b\u013e")
buf.write("\2\u092c\u092e\5\u0295\u014b\2\u092d\u092c\3\2\2\2\u092d")
buf.write("\u092e\3\2\2\2\u092e\u0930\3\2\2\2\u092f\u092b\3\2\2\2")
buf.write("\u092f\u0930\3\2\2\2\u0930\u0932\3\2\2\2\u0931\u092a\3")
buf.write("\2\2\2\u0931\u0932\3\2\2\2\u0932\u01c0\3\2\2\2\u0933\u0934")
buf.write("\5\u0291\u0149\2\u0934\u0935\5\u0295\u014b\2\u0935\u0936")
buf.write("\5\u028f\u0148\2\u0936\u0944\5\u0277\u013c\2\u0937\u0942")
buf.write("\5\u027b\u013e\2\u0938\u0940\5\u0279\u013d\2\u0939\u093e")
buf.write("\5\u029b\u014e\2\u093a\u093c\5\u0295\u014b\2\u093b\u093d")
buf.write("\5\u027b\u013e\2\u093c\u093b\3\2\2\2\u093c\u093d\3\2\2")
buf.write("\2\u093d\u093f\3\2\2\2\u093e\u093a\3\2\2\2\u093e\u093f")
buf.write("\3\2\2\2\u093f\u0941\3\2\2\2\u0940\u0939\3\2\2\2\u0940")
buf.write("\u0941\3\2\2\2\u0941\u0943\3\2\2\2\u0942\u0938\3\2\2\2")
buf.write("\u0942\u0943\3\2\2\2\u0943\u0945\3\2\2\2\u0944\u0937\3")
buf.write("\2\2\2\u0944\u0945\3\2\2\2\u0945\u0957\3\2\2\2\u0946\u0947")
buf.write("\5\u027d\u013f\2\u0947\u0948\5\u029b\u014e\2\u0948\u0949")
buf.write("\5\u028d\u0147\2\u0949\u0954\5\u0277\u013c\2\u094a\u0952")
buf.write("\5\u0299\u014d\2\u094b\u0950\5\u0283\u0142\2\u094c\u094e")
buf.write("\5\u028f\u0148\2\u094d\u094f\5\u028d\u0147\2\u094e\u094d")
buf.write("\3\2\2\2\u094e\u094f\3\2\2\2\u094f\u0951\3\2\2\2\u0950")
buf.write("\u094c\3\2\2\2\u0950\u0951\3\2\2\2\u0951\u0953\3\2\2\2")
buf.write("\u0952\u094b\3\2\2\2\u0952\u0953\3\2\2\2\u0953\u0955\3")
buf.write("\2\2\2\u0954\u094a\3\2\2\2\u0954\u0955\3\2\2\2\u0955\u0957")
buf.write("\3\2\2\2\u0956\u0933\3\2\2\2\u0956\u0946\3\2\2\2\u0957")
buf.write("\u01c2\3\2\2\2\u0958\u0959\5\u0291\u0149\2\u0959\u095a")
buf.write("\5\u0295\u014b\2\u095a\u095b\5\u028f\u0148\2\u095b\u095c")
buf.write("\5\u027f\u0140\2\u095c\u095d\5\u0295\u014b\2\u095d\u095e")
buf.write("\5\u0273\u013a\2\u095e\u095f\5\u028b\u0146\2\u095f\u01c4")
buf.write("\3\2\2\2\u0960\u0961\5\u0277\u013c\2\u0961\u0962\5\u0273")
buf.write("\u013a\2\u0962\u0963\5\u028d\u0147\2\u0963\u0968\5\u0277")
buf.write("\u013c\2\u0964\u0966\5\u027b\u013e\2\u0965\u0967\5\u0289")
buf.write("\u0145\2\u0966\u0965\3\2\2\2\u0966\u0967\3\2\2\2\u0967")
buf.write("\u0969\3\2\2\2\u0968\u0964\3\2\2\2\u0968\u0969\3\2\2\2")
buf.write("\u0969\u09a4\3\2\2\2\u096a\u096b\5\u0297\u014c\2\u096b")
buf.write("\u096c\5\u029b\u014e\2\u096c\u096d\5\u0297\u014c\2\u096d")
buf.write("\u0975\5\u0291\u0149\2\u096e\u0973\5\u027b\u013e\2\u096f")
buf.write("\u0971\5\u028d\u0147\2\u0970\u0972\5\u0279\u013d\2\u0971")
buf.write("\u0970\3\2\2\2\u0971\u0972\3\2\2\2\u0972\u0974\3\2\2\2")
buf.write("\u0973\u096f\3\2\2\2\u0973\u0974\3\2\2\2\u0974\u0976\3")
buf.write("\2\2\2\u0975\u096e\3\2\2\2\u0975\u0976\3\2\2\2\u0976\u09a4")
buf.write("\3\2\2\2\u0977\u0978\5\u0295\u014b\2\u0978\u0979\5\u027b")
buf.write("\u013e\2\u0979\u097a\5\u0297\u014c\2\u097a\u097f\5\u029b")
buf.write("\u014e\2\u097b\u097d\5\u028b\u0146\2\u097c\u097e\5\u027b")
buf.write("\u013e\2\u097d\u097c\3\2\2\2\u097d\u097e\3\2\2\2\u097e")
buf.write("\u0980\3\2\2\2\u097f\u097b\3\2\2\2\u097f\u0980\3\2\2\2")
buf.write("\u0980\u09a4\3\2\2\2\u0981\u0982\5\u0293\u014a\2\u0982")
buf.write("\u0983\5\u029b\u014e\2\u0983\u0984\5\u0283\u0142\2\u0984")
buf.write("\u0985\5\u0299\u014d\2\u0985\u09a4\3\2\2\2\u0986\u0987")
buf.write("\5\u027b\u013e\2\u0987\u0988\5\u02a1\u0151\2\u0988\u0989")
buf.write("\5\u0283\u0142\2\u0989\u098a\5\u0299\u014d\2\u098a\u09a4")
buf.write("\3\2\2\2\u098b\u098c\5\u0289\u0145\2\u098c\u098d\5\u028f")
buf.write("\u0148\2\u098d\u098e\5\u028f\u0148\2\u098e\u098f\5\u0291")
buf.write("\u0149\2\u098f\u09a4\3\2\2\2\u0990\u0991\5\u028d\u0147")
buf.write("\2\u0991\u0992\5\u028f\u0148\2\u0992\u0993\5\u0279\u013d")
buf.write("\2\u0993\u09a1\5\u027b\u013e\2\u0994\u099f\5\u027d\u013f")
buf.write("\2\u0995\u099d\5\u0273\u013a\2\u0996\u099b\5\u029b\u014e")
buf.write("\2\u0997\u0999\5\u0289\u0145\2\u0998\u099a\5\u0299\u014d")
buf.write("\2\u0999\u0998\3\2\2\2\u0999\u099a\3\2\2\2\u099a\u099c")
buf.write("\3\2\2\2\u099b\u0997\3\2\2\2\u099b\u099c\3\2\2\2\u099c")
buf.write("\u099e\3\2\2\2\u099d\u0996\3\2\2\2\u099d\u099e\3\2\2\2")
buf.write("\u099e\u09a0\3\2\2\2\u099f\u0995\3\2\2\2\u099f\u09a0\3")
buf.write("\2\2\2\u09a0\u09a2\3\2\2\2\u09a1\u0994\3\2\2\2\u09a1\u09a2")
buf.write("\3\2\2\2\u09a2\u09a4\3\2\2\2\u09a3\u0960\3\2\2\2\u09a3")
buf.write("\u096a\3\2\2\2\u09a3\u0977\3\2\2\2\u09a3\u0981\3\2\2\2")
buf.write("\u09a3\u0986\3\2\2\2\u09a3\u098b\3\2\2\2\u09a3\u0990\3")
buf.write("\2\2\2\u09a4\u01c6\3\2\2\2\u09a5\u09a6\5\u0291\u0149\2")
buf.write("\u09a6\u09a7\5\u0295\u014b\2\u09a7\u09a8\5\u028f\u0148")
buf.write("\2\u09a8\u09a9\5\u028b\u0146\2\u09a9\u09aa\5\u0291\u0149")
buf.write("\2\u09aa\u09ab\5\u0299\u014d\2\u09ab\u01c8\3\2\2\2\u09ac")
buf.write("\u09ad\5\u0291\u0149\2\u09ad\u09ae\5\u029b\u014e\2\u09ae")
buf.write("\u09af\5\u0297\u014c\2\u09af\u09b0\5\u0281\u0141\2\u09b0")
buf.write("\u01ca\3\2\2\2\u09b1\u09b2\5\u0295\u014b\2\u09b2\u09b3")
buf.write("\5\u027b\u013e\2\u09b3\u09b4\5\u0273\u013a\2\u09b4\u09b5")
buf.write("\5\u0279\u013d\2\u09b5\u01cc\3\2\2\2\u09b6\u09b7\5\u0295")
buf.write("\u014b\2\u09b7\u09b8\5\u027b\u013e\2\u09b8\u09b9\5\u0277")
buf.write("\u013c\2\u09b9\u09be\5\u0273\u013a\2\u09ba\u09bc\5\u0289")
buf.write("\u0145\2\u09bb\u09bd\5\u0289\u0145\2\u09bc\u09bb\3\2\2")
buf.write("\2\u09bc\u09bd\3\2\2\2\u09bd\u09bf\3\2\2\2\u09be\u09ba")
buf.write("\3\2\2\2\u09be\u09bf\3\2\2\2\u09bf\u01ce\3\2\2\2\u09c0")
buf.write("\u09c1\5\u0295\u014b\2\u09c1\u09c2\5\u027b\u013e\2\u09c2")
buf.write("\u09c3\5\u0277\u013c\2\u09c3\u09c4\5\u028f\u0148\2\u09c4")
buf.write("\u09c5\5\u0295\u014b\2\u09c5\u09c6\5\u0279\u013d\2\u09c6")
buf.write("\u01d0\3\2\2\2\u09c7\u09c8\5\u0295\u014b\2\u09c8\u09c9")
buf.write("\5\u027b\u013e\2\u09c9\u09ca\5\u0277\u013c\2\u09ca\u09cb")
buf.write("\5\u02a3\u0152\2\u09cb\u09cc\5\u0277\u013c\2\u09cc\u09cd")
buf.write("\5\u0289\u0145\2\u09cd\u09ce\5\u027b\u013e\2\u09ce\u01d2")
buf.write("\3\2\2\2\u09cf\u09d0\5\u0295\u014b\2\u09d0\u09d1\5\u027b")
buf.write("\u013e\2\u09d1\u09d2\5\u027d\u013f\2\u09d2\u09d3\5\u027b")
buf.write("\u013e\2\u09d3\u09d4\5\u0295\u014b\2\u09d4\u09d5\5\u027b")
buf.write("\u013e\2\u09d5\u09d6\5\u028d\u0147\2\u09d6\u09d7\5\u0277")
buf.write("\u013c\2\u09d7\u09d8\5\u027b\u013e\2\u09d8\u01d4\3\2\2")
buf.write("\2\u09d9\u09da\5\u0295\u014b\2\u09da\u09db\5\u027b\u013e")
buf.write("\2\u09db\u09dc\5\u027d\u013f\2\u09dc\u09dd\5\u0295\u014b")
buf.write("\2\u09dd\u09de\5\u027b\u013e\2\u09de\u09df\5\u0297\u014c")
buf.write("\2\u09df\u09e0\5\u0281\u0141\2\u09e0\u01d6\3\2\2\2\u09e1")
buf.write("\u09e2\5\u0295\u014b\2\u09e2\u09e3\5\u027b\u013e\2\u09e3")
buf.write("\u09e4\5\u0283\u0142\2\u09e4\u09ec\5\u028d\u0147\2\u09e5")
buf.write("\u09ea\5\u0279\u013d\2\u09e6\u09e8\5\u027b\u013e\2\u09e7")
buf.write("\u09e9\5\u02a1\u0151\2\u09e8\u09e7\3\2\2\2\u09e8\u09e9")
buf.write("\3\2\2\2\u09e9\u09eb\3\2\2\2\u09ea\u09e6\3\2\2\2\u09ea")
buf.write("\u09eb\3\2\2\2\u09eb\u09ed\3\2\2\2\u09ec\u09e5\3\2\2\2")
buf.write("\u09ec\u09ed\3\2\2\2\u09ed\u01d8\3\2\2\2\u09ee\u09ef\5")
buf.write("\u0295\u014b\2\u09ef\u09f0\5\u027b\u013e\2\u09f0\u09f1")
buf.write("\5\u0289\u0145\2\u09f1\u09f2\5\u0273\u013a\2\u09f2\u09f3")
buf.write("\5\u0299\u014d\2\u09f3\u09f4\5\u0283\u0142\2\u09f4\u09f5")
buf.write("\5\u028f\u0148\2\u09f5\u09f6\5\u028d\u0147\2\u09f6\u01da")
buf.write("\3\2\2\2\u09f7\u09f8\5\u0295\u014b\2\u09f8\u09f9\5\u027b")
buf.write("\u013e\2\u09f9\u09fa\5\u0289\u0145\2\u09fa\u09fb\5\u0273")
buf.write("\u013a\2\u09fb\u09fc\5\u0299\u014d\2\u09fc\u09fd\5\u0283")
buf.write("\u0142\2\u09fd\u09fe\5\u029d\u014f\2\u09fe\u09ff\5\u027b")
buf.write("\u013e\2\u09ff\u01dc\3\2\2\2\u0a00\u0a01\5\u0295\u014b")
buf.write("\2\u0a01\u0a02\5\u027b\u013e\2\u0a02\u0a03\5\u0289\u0145")
buf.write("\2\u0a03\u0a0b\5\u027b\u013e\2\u0a04\u0a09\5\u0273\u013a")
buf.write("\2\u0a05\u0a07\5\u0297\u014c\2\u0a06\u0a08\5\u027b\u013e")
buf.write("\2\u0a07\u0a06\3\2\2\2\u0a07\u0a08\3\2\2\2\u0a08\u0a0a")
buf.write("\3\2\2\2\u0a09\u0a05\3\2\2\2\u0a09\u0a0a\3\2\2\2\u0a0a")
buf.write("\u0a0c\3\2\2\2\u0a0b\u0a04\3\2\2\2\u0a0b\u0a0c\3\2\2\2")
buf.write("\u0a0c\u01de\3\2\2\2\u0a0d\u0a0e\5\u0295\u014b\2\u0a0e")
buf.write("\u0a0f\5\u027b\u013e\2\u0a0f\u0a10\5\u028d\u0147\2\u0a10")
buf.write("\u0a15\5\u0273\u013a\2\u0a11\u0a13\5\u028b\u0146\2\u0a12")
buf.write("\u0a14\5\u027b\u013e\2\u0a13\u0a12\3\2\2\2\u0a13\u0a14")
buf.write("\3\2\2\2\u0a14\u0a16\3\2\2\2\u0a15\u0a11\3\2\2\2\u0a15")
buf.write("\u0a16\3\2\2\2\u0a16\u01e0\3\2\2\2\u0a17\u0a18\5\u0295")
buf.write("\u014b\2\u0a18\u0a19\5\u027b\u013e\2\u0a19\u0a1a\5\u0291")
buf.write("\u0149\2\u0a1a\u0a22\5\u0289\u0145\2\u0a1b\u0a20\5\u0273")
buf.write("\u013a\2\u0a1c\u0a1e\5\u0277\u013c\2\u0a1d\u0a1f\5\u027b")
buf.write("\u013e\2\u0a1e\u0a1d\3\2\2\2\u0a1e\u0a1f\3\2\2\2\u0a1f")
buf.write("\u0a21\3\2\2\2\u0a20\u0a1c\3\2\2\2\u0a20\u0a21\3\2\2\2")
buf.write("\u0a21\u0a23\3\2\2\2\u0a22\u0a1b\3\2\2\2\u0a22\u0a23\3")
buf.write("\2\2\2\u0a23\u01e2\3\2\2\2\u0a24\u0a25\5\u0295\u014b\2")
buf.write("\u0a25\u0a26\5\u027b\u013e\2\u0a26\u0a27\5\u0291\u0149")
buf.write("\2\u0a27\u0a28\5\u028f\u0148\2\u0a28\u0a29\5\u0295\u014b")
buf.write("\2\u0a29\u0a2a\5\u0299\u014d\2\u0a2a\u01e4\3\2\2\2\u0a2b")
buf.write("\u0a2c\5\u0295\u014b\2\u0a2c\u0a2d\5\u027b\u013e\2\u0a2d")
buf.write("\u0a2e\5\u0297\u014c\2\u0a2e\u0a2f\5\u028f\u0148\2\u0a2f")
buf.write("\u0a30\5\u029b\u014e\2\u0a30\u0a31\5\u0295\u014b\2\u0a31")
buf.write("\u0a32\5\u0277\u013c\2\u0a32\u0a33\5\u027b\u013e\2\u0a33")
buf.write("\u0a34\5\u0297\u014c\2\u0a34\u01e6\3\2\2\2\u0a35\u0a36")
buf.write("\5\u0295\u014b\2\u0a36\u0a37\5\u027b\u013e\2\u0a37\u0a38")
buf.write("\5\u0297\u014c\2\u0a38\u0a39\5\u0299\u014d\2\u0a39\u01e8")
buf.write("\3\2\2\2\u0a3a\u0a3b\5\u0295\u014b\2\u0a3b\u0a3c\5\u027b")
buf.write("\u013e\2\u0a3c\u0a3d\5\u0297\u014c\2\u0a3d\u0a45\5\u0299")
buf.write("\u014d\2\u0a3e\u0a43\5\u028f\u0148\2\u0a3f\u0a41\5\u0295")
buf.write("\u014b\2\u0a40\u0a42\5\u027b\u013e\2\u0a41\u0a40\3\2\2")
buf.write("\2\u0a41\u0a42\3\2\2\2\u0a42\u0a44\3\2\2\2\u0a43\u0a3f")
buf.write("\3\2\2\2\u0a43\u0a44\3\2\2\2\u0a44\u0a46\3\2\2\2\u0a45")
buf.write("\u0a3e\3\2\2\2\u0a45\u0a46\3\2\2\2\u0a46\u01ea\3\2\2\2")
buf.write("\u0a47\u0a48\5\u0295\u014b\2\u0a48\u0a49\5\u027b\u013e")
buf.write("\2\u0a49\u0a4a\5\u0299\u014d\2\u0a4a\u0a4c\5\u0295\u014b")
buf.write("\2\u0a4b\u0a4d\5\u02a3\u0152\2\u0a4c\u0a4b\3\2\2\2\u0a4c")
buf.write("\u0a4d\3\2\2\2\u0a4d\u01ec\3\2\2\2\u0a4e\u0a4f\5\u0295")
buf.write("\u014b\2\u0a4f\u0a50\5\u027b\u013e\2\u0a50\u0a51\5\u0299")
buf.write("\u014d\2\u0a51\u0a56\5\u029b\u014e\2\u0a52\u0a54\5\u0295")
buf.write("\u014b\2\u0a53\u0a55\5\u028d\u0147\2\u0a54\u0a53\3\2\2")
buf.write("\2\u0a54\u0a55\3\2\2\2\u0a55\u0a57\3\2\2\2\u0a56\u0a52")
buf.write("\3\2\2\2\u0a56\u0a57\3\2\2\2\u0a57\u01ee\3\2\2\2\u0a58")
buf.write("\u0a59\5\u0295\u014b\2\u0a59\u0a5a\5\u028b\u0146\2\u0a5a")
buf.write("\u0a5b\5\u0279\u013d\2\u0a5b\u0a5d\5\u0283\u0142\2\u0a5c")
buf.write("\u0a5e\5\u0295\u014b\2\u0a5d\u0a5c\3\2\2\2\u0a5d\u0a5e")
buf.write("\3\2\2\2\u0a5e\u0a63\3\2\2\2\u0a5f\u0a60\5\u0295\u014b")
buf.write("\2\u0a60\u0a61\5\u0279\u013d\2\u0a61\u0a63\3\2\2\2\u0a62")
buf.write("\u0a58\3\2\2\2\u0a62\u0a5f\3\2\2\2\u0a63\u01f0\3\2\2\2")
buf.write("\u0a64\u0a65\5\u0295\u014b\2\u0a65\u0a66\5\u028f\u0148")
buf.write("\2\u0a66\u0a67\5\u0289\u0145\2\u0a67\u0a68\5\u0289\u0145")
buf.write("\2\u0a68\u0a69\5\u028f\u0148\2\u0a69\u0a6a\5\u029d\u014f")
buf.write("\2\u0a6a\u0a6b\5\u027b\u013e\2\u0a6b\u0a6c\5\u0295\u014b")
buf.write("\2\u0a6c\u01f2\3\2\2\2\u0a6d\u0a6e\5\u0295\u014b\2\u0a6e")
buf.write("\u0a6f\5\u029b\u014e\2\u0a6f\u0a70\5\u028d\u0147\2\u0a70")
buf.write("\u01f4\3\2\2\2\u0a71\u0a72\5\u0297\u014c\2\u0a72\u0a73")
buf.write("\5\u0273\u013a\2\u0a73\u0a74\5\u027d\u013f\2\u0a74\u0a79")
buf.write("\5\u027b\u013e\2\u0a75\u0a77\5\u0299\u014d\2\u0a76\u0a78")
buf.write("\5\u02a3\u0152\2\u0a77\u0a76\3\2\2\2\u0a77\u0a78\3\2\2")
buf.write("\2\u0a78\u0a7a\3\2\2\2\u0a79\u0a75\3\2\2\2\u0a79\u0a7a")
buf.write("\3\2\2\2\u0a7a\u01f6\3\2\2\2\u0a7b\u0a7c\5\u0297\u014c")
buf.write("\2\u0a7c\u0a7d\5\u0273\u013a\2\u0a7d\u0a7e\5\u028b\u0146")
buf.write("\2\u0a7e\u0a7f\5\u027b\u013e\2\u0a7f\u01f8\3\2\2\2\u0a80")
buf.write("\u0a81\5\u0297\u014c\2\u0a81\u0a82\5\u0273\u013a\2\u0a82")
buf.write("\u0a83\5\u029d\u014f\2\u0a83\u0a84\5\u027b\u013e\2\u0a84")
buf.write("\u01fa\3\2\2\2\u0a85\u0a86\5\u0297\u014c\2\u0a86\u0a87")
buf.write("\5\u0273\u013a\2\u0a87\u0a88\5\u02a3\u0152\2\u0a88\u01fc")
buf.write("\3\2\2\2\u0a89\u0a8a\5\u0297\u014c\2\u0a8a\u0a8b\5\u0277")
buf.write("\u013c\2\u0a8b\u0a8c\5\u0273\u013a\2\u0a8c\u0a8d\5\u028d")
buf.write("\u0147\2\u0a8d\u01fe\3\2\2\2\u0a8e\u0a8f\5\u0297\u014c")
buf.write("\2\u0a8f\u0a90\5\u0277\u013c\2\u0a90\u0a91\5\u0273\u013a")
buf.write("\2\u0a91\u0a99\5\u0299\u014d\2\u0a92\u0a97\5\u0299\u014d")
buf.write("\2\u0a93\u0a95\5\u027b\u013e\2\u0a94\u0a96\5\u0295\u014b")
buf.write("\2\u0a95\u0a94\3\2\2\2\u0a95\u0a96\3\2\2\2\u0a96\u0a98")
buf.write("\3\2\2\2\u0a97\u0a93\3\2\2\2\u0a97\u0a98\3\2\2\2\u0a98")
buf.write("\u0a9a\3\2\2\2\u0a99\u0a92\3\2\2\2\u0a99\u0a9a\3\2\2\2")
buf.write("\u0a9a\u0200\3\2\2\2\u0a9b\u0a9c\5\u0297\u014c\2\u0a9c")
buf.write("\u0a9d\5\u0277\u013c\2\u0a9d\u0a9e\5\u0281\u0141\2\u0a9e")
buf.write("\u0a9f\5\u027b\u013e\2\u0a9f\u0aa0\5\u028b\u0146\2\u0aa0")
buf.write("\u0aa1\5\u027b\u013e\2\u0aa1\u0202\3\2\2\2\u0aa2\u0aa3")
buf.write("\5\u0291\u0149\2\u0aa3\u0aa4\5\u0295\u014b\2\u0aa4\u0aa5")
buf.write("\5\u028f\u0148\2\u0aa5\u0ab3\5\u0299\u014d\2\u0aa6\u0ab1")
buf.write("\5\u027b\u013e\2\u0aa7\u0aaf\5\u0277\u013c\2\u0aa8\u0aad")
buf.write("\5\u0299\u014d\2\u0aa9\u0aab\5\u027b\u013e\2\u0aaa\u0aac")
buf.write("\5\u0279\u013d\2\u0aab\u0aaa\3\2\2\2\u0aab\u0aac\3\2\2")
buf.write("\2\u0aac\u0aae\3\2\2\2\u0aad\u0aa9\3\2\2\2\u0aad\u0aae")
buf.write("\3\2\2\2\u0aae\u0ab0\3\2\2\2\u0aaf\u0aa8\3\2\2\2\u0aaf")
buf.write("\u0ab0\3\2\2\2\u0ab0\u0ab2\3\2\2\2\u0ab1\u0aa7\3\2\2\2")
buf.write("\u0ab1\u0ab2\3\2\2\2\u0ab2\u0ab4\3\2\2\2\u0ab3\u0aa6\3")
buf.write("\2\2\2\u0ab3\u0ab4\3\2\2\2\u0ab4\u0add\3\2\2\2\u0ab5\u0ab6")
buf.write("\5\u0281\u0141\2\u0ab6\u0ab7\5\u0283\u0142\2\u0ab7\u0ab8")
buf.write("\5\u0279\u013d\2\u0ab8\u0abd\5\u0279\u013d\2\u0ab9\u0abb")
buf.write("\5\u027b\u013e\2\u0aba\u0abc\5\u028d\u0147\2\u0abb\u0aba")
buf.write("\3\2\2\2\u0abb\u0abc\3\2\2\2\u0abc\u0abe\3\2\2\2\u0abd")
buf.write("\u0ab9\3\2\2\2\u0abd\u0abe\3\2\2\2\u0abe\u0add\3\2\2\2")
buf.write("\u0abf\u0ac0\5\u0291\u0149\2\u0ac0\u0ac1\5\u029b\u014e")
buf.write("\2\u0ac1\u0ac2\5\u0275\u013b\2\u0ac2\u0ac7\5\u0289\u0145")
buf.write("\2\u0ac3\u0ac5\5\u0283\u0142\2\u0ac4\u0ac6\5\u0277\u013c")
buf.write("\2\u0ac5\u0ac4\3\2\2\2\u0ac5\u0ac6\3\2\2\2\u0ac6\u0ac8")
buf.write("\3\2\2\2\u0ac7\u0ac3\3\2\2\2\u0ac7\u0ac8\3\2\2\2\u0ac8")
buf.write("\u0add\3\2\2\2\u0ac9\u0aca\5\u0291\u0149\2\u0aca\u0acb")
buf.write("\5\u0295\u014b\2\u0acb\u0acc\5\u0283\u0142\2\u0acc\u0ad4")
buf.write("\5\u029d\u014f\2\u0acd\u0ad2\5\u0273\u013a\2\u0ace\u0ad0")
buf.write("\5\u0299\u014d\2\u0acf\u0ad1\5\u027b\u013e\2\u0ad0\u0acf")
buf.write("\3\2\2\2\u0ad0\u0ad1\3\2\2\2\u0ad1\u0ad3\3\2\2\2\u0ad2")
buf.write("\u0ace\3\2\2\2\u0ad2\u0ad3\3\2\2\2\u0ad3\u0ad5\3\2\2\2")
buf.write("\u0ad4\u0acd\3\2\2\2\u0ad4\u0ad5\3\2\2\2\u0ad5\u0add\3")
buf.write("\2\2\2\u0ad6\u0ad7\5\u0289\u0145\2\u0ad7\u0ad8\5\u028f")
buf.write("\u0148\2\u0ad8\u0ad9\5\u0277\u013c\2\u0ad9\u0ada\5\u0273")
buf.write("\u013a\2\u0ada\u0adb\5\u0289\u0145\2\u0adb\u0add\3\2\2")
buf.write("\2\u0adc\u0aa2\3\2\2\2\u0adc\u0ab5\3\2\2\2\u0adc\u0abf")
buf.write("\3\2\2\2\u0adc\u0ac9\3\2\2\2\u0adc\u0ad6\3\2\2\2\u0add")
buf.write("\u0204\3\2\2\2\u0ade\u0adf\5\u0297\u014c\2\u0adf\u0ae0")
buf.write("\5\u0277\u013c\2\u0ae0\u0ae1\5\u0295\u014b\2\u0ae1\u0ae6")
buf.write("\5\u027b\u013e\2\u0ae2\u0ae4\5\u027b\u013e\2\u0ae3\u0ae5")
buf.write("\5\u028d\u0147\2\u0ae4\u0ae3\3\2\2\2\u0ae4\u0ae5\3\2\2")
buf.write("\2\u0ae5\u0ae7\3\2\2\2\u0ae6\u0ae2\3\2\2\2\u0ae6\u0ae7")
buf.write("\3\2\2\2\u0ae7\u0206\3\2\2\2\u0ae8\u0ae9\5\u0297\u014c")
buf.write("\2\u0ae9\u0aea\5\u027b\u013e\2\u0aea\u0aeb\5\u027b\u013e")
buf.write("\2\u0aeb\u0aec\5\u0287\u0144\2\u0aec\u0208\3\2\2\2\u0aed")
buf.write("\u0aee\5\u0297\u014c\2\u0aee\u0aef\5\u027b\u013e\2\u0aef")
buf.write("\u0af0\5\u0289\u0145\2\u0af0\u0af5\5\u027b\u013e\2\u0af1")
buf.write("\u0af3\5\u0277\u013c\2\u0af2\u0af4\5\u0299\u014d\2\u0af3")
buf.write("\u0af2\3\2\2\2\u0af3\u0af4\3\2\2\2\u0af4\u0af6\3\2\2\2")
buf.write("\u0af5\u0af1\3\2\2\2\u0af5\u0af6\3\2\2\2\u0af6\u020a\3")
buf.write("\2\2\2\u0af7\u0af8\5\u0297\u014c\2\u0af8\u0af9\5\u027b")
buf.write("\u013e\2\u0af9\u0afa\5\u0289\u0145\2\u0afa\u0afb\5\u027b")
buf.write("\u013e\2\u0afb\u0afc\5\u0277\u013c\2\u0afc\u0afd\5\u0299")
buf.write("\u014d\2\u0afd\u0afe\5\u0283\u0142\2\u0afe\u0aff\5\u028f")
buf.write("\u0148\2\u0aff\u0b00\5\u028d\u0147\2\u0b00\u020c\3\2\2")
buf.write("\2\u0b01\u0b02\5\u0297\u014c\2\u0b02\u0b03\5\u027b\u013e")
buf.write("\2\u0b03\u0b04\5\u0299\u014d\2\u0b04\u020e\3\2\2\2\u0b05")
buf.write("\u0b06\5\u0297\u014c\2\u0b06\u0b07\5\u0281\u0141\2\u0b07")
buf.write("\u0b08\5\u0273\u013a\2\u0b08\u0b09\5\u0279\u013d\2\u0b09")
buf.write("\u0b0a\5\u028f\u0148\2\u0b0a\u0b0b\5\u029f\u0150\2\u0b0b")
buf.write("\u0210\3\2\2\2\u0b0c\u0b0d\5\u0297\u014c\2\u0b0d\u0b0e")
buf.write("\5\u0281\u0141\2\u0b0e\u0b0f\5\u0273\u013a\2\u0b0f\u0b10")
buf.write("\5\u0295\u014b\2\u0b10\u0b12\5\u027b\u013e\2\u0b11\u0b13")
buf.write("\5\u0279\u013d\2\u0b12\u0b11\3\2\2\2\u0b12\u0b13\3\2\2")
buf.write("\2\u0b13\u0212\3\2\2\2\u0b14\u0b15\5\u0297\u014c\2\u0b15")
buf.write("\u0b16\5\u0281\u0141\2\u0b16\u0b17\5\u028f\u0148\2\u0b17")
buf.write("\u0b18\5\u029f\u0150\2\u0b18\u0214\3\2\2\2\u0b19\u0b1a")
buf.write("\5\u0297\u014c\2\u0b1a\u0b1b\5\u0281\u0141\2\u0b1b\u0b1c")
buf.write("\5\u029b\u014e\2\u0b1c\u0b1d\5\u0299\u014d\2\u0b1d\u0b1e")
buf.write("\5\u0279\u013d\2\u0b1e\u0b1f\5\u028f\u0148\2\u0b1f\u0b20")
buf.write("\5\u029f\u0150\2\u0b20\u0b21\5\u028d\u0147\2\u0b21\u0216")
buf.write("\3\2\2\2\u0b22\u0b23\5\u0297\u014c\2\u0b23\u0b24\5\u0283")
buf.write("\u0142\2\u0b24\u0b25\5\u02a5\u0153\2\u0b25\u0b26\5\u027b")
buf.write("\u013e\2\u0b26\u0218\3\2\2\2\u0b27\u0b28\5\u0297\u014c")
buf.write("\2\u0b28\u0b29\5\u0287\u0144\2\u0b29\u0b2a\5\u0283\u0142")
buf.write("\2\u0b2a\u0b2b\5\u0291\u0149\2\u0b2b\u021a\3\2\2\2\u0b2c")
buf.write("\u0b2d\5\u0297\u014c\2\u0b2d\u0b2e\5\u028f\u0148\2\u0b2e")
buf.write("\u0b2f\5\u0295\u014b\2\u0b2f\u0b30\5\u0299\u014d\2\u0b30")
buf.write("\u021c\3\2\2\2\u0b31\u0b32\5\u0297\u014c\2\u0b32\u0b33")
buf.write("\5\u0299\u014d\2\u0b33\u0b34\5\u0273\u013a\2\u0b34\u0b35")
buf.write("\5\u0299\u014d\2\u0b35\u0b36\5\u029b\u014e\2\u0b36\u0b37")
buf.write("\5\u0297\u014c\2\u0b37\u021e\3\2\2\2\u0b38\u0b39\5\u0297")
buf.write("\u014c\2\u0b39\u0b3a\5\u0299\u014d\2\u0b3a\u0b3b\5\u027b")
buf.write("\u013e\2\u0b3b\u0b3c\5\u0291\u0149\2\u0b3c\u0220\3\2\2")
buf.write("\2\u0b3d\u0b3e\5\u0297\u014c\2\u0b3e\u0b3f\5\u0299\u014d")
buf.write("\2\u0b3f\u0b40\5\u028f\u0148\2\u0b40\u0b42\5\u0295\u014b")
buf.write("\2\u0b41\u0b43\5\u027b\u013e\2\u0b42\u0b41\3\2\2\2\u0b42")
buf.write("\u0b43\3\2\2\2\u0b43\u0222\3\2\2\2\u0b44\u0b45\5\u0297")
buf.write("\u014c\2\u0b45\u0b46\5\u0299\u014d\2\u0b46\u0b47\5\u0295")
buf.write("\u014b\2\u0b47\u0b55\5\u029b\u014e\2\u0b48\u0b53\5\u0277")
buf.write("\u013c\2\u0b49\u0b51\5\u0299\u014d\2\u0b4a\u0b4f\5\u029b")
buf.write("\u014e\2\u0b4b\u0b4d\5\u0295\u014b\2\u0b4c\u0b4e\5\u027b")
buf.write("\u013e\2\u0b4d\u0b4c\3\2\2\2\u0b4d\u0b4e\3\2\2\2\u0b4e")
buf.write("\u0b50\3\2\2\2\u0b4f\u0b4b\3\2\2\2\u0b4f\u0b50\3\2\2\2")
buf.write("\u0b50\u0b52\3\2\2\2\u0b51\u0b4a\3\2\2\2\u0b51\u0b52\3")
buf.write("\2\2\2\u0b52\u0b54\3\2\2\2\u0b53\u0b49\3\2\2\2\u0b53\u0b54")
buf.write("\3\2\2\2\u0b54\u0b56\3\2\2\2\u0b55\u0b48\3\2\2\2\u0b55")
buf.write("\u0b56\3\2\2\2\u0b56\u0224\3\2\2\2\u0b57\u0b58\5\u0297")
buf.write("\u014c\2\u0b58\u0b59\5\u0299\u014d\2\u0b59\u0b5a\5\u02a3")
buf.write("\u0152\2\u0b5a\u0b5b\5\u0289\u0145\2\u0b5b\u0b5c\5\u027b")
buf.write("\u013e\2\u0b5c\u0226\3\2\2\2\u0b5d\u0b5e\5\u0297\u014c")
buf.write("\2\u0b5e\u0b5f\5\u029b\u014e\2\u0b5f\u0b60\5\u028b\u0146")
buf.write("\2\u0b60\u0228\3\2\2\2\u0b61\u0b62\5\u0297\u014c\2\u0b62")
buf.write("\u0b63\5\u02a3\u0152\2\u0b63\u0b64\5\u0297\u014c\2\u0b64")
buf.write("\u0b65\5\u028b\u0146\2\u0b65\u0b66\5\u027b\u013e\2\u0b66")
buf.write("\u0b67\5\u028d\u0147\2\u0b67\u0b68\5\u029b\u014e\2\u0b68")
buf.write("\u022a\3\2\2\2\u0b69\u0b6a\5\u0297\u014c\2\u0b6a\u0b6b")
buf.write("\5\u02a3\u0152\2\u0b6b\u0b6c\5\u0297\u014c\2\u0b6c\u0b6d")
buf.write("\5\u0299\u014d\2\u0b6d\u0b6e\5\u027b\u013e\2\u0b6e\u0b6f")
buf.write("\5\u028b\u0146\2\u0b6f\u022c\3\2\2\2\u0b70\u0b71\5\u0299")
buf.write("\u014d\2\u0b71\u0b72\5\u0273\u013a\2\u0b72\u0b73\5\u0275")
buf.write("\u013b\2\u0b73\u0b78\5\u0289\u0145\2\u0b74\u0b76\5\u027b")
buf.write("\u013e\2\u0b75\u0b77\5\u0297\u014c\2\u0b76\u0b75\3\2\2")
buf.write("\2\u0b76\u0b77\3\2\2\2\u0b77\u0b79\3\2\2\2\u0b78\u0b74")
buf.write("\3\2\2\2\u0b78\u0b79\3\2\2\2\u0b79\u022e\3\2\2\2\u0b7a")
buf.write("\u0b7b\5\u0299\u014d\2\u0b7b\u0b7c\5\u0273\u013a\2\u0b7c")
buf.write("\u0b7d\5\u0275\u013b\2\u0b7d\u0b7e\5\u0289\u0145\2\u0b7e")
buf.write("\u0b7f\5\u027b\u013e\2\u0b7f\u0b80\5\u0291\u0149\2\u0b80")
buf.write("\u0b81\5\u0295\u014b\2\u0b81\u0b82\5\u028f\u0148\2\u0b82")
buf.write("\u0b83\5\u028b\u0146\2\u0b83\u0b84\5\u0291\u0149\2\u0b84")
buf.write("\u0b85\5\u0299\u014d\2\u0b85\u0230\3\2\2\2\u0b86\u0b87")
buf.write("\5\u0299\u014d\2\u0b87\u0b88\5\u0273\u013a\2\u0b88\u0b89")
buf.write("\5\u027f\u0140\2\u0b89\u0232\3\2\2\2\u0b8a\u0b8b\5\u0299")
buf.write("\u014d\2\u0b8b\u0b8c\5\u0273\u013a\2\u0b8c\u0b8d\5\u0289")
buf.write("\u0145\2\u0b8d\u0b8e\5\u0287\u0144\2\u0b8e\u0234\3\2\2")
buf.write("\2\u0b8f\u0b90\5\u0299\u014d\2\u0b90\u0b91\5\u027b\u013e")
buf.write("\2\u0b91\u0b92\5\u02a1\u0151\2\u0b92\u0b93\5\u0299\u014d")
buf.write("\2\u0b93\u0236\3\2\2\2\u0b94\u0b95\5\u0235\u011b\2\u0b95")
buf.write("\u0b96\5\u028b\u0146\2\u0b96\u0b97\5\u027b\u013e\2\u0b97")
buf.write("\u0b98\5\u0295\u014b\2\u0b98\u0b99\5\u027f\u0140\2\u0b99")
buf.write("\u0b9a\5\u027b\u013e\2\u0b9a\u0238\3\2\2\2\u0b9b\u0b9c")
buf.write("\5\u0299\u014d\2\u0b9c\u0b9d\5\u0281\u0141\2\u0b9d\u0b9e")
buf.write("\5\u027b\u013e\2\u0b9e\u0b9f\5\u028d\u0147\2\u0b9f\u023a")
buf.write("\3\2\2\2\u0ba0\u0ba1\5\u0299\u014d\2\u0ba1\u0ba2\5\u0281")
buf.write("\u0141\2\u0ba2\u0ba3\5\u0295\u014b\2\u0ba3\u0ba5\5\u028f")
buf.write("\u0148\2\u0ba4\u0ba6\5\u029f\u0150\2\u0ba5\u0ba4\3\2\2")
buf.write("\2\u0ba5\u0ba6\3\2\2\2\u0ba6\u023c\3\2\2\2\u0ba7\u0ba8")
buf.write("\5\u0299\u014d\2\u0ba8\u0ba9\5\u0283\u0142\2\u0ba9\u0baa")
buf.write("\5\u028b\u0146\2\u0baa\u0bb2\5\u027b\u013e\2\u0bab\u0bb0")
buf.write("\5\u028f\u0148\2\u0bac\u0bae\5\u029b\u014e\2\u0bad\u0baf")
buf.write("\5\u0299\u014d\2\u0bae\u0bad\3\2\2\2\u0bae\u0baf\3\2\2")
buf.write("\2\u0baf\u0bb1\3\2\2\2\u0bb0\u0bac\3\2\2\2\u0bb0\u0bb1")
buf.write("\3\2\2\2\u0bb1\u0bb3\3\2\2\2\u0bb2\u0bab\3\2\2\2\u0bb2")
buf.write("\u0bb3\3\2\2\2\u0bb3\u023e\3\2\2\2\u0bb4\u0bb5\5\u0299")
buf.write("\u014d\2\u0bb5\u0bb6\5\u0283\u0142\2\u0bb6\u0bb7\5\u0299")
buf.write("\u014d\2\u0bb7\u0bb8\5\u0289\u0145\2\u0bb8\u0bb9\5\u027b")
buf.write("\u013e\2\u0bb9\u0240\3\2\2\2\u0bba\u0bbb\5\u0299\u014d")
buf.write("\2\u0bbb\u0bbc\5\u028f\u0148\2\u0bbc\u0242\3\2\2\2\u0bbd")
buf.write("\u0bbe\5\u0299\u014d\2\u0bbe\u0bbf\5\u028f\u0148\2\u0bbf")
buf.write("\u0bc0\5\u0291\u0149\2\u0bc0\u0244\3\2\2\2\u0bc1\u0bc2")
buf.write("\5\u0299\u014d\2\u0bc2\u0bc3\5\u0295\u014b\2\u0bc3\u0bc4")
buf.write("\5\u02a3\u0152\2\u0bc4\u0246\3\2\2\2\u0bc5\u0bc6\5\u0299")
buf.write("\u014d\2\u0bc6\u0bc7\5\u02a3\u0152\2\u0bc7\u0bc8\5\u0291")
buf.write("\u0149\2\u0bc8\u0bc9\5\u027b\u013e\2\u0bc9\u0248\3\2\2")
buf.write("\2\u0bca\u0bcb\5\u0299\u014d\2\u0bcb\u0bcc\5\u02a3\u0152")
buf.write("\2\u0bcc\u0bcd\5\u0291\u0149\2\u0bcd\u0bce\5\u027b\u013e")
buf.write("\2\u0bce\u0bcf\5\u0273\u013a\2\u0bcf\u0bd0\5\u0281\u0141")
buf.write("\2\u0bd0\u0bd1\5\u027b\u013e\2\u0bd1\u0bd2\5\u0273\u013a")
buf.write("\2\u0bd2\u0bd3\5\u0279\u013d\2\u0bd3\u024a\3\2\2\2\u0bd4")
buf.write("\u0bd5\5\u029b\u014e\2\u0bd5\u0bd6\5\u0279\u013d\2\u0bd6")
buf.write("\u0bd7\5\u027d\u013f\2\u0bd7\u0bd8\5\u0291\u0149\2\u0bd8")
buf.write("\u0bd9\5\u0273\u013a\2\u0bd9\u0bda\5\u0295\u014b\2\u0bda")
buf.write("\u0bdb\5\u028b\u0146\2\u0bdb\u0bdc\5\u0297\u014c\2\u0bdc")
buf.write("\u024c\3\2\2\2\u0bdd\u0bde\5\u029b\u014e\2\u0bde\u0bdf")
buf.write("\5\u028d\u0147\2\u0bdf\u0be0\5\u0279\u013d\2\u0be0\u0be1")
buf.write("\5\u027b\u013e\2\u0be1\u0be6\5\u027d\u013f\2\u0be2\u0be3")
buf.write("\5\u0283\u0142\2\u0be3\u0be4\5\u028d\u0147\2\u0be4\u0be5")
buf.write("\5\u027b\u013e\2\u0be5\u0be7\3\2\2\2\u0be6\u0be2\3\2\2")
buf.write("\2\u0be6\u0be7\3\2\2\2\u0be7\u024e\3\2\2\2\u0be8\u0be9")
buf.write("\5\u029b\u014e\2\u0be9\u0bea\5\u028d\u0147\2\u0bea\u0beb")
buf.write("\5\u0283\u0142\2\u0beb\u0bec\5\u0293\u014a\2\u0bec\u0bed")
buf.write("\5\u029b\u014e\2\u0bed\u0bee\5\u027b\u013e\2\u0bee\u0250")
buf.write("\3\2\2\2\u0bef\u0bf0\5\u029b\u014e\2\u0bf0\u0bf1\5\u028d")
buf.write("\u0147\2\u0bf1\u0bf2\5\u0289\u0145\2\u0bf2\u0bf7\5\u028f")
buf.write("\u0148\2\u0bf3\u0bf5\5\u0277\u013c\2\u0bf4\u0bf6\5\u0287")
buf.write("\u0144\2\u0bf5\u0bf4\3\2\2\2\u0bf5\u0bf6\3\2\2\2\u0bf6")
buf.write("\u0bf8\3\2\2\2\u0bf7\u0bf3\3\2\2\2\u0bf7\u0bf8\3\2\2\2")
buf.write("\u0bf8\u0252\3\2\2\2\u0bf9\u0bfa\5\u029b\u014e\2\u0bfa")
buf.write("\u0bfb\5\u0291\u0149\2\u0bfb\u0bfc\5\u0279\u013d\2\u0bfc")
buf.write("\u0c01\5\u0273\u013a\2\u0bfd\u0bff\5\u0299\u014d\2\u0bfe")
buf.write("\u0c00\5\u027b\u013e\2\u0bff\u0bfe\3\2\2\2\u0bff\u0c00")
buf.write("\3\2\2\2\u0c00\u0c02\3\2\2\2\u0c01\u0bfd\3\2\2\2\u0c01")
buf.write("\u0c02\3\2\2\2\u0c02\u0254\3\2\2\2\u0c03\u0c04\5\u029b")
buf.write("\u014e\2\u0c04\u0c05\5\u0297\u014c\2\u0c05\u0c06\5\u027b")
buf.write("\u013e\2\u0c06\u0256\3\2\2\2\u0c07\u0c08\5\u029d\u014f")
buf.write("\2\u0c08\u0c09\5\u0273\u013a\2\u0c09\u0c0a\5\u0289\u0145")
buf.write("\2\u0c0a\u0c0b\5\u029b\u014e\2\u0c0b\u0c0c\5\u027b\u013e")
buf.write("\2\u0c0c\u0258\3\2\2\2\u0c0d\u0c0e\5\u029d\u014f\2\u0c0e")
buf.write("\u0c0f\5\u0273\u013a\2\u0c0f\u0c10\5\u0289\u0145\2\u0c10")
buf.write("\u0c11\5\u029b\u014e\2\u0c11\u0c12\5\u027b\u013e\2\u0c12")
buf.write("\u0c13\5\u0297\u014c\2\u0c13\u025a\3\2\2\2\u0c14\u0c15")
buf.write("\5\u029f\u0150\2\u0c15\u0c16\5\u0273\u013a\2\u0c16\u0c17")
buf.write("\5\u0283\u0142\2\u0c17\u0c18\5\u0299\u014d\2\u0c18\u025c")
buf.write("\3\2\2\2\u0c19\u0c1a\5\u029f\u0150\2\u0c1a\u0c1b\5\u0281")
buf.write("\u0141\2\u0c1b\u0c1c\5\u027b\u013e\2\u0c1c\u0c1d\5\u028d")
buf.write("\u0147\2\u0c1d\u025e\3\2\2\2\u0c1e\u0c1f\5\u029f\u0150")
buf.write("\2\u0c1f\u0c20\5\u0281\u0141\2\u0c20\u0c21\5\u027b\u013e")
buf.write("\2\u0c21\u0c22\5\u0295\u014b\2\u0c22\u0c23\5\u027b\u013e")
buf.write("\2\u0c23\u0260\3\2\2\2\u0c24\u0c25\5\u029f\u0150\2\u0c25")
buf.write("\u0c26\5\u0281\u0141\2\u0c26\u0c27\5\u0283\u0142\2\u0c27")
buf.write("\u0c28\5\u0289\u0145\2\u0c28\u0c29\5\u027b\u013e\2\u0c29")
buf.write("\u0262\3\2\2\2\u0c2a\u0c2b\5\u029f\u0150\2\u0c2b\u0c2c")
buf.write("\5\u0283\u0142\2\u0c2c\u0c2d\5\u028d\u0147\2\u0c2d\u0c35")
buf.write("\5\u0279\u013d\2\u0c2e\u0c33\5\u028f\u0148\2\u0c2f\u0c31")
buf.write("\5\u029f\u0150\2\u0c30\u0c32\5\u0297\u014c\2\u0c31\u0c30")
buf.write("\3\2\2\2\u0c31\u0c32\3\2\2\2\u0c32\u0c34\3\2\2\2\u0c33")
buf.write("\u0c2f\3\2\2\2\u0c33\u0c34\3\2\2\2\u0c34\u0c36\3\2\2\2")
buf.write("\u0c35\u0c2e\3\2\2\2\u0c35\u0c36\3\2\2\2\u0c36\u0264\3")
buf.write("\2\2\2\u0c37\u0c38\5\u029f\u0150\2\u0c38\u0c39\5\u0283")
buf.write("\u0142\2\u0c39\u0c3a\5\u0299\u014d\2\u0c3a\u0c3b\5\u0281")
buf.write("\u0141\2\u0c3b\u0266\3\2\2\2\u0c3c\u0c3d\5\u02a5\u0153")
buf.write("\2\u0c3d\u0c3e\5\u0273\u013a\2\u0c3e\u0c3f\5\u0291\u0149")
buf.write("\2\u0c3f\u0268\3\2\2\2\u0c40\u0c41\5\u02a5\u0153\2\u0c41")
buf.write("\u0c42\5\u028f\u0148\2\u0c42\u0c43\5\u028f\u0148\2\u0c43")
buf.write("\u0c44\5\u028b\u0146\2\u0c44\u026a\3\2\2\2\u0c45\u0c49")
buf.write("\5\u02ab\u0156\2\u0c46\u0c48\5\u02ad\u0157\2\u0c47\u0c46")
buf.write("\3\2\2\2\u0c48\u0c4b\3\2\2\2\u0c49\u0c47\3\2\2\2\u0c49")
buf.write("\u0c4a\3\2\2\2\u0c4a\u026c\3\2\2\2\u0c4b\u0c49\3\2\2\2")
buf.write("\u0c4c\u0c4d\7\f\2\2\u0c4d\u026e\3\2\2\2\u0c4e\u0c4f\t")
buf.write("\5\2\2\u0c4f\u0c50\3\2\2\2\u0c50\u0c51\b\u0138\2\2\u0c51")
buf.write("\u0270\3\2\2\2\u0c52\u0c53\13\2\2\2\u0c53\u0272\3\2\2")
buf.write("\2\u0c54\u0c55\t\6\2\2\u0c55\u0274\3\2\2\2\u0c56\u0c57")
buf.write("\t\7\2\2\u0c57\u0276\3\2\2\2\u0c58\u0c59\t\b\2\2\u0c59")
buf.write("\u0278\3\2\2\2\u0c5a\u0c5b\t\t\2\2\u0c5b\u027a\3\2\2\2")
buf.write("\u0c5c\u0c5d\t\n\2\2\u0c5d\u027c\3\2\2\2\u0c5e\u0c5f\t")
buf.write("\13\2\2\u0c5f\u027e\3\2\2\2\u0c60\u0c61\t\f\2\2\u0c61")
buf.write("\u0280\3\2\2\2\u0c62\u0c63\t\r\2\2\u0c63\u0282\3\2\2\2")
buf.write("\u0c64\u0c65\t\16\2\2\u0c65\u0284\3\2\2\2\u0c66\u0c67")
buf.write("\t\17\2\2\u0c67\u0286\3\2\2\2\u0c68\u0c69\t\20\2\2\u0c69")
buf.write("\u0288\3\2\2\2\u0c6a\u0c6b\t\21\2\2\u0c6b\u028a\3\2\2")
buf.write("\2\u0c6c\u0c6d\t\22\2\2\u0c6d\u028c\3\2\2\2\u0c6e\u0c6f")
buf.write("\t\23\2\2\u0c6f\u028e\3\2\2\2\u0c70\u0c71\t\24\2\2\u0c71")
buf.write("\u0290\3\2\2\2\u0c72\u0c73\t\25\2\2\u0c73\u0292\3\2\2")
buf.write("\2\u0c74\u0c75\t\26\2\2\u0c75\u0294\3\2\2\2\u0c76\u0c77")
buf.write("\t\27\2\2\u0c77\u0296\3\2\2\2\u0c78\u0c79\t\30\2\2\u0c79")
buf.write("\u0298\3\2\2\2\u0c7a\u0c7b\t\31\2\2\u0c7b\u029a\3\2\2")
buf.write("\2\u0c7c\u0c7d\t\32\2\2\u0c7d\u029c\3\2\2\2\u0c7e\u0c7f")
buf.write("\t\33\2\2\u0c7f\u029e\3\2\2\2\u0c80\u0c81\t\34\2\2\u0c81")
buf.write("\u02a0\3\2\2\2\u0c82\u0c83\t\35\2\2\u0c83\u02a2\3\2\2")
buf.write("\2\u0c84\u0c85\t\36\2\2\u0c85\u02a4\3\2\2\2\u0c86\u0c87")
buf.write("\t\37\2\2\u0c87\u02a6\3\2\2\2\u0c88\u0c89\t \2\2\u0c89")
buf.write("\u02a8\3\2\2\2\u0c8a\u0c8b\t!\2\2\u0c8b\u02aa\3\2\2\2")
buf.write("\u0c8c\u0c8e\t\"\2\2\u0c8d\u0c8c\3\2\2\2\u0c8e\u02ac\3")
buf.write("\2\2\2\u0c8f\u0c92\5\u02ab\u0156\2\u0c90\u0c92\t#\2\2")
buf.write("\u0c91\u0c8f\3\2\2\2\u0c91\u0c90\3\2\2\2\u0c92\u02ae\3")
buf.write("\2\2\2\u0100\2\u02b4\u02b8\u02bd\u02c1\u02c6\u02c9\u02ce")
buf.write("\u02d7\u02da\u02e1\u0311\u0317\u031d\u0337\u033e\u0347")
buf.write("\u034b\u0353\u035d\u0365\u0370\u0372\u0374\u0376\u03af")
buf.write("\u03b1\u03b8\u03cd\u03cf\u03f1\u03f7\u0405\u040d\u040f")
buf.write("\u042d\u0436\u0438\u043a\u0444\u0446\u045c\u0469\u04a5")
buf.write("\u04a7\u04a9\u04bb\u04bd\u04bf\u04c1\u04cd\u04d5\u04d7")
buf.write("\u04df\u04e1\u04ec\u04ee\u0513\u0515\u0517\u0519\u051b")
buf.write("\u051d\u0524\u052e\u0530\u0539\u053b\u053d\u054d\u054f")
buf.write("\u0557\u0559\u0577\u0579\u057b\u057d\u057f\u059a\u059c")
buf.write("\u059e\u05a0\u05c5\u05c7\u05c9\u05d4\u05d6\u05d8\u05da")
buf.write("\u05e8\u05ef\u05f8\u05fa\u05fc\u0605\u0607\u0609\u060b")
buf.write("\u060d\u0616\u0618\u061a\u0623\u0625\u0627\u062f\u0638")
buf.write("\u063a\u063c\u0643\u064a\u065f\u0672\u0685\u0687\u0689")
buf.write("\u068b\u0699\u06a6\u06a8\u06b1\u06b3\u06b5\u06f3\u06f5")
buf.write("\u0701\u073a\u073c\u0754\u0756\u0758\u075a\u0786\u0788")
buf.write("\u07d7\u07d9\u07db\u07e6\u07eb\u07f3\u07f5\u08cb\u08cd")
buf.write("\u08cf\u08d1\u08d3\u08e9\u08eb\u08ed\u08ef\u08f1\u08f3")
buf.write("\u08f5\u0901\u0903\u0905\u0907\u0909\u090b\u090d\u091f")
buf.write("\u092d\u092f\u0931\u093c\u093e\u0940\u0942\u0944\u094e")
buf.write("\u0950\u0952\u0954\u0956\u0966\u0968\u0971\u0973\u0975")
buf.write("\u097d\u097f\u0999\u099b\u099d\u099f\u09a1\u09a3\u09bc")
buf.write("\u09be\u09e8\u09ea\u09ec\u0a07\u0a09\u0a0b\u0a13\u0a15")
buf.write("\u0a1e\u0a20\u0a22\u0a41\u0a43\u0a45\u0a4c\u0a54\u0a56")
buf.write("\u0a5d\u0a62\u0a77\u0a79\u0a95\u0a97\u0a99\u0aab\u0aad")
buf.write("\u0aaf\u0ab1\u0ab3\u0abb\u0abd\u0ac5\u0ac7\u0ad0\u0ad2")
buf.write("\u0ad4\u0adc\u0ae4\u0ae6\u0af3\u0af5\u0b12\u0b42\u0b4d")
buf.write("\u0b4f\u0b51\u0b53\u0b55\u0b76\u0b78\u0ba5\u0bae\u0bb0")
buf.write("\u0bb2\u0be6\u0bf5\u0bf7\u0bff\u0c01\u0c31\u0c33\u0c35")
buf.write("\u0c49\u0c8d\u0c91\4\2\3\2\2\4\2")
return buf.getvalue()
class VisualFoxpro9Lexer(Lexer):
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
T__0 = 1
NUMBER_LITERAL = 2
BLOB_LITERAL = 3
SEMICOLON = 4
AMPERSAND = 5
COMMERCIALAT = 6
ASTERISK = 7
PLUS_SIGN = 8
MINUS_SIGN = 9
FORWARDSLASH = 10
PERIOD = 11
LEFTBRACKET = 12
RIGHTBRACKET = 13
LEFTBRACE = 14
RIGHTBRACE = 15
LEFTPAREN = 16
RIGHTPAREN = 17
BACKSLASH = 18
LESSTHAN = 19
GREATERTHAN = 20
EXCLAMATION = 21
HASH = 22
DOUBLEEQUALS = 23
NOTEQUALS = 24
GTEQ = 25
LTEQ = 26
MODULO = 27
EQUALS = 28
CARAT = 29
COMMA = 30
DOLLAR = 31
COLON = 32
QUESTION = 33
DOUBLEQUOTE = 34
SINGLEQUOTE = 35
COMMENT = 36
LINECONT = 37
MACROLINE = 38
ACTIVATE = 39
ADD = 40
ADDITIVE = 41
AFTER = 42
ALIAS = 43
ALL = 44
ALTER = 45
ALTERNATE = 46
AND = 47
APPEND = 48
ARRAY = 49
AS = 50
ASCENDING = 51
ASSERT = 52
ASSERTS = 53
AT = 54
BAR = 55
BEFORE = 56
BELL = 57
BLANK = 58
BOOLEANCHAR = 59
BOTTOM = 60
BROWSE = 61
BY = 62
CANDIDATE = 63
CASE = 64
CAST = 65
CATCH = 66
CENTURY = 67
CHDIR = 68
CLASS = 69
CLASSLIB = 70
CLEAR = 71
CLOCK = 72
CLOSE = 73
COLLECTION = 74
COLOR = 75
COLUMN = 76
COMMAND = 77
COMPACT = 78
COMPATIBLE = 79
COMPILE = 80
CONSOLE = 81
CONTINUE = 82
COPY = 83
COUNT = 84
CREATE = 85
CURSOR = 86
DATABASE = 87
DATASESSION = 88
DATE = 89
DB4 = 90
DBF = 91
DEACTIVATE = 92
DEBUG = 93
DEBUGOUT = 94
DECLARE = 95
DEFAULT = 96
DEFINE = 97
DELETE = 98
DELETED = 99
DESCENDING = 100
DIMENSION = 101
DISTINCT = 102
DLLS = 103
DO = 104
DOEVENTS = 105
DROP = 106
EACH = 107
ELIF = 108
ELSE = 109
ENCRYPT = 110
ENDCASE = 111
ENDDEFINE = 112
ENDDO = 113
ENDFOR = 114
ENDIF = 115
ENDPROC = 116
ENDSCAN = 117
ENDTEXT = 118
ENDTRY = 119
ENDWITH = 120
ERASE = 121
ERROR = 122
ESCAPE = 123
EVENTS = 124
EXACT = 125
EXCEPT = 126
EXCLUSIVE = 127
EXTENDED = 128
EXTERNAL = 129
FIELDS = 130
FILE = 131
FILL = 132
FILTER = 133
FINALLY = 134
FLAGS = 135
FONT = 136
FOR = 137
FORCE = 138
FORM = 139
FOXOBJECT = 140
FOXPLUS = 141
FREE = 142
FROM = 143
GATHER = 144
GETS = 145
GOTO = 146
HELP = 147
HIDE = 148
ICON = 149
IF = 150
IFDEF = 151
IN = 152
INCLUDE = 153
INDEX = 154
INDEXES = 155
INSERT = 156
INTO = 157
JOIN = 158
KEY = 159
KEYBOARD = 160
LABEL = 161
LIBRARY = 162
LIKE = 163
LINE = 164
LINKED = 165
LIST = 166
LOCATE = 167
MACROS = 168
MARGIN = 169
MARK = 170
MASTER = 171
MAX = 172
MEMO = 173
MEMORY = 174
MEMOWIDTH = 175
MEMVAR = 176
MENU = 177
MENUS = 178
MESSAGE = 179
MIN = 180
MKDIR = 181
MODIFY = 182
MULTILOCKS = 183
NAME = 184
NEAR = 185
NEGOTIATE = 186
NEXT = 187
NOCLEAR = 188
NOCONSOLE = 189
NODEBUG = 190
NOEJECT = 191
NOMARGIN = 192
NOMENU = 193
NOOPTIMIZE = 194
NOPROMPT = 195
NORM = 196
NOSAVE = 197
NOSHOW = 198
NOT = 199
NOTE = 200
NOTIFY = 201
NOUPDATE = 202
NOWAIT = 203
NULL = 204
NUMBER = 205
OBJECT = 206
OF = 207
OFF = 208
ON = 209
OR = 210
ORDER = 211
OTHERAND = 212
OTHERNOT = 213
OTHEROR = 214
OTHERWISE = 215
PACK = 216
PAD = 217
PARAMETER = 218
PLAIN = 219
POP = 220
POPUP = 221
PRETEXT = 222
PRINTER = 223
PROCEDURE = 224
PROGRAM = 225
PROGRAMCONTROL = 226
PROMPT = 227
PUSH = 228
READ = 229
RECALL = 230
RECORD = 231
RECYCLE = 232
REFERENCE = 233
REFRESH = 234
REINDEX = 235
RELATION = 236
RELATIVE = 237
RELEASE = 238
RENAME = 239
REPLACE = 240
REPORT = 241
RESOURCES = 242
REST = 243
RESTORE = 244
RETRY = 245
RETURN = 246
RMDIR = 247
ROLLOVER = 248
RUN = 249
SAFETY = 250
SAME = 251
SAVE = 252
SAY = 253
SCAN = 254
SCATTER = 255
SCHEME = 256
SCOPE = 257
SCREEN = 258
SEEK = 259
SELECT = 260
SELECTION = 261
SET = 262
SHADOW = 263
SHARED = 264
SHOW = 265
SHUTDOWN = 266
SIZE = 267
SKIPKW = 268
SORT = 269
STATUS = 270
STEP = 271
STORE = 272
STRUCTURE = 273
STYLE = 274
SUM = 275
SYSMENU = 276
SYSTEM = 277
TABLE = 278
TABLEPROMPT = 279
TAG = 280
TALK = 281
TEXT = 282
TEXTMERGE = 283
THEN = 284
THROW = 285
TIMEOUT = 286
TITLE = 287
TO = 288
TOP = 289
TRY = 290
TYPE = 291
TYPEAHEAD = 292
UDFPARMS = 293
UNDEFINE = 294
UNIQUE = 295
UNLOCK = 296
UPDATE = 297
USE = 298
VALUE = 299
VALUES = 300
WAIT = 301
WHEN = 302
WHERE = 303
WHILE = 304
WINDOW = 305
WITH = 306
ZAP = 307
ZOOM = 308
ID = 309
NL = 310
WS = 311
UNMATCHED = 312
channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
modeNames = [ "DEFAULT_MODE" ]
literalNames = [ "<INVALID>",
"'_'", "';'", "'&'", "'@'", "'*'", "'+'", "'-'", "'/'", "'.'",
"'['", "']'", "'{'", "'}'", "'('", "')'", "'\\'", "'<'", "'>'",
"'!'", "'#'", "'=='", "'%'", "'='", "'^'", "','", "'$'", "':'",
"'?'", "'\"'", "'''", "'\n'" ]
symbolicNames = [ "<INVALID>",
"NUMBER_LITERAL", "BLOB_LITERAL", "SEMICOLON", "AMPERSAND",
"COMMERCIALAT", "ASTERISK", "PLUS_SIGN", "MINUS_SIGN", "FORWARDSLASH",
"PERIOD", "LEFTBRACKET", "RIGHTBRACKET", "LEFTBRACE", "RIGHTBRACE",
"LEFTPAREN", "RIGHTPAREN", "BACKSLASH", "LESSTHAN", "GREATERTHAN",
"EXCLAMATION", "HASH", "DOUBLEEQUALS", "NOTEQUALS", "GTEQ",
"LTEQ", "MODULO", "EQUALS", "CARAT", "COMMA", "DOLLAR", "COLON",
"QUESTION", "DOUBLEQUOTE", "SINGLEQUOTE", "COMMENT", "LINECONT",
"MACROLINE", "ACTIVATE", "ADD", "ADDITIVE", "AFTER", "ALIAS",
"ALL", "ALTER", "ALTERNATE", "AND", "APPEND", "ARRAY", "AS",
"ASCENDING", "ASSERT", "ASSERTS", "AT", "BAR", "BEFORE", "BELL",
"BLANK", "BOOLEANCHAR", "BOTTOM", "BROWSE", "BY", "CANDIDATE",
"CASE", "CAST", "CATCH", "CENTURY", "CHDIR", "CLASS", "CLASSLIB",
"CLEAR", "CLOCK", "CLOSE", "COLLECTION", "COLOR", "COLUMN",
"COMMAND", "COMPACT", "COMPATIBLE", "COMPILE", "CONSOLE", "CONTINUE",
"COPY", "COUNT", "CREATE", "CURSOR", "DATABASE", "DATASESSION",
"DATE", "DB4", "DBF", "DEACTIVATE", "DEBUG", "DEBUGOUT", "DECLARE",
"DEFAULT", "DEFINE", "DELETE", "DELETED", "DESCENDING", "DIMENSION",
"DISTINCT", "DLLS", "DO", "DOEVENTS", "DROP", "EACH", "ELIF",
"ELSE", "ENCRYPT", "ENDCASE", "ENDDEFINE", "ENDDO", "ENDFOR",
"ENDIF", "ENDPROC", "ENDSCAN", "ENDTEXT", "ENDTRY", "ENDWITH",
"ERASE", "ERROR", "ESCAPE", "EVENTS", "EXACT", "EXCEPT", "EXCLUSIVE",
"EXTENDED", "EXTERNAL", "FIELDS", "FILE", "FILL", "FILTER",
"FINALLY", "FLAGS", "FONT", "FOR", "FORCE", "FORM", "FOXOBJECT",
"FOXPLUS", "FREE", "FROM", "GATHER", "GETS", "GOTO", "HELP",
"HIDE", "ICON", "IF", "IFDEF", "IN", "INCLUDE", "INDEX", "INDEXES",
"INSERT", "INTO", "JOIN", "KEY", "KEYBOARD", "LABEL", "LIBRARY",
"LIKE", "LINE", "LINKED", "LIST", "LOCATE", "MACROS", "MARGIN",
"MARK", "MASTER", "MAX", "MEMO", "MEMORY", "MEMOWIDTH", "MEMVAR",
"MENU", "MENUS", "MESSAGE", "MIN", "MKDIR", "MODIFY", "MULTILOCKS",
"NAME", "NEAR", "NEGOTIATE", "NEXT", "NOCLEAR", "NOCONSOLE",
"NODEBUG", "NOEJECT", "NOMARGIN", "NOMENU", "NOOPTIMIZE", "NOPROMPT",
"NORM", "NOSAVE", "NOSHOW", "NOT", "NOTE", "NOTIFY", "NOUPDATE",
"NOWAIT", "NULL", "NUMBER", "OBJECT", "OF", "OFF", "ON", "OR",
"ORDER", "OTHERAND", "OTHERNOT", "OTHEROR", "OTHERWISE", "PACK",
"PAD", "PARAMETER", "PLAIN", "POP", "POPUP", "PRETEXT", "PRINTER",
"PROCEDURE", "PROGRAM", "PROGRAMCONTROL", "PROMPT", "PUSH",
"READ", "RECALL", "RECORD", "RECYCLE", "REFERENCE", "REFRESH",
"REINDEX", "RELATION", "RELATIVE", "RELEASE", "RENAME", "REPLACE",
"REPORT", "RESOURCES", "REST", "RESTORE", "RETRY", "RETURN",
"RMDIR", "ROLLOVER", "RUN", "SAFETY", "SAME", "SAVE", "SAY",
"SCAN", "SCATTER", "SCHEME", "SCOPE", "SCREEN", "SEEK", "SELECT",
"SELECTION", "SET", "SHADOW", "SHARED", "SHOW", "SHUTDOWN",
"SIZE", "SKIPKW", "SORT", "STATUS", "STEP", "STORE", "STRUCTURE",
"STYLE", "SUM", "SYSMENU", "SYSTEM", "TABLE", "TABLEPROMPT",
"TAG", "TALK", "TEXT", "TEXTMERGE", "THEN", "THROW", "TIMEOUT",
"TITLE", "TO", "TOP", "TRY", "TYPE", "TYPEAHEAD", "UDFPARMS",
"UNDEFINE", "UNIQUE", "UNLOCK", "UPDATE", "USE", "VALUE", "VALUES",
"WAIT", "WHEN", "WHERE", "WHILE", "WINDOW", "WITH", "ZAP", "ZOOM",
"ID", "NL", "WS", "UNMATCHED" ]
ruleNames = [ "T__0", "NUMBER_LITERAL", "BLOB_LITERAL", "SEMICOLON",
"AMPERSAND", "COMMERCIALAT", "ASTERISK", "PLUS_SIGN",
"MINUS_SIGN", "FORWARDSLASH", "PERIOD", "LEFTBRACKET",
"RIGHTBRACKET", "LEFTBRACE", "RIGHTBRACE", "LEFTPAREN",
"RIGHTPAREN", "BACKSLASH", "LESSTHAN", "GREATERTHAN",
"EXCLAMATION", "HASH", "DOUBLEEQUALS", "NOTEQUALS", "GTEQ",
"LTEQ", "MODULO", "EQUALS", "CARAT", "COMMA", "DOLLAR",
"COLON", "QUESTION", "DOUBLEQUOTE", "SINGLEQUOTE", "COMMENT",
"LINECONT", "MACROLINE", "ACTIVATE", "ADD", "ADDITIVE",
"AFTER", "ALIAS", "ALL", "ALTER", "ALTERNATE", "AND",
"APPEND", "ARRAY", "AS", "ASCENDING", "ASSERT", "ASSERTS",
"AT", "BAR", "BEFORE", "BELL", "BLANK", "BOOLEANCHAR",
"BOTTOM", "BROWSE", "BY", "CANDIDATE", "CASE", "CAST",
"CATCH", "CENTURY", "CHDIR", "CLASS", "CLASSLIB", "CLEAR",
"CLOCK", "CLOSE", "COLLECTION", "COLOR", "COLUMN", "COMMAND",
"COMPACT", "COMPATIBLE", "COMPILE", "CONSOLE", "CONTINUE",
"COPY", "COUNT", "CREATE", "CURSOR", "DATABASE", "DATASESSION",
"DATE", "DB4", "DBF", "DEACTIVATE", "DEBUG", "DEBUGOUT",
"DECLARE", "DEFAULT", "DEFINE", "DELETE", "DELETED", "DESCENDING",
"DIMENSION", "DISTINCT", "DLLS", "DO", "DOEVENTS", "DROP",
"EACH", "ELIF", "ELSE", "ENCRYPT", "ENDCASE", "ENDDEFINE",
"ENDDO", "ENDFOR", "ENDIF", "ENDPROC", "ENDSCAN", "ENDTEXT",
"ENDTRY", "ENDWITH", "ERASE", "ERROR", "ESCAPE", "EVENTS",
"EXACT", "EXCEPT", "EXCLUSIVE", "EXTENDED", "EXTERNAL",
"FIELDS", "FILE", "FILL", "FILTER", "FINALLY", "FLAGS",
"FONT", "FOR", "FORCE", "FORM", "FOXOBJECT", "FOXPLUS",
"FREE", "FROM", "GATHER", "GETS", "GOTO", "HELP", "HIDE",
"ICON", "IF", "IFDEF", "IN", "INCLUDE", "INDEX", "INDEXES",
"INSERT", "INTO", "JOIN", "KEY", "KEYBOARD", "LABEL",
"LIBRARY", "LIKE", "LINE", "LINKED", "LIST", "LOCATE",
"MACROS", "MARGIN", "MARK", "MASTER", "MAX", "MEMO", "MEMORY",
"MEMOWIDTH", "MEMVAR", "MENU", "MENUS", "MESSAGE", "MIN",
"MKDIR", "MODIFY", "MULTILOCKS", "NAME", "NEAR", "NEGOTIATE",
"NEXT", "NOCLEAR", "NOCONSOLE", "NODEBUG", "NOEJECT",
"NOMARGIN", "NOMENU", "NOOPTIMIZE", "NOPROMPT", "NORM",
"NOSAVE", "NOSHOW", "NOT", "NOTE", "NOTIFY", "NOUPDATE",
"NOWAIT", "NULL", "NUMBER", "OBJECT", "OF", "OFF", "ON",
"OR", "ORDER", "OTHERAND", "OTHERNOT", "OTHEROR", "OTHERWISE",
"PACK", "PAD", "PARAMETER", "PLAIN", "POP", "POPUP", "PRETEXT",
"PRINTER", "PROCEDURE", "PROGRAM", "PROGRAMCONTROL", "PROMPT",
"PUSH", "READ", "RECALL", "RECORD", "RECYCLE", "REFERENCE",
"REFRESH", "REINDEX", "RELATION", "RELATIVE", "RELEASE",
"RENAME", "REPLACE", "REPORT", "RESOURCES", "REST", "RESTORE",
"RETRY", "RETURN", "RMDIR", "ROLLOVER", "RUN", "SAFETY",
"SAME", "SAVE", "SAY", "SCAN", "SCATTER", "SCHEME", "SCOPE",
"SCREEN", "SEEK", "SELECT", "SELECTION", "SET", "SHADOW",
"SHARED", "SHOW", "SHUTDOWN", "SIZE", "SKIPKW", "SORT",
"STATUS", "STEP", "STORE", "STRUCTURE", "STYLE", "SUM",
"SYSMENU", "SYSTEM", "TABLE", "TABLEPROMPT", "TAG", "TALK",
"TEXT", "TEXTMERGE", "THEN", "THROW", "TIMEOUT", "TITLE",
"TO", "TOP", "TRY", "TYPE", "TYPEAHEAD", "UDFPARMS", "UNDEFINE",
"UNIQUE", "UNLOCK", "UPDATE", "USE", "VALUE", "VALUES",
"WAIT", "WHEN", "WHERE", "WHILE", "WINDOW", "WITH", "ZAP",
"ZOOM", "ID", "NL", "WS", "UNMATCHED", "A", "B", "C",
"D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N",
"O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y",
"Z", "DIGIT", "HEXDIGIT", "ID_START", "ID_CONTINUE" ]
grammarFileName = "VisualFoxpro9.g4"
def __init__(self, input=None, output:TextIO = sys.stdout):
super().__init__(input, output)
self.checkVersion("4.8")
self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
self._actions = None
self._predicates = None
| 68.669475 | 103 | 0.623811 | 33,579 | 171,193 | 3.178743 | 0.134697 | 0.069572 | 0.051968 | 0.057449 | 0.330666 | 0.276647 | 0.112143 | 0.09022 | 0.079409 | 0.070733 | 0 | 0.387652 | 0.134959 | 171,193 | 2,492 | 104 | 68.697031 | 0.333124 | 0.000257 | 0 | 0.002829 | 1 | 0.293048 | 0.676444 | 0.652435 | 0 | 0 | 0 | 0 | 0.001617 | 1 | 0.000808 | false | 0 | 0.001617 | 0 | 0.132579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40e255d5020c2321b9334f57a12198018e69ec3c | 87 | py | Python | test/support/__init__.py | cfelton/parallella_elink | ccd2e7d49cca6cf10ed327aadad2d096e38121eb | [
"MIT"
] | null | null | null | test/support/__init__.py | cfelton/parallella_elink | ccd2e7d49cca6cf10ed327aadad2d096e38121eb | [
"MIT"
] | null | null | null | test/support/__init__.py | cfelton/parallella_elink | ccd2e7d49cca6cf10ed327aadad2d096e38121eb | [
"MIT"
] | null | null | null |
from _elink_extract_ports import parse_ports
from _elink_prep_cosim import prep_cosim
| 21.75 | 44 | 0.896552 | 14 | 87 | 5 | 0.571429 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 87 | 3 | 45 | 29 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc09eb9a6b8d936a34a03659585eb609502f90ea | 7,818 | py | Python | purchase.py | mdy98cs/www | f21d87a20fac847f562f62d549d1e82de1be4df7 | [
"MIT"
] | 12 | 2017-06-08T00:21:27.000Z | 2021-09-11T09:27:01.000Z | purchase.py | mdy98cs/www | f21d87a20fac847f562f62d549d1e82de1be4df7 | [
"MIT"
] | 2 | 2018-06-01T14:46:36.000Z | 2018-12-17T19:50:44.000Z | purchase.py | mdy98cs/www | f21d87a20fac847f562f62d549d1e82de1be4df7 | [
"MIT"
] | 5 | 2018-08-06T08:25:17.000Z | 2021-05-06T04:31:37.000Z | from flask import Flask, render_template, request, session, url_for, redirect
import pymysql.cursors
import string, sys, random
from appdef import app, conn
@app.route('/purchasePageCustomer')
def purchasePage():
return render_template('purchaseCustomer.html')
@app.route('/purchasePageAgent')
def purchasePageAgent():
return render_template('purchaseAgent.html')
@app.route('/searchPurchaseCustomer', methods=['POST'])
def searchPurchaseCustomer():
cursor = conn.cursor()
fromcity = request.form['fromcity']
fromairport = request.form['fromairport']
fromdate = request.form['fromdate']
tocity = request.form['tocity']
toairport = request.form['toairport']
todate = request.form['todate']
query = 'SELECT distinct f.airline_name, f.flight_num, departure_airport, departure_time, arrival_airport, arrival_time, price, airplane_id \
FROM flight as f, airport \
WHERE airport.airport_name=f.departure_airport \
AND airport.airport_city = %s \
AND airport.airport_name = %s \
AND %s BETWEEN DATE_SUB(f.departure_time, INTERVAL 2 DAY) AND DATE_ADD(f.departure_time, INTERVAL 2 DAY)\
AND %s BETWEEN DATE_SUB(f.arrival_time, INTERVAL 2 DAY) AND DATE_ADD(f.arrival_time, INTERVAL 2 DAY)\
AND (f.airline_name, f.flight_num) in \
(SELECT flight.airline_name, flight.flight_num FROM flight, airport \
WHERE airport.airport_name=flight.arrival_airport \
AND airport.airport_city = %s \
AND airport.airport_name = %s) \
AND (SELECT DISTINCT seats \
FROM flight, airplane \
WHERE flight.airplane_id = airplane.airplane_id AND flight.airline_name = airplane.airline_name \
AND flight.airline_name = f.airline_name AND flight.flight_num = f.flight_num) \
>= (SELECT COUNT(*) \
FROM ticket \
WHERE ticket.airline_name = f.airline_name AND ticket.flight_num = f.flight_num)'
cursor.execute(query, (fromcity, fromairport, fromdate, todate, tocity, toairport))
# print cursor._executed
data = cursor.fetchall()
cursor.close()
error = None
    if data:
return render_template('purchaseCustomer.html', results=data)
else:
#returns an error message to the html page
error = 'No results found'
return render_template('purchaseCustomer.html', searchError=error)
# Thought it works, not really...
# def _genTix(ticketCount, airline_name, flight_num):
# pre = [str(flight_num), str(ticketCount+1)]
# di = dict(zip(string.letters,[ord(c)%32 for c in string.letters])) # taken from http://stackoverflow.com/a/4535403
# for c in airline_name:
# pre.append(str(di[c]))
# return ''.join(pre)
def _genTix():
    # Pick a random 31-bit ticket id that is not already in use. fetchall()
    # returns rows (dicts, given the DictCursor used elsewhere in this module),
    # so collect the bare ids first -- testing the candidate against the raw
    # row objects would never match and could hand out duplicate ids.
    cursor = conn.cursor()
    cursor.execute('SELECT ticket_id FROM ticket')
    existing = {row['ticket_id'] for row in cursor.fetchall()}
    cursor.close()
    cand = random.randint(1, 2147483647)
    while cand in existing:
        cand = random.randint(1, 2147483647)
    return cand
@app.route('/purchaseCustomer', methods=['POST'])
def purchaseCustomer():
username = session['username']
cursor = conn.cursor()
airline_name = request.form['airline_name']
flight_num = request.form['flight_num']
# Find the number of tickets to generate the next ticket_id
queryCount = 'SELECT COUNT(*) as count FROM ticket \
WHERE ticket.airline_name = %s AND ticket.flight_num = %s'
cursor.execute(queryCount, (airline_name, flight_num))
ticketCount = cursor.fetchone()
ticketCountVal = 0
    if ticketCount is not None:
        ticketCountVal = ticketCount['count']
# ticket_id = _genTix(ticketCountVal, airline_name.strip().replace(' ', ''), flight_num)
ticket_id = _genTix()
# Create the new ticket
queryNewTicket = 'INSERT INTO ticket VALUES(%s, %s, %s)'
cursor.execute(queryNewTicket, (ticket_id, airline_name, flight_num))
# Finalize the purchase
queryPurchase = 'INSERT INTO purchases VALUES(%s, %s, %s, CURDATE())'
cursor.execute(queryPurchase, (ticket_id, username, None))
    # An INSERT produces no result rows, so just commit the transaction.
    conn.commit()
cursor.close()
return render_template('purchaseCustomer.html')
@app.route('/searchPurchaseAgent', methods=['POST'])
def searchPurchaseAgent():
cursor = conn.cursor()
fromcity = request.form['fromcity']
fromairport = request.form['fromairport']
fromdate = request.form['fromdate']
tocity = request.form['tocity']
toairport = request.form['toairport']
todate = request.form['todate']
query = 'SELECT distinct f.airline_name, f.flight_num, departure_airport, departure_time, arrival_airport, arrival_time, price, airplane_id \
FROM flight as f, airport \
WHERE airport.airport_name=f.departure_airport \
AND airport.airport_city = %s \
AND airport.airport_name = %s \
AND %s BETWEEN DATE_SUB(f.departure_time, INTERVAL 2 DAY) AND DATE_ADD(f.departure_time, INTERVAL 2 DAY)\
AND %s BETWEEN DATE_SUB(f.arrival_time, INTERVAL 2 DAY) AND DATE_ADD(f.arrival_time, INTERVAL 2 DAY)\
AND (f.airline_name, f.flight_num) in \
(SELECT flight.airline_name, flight.flight_num FROM flight, airport \
WHERE airport.airport_name=flight.arrival_airport \
AND airport.airport_city = %s \
AND airport.airport_name = %s) \
AND (SELECT DISTINCT seats \
FROM flight, airplane \
WHERE flight.airplane_id = airplane.airplane_id AND flight.airline_name = airplane.airline_name \
AND flight.airline_name = f.airline_name AND flight.flight_num = f.flight_num) \
>= (SELECT COUNT(*) \
FROM ticket \
WHERE ticket.airline_name = f.airline_name AND ticket.flight_num = f.flight_num)'
cursor.execute(query, (fromcity, fromairport, fromdate, todate, tocity, toairport))
# print cursor._executed
data = cursor.fetchall()
cursor.close()
error = None
    if data:
return render_template('purchaseAgent.html', results=data)
else:
#returns an error message to the html page
error = 'No results found'
return render_template('purchaseAgent.html', searchError=error)
@app.route('/purchaseAgent', methods=['POST'])
def purchaseAgent():
username = session['username']
customer_email = request.form['customer_email']
cursor = conn.cursor()
airline_name = request.form['airline_name']
flight_num = request.form['flight_num']
# Find the number of tickets to generate the next ticket_id
queryCount = 'SELECT COUNT(*) as count FROM ticket \
WHERE ticket.airline_name = %s AND ticket.flight_num = %s'
cursor.execute(queryCount, (airline_name, flight_num))
ticketCount = cursor.fetchone()
ticketCountVal = 0
    if ticketCount is not None:
        ticketCountVal = ticketCount['count']
# ticket_id = _genTix(ticketCountVal, airline_name.strip().replace(' ', ''), flight_num)
ticket_id = _genTix()
# Create the new ticket
queryNewTicket = 'INSERT INTO ticket VALUES(%s, %s, %s)'
cursor.execute(queryNewTicket, (ticket_id, airline_name, flight_num))
# Get booking_agent_id
queryGetID = 'SELECT booking_agent_id FROM booking_agent WHERE email=%s'
    cursor.execute(queryGetID, (username,))
agentID = cursor.fetchone() # returns a dict
# Finalize the purchase
queryPurchase = 'INSERT INTO purchases VALUES(%s, %s, %s, CURDATE())'
cursor.execute(queryPurchase, (ticket_id, customer_email, agentID['booking_agent_id']))
    # An INSERT returns no result rows, so cursor.fetchone() here would always
    # be None; use the affected row count to confirm the purchase succeeded.
    purchased = cursor.rowcount > 0
    conn.commit()
    cursor.close()
    if purchased:
        return render_template('agent.html')
    else:
        # Return an error message to the HTML page
error = 'Cannot complete purchase'
return render_template('purchaseAgent.html', error=error) | 43.433333 | 143 | 0.69647 | 975 | 7,818 | 5.433846 | 0.16 | 0.06644 | 0.033975 | 0.02416 | 0.782371 | 0.741601 | 0.741601 | 0.710834 | 0.710834 | 0.698377 | 0 | 0.006636 | 0.190458 | 7,818 | 180 | 144 | 43.433333 | 0.830463 | 0.120875 | 0 | 0.75 | 0 | 0.040541 | 0.117973 | 0.018689 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047297 | false | 0 | 0.027027 | 0.013514 | 0.141892 | 0.006757 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc14590bffa9364ecbfd1bda68c74109ef183cbf | 2,428 | py | Python | tests/unit/saltenv/ops/test_unit_remove_version.py | eitrtechnologies/saltenv | 66add964657fe270ed96ddfe50802e27539a6526 | [
"Apache-2.0"
] | 5 | 2022-03-25T17:15:04.000Z | 2022-03-28T23:24:26.000Z | tests/unit/saltenv/ops/test_unit_remove_version.py | eitrtechnologies/saltenv | 66add964657fe270ed96ddfe50802e27539a6526 | [
"Apache-2.0"
] | null | null | null | tests/unit/saltenv/ops/test_unit_remove_version.py | eitrtechnologies/saltenv | 66add964657fe270ed96ddfe50802e27539a6526 | [
"Apache-2.0"
] | 2 | 2022-03-26T06:33:30.000Z | 2022-03-29T19:43:50.000Z | from pathlib import Path
from unittest.mock import patch
async def test_unit_remove_version_exists(mock_hub, hub, tmp_path):
"""
SCENARIO #1:
- The version exists within LOCAL_VERSIONS
"""
# Link the function to the mock_hub
mock_hub.saltenv.ops.remove_version = hub.saltenv.ops.remove_version
# Add two versions to LOCAL_VERSIONS
mock_hub.saltenv.ops.LOCAL_VERSIONS = {
"3001": Path(tmp_path / "salt-3001"),
"3004": Path(tmp_path / "salt-3004"),
}
with patch("pathlib.PosixPath.unlink", return_value=None) as mock_unlink:
# Call remove_version with a version that is present in LOCAL_VERSIONS
ret = await mock_hub.saltenv.ops.remove_version("3004")
assert ret == True
# Ensure every mocked function was called the appropriate number of times
mock_unlink.assert_called_once()
async def test_unit_remove_version_does_not_exist(mock_hub, hub, tmp_path):
"""
SCENARIO #2:
- The version does not exist within LOCAL_VERSIONS
"""
# Link the function to the mock_hub
mock_hub.saltenv.ops.remove_version = hub.saltenv.ops.remove_version
# Add two versions to LOCAL_VERSIONS
mock_hub.saltenv.ops.LOCAL_VERSIONS = {
"3001": Path(tmp_path / "salt-3001"),
"3004": Path(tmp_path / "salt-3004"),
}
with patch("pathlib.PosixPath.unlink", return_value=None) as mock_unlink:
# Call remove_version with a version that is NOT present in LOCAL_VERSIONS
ret = await mock_hub.saltenv.ops.remove_version("3003")
assert ret == True
# Ensure every mocked function was called the appropriate number of times
mock_unlink.assert_not_called()
async def test_unit_remove_version_empty_local_versions(mock_hub, hub):
"""
SCENARIO #3:
- LOCAL_VERSIONS is empty
"""
# Link the function to the mock_hub
mock_hub.saltenv.ops.remove_version = hub.saltenv.ops.remove_version
# Set LOCAL_VERSIONS to contain no versions
mock_hub.saltenv.ops.LOCAL_VERSIONS = {}
with patch("pathlib.PosixPath.unlink", return_value=None) as mock_unlink:
# Call remove_version when LOCAL_VERSIONS contains no versions at all
ret = await mock_hub.saltenv.ops.remove_version("3003")
assert ret == True
# Ensure every mocked function was called the appropriate number of times
mock_unlink.assert_not_called()
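The patch-as-context-manager pattern these tests rely on can be exercised in isolation. A hypothetical minimal example (`remove_if_known` is a toy stand-in for `remove_version`; patching `pathlib.Path.unlink` rather than `PosixPath.unlink` also keeps it portable to Windows):

```python
from pathlib import Path
from unittest.mock import patch


def remove_if_known(versions: dict, version: str) -> bool:
    """Toy stand-in for remove_version: unlink only versions we track."""
    if version in versions:
        versions[version].unlink()
        return True
    return False


versions = {"3004": Path("/tmp/salt-3004")}

# While patched, Path.unlink records calls instead of touching the filesystem.
with patch("pathlib.Path.unlink", return_value=None) as mock_unlink:
    assert remove_if_known(versions, "3004") is True
    mock_unlink.assert_called_once()

with patch("pathlib.Path.unlink", return_value=None) as mock_unlink:
    assert remove_if_known(versions, "3003") is False
    mock_unlink.assert_not_called()
```

Scoping each `patch` to its own `with` block, as the tests above do, gives every scenario a fresh mock, so call counts never leak between assertions.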
# File: inna/compiler/applications/resnet/__init__.py
# Repo: caoqichun/inspur-inna (Apache-2.0)

from __future__ import absolute_import
from . import tensorflow
from . import keras
from . import mxnet
from . import onnx