hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cd69bffdbc63b26c4ce0934f7b2dab4592aff845 | 13,762 | py | Python | srfnef/functions/emap_generator.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | null | null | null | srfnef/functions/emap_generator.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | null | null | null | srfnef/functions/emap_generator.py | twj2417/srf | 63365cfd75199d70eea2273214a4fa580a9fdf2a | [
"Apache-2.0"
] | null | null | null | # encoding: utf-8
'''
@author: Minghao Guo
@contact: mh.guo0111@gmail.com
@software: nef
@file: emap_generator_mixin.py
@date: 4/23/2019
@desc:
'''
import numpy as np
import srfnef as nef
from srfnef import nef_class
from srfnef.utils import tqdm
from srfnef.geometry import PetScanner, PetCylindricalScanner, PetEcatScanner
from srfnef.data import Image, Emap
from srfnef.functions import BackProject, ScannerToLors, LorsToListmode
from srfnef.ops.deform_mixins import DeformMixin
import tensorflow as tf
from srfnef.utils import declare_eager_execution
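# Rough cap on how many LORs (crystal pairs) are held in memory at once;
# the 'auto' mode below sizes its rsector batches from this value.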
mem_limit = 1e7
@nef_class
class EcatEmapGenerator(DeformMixin):
mode: str
scanner: PetEcatScanner
def __call__(self, image: Image):
from srfnef import EcatIndexToCrystalPos
if self.mode == 'full':
declare_eager_execution()
ind2pos = EcatIndexToCrystalPos(self.scanner)
ind = np.arange(self.scanner.nb_crystals)
pos1 = pos2 = ind2pos(ind)
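# The kron broadcasts below are meant to enumerate every crystal pair:
# pos1_ repeats each position, pos2_ tiles them, so each row of lors_data
# describes one line of response (LOR).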
pos1_ = np.kron(pos1, [1] * pos2.size)
pos2_ = np.kron(pos2, [[1]] * pos1.size).reshape(-1, 3)
lors_data = np.hstack((pos1_, pos2_))
listmode = LorsToListmode()(nef.Lors(lors_data))
return Emap(**BackProject(mode = 'tf-eager')(listmode, image).asdict())
elif self.mode == 'block':
declare_eager_execution()
single_block_scanner = self.scanner.update(nb_blocks_per_ring = 1)
ind2pos = EcatIndexToCrystalPos(single_block_scanner)
ind = np.arange(self.scanner.nb_crystals_per_block * self.scanner.nb_rings)
pos1 = pos2 = ind2pos(ind)
pos1_x = np.kron(pos1[:, 0], [1] * ind.size)
pos1_y = np.kron(pos1[:, 1], [1] * ind.size)
pos1_z = np.kron(pos1[:, 2], [1] * ind.size)
pos1_ = np.vstack((pos1_x, pos1_y, pos1_z)).transpose()
emap_data = np.zeros(image.shape, np.float32)
emap_tf = Emap(data = tf.Variable(emap_data), center = image.center,
size = image.size)
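# 'block' mode leans on the ring's rotational symmetry: backproject LORs
# between one reference block and each block angle, then rotate the partial
# emap through every block position and accumulate, so only
# nb_blocks_per_ring backprojections are needed instead of nb_blocks_per_ring**2.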
for d in tqdm(range(self.scanner.nb_blocks_per_ring)):
angle = d * self.scanner.angle_per_block
print(angle)
pos2_x = np.kron(pos2[:, 0], [[1]] * ind.size).ravel()
pos2_y = np.kron(pos2[:, 1], [[1]] * ind.size).ravel()
pos2_z = np.kron(pos2[:, 2], [[1]] * ind.size).ravel()
pos2_ = np.vstack((pos2_x * np.cos(angle) - pos2_y * np.sin(angle),
pos2_x * np.sin(angle) + pos2_y * np.cos(angle),
pos2_z)).transpose()
lors_data = np.hstack((pos1_, pos2_)).astype(np.float32)
listmode = LorsToListmode()(nef.Lors(lors_data))
listmode_tf = listmode.update(data = tf.Variable(listmode.data),
lors = nef.Lors(tf.Variable(lors_data)))
_emap = BackProject(mode = 'tf')(listmode_tf, emap_tf)
for i in range(self.scanner.nb_blocks_per_ring):
_emap_rotate_data = self._rotate_tf(_emap.data,
i * self.scanner.angle_per_block)
tf.compat.v1.assign_add(emap_tf.data, _emap_rotate_data)
emap_data = emap_tf.data.numpy()
return emap_tf.update(data = emap_data, center = image.center, size = image.size)
elif self.mode == 'block-full':
declare_eager_execution()
single_block_scanner = self.scanner.update(nb_blocks_per_ring = 1)
ind2pos = EcatIndexToCrystalPos(single_block_scanner)
ind = np.arange(self.scanner.nb_crystals_per_block * self.scanner.nb_rings)
pos1 = pos2 = ind2pos(ind)
emap_data = np.zeros(image.shape, np.float32)
emap_tf = Emap(data = tf.Variable(emap_data), center = image.center,
size = image.size)
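# 'block-full' skips the rotation trick and runs one explicit backprojection
# per ordered pair of block angles (nb_blocks_per_ring**2 BackProject calls).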
for i in tqdm(range(self.scanner.nb_blocks_per_ring)):
angle1 = i * self.scanner.angle_per_block
pos1_x = np.kron(pos1[:, 0], [1] * ind.size)
pos1_y = np.kron(pos1[:, 1], [1] * ind.size)
pos1_z = np.kron(pos1[:, 2], [1] * ind.size)
pos1_ = np.vstack((pos1_x * np.cos(angle1) - pos1_y * np.sin(angle1),
pos1_x * np.sin(angle1) + pos1_y * np.cos(angle1),
pos1_z)).transpose()
for j in range(self.scanner.nb_blocks_per_ring):
angle2 = j * self.scanner.angle_per_block
pos2_x = np.kron(pos2[:, 0], [[1]] * ind.size).ravel()
pos2_y = np.kron(pos2[:, 1], [[1]] * ind.size).ravel()
pos2_z = np.kron(pos2[:, 2], [[1]] * ind.size).ravel()
pos2_ = np.vstack((pos2_x * np.cos(angle2) - pos2_y * np.sin(angle2),
pos2_x * np.sin(angle2) + pos2_y * np.cos(angle2),
pos2_z)).transpose()
lors_data = np.hstack((pos1_, pos2_)).astype(np.float32)
listmode = LorsToListmode()(nef.Lors(lors_data))
listmode_tf = listmode.update(data = tf.Variable(listmode.data),
lors = nef.Lors(tf.Variable(lors_data)))
_emap = BackProject(mode = 'tf')(listmode_tf, emap_tf)
tf.compat.v1.assign_add(emap_tf.data, _emap.data)
emap_data = emap_tf.data.numpy()
return emap_tf.update(data = emap_data, center = image.center, size = image.size)
elif self.mode == 'rsector':
return self.update(mode = 'block')(image)
elif self.mode == 'rsector-full':
return self.update(mode = 'block-full')(image)
else:
raise NotImplementedError
@nef_class
class CylindricalEmapGenerator(DeformMixin):
mode: str
scanner: PetCylindricalScanner
def __call__(self, image: Image):
from srfnef import CylindricalIndexToCrystalPos
if self.mode == 'full':
declare_eager_execution()
ind2pos = CylindricalIndexToCrystalPos(self.scanner)
ind = np.arange(self.scanner.nb_crystals)
pos1 = pos2 = ind2pos(ind)
pos1_ = np.kron(pos1, [1] * pos2.size)
pos2_ = np.kron(pos2, [[1]] * pos1.size).reshape(-1, 3)
lors_data = np.hstack((pos1_, pos2_))
listmode = LorsToListmode()(nef.Lors(lors_data))
return Emap(**BackProject(mode = 'tf-eager')(listmode, image).asdict())
elif self.mode == 'rsector':
declare_eager_execution()
single_block_scanner = self.scanner.update(nb_rsector = 1)
ind2pos = CylindricalIndexToCrystalPos(single_block_scanner)
ind = np.arange(self.scanner.nb_crystal_per_rsector)
pos1 = pos2 = ind2pos(ind)
pos1_x = np.kron(pos1[:, 0], [1] * ind.size)
pos1_y = np.kron(pos1[:, 1], [1] * ind.size)
pos1_z = np.kron(pos1[:, 2], [1] * ind.size)
pos1_ = np.vstack((pos1_x, pos1_y, pos1_z)).transpose()
emap_data = np.zeros(image.shape, np.float32)
emap_tf = Emap(data = tf.Variable(emap_data), center = image.center,
size = image.size)
for d in tqdm(range(self.scanner.nb_rsector)):
angle = d * self.scanner.angle_per_rsector
pos2_x = np.kron(pos2[:, 0], [[1]] * ind.size).ravel()
pos2_y = np.kron(pos2[:, 1], [[1]] * ind.size).ravel()
pos2_z = np.kron(pos2[:, 2], [[1]] * ind.size).ravel()
pos2_ = np.vstack((pos2_x * np.cos(angle) - pos2_y * np.sin(angle),
pos2_x * np.sin(angle) + pos2_y * np.cos(angle),
pos2_z)).transpose()
lors_data = np.hstack((pos1_, pos2_)).astype(np.float32)
listmode = LorsToListmode()(nef.Lors(lors_data))
_emap = BackProject(mode = 'tf')(listmode, emap_tf)
for i in range(self.scanner.nb_rsector):
_emap_rotate_data = self._rotate_tf(_emap.data,
i * self.scanner.angle_per_rsector)
tf.compat.v1.assign_add(emap_tf.data, _emap_rotate_data)
emap_data = emap_tf.data.numpy()
return emap_tf.update(data = emap_data, center = image.center, size = image.size)
elif self.mode == 'rsector-full':
declare_eager_execution()
single_block_scanner = self.scanner.update(nb_rsector = 1)
ind2pos = CylindricalIndexToCrystalPos(single_block_scanner)
ind = np.arange(self.scanner.nb_crystal_per_rsector)
pos1 = pos2 = ind2pos(ind)
emap_data = np.zeros(image.shape, np.float32)
emap_tf = Emap(data = tf.Variable(emap_data), center = image.center,
size = image.size)
for i in tqdm(range(self.scanner.nb_rsector)):
angle1 = i * self.scanner.angle_per_rsector
pos1_x = np.kron(pos1[:, 0], [1] * ind.size)
pos1_y = np.kron(pos1[:, 1], [1] * ind.size)
pos1_z = np.kron(pos1[:, 2], [1] * ind.size)
pos1_ = np.vstack((pos1_x * np.cos(angle1) - pos1_y * np.sin(angle1),
pos1_x * np.sin(angle1) + pos1_y * np.cos(angle1),
pos1_z)).transpose().astype(np.float32)
for j in range(self.scanner.nb_rsector):
angle2 = j * self.scanner.angle_per_rsector
pos2_x = np.kron(pos2[:, 0], [[1]] * ind.size).ravel()
pos2_y = np.kron(pos2[:, 1], [[1]] * ind.size).ravel()
pos2_z = np.kron(pos2[:, 2], [[1]] * ind.size).ravel()
pos2_ = np.vstack((pos2_x * np.cos(angle2) - pos2_y * np.sin(angle2),
pos2_x * np.sin(angle2) + pos2_y * np.cos(angle2),
pos2_z)).transpose()
lors_data = np.hstack((pos1_, pos2_)).astype(np.float32)
listmode = LorsToListmode()(nef.Lors(lors_data))
_emap = BackProject(mode = 'tf')(listmode, emap_tf)
tf.compat.v1.assign_add(emap_tf.data, _emap.data)
emap_data = emap_tf.data.numpy()
return emap_tf.update(data = emap_data, center = image.center, size = image.size)
elif self.mode == 'auto':
declare_eager_execution()
single_block_scanner = self.scanner.update(nb_rsector = 1)
ind2pos = CylindricalIndexToCrystalPos(single_block_scanner)
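# Batch rsectors so the pairwise crystal count stays under mem_limit,
# shrinking the batch size until it divides nb_rsector evenly.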
num_rsector = int(np.sqrt(mem_limit // (self.scanner.nb_crystal_per_rsector ** 2)))
while not self.scanner.nb_rsector % num_rsector == 0:
num_rsector -= 1
ind = np.arange(self.scanner.nb_crystal_per_rsector * num_rsector)
pos1 = pos2 = ind2pos(ind)
emap_data = np.zeros(image.shape, np.float32)
emap_tf = Emap(data = tf.Variable(emap_data), center = image.center,
size = image.size)
for i in tqdm(range(0, self.scanner.nb_rsector, num_rsector)):
angle1 = i * self.scanner.angle_per_rsector
pos1_x = np.kron(pos1[:, 0], [1] * ind.size)
pos1_y = np.kron(pos1[:, 1], [1] * ind.size)
pos1_z = np.kron(pos1[:, 2], [1] * ind.size)
pos1_ = np.vstack((pos1_x * np.cos(angle1) - pos1_y * np.sin(angle1),
pos1_x * np.sin(angle1) + pos1_y * np.cos(angle1),
pos1_z)).transpose().astype(np.float32)
for j in range(0, self.scanner.nb_rsector, num_rsector):
angle2 = j * self.scanner.angle_per_rsector
pos2_x = np.kron(pos2[:, 0], [[1]] * ind.size).ravel()
pos2_y = np.kron(pos2[:, 1], [[1]] * ind.size).ravel()
pos2_z = np.kron(pos2[:, 2], [[1]] * ind.size).ravel()
pos2_ = np.vstack((pos2_x * np.cos(angle2) - pos2_y * np.sin(angle2),
pos2_x * np.sin(angle2) + pos2_y * np.cos(angle2),
pos2_z)).transpose()
lors_data = np.hstack((pos1_, pos2_)).astype(np.float32)
listmode = LorsToListmode()(nef.Lors(lors_data))
_emap = BackProject(mode = 'tf')(listmode, emap_tf)
tf.compat.v1.assign_add(emap_tf.data, _emap.data)
emap_data = emap_tf.data.numpy()
return emap_tf.update(data = emap_data, center = image.center, size = image.size)
elif self.mode == 'block':
return self.update(mode = 'rsector')(image)
elif self.mode == 'block-full':
return self.update(mode = 'rsector-full')(image)
else:
raise NotImplementedError
@nef_class
class EmapGenerator(DeformMixin):
mode: str
scanner: PetScanner # base type: __call__ dispatches on the concrete scanner class below
def __call__(self, *args, **kwargs):
if isinstance(self.scanner, PetEcatScanner):
return EcatEmapGenerator(self.mode, self.scanner)(*args, **kwargs)
elif isinstance(self.scanner, PetCylindricalScanner):
return CylindricalEmapGenerator(self.mode, self.scanner)(*args, **kwargs)
else:
raise NotImplementedError
| 53.341085 | 95 | 0.548467 | 1,648 | 13,762 | 4.378034 | 0.083131 | 0.064033 | 0.033264 | 0.024948 | 0.870963 | 0.849064 | 0.817879 | 0.814553 | 0.75343 | 0.737491 | 0 | 0.034624 | 0.326333 | 13,762 | 257 | 96 | 53.548638 | 0.743609 | 0.010028 | 0 | 0.77533 | 0 | 0 | 0.010282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013216 | false | 0 | 0.052863 | 0 | 0.162996 | 0.004405 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cd99f4754c4ea7cce423fb78e2ae2ab8d45aa342 | 136 | py | Python | maltose/sundries/admin.py | maltoseeditor/backend | 3c1960bd3b5e3b2b10f6b5780832d3d3aadcea0c | [
"Apache-2.0"
] | 13 | 2019-05-18T08:28:42.000Z | 2022-01-06T09:08:34.000Z | maltose/sundries/admin.py | maltoseeditor/backend | 3c1960bd3b5e3b2b10f6b5780832d3d3aadcea0c | [
"Apache-2.0"
] | null | null | null | maltose/sundries/admin.py | maltoseeditor/backend | 3c1960bd3b5e3b2b10f6b5780832d3d3aadcea0c | [
"Apache-2.0"
] | 5 | 2020-11-19T10:24:31.000Z | 2022-01-06T09:08:27.000Z | from django.contrib import admin
from .models import *
@admin.register(FriendLink)
class FriendLinkAdmin(admin.ModelAdmin):
pass
| 15.111111 | 40 | 0.779412 | 16 | 136 | 6.625 | 0.75 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139706 | 136 | 8 | 41 | 17 | 0.905983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
269a2aa24e3ff77afe584c9511d0557bf27184f4 | 13,535 | py | Python | grid.py | FrankWhoee/StatsPath | 134e63f6467b9030a67b0ba86d8ca4c13accbe93 | [
"MIT"
] | null | null | null | grid.py | FrankWhoee/StatsPath | 134e63f6467b9030a67b0ba86d8ca4c13accbe93 | [
"MIT"
] | null | null | null | grid.py | FrankWhoee/StatsPath | 134e63f6467b9030a67b0ba86d8ca4c13accbe93 | [
"MIT"
] | null | null | null | from moviepy.editor import VideoClip
from PIL import Image
import numpy as np
import pathing
from time import time
# graph = np.zeros((50, 40))
graph = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, ],
[1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, ],
[1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, ],
[0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, ],
[0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, ],
[0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, ],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ]
])
print(graph.shape)
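# Convert each recorded frame to a 3xHxW RGB array. Cell values map to colors:
# 1 -> white, 2 -> gray, 3 -> green, 4 -> red, 5 -> blue; 0 stays black.
# (What each value denotes, e.g. walls vs. visited cells vs. path, is inferred
# from the A* context, not stated in the source.)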
def graph_to_animation(ani,frame_reduction=1):
output = []
for frame in ani:
rgb_frame = np.zeros((3, frame.shape[0], frame.shape[1]))
for row in range(0, len(frame)):
for col in range(0, len(frame[0])):
if frame[row][col] == 1:
rgb_frame[0][row][col] = 255
rgb_frame[1][row][col] = 255
rgb_frame[2][row][col] = 255
if frame[row][col] == 2:
rgb_frame[0][row][col] = 128
rgb_frame[1][row][col] = 128
rgb_frame[2][row][col] = 128
if frame[row][col] == 3:
rgb_frame[0][row][col] = 0
rgb_frame[1][row][col] = 255
rgb_frame[2][row][col] = 0
if frame[row][col] == 4:
rgb_frame[0][row][col] = 255
rgb_frame[1][row][col] = 0
rgb_frame[2][row][col] = 0
if frame[row][col] == 5:
rgb_frame[0][row][col] = 0
rgb_frame[1][row][col] = 0
rgb_frame[2][row][col] = 255
output.append(rgb_frame)
return output
t_0 = time()
path, ani = pathing.astar_pathing(graph=graph, start=(0, 0), goal=(59, 59), return_animation=True)
t_f = time()
print(path)
print("Path calculated in " + str(t_f - t_0) + " seconds.")
ani = graph_to_animation(ani)
print(ani[int(0)].T.shape)
print(np.array(ani).shape)
def make_frame(t):
""" returns an image of the frame at time t """
# frames were precomputed in graph_to_animation; just index by integer time t
return np.array(ani[int(t)].T)
def numpy2pil(np_array: np.ndarray) -> Image:
"""
Convert an HxWx3 numpy array into an RGB Image
"""
assert_msg = 'Input shall be a HxWx3 ndarray'
assert isinstance(np_array, np.ndarray), assert_msg
assert len(np_array.shape) == 3, assert_msg
assert np_array.shape[2] == 3, assert_msg
img = Image.fromarray(np_array, 'RGB')
return img
animation = VideoClip(make_frame, duration=1526) # duration tracks the number of recorded animation frames, since make_frame indexes ani[int(t)]
# For the export, many options/formats/optimizations are supported
animation.write_videofile("my_animation.mp4", fps=60) # export as video
animation.write_gif("my_animation.gif", fps=60) # export as GIF (slow)
| 98.79562 | 201 | 0.370373 | 3,963 | 13,535 | 1.2541 | 0.025233 | 0.533602 | 0.514286 | 0.631791 | 0.790141 | 0.770624 | 0.769417 | 0.769215 | 0.769014 | 0.767203 | 0 | 0.403743 | 0.324935 | 13,535 | 136 | 202 | 99.522059 | 0.140199 | 0.019874 | 0 | 0.13913 | 0 | 0 | 0.007025 | 0 | 0 | 0 | 0 | 0 | 0.034783 | 1 | 0.026087 | false | 0 | 0.043478 | 0 | 0.095652 | 0.043478 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
f80f090e64d26e089737f1fe373c00d7a6518b37 | 2,390 | py | Python | tests/batch_generation/test_generate_bad_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | null | null | null | tests/batch_generation/test_generate_bad_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | null | null | null | tests/batch_generation/test_generate_bad_bodies.py | ashton-szabo/api-automation-tools | 279e258623cfe919a4385e63f3badaed66a61561 | [
"MIT"
] | 4 | 2022-03-09T06:11:59.000Z | 2022-03-10T02:09:34.000Z | import pytest
import apiautomationtools.batch_generation.batch_generation as bg
pytestmark = pytest.mark.batch_generation
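# Inferred from the fixtures below: generate_bad_bodies yields one mutated body
# per field (the key swapped for a generated placeholder such as 'aaaaa0', or the
# value swapped for the substitute value), plus a pared-down body; the 'file'
# fields are left untouched.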
def test_generate_bad_bodies_sub_value():
body = {"field1": "value1", "field2": "2", "file": "file", "file2": "file2"}
bad_bodies = bg.generate_bad_bodies(body, "0")
expected_bodies = [
{"aaaaa0": "value1", "field2": "2", "file": "file", "file2": "file2"},
{"field1": "aaaaa0", "field2": "2", "file": "file", "file2": "file2"},
{"field1": "value1", "aaaaa0": "2", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "0", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "2", "file": "file", "file2": "file2"},
{"aaaaa0": "0", "file": "file", "file2": "file2"},
]
assert bad_bodies == expected_bodies
def test_generate_bad_bodies_replacements():
body = {"field1": "value1", "field2": "2", "file": "file", "file2": "file2"}
bad_bodies = bg.generate_bad_bodies(body, replacements=["value1", "9f"])
expected_bodies = [
{"field1": "9f", "field2": "2", "file": "file", "file2": "file2"}
]
assert bad_bodies == expected_bodies
def test_generate_bad_bodies_full():
body = {"field1": "value1", "field2": "2", "file": "file", "file2": "file2"}
bad_bodies = bg.generate_bad_bodies(body, "0", full=True)
expected_bodies = [
{"aaaaa0": "value1", "field2": "2", "file": "file", "file2": "file2"},
{"field1": "aaaaa0", "field2": "2", "file": "file", "file2": "file2"},
{"field1": "value1", "aaaaa0": "2", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "0", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "2", "file": "file", "file2": "file2"},
{"aaaaa0": "0", "file": "file", "file2": "file2"},
]
assert bad_bodies == expected_bodies
def test_generate_bad_bodies_original_keys():
body = {"field1": "value1", "field2": "2", "file": "file", "file2": "file2"}
bad_bodies = bg.generate_bad_bodies(body, "0", original_keys=True)
expected_bodies = [
{"field1": "aaaaa0", "field2": "2", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "0", "file": "file", "file2": "file2"},
{"field1": "value1", "field2": "2", "file": "file", "file2": "file2"},
]
assert bad_bodies == expected_bodies
| 44.259259 | 80 | 0.568201 | 268 | 2,390 | 4.895522 | 0.123134 | 0.121951 | 0.198171 | 0.27439 | 0.831555 | 0.813262 | 0.813262 | 0.813262 | 0.813262 | 0.813262 | 0 | 0.062371 | 0.188285 | 2,390 | 53 | 81 | 45.09434 | 0.613918 | 0 | 0 | 0.627907 | 1 | 0 | 0.30251 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 1 | 0.093023 | false | 0 | 0.046512 | 0 | 0.139535 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
f89ae8b0947fa3d99fbc3b93de6f85b74076dfd9 | 2,902 | py | Python | cdpt/eval.py | tzshi/flat-mwe-parsing | d5837e7d5b46907306affabb9887da2acd1416d8 | [
"MIT"
] | 1 | 2021-06-06T09:29:28.000Z | 2021-06-06T09:29:28.000Z | cdpt/eval.py | tzshi/flat-mwe-parsing | d5837e7d5b46907306affabb9887da2acd1416d8 | [
"MIT"
] | null | null | null | cdpt/eval.py | tzshi/flat-mwe-parsing | d5837e7d5b46907306affabb9887da2acd1416d8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
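# Read contiguous spans off BIO tags stored in node.feats: 'B' opens a span,
# 'I' continues one, anything else closes it; only_multi=True keeps
# multi-token spans only.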
def extract_spans_bio(g, only_multi=False):
spans = set()
start = None
for i in range(1, len(g.nodes)):
if g.nodes[i].feats == "B":
if start is not None:
if not only_multi or (i - 1 > start):
spans.add((start, i - 1))
start = i
elif g.nodes[i].feats == "I":
if start is None:
start = i
else:
if start is not None:
if not only_multi or (i - 1 > start):
spans.add((start, i - 1))
start = None
if start is not None:
if not only_multi or (len(g.nodes) - 1 > start):
spans.add((start, len(g.nodes) - 1))
return spans
def extract_spans_parsetree(g):
spans = {}
for i in range(1, len(g.nodes)):
if g.rels[i] == "mwe_NNP":
spans[g.heads[i]] = i
spans = {(k, spans[k]) for k in spans}
return spans
def extract_spans_parsetree_ud(g):
spans = {}
for i in range(1, len(g.nodes)):
if g.rels[i] == "flat":
spans[g.heads[i]] = i
spans = {(k, spans[k]) for k in spans}
return spans
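# The span-level F1 scores below compare gold spans from BIO tags (multi-token
# only) against spans induced from dependency arcs labeled 'mwe_NNP'
# ('flat' in the UD variant).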
def parse_f1(gold, pred):
recall = 0
precision = 0
correct = 0
mismatch = 0
for d, g in zip(gold, pred):
gold_spans = extract_spans_bio(d, only_multi=True)
pred_spans = extract_spans_parsetree(g)
precision += len(pred_spans)
recall += len(gold_spans)
correct += len(pred_spans.intersection(gold_spans))
if correct == 0 or precision == 0 or recall == 0:
return 0.
precision = correct / precision
recall = correct / recall
f1 = 2. / (1./precision + 1./recall)
return f1 * 100.
def parse_f1_ud(gold, pred):
recall = 0
precision = 0
correct = 0
mismatch = 0
for d, g in zip(gold, pred):
gold_spans = extract_spans_bio(d, only_multi=True)
pred_spans = extract_spans_parsetree_ud(g)
precision += len(pred_spans)
recall += len(gold_spans)
correct += len(pred_spans.intersection(gold_spans))
if correct == 0 or precision == 0 or recall == 0:
return 0.
precision = correct / precision
recall = correct / recall
f1 = 2. / (1./precision + 1./recall)
return f1 * 100.
def bio_f1(gold, pred):
recall = 0
precision = 0
correct = 0
mismatch = 0
for d, g in zip(gold, pred):
gold_spans = extract_spans_bio(d, only_multi=True)
pred_spans = extract_spans_bio(g, only_multi=True)
precision += len(pred_spans)
recall += len(gold_spans)
correct += len(pred_spans.intersection(gold_spans))
if correct == 0 or precision == 0 or recall == 0:
return 0.
precision = correct / precision
recall = correct / recall
f1 = 2. / (1./precision + 1./recall)
return f1 * 100.
| 25.910714 | 59 | 0.557202 | 409 | 2,902 | 3.828851 | 0.132029 | 0.068966 | 0.065134 | 0.051086 | 0.909323 | 0.894636 | 0.83461 | 0.83461 | 0.83461 | 0.83461 | 0 | 0.031091 | 0.323915 | 2,902 | 111 | 60 | 26.144144 | 0.767074 | 0.007236 | 0 | 0.790698 | 0 | 0 | 0.004515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0 | 0 | 0.174419 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3e2e444a29baca05eb5760585581ddbfb1cf91c9 | 683 | py | Python | settings/global_param.py | andeyeluguo/AI_physicist | b242204da5a284cd22175bae66e6b4f79814ceeb | [
"MIT"
] | 25 | 2019-10-22T16:49:45.000Z | 2021-12-21T03:53:59.000Z | settings/global_param.py | andeyeluguo/AI_physicist | b242204da5a284cd22175bae66e6b4f79814ceeb | [
"MIT"
] | 1 | 2021-01-21T15:57:19.000Z | 2021-04-04T15:51:27.000Z | settings/global_param.py | andeyeluguo/AI_physicist | b242204da5a284cd22175bae66e6b4f79814ceeb | [
"MIT"
] | 10 | 2019-10-30T03:42:32.000Z | 2022-03-18T14:20:48.000Z | PrecisionFloorLoss = 2 ** (-32)
COLOR_LIST = ["b", "r", "g", "y", "c", "m", "skyblue", "indigo", "goldenrod", "salmon", "pink",
"silver", "darkgreen", "lightcoral", "navy", "orchid", "steelblue", "saddlebrown",
"orange", "olive", "tan", "firebrick", "maroon", "darkslategray", "crimson", "dodgerblue", "aquamarine",
"b", "r", "g", "y", "c", "m", "skyblue", "indigo", "goldenrod", "salmon", "pink",
"silver", "darkgreen", "lightcoral", "navy", "orchid", "steelblue", "saddlebrown",
"orange", "olive", "tan", "firebrick", "maroon", "darkslategray", "crimson", "dodgerblue", "aquamarine"]
Dt = 0.05 | 75.888889 | 122 | 0.527086 | 62 | 683 | 5.790323 | 0.564516 | 0.011142 | 0.016713 | 0.022284 | 0.902507 | 0.902507 | 0.902507 | 0.902507 | 0.902507 | 0.902507 | 0 | 0.011278 | 0.221083 | 683 | 9 | 123 | 75.888889 | 0.663534 | 0 | 0 | 0.25 | 0 | 0 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3e8e10026db13cea6d5d78b71102e1ee1ea5f266 | 81,221 | py | Python | openapi_client/api/submissions_api.py | osuka/dognews-scraper | 12373064061157083a48ced8e2cabf9d1ace30a5 | [
"MIT"
] | 1 | 2019-11-15T13:19:36.000Z | 2019-11-15T13:19:36.000Z | openapi_client/api/submissions_api.py | osuka/news-extractor | 12373064061157083a48ced8e2cabf9d1ace30a5 | [
"MIT"
] | null | null | null | openapi_client/api/submissions_api.py | osuka/news-extractor | 12373064061157083a48ced8e2cabf9d1ace30a5 | [
"MIT"
] | null | null | null | """
Dognews Server API
Dognews Server client API # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from openapi_client.api_client import ApiClient, Endpoint as _Endpoint
from openapi_client.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from openapi_client.model.fetch import Fetch
from openapi_client.model.moderation import Moderation
from openapi_client.model.paginated_submission_list import PaginatedSubmissionList
from openapi_client.model.paginated_vote_list import PaginatedVoteList
from openapi_client.model.patched_submission import PatchedSubmission
from openapi_client.model.submission import Submission
from openapi_client.model.vote import Vote
class SubmissionsApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __submissions_create(
self,
submission,
**kwargs
):
"""submissions_create # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_updated/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_create(submission, async_req=True)
>>> result = thread.get()
Args:
submission (Submission):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Submission
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission'] = \
submission
return self.call_with_http_info(**kwargs)
self.submissions_create = _Endpoint(
settings={
'response_type': (Submission,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions',
'operation_id': 'submissions_create',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'submission',
],
'required': [
'submission',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission':
(Submission,),
},
'attribute_map': {
},
'location_map': {
'submission': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_create
)
def __submissions_destroy(
self,
id,
**kwargs
):
"""submissions_destroy # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_updated/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_destroy(id, async_req=True)
>>> result = thread.get()
Args:
id (int): A unique integer value identifying this submission.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.submissions_destroy = _Endpoint(
settings={
'response_type': None,
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{id}',
'operation_id': 'submissions_destroy',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(int,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [],
'content_type': [],
},
api_client=api_client,
callable=__submissions_destroy
)
def __submissions_fetch_destroy(
self,
submission_id,
**kwargs
):
"""submissions_fetch_destroy # noqa: E501
Fetching results attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_updated/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_fetch_destroy(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this fetch.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_fetch_destroy = _Endpoint(
settings={
'response_type': None,
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/fetch',
'operation_id': 'submissions_fetch_destroy',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'submission_id',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [],
'content_type': [],
},
api_client=api_client,
callable=__submissions_fetch_destroy
)
def __submissions_fetch_retrieve(
self,
submission_id,
**kwargs
):
"""submissions_fetch_retrieve # noqa: E501
Fetching results attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_updated/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_fetch_retrieve(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this fetch.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Fetch
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_fetch_retrieve = _Endpoint(
settings={
'response_type': (Fetch,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/fetch',
'operation_id': 'submissions_fetch_retrieve',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'submission_id',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__submissions_fetch_retrieve
)
def __submissions_fetch_update(
self,
submission_id,
**kwargs
):
"""submissions_fetch_update # noqa: E501
Fetching results attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_updated/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_fetch_update(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this fetch.
Keyword Args:
fetch (Fetch): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Fetch
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_fetch_update = _Endpoint(
settings={
'response_type': (Fetch,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/fetch',
'operation_id': 'submissions_fetch_update',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'submission_id',
'fetch',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
'fetch':
(Fetch,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
'fetch': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_fetch_update
)
def __submissions_list(
self,
**kwargs
):
"""submissions_list # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_list(async_req=True)
>>> result = thread.get()
Keyword Args:
analysis__status (str): [optional]
analysis__status__isnull (bool): [optional]
fetch__generated_thumbnail__isnull (bool): [optional]
fetch__isnull (bool): [optional]
fetch__status (str): [optional]
fetch__status__isnull (bool): [optional]
fetch__thumbnail__isnull (bool): [optional]
limit (int): Number of results to return per page. [optional]
moderation__isnull (bool): [optional]
moderation__status (str): [optional]
moderation__status__isnull (bool): [optional]
offset (int): The initial index from which to return the results. [optional]
ordering (str): Which field to use when ordering the results. [optional]
status (str): [optional]
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
PaginatedSubmissionList
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.submissions_list = _Endpoint(
settings={
'response_type': (PaginatedSubmissionList,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions',
'operation_id': 'submissions_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'analysis__status',
'analysis__status__isnull',
'fetch__generated_thumbnail__isnull',
'fetch__isnull',
'fetch__status',
'fetch__status__isnull',
'fetch__thumbnail__isnull',
'limit',
'moderation__isnull',
'moderation__status',
'moderation__status__isnull',
'offset',
'ordering',
'status',
],
'required': [],
'nullable': [
],
'enum': [
'analysis__status',
'fetch__status',
'moderation__status',
'status',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('analysis__status',): {
"FAILED": "failed",
"PASSED": "passed",
"PENDING": "pending"
},
('fetch__status',): {
"FETCHED": "fetched",
"PENDING": "pending",
"REJ_ERROR": "rej_error",
"REJ_FETCH": "rej_fetch"
},
('moderation__status',): {
"ACCEPTED": "accepted",
"PENDING": "pending",
"REJECTED": "rejected"
},
('status',): {
"ACCEPTED": "accepted",
"PENDING": "pending",
"REJ_BANNED": "rej_banned",
"REJ_FETCH": "rej_fetch",
"REJ_MOD": "rej_mod",
"REJ_SENTIM": "rej_sentim"
},
},
'openapi_types': {
'analysis__status':
(str,),
'analysis__status__isnull':
(bool,),
'fetch__generated_thumbnail__isnull':
(bool,),
'fetch__isnull':
(bool,),
'fetch__status':
(str,),
'fetch__status__isnull':
(bool,),
'fetch__thumbnail__isnull':
(bool,),
'limit':
(int,),
'moderation__isnull':
(bool,),
'moderation__status':
(str,),
'moderation__status__isnull':
(bool,),
'offset':
(int,),
'ordering':
(str,),
'status':
(str,),
},
'attribute_map': {
'analysis__status': 'analysis__status',
'analysis__status__isnull': 'analysis__status__isnull',
'fetch__generated_thumbnail__isnull': 'fetch__generated_thumbnail__isnull',
'fetch__isnull': 'fetch__isnull',
'fetch__status': 'fetch__status',
'fetch__status__isnull': 'fetch__status__isnull',
'fetch__thumbnail__isnull': 'fetch__thumbnail__isnull',
'limit': 'limit',
'moderation__isnull': 'moderation__isnull',
'moderation__status': 'moderation__status',
'moderation__status__isnull': 'moderation__status__isnull',
'offset': 'offset',
'ordering': 'ordering',
'status': 'status',
},
'location_map': {
'analysis__status': 'query',
'analysis__status__isnull': 'query',
'fetch__generated_thumbnail__isnull': 'query',
'fetch__isnull': 'query',
'fetch__status': 'query',
'fetch__status__isnull': 'query',
'fetch__thumbnail__isnull': 'query',
'limit': 'query',
'moderation__isnull': 'query',
'moderation__status': 'query',
'moderation__status__isnull': 'query',
'offset': 'query',
'ordering': 'query',
'status': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__submissions_list
)
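# Usage sketch (hedged; illustrative, not generated code): the enum filters
# declared above only accept the listed values, e.g.
#     page = api.submissions_list(status="pending", fetch__status="fetched",
#                                 limit=20, offset=0)
# which returns a PaginatedSubmissionList; values outside the allowed_values
# map should be rejected client-side before any HTTP request is made.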
def __submissions_moderation_destroy(
self,
submission_id,
**kwargs
):
"""submissions_moderation_destroy # noqa: E501
Moderation attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_moderation_destroy(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this moderation.
Keyword Args:
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_moderation_destroy = _Endpoint(
settings={
'response_type': None,
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/moderation',
'operation_id': 'submissions_moderation_destroy',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'submission_id',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [],
'content_type': [],
},
api_client=api_client,
callable=__submissions_moderation_destroy
)
def __submissions_moderation_retrieve(
self,
submission_id,
**kwargs
):
"""submissions_moderation_retrieve # noqa: E501
Moderation attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_moderation_retrieve(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this moderation.
Keyword Args:
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Moderation
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_moderation_retrieve = _Endpoint(
settings={
'response_type': (Moderation,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/moderation',
'operation_id': 'submissions_moderation_retrieve',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'submission_id',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__submissions_moderation_retrieve
)
def __submissions_moderation_update(
self,
submission_id,
**kwargs
):
"""submissions_moderation_update # noqa: E501
Moderation attached to a submission **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_moderation_update(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int): A unique value identifying this moderation.
Keyword Args:
moderation (Moderation): [optional]
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Moderation
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_moderation_update = _Endpoint(
settings={
'response_type': (Moderation,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/moderation',
'operation_id': 'submissions_moderation_update',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'submission_id',
'moderation',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
'moderation':
(Moderation,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
'moderation': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_moderation_update
)
def __submissions_partial_update(
self,
id,
**kwargs
):
"""submissions_partial_update # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_partial_update(id, async_req=True)
>>> result = thread.get()
Args:
id (int): A unique integer value identifying this submission.
Keyword Args:
patched_submission (PatchedSubmission): [optional]
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Submission
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.submissions_partial_update = _Endpoint(
settings={
'response_type': (Submission,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{id}',
'operation_id': 'submissions_partial_update',
'http_method': 'PATCH',
'servers': None,
},
params_map={
'all': [
'id',
'patched_submission',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(int,),
'patched_submission':
(PatchedSubmission,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
'patched_submission': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_partial_update
)
def __submissions_retrieve(
self,
id,
**kwargs
):
"""submissions_retrieve # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_retrieve(id, async_req=True)
>>> result = thread.get()
Args:
id (int): A unique integer value identifying this submission.
Keyword Args:
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Submission
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.submissions_retrieve = _Endpoint(
settings={
'response_type': (Submission,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{id}',
'operation_id': 'submissions_retrieve',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(int,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__submissions_retrieve
)
def __submissions_update(
self,
id,
submission,
**kwargs
):
"""submissions_update # noqa: E501
Submitted articles for review **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_update(id, submission, async_req=True)
>>> result = thread.get()
Args:
id (int): A unique integer value identifying this submission.
submission (Submission):
Keyword Args:
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Submission
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
kwargs['submission'] = \
submission
return self.call_with_http_info(**kwargs)
self.submissions_update = _Endpoint(
settings={
'response_type': (Submission,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{id}',
'operation_id': 'submissions_update',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'id',
'submission',
],
'required': [
'id',
'submission',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(int,),
'submission':
(Submission,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
'submission': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_update
)
def __submissions_votes_create(
self,
submission_id,
**kwargs
):
"""submissions_votes_create # noqa: E501
Vote management /submissions/(id)/votes (get, post) **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrModeratorOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user * AND if the user is not in the Moderators group. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_votes_create(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int):
Keyword Args:
vote (Vote): [optional]
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Vote
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_votes_create = _Endpoint(
settings={
'response_type': (Vote,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/votes',
'operation_id': 'submissions_votes_create',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'submission_id',
'vote',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
'vote':
(Vote,),
},
'attribute_map': {
'submission_id': 'submission_id',
},
'location_map': {
'submission_id': 'path',
'vote': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded',
'multipart/form-data'
]
},
api_client=api_client,
callable=__submissions_votes_create
)
def __submissions_votes_list(
self,
submission_id,
**kwargs
):
"""submissions_votes_list # noqa: E501
Vote management /submissions/(id)/votes (get, post) **Permission restrictions:** + `IsAuthenticated`: *Rejects all operations if the user is not authenticated* + `IsOwnerOrModeratorOrStaff`: *Blocks update/partial_update/destroy if: * the user is NOT in the staff group * AND if the model has a property called 'owner' and its value differs from the request user * AND if the user is not in the Moderators group. Everything else is allowed* + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.* # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submissions_votes_list(submission_id, async_req=True)
>>> result = thread.get()
Args:
submission_id (int):
Keyword Args:
limit (int): Number of results to return per page. [optional]
offset (int): The initial index from which to return the results. [optional]
_return_http_data_only (bool): if True, return only the response
data, without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
PaginatedVoteList
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['submission_id'] = \
submission_id
return self.call_with_http_info(**kwargs)
self.submissions_votes_list = _Endpoint(
settings={
'response_type': (PaginatedVoteList,),
'auth': [
'basicAuth',
'cookieAuth',
'jwtAuth',
'tokenAuth'
],
'endpoint_path': '/submissions/{submission_id}/votes',
'operation_id': 'submissions_votes_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'submission_id',
'limit',
'offset',
],
'required': [
'submission_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'submission_id':
(int,),
'limit':
(int,),
'offset':
(int,),
},
'attribute_map': {
'submission_id': 'submission_id',
'limit': 'limit',
'offset': 'offset',
},
'location_map': {
'submission_id': 'path',
'limit': 'query',
'offset': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__submissions_votes_list
)
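# End-to-end usage sketch (hedged; ApiClient and the SubmissionsApi class
# name are assumptions, not taken from this file):
#     with ApiClient(configuration) as api_client:
#         api = SubmissionsApi(api_client)
#         votes = api.submissions_votes_list(submission_id=42, limit=50)
# Calls are synchronous by default; passing async_req=True returns a thread
# whose .get() yields the deserialized PaginatedVoteList.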
| 42.435214 | 885 | 0.49124 | 7,085 | 81,221 | 5.409315 | 0.038391 | 0.023014 | 0.018995 | 0.019726 | 0.916451 | 0.902727 | 0.886236 | 0.88334 | 0.876138 | 0.870737 | 0 | 0.002428 | 0.426897 | 81,221 | 1,913 | 886 | 42.457397 | 0.820916 | 0.400377 | 0 | 0.678108 | 1 | 0 | 0.24054 | 0.05299 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011442 | false | 0.000763 | 0.008391 | 0 | 0.031274 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e4168dc3d7174676870a495d95bed2ea5e2e0bf2 | 24,143 | py | Python | evaluation/evaluation.py | bhoov/PaCMAP | 270725cbb0d8374ee670bf0266bccb2b872cbc13 | ["Apache-2.0"] | 142 | 2020-12-09T20:00:37.000Z | 2022-03-29T07:49:32.000Z | evaluation/evaluation.py | bhoov/PaCMAP | 270725cbb0d8374ee670bf0266bccb2b872cbc13 | ["Apache-2.0"] | 26 | 2020-12-11T21:05:25.000Z | 2022-03-28T19:18:33.000Z | evaluation/evaluation.py | bhoov/PaCMAP | 270725cbb0d8374ee670bf0266bccb2b872cbc13 | ["Apache-2.0"] | 16 | 2020-12-12T04:12:26.000Z | 2022-03-28T22:37:18.000Z |
import os
import pickle
import numba
import numpy as np
from numpy.random import default_rng
from collections import Counter
from run_script import data_prep
from sklearn.svm import SVC, LinearSVC
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.kernel_approximation import Nystroem
@numba.njit()
def euclid_dist(x1, x2):
# Numba-compiled Euclidean distance between two 1-D arrays.
result = 0.0
for i in range(x1.shape[0]):
result += (x1[i] - x2[i]) ** 2
return np.sqrt(result)
def score(X, Y, i, j, k):
# Returns 1 when the triplet is violated in the embedding Y: the caller
# passes j as the point closer to i in the original space, so a violation
# means k ends up closer to i than j. X is unused here but kept so the
# signature matches score_largely.
yij = euclid_dist(Y[i], Y[j])
yik = euclid_dist(Y[i], Y[k])
if yik < yij:
return 1
else:
return 0
def score_largely(X, Y, i,j,k):
xij = euclid_dist(X[i], X[j])
xik = euclid_dist(X[i], X[k])
yij = euclid_dist(Y[i], Y[j])
yik = euclid_dist(Y[i], Y[k])
if (xik-xij)/(xik+1e-15) < 0.2: # when the triplet is less important in high-dim space
if (yij-yik)/(yik+1e-15) < 0.2: # no violation or slight violation
return 0
else:
return 1
else: # when the triplet is important in high-dim space
if yij < yik:
return 0
else:
return 1
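# Worked example for the 0.2 relative-margin rule above (illustrative numbers
# only): with xij = 8 and xik = 9, (xik - xij) / xik ~= 0.11 < 0.2, so the
# triplet counts as unimportant and only a large flip in the embedding,
# (yij - yik) / yik >= 0.2, is scored as a violation. With xij = 2 and
# xik = 9 the triplet is important, and any flip (yij >= yik) is a violation.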
def eval_random(X, Y, num=20):
# Counts violated random triplets: for each anchor i, `num` random pairs
# are sampled and the pair member closer to i in X should stay closer in
# the embedding Y, so lower is better. Very high-dimensional X is first
# centered in place and reduced to 100 components to keep the distance
# computations tractable.
n, x_dim = X.shape
if x_dim > 100:
X -= np.mean(X, axis=0)
X = TruncatedSVD(n_components=100, random_state=0).fit_transform(X)
res = 0
for i in range(n):
for j in range(num):
selected = np.random.randint(0, n, 2)
if euclid_dist(X[i], X[selected[0]]) < euclid_dist(X[i], X[selected[1]]):
res += score(X, Y, i, selected[0], selected[1])
else:
res += score(X, Y, i, selected[1], selected[0])
return res
def knn_clf(nbr_vec, y):
'''
Helper function to generate knn classification result.
'''
y_vec = y[nbr_vec]
c = Counter(y_vec)
return c.most_common(1)[0][0]
def knn_eval(X, y, n_neighbors=1):
'''
This is a function that is used to evaluate the lower dimension embedding.
An accuracy is calculated by a k-nearest neighbor classifier, using
leave-one-out cross validation.
Input:
X: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset.
n_neighbors: The number of neighbors used by the classifier.
Output:
avg_acc: The avg accuracy generated by the clf, using leave one out cross val.
'''
sum_acc = 0
max_acc = X.shape[0]
# Train once, reuse multiple times; the nearest neighbor of each point is
# the point itself, so query one extra neighbor and drop the first column.
nbrs = NearestNeighbors(n_neighbors=n_neighbors+1).fit(X)
distances, indices = nbrs.kneighbors(X)
indices = indices[:, 1:]
distances = distances[:, 1:]
for i in range(X.shape[0]):
result = knn_clf(indices[i], y)
if result == y[i]:
sum_acc += 1
avg_acc = sum_acc / max_acc
return avg_acc
def knn_eval_series(X, y, n_neighbors_list=[1, 3, 5, 10, 15, 20, 25, 30]):
'''
This is a function that is used to evaluate the lower dimension embedding.
An accuracy is calculated by a k-nearest neighbor classifier.
A series of accuracies will be calculated for the given n_neighbors.
Input:
X: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset.
n_neighbors_list: A list of int.
Output:
avg_accs: The avg accuracies generated by the clf, using leave one out cross val.
'''
avg_accs = []
for n_neighbors in n_neighbors_list:
avg_acc = knn_eval(X, y, n_neighbors)
avg_accs.append(avg_acc)
return avg_accs
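# Hedged usage sketch (synthetic data, not part of the original pipeline):
# scores a toy 2-D embedding of three Gaussian blobs.
def _demo_knn_eval_series():
    rng = default_rng(0)
    centers = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
    y_demo = rng.integers(0, 3, size=150)
    X_demo = centers[y_demo] + rng.normal(scale=0.5, size=(150, 2))
    return knn_eval_series(X_demo, y_demo, n_neighbors_list=[1, 3, 5])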
def faster_knn_eval_series(X, y, n_neighbors_list=[1, 3, 5, 10, 15, 20, 25, 30]):
'''
This is a faster variant of knn_eval_series.
An accuracy is calculated by a k-nearest neighbor classifier.
A series of accuracies will be calculated for the given n_neighbors,
querying the nearest neighbors only once for the largest k.
Input:
X: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset.
n_neighbors_list: A list of int.
Output:
avg_accs: The avg accuracies generated by the clf, using leave one out cross val.
'''
avg_accs = []
max_acc = X.shape[0]
# Train once, reuse multiple times
nbrs = NearestNeighbors(n_neighbors=n_neighbors_list[-1]+1).fit(X)
distances, indices = nbrs.kneighbors(X)
indices = indices[:, 1:]
distances = distances[:, 1:]
for n_neighbors in n_neighbors_list:
sum_acc = 0
indices_temp = indices[:, :n_neighbors]  # slice once per k, not per sample
for i in range(X.shape[0]):
result = knn_clf(indices_temp[i], y)
if result == y[i]:
sum_acc += 1
avg_acc = sum_acc / max_acc
avg_accs.append(avg_acc)
return avg_accs
def svm_eval(X, y, img_verbose=False, n_splits=5, **kwargs):
'''
This is a function that is used to evaluate the lower dimension embedding.
An accuracy is calculated by an SVM with rbf kernel, using stratified
k-fold cross validation.
Input:
X: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset.
kwargs: Any keyword argument that is passed to the SVM.
Output:
avg_acc: The (avg) accuracy generated by an SVM with rbf kernel.
'''
X = scale(X)
skf = StratifiedKFold(n_splits=n_splits)
sum_acc = 0
max_acc = n_splits
for train_index, test_index in skf.split(X, y):
clf = SVC(**kwargs)
clf.fit(X[train_index], y[train_index])
acc = clf.score(X[test_index], y[test_index])
sum_acc += acc
avg_acc = sum_acc/max_acc
return avg_acc
def faster_svm_eval(X, y, n_splits=5, **kwargs):
'''
This is an accelerated version of the svm_eval function.
An accuracy is calculated by a linear SVM on a Nystroem approximation
of the rbf kernel.
Input:
X: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset.
kwargs: Any keyword argument that is passed to the SVM.
Output:
avg_acc: The (avg) accuracy generated by an SVM with rbf kernel.
'''
X = X.astype(float)  # np.float was removed from recent NumPy versions
X = scale(X)
skf = StratifiedKFold(n_splits=n_splits)
sum_acc = 0
max_acc = n_splits
for train_index, test_index in skf.split(X, y):
feature_map_nystroem = Nystroem(gamma=1/(X.var()*X.shape[1]), random_state=1, n_components=300)
data_transformed = feature_map_nystroem.fit_transform(X[train_index])
clf = LinearSVC(random_state=0, tol=1e-5, **kwargs)
clf.fit(data_transformed, y[train_index])
test_transformed = feature_map_nystroem.transform(X[test_index])
acc = clf.score(test_transformed, y[test_index])
sum_acc += acc
avg_acc = sum_acc/max_acc
return avg_acc
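# Hedged usage sketch: the Nystroem + LinearSVC pair above approximates an
# rbf-kernel SVC at much lower cost; this synthetic call mirrors that setup
# (400 samples so each training fold exceeds the 300 Nystroem components).
def _demo_faster_svm_eval():
    rng = default_rng(1)
    y_demo = rng.integers(0, 2, size=400)
    X_demo = rng.normal(size=(400, 10)) + y_demo[:, None] * 3.0
    return faster_svm_eval(X_demo, y_demo, n_splits=5)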
def centroid_triplet_eval(X, X_new, y):
'''
This is a function that is used to evaluate the lower dimension embedding.
A triplet satisfaction score is calculated by evaluating how many triplets
of cluster centroids have been violated.
Input:
X: A numpy array with the shape [N, p]. The higher dimension embedding
of some dataset. Expected to have some clusters.
X_new: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters as well.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset. Used to identify clusters.
Output:
acc: The score generated by the algorithm.
'''
cluster_mean_ori, cluster_mean_new = [], []
categories = np.unique(y)
num_cat = len(categories)
mask = np.mask_indices(num_cat, np.tril, -1)
for i in range(num_cat):
label = categories[i]
X_clus_ori = X[y == label]
X_clus_new = X_new[y == label]
cluster_mean_ori.append(np.mean(X_clus_ori, axis = 0))
cluster_mean_new.append(np.mean(X_clus_new, axis = 0))
cluster_mean_ori = np.array(cluster_mean_ori)
cluster_mean_new = np.array(cluster_mean_new)
ori_dist = euclidean_distances(cluster_mean_ori)[mask]
new_dist = euclidean_distances(cluster_mean_new)[mask]
dist_agree = 0. # two distance agrees
dist_all = 0. # count
for i in range(len(ori_dist)):
for j in range(i+1, len(ori_dist)):
if ori_dist[i] > ori_dist[j] and new_dist[i] > new_dist[j]:
dist_agree += 1
elif ori_dist[i] <= ori_dist[j] and new_dist[i] <= new_dist[j]:
dist_agree += 1
dist_all += 1
return dist_agree/dist_all
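# Hedged usage sketch: compares centroid-distance orderings between a
# synthetic high-dimensional dataset and its 2-D PCA embedding.
def _demo_centroid_triplet_eval():
    rng = default_rng(2)
    y_demo = rng.integers(0, 4, size=200)
    X_high = rng.normal(size=(200, 20)) + y_demo[:, None] * 2.0
    X_low = PCA(n_components=2).fit_transform(X_high)
    return centroid_triplet_eval(X_high, X_low, y_demo)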
def faster_centroid_triplet_eval(X, X_new, y):
'''
This is a function that is used to evaluate the lower dimension embedding.
A triplet satisfaction score is calculated by evaluating how many triplets
of cluster centroids (computed as medians) have been violated.
Input:
X: A numpy array with the shape [N, p]. The higher dimension embedding
of some dataset. Expected to have some clusters.
X_new: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters as well.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset. Used to identify clusters.
Output:
acc: The score generated by the algorithm.
'''
cluster_mean_ori, cluster_mean_new = [], []
categories = np.unique(y)
num_cat = len(categories)
mask = np.mask_indices(num_cat, np.tril, -1)
for i in range(num_cat):
label = categories[i]
X_clus_ori = X[y == label]
X_clus_new = X_new[y == label]
cluster_mean_ori.append(np.median(X_clus_ori, axis = 0))
cluster_mean_new.append(np.median(X_clus_new, axis = 0))
cluster_mean_ori = np.array(cluster_mean_ori)
cluster_mean_new = np.array(cluster_mean_new)
ori_dist = euclidean_distances(cluster_mean_ori)[mask]
new_dist = euclidean_distances(cluster_mean_new)[mask]
dist_agree = 0. # two distance agrees
dist_all = 0. # count
for i in range(len(ori_dist)):
for j in range(i+1, len(ori_dist)):
if ori_dist[i] > ori_dist[j] and new_dist[i] > new_dist[j]:
dist_agree += 1
elif ori_dist[i] <= ori_dist[j] and new_dist[i] <= new_dist[j]:
dist_agree += 1
dist_all += 1
return dist_agree/dist_all
def random_triplet_eval(X, X_new, y):
'''
This is a function that is used to evaluate the lower dimension embedding.
A triplet satisfaction score is calculated by evaluating how many randomly
selected triplets have been violated. Each point will generate 5 triplets.
Input:
X: A numpy array with the shape [N, p]. The higher dimension embedding
of some dataset. Expected to have some clusters.
X_new: A numpy array with the shape [N, k]. The lower dimension embedding
of some dataset. Expected to have some clusters as well.
y: A numpy array with the shape [N, 1]. The labels of the original
dataset. Accepted for interface consistency; not used by this metric.
Output:
acc: The score generated by the algorithm.
'''
# Sampling Triplets
# Five triplets per point
anchors = np.arange(X.shape[0])
rng = default_rng()
triplets = rng.choice(anchors, (X.shape[0], 5, 2))
triplet_labels = np.zeros((X.shape[0], 5))
anchors = anchors.reshape((-1, 1, 1))
# Calculate the distances and generate labels
b = np.broadcast(anchors, triplets)
distances = np.empty(b.shape)
distances.flat = [np.linalg.norm(X[u] - X[v]) for (u,v) in b]
labels = distances[:, :, 0] < distances[: , :, 1]
# Calculate distances for LD
b = np.broadcast(anchors, triplets)
distances_l = np.empty(b.shape)
distances_l.flat = [np.linalg.norm(X_new[u] - X_new[v]) for (u,v) in b]
pred_vals = distances_l[:, :, 0] < distances_l[:, :, 1]
correct = np.sum(pred_vals == labels)
acc = correct/X.shape[0]/5
return acc
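# Hedged usage sketch: random triplet accuracy of a PCA embedding; 0.5 is
# chance level and 1.0 means every sampled triplet is preserved.
def _demo_random_triplet_eval():
    rng = default_rng(3)
    X_high = rng.normal(size=(300, 30))
    X_low = PCA(n_components=2).fit_transform(X_high)
    return random_triplet_eval(X_high, X_low, rng.integers(0, 2, size=300))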
def evaluate_output(X, X_new, y, name, baseline=False, labelled=True):
results = {}
results['name'] = name
if labelled:
if baseline:
baseline_knn_accs = knn_eval_series(X, y)
baseline_svm_acc = faster_svm_eval(X, y)
results['baseline_knn'] = baseline_knn_accs
results['baseline_svm'] = baseline_svm_acc
knn_accs = knn_eval_series(X_new, y)
svm_acc = faster_svm_eval(X_new, y)
cte_acc = centroid_triplet_eval(X, X_new, y)
results['knn'] = knn_accs
results['svm'] = svm_acc
results['cte'] = cte_acc
rte_acc = random_triplet_eval(X, X_new, y)
results['rte'] = rte_acc
return results
def evaluate_output_non_svm(X, X_new, y, name, baseline=False, labelled=True):
results = {}
results['name'] = name
if labelled:
if baseline:
baseline_knn_accs = knn_eval_series(X, y)
results['baseline_knn'] = baseline_knn_accs
knn_accs = knn_eval_series(X_new, y)
cte_acc = centroid_triplet_eval(X, X_new, y)
results['knn'] = knn_accs
results['cte'] = cte_acc
rte_acc = random_triplet_eval(X, X_new, y)
results['rte'] = rte_acc
return results
def evaluate_output_cte_only(X, X_new, y, name, baseline=False, labelled=True):
results = {}
results['name'] = name
if labelled:
knn_accs = knn_eval_series(X_new, y)
cte_acc = centroid_triplet_eval(X, X_new, y)
results['knn'] = knn_accs
results['cte'] = cte_acc
rte_acc = random_triplet_eval(X, X_new, y)
results['rte'] = rte_acc
return results
def evaluate_output_svm_only(X, X_new, y, name, baseline=False, labelled=True):
results = {}
results['name'] = name
if labelled:
if baseline:
baseline_svm_acc = faster_svm_eval(X, y)
results['baseline_svm'] = baseline_svm_acc
svm_acc = faster_svm_eval(X_new, y)
results['svm'] = svm_acc
return results
def fetch_output(dataset_name='MNIST'):
location = '../output'
all_file = os.listdir(location)
selected_file = []
for file in all_file:
# Keep this dataset's embedding files, skipping names whose character
# right after the dataset name is 'h' or 'b' (presumably variant runs
# that should not be evaluated here).
if file[:len(dataset_name)] == dataset_name and file[len(dataset_name)+1] != 'h' and file[len(dataset_name)+1] != 'b':
selected_file.append(file)
return selected_file
def evaluate_category(dataset_name='MNIST', labelled=True, data_pca=True, svm=True, svm_only=False):
if data_pca:
print('data_pca')
if svm:
print('svm')
if svm_only:
print('svm_only')
X, y = data_prep(dataset_name, 70000)
if X.shape[1] > 100:
if data_pca and dataset_name != 'Mouse_scRNA':
pca = PCA(n_components=100)
X = pca.fit_transform(X)
elif data_pca and dataset_name == 'Mouse_scRNA':
pca = PCA(n_components=1000)
X = pca.fit_transform(X)
location = '../output'
selected_file = fetch_output(dataset_name)
i = 0
all_results = {}
for file in selected_file:
X_new = np.load(os.path.join(location, file))
for j in range(5):
if i == 0 and j == 0:
if svm:
results = evaluate_output(X, X_new[j], y, file, baseline=True, labelled=labelled)
elif svm_only:
results = evaluate_output_svm_only(X, X_new[j], y, file, baseline=True, labelled=labelled)
else:
results = evaluate_output_non_svm(X, X_new[j], y, file, baseline=True, labelled=labelled)
all_results[results['name'] + str(j)] = results
if labelled:
if not svm_only:
all_results['baseline_knn'] = results['baseline_knn']
if svm or svm_only:
all_results['baseline_svm'] = results['baseline_svm']
else:
if svm:
results = evaluate_output(X, X_new[j], y, file, baseline=False, labelled=labelled)
elif svm_only:
results = evaluate_output_svm_only(X, X_new[j], y, file, baseline=False, labelled=labelled)
else:
results = evaluate_output_non_svm(X, X_new[j], y, file, baseline=False, labelled=labelled)
all_results[results['name'] + str(j)] = results
i += 1
if data_pca:
dataset_name += '_pca'
if labelled:
dataset_name += '_l'
if svm_only:
dataset_name += '_svm'
with open(dataset_name, 'wb') as fp:
pickle.dump(all_results, fp, protocol=pickle.HIGHEST_PROTOCOL)
print('Finished')
return all_results
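# Example invocation (hedged; assumes '../output' holds the .npy embeddings
# that run_script produced for the named dataset):
#     results = evaluate_category('MNIST', labelled=True, data_pca=True, svm=True)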
def fetch_LargeVis(dataset_name='MNIST'):
location = '../output'
all_file = os.listdir(location)
selected_file = []
for file in all_file:
# Keep only LargeVis outputs, whose names continue with 'L' right after
# the dataset name; check the prefix first so short names cannot raise
# an IndexError.
if file[:len(dataset_name)] != dataset_name or len(file) <= len(dataset_name) + 1:
continue
if file[len(dataset_name)+1] == 'L':
selected_file.append(file)
return selected_file
def evaluate_LargeVis(dataset_name='MNIST', labelled=True, data_pca=True, svm=True, svm_only=False):
X, y = data_prep(dataset_name, 70000)
if X.shape[1] > 100:
if data_pca and dataset_name != 'Mouse_scRNA':
pca = PCA(n_components=100)
X = pca.fit_transform(X)
elif data_pca and dataset_name == 'Mouse_scRNA':
pca = PCA(n_components=1000)
X = pca.fit_transform(X)
location = '../output'
selected_file = fetch_LargeVis(dataset_name)
i = 0
all_results = {}
for file in selected_file:
X_new = np.load(os.path.join(location, file))
for j in range(5):
if i == 0 and j == 0:
if svm:
results = evaluate_output(X, X_new[j], y, file, baseline=True, labelled=labelled)
elif svm_only:
results = evaluate_output_svm_only(X, X_new[j], y, file, baseline=True, labelled=labelled)
else:
results = evaluate_output_non_svm(X, X_new[j], y, file, baseline=True, labelled=labelled)
all_results[results['name'] + str(j)] = results
if labelled:
if not svm_only:
all_results['baseline_knn'] = results['baseline_knn']
if svm or svm_only:
all_results['baseline_svm'] = results['baseline_svm']
else:
if svm:
results = evaluate_output(X, X_new[j], y, file, baseline=False, labelled=labelled)
elif svm_only:
results = evaluate_output_svm_only(X, X_new[j], y, file, baseline=False, labelled=labelled)
else:
results = evaluate_output_non_svm(X, X_new[j], y, file, baseline=False, labelled=labelled)
all_results[results['name'] + str(j)] = results
i += 1
dataset_name += '_largevis'
if data_pca:
dataset_name += '_pca'
if labelled:
dataset_name += '_l'
if svm_only:
dataset_name += '_svm'
elif not svm:
dataset_name += '_nonsvm'
with open(dataset_name, 'wb') as fp:
pickle.dump(all_results, fp, protocol=pickle.HIGHEST_PROTOCOL)
print('Finished')
return all_results
def evaluate_npy(selected_file, dataset_name='MNIST', labelled=True, data_pca=True, svm=True):
size_arg = 10000000
if dataset_name == 's_curve' or dataset_name == 's_curve_hole':
size_arg = 10000
X, y = data_prep(dataset_name, size_arg)
if X.shape[1] > 100:
if data_pca and dataset_name != 'Mouse_scRNA':
pca = PCA(n_components=100)
X = pca.fit_transform(X)
elif data_pca and dataset_name == 'Mouse_scRNA':
pca = PCA(n_components=1000)
X = pca.fit_transform(X)
location = '../output'
output_location = '../test_results/'
for file in selected_file:
all_results = {}
X_new = np.load(os.path.join(location, file))
for j in range(5):
if svm:
results = evaluate_output_svm_only(X, X_new[j], y, file, baseline=False, labelled=labelled)
else:
results = evaluate_output_non_svm(X, X_new[j], y, file, baseline=False, labelled=labelled)
all_results[str(j)] = results
outfilename = file[:-4]
if svm:
outfilename += '_svm'
outfilename = output_location + outfilename + '.json'
# Note: the payload is a pickle despite the '.json' suffix.
with open(outfilename, 'wb') as fp:
pickle.dump(all_results, fp, protocol=pickle.HIGHEST_PROTOCOL)
print('Successfully evaluated ' + file)
print('Finished evaluation')
def evaluate_ctes(selected_file, dataset_name='MNIST', labelled=True, data_pca=True):
size_arg = 10000000
if dataset_name == 's_curve' or dataset_name == 's_curve_hole':
size_arg = 10000
X, y = data_prep(dataset_name, size_arg)
if X.shape[1] > 100:
if data_pca and dataset_name != 'Mouse_scRNA':
pca = PCA(n_components=100)
X = pca.fit_transform(X)
elif data_pca and dataset_name == 'Mouse_scRNA':
pca = PCA(n_components=1000)
X = pca.fit_transform(X)
location = '../output'
output_location = '../test_results/'
for file in selected_file:
all_results = {}
X_new = np.load(os.path.join(location, file))
for j in range(5):
results = centroid_triplet_eval(X, X_new[j], y)
all_results[str(j)] = results
outfilename = file[:-4]
outfilename += '_cte'
outfilename = output_location + outfilename + '.json'
# Note: the payload is a pickle despite the '.json' suffix.
with open(outfilename, 'wb') as fp:
pickle.dump(all_results, fp, protocol=pickle.HIGHEST_PROTOCOL)
print('Successfully evaluated ' + file)
print('Finished evaluation')
def evaluate_rtes(selected_file, dataset_name='MNIST', labelled=True, data_pca=True):
size_arg = 10000000
if dataset_name == 's_curve' or dataset_name == 's_curve_hole':
size_arg = 10000
X, y = data_prep(dataset_name, size_arg)
if X.shape[1] > 100:
if data_pca and dataset_name != 'Mouse_scRNA':
pca = PCA(n_components=100)
X = pca.fit_transform(X)
elif data_pca and dataset_name == 'Mouse_scRNA':
pca = PCA(n_components=1000)
X = pca.fit_transform(X)
location = '../output'
output_location = '../test_results/'
for file in selected_file:
all_results = {}
X_new = np.load(os.path.join(location, file))
for j in range(5):
results = random_triplet_eval(X, X_new[j], y)
all_results[str(j)] = results
outfilename = file[:-4]
outfilename += '_rte'
outfilename = output_location + outfilename + '.json'
with open(outfilename, 'wb') as fp:
pickle.dump(all_results, fp, protocol=pickle.HIGHEST_PROTOCOL)
print('Succesfully evaluated ' + file)
print('Finished evaluation')
| 39.449346 | 126 | 0.621878 | 3,456 | 24,143 | 4.157697 | 0.090856 | 0.036746 | 0.010091 | 0.019834 | 0.840908 | 0.820308 | 0.79755 | 0.790173 | 0.775698 | 0.763727 | 0 | 0.015364 | 0.277513 | 24,143 | 611 | 127 | 39.513912 | 0.808405 | 0.212235 | 0 | 0.742919 | 0 | 0 | 0.040831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052288 | false | 0 | 0.045752 | 0 | 0.152505 | 0.023965 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e46c837b9928d1a78ace66960e9acb63e5a614ce | 183 | py | Python | sixth.py | SoursosK/Linux-Security-Tools | 6069e5ea125406e4cd4b3d053e2f0c016073aade | [
"MIT"
] | null | null | null | sixth.py | SoursosK/Linux-Security-Tools | 6069e5ea125406e4cd4b3d053e2f0c016073aade | [
"MIT"
] | null | null | null | sixth.py | SoursosK/Linux-Security-Tools | 6069e5ea125406e4cd4b3d053e2f0c016073aade | [
"MIT"
] | null | null | null | import os
os.system("sudo awk -F\":\" '($2 == \"!\" || $2 == \"*\") {print $1}' /etc/shadow")
os.system("sudo awk -F\":\" '($2 == \"!\" || $2 == \"*\") {passwd $1 -l}' /etc/shadow")
| 36.6 | 87 | 0.415301 | 25 | 183 | 3.04 | 0.52 | 0.210526 | 0.315789 | 0.394737 | 0.473684 | 0.473684 | 0.473684 | 0 | 0 | 0 | 0 | 0.039216 | 0.163934 | 183 | 4 | 88 | 45.75 | 0.457516 | 0 | 0 | 0 | 0 | 0 | 0.655738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 10 |
e47c2090b39bb5348bd1b594f9d2ae1c5309a76f | 965 | py | Python | exercism/python/robot-name/robot_name.py | Cythun/online-judge-practice | 1205480a2ff30e2a698917a7717ffe4db2fba2a5 | [
"MIT"
] | null | null | null | exercism/python/robot-name/robot_name.py | Cythun/online-judge-practice | 1205480a2ff30e2a698917a7717ffe4db2fba2a5 | [
"MIT"
] | null | null | null | exercism/python/robot-name/robot_name.py | Cythun/online-judge-practice | 1205480a2ff30e2a698917a7717ffe4db2fba2a5 | [
"MIT"
] | null | null | null | import random
class Robot(object):
def __init__(self):
chars = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L',
'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z')
nums = ('1', '2', '3', '4', '5', '6', '7', '8', '9', '0')
output = ''
for i in range(2):
output += random.SystemRandom().choice(chars)
for i in range(3):
output += random.SystemRandom().choice(nums)
self.name = output
def reset(self):
chars = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L',
'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z')
nums = ('1', '2', '3', '4', '5', '6', '7', '8', '9', '0')
output = ''
for i in range(2):
output += random.SystemRandom().choice(chars)
for i in range(3):
output += random.SystemRandom().choice(nums)
self.name = output
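A brief usage check, hypothetical and not part of the exercise solution: names produced above should be two uppercase letters followed by three digits, and reset() should issue a fresh name in the same format:

import re

robot = Robot()
assert re.match(r'^[A-Z]{2}\d{3}$', robot.name)
robot.reset()
assert re.match(r'^[A-Z]{2}\d{3}$', robot.name)
# With only 26 * 26 * 1000 possible names, collisions across robots are rare but possible.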
| 28.382353 | 77 | 0.384456 | 135 | 965 | 2.718519 | 0.392593 | 0.043597 | 0.065395 | 0.119891 | 0.882834 | 0.882834 | 0.882834 | 0.882834 | 0.882834 | 0.882834 | 0 | 0.036474 | 0.318135 | 965 | 33 | 78 | 29.242424 | 0.521277 | 0 | 0 | 0.818182 | 0 | 0 | 0.074689 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e4c09e415c3d46340df9e146cc9e9d120a240e7b | 56,887 | py | Python | ooiservices/tests/test_calibration_events.py | asascience-open/ooi-ui-services | a3254b612b5831e5e34beaf93000228826c1ed5a | [
"Apache-2.0"
] | 2 | 2015-02-28T00:20:30.000Z | 2015-04-30T12:40:31.000Z | ooiservices/tests/test_calibration_events.py | asascience-open/ooi-ui-services | a3254b612b5831e5e34beaf93000228826c1ed5a | [
"Apache-2.0"
] | 266 | 2015-01-02T21:29:25.000Z | 2020-01-23T16:00:11.000Z | ooiservices/tests/test_calibration_events.py | oceanobservatories/ooi-ui-services | a3254b612b5831e5e34beaf93000228826c1ed5a | [
"Apache-2.0"
] | 13 | 2015-02-04T21:13:34.000Z | 2016-10-18T14:39:36.000Z | #!/usr/bin/env python
"""
Asset Management - Specific testing for calibration event routes and supporting functions.
"""
__author__ = 'Edna Donoughe'
import unittest
from ooiservices.tests.common_tools import (dump_dict, get_event_input_as_unicode, get_event_input_as_string)
from base64 import b64encode
from ooiservices.app import (create_app, db)
from ooiservices.app.models import (User, UserScope, Organization)
from unittest import skipIf
import os
from flask import (url_for)
from ooiservices.app.uframe.uframe_tools import get_uframe_event
from ooiservices.app.uframe.common_tools import is_instrument
from ooiservices.app.uframe.events_create_update import get_calibration_event_id
from random import randint
import datetime
import json
@skipIf(os.getenv('TRAVIS'), 'Skip if testing from Travis CI.')
class CalibrationEventsTestCase(unittest.TestCase):
# enable verbose (during development and documentation) to get a list of
# urls used throughout test cases. Always set to False before check in.
verbose = False
debug = False
root = 'http://localhost:4000'
def setUp(self):
self.app = create_app('TESTING_CONFIG')
self.app_context = self.app.app_context()
self.app_context.push()
db.create_all()
test_username = 'admin'
test_password = 'test'
Organization.insert_org()
User.insert_user(username=test_username, password=test_password)
self.client = self.app.test_client(use_cookies=False)
UserScope.insert_scopes()
admin = User.query.filter_by(user_name='admin').first()
scope = UserScope.query.filter_by(scope_name='user_admin').first()
admin.scopes.append(scope)
scope = UserScope.query.filter_by(scope_name='redmine').first() # added
admin.scopes.append(scope)
scope = UserScope.query.filter_by(scope_name='asset_manager').first()
admin.scopes.append(scope)
db.session.add(admin)
db.session.commit()
def tearDown(self):
db.session.remove()
db.drop_all()
self.app_context.pop()
def get_api_headers(self, username, password):
return {
'Authorization': 'Basic ' + b64encode(
(username + ':' + password).encode('utf-8')).decode('utf-8'),
'Accept': 'application/json',
'Content-Type': 'application/json'
}
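For reference, the Basic credential assembled above is just base64 of 'username:password'. A quick sketch of what the test client actually sends for the admin/test pair used throughout this suite:

from base64 import b64encode

token = b64encode('admin:test'.encode('utf-8')).decode('utf-8')
assert token == 'YWRtaW46dGVzdA=='
# i.e. every request carries: Authorization: Basic YWRtaW46dGVzdA==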
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Test cases
# test_calibration_events
# test_negative_create_duplicate_calibration_events
# test_negative_calibration_events
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Test calibration events.
def test_calibration_events(self):
"""
Create CALIBRATION_DATA event. Only applied for instrument ('Sensor') assets.
http://uframe-3-test.ooi.rutgers.edu:12587/asset/cal?uid=A00679, or,
http://uframe-3-test.ooi.rutgers.edu:12587/asset?uid=A00679
New calibration data format:
{
"@class" : ".XInstrument",
"calibration" : [ {
"@class" : ".XCalibration",
"name" : "CC_scale_factor_volume_scatter",
"calData" : [ {
"@class" : ".XCalibrationData",
"value" : 1.883E-6,
"comments" : null,
"eventId" : 15238,
"assetUid" : "A00992",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_scale_factor_volume_scatter",
"eventStartTime" : 1394755200000,
"eventStopTime" : null,
"notes" : null,
"tense" : "UNKNOWN",
"dataSource" : "FLORT_Cal_Info.xlsx",
"lastModifiedTimestamp" : 1473180383529
} ]
},
}
Three different (basic) calibration data types:
1. scalar,
2. one dimensional array, and
3. two dimensional array
Descriptions:
1. Scalar value
"value" : 10.0,
2. One dimensional array of n values
"value" : [ 10.0, 11.0, 12.0 ... 20.0 ], // eleven values in array
3. Two dimensional array of m times n values
"value" : [[10.0, 11.0, 12.0], [20.0, 21.0, 22.0], [30.0, 31.0, 32.0]], // 3 x 3 array
http://host:12587/asset/cal/A00679
Sample verbose output:
Creating CALIBRATION_DATA event ...
Have some assets (4917)
Note: Number of loops to get instrument asset: 4
----- Instrument:
instrument_id: 3723
instrument_uid: N00104
instrument_rd: CP02PMUO-WFP01-01-VEL3DK000
Processing calibration data type of scalar.
Calibration create...
Creating new event of type CALIBRATION_DATA
Created eventId: 34445 and lastModifiedTimestamp: 1473974539601
Now performing an UPDATE on event we just created...
Calibration update...
Updated eventId: 34445
Update CALIBRATION_DATA event, event id: 34445
Updated eventId: 34445
Calibration update - check results...
Processing calibration data type of one_dimensional.
Calibration create...
Creating new event of type CALIBRATION_DATA
Created eventId: 34447 and lastModifiedTimestamp: 1473974540513
Now performing an UPDATE on event we just created...
Calibration update...
Updated eventId: 34447
Update CALIBRATION_DATA event, event id: 34447
Updated eventId: 34447
Calibration update - check results...
Processing calibration data type of two_dimensional.
Calibration create...
Creating new event of type CALIBRATION_DATA
Created eventId: 34449 and lastModifiedTimestamp: 1473974541435
Now performing an UPDATE on event we just created...
Calibration update...
Updated eventId: 34449
Update CALIBRATION_DATA event, event id: 34449
Updated eventId: 34449
Calibration update - check results...
"""
debug = self.debug
verbose = self.verbose
event_type = 'CALIBRATION_DATA'
if verbose: print '\n'
#if verbose: print '\n event_types: ', event_types
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Add calibration event to an instrument asset.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n ----------------------------------'
print '\n Creating %s event ...' % event_type
# Get some assets...
assets = self.get_some_assets()
self.assertTrue(assets is not None)
self.assertTrue(assets)
self.assertTrue(isinstance(assets, list))
data_types = ['scalar', 'one_dimensional', 'two_dimensional']
number_of_assets = len(assets)
if verbose: print '\n Have some assets (%d)' % number_of_assets
have_instrument_id = False
instrument_id = None
instrument_uid = None
instrument_rd = None
count = 0
while not have_instrument_id and count <= number_of_assets:
count +=1
asset_index = randint(0, (number_of_assets-1))
#if debug: print '\n Random asset_index: %d' % asset_index
# Select an asset...
asset = assets[asset_index]
self.assertTrue(asset is not None)
self.assertTrue(asset)
self.assertTrue(isinstance(asset, dict))
# do not touch asset id 1.
if asset['id'] == 1:
continue
# Get asset_id, asset_uid, rd.
asset_id, asset_uid, rd = self.get_id_uid_rd(asset)
if is_instrument(rd):
if not have_instrument_id:
have_instrument_id = True
instrument_id = asset_id
instrument_uid = asset_uid
instrument_rd = rd
if verbose:
print '\n Note: Number of loops to get instrument asset: %d ' % count
print '\n ----- Instrument:'
print '\n\t instrument_id: %d' % instrument_id
print '\n\t instrument_uid: %s' % instrument_uid
print '\n\t instrument_rd: %s' % instrument_rd
for data_type in data_types:
if verbose: print '\nProcessing calibration data type of %s.' % data_type
# Get data to create calibration event.
#data_type = 'scalar'
input = self.calibration_data_for_create(event_type, instrument_uid, instrument_rd, data_type)
event_name = input['eventName']
if verbose: print '\n\tCalibration create...'
# Create calibration event.
event_id, last_modified = self.create_calibration_event(event_type, instrument_uid, input, event_name)
if debug:
print '\n\tCalibration create input: '
dump_dict(input, debug)
self.assertTrue(input is not None)
self.assertTrue('assetUid' in input)
self.assertTrue(input['assetUid'] is not None)
if verbose:
print '\n\tCreated eventId: %d and lastModifiedTimestamp: %d' % (event_id, last_modified)
print '\n\tNow performing an UPDATE on event we just created...'
# Update calibration event.
if verbose: print '\n\tCalibration update...'
update_input = self.calibration_data_for_update(event_type, instrument_uid, event_id, last_modified, event_name)
self.assertTrue(update_input is not None)
self.assertTrue('eventId' in update_input)
self.assertEquals(int(update_input['eventId']), event_id)
self.assertTrue('assetUid' in update_input)
self.assertEquals(update_input['assetUid'], instrument_uid)
if not isinstance(update_input['eventId'], int):
update_input['eventId'] = int(str(update_input['eventId']))
self.assertTrue(isinstance(update_input['eventId'], int))
if verbose: print '\n\tUpdated eventId: %d' % update_input['eventId']
# Save copy of 'update' data before issuing update request.
update_data = update_input.copy()
if debug:
print '\n ----- calibration event update data: '
dump_dict(update_data, debug)
# Update calibration event, returns event id.
update_event_id = self.update_calibration_event(event_type, update_input, event_id, instrument_uid, event_name)
self.assertTrue(update_event_id is not None)
self.assertTrue(isinstance(update_event_id, int))
if verbose: print '\n\tUpdated eventId: %d' % update_event_id
# Check eventId against the eventId returned on update.
if verbose: print '\n\tCalibration update - check results...'
if debug:
print '\n instrument_uid: ', instrument_uid
print '\n event_name: ', event_name
event_id, last_modified = self.get_calibration_event_id_last_modified(instrument_uid, event_name)
self.assertTrue(event_id is not None)
self.assertTrue(last_modified is not None)
self.assertEquals(update_event_id, event_id)
# Get calibration event by event id
event = get_uframe_event(event_id)
if debug: print '\n\tUpdated calibration data event(id: %d): %s' % (event_id, event)
self.assertTrue(event is not None)
if verbose:
print '\n Updated uframe calibration data event (2d): '
dump_dict(event, verbose)
# Check calibration content changes are reflected in 'updated' calibration event.
update_data_keys = update_data.keys()
event_keys = event.keys()
self.assertEquals(len(event_keys), len(update_data_keys))
for key in event_keys:
self.assertTrue(key in update_data_keys)
for key in update_data_keys:
if key != '@class':
self.assertTrue(key in event_keys)
if verbose: print '\n'
def test_calibration_events_two_dimensional(self):
"""
Create CALIBRATION_DATA event. Only applied for instrument ('Sensor') assets.
http://uframe-3-test.ooi.rutgers.edu:12587/asset/cal?uid=A00679, or,
http://uframe-3-test.ooi.rutgers.edu:12587/asset?uid=A00679
New calibration data format:
{
"@class" : ".XInstrument",
"calibration" : [ {
"@class" : ".XCalibration",
"name" : "CC_scale_factor_volume_scatter",
"calData" : [ {
"@class" : ".XCalibrationData",
"value" : 1.883E-6,
"comments" : null,
"eventId" : 15238,
"assetUid" : "A00992",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_scale_factor_volume_scatter",
"eventStartTime" : 1394755200000,
"eventStopTime" : null,
"notes" : null,
"tense" : "UNKNOWN",
"dataSource" : "FLORT_Cal_Info.xlsx",
"lastModifiedTimestamp" : 1473180383529
} ]
},
}
Test two dimensional array calibration data types:
Descriptions:
Two dimensional array of m times n values (2 x 5, two per row, 5 rows)
"value" : [[10.0, 11.0], [20.0, 21.0], [30.0, 31.0], [40.0, 41.0], [50.0, 51.0]], // 2 x 5 array
http://host:12587/asset/cal/A00679
Sample verbose output:
Creating CALIBRATION_DATA event ...
Have some assets (4917)
Note: Number of loops to get instrument asset: 4
----- Instrument:
instrument_id: 3723
instrument_uid: N00104
instrument_rd: CP02PMUO-WFP01-01-VEL3DK000
Processing calibration data type of two_dimensional.
Calibration create...
Creating new event of type CALIBRATION_DATA
Created eventId: 34449 and lastModifiedTimestamp: 1473974541435
Now performing an UPDATE on event we just created...
Calibration update...
Updated eventId: 34449
Update CALIBRATION_DATA event, event id: 34449
Updated eventId: 34449
Calibration update - check results...
"""
debug = self.debug
verbose = self.verbose
event_type = 'CALIBRATION_DATA'
if verbose: print '\n'
#if verbose: print '\n event_types: ', event_types
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Add calibration event to an instrument asset.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n ----------------------------------'
print '\n Creating %s event ...' % event_type
# Get some assets...
assets = self.get_some_assets()
self.assertTrue(assets is not None)
self.assertTrue(assets)
self.assertTrue(isinstance(assets, list))
data_types = ['two_dimensional']
number_of_assets = len(assets)
if verbose: print '\n Have some assets (%d)' % number_of_assets
have_instrument_id = False
instrument_id = None
instrument_uid = None
instrument_rd = None
two_dimensional_test_values = [[10.0, 11.0], [20.0, 21.0], [30.0, 31.0], [40.0, 41.0], [50.0, 51.0]]
count = 0
while not have_instrument_id and count <= number_of_assets:
count +=1
asset_index = randint(0, (number_of_assets-1))
#if debug: print '\n Random asset_index: %d' % asset_index
# Select an asset...
asset = assets[asset_index]
self.assertTrue(asset is not None)
self.assertTrue(asset)
self.assertTrue(isinstance(asset, dict))
# do not touch asset id 1.
if asset['id'] == 1:
continue
# Get asset_id, asset_uid, rd.
asset_id, asset_uid, rd = self.get_id_uid_rd(asset)
if is_instrument(rd):
if not have_instrument_id:
have_instrument_id = True
instrument_id = asset_id
instrument_uid = asset_uid
instrument_rd = rd
if verbose:
print '\n Note: Number of loops to get instrument asset: %d ' % count
print '\n ----- Instrument:'
print '\n\t instrument_id: %d' % instrument_id
print '\n\t instrument_uid: %s' % instrument_uid
print '\n\t instrument_rd: %s' % instrument_rd
for data_type in data_types:
if verbose: print '\nProcessing calibration data type of %s.' % data_type
# Get data to create calibration event.
input = self.calibration_data_for_create_two_dimensional(event_type, instrument_uid, instrument_rd)
event_name = input['eventName']
if verbose: print '\n\tCalibration create...'
# Create calibration event.
event_id, last_modified = self.create_calibration_event(event_type, instrument_uid, input, event_name)
if verbose:
print '\n\tCalibration create input: '
dump_dict(input, verbose)
self.assertTrue(input is not None)
self.assertTrue('assetUid' in input)
self.assertTrue(input['assetUid'] is not None)
if verbose:
print '\n\tCreated eventId: %d and lastModifiedTimestamp: %d' % (event_id, last_modified)
# Get calibration event just created.
# Get calibration event by event id
uframe_event = get_uframe_event(event_id)
if debug: print '\n\tUpdated calibration data event(id: %d):' % event_id
self.assertTrue(uframe_event is not None)
if verbose:
print '\n Updated uframe calibration data event (2d): '
dump_dict(uframe_event, verbose)
"""
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
# todo - Add 2d special update function for this test.
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n\tNow performing an UPDATE on event we just created...'
# Update calibration event.
if verbose: print '\n\tCalibration update...'
update_input = self.calibration_data_for_update(event_type, instrument_uid, event_id, last_modified, event_name)
self.assertTrue(update_input is not None)
self.assertTrue('eventId' in update_input)
self.assertEquals(int(update_input['eventId']), event_id)
self.assertTrue('assetUid' in update_input)
self.assertEquals(update_input['assetUid'], instrument_uid)
if not isinstance(update_input['eventId'], int):
update_input['eventId'] = int(str(update_input['eventId']))
self.assertTrue(isinstance(update_input['eventId'], int))
if verbose: print '\n\tUpdated eventId: %d' % update_input['eventId']
# Save copy of 'update' data before issuing update request.
update_data = update_input.copy()
if debug:
print '\n ----- calibration event update data: '
dump_dict(update_data, debug)
# Update calibration event, returns event id.
update_event_id = self.update_calibration_event(event_type, update_input, event_id, instrument_uid, event_name)
self.assertTrue(update_event_id is not None)
self.assertTrue(isinstance(update_event_id, int))
if verbose: print '\n\tUpdated eventId: %d' % update_event_id
# Check eventId against the eventId returned on update.
if verbose: print '\n\tCalibration update - check results...'
if debug:
print '\n instrument_uid: ', instrument_uid
print '\n event_name: ', event_name
event_id, last_modified = self.get_calibration_event_id_last_modified(instrument_uid, event_name)
self.assertTrue(event_id is not None)
self.assertTrue(last_modified is not None)
self.assertEquals(update_event_id, event_id)
# Get calibration event by event id
event = get_uframe_event(event_id)
if debug: print '\n\tUpdated calibration data event(id: %d): %s' % (event_id, event)
self.assertTrue(event is not None)
if debug:
print '\n Update uframe event: '
dump_dict(event, debug)
# Check calibration content changes are reflected in 'updated' calibration event.
update_data_keys = update_data.keys()
event_keys = event.keys()
self.assertEquals(len(event_keys), len(update_data_keys))
for key in event_keys:
self.assertTrue(key in update_data_keys)
for key in update_data_keys:
if key != '@class':
self.assertTrue(key in event_keys)
"""
if verbose: print '\n'
def test_negative_create_duplicate_calibration_events(self):
"""
Create CALIBRATION_DATA event. Only applied for instrument ('Sensor') assets.
http://uframe-3-test.ooi.rutgers.edu:12587/asset/cal?uid=A00679, or,
http://uframe-3-test.ooi.rutgers.edu:12587/asset?uid=A00679
{
"@class" : ".XCalibrationData",
"values" : [ -1.493703E-4 ],
"dimensions" : [ 1 ],
"comments" : "Test entry",
"cardinality" : 0,
"assetUid" : "A00679",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_a0",
"eventStartTime" : 1443614400000
}
Three different (basic) calibration data types:
1. scalar,
2. one dimensional array, and
3. two dimensional array
Descriptions:
1. Scalar value
"values" : [ 10.0 ],
"dimensions" : [ 1 ],
"cardinality" : 0,
2. One dimensional array of n values
"values" : [ 10.0, 11.0, 12.0 ... 20.0 ], // eleven values in array
"dimensions" : [ 11 ],
"cardinality" : 1,
3. Two dimensional array of m times n values
"values" : [ 10.0, 11.0, 12.0, 20.0, 21.0, 22.0, 30.0, 31.0, 32.0 ], // 3 x 3 array
"dimensions" : [ 3, 3 ],
"cardinality" : 2,
http://host:12587/asset/cal/A00679
Sample input:
input = {
"@class" : ".XCalibrationData",
"values" : [ -1.493703E-4 ],
"dimensions" : [ 1 ],
"comments" : "Test entry",
"cardinality" : 0,
"assetUid" : "A00679",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_a0",
"eventStartTime" : 1443614400000
}
"""
debug = self.debug
verbose = self.verbose
event_type = 'CALIBRATION_DATA'
if verbose: print '\n'
data_types = ['scalar', 'one_dimensional', 'two_dimensional']
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Add calibration event to an instrument asset.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n ----------------------------------'
print '\n Creating %s event ...' % event_type
# Get some assets...
assets = self.get_some_assets()
self.assertTrue(assets is not None)
self.assertTrue(assets)
self.assertTrue(isinstance(assets, list))
number_of_assets = len(assets)
if verbose: print '\n Have some assets (%d)' % number_of_assets
have_instrument_id = False
instrument_id = None
instrument_uid = None
instrument_rd = None
count = 0
while not have_instrument_id and count <= number_of_assets:
count +=1
asset_index = randint(0, (number_of_assets-1))
#if debug: print '\n Random asset_index: %d' % asset_index
# Select an asset...
asset = assets[asset_index]
self.assertTrue(asset is not None)
self.assertTrue(asset)
self.assertTrue(isinstance(asset, dict))
# do not touch asset id 1.
if asset['id'] == 1:
continue
# Get asset_id, asset_uid, rd.
asset_id, asset_uid, rd = self.get_id_uid_rd(asset)
if is_instrument(rd):
if not have_instrument_id:
have_instrument_id = True
instrument_id = asset_id
instrument_uid = asset_uid
instrument_rd = rd
if verbose:
print '\n Note: Number of loops to get instrument asset: %d ' % count
print '\n ----- Instrument:'
print '\n\t instrument_id: %d' % instrument_id
print '\n\t instrument_uid: %s' % instrument_uid
print '\n\t instrument_rd: %s' % instrument_rd
for data_type in data_types:
if verbose: print '\nProcessing calibration data type of %s.' % data_type
# Get data to create calibration event.
#data_type = 'one_dimensional'
input = self.calibration_data_for_create(event_type, instrument_uid, instrument_rd, data_type)
event_name = input['eventName']
if verbose: print '\n\tCalibration create...'
if debug:
print '\n\tCalibration create input: '
dump_dict(input, debug)
# Create calibration event.
event_id, last_modified = self.create_calibration_event(event_type, instrument_uid, input, event_name)
if verbose:
print '\n event_id: ', event_id
print '\n last_modified: ', last_modified
self.assertTrue(input is not None)
self.assertTrue('assetUid' in input)
self.assertTrue(input['assetUid'] is not None)
if verbose:
print '\n\tCreated eventId: %d and lastModifiedTimestamp: %d' % (event_id, last_modified)
print '\n\tNow performing an UPDATE on event we just created...'
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
# (Negative) Try to create same event, expect error.
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
event_id, last_modified = self.negative_create_calibration_event(event_type, instrument_uid, input, event_name)
def test_negative_calibration_events(self):
"""
Create CALIBRATION_DATA event. Only applied for instrument ('Sensor') assets.
http://uframe-3-test.ooi.rutgers.edu:12587/asset/cal?uid=A00679, or,
http://uframe-3-test.ooi.rutgers.edu:12587/asset?uid=A00679
{
"@class" : ".XCalibrationData",
"values" : [ -1.493703E-4 ],
"dimensions" : [ 1 ],
"comments" : "Test entry",
"cardinality" : 0,
"assetUid" : "A00679",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_a0",
"eventStartTime" : 1443614400000
}
Three different (basic) calibration data types:
1. scalar,
2. one dimensional array, and
3. two dimensional array
Descriptions:
1. Scalar value
"values" : [ 10.0 ],
"dimensions" : [ 1 ],
"cardinality" : 0,
2. One dimensional array of n values
"values" : [ 10.0, 11.0, 12.0 ... 20.0 ], // eleven values in array
"dimensions" : [ 11 ],
"cardinality" : 1,
3. Two dimensional array of m times n values
"values" : [ 10.0, 11.0, 12.0, 20.0, 21.0, 22.0, 30.0, 31.0, 32.0 ], // 3 x 3 array
"dimensions" : [ 3, 3 ],
"cardinality" : 2,
http://host:12587/asset/cal/A00679
Sample input:
input = {
"@class" : ".XCalibrationData",
"values" : [ -1.493703E-4 ],
"dimensions" : [ 1 ],
"comments" : "Test entry",
"cardinality" : 0,
"assetUid" : "A00679",
"eventType" : "CALIBRATION_DATA",
"eventName" : "CC_a0",
"eventStartTime" : 1443614400000
}
"""
debug = self.debug
verbose = self.verbose
headers = self.get_api_headers('admin', 'test')
event_type = 'CALIBRATION_DATA'
if verbose: print '\n'
data_types = ['scalar', 'one_dimensional', 'two_dimensional']
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Add calibration event to an instrument asset.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n ----------------------------------'
print '\n Creating %s event ...' % event_type
# Get some assets...
assets = self.get_some_assets()
self.assertTrue(assets is not None)
self.assertTrue(assets)
self.assertTrue(isinstance(assets, list))
number_of_assets = len(assets)
if verbose: print '\n Have some assets (%d)' % number_of_assets
have_instrument_id = False
instrument_id = None
instrument_uid = None
instrument_rd = None
count = 0
while not have_instrument_id and count <= number_of_assets:
count +=1
asset_index = randint(0, (number_of_assets-1))
#if debug: print '\n Random asset_index: %d' % asset_index
# Select an asset...
asset = assets[asset_index]
self.assertTrue(asset is not None)
self.assertTrue(asset)
self.assertTrue(isinstance(asset, dict))
# do not touch asset id 1.
if asset['id'] == 1:
continue
# Get asset_id, asset_uid, rd.
asset_id, asset_uid, rd = self.get_id_uid_rd(asset)
if is_instrument(rd):
if not have_instrument_id:
have_instrument_id = True
instrument_id = asset_id
instrument_uid = asset_uid
instrument_rd = rd
if verbose:
print '\n Note: Number of loops to get instrument asset: %d ' % count
print '\n ----- Instrument:'
print '\n\t instrument_id: %d' % instrument_id
print '\n\t instrument_uid: %s' % instrument_uid
print '\n\t instrument_rd: %s' % instrument_rd
for data_type in data_types:
if verbose: print '\nProcessing bad calibration data type of %s.' % data_type
# Get data to create calibration event.
#data_type = 'one_dimensional'
input = self.bad_calibration_data_for_create(event_type, instrument_uid, instrument_rd, data_type)
event_name = input['eventName']
if verbose: print '\n\tCalibration create...'
if debug:
print '\n\tCalibration create input: '
dump_dict(input, debug)
# Create calibration event.
url = url_for('uframe.create_event')
if debug: print '\n create url: ', url
data = json.dumps(input)
response = self.client.post(url, headers=headers, data=data)
self.assertEquals(response.status_code, 400)
if debug:
print '\n Create calibration event -- response.status_code: ', response.status_code
if response.status_code != 204:
print '\n Create calibration event -- response.content: ', json.loads(response.data)
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Supporting functions:
# create_calibration_event
# update_calibration_event
# calibration_data_for_create
# calibration_data_for_update
# negative_create_calibration_event
# get_calibration_event_id_last_modified
# - - - - - - - - - -
# get_some_assets
# get_id_uid_rd
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Create calibration event.
def create_calibration_event(self, _event_type, uid, input, event_name):
"""
Create CALIBRATION_DATA event.
"""
debug = self.debug
verbose = self.verbose
headers = self.get_api_headers('admin', 'test')
self.assertTrue(_event_type is not None)
self.assertTrue(uid is not None)
self.assertTrue(input is not None)
self.assertTrue(event_name is not None)
# Define variables specific to event type
if verbose: print '\n\tCreating new event of type %s' % _event_type
target_event_type = _event_type
key = target_event_type
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# (Positive) GET event types
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test_url = url_for('uframe.get_event_type')
response = self.client.get(test_url, headers=headers)
self.assertEquals(response.status_code, 200)
results = json.loads(response.data)
self.assertTrue('event_types' in results)
if debug: print '\n -- len(results): ', len(results)
self.assertTrue(results is not None)
self.assertTrue(isinstance(results, dict))
# Verify there are event_types in a list
events_by_type = results['event_types']
self.assertTrue(events_by_type is not None)
self.assertTrue(isinstance(events_by_type, list))
if debug: print '\n -- len(events_by_type): ', len(events_by_type)
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
# Create Event
#- - - - - - - - - - - - - - - - - - - - - - - - - - -
if debug:
print '\n Create %s event' % key
print '\n debug -- Create request_data(%d): ' % len(input)
dump_dict(input, debug)
url = url_for('uframe.create_event')
if debug: print '\n create url: ', url
data = json.dumps(input)
response = self.client.post(url, headers=headers, data=data)
if debug:
print '\n Create calibration event -- response.status_code: ', response.status_code
if response.status_code != 204:
print '\n Create calibration event -- response.content: ', json.loads(response.data)
self.assertEquals(response.status_code, 200)
if debug: print '\n instrument_uid: ', uid
if debug: print '\n event_name: ', event_name
event_id, last_modified = self.get_calibration_event_id_last_modified(uid, event_name)
self.assertTrue(event_id is not None)
self.assertTrue(last_modified is not None)
self.assertTrue(event_id > 0)
return event_id, last_modified
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Update calibration event. Return event_id and last_modified (timestamp)
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def update_calibration_event(self, _event_type, input, event_id, uid, event_name):
""" Update calibration event.
"""
debug = self.debug
verbose = self.verbose
headers = self.get_api_headers('admin', 'test')
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Update Event
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if verbose:
print '\n\tUpdate %s event, event id: %d' % (_event_type, event_id)
self.assertTrue('eventId' in input)
self.assertTrue(input['eventId'] is not None)
self.assertTrue(isinstance(input['eventId'], int))
if debug:
print '\n test update -- UPDATE request_data: '
dump_dict(input, debug)
url = url_for('uframe.update_event', id=event_id)
if debug: print '\n **** Update url: ', url
data = json.dumps(input)
response = self.client.put(url, headers=headers, data=data)
if debug: print '\n uframe update response.status_code: ', response.status_code
if response.status_code != 200 and response.status_code != 204:
if debug: print '\n response.status_code: ', response.status_code
response_error = json.loads(response.data)
if debug: print '\n response_error: ', response_error
self.assertEquals(response.status_code, 200)
self.assertTrue(response.data is not None)
response_data = json.loads(response.data)
self.assertTrue('event' in response_data)
event = response_data['event']
self.assertTrue(event is not None)
#print '\n debug -- event: ', event
self.assertTrue('eventId' in event)
event_id = event['eventId']
self.assertTrue(event_id is not None)
update_event_id, last_modified = self.get_calibration_event_id_last_modified(uid, event_name)
self.assertTrue(event_id is not None)
self.assertTrue(last_modified is not None)
self.assertEquals(update_event_id, event_id)
event_id = int(str(event['eventId']))
self.assertTrue(event_id is not None)
self.assertTrue(isinstance(event_id, int))
self.assertTrue(event_id > 0)
return event_id
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Get data to create calibration_data event.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def calibration_data_for_create(self, event_type, uid, rd, data_type):
input = {}
debug = False
data_types = ['scalar', 'one_dimensional', 'two_dimensional']
if debug: print '\n Create new %s event for %s, (assetUid: %s)' % (event_type, rd, uid)
self.assertEquals(event_type, 'CALIBRATION_DATA')
self.assertTrue(event_type is not None)
self.assertTrue(uid is not None)
self.assertTrue(rd is not None)
self.assertTrue(is_instrument(rd))
self.assertTrue(data_type in data_types)
if event_type == 'CALIBRATION_DATA':
#"@class" : ".XCalibrationData",
unique_int = randint(5000, 10000)
event_name = 'CC_test_' + uid + str(unique_int)
# 'CC_a0'
unique_num = randint(1000, 2000)
if data_type == 'scalar':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (scalar) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': event_name,
'eventStartTime': 1443614400000,
'value': 42.0027,
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
elif data_type == 'one_dimensional':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (one dimensional) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': event_name,
'eventStartTime': 1443614400000,
'value': [-1.493703E-4, 2.0],
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
elif data_type == 'two_dimensional':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (two dimensional) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': event_name,
'eventStartTime': 1443614400000,
'value': [[-1.493703E-4, -2.0], [31.0, 32]],
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
string_input = get_event_input_as_string(input, debug)
self.assertTrue(input is not None)
return string_input
def calibration_data_for_create_two_dimensional(self, event_type, uid, rd):
# two_dimensional_test_values
two_dimensional_test_values = [[10.0, 11.0], [20.0, 21.0], [30.0, 31.0], [40.0, 41.0], [50.0, 51.0]]
debug = False
if debug: print '\n Create new %s event for %s, (assetUid: %s)' % (event_type, rd, uid)
self.assertEquals(event_type, 'CALIBRATION_DATA')
self.assertTrue(event_type is not None)
self.assertTrue(uid is not None)
self.assertTrue(rd is not None)
self.assertTrue(is_instrument(rd))
self.assertEquals(event_type, 'CALIBRATION_DATA')
#"@class" : ".XCalibrationData",
unique_int = randint(5000, 10000)
event_name = 'CC_test_' + uid + str(unique_int)
# 'CC_a0'
unique_num = randint(1000, 2000)
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (two dimensional) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': event_name,
'eventStartTime': 1443644400000,
'value': two_dimensional_test_values,
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
string_input = get_event_input_as_string(input, debug)
self.assertTrue(input is not None)
return string_input
def bad_calibration_data_for_create(self, event_type, uid, rd, data_type):
input = {}
debug = False
data_types = ['scalar', 'one_dimensional', 'two_dimensional']
if debug: print '\n Create new %s event for %s, (assetUid: %s)' % (event_type, rd, uid)
self.assertEquals(event_type, 'CALIBRATION_DATA')
self.assertTrue(event_type is not None)
self.assertTrue(uid is not None)
self.assertTrue(rd is not None)
self.assertTrue(is_instrument(rd))
self.assertTrue(data_type in data_types)
if event_type == 'CALIBRATION_DATA':
#"@class" : ".XCalibrationData",
unique_int = randint(5000, 10000)
event_name = 'CC_test_' + uid + str(unique_int)
# 'CC_a0'
unique_num = randint(1000, 2000)
if data_type == 'scalar':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (scalar) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': None,
'eventStartTime': 1443614400000,
'value': 42.0027,
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
elif data_type == 'one_dimensional':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (one dimensional) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': None,
'eventStartTime': 1443614400000,
'value': [-1.493703E-4, 2.0],
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
elif data_type == 'two_dimensional':
if debug: print '\n Create new %s...' % event_type
input = {
'assetUid': uid,
'comments': 'Test entry (two dimensional) ' + str(unique_num),
'eventType': 'CALIBRATION_DATA',
'eventName': None,
'eventStartTime': 1443614400000,
'value': [[-1.493703E-4, -2.0], [31.0, 32]],
'notes': 'Create calibration at ' + str(datetime.datetime.now()),
'dataSource': 'Test data ' + str(datetime.datetime.now()),
'eventStopTime': None,
'tense': 'UNKNOWN'
}
string_input = get_event_input_as_string(input, debug)
self.assertTrue(input is not None)
return string_input
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Get data to update CALIBRATION_DATA event types.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def calibration_data_for_update(self, event_type, uid, event_id, last_modified, event_name):
debug = self.debug
try:
if debug: print '\n debug -- calibration_data_for_update -- event_type/uid/eventId: %s/%s/%d' % (event_type, uid, event_id)
self.assertTrue(event_type is not None)
self.assertEquals(event_type, 'CALIBRATION_DATA')
self.assertTrue(uid is not None)
self.assertTrue(event_id is not None)
self.assertTrue(isinstance(event_id, int))
self.assertTrue(event_id > 0)
self.assertTrue(last_modified is not None)
self.assertTrue(last_modified > 0)
self.assertTrue(event_name is not None)
input = {}
#eventName = 'CC_a0'
unique_num = randint(1000, 2000)
eventStartTime = 1453309000000 + 10000
eventStopTime = eventStartTime + (unique_num*2)
input = {
"@class": ".XCalibrationData",
"value": [-1.493703E-4, 3.0],
"comments": "Updated test entry.",
"eventId": event_id,
"assetUid": uid,
"eventType": event_type,
"eventName": event_name,
"eventStartTime": eventStartTime,
'eventStopTime': eventStopTime,
'lastModifiedTimestamp': last_modified,
'dataSource': 'Automated test data ' + str(datetime.datetime.now()),
'notes': 'Update calibration at ' + str(datetime.datetime.now()),
'tense': 'UNKNOWN'
}
"""
{
'eventStartTime': '1443614400000',
'notes': 'Create calibration at 2016-09-15 14:05:42.756471',
'value': '[-0.0001493703, 2.0]',
'eventName': 'CC_test_A01247',
'tense': 'UNKNOWN',
'comments': 'Test entry (scalar) 1458',
'eventType': 'CALIBRATION_DATA',
'eventStopTime': None,
'assetUid': 'A01247',
'dataSource': 'Test data 2016-09-15 14:05:42.756494'
}
"""
# Make all value in dictionary type string (simulate jgrid output).
string_input = get_event_input_as_unicode(input, debug)
self.assertTrue(input is not None)
return string_input
except Exception as err:
message = str(err)
self.assertEquals('Exception calibration_data_for_update: ', message)
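The commented sample above shows the shape get_event_input_as_unicode produces: every value arrives as text, the way a jgrid submission would post it, while None survives untouched. A rough sketch of such a stringifier, assuming that behavior (the real helper lives in tests/common_tools):

def stringify_event_input(event):
    # Approximate the jgrid round-trip for this Python 2 suite: values become unicode, None stays None.
    return dict((k, v if v is None else unicode(v)) for k, v in event.items())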
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Create calibration event which already exists.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def negative_create_calibration_event(self, _event_type, uid, input, event_name):
"""
Create calibration event (which already exists.)
"""
debug = self.debug
verbose = self.verbose
headers = self.get_api_headers('admin', 'test')
# Define variables specific to event type
if verbose: print '\n\t\tNegative test creation of duplicate CALIBRATION_DATA event...'
target_event_type = _event_type
key = target_event_type
# Get event types
test_url = url_for('uframe.get_event_type')
response = self.client.get(test_url, headers=headers)
self.assertEquals(response.status_code, 200)
results = json.loads(response.data)
self.assertTrue('event_types' in results)
if debug: print '\n -- len(results): ', len(results)
self.assertTrue(results is not None)
self.assertTrue(isinstance(results, dict))
# Verify there are event_types in a list
events_by_type = results['event_types']
self.assertTrue(events_by_type is not None)
self.assertTrue(isinstance(events_by_type, list))
if debug: print '\n -- len(events_by_type): ', len(events_by_type)
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# (Negative) Create Event.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if debug:
print '\n Create %s event' % key
print '\n debug -- Create request_data(%d): ' % len(input)
dump_dict(input, debug)
url = url_for('uframe.create_event')
if debug: print '\n create url: ', url
data = json.dumps(input)
response = self.client.post(url, headers=headers, data=data)
if debug: print '\n Create calibration event -- response.status_code: ', response.status_code
self.assertEquals(response.status_code, 400)
event_id, last_modified = self.get_calibration_event_id_last_modified(uid, event_name)
self.assertTrue(event_id is not None)
self.assertTrue(last_modified is not None)
return event_id, last_modified
def get_calibration_event_id_last_modified(self, uid, event_name):
""" Get calibration event id and lastModified from asset using calibration_data event name.
"""
debug = self.debug
self.assertTrue(uid is not None)
self.assertTrue(event_name is not None)
if debug:
print '\n get_calibration_event_id_last_modified: uid: ', uid
print '\n get_calibration_event_id_last_modified: event_name: ', event_name
error_text = ' uid: %s, event name: %s' % (uid, event_name)
try:
# Get asset by uid, retrieve eventId and name from calibration event.
event_id = None
last_modified = None
try:
event_id, last_modified = get_calibration_event_id(uid, event_name)
except Exception as err:
self.assertEquals('Failed to get event id for calibration', event_name + ' ' + str(err))
if debug: print '\n Calibration event_id: %r, uid: %s, last_modified: %d' % (event_id, uid, last_modified)
self.assertTrue(event_id is not None)
self.assertTrue(isinstance(event_id, int))
self.assertTrue(event_id > 0)
self.assertTrue(last_modified is not None)
return event_id, last_modified
except Exception as err:
message = 'Failed to get event id for calibration event for ' + error_text + '.' + str(err)
self.assertEquals('Failed to get event id for calibration event for ', error_text)
raise Exception(message)
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Get assets to assist in testing events.
#- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def get_some_assets(self):
""" Get assets to assist in testing events.
"""
headers = self.get_api_headers('admin', 'test')
try:
# Get assets.
url = url_for('uframe.get_assets')
response = self.client.get(url, headers=headers)
self.assertEquals(response.status_code, 200)
results = json.loads(response.data)
self.assertTrue('assets' in results)
self.assertTrue(results is not None)
self.assertTrue(isinstance(results, dict))
# Verify there are assets in list.
assets = results['assets']
self.assertTrue(assets is not None)
self.assertTrue(isinstance(assets, list))
return assets
except Exception as err:
message = str(err)
self.assertEquals('Exception get_some_assets: ', message)
return None
# Get id, uid and rd.
def get_id_uid_rd(self, asset):
""" For an asset, get id, uid and rd.
"""
debug = self.debug
try:
# Get asset_id
self.assertTrue('id' in asset)
asset_id = asset['id']
self.assertTrue(asset_id is not None)
self.assertTrue(asset_id)
if debug: print '\n Have asset_id: %d' % asset_id
# Get asset uid
self.assertTrue('uid' in asset)
asset_uid = asset['uid']
self.assertTrue(asset_uid is not None)
self.assertTrue(asset_uid)
if debug: print '\n Have asset_uid: %s ' % asset_uid
# Get reference designator
self.assertTrue('ref_des' in asset)
rd = asset['ref_des']
return asset_id, asset_uid, rd
except Exception:
print '\n exception getting asset id, uid and rd.'
return None, None, None | 42.326637 | 135 | 0.531756 | 5,878 | 56,887 | 4.970398 | 0.064308 | 0.064211 | 0.021564 | 0.023138 | 0.8482 | 0.829854 | 0.796002 | 0.782071 | 0.770023 | 0.754587 | 0 | 0.031337 | 0.345369 | 56,887 | 1,344 | 136 | 42.326637 | 0.753195 | 0.109814 | 0 | 0.718795 | 0 | 0 | 0.165677 | 0.014324 | 0 | 0 | 0 | 0.000744 | 0.205165 | 0 | null | null | 0.005739 | 0.020086 | null | null | 0.159254 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
902dfe91656356615bd0080f55e6a827a98a4244 | 6,853 | py | Python | contentcuration/contentcuration/tests/test_included_languages_migration.py | Tlazypanda/studio | cd1c2f169c705027cdd808cbbcae907d0a9b21d2 | [
"MIT"
] | 1 | 2019-03-30T18:14:25.000Z | 2019-03-30T18:14:25.000Z | contentcuration/contentcuration/tests/test_included_languages_migration.py | Tlazypanda/studio | cd1c2f169c705027cdd808cbbcae907d0a9b21d2 | [
"MIT"
] | 2 | 2019-04-06T07:06:08.000Z | 2019-04-08T23:33:53.000Z | contentcuration/contentcuration/tests/test_included_languages_migration.py | Tlazypanda/studio | cd1c2f169c705027cdd808cbbcae907d0a9b21d2 | [
"MIT"
] | 1 | 2020-10-20T05:21:56.000Z | 2020-10-20T05:21:56.000Z | import datetime
from le_utils.constants import content_kinds
from .base import MigrationTestCase
included_languages_deploy_date = datetime.datetime(2017, 11, 30)
included_languages_should_up_date = datetime.datetime(2016, 11, 30)
class TestForwardIncludedLanguagesMigrationPublishedChannel(MigrationTestCase):
migrate_from = '0099_auto_20190715_2201'
migrate_to = '0100_calculate_included_languages'
app = 'contentcuration'
def setUpBeforeMigration(self, apps):
Channel = apps.get_model(self.app, 'Channel')
self.channel = Channel.objects.create(last_published=included_languages_should_up_date)
self.unpublished_channel = Channel.objects.create()
ContentKind = apps.get_model(self.app, 'ContentKind')
topic, _created = ContentKind.objects.get_or_create(kind=content_kinds.TOPIC)
ContentNode = apps.get_model(self.app, 'ContentNode')
self.channel.main_tree = ContentNode.objects.create(lft=1, rght=4, tree_id=3, level=0, kind=topic)
self.channel.save()
Language = apps.get_model(self.app, 'Language')
self.language = Language.objects.create(id="tes_t", lang_code="tes", lang_subcode="t")
ContentNode.objects.create(tree_id=self.channel.main_tree.tree_id, language=self.language, lft=2, rght=3, level=1, kind=topic, published=True)
unpublished_language = Language.objects.create(id="nes_t", lang_code="nes", lang_subcode="t")
ContentNode.objects.create(tree_id=self.channel.main_tree.tree_id, language=unpublished_language, lft=2, rght=3, level=1, kind=topic, published=False)
def test_include_language(self):
Channel = self.apps.get_model(self.app, 'Channel')
included_languages = Channel.objects.filter(last_published__isnull=False).first().included_languages
self.assertEqual(included_languages.count(), 1)
self.assertEqual(included_languages.first().id, self.language.id)
self.assertEqual(included_languages.first().lang_code, self.language.lang_code)
self.assertEqual(included_languages.first().lang_subcode, self.language.lang_subcode)
class TestForwardIncludedLanguagesMigrationNewlyPublishedChannel(MigrationTestCase):
migrate_from = '0099_auto_20190715_2201'
migrate_to = '0100_calculate_included_languages'
app = 'contentcuration'
def setUpBeforeMigration(self, apps):
Channel = apps.get_model(self.app, 'Channel')
self.channel = Channel.objects.create(last_published=included_languages_deploy_date)
self.unpublished_channel = Channel.objects.create()
ContentKind = apps.get_model(self.app, 'ContentKind')
topic, _created = ContentKind.objects.get_or_create(kind=content_kinds.TOPIC)
ContentNode = apps.get_model(self.app, 'ContentNode')
self.channel.main_tree = ContentNode.objects.create(lft=1, rght=4, tree_id=3, level=0, kind=topic)
self.channel.save()
Language = apps.get_model(self.app, 'Language')
self.language = Language.objects.create(id="tes_t", lang_code="tes", lang_subcode="t")
ContentNode.objects.create(tree_id=self.channel.main_tree.tree_id, language=self.language, lft=2, rght=3, level=1, kind=topic, published=True)
unpublished_language = Language.objects.create(id="nes_t", lang_code="nes", lang_subcode="t")
ContentNode.objects.create(tree_id=self.channel.main_tree.tree_id, language=unpublished_language, lft=2, rght=3, level=1, kind=topic, published=False)
def test_include_language_no_changes(self):
Channel = self.apps.get_model(self.app, 'Channel')
included_languages = Channel.objects.filter(last_published__isnull=True).first().included_languages
self.assertEqual(included_languages.count(), 0)
class TestForwardIncludedLanguagesMigrationUnpublishedChannel(MigrationTestCase):
migrate_from = '0099_auto_20190715_2201'
migrate_to = '0100_calculate_included_languages'
app = 'contentcuration'
def setUpBeforeMigration(self, apps):
Channel = apps.get_model(self.app, 'Channel')
self.unpublished_channel = Channel.objects.create()
ContentKind = apps.get_model(self.app, 'ContentKind')
topic, _created = ContentKind.objects.get_or_create(kind=content_kinds.TOPIC)
ContentNode = apps.get_model(self.app, 'ContentNode')
self.unpublished_channel.main_tree = ContentNode.objects.create(lft=1, rght=4, tree_id=3, level=0, kind=topic)
self.unpublished_channel.save()
Language = apps.get_model(self.app, 'Language')
self.language = Language.objects.create(id="tes_t", lang_code="tes", lang_subcode="t")
ContentNode.objects.create(
tree_id=self.unpublished_channel.main_tree.tree_id,
language=self.language, lft=2, rght=3, level=1, kind=topic, published=True)
def test_unpublished_no_include_language(self):
Channel = self.apps.get_model(self.app, 'Channel')
included_languages = Channel.objects.filter(last_published__isnull=True).first().included_languages
self.assertEqual(included_languages.count(), 0)
class TestForwardIncludedLanguagesMigrationFile(MigrationTestCase):
migrate_from = '0099_auto_20190715_2201'
migrate_to = '0100_calculate_included_languages'
app = 'contentcuration'
def setUpBeforeMigration(self, apps):
Channel = apps.get_model(self.app, 'Channel')
self.channel = Channel.objects.create(last_published=included_languages_should_up_date)
self.unpublished_channel = Channel.objects.create()
ContentKind = apps.get_model(self.app, 'ContentKind')
topic, _created = ContentKind.objects.get_or_create(kind=content_kinds.TOPIC)
ContentNode = apps.get_model(self.app, 'ContentNode')
self.channel.main_tree = ContentNode.objects.create(lft=1, rght=4, tree_id=3, level=0, kind=topic)
self.channel.save()
Language = apps.get_model(self.app, 'Language')
self.language = Language.objects.create(id="tes_t", lang_code="tes", lang_subcode="t")
published_node = ContentNode.objects.create(tree_id=self.channel.main_tree.tree_id, lft=2, rght=3, level=1, kind=topic, published=True)
File = apps.get_model(self.app, 'File')
File.objects.create(contentnode=published_node, language=self.language)
def test_include_language(self):
Channel = self.apps.get_model(self.app, 'Channel')
included_languages = Channel.objects.filter(last_published__isnull=False).first().included_languages
self.assertEqual(included_languages.count(), 1)
self.assertEqual(included_languages.first().id, self.language.id)
self.assertEqual(included_languages.first().lang_code, self.language.lang_code)
self.assertEqual(included_languages.first().lang_subcode, self.language.lang_subcode)
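All four migration test cases above repeat the same channel, topic and tree setup. A sketch of a shared mixin they could use; the names mirror the code above and the sketch is untested:

class IncludedLanguagesSetupMixin(object):
    def make_channel_tree(self, apps, last_published=None):
        # Create a channel with a topic main_tree and the 'tes_t' test language.
        Channel = apps.get_model(self.app, 'Channel')
        channel = Channel.objects.create(last_published=last_published)
        ContentKind = apps.get_model(self.app, 'ContentKind')
        topic, _created = ContentKind.objects.get_or_create(kind=content_kinds.TOPIC)
        ContentNode = apps.get_model(self.app, 'ContentNode')
        channel.main_tree = ContentNode.objects.create(lft=1, rght=4, tree_id=3, level=0, kind=topic)
        channel.save()
        Language = apps.get_model(self.app, 'Language')
        language = Language.objects.create(id='tes_t', lang_code='tes', lang_subcode='t')
        return channel, topic, language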
| 56.172131 | 158 | 0.73953 | 851 | 6,853 | 5.721504 | 0.099882 | 0.09427 | 0.051756 | 0.069008 | 0.896693 | 0.882317 | 0.882317 | 0.882317 | 0.882317 | 0.882317 | 0 | 0.022926 | 0.147089 | 6,853 | 121 | 159 | 56.636364 | 0.810094 | 0 | 0 | 0.795918 | 0 | 0 | 0.075587 | 0.032686 | 0 | 0 | 0 | 0 | 0.102041 | 1 | 0.081633 | false | 0 | 0.030612 | 0 | 0.27551 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8408bc183478a478675977fddf7e2346ad116db3 | 120,176 | py | Python | ctypesgen/parser/parsetab.py | bingqingsuimeng/ctypesgen | 5ddc80e89abe8b02591a9baa1d857ce1f55bdee6 | [
"BSD-2-Clause"
] | null | null | null | ctypesgen/parser/parsetab.py | bingqingsuimeng/ctypesgen | 5ddc80e89abe8b02591a9baa1d857ce1f55bdee6 | [
"BSD-2-Clause"
] | null | null | null | ctypesgen/parser/parsetab.py | bingqingsuimeng/ctypesgen | 5ddc80e89abe8b02591a9baa1d857ce1f55bdee6 | [
"BSD-2-Clause"
] | null | null | null |
# new_parsetab.py
# This file is automatically generated. Do not edit.
_lr_method = 'LALR'
_lr_signature = b'\xf3\x9d\x04\xbe\x81j\xb2x\x12\xa3\x9f)\t\x10#T'
_lr_action_items = {'PP_DEFINE':([0,1,2,3,4,5,6,7,8,15,22,57,78,85,87,152,162,164,180,191,194,196,267,268,308,366,378,417,418,424,425,451,],[-1,12,-2,-3,-255,-256,-261,-262,-263,-275,-106,-260,-258,-259,-231,-264,-271,-272,-257,-232,-233,-235,-265,-266,-234,-267,-276,-268,-269,-277,-278,-270,]),'PP_UNDEFINE':([0,1,2,3,4,5,6,7,8,15,22,57,78,85,87,152,162,164,180,191,194,196,267,268,308,366,378,417,418,424,425,451,],[-1,14,-2,-3,-255,-256,-261,-262,-263,-275,-106,-260,-258,-259,-231,-264,-271,-272,-257,-232,-233,-235,-265,-266,-234,-267,-276,-268,-269,-277,-278,-270,]),'PRAGMA':([0,1,2,3,4,5,6,7,8,15,22,57,78,85,87,152,162,164,180,191,194,196,267,268,308,366,378,417,418,424,425,451,],[-1,19,-2,-3,-255,-256,-261,-262,-263,-275,-106,-260,-258,-259,-231,-264,-271,-272,-257,-232,-233,-235,-265,-266,-234,-267,-276,-268,-269,-277,-278,-270,]),'*':([0,1,2,3,4,5,6,7,8,10,13,15,20,22,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,57,58,59,60,61,64,66,67,70,73,75,76,78,80,81,84,85,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,115,117,118,119,120,121,122,123,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,144,150,152,155,158,159,160,162,164,165,175,178,179,180,181,183,185,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,236,237,238,239,240,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,267,268,270,273,274,275,276,277,278,291,298,302,304,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,339,340,350,351,352,353,354,355,358,360,365,366,368,377,378,385,391,395,396,397,398,399,400,401,402,403,404,405,406,407,417,418,423,424,425,429,430,432,434,436,437,438,439,441,446,451,452,462,464,465,473,474,475,476,478,485,487,],[-1,20,-2,-3,-255,-256,-261,-262,-263,20,-143,-275,20,-106,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-260,-236,127,20,127,20,-143,-144,127,20,-190,-191,-258,20,-110,-166,-259,-237,-231,127,127,-238,-222,-223,-224,-225,-226,-227,-240,-4,127,127,127,127,-58,-45,127,127,127,-60,127,-51,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,-11,263,-264,20,-58,127,-4,-271,-272,-109,20,-192,-193,-257,127,-139,-140,-232,-239,-233,127,-235,-242,127,127,-241,127,127,127,127,127,-251,-252,-253,127,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,127,127,127,127,-28,-29,-46,127,-47,-48,-49,127,127,127,-14,-15,-16,127,127,-17,127,127,127,127,127,127,127,127,127,127,127,127,127,-265,-266,-143,-157,-158,-143,127,-143,127,-143,127,20,-164,127,-234,-228,-5,-7,127,-230,-21,127,127,-250,-254,-24,-26,-27,-8,-6,263,263,-61,-62,-63,-156,127,20,-155,-267,127,20,-276,-138,127,-165,-229,127,-59,127,127,127,127,127,-23,-25,127,-50,-268,-269,127,-277,-278,127,-136,20,127,-137,-243,-245,-246,127,-34,-270,-145,127,-248,127,-244,-247,-249,127,-35,-36,-37,]),'IDENTIFIER':([0,1,2,3,4,5,6,7,8,10,13,15,17,20,22,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,57,58,59,60,61,64,66,67,70,71,73,74,75,76,78,80,81,82,83,84,85,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,109,112,118,119,120,122,126,127,128,129,130,131,152,159,162,164,165,175,176,177,178,179,180,181,183,185,190,191,192,194,195,196,197,198,199,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,237,241,242,245,249,250,251,253,254,255,256,257,258,259,260,261,262,263,264,265,267,268,270,273
,274,276,277,278,282,286,290,291,298,302,304,305,306,308,309,312,313,317,321,322,323,355,358,365,366,368,377,378,381,385,391,395,396,397,399,400,401,402,403,406,417,418,422,423,424,425,429,430,432,434,436,437,438,439,441,451,452,456,462,464,465,473,474,475,476,],[-1,21,-2,-3,-255,-256,-261,-262,-263,21,-143,-275,21,-186,-106,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,84,-141,-142,-260,-236,99,21,160,21,-143,-144,160,173,-187,-188,-190,-191,-258,21,-110,183,189,-166,-259,-237,-231,99,99,-238,-222,-223,-224,-225,-226,-227,-240,160,160,99,211,160,160,160,160,160,-52,-53,-54,-55,-56,-57,-264,160,-271,-272,-109,21,295,-189,-192,-193,-257,160,-139,-140,189,-232,-239,-233,99,-235,-242,99,160,99,-241,160,160,160,160,160,-251,-252,-253,160,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,160,160,160,160,332,333,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,160,-265,-266,-143,-157,-158,160,-143,160,372,373,21,-143,160,21,-164,189,160,-234,-228,99,-230,160,160,-250,-254,-156,160,-155,-267,160,21,-276,426,-138,160,-165,-229,99,99,99,160,160,160,160,-268,-269,372,160,-277,-278,160,-136,21,160,-137,-243,-245,-246,99,-270,-145,471,99,-248,99,-244,-247,-249,160,]),'(':([0,1,2,3,4,5,6,7,8,10,13,15,17,18,20,21,22,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,57,58,59,60,61,62,64,66,67,68,69,70,72,73,74,75,76,78,80,81,84,85,86,87,88,89,91,92,93,94,95,96,97,98,99,100,103,104,105,106,107,108,112,117,118,119,120,122,125,126,127,128,129,130,131,132,133,135,136,137,139,140,141,142,144,152,155,159,160,162,164,165,166,168,170,175,177,178,179,180,181,183,185,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,237,241,242,243,244,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,267,268,270,271,272,273,274,275,276,277,278,281,283,284,285,290,291,298,302,304,306,308,309,310,311,312,313,316,317,320,321,322,323,328,332,333,339,340,355,356,358,360,361,363,365,366,368,372,377,378,385,391,395,396,397,399,400,401,402,403,404,405,406,410,412,415,416,417,418,423,424,425,429,430,432,434,436,437,438,439,441,448,449,450,451,452,462,464,465,466,473,474,475,476,],[-1,13,-2,-3,-255,-256,-261,-262,-263,13,-143,-275,13,71,-186,-179,-106,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-260,-236,104,13,159,161,13,-143,-144,166,71,159,176,-187,-188,-190,-191,-258,13,-110,-166,-259,-237,-231,104,104,-238,-222,-223,-224,-225,-226,-227,-240,-4,159,204,104,207,208,104,210,104,231,237,237,159,241,-22,-52,-53,-54,-55,-56,-57,-43,-12,-18,-19,-20,-16,-9,-10,-13,-11,-264,275,104,-4,-271,-272,-109,282,-182,-185,291,-189,-192,-193,-257,104,-139,-140,-232,-239,-233,104,-235,-242,104,104,-241,104,104,104,104,104,-251,-252,-253,104,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,104,159,104,104,-28,-29,104,104,159,336,-44,159,-14,-15,-16,159,159,-17,159,159,159,159,159,159,159,159,159,159,159,159,159,-265,-266,-143,275,359,-157,-158,-143,159,-143,159,-180,-181,-183,-184,291,-143,104,13,-164,159,-234,-228,-5,-7,104,-230,-21,159,401,104,-250,-254,-24,-26,-27,-8,-6,-156,359,159,275,-213,-209,-155,-267,159,423,291,-276,-138,159,-165,-229,104,104,104,104,104,159,-23,-25,104,-211,-215,-214,-210,-268,-269,104,-277,-278,104,-136,13,159,-137,-243,-245,-246,104,-212,-216,-208,-270,-145,104,-248,104,476,-244,-247,-249
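# Note on the encoding used throughout this table (standard PLY parsetab
# convention, which this generated module appears to follow): _lr_action_items
# maps each terminal symbol to a pair of parallel lists -- first the parser
# states in which that terminal triggers an action, then the action taken in
# each of those states. A positive entry N means "shift and go to state N",
# a negative entry -N means "reduce by grammar rule N", and 0 means "accept"
# (visible in the '$end' entry below, where state 1 maps to action 0).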
,104,]),'$end':([0,1,2,3,4,5,6,7,8,15,22,57,78,85,87,152,162,164,180,191,194,196,267,268,308,366,378,417,418,424,425,451,],[-1,0,-2,-3,-255,-256,-261,-262,-263,-275,-106,-260,-258,-259,-231,-264,-271,-272,-257,-232,-233,-235,-265,-266,-234,-267,-276,-268,-269,-277,-278,-270,]),'__ATTRIBUTE__':([0,1,2,3,4,5,6,7,8,11,13,15,16,18,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,54,55,56,57,58,59,61,64,66,67,69,71,73,75,76,77,78,79,81,82,84,85,86,87,89,104,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,151,152,156,157,158,159,160,162,164,165,168,170,178,179,180,183,184,185,191,194,196,231,234,235,236,238,239,240,241,246,247,248,252,266,267,268,270,272,273,274,275,277,281,283,284,285,287,288,291,299,300,301,303,304,308,310,311,316,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,359,360,361,363,365,366,376,377,378,384,385,386,388,390,392,395,398,404,405,406,407,409,410,412,415,416,417,418,423,424,425,430,431,433,435,436,443,446,448,449,450,451,452,460,461,472,478,485,487,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,-143,-143,-275,68,-178,68,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-141,-142,-143,-260,-236,-143,-143,68,-143,-144,-177,-143,68,-190,-191,-143,-258,68,68,68,-166,-259,-237,-231,-143,-143,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-143,-264,-105,68,-58,-143,-4,-271,-272,68,-182,-185,-192,-193,-257,-139,-143,-140,-232,-233,-235,-143,-28,-29,-46,-47,-48,-49,-143,-14,-15,-16,-17,68,-265,-266,-143,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-143,-151,-143,-164,-234,-5,-7,-21,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,68,-143,68,-143,68,-213,-209,68,-267,68,68,-276,-143,-138,-152,-154,-143,-143,-165,-59,-23,-25,-143,-50,68,-211,-215,-214,-210,-268,-269,-143,-277,-278,-136,-153,68,-143,-137,-89,-34,-212,-216,-208,-270,-145,-143,68,68,-35,-36,-37,]),'TYPEDEF':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,66,67,69,71,77,78,79,81,84,85,86,87,89,152,162,164,165,168,170,175,180,183,185,191,194,196,267,268,275,281,283,284,285,287,291,304,308,359,360,366,377,378,385,395,417,418,424,425,430,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,30,-143,-275,30,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,30,-143,-144,-177,-143,-143,-258,30,-110,-166,-259,-237,-231,-143,-264,-271,-272,-109,-182,-185,30,-257,-139,-140,-232,-233,-235,-265,-266,-143,-180,-181,-183,-184,-143,-143,-164,-234,-143,30,-267,30,-276,-138,-165,-268,-269,-277,-278,-136,-137,-270,-145,]),'EXTERN':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,66,67,69,71,77,78,79,81,84,85,86,87,89,152,162,164,165,168,170,175,180,183,185,191,194,196,267,268,275,281,283,284,285,287,291,304,308,359,360,366,377,378,385,395,417,418,424,425,430,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,31,-143,-275,31,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,31,-143,-144,-
177,-143,-143,-258,31,-110,-166,-259,-237,-231,-143,-264,-271,-272,-109,-182,-185,31,-257,-139,-140,-232,-233,-235,-265,-266,-143,-180,-181,-183,-184,-143,-143,-164,-234,-143,31,-267,31,-276,-138,-165,-268,-269,-277,-278,-136,-137,-270,-145,]),'STATIC':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,66,67,69,71,77,78,79,81,84,85,86,87,89,152,162,164,165,168,170,175,180,183,185,191,194,196,267,268,275,281,283,284,285,287,291,304,308,359,360,366,377,378,385,395,417,418,424,425,430,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,32,-143,-275,32,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,32,-143,-144,-177,-143,-143,-258,32,-110,-166,-259,-237,-231,-143,-264,-271,-272,-109,-182,-185,32,-257,-139,-140,-232,-233,-235,-265,-266,-143,-180,-181,-183,-184,-143,-143,-164,-234,-143,32,-267,32,-276,-138,-165,-268,-269,-277,-278,-136,-137,-270,-145,]),'AUTO':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,66,67,69,71,77,78,79,81,84,85,86,87,89,152,162,164,165,168,170,175,180,183,185,191,194,196,267,268,275,281,283,284,285,287,291,304,308,359,360,366,377,378,385,395,417,418,424,425,430,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,33,-143,-275,33,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,33,-143,-144,-177,-143,-143,-258,33,-110,-166,-259,-237,-231,-143,-264,-271,-272,-109,-182,-185,33,-257,-139,-140,-232,-233,-235,-265,-266,-143,-180,-181,-183,-184,-143,-143,-164,-234,-143,33,-267,33,-276,-138,-165,-268,-269,-277,-278,-136,-137,-270,-145,]),'REGISTER':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,66,67,69,71,77,78,79,81,84,85,86,87,89,152,162,164,165,168,170,175,180,183,185,191,194,196,267,268,275,281,283,284,285,287,291,304,308,359,360,366,377,378,385,395,417,418,424,425,430,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,34,-143,-275,34,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,34,-143,-144,-177,-143,-143,-258,34,-110,-166,-259,-237,-231,-143,-264,-271,-272,-109,-182,-185,34,-257,-139,-140,-232,-233,-235,-265,-266,-143,-180,-181,-183,-184,-143,-143,-164,-234,-143,34,-267,34,-276,-138,-165,-268,-269,-277,-278,-136,-137,-270,-145,]),'VOID':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,35,-143,-275,35,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,35,-143,-143,-144,-177,-143,-143,-258,35,-110,-166,-259,-237,-231,-143,-143,-264,35,35,-143,-271,-272,-109,-182,-185,35,-257,-139,-143,-140,-232,-233,-235,-143,
-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,35,-143,-164,-234,-156,-143,35,-155,-267,35,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'_BOOL':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,36,-143,-275,36,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,36,-143,-143,-144,-177,-143,-143,-258,36,-110,-166,-259,-237,-231,-143,-143,-264,36,36,-143,-271,-272,-109,-182,-185,36,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,36,-143,-164,-234,-156,-143,36,-155,-267,36,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'CHAR':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,37,-143,-275,37,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,37,-143,-143,-144,-177,-143,-143,-258,37,-110,-166,-259,-237,-231,-143,-143,-264,37,37,-143,-271,-272,-109,-182,-185,37,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,37,-143,-164,-234,-156,-143,37,-155,-267,37,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'SHORT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,38,-143,-275,38,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,38,-143,-143,-144,-177,-143,-143,-258,38,-110,-166,-259,-237,-231,-143,-143,-264,38,38,-143,-271,-272,-109,-182,-185,38,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,38,-143,-164,-234,-156,-143,38,-155,-267,38,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'INT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85
,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,39,-143,-275,39,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,39,-143,-143,-144,-177,-143,-143,-258,39,-110,-166,-259,-237,-231,-143,-143,-264,39,39,-143,-271,-272,-109,-182,-185,39,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,39,-143,-164,-234,-156,-143,39,-155,-267,39,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'LONG':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,40,-143,-275,40,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,40,-143,-143,-144,-177,-143,-143,-258,40,-110,-166,-259,-237,-231,-143,-143,-264,40,40,-143,-271,-272,-109,-182,-185,40,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,40,-143,-164,-234,-156,-143,40,-155,-267,40,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'FLOAT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,41,-143,-275,41,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,41,-143,-143,-144,-177,-143,-143,-258,41,-110,-166,-259,-237,-231,-143,-143,-264,41,41,-143,-271,-272,-109,-182,-185,41,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,41,-143,-164,-234,-156,-143,41,-155,-267,41,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'DOUBLE':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,42,-143,-275,42,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-1
24,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,42,-143,-143,-144,-177,-143,-143,-258,42,-110,-166,-259,-237,-231,-143,-143,-264,42,42,-143,-271,-272,-109,-182,-185,42,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,42,-143,-164,-234,-156,-143,42,-155,-267,42,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'SIGNED':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,43,-143,-275,43,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,43,-143,-143,-144,-177,-143,-143,-258,43,-110,-166,-259,-237,-231,-143,-143,-264,43,43,-143,-271,-272,-109,-182,-185,43,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,43,-143,-164,-234,-156,-143,43,-155,-267,43,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'UNSIGNED':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,44,-143,-275,44,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,44,-143,-143,-144,-177,-143,-143,-258,44,-110,-166,-259,-237,-231,-143,-143,-264,44,44,-143,-271,-272,-109,-182,-185,44,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,44,-143,-164,-234,-156,-143,44,-155,-267,44,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'TYPE_NAME':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,54,55,56,57,58,59,60,61,66,67,69,71,77,78,79,81,82,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,47,-143,-275,47,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-141,-142,-143,-260,-236,-143,47,-143,-143,-144,-177,-143,-143,-258,47,-110,185,-166,-259,-237,-231,-143,-143,-264,47,47,-143,-271,-272,-109,-182,-185,47,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,47,-143,-164,-
234,-156,-143,47,-155,-267,47,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'CONST':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,73,75,76,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,178,179,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,48,-143,-275,48,-178,48,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,48,-143,-143,-144,-177,-143,48,-190,-191,-143,-258,48,-110,-166,-259,-237,-231,-143,-143,-264,48,48,-143,-271,-272,-109,-182,-185,48,-192,-193,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,48,-143,-164,-234,-156,-143,48,-155,-267,48,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'VOLATILE':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,73,75,76,77,78,79,81,84,85,86,87,89,104,132,152,155,157,159,162,164,165,168,170,175,178,179,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,49,-143,-275,49,-178,49,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,49,-143,-143,-144,-177,-143,49,-190,-191,-143,-258,49,-110,-166,-259,-237,-231,-143,-143,244,-264,49,49,-143,-271,-272,-109,-182,-185,49,-192,-193,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,49,-143,-164,-234,-156,-143,49,-155,-267,49,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'RESTRICT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,73,75,76,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,178,179,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,50,-143,-275,50,-178,50,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,50,-143,-143,-144,-177,-143,50,-190,-191,-143,-258,50,-110,-166,-259,-237,-231,-143,-143,-264,50,50,-143,-271,-272,-109,-182,-185,50,-192,-193,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,50,-143,-164,-234,-156,-143,50,-155,-267,50,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'__RESTRICT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,4
6,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,73,75,76,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,178,179,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,51,-143,-275,51,-178,51,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,51,-143,-143,-144,-177,-143,51,-190,-191,-143,-258,51,-110,-166,-259,-237,-231,-143,-143,-264,51,51,-143,-271,-272,-109,-182,-185,51,-192,-193,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,51,-143,-164,-234,-156,-143,51,-155,-267,51,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'ENUM':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,53,-143,-275,53,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,53,-143,-143,-144,-177,-143,-143,-258,53,-110,-166,-259,-237,-231,-143,-143,-264,53,53,-143,-271,-272,-109,-182,-185,53,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,53,-143,-164,-234,-156,-143,53,-155,-267,53,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'STRUCT':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-256,-261,-262,-263,54,-143,-275,54,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,54,-143,-143,-144,-177,-143,-143,-258,54,-110,-166,-259,-237,-231,-143,-143,-264,54,54,-143,-271,-272,-109,-182,-185,54,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,54,-143,-164,-234,-156,-143,54,-155,-267,54,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),'UNION':([0,1,2,3,4,5,6,7,8,10,11,15,16,18,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,56,57,58,59,60,61,66,67,69,71,77,78,79,81,84,85,86,87,89,104,152,155,157,159,162,164,165,168,170,175,180,183,184,185,191,194,196,231,241,267,268,270,273,274,275,277,281,283,284,285,287,291,299,300,301,302,303,304,308,355,359,360,365,366,377,378,384,385,386,388,392,395,406,417,418,423,424,425,430,431,436,451,452,],[-1,-143,-2,-3,-255,-
256,-261,-262,-263,55,-143,-275,55,-178,-179,-106,-143,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-260,-236,-143,55,-143,-143,-144,-177,-143,-143,-258,55,-110,-166,-259,-237,-231,-143,-143,-264,55,55,-143,-271,-272,-109,-182,-185,55,-257,-139,-143,-140,-232,-233,-235,-143,-143,-265,-266,-143,-157,-158,-143,-143,-180,-181,-183,-184,-143,-143,-143,-143,-151,55,-143,-164,-234,-156,-143,55,-155,-267,55,-276,-143,-138,-152,-154,-143,-165,-143,-268,-269,-143,-277,-278,-136,-153,-137,-270,-145,]),';':([9,10,18,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,58,59,60,66,67,69,79,81,84,86,87,88,89,90,91,92,93,94,95,96,97,98,99,102,107,110,111,112,113,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,151,156,158,160,165,168,170,182,183,185,191,192,193,194,195,196,197,198,201,202,210,211,212,213,214,215,234,235,236,238,239,240,246,247,248,252,266,270,273,274,277,281,283,284,285,296,297,302,304,308,309,310,311,312,313,314,316,321,322,323,324,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,365,385,387,389,390,395,396,397,398,399,400,404,405,407,428,430,433,435,436,437,438,439,441,443,446,452,457,459,460,461,462,463,464,465,472,473,474,475,478,485,487,],[22,-107,-178,-179,-106,-143,-108,-143,-114,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-236,98,-107,-143,-144,-177,-116,-110,-166,-237,-231,98,98,197,-238,-222,-223,-224,-225,-226,-227,-240,-4,202,98,212,213,214,-103,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-143,-105,-58,-4,-109,-182,-185,-115,-139,-140,-232,-239,197,-233,98,-235,-242,98,98,-241,98,322,-251,-252,-253,323,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-116,-143,-157,-158,-143,-180,-181,-183,-184,-117,-217,388,-164,-234,-228,-5,-7,98,-230,-104,-21,98,-250,-254,-91,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-156,-155,-138,431,-159,-143,-165,-229,98,-59,98,98,-23,-25,-50,-218,-136,-161,-143,-137,-243,-245,-246,98,-89,-34,-145,-219,-160,-143,-162,98,474,-248,98,-163,-244,-247,-249,-35,-36,-37,]),'{':([11,18,21,22,23,52,53,54,55,56,58,59,67,69,77,82,84,86,87,88,89,91,92,93,94,95,96,97,98,107,168,170,181,183,185,191,192,194,195,196,197,198,201,202,212,213,214,281,283,284,285,298,308,309,312,313,322,323,396,397,399,400,429,437,438,439,441,452,462,464,465,473,474,475,],[59,-178,-179,-106,59,-143,83,-141,-142,59,-236,59,-144,-177,59,184,190,-237,-231,59,59,-238,-222,-223,-224,-225,-226,-227,-240,59,-182,-185,298,299,303,-232,-239,-233,59,-235,-242,59,59,-241,-251,-252,-253,-180,-181,-183,-184,298,-234,-228,59,-230,-250,-254,-229,59,59,59,298,-243,-245,-246,59,-145,59,-248,59,-244,-247,-249,]),'PP_DEFINE_NAME':([12,14,],[61,65,]),'PP_DEFINE_MACRO_NAME':([12,],[62,]),'error':([12,22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,210,212,213,214,308,309,312,313,321,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[63,-106,-236,90,-237,-231,193,193,-238,-222,-223,-224,-225,-226,-227,-240,193,-232,-239,-233,193,-235,-242,193,193,-241,193,-251,-252,-253,-234,-228,193,-230,193,-250,-254,-229,193,193,193,-243,-245,-246,193,193,-248,193,-244,-247,-249,]),'=':([18,21,23,67,69,79,99,115,117,123,125,133,135,136,137,139,140,141,142,144,151,158,160,168,1
70,189,234,235,236,238,239,240,246,247,248,252,266,281,283,284,285,310,311,316,328,332,333,339,340,398,404,405,407,446,452,478,485,487,],[-178,-179,-143,-144,-177,181,-4,217,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-143,-58,-4,-182,-185,306,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,181,-180,-181,-183,-184,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-145,-35,-36,-37,]),',':([18,20,21,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,66,67,69,73,74,75,76,79,81,84,99,102,113,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,151,155,156,158,160,165,168,170,171,172,173,174,175,177,178,179,182,183,185,187,188,189,205,215,234,235,236,238,239,240,246,247,248,252,266,269,270,271,272,273,274,277,279,280,281,282,283,284,285,288,289,290,295,296,297,304,310,311,314,315,316,318,319,324,325,326,327,328,329,330,331,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,361,363,365,370,371,372,373,375,376,382,383,385,387,389,390,393,394,395,398,404,405,407,409,410,412,415,416,420,422,426,427,428,430,433,435,436,440,442,443,444,445,446,447,448,449,450,452,453,454,457,458,459,460,461,466,467,468,469,472,477,478,480,481,482,483,484,485,486,487,],[-178,-186,-179,-143,80,-143,-114,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-144,-177,-187,-188,-190,-191,-116,-110,-166,-4,203,-103,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-143,-203,-105,-58,-4,-109,-182,-185,286,287,-201,-196,-200,-189,-192,-193,-115,-139,-140,305,-169,-171,203,203,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-116,-204,-143,-205,-143,-157,-158,-143,369,-273,-180,-148,-181,-183,-184,-143,-199,-205,381,-117,-217,-164,-5,-7,-104,203,-21,203,203,-91,203,-87,203,-24,406,-30,-32,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-156,-143,-206,-213,-209,-155,422,-146,-149,-202,-197,-198,429,-220,-138,432,-159,-143,-170,-172,-165,-59,-23,-25,-50,-207,-211,-215,-214,-210,-274,-148,455,456,-218,-136,-161,-143,-137,203,203,-89,-31,-33,-34,-38,-212,-216,-208,-145,-147,406,-219,-221,-160,-143,-162,-41,479,-39,-150,-163,-38,-35,203,479,-40,-42,-38,-36,479,-37,]),')':([18,20,21,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,66,67,69,71,73,74,75,76,81,84,98,113,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,155,158,160,161,163,165,168,169,170,171,172,173,174,175,176,177,178,179,183,185,197,202,205,206,231,234,235,236,238,239,240,246,247,248,252,269,270,271,272,273,274,275,277,279,280,281,282,283,284,285,288,289,290,291,293,294,295,304,310,311,314,315,316,318,319,324,326,328,329,330,331,332,333,334,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,359,361,362,363,365,370,371,372,373,374,375,376,385,395,398,402,404,405,407,408,409,410,412,413,414,415,416,420,421,422,426,427,430,436,440,442,443,444,445,446,447,448,449,450,452,453,454,466,467,468,469,470,471,477,478,480,481,482,483,484,485,486,487,],[-178,-186,-179,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-144,-177,170,-187,-188,-190,-191,-110,-166,-240,-103,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-203,-58,-4,278,281,-109,-182,284,-185,285,-194,-
201,-196,-200,292,-189,-192,-193,-139,-140,-242,-241,316,317,328,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-204,-143,-205,-143,-157,-158,361,-143,368,-273,-180,-148,-181,-183,-184,-143,-199,-205,361,379,380,-279,-164,-5,-7,-104,397,-21,399,400,-91,-87,-24,405,-30,-32,-26,-27,407,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-156,-143,-206,412,-213,415,-209,-155,421,-146,-149,-202,-195,-197,-198,-138,-165,-59,441,-23,-25,-50,446,-207,-211,-215,449,450,-214,-210,-274,452,-148,-280,-283,-136,-137,463,465,-89,-31,-33,-34,-38,-212,-216,-208,-145,-147,469,-41,478,-39,-150,-281,-282,-38,-35,483,485,-40,-42,-38,-36,487,-37,]),':':([18,21,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,67,69,84,99,101,113,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,156,158,160,168,170,183,185,200,234,235,236,238,239,240,246,247,248,252,270,273,274,277,281,283,284,285,302,304,310,311,314,316,324,325,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,365,385,390,395,398,404,405,407,408,430,432,436,443,446,447,452,466,467,468,477,478,481,482,483,485,487,],[-178,-179,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-144,-177,-166,198,201,-103,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-105,-58,-4,-182,-185,-139,-140,312,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-143,-157,-158,-143,-180,-181,-183,-184,391,-164,-5,-7,-104,-21,-91,403,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-156,-155,-138,434,-165,-59,-23,-25,-50,447,-136,391,-137,-89,-34,-38,-145,-41,477,-39,-38,-35,484,-40,-42,-36,-37,]),'[':([18,20,21,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,66,67,69,73,74,75,76,81,84,99,117,125,133,135,136,137,139,140,141,142,144,155,160,165,168,170,175,177,178,179,183,185,234,235,246,247,248,252,270,271,272,273,274,275,277,281,283,284,285,290,291,304,310,311,316,328,332,333,339,340,355,356,360,361,363,365,377,385,395,404,405,410,412,415,416,430,436,448,449,450,452,],[70,-186,-179,-143,-111,-112,-113,-118,-119,-120,-121,-122,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,-143,-144,70,-187,-188,-190,-191,-110,-166,-4,230,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,276,-4,-109,-182,-185,276,-189,-192,-193,-139,-140,-28,-29,-14,-15,-16,-17,-143,276,358,-157,-158,-143,-143,-180,-181,-183,-184,276,-143,-164,-5,-7,-21,-24,-26,-27,-8,-6,-156,358,276,-213,-209,-155,276,-138,-165,-23,-25,-211,-215,-214,-210,-136,-137,-212,-216,-208,-145,]),'PRAGMA_PACK':([19,],[72,]),'PP_END_DEFINE':([20,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,61,63,65,67,73,74,75,76,84,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,153,154,155,156,158,160,177,178,179,183,185,234,235,236,238,239,240,246,247,248,252,269,270,271,272,273,274,277,278,304,310,311,316,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,361,363,365,367,368,385,395,398,404,405,407,409,410,412,415,416,419,430,436,443,446,448,449,450,452,478,485,487,],[-186,-123,-124,-125,-126,-127,-128,-129,-130,-131,-132,-133,-134,-135,-173,-174,-175,-176,152,162,164,-144,-187,-188,-190,-191,-166,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,267,268,-203,-105,-58,-4,-189,-192,-193,-139,-140,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-204,
-143,-205,-143,-157,-158,-143,366,-164,-5,-7,-21,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-156,-143,-206,-213,-209,-155,417,418,-138,-165,-59,-23,-25,-50,-207,-211,-215,-214,-210,451,-136,-137,-89,-34,-212,-216,-208,-145,-35,-36,-37,]),'}':([22,58,59,86,87,88,89,90,91,92,93,94,95,96,97,98,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,156,158,160,186,187,188,189,191,192,194,195,196,197,202,212,213,214,234,235,236,238,239,240,246,247,248,252,297,300,301,305,307,308,309,310,311,313,316,322,323,324,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,382,383,384,386,388,392,393,394,396,398,404,405,407,428,429,431,437,438,439,443,446,457,458,464,473,474,475,478,485,487,],[-106,-236,87,-237,-231,191,194,196,-238,-222,-223,-224,-225,-226,-227,-240,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-105,-58,-4,304,-167,-169,-171,-232,-239,-233,308,-235,-242,-241,-251,-252,-253,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-217,385,-151,-168,395,-234,-228,-5,-7,-230,-21,-250,-254,-91,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,428,-220,430,-152,-154,436,-170,-172,-229,-59,-23,-25,-50,-218,457,-153,-243,-245,-246,-89,-34,-219,-221,-248,-244,-247,-249,-35,-36,-37,]),'CASE':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,100,-237,-231,100,100,-238,-222,-223,-224,-225,-226,-227,-240,100,-232,-239,-233,100,-235,-242,100,100,-241,-251,-252,-253,-234,-228,100,-230,-250,-254,-229,100,100,100,-243,-245,-246,100,100,-248,100,-244,-247,-249,]),'DEFAULT':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,101,-237,-231,101,101,-238,-222,-223,-224,-225,-226,-227,-240,101,-232,-239,-233,101,-235,-242,101,101,-241,-251,-252,-253,-234,-228,101,-230,-250,-254,-229,101,101,101,-243,-245,-246,101,101,-248,101,-244,-247,-249,]),'IF':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,103,-237,-231,103,103,-238,-222,-223,-224,-225,-226,-227,-240,103,-232,-239,-233,103,-235,-242,103,103,-241,-251,-252,-253,-234,-228,103,-230,-250,-254,-229,103,103,103,-243,-245,-246,103,103,-248,103,-244,-247,-249,]),'SWITCH':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,105,-237,-231,105,105,-238,-222,-223,-224,-225,-226,-227,-240,105,-232,-239,-233,105,-235,-242,105,105,-241,-251,-252,-253,-234,-228,105,-230,-250,-254,-229,105,105,105,-243,-245,-246,105,105,-248,105,-244,-247,-249,]),'WHILE':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,209,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,106,-237,-231,106,106,-238,-222,-223,-224,-225,-226,-227,-240,106,-232,-239,-233,106,-235,-242,106,106,-241,320,-251,-252,-253,-234,-228,106,-230,-250,-254,-229,106,106,106,-243,-245,-246,106,106,-248,106,-244,-247,-249,]),'DO':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,
98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,107,-237,-231,107,107,-238,-222,-223,-224,-225,-226,-227,-240,107,-232,-239,-233,107,-235,-242,107,107,-241,-251,-252,-253,-234,-228,107,-230,-250,-254,-229,107,107,107,-243,-245,-246,107,107,-248,107,-244,-247,-249,]),'FOR':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,108,-237,-231,108,108,-238,-222,-223,-224,-225,-226,-227,-240,108,-232,-239,-233,108,-235,-242,108,108,-241,-251,-252,-253,-234,-228,108,-230,-250,-254,-229,108,108,108,-243,-245,-246,108,108,-248,108,-244,-247,-249,]),'GOTO':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,109,-237,-231,109,109,-238,-222,-223,-224,-225,-226,-227,-240,109,-232,-239,-233,109,-235,-242,109,109,-241,-251,-252,-253,-234,-228,109,-230,-250,-254,-229,109,109,109,-243,-245,-246,109,109,-248,109,-244,-247,-249,]),'CONTINUE':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,110,-237,-231,110,110,-238,-222,-223,-224,-225,-226,-227,-240,110,-232,-239,-233,110,-235,-242,110,110,-241,-251,-252,-253,-234,-228,110,-230,-250,-254,-229,110,110,110,-243,-245,-246,110,110,-248,110,-244,-247,-249,]),'BREAK':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,111,-237,-231,111,111,-238,-222,-223,-224,-225,-226,-227,-240,111,-232,-239,-233,111,-235,-242,111,111,-241,-251,-252,-253,-234,-228,111,-230,-250,-254,-229,111,111,111,-243,-245,-246,111,111,-248,111,-244,-247,-249,]),'RETURN':([22,58,59,86,87,88,89,91,92,93,94,95,96,97,98,107,191,192,194,195,196,197,198,201,202,212,213,214,308,309,312,313,322,323,396,397,399,400,437,438,439,441,462,464,465,473,474,475,],[-106,-236,112,-237,-231,112,112,-238,-222,-223,-224,-225,-226,-227,-240,112,-232,-239,-233,112,-235,-242,112,112,-241,-251,-252,-253,-234,-228,112,-230,-250,-254,-229,112,112,112,-243,-245,-246,112,112,-248,112,-244,-247,-249,]),'INC_OP':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,117,118,119,120,122,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,144,159,160,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,237,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,339,340,358,368,391,396,397,399,400,401,402,403,404,405,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,118,118,118,-237,-231,118,118,-238,-222,-223,-224,-225,-226,-227,-240,-4,118,118,118,118,234,118,118,118,118,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,-11,118,-4,118,-232,-239,-233,118,-235,-242,118,118,-241,118,118,118,118,118,-251,-252,-253,118,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,118,118,118,118,-28,-29,118,118,118,118,-14,-15,-16,118,118,-17,118,118,118,118,118,118,118,118,118,118,118,118,118,118,118,118,118,-234,-228,-5,-7,118,-230,-21,118,118,-2
50,-254,-24,-26,-27,-8,-6,118,118,118,-229,118,118,118,118,118,118,-23,-25,118,118,118,118,-243,-245,-246,118,118,-248,118,-244,-247,-249,118,]),'DEC_OP':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,117,118,119,120,122,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,144,159,160,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,237,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,339,340,358,368,391,396,397,399,400,401,402,403,404,405,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,119,119,119,-237,-231,119,119,-238,-222,-223,-224,-225,-226,-227,-240,-4,119,119,119,119,235,119,119,119,119,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,-11,119,-4,119,-232,-239,-233,119,-235,-242,119,119,-241,119,119,119,119,119,-251,-252,-253,119,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,119,119,119,119,-28,-29,119,119,119,119,-14,-15,-16,119,119,-17,119,119,119,119,119,119,119,119,119,119,119,119,119,119,119,119,119,-234,-228,-5,-7,119,-230,-21,119,119,-250,-254,-24,-26,-27,-8,-6,119,119,119,-229,119,119,119,119,119,119,-23,-25,119,119,119,119,-243,-245,-246,119,119,-248,119,-244,-247,-249,119,]),'SIZEOF':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,122,122,122,-237,-231,122,122,-238,-222,-223,-224,-225,-226,-227,-240,122,122,122,122,122,122,122,122,-52,-53,-54,-55,-56,-57,122,122,-232,-239,-233,122,-235,-242,122,122,-241,122,122,122,122,122,-251,-252,-253,122,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,122,-234,-228,122,-230,122,122,-250,-254,122,122,122,-229,122,122,122,122,122,122,122,122,122,122,-243,-245,-246,122,122,-248,122,-244,-247,-249,122,]),'&':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,115,117,118,119,120,121,122,123,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,143,144,146,147,148,149,150,158,159,160,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,236,237,238,239,240,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,358,368,391,396,397,398,399,400,401,402,403,404,405,406,407,423,429,434,437,438,439,441,446,462,464,465,473,474,475,476,478,485,487,],[-106,-236,126,126,126,-237,-231,126,126,-238,-222,-223,-224,-225,-226,-227,-240,-4,126,126,126,126,-58,-45,126,126,126,-60,126,-51,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,251,-11,-78,-75,-70,-67,-64,-58,126,-4,126,-232,-239,-233,126,-235,-242,126,126,-241,126,126,126,126,126,-251,-252,-253,126,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,126,126,126,126,-28,-29,-46,126,-47,-48,-49,126,126,126,-14,-15,
-16,126,126,-17,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,-234,-228,-5,-7,126,-230,-21,126,126,-250,-254,-24,-26,-27,251,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,126,126,126,-229,126,-59,126,126,126,126,126,-23,-25,126,-50,126,126,126,-243,-245,-246,126,-34,126,-248,126,-244,-247,-249,126,-35,-36,-37,]),'+':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,115,117,118,119,120,121,122,123,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,144,149,150,158,159,160,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,236,237,238,239,240,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,339,340,348,349,350,351,352,353,354,358,368,391,396,397,398,399,400,401,402,403,404,405,406,407,423,429,434,437,438,439,441,446,462,464,465,473,474,475,476,478,485,487,],[-106,-236,128,128,128,-237,-231,128,128,-238,-222,-223,-224,-225,-226,-227,-240,-4,128,128,128,128,-58,-45,128,128,128,-60,128,-51,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,-11,261,-64,-58,128,-4,128,-232,-239,-233,128,-235,-242,128,128,-241,128,128,128,128,128,-251,-252,-253,128,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,128,128,128,128,-28,-29,-46,128,-47,-48,-49,128,128,128,-14,-15,-16,128,128,-17,128,128,128,128,128,128,128,128,128,128,128,128,128,128,128,128,128,-234,-228,-5,-7,128,-230,-21,128,128,-250,-254,-24,-26,-27,-8,-6,261,261,-65,-66,-61,-62,-63,128,128,128,-229,128,-59,128,128,128,128,128,-23,-25,128,-50,128,128,128,-243,-245,-246,128,-34,128,-248,128,-244,-247,-249,128,-35,-36,-37,]),'-':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,99,100,104,107,112,115,117,118,119,120,121,122,123,125,126,127,128,129,130,131,133,135,136,137,139,140,141,142,144,149,150,158,159,160,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,234,235,236,237,238,239,240,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,310,311,312,313,316,317,321,322,323,328,332,333,339,340,348,349,350,351,352,353,354,358,368,391,396,397,398,399,400,401,402,403,404,405,406,407,423,429,434,437,438,439,441,446,462,464,465,473,474,475,476,478,485,487,],[-106,-236,129,129,129,-237,-231,129,129,-238,-222,-223,-224,-225,-226,-227,-240,-4,129,129,129,129,-58,-45,129,129,129,-60,129,-51,-22,-52,-53,-54,-55,-56,-57,-12,-18,-19,-20,-16,-9,-10,-13,-11,262,-64,-58,129,-4,129,-232,-239,-233,129,-235,-242,129,129,-241,129,129,129,129,129,-251,-252,-253,129,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,129,129,129,129,-28,-29,-46,129,-47,-48,-49,129,129,129,-14,-15,-16,129,129,-17,129,129,129,129,129,129,129,129,129,129,129,129,129,129,129,129,129,-234,-228,-5,-7,129,-230,-21,129,129,-250,-254,-24,-26,-27,-8,-6,262,262,-65,-66,-61,-62,-63,129,129,129,-229,129,-59,129,129,129,129,129,-23,-25,129,-50,129,129,129,-243,-245,-246,129,-34,129,-248,129,-244,-247,-249,129,-35,-36,-37,]),'~':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,32
2,323,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,130,130,130,-237,-231,130,130,-238,-222,-223,-224,-225,-226,-227,-240,130,130,130,130,130,130,130,130,-52,-53,-54,-55,-56,-57,130,130,-232,-239,-233,130,-235,-242,130,130,-241,130,130,130,130,130,-251,-252,-253,130,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,130,-234,-228,130,-230,130,130,-250,-254,130,130,130,-229,130,130,130,130,130,130,130,130,130,130,-243,-245,-246,130,130,-248,130,-244,-247,-249,130,]),'!':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,131,131,131,-237,-231,131,131,-238,-222,-223,-224,-225,-226,-227,-240,131,131,131,131,131,131,131,131,-52,-53,-54,-55,-56,-57,131,131,-232,-239,-233,131,-235,-242,131,131,-241,131,131,131,131,131,-251,-252,-253,131,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,131,-234,-228,131,-230,131,131,-250,-254,131,131,131,-229,131,131,131,131,131,131,131,131,131,131,-243,-245,-246,131,131,-248,131,-244,-247,-249,131,]),'__ASM__':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,132,132,132,-237,-231,132,132,-238,-222,-223,-224,-225,-226,-227,-240,132,132,132,132,132,132,132,132,-52,-53,-54,-55,-56,-57,132,132,-232,-239,-233,132,-235,-242,132,132,-241,132,132,132,132,132,-251,-252,-253,132,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,132,-234,-228,132,-230,132,132,-250,-254,132,132,132,-229,132,132,132,132,132,132,132,132,132,132,-243,-245,-246,132,132,-248,132,-244,-247,-249,132,]),'PP_MACRO_PARAM':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,133,137,139,142,144,145,159,161,181,191,192,194,195,196,197,198,199,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,369,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,139,139,139,-237,-231,139,139,-238,-222,-223,-224,-225,-226,-227,-240,139,139,139,139,139,139,139,139,-52,-53,-54,-55,-56,-57,-12,248,-16,-13,-11,252,139,280,139,-232,-239,-233,139,-235,-242,139,311,139,-241,139,139,139,139,139,-251,-252,-253,139,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,139,139,139,139,139,139,139,139,-14,-15,-16
,139,339,139,-17,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,-234,-228,139,-230,139,139,-250,-254,139,139,420,139,-229,139,139,139,139,139,139,139,139,139,139,-243,-245,-246,139,139,-248,139,-244,-247,-249,139,]),'CONSTANT':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,176,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,381,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,455,462,464,465,473,474,475,476,],[-106,-236,140,140,140,-237,-231,140,140,-238,-222,-223,-224,-225,-226,-227,-240,140,140,140,140,140,140,140,140,-52,-53,-54,-55,-56,-57,140,140,140,-232,-239,-233,140,-235,-242,140,140,-241,140,140,140,140,140,-251,-252,-253,140,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,140,-234,-228,140,-230,140,140,-250,-254,140,140,140,140,-229,140,140,140,140,140,140,140,140,140,140,-243,-245,-246,140,140,140,-248,140,-244,-247,-249,140,]),'CHARACTER_CONSTANT':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,159,176,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,381,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,455,462,464,465,473,474,475,476,],[-106,-236,141,141,141,-237,-231,141,141,-238,-222,-223,-224,-225,-226,-227,-240,141,141,141,141,141,141,141,141,-52,-53,-54,-55,-56,-57,141,141,141,-232,-239,-233,141,-235,-242,141,141,-241,141,141,141,141,141,-251,-252,-253,141,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,-234,-228,141,-230,141,141,-250,-254,141,141,141,141,-229,141,141,141,141,141,141,141,141,141,141,-243,-245,-246,141,141,141,-248,141,-244,-247,-249,141,]),'STRING_LITERAL':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,122,126,127,128,129,130,131,133,137,139,142,144,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,336,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,447,462,464,465,473,474,475,476,477,479,484,],[-106,-236,144,144,144,-237,-231,144,144,-238,-222,-223,-224,-225,-226,-227,-240,144,144,144,144,144,144,144,144,-52,-53,-54,-55,-56,-57,-12,144,-16,-13,-11,144,144,-232,-239,-233,144,-235,-242,144,144,-241,144,144,144,144,144,-251,-252,-253,144,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,144,144,144,144,144,144,144,144,-14,-15,-16,144,144,-17,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,-234,-228,144,-230,144,144,-250,-254,144,144,144,144,-229,144,144,144,144,144,144,144,144,144,144,-243,-245,-246,144,144,144,-248,144,-244,-247,-249,144,144,144,144,]),'PP_STRINGIFY':([22,58,59,61,70,86,87,88,89,91,92,93,94,95,96,97,98,100,104,107,112,118,119,120,1
22,126,127,128,129,130,131,133,137,139,142,144,159,181,191,192,194,195,196,197,198,201,202,203,204,207,208,210,212,213,214,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,237,241,242,245,246,247,248,249,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,308,309,312,313,317,321,322,323,358,368,391,396,397,399,400,401,402,403,406,423,429,434,437,438,439,441,462,464,465,473,474,475,476,],[-106,-236,145,145,145,-237,-231,145,145,-238,-222,-223,-224,-225,-226,-227,-240,145,145,145,145,145,145,145,145,-52,-53,-54,-55,-56,-57,-12,145,-16,-13,-11,145,145,-232,-239,-233,145,-235,-242,145,145,-241,145,145,145,145,145,-251,-252,-253,145,-92,-93,-94,-95,-96,-97,-98,-99,-100,-101,-102,145,145,145,145,145,145,145,145,-14,-15,-16,145,145,-17,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,-234,-228,145,-230,145,145,-250,-254,145,145,145,-229,145,145,145,145,145,145,145,145,145,145,-243,-245,-246,145,145,-248,145,-244,-247,-249,145,]),']':([70,113,114,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,156,158,160,167,234,235,236,238,239,240,246,247,248,252,276,310,311,314,316,324,326,327,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,358,364,398,404,405,407,411,443,446,478,485,487,],[168,-103,-90,-58,-88,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-105,-58,-4,283,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,363,-5,-7,-104,-21,-91,-87,404,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,410,416,-59,-23,-25,-50,448,-89,-34,-35,-36,-37,]),'ELSE':([87,92,93,94,95,96,97,98,191,194,196,197,202,212,213,214,308,309,313,322,323,396,437,438,439,464,473,474,475,],[-231,-222,-223,-224,-225,-226,-227,-240,-232,-233,-235,-242,-241,-251,-252,-253,-234,-228,-230,-250,-254,-229,462,-245,-246,-248,-244,-247,-249,]),'PERIOD':([99,117,125,133,135,136,137,139,140,141,142,144,160,234,235,246,247,248,252,310,311,316,328,332,333,339,340,404,405,],[-4,232,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-4,-28,-29,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-23,-25,]),'PTR_OP':([99,117,125,133,135,136,137,139,140,141,142,144,160,234,235,246,247,248,252,310,311,316,328,332,333,339,340,404,405,],[-4,233,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-4,-28,-29,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-23,-25,]),'MUL_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,218,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'DIV_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,219,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'MOD_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,220,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'ADD_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238
,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,221,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'SUB_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,222,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'LEFT_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,223,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'RIGHT_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,224,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'AND_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,225,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'XOR_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,226,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'OR_ASSIGN':([99,115,117,123,125,133,135,136,137,139,140,141,142,144,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,398,404,405,407,446,478,485,487,],[-4,227,-45,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,-59,-23,-25,-50,-34,-35,-36,-37,]),'/':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,264,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,264,264,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'%':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,265,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,265,265,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'LEFT_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,259,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,259,259,259,259,-68,-69,
-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'RIGHT_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,260,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,260,260,260,260,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'<':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,255,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,255,255,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'>':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,256,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,256,256,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'LE_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,257,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,257,257,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'GE_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,258,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,258,258,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'EQ_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,253,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,253,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'NE_OP':([99,115,117,121,123,125,133,135,136,137,139,140,141,142,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,-16,-9,-10,-13,-11,254,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,-8,-6,254,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'^':([99,115,117,121,123,125,133,135,136,137,138,139,140,141,142,143,144,146,147,
148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,-18,-19,-20,249,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,249,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'|':([99,115,117,121,123,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,-22,-12,245,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-24,-26,-27,245,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'AND_OP':([99,115,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,-45,-60,-51,242,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,242,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'?':([99,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,228,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'OR_OP':([99,115,116,117,121,123,124,125,133,134,135,136,137,138,139,140,141,142,143,144,146,147,148,149,150,158,160,234,235,236,238,239,240,246,247,248,252,310,311,316,326,328,332,333,335,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,398,404,405,407,446,478,485,487,],[-4,-58,229,-45,-60,-51,-86,-22,-12,-84,-18,-19,-20,-82,-16,-9,-10,-13,-80,-11,-78,-75,-70,-67,-64,-58,-4,-28,-29,-46,-47,-48,-49,-14,-15,-16,-17,-5,-7,-21,-87,-24,-26,-27,-85,-83,-81,-8,-6,-79,-76,-77,-71,-72,-73,-74,-68,-69,-65,-66,-61,-62,-63,-59,-23,-25,-50,-34,-35,-36,-37,]),'PP_IDENTIFIER_PASTE':([99,139,160,311,339,],[199,250,199,250,250,]),'ELLIPSIS':([287,],[374,]),'PRAGMA_END':([292,379,380,],[378,424,425,]),}
_lr_action = {}
for _k, _v in _lr_action_items.items():
    for _x, _y in zip(_v[0], _v[1]):
        _lr_action[(_x, _k)] = _y
del _lr_action_items
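A quick note on the packed format, for orientation: each _lr_action_items entry maps a terminal to a pair of parallel lists (states, actions), and the loop above flattens that into a (state, token) -> action dictionary. The sketch below shows how such a table is typically consulted, following PLY's convention that a positive action means shift to that state, a negative action means reduce by production number -n, and a missing key is a syntax error; the (state, token) pair used here is illustrative, not taken from the table above.

def lookup_action(state, token):
    # Returns None when no action is defined, which a parser driver
    # would report as a syntax error.
    return _lr_action.get((state, token))

action = lookup_action(59, ';')  # illustrative (state, token) pair
if action is None:
    print('syntax error')
elif action > 0:
    print('shift, go to state', action)
else:
    print('reduce by production', -action)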
_lr_goto_items = {'translation_unit':([0,],[1,]),'external_declaration':([1,],[2,]),'directive':([1,],[3,]),'declaration':([1,11,23,56,59,77,89,],[4,58,58,86,58,86,86,]),'function_definition':([1,],[5,]),'define':([1,],[6,]),'undefine':([1,],[7,]),'pragma':([1,],[8,]),'declaration_impl':([1,11,23,56,59,77,89,],[9,9,9,9,9,9,9,]),'declaration_specifier_list':([1,11,23,56,59,71,77,89,275,287,291,359,],[10,60,60,60,60,175,60,60,175,175,175,175,]),'declarator':([1,10,60,64,80,175,302,377,432,],[11,23,151,163,151,288,390,163,390,]),'pragma_pack':([1,],[15,]),'gcc_attributes':([1,11,13,23,25,52,56,59,61,66,71,77,89,104,151,159,184,231,241,270,272,275,277,287,288,291,299,300,303,356,359,384,390,392,406,423,435,460,],[16,16,64,79,81,82,16,16,157,165,16,16,16,157,266,157,157,157,157,355,357,360,365,16,376,377,157,157,157,409,16,157,433,157,157,157,461,472,]),'pointer':([1,10,20,60,64,73,80,155,175,302,360,377,432,],[17,17,74,17,17,177,17,271,290,17,271,290,17,]),'direct_declarator':([1,10,17,60,64,80,175,290,302,377,432,],[18,18,69,18,18,18,18,69,18,18,18,]),'init_declarator_list':([10,60,],[24,24,]),'declaration_specifier':([10,16,60,79,175,360,377,],[25,66,25,66,25,66,66,]),'init_declarator':([10,60,80,],[26,26,182,]),'storage_class_specifier':([10,16,60,79,175,360,377,],[27,27,27,27,27,27,27,]),'type_specifier':([10,16,60,79,155,157,175,302,360,377,],[28,28,28,28,273,273,28,273,28,28,]),'type_qualifier':([10,16,20,60,73,79,155,157,175,302,360,377,],[29,29,75,29,178,29,274,274,29,274,29,29,]),'struct_or_union_specifier':([10,16,60,79,155,157,175,302,360,377,],[45,45,45,45,45,45,45,45,45,45,]),'enum_specifier':([10,16,60,79,155,157,175,302,360,377,],[46,46,46,46,46,46,46,46,46,46,]),'struct_or_union':([10,16,60,79,155,157,175,302,360,377,],[52,52,52,52,52,52,52,52,52,52,]),'declaration_list':([11,23,59,],[56,77,89,]),'compound_statement':([11,23,56,59,77,88,89,107,195,198,201,312,397,399,400,441,462,465,],[57,78,85,93,180,93,93,93,93,93,93,93,93,93,93,93,93,93,]),'gcc_attribute':([16,20,64,73,79,81,82,157,165,266,355,357,360,365,376,377,409,433,461,472,],[67,76,67,179,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,67,]),'type_qualifier_list':([20,],[73,]),'statement_list':([59,89,],[88,195,]),'statement':([59,88,89,107,195,198,201,312,397,399,400,441,462,465,],[91,192,91,209,192,309,313,396,437,438,439,464,473,475,]),'labeled_statement':([59,88,89,107,195,198,201,312,397,399,400,441,462,465,],[92,92,92,92,92,92,92,92,92,92,92,92,92,92,]),'expression_statement':([59,88,89,107,195,198,201,210,312,321,397,399,400,441,462,465,],[94,94,94,94,94,94,94,321,94,402,94,94,94,94,94,94,]),'selection_statement':([59,88,89,107,195,198,201,312,397,399,400,441,462,465,],[95,95,95,95,95,95,95,95,95,95,95,95,95,95,]),'iteration_statement':([59,88,89,107,195,198,201,312,397,399,400,441,462,465,],[96,96,96,96,96,96,96,96,96,96,96,96,96,96,]),'jump_statement':([59,88,89,107,195,198,201,312,397,399,400,441,462,465,],[97,97,97,97,97,97,97,97,97,97,97,97,97,97,]),'expression':([59,88,89,104,107,112,159,195,198,201,204,207,208,210,228,230,237,241,312,321,397,399,400,401,402,441,462,465,476,],[102,102,102,205,102,215,205,102,102,102,315,318,319,102,325,327,205,205,102,102,102,102,102,440,442,102,102,102,480,]),'assignment_expression':([59,88,89,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,230,231,237,241,298,312,321,397,399,400,401,402,406,423,429,441,462,465,476,],[113,113,113,113,113,113,113,297,113,113,113,314,113,113,113,113,324,113,113,330,113,113,297,113,113,113,113,113,113,113,444,330,297,113,113,113,113,
]),'conditional_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,230,231,237,241,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[114,156,156,114,114,156,114,114,114,114,114,114,114,114,114,114,114,114,114,114,114,114,114,114,114,156,156,114,156,114,114,156,156,156,114,114,114,114,114,443,114,114,114,156,114,114,114,114,]),'unary_expression':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[115,158,158,115,115,158,115,115,115,236,238,158,240,115,115,115,115,115,115,115,115,115,115,115,115,158,115,115,115,115,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,115,158,115,158,115,158,158,158,115,115,115,115,115,158,115,115,115,158,115,115,115,115,]),'logical_or_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,230,231,237,241,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,116,]),'postfix_expression':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,117,]),'unary_operator':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,]),'cast_expression':([59,61,70,88,89,100,104,107,112,120,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[121,121,121,121,121,121,121,121,121,239,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,352,353,354,121,121,121,121,121,398,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,121,]),'asm_expression':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,1
23,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,123,]),'logical_and_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,326,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,124,]),'primary_expression':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,125,]),'string_literal':([59,61,70,88,89,100,104,107,112,118,119,120,122,137,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,336,358,368,391,397,399,400,401,402,403,406,423,429,434,441,447,462,465,476,477,479,484,],[133,133,133,133,133,133,133,133,133,133,133,133,133,246,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,133,408,133,133,133,133,133,133,133,133,133,133,133,133,133,133,466,133,133,133,466,466,466,]),'inclusive_or_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,335,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,134,]),'identifier':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,181,195,198,199,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,250,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,310,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,340,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,135,]),'constant':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,176,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,381,391,397,399,400,401,402,403,406,423,429,434,441,455,462,465,476,],[136,136,136,136,136,136,136,136,136,136,136,136,136,136,293,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,136,427,136,136,136,136,136,136,136,136,136,136,136,136,470,136,136,136,]),'multi_string_literal':([59,61,70,88,89,100,104,107,112,118,119,120,122,159,18
1,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,137,]),'exclusive_or_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,337,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,138,]),'macro_param':([59,61,70,88,89,100,104,107,112,118,119,120,122,137,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,263,264,265,276,278,298,306,312,317,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[142,142,142,142,142,142,142,142,142,142,142,142,142,247,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,142,]),'and_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,338,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,]),'equality_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,341,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,]),'relational_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,342,343,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,]),'shift_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,344,345,346,347,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,148,]),'additive_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,
259,260,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,348,349,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,149,]),'multiplicative_expression':([59,61,70,88,89,100,104,107,112,159,181,195,198,201,203,204,207,208,210,216,228,229,230,231,237,241,242,245,249,251,253,254,255,256,257,258,259,260,261,262,276,278,298,306,312,321,358,368,391,397,399,400,401,402,403,406,423,429,434,441,462,465,476,],[150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,350,351,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,150,]),'type_name':([61,104,159,231,241,406,423,],[153,206,206,331,334,445,331,]),'constant_expression':([61,70,100,276,278,306,358,368,391,434,],[154,167,200,364,367,394,411,419,435,460,]),'specifier_qualifier_list':([61,104,159,184,231,241,299,300,303,384,392,406,423,],[155,155,155,302,155,155,302,302,302,302,302,155,155,]),'parameter_type_list':([71,275,291,359,],[169,362,362,413,]),'identifier_list':([71,],[171,]),'parameter_list':([71,275,291,359,],[172,172,172,172,]),'parameter_declaration':([71,275,287,291,359,],[174,174,375,174,174,]),'enumerator_list':([83,190,],[186,307,]),'enumerator_list_iso':([83,190,],[187,187,]),'enumerator':([83,190,305,],[188,188,393,]),'assignment_operator':([115,],[216,]),'volatile_opt':([132,],[243,]),'abstract_declarator':([155,175,360,377,],[269,289,414,414,]),'specifier_qualifier':([155,157,302,],[270,277,270,]),'direct_abstract_declarator':([155,175,271,290,360,377,],[272,272,356,356,272,272,]),'macro_parameter_list':([161,],[279,]),'pragma_pack_stack_args':([176,],[294,]),'initializer':([181,298,429,],[296,383,458,]),'struct_declaration_list':([184,299,303,],[300,384,392,]),'struct_declaration':([184,299,300,303,384,392,],[301,301,386,301,386,386,]),'argument_expression_list':([231,423,],[329,454,]),'gcc_attrib_list':([282,],[370,]),'gcc_attrib':([282,422,],[371,453,]),'initializer_list':([298,],[382,]),'struct_declarator_list':([302,],[387,]),'struct_declarator':([302,432,],[389,459,]),'str_opt_expr_pair_list':([447,477,484,],[467,481,486,]),'str_opt_expr_pair':([447,477,479,484,],[468,468,482,468,]),}
_lr_goto = {}
for _k, _v in _lr_goto_items.items():
    for _x, _y in zip(_v[0], _v[1]):
        _lr_goto[(_x, _k)] = _y
del _lr_goto_items
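The goto table is unpacked the same way, but it is keyed by (state, nonterminal) and consulted after a reduction: the parser pops the handle off its state stack and follows the goto entry for the exposed state. A minimal check against the first entry of _lr_goto_items above, where state 0 maps to state 1 for 'translation_unit':

# (0, 'translation_unit') is taken directly from the table above.
assert _lr_goto[(0, 'translation_unit')] == 1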
_lr_productions = [
("S'",1,None,None,None),
('translation_unit',0,'p_translation_unit','../../ctypesgen/parser/cgrammar.py',173),
('translation_unit',2,'p_translation_unit','../../ctypesgen/parser/cgrammar.py',174),
('translation_unit',2,'p_translation_unit','../../ctypesgen/parser/cgrammar.py',175),
('identifier',1,'p_identifier','../../ctypesgen/parser/cgrammar.py',184),
('identifier',3,'p_identifier','../../ctypesgen/parser/cgrammar.py',185),
('identifier',3,'p_identifier','../../ctypesgen/parser/cgrammar.py',186),
('identifier',3,'p_identifier','../../ctypesgen/parser/cgrammar.py',187),
('identifier',3,'p_identifier','../../ctypesgen/parser/cgrammar.py',188),
('constant',1,'p_constant','../../ctypesgen/parser/cgrammar.py',206),
('constant',1,'p_constant','../../ctypesgen/parser/cgrammar.py',207),
('string_literal',1,'p_string_literal','../../ctypesgen/parser/cgrammar.py',233),
('multi_string_literal',1,'p_multi_string_literal','../../ctypesgen/parser/cgrammar.py',238),
('multi_string_literal',1,'p_multi_string_literal','../../ctypesgen/parser/cgrammar.py',239),
('multi_string_literal',2,'p_multi_string_literal','../../ctypesgen/parser/cgrammar.py',240),
('multi_string_literal',2,'p_multi_string_literal','../../ctypesgen/parser/cgrammar.py',241),
('macro_param',1,'p_macro_param','../../ctypesgen/parser/cgrammar.py',252),
('macro_param',2,'p_macro_param','../../ctypesgen/parser/cgrammar.py',253),
('primary_expression',1,'p_primary_expression','../../ctypesgen/parser/cgrammar.py',262),
('primary_expression',1,'p_primary_expression','../../ctypesgen/parser/cgrammar.py',263),
('primary_expression',1,'p_primary_expression','../../ctypesgen/parser/cgrammar.py',264),
('primary_expression',3,'p_primary_expression','../../ctypesgen/parser/cgrammar.py',265),
('postfix_expression',1,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',274),
('postfix_expression',4,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',275),
('postfix_expression',3,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',276),
('postfix_expression',4,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',277),
('postfix_expression',3,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',278),
('postfix_expression',3,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',279),
('postfix_expression',2,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',280),
('postfix_expression',2,'p_postfix_expression','../../ctypesgen/parser/cgrammar.py',281),
('argument_expression_list',1,'p_argument_expression_list','../../ctypesgen/parser/cgrammar.py',320),
('argument_expression_list',3,'p_argument_expression_list','../../ctypesgen/parser/cgrammar.py',321),
('argument_expression_list',1,'p_argument_expression_list','../../ctypesgen/parser/cgrammar.py',322),
('argument_expression_list',3,'p_argument_expression_list','../../ctypesgen/parser/cgrammar.py',323),
('asm_expression',5,'p_asm_expression','../../ctypesgen/parser/cgrammar.py',333),
('asm_expression',7,'p_asm_expression','../../ctypesgen/parser/cgrammar.py',334),
('asm_expression',9,'p_asm_expression','../../ctypesgen/parser/cgrammar.py',335),
('asm_expression',11,'p_asm_expression','../../ctypesgen/parser/cgrammar.py',336),
('str_opt_expr_pair_list',0,'p_str_opt_expr_pair_list','../../ctypesgen/parser/cgrammar.py',349),
('str_opt_expr_pair_list',1,'p_str_opt_expr_pair_list','../../ctypesgen/parser/cgrammar.py',350),
('str_opt_expr_pair_list',3,'p_str_opt_expr_pair_list','../../ctypesgen/parser/cgrammar.py',351),
('str_opt_expr_pair',1,'p_str_opt_expr_pair','../../ctypesgen/parser/cgrammar.py',356),
('str_opt_expr_pair',4,'p_str_opt_expr_pair','../../ctypesgen/parser/cgrammar.py',357),
('volatile_opt',0,'p_volatile_opt','../../ctypesgen/parser/cgrammar.py',362),
('volatile_opt',1,'p_volatile_opt','../../ctypesgen/parser/cgrammar.py',363),
('unary_expression',1,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',380),
('unary_expression',2,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',381),
('unary_expression',2,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',382),
('unary_expression',2,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',383),
('unary_expression',2,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',384),
('unary_expression',4,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',385),
('unary_expression',1,'p_unary_expression','../../ctypesgen/parser/cgrammar.py',386),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',403),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',404),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',405),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',406),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',407),
('unary_operator',1,'p_unary_operator','../../ctypesgen/parser/cgrammar.py',408),
('cast_expression',1,'p_cast_expression','../../ctypesgen/parser/cgrammar.py',414),
('cast_expression',4,'p_cast_expression','../../ctypesgen/parser/cgrammar.py',415),
('multiplicative_expression',1,'p_multiplicative_expression','../../ctypesgen/parser/cgrammar.py',431),
('multiplicative_expression',3,'p_multiplicative_expression','../../ctypesgen/parser/cgrammar.py',432),
('multiplicative_expression',3,'p_multiplicative_expression','../../ctypesgen/parser/cgrammar.py',433),
('multiplicative_expression',3,'p_multiplicative_expression','../../ctypesgen/parser/cgrammar.py',434),
('additive_expression',1,'p_additive_expression','../../ctypesgen/parser/cgrammar.py',450),
('additive_expression',3,'p_additive_expression','../../ctypesgen/parser/cgrammar.py',451),
('additive_expression',3,'p_additive_expression','../../ctypesgen/parser/cgrammar.py',452),
('shift_expression',1,'p_shift_expression','../../ctypesgen/parser/cgrammar.py',468),
('shift_expression',3,'p_shift_expression','../../ctypesgen/parser/cgrammar.py',469),
('shift_expression',3,'p_shift_expression','../../ctypesgen/parser/cgrammar.py',470),
('relational_expression',1,'p_relational_expression','../../ctypesgen/parser/cgrammar.py',488),
('relational_expression',3,'p_relational_expression','../../ctypesgen/parser/cgrammar.py',489),
('relational_expression',3,'p_relational_expression','../../ctypesgen/parser/cgrammar.py',490),
('relational_expression',3,'p_relational_expression','../../ctypesgen/parser/cgrammar.py',491),
('relational_expression',3,'p_relational_expression','../../ctypesgen/parser/cgrammar.py',492),
('equality_expression',1,'p_equality_expression','../../ctypesgen/parser/cgrammar.py',508),
('equality_expression',3,'p_equality_expression','../../ctypesgen/parser/cgrammar.py',509),
('equality_expression',3,'p_equality_expression','../../ctypesgen/parser/cgrammar.py',510),
('and_expression',1,'p_and_expression','../../ctypesgen/parser/cgrammar.py',520),
('and_expression',3,'p_and_expression','../../ctypesgen/parser/cgrammar.py',521),
('exclusive_or_expression',1,'p_exclusive_or_expression','../../ctypesgen/parser/cgrammar.py',532),
('exclusive_or_expression',3,'p_exclusive_or_expression','../../ctypesgen/parser/cgrammar.py',533),
('inclusive_or_expression',1,'p_inclusive_or_expression','../../ctypesgen/parser/cgrammar.py',544),
('inclusive_or_expression',3,'p_inclusive_or_expression','../../ctypesgen/parser/cgrammar.py',545),
('logical_and_expression',1,'p_logical_and_expression','../../ctypesgen/parser/cgrammar.py',556),
('logical_and_expression',3,'p_logical_and_expression','../../ctypesgen/parser/cgrammar.py',557),
('logical_or_expression',1,'p_logical_or_expression','../../ctypesgen/parser/cgrammar.py',568),
('logical_or_expression',3,'p_logical_or_expression','../../ctypesgen/parser/cgrammar.py',569),
('conditional_expression',1,'p_conditional_expression','../../ctypesgen/parser/cgrammar.py',580),
('conditional_expression',5,'p_conditional_expression','../../ctypesgen/parser/cgrammar.py',581),
('assignment_expression',1,'p_assignment_expression','../../ctypesgen/parser/cgrammar.py',604),
('assignment_expression',3,'p_assignment_expression','../../ctypesgen/parser/cgrammar.py',605),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',620),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',621),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',622),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',623),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',624),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',625),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',626),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',627),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',628),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',629),
('assignment_operator',1,'p_assignment_operator','../../ctypesgen/parser/cgrammar.py',630),
('expression',1,'p_expression','../../ctypesgen/parser/cgrammar.py',636),
('expression',3,'p_expression','../../ctypesgen/parser/cgrammar.py',637),
('constant_expression',1,'p_constant_expression','../../ctypesgen/parser/cgrammar.py',644),
('declaration',2,'p_declaration','../../ctypesgen/parser/cgrammar.py',649),
('declaration_impl',1,'p_declaration_impl','../../ctypesgen/parser/cgrammar.py',656),
('declaration_impl',2,'p_declaration_impl','../../ctypesgen/parser/cgrammar.py',657),
('declaration_specifier_list',3,'p_declaration_specifier_list','../../ctypesgen/parser/cgrammar.py',683),
('declaration_specifier_list',3,'p_declaration_specifier_list','../../ctypesgen/parser/cgrammar.py',684),
('declaration_specifier',1,'p_declaration_specifier','../../ctypesgen/parser/cgrammar.py',697),
('declaration_specifier',1,'p_declaration_specifier','../../ctypesgen/parser/cgrammar.py',698),
('declaration_specifier',1,'p_declaration_specifier','../../ctypesgen/parser/cgrammar.py',699),
('init_declarator_list',1,'p_init_declarator_list','../../ctypesgen/parser/cgrammar.py',705),
('init_declarator_list',3,'p_init_declarator_list','../../ctypesgen/parser/cgrammar.py',706),
('init_declarator',2,'p_init_declarator','../../ctypesgen/parser/cgrammar.py',715),
('init_declarator',4,'p_init_declarator','../../ctypesgen/parser/cgrammar.py',716),
('storage_class_specifier',1,'p_storage_class_specifier','../../ctypesgen/parser/cgrammar.py',727),
('storage_class_specifier',1,'p_storage_class_specifier','../../ctypesgen/parser/cgrammar.py',728),
('storage_class_specifier',1,'p_storage_class_specifier','../../ctypesgen/parser/cgrammar.py',729),
('storage_class_specifier',1,'p_storage_class_specifier','../../ctypesgen/parser/cgrammar.py',730),
('storage_class_specifier',1,'p_storage_class_specifier','../../ctypesgen/parser/cgrammar.py',731),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',737),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',738),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',739),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',740),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',741),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',742),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',743),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',744),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',745),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',746),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',747),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',748),
('type_specifier',1,'p_type_specifier','../../ctypesgen/parser/cgrammar.py',749),
('struct_or_union_specifier',6,'p_struct_or_union_specifier','../../ctypesgen/parser/cgrammar.py',758),
('struct_or_union_specifier',6,'p_struct_or_union_specifier','../../ctypesgen/parser/cgrammar.py',759),
('struct_or_union_specifier',5,'p_struct_or_union_specifier','../../ctypesgen/parser/cgrammar.py',760),
('struct_or_union_specifier',3,'p_struct_or_union_specifier','../../ctypesgen/parser/cgrammar.py',761),
('struct_or_union_specifier',3,'p_struct_or_union_specifier','../../ctypesgen/parser/cgrammar.py',762),
('struct_or_union',1,'p_struct_or_union','../../ctypesgen/parser/cgrammar.py',787),
('struct_or_union',1,'p_struct_or_union','../../ctypesgen/parser/cgrammar.py',788),
('gcc_attributes',0,'p_gcc_attributes','../../ctypesgen/parser/cgrammar.py',794),
('gcc_attributes',2,'p_gcc_attributes','../../ctypesgen/parser/cgrammar.py',795),
('gcc_attribute',6,'p_gcc_attribute','../../ctypesgen/parser/cgrammar.py',806),
('gcc_attrib_list',1,'p_gcc_attrib_list','../../ctypesgen/parser/cgrammar.py',812),
('gcc_attrib_list',3,'p_gcc_attrib_list','../../ctypesgen/parser/cgrammar.py',813),
('gcc_attrib',0,'p_gcc_attrib','../../ctypesgen/parser/cgrammar.py',822),
('gcc_attrib',1,'p_gcc_attrib','../../ctypesgen/parser/cgrammar.py',823),
('gcc_attrib',4,'p_gcc_attrib','../../ctypesgen/parser/cgrammar.py',824),
('struct_declaration_list',1,'p_struct_declaration_list','../../ctypesgen/parser/cgrammar.py',837),
('struct_declaration_list',2,'p_struct_declaration_list','../../ctypesgen/parser/cgrammar.py',838),
('struct_declaration',3,'p_struct_declaration','../../ctypesgen/parser/cgrammar.py',847),
('struct_declaration',2,'p_struct_declaration','../../ctypesgen/parser/cgrammar.py',848),
('specifier_qualifier_list',3,'p_specifier_qualifier_list','../../ctypesgen/parser/cgrammar.py',869),
('specifier_qualifier_list',3,'p_specifier_qualifier_list','../../ctypesgen/parser/cgrammar.py',870),
('specifier_qualifier',1,'p_specifier_qualifier','../../ctypesgen/parser/cgrammar.py',879),
('specifier_qualifier',1,'p_specifier_qualifier','../../ctypesgen/parser/cgrammar.py',880),
('struct_declarator_list',1,'p_struct_declarator_list','../../ctypesgen/parser/cgrammar.py',886),
('struct_declarator_list',3,'p_struct_declarator_list','../../ctypesgen/parser/cgrammar.py',887),
('struct_declarator',2,'p_struct_declarator','../../ctypesgen/parser/cgrammar.py',896),
('struct_declarator',3,'p_struct_declarator','../../ctypesgen/parser/cgrammar.py',897),
('struct_declarator',4,'p_struct_declarator','../../ctypesgen/parser/cgrammar.py',898),
('enum_specifier',4,'p_enum_specifier','../../ctypesgen/parser/cgrammar.py',913),
('enum_specifier',5,'p_enum_specifier','../../ctypesgen/parser/cgrammar.py',914),
('enum_specifier',2,'p_enum_specifier','../../ctypesgen/parser/cgrammar.py',915),
('enumerator_list',1,'p_enumerator_list','../../ctypesgen/parser/cgrammar.py',929),
('enumerator_list',2,'p_enumerator_list','../../ctypesgen/parser/cgrammar.py',930),
('enumerator_list_iso',1,'p_enumerator_list_iso','../../ctypesgen/parser/cgrammar.py',938),
('enumerator_list_iso',3,'p_enumerator_list_iso','../../ctypesgen/parser/cgrammar.py',939),
('enumerator',1,'p_enumerator','../../ctypesgen/parser/cgrammar.py',948),
('enumerator',3,'p_enumerator','../../ctypesgen/parser/cgrammar.py',949),
('type_qualifier',1,'p_type_qualifier','../../ctypesgen/parser/cgrammar.py',958),
('type_qualifier',1,'p_type_qualifier','../../ctypesgen/parser/cgrammar.py',959),
('type_qualifier',1,'p_type_qualifier','../../ctypesgen/parser/cgrammar.py',960),
('type_qualifier',1,'p_type_qualifier','../../ctypesgen/parser/cgrammar.py',961),
('declarator',2,'p_declarator','../../ctypesgen/parser/cgrammar.py',967),
('declarator',1,'p_declarator','../../ctypesgen/parser/cgrammar.py',968),
('direct_declarator',1,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',982),
('direct_declarator',4,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',983),
('direct_declarator',4,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',984),
('direct_declarator',3,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',985),
('direct_declarator',4,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',986),
('direct_declarator',4,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',987),
('direct_declarator',3,'p_direct_declarator','../../ctypesgen/parser/cgrammar.py',988),
('pointer',1,'p_pointer','../../ctypesgen/parser/cgrammar.py',1018),
('pointer',2,'p_pointer','../../ctypesgen/parser/cgrammar.py',1019),
('pointer',2,'p_pointer','../../ctypesgen/parser/cgrammar.py',1020),
('pointer',3,'p_pointer','../../ctypesgen/parser/cgrammar.py',1021),
('type_qualifier_list',1,'p_type_qualifier_list','../../ctypesgen/parser/cgrammar.py',1043),
('type_qualifier_list',1,'p_type_qualifier_list','../../ctypesgen/parser/cgrammar.py',1044),
('type_qualifier_list',2,'p_type_qualifier_list','../../ctypesgen/parser/cgrammar.py',1045),
('type_qualifier_list',2,'p_type_qualifier_list','../../ctypesgen/parser/cgrammar.py',1046),
('parameter_type_list',1,'p_parameter_type_list','../../ctypesgen/parser/cgrammar.py',1055),
('parameter_type_list',3,'p_parameter_type_list','../../ctypesgen/parser/cgrammar.py',1056),
('parameter_list',1,'p_parameter_list','../../ctypesgen/parser/cgrammar.py',1065),
('parameter_list',3,'p_parameter_list','../../ctypesgen/parser/cgrammar.py',1066),
('parameter_declaration',3,'p_parameter_declaration','../../ctypesgen/parser/cgrammar.py',1075),
('parameter_declaration',2,'p_parameter_declaration','../../ctypesgen/parser/cgrammar.py',1076),
('parameter_declaration',1,'p_parameter_declaration','../../ctypesgen/parser/cgrammar.py',1077),
('identifier_list',1,'p_identifier_list','../../ctypesgen/parser/cgrammar.py',1093),
('identifier_list',3,'p_identifier_list','../../ctypesgen/parser/cgrammar.py',1094),
('type_name',1,'p_type_name','../../ctypesgen/parser/cgrammar.py',1107),
('type_name',2,'p_type_name','../../ctypesgen/parser/cgrammar.py',1108),
('abstract_declarator',1,'p_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1124),
('abstract_declarator',2,'p_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1125),
('abstract_declarator',3,'p_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1126),
('direct_abstract_declarator',4,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1153),
('direct_abstract_declarator',2,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1154),
('direct_abstract_declarator',3,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1155),
('direct_abstract_declarator',3,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1156),
('direct_abstract_declarator',4,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1157),
('direct_abstract_declarator',2,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1158),
('direct_abstract_declarator',3,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1159),
('direct_abstract_declarator',3,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1160),
('direct_abstract_declarator',4,'p_direct_abstract_declarator','../../ctypesgen/parser/cgrammar.py',1161),
('initializer',1,'p_initializer','../../ctypesgen/parser/cgrammar.py',1200),
('initializer',3,'p_initializer','../../ctypesgen/parser/cgrammar.py',1201),
('initializer',4,'p_initializer','../../ctypesgen/parser/cgrammar.py',1202),
('initializer_list',1,'p_initializer_list','../../ctypesgen/parser/cgrammar.py',1207),
('initializer_list',3,'p_initializer_list','../../ctypesgen/parser/cgrammar.py',1208),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1213),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1214),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1215),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1216),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1217),
('statement',1,'p_statement','../../ctypesgen/parser/cgrammar.py',1218),
('labeled_statement',3,'p_labeled_statement','../../ctypesgen/parser/cgrammar.py',1223),
('labeled_statement',4,'p_labeled_statement','../../ctypesgen/parser/cgrammar.py',1224),
('labeled_statement',3,'p_labeled_statement','../../ctypesgen/parser/cgrammar.py',1225),
('compound_statement',2,'p_compound_statement','../../ctypesgen/parser/cgrammar.py',1230),
('compound_statement',3,'p_compound_statement','../../ctypesgen/parser/cgrammar.py',1231),
('compound_statement',3,'p_compound_statement','../../ctypesgen/parser/cgrammar.py',1232),
('compound_statement',4,'p_compound_statement','../../ctypesgen/parser/cgrammar.py',1233),
('compound_statement',3,'p_compound_statement_error','../../ctypesgen/parser/cgrammar.py',1238),
('declaration_list',1,'p_declaration_list','../../ctypesgen/parser/cgrammar.py',1243),
('declaration_list',2,'p_declaration_list','../../ctypesgen/parser/cgrammar.py',1244),
('statement_list',1,'p_statement_list','../../ctypesgen/parser/cgrammar.py',1249),
('statement_list',2,'p_statement_list','../../ctypesgen/parser/cgrammar.py',1250),
('expression_statement',1,'p_expression_statement','../../ctypesgen/parser/cgrammar.py',1255),
('expression_statement',2,'p_expression_statement','../../ctypesgen/parser/cgrammar.py',1256),
('expression_statement',2,'p_expression_statement_error','../../ctypesgen/parser/cgrammar.py',1261),
('selection_statement',5,'p_selection_statement','../../ctypesgen/parser/cgrammar.py',1266),
('selection_statement',7,'p_selection_statement','../../ctypesgen/parser/cgrammar.py',1267),
('selection_statement',5,'p_selection_statement','../../ctypesgen/parser/cgrammar.py',1268),
('iteration_statement',5,'p_iteration_statement','../../ctypesgen/parser/cgrammar.py',1273),
('iteration_statement',7,'p_iteration_statement','../../ctypesgen/parser/cgrammar.py',1274),
('iteration_statement',6,'p_iteration_statement','../../ctypesgen/parser/cgrammar.py',1275),
('iteration_statement',7,'p_iteration_statement','../../ctypesgen/parser/cgrammar.py',1276),
('jump_statement',3,'p_jump_statement','../../ctypesgen/parser/cgrammar.py',1281),
('jump_statement',2,'p_jump_statement','../../ctypesgen/parser/cgrammar.py',1282),
('jump_statement',2,'p_jump_statement','../../ctypesgen/parser/cgrammar.py',1283),
('jump_statement',2,'p_jump_statement','../../ctypesgen/parser/cgrammar.py',1284),
('jump_statement',3,'p_jump_statement','../../ctypesgen/parser/cgrammar.py',1285),
('external_declaration',1,'p_external_declaration','../../ctypesgen/parser/cgrammar.py',1290),
('external_declaration',1,'p_external_declaration','../../ctypesgen/parser/cgrammar.py',1291),
('function_definition',4,'p_function_definition','../../ctypesgen/parser/cgrammar.py',1297),
('function_definition',3,'p_function_definition','../../ctypesgen/parser/cgrammar.py',1298),
('function_definition',3,'p_function_definition','../../ctypesgen/parser/cgrammar.py',1299),
('function_definition',2,'p_function_definition','../../ctypesgen/parser/cgrammar.py',1300),
('directive',1,'p_directive','../../ctypesgen/parser/cgrammar.py',1306),
('directive',1,'p_directive','../../ctypesgen/parser/cgrammar.py',1307),
('directive',1,'p_directive','../../ctypesgen/parser/cgrammar.py',1308),
('define',3,'p_define','../../ctypesgen/parser/cgrammar.py',1313),
('define',4,'p_define','../../ctypesgen/parser/cgrammar.py',1314),
('define',4,'p_define','../../ctypesgen/parser/cgrammar.py',1315),
('define',5,'p_define','../../ctypesgen/parser/cgrammar.py',1316),
('define',6,'p_define','../../ctypesgen/parser/cgrammar.py',1317),
('define',6,'p_define','../../ctypesgen/parser/cgrammar.py',1318),
('define',7,'p_define','../../ctypesgen/parser/cgrammar.py',1319),
('define',3,'p_define_error','../../ctypesgen/parser/cgrammar.py',1351),
('undefine',3,'p_undefine','../../ctypesgen/parser/cgrammar.py',1378),
('macro_parameter_list',1,'p_macro_parameter_list','../../ctypesgen/parser/cgrammar.py',1388),
('macro_parameter_list',3,'p_macro_parameter_list','../../ctypesgen/parser/cgrammar.py',1389),
('pragma',1,'p_pragma','../../ctypesgen/parser/cgrammar.py',1413),
('pragma_pack',5,'p_pragma_pack','../../ctypesgen/parser/cgrammar.py',1417),
('pragma_pack',6,'p_pragma_pack','../../ctypesgen/parser/cgrammar.py',1418),
('pragma_pack',6,'p_pragma_pack','../../ctypesgen/parser/cgrammar.py',1419),
('pragma_pack_stack_args',1,'p_pragma_pack_stack_args','../../ctypesgen/parser/cgrammar.py',1443),
('pragma_pack_stack_args',3,'p_pragma_pack_stack_args','../../ctypesgen/parser/cgrammar.py',1444),
('pragma_pack_stack_args',5,'p_pragma_pack_stack_args','../../ctypesgen/parser/cgrammar.py',1445),
('pragma_pack_stack_args',5,'p_pragma_pack_stack_args','../../ctypesgen/parser/cgrammar.py',1446),
('pragma_pack_stack_args',3,'p_pragma_pack_stack_args','../../ctypesgen/parser/cgrammar.py',1447),
]
| 387.664516 | 77,513 | 0.674943 | 25,967 | 120,176 | 3.07937 | 0.033658 | 0.053088 | 0.081401 | 0.08848 | 0.813762 | 0.793339 | 0.763613 | 0.712051 | 0.668267 | 0.657687 | 0 | 0.516145 | 0.007839 | 120,176 | 309 | 77,514 | 388.919094 | 0.154486 | 0.000549 | 0 | 0.006667 | 1 | 0.003333 | 0.181989 | 0.120893 | 0.003333 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
8434b7ffea4a7afc3c1a818480a39a2744081c78 | 3,125 | py | Python | simulations/old/player_model/single-particle-fit.py | hawkrobe/fish | 2000e46c397f7c95bba8ecb0c6afd26013929ff8 | [
"MIT"
] | 1 | 2015-12-11T16:51:08.000Z | 2015-12-11T16:51:08.000Z | simulations/old/player_model/single-particle-fit.py | hawkrobe/fish | 2000e46c397f7c95bba8ecb0c6afd26013929ff8 | [
"MIT"
] | 3 | 2020-02-11T21:36:11.000Z | 2020-11-01T21:25:17.000Z | simulations/old/player_model/single-particle-fit.py | hawkrobe/couzin_replication | ff491639954f0652d6b4b2a318477bb54c38fadf | [
"MIT"
] | null | null | null |
import numpy as np
import pandas as pd
import goal_inference_with_data
import rational_model
import sys
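# Replay one player's trajectory from a simulated game log, tick by tick,
# feeding each observation to a goal-inference model built around a
# RationalModel parameterized by the two values given on the command line;
# the script prints the model's marginal likelihood as it goes.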
par_1 = float(sys.argv[1])
par_2 = float(sys.argv[2])
in_dir = '../../modeling/'
game = '0-1en01_simulation.csv'
player = 0
df = pd.read_csv(in_dir + game)
players = list(set(df['pid']))
my_pid = players[player]
model = goal_inference_with_data.Model(lambda: rational_model.RationalModel((par_1,par_2)), n_samples = 200)
ticks = list(set(df['tick']))
for tick in range(max(ticks)+1):
sub = df[df['tick'] == tick]
others = []
for pid in players:
if pid == my_pid:
continue
others += [{'position':np.array([float(sub.loc[sub['pid'] == pid, 'x_pos']),
float(sub.loc[sub['pid'] == pid, 'y_pos'])]),
'angle':float(sub.loc[sub['pid'] == pid, 'angle']),
'speed':float(sub.loc[sub['pid'] == pid, 'velocity'])}]
model.observe(np.array([float(sub.loc[sub['pid'] == my_pid,'x_pos']),
float(sub.loc[sub['pid'] == my_pid,'y_pos'])]),
float(sub.loc[sub['pid'] == my_pid,'angle']),
float(sub.loc[sub['pid'] == my_pid,'velocity']),
float(sub.loc[sub['pid'] == my_pid,'bg_val']), others, tick)
if (tick % 10) == 0:
        print(tick, model.marginal_like)
print(model.marginal_like)
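# The commented-out block below is an alternative configuration of the same
# fit, kept for reference: it replays processed human-game logs rather than
# the simulation CSV.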
# import numpy as np
# import pandas as pd
# import goal_inference_with_data
# import rational_model
# import sys
# player = int(sys.argv[1])
# #par_1 = float(sys.argv[2])
# #in_dir = '../../modeling/'
# #game = '0-1en01_simulation.csv'
# in_dir = '../../processed/'
# #game = '2015-01-30-14-5-9-36_1_0-1en01_37758667487.csv'
# game = '2015-01-29-20-50-8-310_1_1-1en01_12584840878.csv'
# df = pd.read_csv(in_dir + game)
# players = list(set(df['pid']))
# my_pid = players[player]
# model = goal_inference_with_data.Model(lambda: rational_model.RationalModel(None), n_samples = 100)
# ticks = list(set(df['tick']))
# for tick in range(max(ticks)+1):
# sub = df[df['tick'] == tick]
# others = []
# for pid in players:
# if pid == my_pid:
# continue
# others += [{'position':np.array([float(sub.loc[sub['pid'] == pid, 'x_pos']),
# float(sub.loc[sub['pid'] == pid, 'y_pos'])]),
# 'angle':float(sub.loc[sub['pid'] == pid, 'angle']),
# 'speed':float(sub.loc[sub['pid'] == pid, 'velocity'])}]
# model.observe(np.array([float(sub.loc[sub['pid'] == my_pid,'x_pos']),
# float(sub.loc[sub['pid'] == my_pid,'y_pos'])]),
# float(sub.loc[sub['pid'] == my_pid,'angle']),
# float(sub.loc[sub['pid'] == my_pid,'velocity']),
# float(sub.loc[sub['pid'] == my_pid,'bg_val']), others, tick)
# if (tick % 10) == 0:
# #print tick, max(model.likelihoods)
# print tick, model.marginal_like, np.mean(model.pars)
# #print max(model.likelihoods)
# print model.marginal_like
| 28.935185 | 108 | 0.54976 | 432 | 3,125 | 3.824074 | 0.196759 | 0.087167 | 0.119855 | 0.152542 | 0.84322 | 0.805085 | 0.805085 | 0.805085 | 0.805085 | 0.805085 | 0 | 0.040465 | 0.25664 | 3,125 | 107 | 109 | 29.205607 | 0.670684 | 0.51072 | 0 | 0 | 0 | 0 | 0.097709 | 0.014825 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.151515 | null | null | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
08283357c2addc63988f5b9cdac1ccd890f0bdcf | 4,293 | py | Python | test/fstrings/prefixes1.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 1,482 | 2015-10-16T21:59:32.000Z | 2022-03-30T11:44:40.000Z | test/fstrings/prefixes1.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 226 | 2015-10-15T15:53:44.000Z | 2022-03-25T03:08:27.000Z | test/fstrings/prefixes1.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 129 | 2015-10-20T02:41:49.000Z | 2022-03-22T01:44:36.000Z | a = f's t r'
a = f"s t r"
a = F's t r'
a = F"s t r"
a = f'''s t r'''
a = F"""s t r"""
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
f : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.single.python
' : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.single.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.single.python
' : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.single.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
f : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.single.python
" : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.single.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.single.python
" : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.single.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
F : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.single.python
' : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.single.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.single.python
' : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.single.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
F : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.single.python
" : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.single.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.single.python
" : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.single.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
f : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.multi.python
''' : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.multi.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.multi.python
''' : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.multi.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
F : meta.fstring.python, source.python, storage.type.string.python, string.interpolated.python, string.quoted.multi.python
""" : meta.fstring.python, punctuation.definition.string.begin.python, source.python, string.interpolated.python, string.quoted.multi.python
s t r : meta.fstring.python, source.python, string.interpolated.python, string.quoted.multi.python
""" : meta.fstring.python, punctuation.definition.string.end.python, source.python, string.interpolated.python, string.quoted.multi.python
| 74.017241 | 151 | 0.688563 | 504 | 4,293 | 5.865079 | 0.045635 | 0.194858 | 0.255751 | 0.243572 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0.1959 | 4,293 | 57 | 152 | 75.315789 | 0.856315 | 0 | 0 | 0.84 | 0 | 0.16 | 0.007955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
f22a8ed7b5fc786774a14a825408e5a0f3553153 | 7,172 | py | Python | nlplingo/oregon/event_models/uoregon/tools/global_constants.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | 3 | 2020-10-22T13:28:00.000Z | 2022-03-24T19:57:22.000Z | nlplingo/oregon/event_models/uoregon/tools/global_constants.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | null | null | null | nlplingo/oregon/event_models/uoregon/tools/global_constants.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | 1 | 2020-10-22T13:29:51.000Z | 2020-10-22T13:29:51.000Z | import os, json
WORKING_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), os.pardir)
IMPACT_KEY = "helpful-harmful"
EFFECT_KEY = "material-verbal"
HARMFUL_KEY = 'harmful'
HELPFUL_KEY = 'helpful'
NEUTRAL_KEY = 'neutral'
MATERIAL_KEY = 'material'
VERBAL_KEY = 'verbal'
MATERIAL_VERBAL_KEY = 'both'
UNKNOWN_EVENT_KEY = 'unk'
ANNOT_SPLIT_STR = '\n{}\n'.format('=' * 10)
ENT_SPLIT_STR = '<,>'
BIO_KEY = 'BIO'
PAD_TOKEN = '[PAD]' # consistent with BERT
UNK_TOKEN = '[UNK]' # consistent with BERT
CLS_TOKEN = '[CLS]'
SEP_TOKEN = '[SEP]'
ROOT_TOKEN = '[ROOT]' # for node whose head is root
PAD_ID = 0
UNK_ID = 1
BERT_PRETRAINED_MODEL_NAMES = {
'bert-base-uncased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin'
},
'bert-large-uncased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-pytorch_model.bin'
},
'bert-base-cased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin'
},
'bert-large-cased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-pytorch_model.bin'
},
'bert-base-multilingual-uncased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-pytorch_model.bin'
},
'bert-base-multilingual-cased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-pytorch_model.bin'
},
'bert-base-chinese': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin'
},
'bert-base-german-cased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-pytorch_model.bin'
},
'bert-large-uncased-whole-word-masking': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin'
},
'bert-large-cased-whole-word-masking': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-pytorch_model.bin'
},
'bert-large-uncased-whole-word-masking-finetuned-squad': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin'
},
'bert-large-cased-whole-word-masking-finetuned-squad': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-pytorch_model.bin'
},
'bert-base-cased-finetuned-mrpc': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-pytorch_model.bin'
},
'bert-base-german-dbmdz-cased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin'
},
'bert-base-german-dbmdz-uncased': {
'config-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json',
'vocab-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt',
'model-file': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin'
},
}
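# Example lookup into the table above (illustrative only; fetching and caching
# of these files is presumably handled elsewhere in the project):
#   vocab_url = BERT_PRETRAINED_MODEL_NAMES['bert-base-uncased']['vocab-file']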
INFINITY_NUMBER = float("inf")
MAX_BERT_TOKENS = 512
SAFE_BERT_TOKENS = MAX_BERT_TOKENS - 50
BERT_LAYERS = 13 # the first layer is for raw embedding, the next 12 layers are transformer layers
#STANFORD_RESOURCE_DIR = os.path.join(WORKING_DIR, 'tools', 'stanford_resources') # <==
STANZA_RESOURCE_DIR = os.path.join(WORKING_DIR, 'tools', 'stanza_resources')
BERT_RESOURCE_DIR = os.path.join(WORKING_DIR, 'tools', 'bert_resources')
| 60.268908 | 147 | 0.712354 | 998 | 7,172 | 5.053106 | 0.101202 | 0.080309 | 0.098156 | 0.178465 | 0.866746 | 0.866746 | 0.824113 | 0.810629 | 0.789213 | 0.76165 | 0 | 0.00905 | 0.106386 | 7,172 | 118 | 148 | 60.779661 | 0.777812 | 0.032627 | 0 | 0 | 0 | 0.432692 | 0.759919 | 0.049632 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009615 | 0 | 0.009615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
f27aa829fd834445e1dc0212e039d5ad5ebbb477 | 343 | py | Python | wrappers/serial/imaging/timeslice_single.py | ChrisHad/algorithm-reference-library | bded1b62ea801ea4f4f5bd0794c18cd81d4b2810 | [
"Apache-2.0"
] | 22 | 2016-12-14T11:20:07.000Z | 2021-08-13T15:23:41.000Z | wrappers/serial/imaging/timeslice_single.py | ChrisHad/algorithm-reference-library | bded1b62ea801ea4f4f5bd0794c18cd81d4b2810 | [
"Apache-2.0"
] | 30 | 2017-06-27T09:15:38.000Z | 2020-09-11T18:16:37.000Z | wrappers/serial/imaging/timeslice_single.py | ChrisHad/algorithm-reference-library | bded1b62ea801ea4f4f5bd0794c18cd81d4b2810 | [
"Apache-2.0"
] | 20 | 2017-07-02T03:45:49.000Z | 2019-12-11T17:19:01.000Z | """ W term processing
"""
from processing_components.imaging.timeslice_single import fit_uvwplane_only
from processing_components.imaging.timeslice_single import fit_uvwplane
from processing_components.imaging.timeslice_single import predict_timeslice_single
from processing_components.imaging.timeslice_single import invert_timeslice_single | 42.875 | 83 | 0.895044 | 42 | 343 | 6.952381 | 0.333333 | 0.308219 | 0.328767 | 0.424658 | 0.787671 | 0.787671 | 0.787671 | 0.431507 | 0.431507 | 0 | 0 | 0 | 0.06414 | 343 | 8 | 84 | 42.875 | 0.909657 | 0.049563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
4b382fa88a3454ceae66fca21b5bcf5350da7f2b | 87 | py | Python | list/pi.py | Khachornchit/Python-Quick-Start | a262d96296d400138548f6061f34481d25afd37f | [
"MIT"
] | 1 | 2020-07-01T04:42:38.000Z | 2020-07-01T04:42:38.000Z | list/pi.py | khachornchit/Python3 | a262d96296d400138548f6061f34481d25afd37f | [
"MIT"
] | null | null | null | list/pi.py | khachornchit/Python3 | a262d96296d400138548f6061f34481d25afd37f | [
"MIT"
] | null | null | null | from math import pi
print("round(pi, i) = ", [str(round(pi, i)) for i in range(1, 6)]) | 29 | 66 | 0.609195 | 18 | 87 | 2.944444 | 0.722222 | 0.264151 | 0.301887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.172414 | 87 | 3 | 66 | 29 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0.170455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
4ba5abeffafd602c03b36d26e224d7882964073c | 103 | py | Python | harvest/trader/__init__.py | alexonab/harvest | de8fe7887d861d0cd662f446a86b4a598ddf41d4 | [
"MIT"
] | 83 | 2021-06-25T22:35:46.000Z | 2022-03-30T23:34:00.000Z | harvest/trader/__init__.py | alexonab/harvest | de8fe7887d861d0cd662f446a86b4a598ddf41d4 | [
"MIT"
] | 169 | 2021-06-26T03:57:02.000Z | 2022-03-11T11:50:09.000Z | harvest/trader/__init__.py | alexonab/harvest | de8fe7887d861d0cd662f446a86b4a598ddf41d4 | [
"MIT"
] | 20 | 2021-06-28T07:00:24.000Z | 2022-03-22T21:23:36.000Z | from harvest.trader.trader import LiveTrader, PaperTrader
from harvest.trader.tester import BackTester
| 34.333333 | 57 | 0.864078 | 13 | 103 | 6.846154 | 0.615385 | 0.247191 | 0.382022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 103 | 2 | 58 | 51.5 | 0.946809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
299cf9b483a5b48615d59cf6b9f580dbc06d6af6 | 5,114 | py | Python | tests/test_gemm_split.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 100 | 2021-12-31T09:34:06.000Z | 2022-03-25T02:54:51.000Z | tests/test_gemm_split.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 12 | 2021-12-31T10:28:15.000Z | 2022-03-31T07:08:44.000Z | tests/test_gemm_split.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 21 | 2021-12-31T09:51:02.000Z | 2022-03-30T12:21:55.000Z | from ppq import *
from ppq.IR.morph import GraphDecomposer
from ppq.api import *
import torch
graph = BaseGraph(name='test', built_from=NetworkFramework.ONNX)
matmul = \
graph.create_operation(op_type='Gemm', name='gemm',
platform=TargetPlatform.UNSPECIFIED,
inputs=[graph.create_variable(), graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, 10]))],
outputs=[graph.create_variable()])
graph.create_operation(op_type='Softmax', name='softmax', platform=TargetPlatform.UNSPECIFIED,
inputs=[matmul.outputs[0], graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, ]))],
outputs=[graph.create_variable()])
processor = GraphDecomposer(graph)
processor.decompose_gemm()
assert len(graph.operations) == 2
assert len(graph.operations['gemm'].inputs) == 2
assert graph.operations['gemm'].type == 'Matmul'
graph = BaseGraph(name='test', built_from=NetworkFramework.ONNX)
matmul = \
graph.create_operation(op_type='Gemm', name='gemm',
platform=TargetPlatform.UNSPECIFIED,
inputs=[graph.create_variable(),
graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, 10])),
graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, 10]))],
outputs=[graph.create_variable()])
graph.create_operation(op_type='Softmax', name='softmax', platform=TargetPlatform.UNSPECIFIED,
inputs=[matmul.outputs[0], graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, ]))],
outputs=[graph.create_variable()])
processor = GraphDecomposer(graph)
processor.decompose_gemm()
assert len(graph.operations) == 3
assert len(graph.operations['gemm'].inputs) == 2
assert graph.operations['gemm'].type == 'Matmul'
graph = BaseGraph(name='test', built_from=NetworkFramework.ONNX)
matmul = \
graph.create_operation(op_type='Gemm', name='gemm',
platform=TargetPlatform.UNSPECIFIED,
inputs=[graph.create_variable(),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10])),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10]))],
attributes={'alpha': 2, 'beta': 3},
outputs=[graph.create_variable()])
graph.create_operation(op_type='Softmax', name='softmax', platform=TargetPlatform.UNSPECIFIED,
inputs=[matmul.outputs[0], graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, ]))],
outputs=[graph.create_variable()])
processor = GraphDecomposer(graph)
processor.decompose_gemm()
assert len(graph.operations) == 3
assert len(graph.operations['gemm'].inputs) == 2
assert graph.operations['gemm'].type == 'Matmul'
assert graph.operations['gemm'].inputs[1].value.mean().item() == 2
graph = BaseGraph(name='test', built_from=NetworkFramework.ONNX)
matmul = \
graph.create_operation(op_type='Gemm', name='gemm',
platform=TargetPlatform.UNSPECIFIED,
inputs=[graph.create_variable(),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10])),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10]))],
attributes={'transA': 0, 'transB': 1},
outputs=[graph.create_variable()])
graph.create_operation(op_type='Softmax', name='softmax', platform=TargetPlatform.UNSPECIFIED,
inputs=[matmul.outputs[0], graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, ]))],
outputs=[graph.create_variable()])
processor = GraphDecomposer(graph)
processor.decompose_gemm()
assert len(graph.operations) == 3
assert len(graph.operations['gemm'].inputs) == 2
assert graph.operations['gemm'].type == 'Matmul'
try:
graph = BaseGraph(name='test', built_from=NetworkFramework.ONNX)
matmul = \
graph.create_operation(op_type='Gemm', name='gemm',
platform=TargetPlatform.UNSPECIFIED,
inputs=[graph.create_variable(),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10])),
graph.create_variable(is_parameter=True, value=torch.ones(size=[10, 10]))],
attributes={'transA': 1, 'transB': 0},
outputs=[graph.create_variable()])
graph.create_operation(op_type='Softmax', name='softmax', platform=TargetPlatform.UNSPECIFIED,
inputs=[matmul.outputs[0], graph.create_variable(is_parameter=True, value=torch.zeros(size=[10, ]))],
outputs=[graph.create_variable()])
processor = GraphDecomposer(graph)
processor.decompose_gemm()
except ValueError:
pass | 52.721649 | 132 | 0.628862 | 546 | 5,114 | 5.75641 | 0.106227 | 0.136494 | 0.17531 | 0.093541 | 0.944321 | 0.944321 | 0.944321 | 0.944321 | 0.944321 | 0.944321 | 0 | 0.017074 | 0.232695 | 5,114 | 97 | 133 | 52.721649 | 0.783894 | 0 | 0 | 0.825581 | 0 | 0 | 0.043597 | 0 | 0 | 0 | 0 | 0 | 0.151163 | 1 | 0 | false | 0.011628 | 0.046512 | 0 | 0.046512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d9bb117b1476820297ae1f905862f2431282ac2a | 427 | py | Python | tests/fixtures/defxmlschema/chapter05/__init__.py | nimish/xsdata | 7afe2781b66982428cc1731f53c065086acd35c1 | [
"MIT"
] | null | null | null | tests/fixtures/defxmlschema/chapter05/__init__.py | nimish/xsdata | 7afe2781b66982428cc1731f53c065086acd35c1 | [
"MIT"
] | null | null | null | tests/fixtures/defxmlschema/chapter05/__init__.py | nimish/xsdata | 7afe2781b66982428cc1731f53c065086acd35c1 | [
"MIT"
] | null | null | null | from tests.fixtures.defxmlschema.chapter05.chapter05 import ItemsType
from tests.fixtures.defxmlschema.chapter05.chapter05 import OrderType
from tests.fixtures.defxmlschema.chapter05.chapter05prod import ProductType
from tests.fixtures.defxmlschema.chapter05.chapter05prod import SizeType
from tests.fixtures.defxmlschema.chapter05.chapter05 import Order
from tests.fixtures.defxmlschema.chapter05.chapter05prod import Product
| 61 | 75 | 0.887588 | 48 | 427 | 7.895833 | 0.270833 | 0.14248 | 0.269129 | 0.459103 | 0.870712 | 0.870712 | 0.870712 | 0 | 0 | 0 | 0 | 0.059553 | 0.056206 | 427 | 6 | 76 | 71.166667 | 0.880893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
8a0062ea35de7d354647f80552b15cd2191b17f6 | 12,574 | py | Python | tests/hwsim/test_ap_pmf.py | rzr/wpasupplicant | 3f7ac05878ba965e941f2b5b80b8cb744e63f506 | [
"Unlicense"
] | 1 | 2016-05-12T08:49:00.000Z | 2016-05-12T08:49:00.000Z | tests/hwsim/test_ap_pmf.py | jku/hostap | a61fcc131aa6a7e396eee6a3c613001bf0475cd1 | [
"Unlicense"
] | null | null | null | tests/hwsim/test_ap_pmf.py | jku/hostap | a61fcc131aa6a7e396eee6a3c613001bf0475cd1 | [
"Unlicense"
] | null | null | null | # Protected management frames tests
# Copyright (c) 2013, Jouni Malinen <j@w1.fi>
#
# This software may be distributed under the terms of the BSD license.
# See README for more details.
import time
import subprocess
import logging
logger = logging.getLogger()
import hwsim_utils
import hostapd
from wlantest import Wlantest
from wpasupplicant import WpaSupplicant
from test_ap_eap import eap_connect
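# Each test below brings up a WPA2-PSK (or WPA2-EAP) AP with a particular
# ieee80211w setting and uses wlantest counters to verify that management
# frame protection, SA Query, and association comeback behave as required.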
def test_ap_pmf_required(dev, apdev):
"""WPA2-PSK AP with PMF required"""
ssid = "test-pmf-required"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
params["wpa_key_mgmt"] = "WPA-PSK-SHA256";
params["ieee80211w"] = "2";
hapd = hostapd.add_ap(apdev[0]['ifname'], params)
key_mgmt = hapd.get_config()['key_mgmt']
if key_mgmt.split(' ')[0] != "WPA-PSK-SHA256":
raise Exception("Unexpected GET_CONFIG(key_mgmt): " + key_mgmt)
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
if "[WPA2-PSK-SHA256-CCMP]" not in dev[0].request("SCAN_RESULTS"):
raise Exception("Scan results missing RSN element info")
hwsim_utils.test_connectivity(dev[0].ifname, apdev[0]['ifname'])
dev[1].connect(ssid, psk="12345678", ieee80211w="2",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[1].ifname, apdev[0]['ifname'])
hapd = hostapd.Hostapd(apdev[0]['ifname'])
hapd.request("SA_QUERY " + dev[0].p2p_interface_addr())
hapd.request("SA_QUERY " + dev[1].p2p_interface_addr())
wt.require_ap_pmf_mandatory(apdev[0]['bssid'])
wt.require_sta_pmf(apdev[0]['bssid'], dev[0].p2p_interface_addr())
wt.require_sta_pmf_mandatory(apdev[0]['bssid'], dev[1].p2p_interface_addr())
time.sleep(0.1)
if wt.get_sta_counter("valid_saqueryresp_tx", apdev[0]['bssid'],
dev[0].p2p_interface_addr()) < 1:
raise Exception("STA did not reply to SA Query")
if wt.get_sta_counter("valid_saqueryresp_tx", apdev[0]['bssid'],
dev[1].p2p_interface_addr()) < 1:
raise Exception("STA did not reply to SA Query")
def test_ap_pmf_optional(dev, apdev):
"""WPA2-PSK AP with PMF optional"""
ssid = "test-pmf-optional"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
params["wpa_key_mgmt"] = "WPA-PSK";
params["ieee80211w"] = "1";
hostapd.add_ap(apdev[0]['ifname'], params)
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[0].ifname, apdev[0]['ifname'])
dev[1].connect(ssid, psk="12345678", ieee80211w="2",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[1].ifname, apdev[0]['ifname'])
wt.require_ap_pmf_optional(apdev[0]['bssid'])
wt.require_sta_pmf(apdev[0]['bssid'], dev[0].p2p_interface_addr())
wt.require_sta_pmf_mandatory(apdev[0]['bssid'], dev[1].p2p_interface_addr())
def test_ap_pmf_optional_2akm(dev, apdev):
"""WPA2-PSK AP with PMF optional (2 AKMs)"""
ssid = "test-pmf-optional-2akm"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
params["wpa_key_mgmt"] = "WPA-PSK WPA-PSK-SHA256";
params["ieee80211w"] = "1";
hostapd.add_ap(apdev[0]['ifname'], params)
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[0].ifname, apdev[0]['ifname'])
dev[1].connect(ssid, psk="12345678", ieee80211w="2",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[1].ifname, apdev[0]['ifname'])
wt.require_ap_pmf_optional(apdev[0]['bssid'])
wt.require_sta_pmf(apdev[0]['bssid'], dev[0].p2p_interface_addr())
wt.require_sta_key_mgmt(apdev[0]['bssid'], dev[0].p2p_interface_addr(),
"PSK-SHA256")
wt.require_sta_pmf_mandatory(apdev[0]['bssid'], dev[1].p2p_interface_addr())
wt.require_sta_key_mgmt(apdev[0]['bssid'], dev[1].p2p_interface_addr(),
"PSK-SHA256")
def test_ap_pmf_negative(dev, apdev):
"""WPA2-PSK AP without PMF (negative test)"""
ssid = "test-pmf-negative"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
hostapd.add_ap(apdev[0]['ifname'], params)
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[0].ifname, apdev[0]['ifname'])
try:
dev[1].connect(ssid, psk="12345678", ieee80211w="2",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hwsim_utils.test_connectivity(dev[1].ifname, apdev[0]['ifname'])
raise Exception("PMF required STA connected to no PMF AP")
    except Exception as e:
logger.debug("Ignore expected exception: " + str(e))
wt.require_ap_no_pmf(apdev[0]['bssid'])
def test_ap_pmf_assoc_comeback(dev, apdev):
"""WPA2-PSK AP with PMF association comeback"""
ssid = "assoc-comeback"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
params["wpa_key_mgmt"] = "WPA-PSK-SHA256";
params["ieee80211w"] = "2";
hapd = hostapd.add_ap(apdev[0]['ifname'], params)
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
hapd.set("ext_mgmt_frame_handling", "1")
dev[0].request("DISCONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-DISCONNECTED"])
if ev is None:
raise Exception("Timeout on disconnection")
hapd.set("ext_mgmt_frame_handling", "0")
dev[0].request("REASSOCIATE")
ev = dev[0].wait_event(["CTRL-EVENT-CONNECTED"])
if ev is None:
raise Exception("Timeout on re-connection")
if wt.get_sta_counter("assocresp_comeback", apdev[0]['bssid'],
dev[0].p2p_interface_addr()) < 1:
raise Exception("AP did not use association comeback request")
def test_ap_pmf_assoc_comeback2(dev, apdev):
"""WPA2-PSK AP with PMF association comeback (using DROP_SA)"""
ssid = "assoc-comeback"
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
params = hostapd.wpa2_params(ssid=ssid, passphrase="12345678")
params["wpa_key_mgmt"] = "WPA-PSK";
params["ieee80211w"] = "1";
hapd = hostapd.add_ap(apdev[0]['ifname'], params)
dev[0].connect(ssid, psk="12345678", ieee80211w="2",
key_mgmt="WPA-PSK", proto="WPA2", scan_freq="2412")
if "OK" not in dev[0].request("DROP_SA"):
raise Exception("DROP_SA failed")
dev[0].request("REASSOCIATE")
ev = dev[0].wait_event(["CTRL-EVENT-CONNECTED"])
if ev is None:
raise Exception("Timeout on re-connection")
if wt.get_sta_counter("reassocresp_comeback", apdev[0]['bssid'],
dev[0].p2p_interface_addr()) < 1:
raise Exception("AP did not use reassociation comeback request")
def test_ap_pmf_sta_sa_query(dev, apdev):
"""WPA2-PSK AP with station using SA Query"""
ssid = "assoc-comeback"
addr = dev[0].p2p_dev_addr()
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
wpas = WpaSupplicant(global_iface='/tmp/wpas-wlan5')
wpas.interface_add("wlan5", drv_params="use_monitor=1")
id = wpas.add_network()
wpas.set_network(id, "mode", "2")
wpas.set_network_quoted(id, "ssid", ssid)
wpas.set_network(id, "proto", "WPA2")
wpas.set_network(id, "key_mgmt", "WPA-PSK-SHA256")
wpas.set_network(id, "ieee80211w", "2")
wpas.set_network_quoted(id, "psk", "12345678")
wpas.set_network(id, "pairwise", "CCMP")
wpas.set_network(id, "group", "CCMP")
wpas.set_network(id, "frequency", "2412")
wpas.connect_network(id)
bssid = wpas.p2p_dev_addr()
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
wpas.request("DEAUTHENTICATE " + addr + " test=0")
wpas.request("DISASSOCIATE " + addr + " test=0")
ev = dev[0].wait_event(["CTRL-EVENT-DISCONNECTED"], timeout=1)
if ev is not None:
raise Exception("Unexpected disconnection")
wpas.request("DEAUTHENTICATE " + addr + " reason=6 test=0")
wpas.request("DISASSOCIATE " + addr + " reason=7 test=0")
ev = dev[0].wait_event(["CTRL-EVENT-DISCONNECTED"], timeout=1)
if ev is not None:
raise Exception("Unexpected disconnection")
if wt.get_sta_counter("valid_saqueryreq_tx", bssid, addr) < 1:
raise Exception("STA did not send SA Query")
if wt.get_sta_counter("valid_saqueryresp_rx", bssid, addr) < 1:
raise Exception("AP did not reply to SA Query")
def test_ap_pmf_sta_unprot_deauth_burst(dev, apdev):
"""WPA2-PSK AP with station receiving burst of unprotected Deauthentication frames"""
ssid = "deauth-attack"
addr = dev[0].p2p_dev_addr()
wt = Wlantest()
wt.flush()
wt.add_passphrase("12345678")
wpas = WpaSupplicant(global_iface='/tmp/wpas-wlan5')
wpas.interface_add("wlan5", drv_params="use_monitor=1")
id = wpas.add_network()
wpas.set_network(id, "mode", "2")
wpas.set_network_quoted(id, "ssid", ssid)
wpas.set_network(id, "proto", "WPA2")
wpas.set_network(id, "key_mgmt", "WPA-PSK-SHA256")
wpas.set_network(id, "ieee80211w", "2")
wpas.set_network_quoted(id, "psk", "12345678")
wpas.set_network(id, "pairwise", "CCMP")
wpas.set_network(id, "group", "CCMP")
wpas.set_network(id, "frequency", "2412")
wpas.connect_network(id)
bssid = wpas.p2p_dev_addr()
dev[0].connect(ssid, psk="12345678", ieee80211w="1",
key_mgmt="WPA-PSK WPA-PSK-SHA256", proto="WPA2",
scan_freq="2412")
for i in range(0, 10):
wpas.request("DEAUTHENTICATE " + addr + " reason=6 test=0")
wpas.request("DISASSOCIATE " + addr + " reason=7 test=0")
ev = dev[0].wait_event(["CTRL-EVENT-DISCONNECTED"], timeout=1)
if ev is not None:
raise Exception("Unexpected disconnection")
num_req = wt.get_sta_counter("valid_saqueryreq_tx", bssid, addr)
num_resp = wt.get_sta_counter("valid_saqueryresp_rx", bssid, addr)
if num_req < 1:
raise Exception("STA did not send SA Query")
if num_resp < 1:
raise Exception("AP did not reply to SA Query")
if num_req > 1:
raise Exception("STA initiated too many SA Query procedures (%d)" % num_req)
time.sleep(10)
for i in range(0, 5):
wpas.request("DEAUTHENTICATE " + addr + " reason=6 test=0")
wpas.request("DISASSOCIATE " + addr + " reason=7 test=0")
ev = dev[0].wait_event(["CTRL-EVENT-DISCONNECTED"], timeout=1)
if ev is not None:
raise Exception("Unexpected disconnection")
num_req = wt.get_sta_counter("valid_saqueryreq_tx", bssid, addr)
num_resp = wt.get_sta_counter("valid_saqueryresp_rx", bssid, addr)
if num_req != 2 or num_resp != 2:
raise Exception("Unexpected number of SA Query procedures (req=%d resp=%d)" % (num_req, num_resp))
def test_ap_pmf_required_eap(dev, apdev):
"""WPA2-EAP AP with PMF required"""
ssid = "test-pmf-required-eap"
params = hostapd.wpa2_eap_params(ssid=ssid)
params["wpa_key_mgmt"] = "WPA-EAP-SHA256";
params["ieee80211w"] = "2";
hapd = hostapd.add_ap(apdev[0]['ifname'], params)
key_mgmt = hapd.get_config()['key_mgmt']
if key_mgmt.split(' ')[0] != "WPA-EAP-SHA256":
raise Exception("Unexpected GET_CONFIG(key_mgmt): " + key_mgmt)
dev[0].connect("test-pmf-required-eap", key_mgmt="WPA-EAP-SHA256",
ieee80211w="2", eap="PSK", identity="psk.user@example.com",
password_hex="0123456789abcdef0123456789abcdef")
| 44.431095 | 106 | 0.642675 | 1,742 | 12,574 | 4.463835 | 0.115959 | 0.018004 | 0.027006 | 0.031764 | 0.835905 | 0.79784 | 0.776363 | 0.758616 | 0.727752 | 0.706019 | 0 | 0.068066 | 0.200811 | 12,574 | 282 | 107 | 44.588652 | 0.705742 | 0.013918 | 0 | 0.708 | 0 | 0 | 0.245924 | 0.026842 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.06 | 0.032 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
8a22b4804174b62a51d20c642ac6a09996b34c1f | 2,674 | py | Python | tests/terraform/checks/resource/azure/test_AzureManagedDiscEncryption.py | tallengft/checkov | 19b5d90ef42dd0fa209899c6012c9a0d4716bfdc | [
"Apache-2.0"
] | 1 | 2021-03-07T07:23:46.000Z | 2021-03-07T07:23:46.000Z | tests/terraform/checks/resource/azure/test_AzureManagedDiscEncryption.py | tallengft/checkov | 19b5d90ef42dd0fa209899c6012c9a0d4716bfdc | [
"Apache-2.0"
] | 5 | 2021-02-16T13:55:58.000Z | 2022-01-31T23:03:14.000Z | tests/terraform/checks/resource/azure/test_AzureManagedDiscEncryption.py | robeden/checkov | 5a354bf3fb7a73da1f057b03de884b4347f5dbb6 | [
"Apache-2.0"
] | 1 | 2021-03-07T07:23:39.000Z | 2021-03-07T07:23:39.000Z | import unittest
import hcl2
from checkov.common.models.enums import CheckResult
from checkov.terraform.checks.resource.azure.AzureManagedDiscEncryption import check
class TestAzureManagedDiscEncryption(unittest.TestCase):
def test_failure(self):
hcl_res = hcl2.loads("""
resource "azurerm_managed_disk" "example" {
name = var.disk_name
location = var.location
resource_group_name = var.resource_group_name
storage_account_type = var.storage_account_type
create_option = "Empty"
disk_size_gb = var.disk_size_gb
encryption_settings {
enabled = false
}
tags = var.common_tags
}
""")
resource_conf = hcl_res['resource'][0]['azurerm_managed_disk']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def testmissing_failure(self):
hcl_res = hcl2.loads("""
resource "azurerm_managed_disk" "example" {
name = var.disk_name
location = var.location
resource_group_name = var.resource_group_name
storage_account_type = var.storage_account_type
create_option = "Empty"
disk_size_gb = var.disk_size_gb
tags = var.common_tags
}
""")
resource_conf = hcl_res['resource'][0]['azurerm_managed_disk']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.FAILED, scan_result)
def test_success(self):
hcl_res = hcl2.loads("""
resource "azurerm_managed_disk" "example" {
name = var.disk_name
location = var.location
resource_group_name = var.resource_group_name
storage_account_type = var.storage_account_type
create_option = "Empty"
disk_size_gb = var.disk_size_gb
encryption_settings {
enabled = true
}
tags = var.common_tags
}
""")
resource_conf = hcl_res['resource'][0]['azurerm_managed_disk']['example']
scan_result = check.scan_resource_conf(conf=resource_conf)
self.assertEqual(CheckResult.PASSED, scan_result)
if __name__ == '__main__':
unittest.main()
| 39.910448 | 84 | 0.560957 | 256 | 2,674 | 5.492188 | 0.226563 | 0.076814 | 0.076814 | 0.106686 | 0.806543 | 0.806543 | 0.806543 | 0.806543 | 0.806543 | 0.806543 | 0 | 0.00411 | 0.363126 | 2,674 | 66 | 85 | 40.515152 | 0.821491 | 0 | 0 | 0.689655 | 0 | 0 | 0.63089 | 0.077412 | 0 | 0 | 0 | 0 | 0.051724 | 1 | 0.051724 | false | 0.017241 | 0.068966 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8a2c2c755d5e17f7c1e6f7a3022cba4d079624dc | 81,544 | py | Python | dnacentersdk/api/v2_1_2/sda.py | nonstdout/dnacentersdk | dbbbc4baa5300aa9e5c9193f2ea71438018095f5 | [
"MIT"
] | null | null | null | dnacentersdk/api/v2_1_2/sda.py | nonstdout/dnacentersdk | dbbbc4baa5300aa9e5c9193f2ea71438018095f5 | [
"MIT"
] | null | null | null | dnacentersdk/api/v2_1_2/sda.py | nonstdout/dnacentersdk | dbbbc4baa5300aa9e5c9193f2ea71438018095f5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""DNA Center SDA API wrapper.
Copyright (c) 2019-2020 Cisco and/or its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from builtins import *
from past.builtins import basestring
from ...restsession import RestSession
from ...utils import (
check_type,
dict_from_items_with_values,
apply_path_params,
dict_of_str,
)
class Sda(object):
"""DNA Center SDA API (version: 2.1.2).
Wraps the DNA Center SDA
API and exposes the API as native Python
methods that return native Python objects.
"""
def __init__(self, session, object_factory, request_validator):
"""Initialize a new Sda
object with the provided RestSession.
Args:
session(RestSession): The RESTful session object to be used for
API calls to the DNA Center service.
Raises:
TypeError: If the parameter types are incorrect.
"""
check_type(session, RestSession)
super(Sda, self).__init__()
self._session = session
self._object_factory = object_factory
self._request_validator = request_validator
def delete_port_assignment_for_access_point(self,
device_ip,
interface_name,
headers=None,
**request_parameters):
"""Delete Port assignment for access point in SDA Fabric.
Args:
device_ip(basestring): device-ip query parameter.
interface_name(basestring): interfaceName query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ip, basestring,
may_be_none=False)
check_type(interface_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'device-ip':
device_ip,
'interfaceName':
interface_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/access-'
+ 'point')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_07874a4c4c9aabd9_v2_1_2', json_data)
def get_device_info(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Get device info from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_138518e14069ab5f_v2_1_2', json_data)
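    # The MyDict returned above supports both access styles described in the
    # docstrings, e.g. (field name hypothetical):
    #   info = self.get_device_info('10.10.10.1')
    #   info.status == info['status']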
def get_sda_fabric_info(self,
fabric_name,
headers=None,
**request_parameters):
"""Get SDA Fabric Info.
Args:
fabric_name(basestring): Fabric Name.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(fabric_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'fabricName':
fabric_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_16a1bb5d48cb873d_v2_1_2', json_data)
def add_ip_pool_in_sda_virtual_network(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add IP Pool in SDA Virtual Network.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request
.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_208579ea4ed98f4f_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtualnetwork/ippool')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_208579ea4ed98f4f_v2_1_2', json_data)
def get_vn(self,
site_name_hierarchy,
virtual_network_name,
headers=None,
**request_parameters):
"""Get virtual network (VN) from SDA Fabric.
Args:
virtual_network_name(basestring): virtualNetworkName
query parameter.
site_name_hierarchy(basestring): siteNameHierarchy query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(virtual_network_name, basestring,
may_be_none=False)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'virtualNetworkName':
virtual_network_name,
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtual-network')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_2eb1fa1e49caa2b4_v2_1_2', json_data)
def delete_site(self,
site_name_hierarchy,
headers=None,
**request_parameters):
"""Delete Site from SDA Fabric.
Args:
site_name_hierarchy(basestring): Site Name Hierarchy.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric-site')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_50864acf4ad8b54d_v2_1_2', json_data)
def get_port_assignment_for_access_point(self,
device_ip,
interface_name,
headers=None,
**request_parameters):
"""Get Port assignment for access point in SDA Fabric.
Args:
device_ip(basestring): device-ip query parameter.
interface_name(basestring): interfaceName query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ip, basestring,
may_be_none=False)
check_type(interface_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'device-ip':
device_ip,
'interfaceName':
interface_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/access-'
+ 'point')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_5097f8d445f98f51_v2_1_2', json_data)
def delete_ip_pool_from_sda_virtual_network(self,
ip_pool_name,
virtual_network_name,
headers=None,
**request_parameters):
"""Delete IP Pool from SDA Virtual Network.
Args:
ip_pool_name(basestring): ipPoolName query parameter.
virtual_network_name(basestring): virtualNetworkName
query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(ip_pool_name, basestring,
may_be_none=False)
check_type(virtual_network_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'ipPoolName':
ip_pool_name,
'virtualNetworkName':
virtual_network_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtualnetwork/ippool')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_549e4aff42bbb52a_v2_1_2', json_data)
def delete_edge_device(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Delete edge device from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/edge-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_1fb8f9f24c998133_v2_1_2', json_data)
def add_fabric(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_6db9292d4f28a26b_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_6db9292d4f28a26b_v2_1_2', json_data)
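# A hedged payload sketch for add_fabric. The field name `fabricName` is an
# assumption inferred from the query parameter used by delete_sda_fabric
# below; it is not confirmed against the v2.1.2 request schema:
#
#   dnac.sda.add_fabric(payload=[{'fabricName': 'Default LAN Fabric'}])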
def delete_default_authentication_profile(self,
site_name_hierarchy,
headers=None,
**request_parameters):
"""Add default authentication profile in SDA Fabric.
Args:
site_name_hierarchy(basestring): siteNameHierarchy query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/authentication-profile')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_3ebcda3e4acbafb7_v2_1_2', json_data)
def get_site(self,
site_name_hierarchy,
headers=None,
**request_parameters):
"""Get Site info from SDA Fabric.
Args:
site_name_hierarchy(basestring): Site Name Hierarchy.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric-site')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_80b7f8e6406a8701_v2_1_2', json_data)
def get_edge_device(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Get edge device from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/edge-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_7683f90b4efab090_v2_1_2', json_data)
def add_edge_device(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add edge device in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_87a8ba444ce9bc59_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/edge-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_87a8ba444ce9bc59_v2_1_2', json_data)
def add_vn(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add virtual network (VN) in SDA Fabric .
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_518c59cd441aa9fc_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtual-network')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_518c59cd441aa9fc_v2_1_2', json_data)
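# A hedged payload sketch for add_vn. The field names mirror the query
# parameters of get_virtual_network/delete_vn and are assumptions, not
# confirmed against the v2.1.2 request schema:
#
#   dnac.sda.add_vn(payload=[{
#       'virtualNetworkName': 'GUEST_VN',
#       'siteNameHierarchy': 'Global/USA/Site1'}])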
def get_device_role_in_sda_fabric(self,
device_management_ip_address,
headers=None,
**request_parameters):
"""Get device role in SDA Fabric.
Args:
device_management_ip_address(basestring): Device
Management IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_management_ip_address, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceManagementIpAddress':
device_management_ip_address,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/device/role')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_8a92d87c416a8e83_v2_1_2', json_data)
def add_port_assignment_for_user_device(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add Port assignment for user device in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_9582ab824ce8b29d_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/user-'
+ 'device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_9582ab824ce8b29d_v2_1_2', json_data)
def get_sda_fabric_count(self,
headers=None,
**request_parameters):
"""Get SDA Fabric Count.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric/count')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_6fa0f8d54d29857a_v2_1_2', json_data)
def get_control_plane_device(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Get control plane device from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/control-plane-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_aba4991d4e9b8747_v2_1_2', json_data)
def add_default_authentication_profile(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add default authentication profile in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_bca339d844c8a3c0_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/authentication-profile')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_bca339d844c8a3c0_v2_1_2', json_data)
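# A hedged payload sketch for add_default_authentication_profile.
# `siteNameHierarchy` mirrors the query parameter used by the get/delete
# variants; `authenticateTemplateName` is an assumption and is not confirmed
# against the v2.1.2 request schema:
#
#   dnac.sda.add_default_authentication_profile(payload=[{
#       'siteNameHierarchy': 'Global/USA/Site1',
#       'authenticateTemplateName': 'No Authentication'}])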
def update_default_authentication_profile(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Update default authentication profile in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_8984ea7744d98a54_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/authentication-profile')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.put(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.put(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_8984ea7744d98a54_v2_1_2', json_data)
def delete_vn(self,
site_name_hierarchy,
virtual_network_name,
headers=None,
**request_parameters):
"""Delete virtual network (VN) from SDA Fabric .
Args:
virtual_network_name(basestring): virtualNetworkName
query parameter.
site_name_hierarchy(basestring): siteNameHierarchy query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(virtual_network_name, basestring,
may_be_none=False)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'virtualNetworkName':
virtual_network_name,
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtual-network')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_c78c9ad245bb9657_v2_1_2', json_data)
def get_default_authentication_profile(self,
site_name_hierarchy,
headers=None,
**request_parameters):
"""Get default authentication profile from SDA Fabric.
Args:
site_name_hierarchy(basestring): siteNameHierarchy query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(site_name_hierarchy, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'siteNameHierarchy':
site_name_hierarchy,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/authentication-profile')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_8b908a4e4c5a9a23_v2_1_2', json_data)
def delete_sda_fabric(self,
fabric_name,
headers=None,
**request_parameters):
"""Delete SDA Fabric.
Args:
fabric_name(basestring): Fabric Name.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(fabric_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'fabricName':
fabric_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_d0aafa694f4b9d7b_v2_1_2', json_data)
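# Usage sketch (hypothetical `dnac` client as above; the fabric name value
# is illustrative only):
#
#   dnac.sda.delete_sda_fabric(fabric_name='Default LAN Fabric')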
def delete_port_assignment_for_user_device(self,
device_ip,
interface_name,
headers=None,
**request_parameters):
"""Delete Port assignment for user device in SDA Fabric.
Args:
device_ip(basestring): device-ip query parameter.
interface_name(basestring): interfaceName query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ip, basestring,
may_be_none=False)
check_type(interface_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'device-ip':
device_ip,
'interfaceName':
interface_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/user-'
+ 'device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_cba5b8b14edb81f4_v2_1_2', json_data)
def add_control_plane_device(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add control plane device in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_dd85c91042489a3f_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/control-plane-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_dd85c91042489a3f_v2_1_2', json_data)
def gets_border_device_detail(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Gets border device detail from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/border-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_98a39bf4485a9871_v2_1_2', json_data)
def get_port_assignment_for_user_device(self,
device_ip,
interface_name,
headers=None,
**request_parameters):
"""Get Port assignment for user device in SDA Fabric.
Args:
device_ip(basestring): device-ip query parameter.
interface_name(basestring): interfaceName query
parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ip, basestring,
may_be_none=False)
check_type(interface_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'device-ip':
device_ip,
'interfaceName':
interface_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/user-'
+ 'device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_a4a1e8ed41cb9653_v2_1_2', json_data)
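# Usage sketch (hypothetical `dnac` client; the address and interface values
# are illustrative only):
#
#   pa = dnac.sda.get_port_assignment_for_user_device(
#       device_ip='10.0.0.1',
#       interface_name='GigabitEthernet1/0/1')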
def get_ip_pool_from_sda_virtual_network(self,
ip_pool_name,
virtual_network_name,
headers=None,
**request_parameters):
"""Get IP Pool from SDA Virtual Network.
Args:
ip_pool_name(basestring): ipPoolName query parameter.
virtual_network_name(basestring): virtualNetworkName
query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(ip_pool_name, basestring,
may_be_none=False)
check_type(virtual_network_name, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'ipPoolName':
ip_pool_name,
'virtualNetworkName':
virtual_network_name,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/virtualnetwork/ippool')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=params)
return self._object_factory('bpm_fa9219bf45c8b43b_v2_1_2', json_data)
def adds_border_device(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Adds border device in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_bead7b3443b996a7_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/border-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_bead7b3443b996a7_v2_1_2', json_data)
def add_port_assignment_for_access_point(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add Port assignment for access point in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_c2a43ad24098baa7_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/hostonboarding/access-'
+ 'point')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_c2a43ad24098baa7_v2_1_2', json_data)
def deletes_border_device(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Deletes border device from SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/border-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_cb81b93540baaab0_v2_1_2', json_data)
def add_site(self,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Add Site in SDA Fabric.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_d2b4d9d04a4b884c_v2_1_2')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/fabric-site')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=params,
json=_payload)
return self._object_factory('bpm_d2b4d9d04a4b884c_v2_1_2', json_data)
def delete_control_plane_device(self,
device_ipaddress,
headers=None,
**request_parameters):
"""Delete control plane device in SDA Fabric.
Args:
device_ipaddress(basestring): Device IP Address.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_ipaddress, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
params = {
'deviceIPAddress':
device_ipaddress,
}
params.update(request_parameters)
params = dict_from_items_with_values(params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/business/sda/control-plane-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=params)
return self._object_factory('bpm_f6bd6bf64e6890be_v2_1_2', json_data)
| 38.337565 | 78 | 0.573408 | 8,634 | 81,544 | 5.169794 | 0.035094 | 0.051797 | 0.038848 | 0.028945 | 0.942647 | 0.939264 | 0.93568 | 0.920737 | 0.91583 | 0.908818 | 0 | 0.011719 | 0.362737 | 81,544 | 2,126 | 79 | 38.355597 | 0.847243 | 0.301383 | 0 | 0.854393 | 0 | 0 | 0.078529 | 0.054578 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029289 | false | 0 | 0.005021 | 0 | 0.063598 | 0.000837 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8a5113417786057f2cefeaf7309f0d02b40ac05d | 17895 | py | Python | BEAVRS/docs/specifications/scripts/make_assy_figs.py | AllSafeCyberSecur1ty/Nuclear-Engineering | 302d6dcc7c0a85a9191098366b076cf9cb5a9f6e | ["MIT"] | 1 | 2022-03-26T20:01:13.000Z | 2022-03-26T20:01:13.000Z | BEAVRS/docs/specifications/scripts/make_assy_figs.py | AllSafeCyberSecur1ty/Nuclear-Engineering | 302d6dcc7c0a85a9191098366b076cf9cb5a9f6e | ["MIT"] | null | null | null | BEAVRS/docs/specifications/scripts/make_assy_figs.py | AllSafeCyberSecur1ty/Nuclear-Engineering | 302d6dcc7c0a85a9191098366b076cf9cb5a9f6e | ["MIT"] | 1 | 2022-03-26T19:59:13.000Z | 2022-03-26T19:59:13.000Z | #!/usr/bin/env python
import sys
import os
# Output base directory may be given as the first CLI argument; default to ".."
try:
    base = sys.argv[1]
except IndexError:
    base = ".."
seq = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
node_t = "\\renewcommand{{\Node{0}{1}}}{{{text}}}\n"
node_link_t = "\\renewcommand{{\NodeLink{0}{1}}}{{{link}}}\n"
node_fill_t = "\\renewcommand{{\NodeFill{0}{1}}}{{{fill}}}\n"
fig_str = r"""\begin{{figure}}[htpb]
\centering
\hypertarget{{{label}_target}}{{}}
\begin{{tikzpicture}}[draw=black, x=\Size,y=\Size, scale={scale}]
\foreach \col/\colLetter in \Sequence {{%
\foreach \row/\rowLetter in \Sequence{{%
\pgfmathtruncatemacro{{\value}}{{\col+\NumOfColumns*(\row-1)}}
\def\NodeText{{\expandafter\csname Node\rowLetter\colLetter\endcsname}}
\def\NodeLink{{\expandafter\csname NodeLink\rowLetter\colLetter\endcsname}}
\def\NodeFill{{\expandafter\csname NodeFill\rowLetter\colLetter\endcsname}}
\node [Square, hyperlink node=\NodeLink, fill=\NodeFill] at ($(\col,-\row)-(0.5,0.5)$) {{\NodeText}};
}}
}}
{extra}
\end{{tikzpicture}}
\caption[{altcap}]{{{caption} Source: {source} \label{{{label}}}}}
\end{{figure}}"""
GTU = ("G","fig_guidetube_pin","yellow!40")
INS = ("I","fig_instr_pin","white")
BA = ("B","fig_ba_pin","red!40")
default = ("","fig_fuel_pin","blue!10")
################################################################################
################################################################################
################################################################################
# Cycle 1
######################## 6 BA assembly
outp = os.path.join(base,"specifications{0}assy{0}figs{0}6ba.tex".format(os.sep))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube / instrument tube lattice positions (B = BA rod, G = guide tube, I = instrument tube)
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = GTU
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = GTU
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = BA
if r+1 == 9 and c+1 == 3: node = GTU
if r+1 == 9 and c+1 == 6: node = GTU
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = GTU
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = GTU
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = GTU
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = GTU
if r+1 == 14 and c+1 == 4: node = GTU
if r+1 == 15 and c+1 == 6: node = GTU
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = GTU
if r+1 == 14 and c+1 == 14: node = GTU
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
e = " \\draw[->,thick] (8.5,-1) -- (8.5,0);\n \\node[anchor=south] at (8.5,0) {Core Center};"
outStr += fig_str.format(extra=e,scale=1,altcap="The 6BA burnable absorber configuration.",caption="The 6BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{G} denotes a guide tube location, \\textbf{B} denotes a burnable absorber rod, and \\textbf{I} denotes a guide tube position that might contain an instrument tube.",label="ass_6ba",
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
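# A minimal refactoring sketch (not used by this script): each assembly map
# below could instead be driven from a {(row, col): node} dict, removing the
# repeated if-chains. The names `positions` and `write_assembly` are
# hypothetical, introduced here for illustration only.
def write_assembly(path, positions, caption, label, extra="", scale=1,
                   source=r"\ref{num:sheet_BPs}"):
    out = ""
    for r, R in enumerate(seq):
        for c, C in enumerate(seq):
            # Fall back to a fuel-rod cell wherever no node is specified
            node = positions.get((r + 1, c + 1), default)
            out += node_t.format(R, C, text=node[0])
            out += node_link_t.format(R, C, link=node[1])
            out += node_fill_t.format(R, C, fill=node[2])
    out += fig_str.format(extra=extra, scale=scale, altcap=caption,
                          caption=caption, label=label, source=source)
    with open(path, 'w') as fh:
        fh.write(out)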
######################## 12 BA assembly
outp = os.path.join(base,"specifications{0}assy{0}figs{0}12ba.tex".format(os.sep))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube / instrument tube lattice positions (B = BA rod, G = guide tube, I = instrument tube)
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = GTU
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = GTU
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = BA
if r+1 == 9 and c+1 == 3: node = GTU
if r+1 == 9 and c+1 == 6: node = GTU
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = GTU
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = BA
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = GTU
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = BA
if r+1 == 14 and c+1 == 4: node = BA
if r+1 == 15 and c+1 == 6: node = BA
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = BA
if r+1 == 14 and c+1 == 14: node = BA
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The 12BA burnable absorber configuration for cycle 1.",caption="The 12BA burnable absorber configuration for cycle 1. Blank locations denote fuel rods, \\textbf{G} denotes a guide tube location, \\textbf{B} denotes a burnable absorber rod, and \\textbf{I} denotes a guide tube position that might contain an instrument tube.",label="ass_12ba",
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
######################## 15 BA assembly
outp = os.path.join(base,"specifications{0}assy{0}figs{0}15ba.tex".format(os.sep))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube / instrument tube lattice positions (B = BA rod, G = guide tube, I = instrument tube)
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = BA
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = GTU
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = BA
if r+1 == 6 and c+1 == 9: node = BA
if r+1 == 6 and c+1 == 12: node = BA
if r+1 == 6 and c+1 == 15: node = GTU
if r+1 == 9 and c+1 == 3: node = BA
if r+1 == 9 and c+1 == 6: node = BA
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = BA
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = BA
if r+1 == 12 and c+1 == 6: node = BA
if r+1 == 12 and c+1 == 9: node = BA
if r+1 == 12 and c+1 == 12: node = BA
if r+1 == 12 and c+1 == 15: node = GTU
if r+1 == 14 and c+1 == 4: node = GTU
if r+1 == 15 and c+1 == 6: node = GTU
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = GTU
if r+1 == 14 and c+1 == 14: node = GTU
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
e = " \\draw[->,thick] (0,-1) -- (-1,0);\n \\node[anchor=south] at (-1,0) {Core Center};"
outStr += fig_str.format(extra=e,scale=1,altcap="The 15BA burnable absorber configuration.",caption="The 15BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{G} denotes a guide tube location, \\textbf{B} denotes a burnable absorber rod, and \\textbf{I} denotes a guide tube position that might contain an instrument tube.",label="ass_15ba",
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
######################## 16 BA assembly
outp = os.path.join(base,"specifications{0}assy{0}figs{0}16ba.tex".format(os.sep))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube / instrument tube lattice positions (B = BA rod, G = guide tube, I = instrument tube)
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = BA
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = GTU
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = BA
if r+1 == 9 and c+1 == 3: node = BA
if r+1 == 9 and c+1 == 6: node = GTU
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = GTU
if r+1 == 9 and c+1 == 15: node = BA
if r+1 == 12 and c+1 == 3: node = BA
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = GTU
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = BA
if r+1 == 14 and c+1 == 4: node = BA
if r+1 == 15 and c+1 == 6: node = BA
if r+1 == 15 and c+1 == 9: node = BA
if r+1 == 15 and c+1 == 12: node = BA
if r+1 == 14 and c+1 == 14: node = BA
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The 16BA burnable absorber configuration.",caption="The 16BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{G} denotes a guide tube location, \\textbf{B} denotes a burnable absorber rod, and \\textbf{I} denotes a guide tube position that might contain an instrument tube.",label="ass_16ba",
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
######################## 20 BA assembly
outp = os.path.join(base,"specifications{0}assy{0}figs{0}20ba.tex".format(os.sep))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube / instrument tube lattice positions (B = BA rod, G = guide tube, I = instrument tube)
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = BA
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = BA
if r+1 == 6 and c+1 == 9: node = GTU
if r+1 == 6 and c+1 == 12: node = BA
if r+1 == 6 and c+1 == 15: node = BA
if r+1 == 9 and c+1 == 3: node = BA
if r+1 == 9 and c+1 == 6: node = GTU
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = GTU
if r+1 == 9 and c+1 == 15: node = BA
if r+1 == 12 and c+1 == 3: node = BA
if r+1 == 12 and c+1 == 6: node = BA
if r+1 == 12 and c+1 == 9: node = GTU
if r+1 == 12 and c+1 == 12: node = BA
if r+1 == 12 and c+1 == 15: node = BA
if r+1 == 14 and c+1 == 4: node = BA
if r+1 == 15 and c+1 == 6: node = BA
if r+1 == 15 and c+1 == 9: node = BA
if r+1 == 15 and c+1 == 12: node = BA
if r+1 == 14 and c+1 == 14: node = BA
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The 20BA burnable absorber configuration.",caption="The 20BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{G} denotes a guide tube location, \\textbf{B} denotes a burnable absorber rod, and \\textbf{I} denotes a guide tube position that might contain an instrument tube.",label="ass_20ba",
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
################################################################################
################################################################################
################################################################################
# Cycle 2
######################## 4 BA assembly
nba = 4
outp = os.path.join(base,"specifications{0}assy{0}figs{0}{1}ba.tex".format(os.sep,nba))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube positions
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = GTU
if r+1 == 3 and c+1 == 9: node = GTU
if r+1 == 3 and c+1 == 12: node = GTU
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = GTU
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = GTU
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = GTU
if r+1 == 9 and c+1 == 3: node = GTU
if r+1 == 9 and c+1 == 6: node = GTU
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = GTU
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = GTU
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = GTU
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = GTU
if r+1 == 14 and c+1 == 4: node = BA
if r+1 == 15 and c+1 == 6: node = GTU
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = GTU
if r+1 == 14 and c+1 == 14: node = BA
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The {0}BA burnable absorber configuration.".format(nba),caption="The {0}BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{{G}} denotes a guide tube location, \\textbf{{B}} denotes a burnable absorber rod, and \\textbf{{I}} denotes a guide tube position that might contain an instrument tube.".format(nba),label="ass_{0}ba".format(nba),
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
######################## 8 BA assembly
nba = 8
outp = os.path.join(base,"specifications{0}assy{0}figs{0}{1}ba.tex".format(os.sep,nba))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube positions
if r+1 == 4 and c+1 == 4: node = BA
if r+1 == 3 and c+1 == 6: node = GTU
if r+1 == 3 and c+1 == 9: node = GTU
if r+1 == 3 and c+1 == 12: node = GTU
if r+1 == 4 and c+1 == 14: node = BA
if r+1 == 6 and c+1 == 3: node = GTU
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = BA
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = GTU
if r+1 == 9 and c+1 == 3: node = GTU
if r+1 == 9 and c+1 == 6: node = BA
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = BA
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = GTU
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = BA
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = GTU
if r+1 == 14 and c+1 == 4: node = BA
if r+1 == 15 and c+1 == 6: node = GTU
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = GTU
if r+1 == 14 and c+1 == 14: node = BA
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The {0}BA burnable absorber configuration.".format(nba),caption="The {0}BA burnable absorber configuration. Blank locations denote fuel rods, \\textbf{{G}} denotes a guide tube location, \\textbf{{B}} denotes a burnable absorber rod, and \\textbf{{I}} denotes a guide tube position that might contain an instrument tube.".format(nba),label="ass_{0}ba".format(nba),
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
######################## 12 BA assembly
nba = 12
outp = os.path.join(base,"specifications{0}assy{0}figs{0}{1}ba_c2.tex".format(os.sep,nba))
outStr = ""
for r,R in enumerate(seq):
for c,C in enumerate(seq):
node = default
# Guide tube positions
if r+1 == 4 and c+1 == 4: node = GTU
if r+1 == 3 and c+1 == 6: node = BA
if r+1 == 3 and c+1 == 9: node = GTU
if r+1 == 3 and c+1 == 12: node = BA
if r+1 == 4 and c+1 == 14: node = GTU
if r+1 == 6 and c+1 == 3: node = BA
if r+1 == 6 and c+1 == 6: node = GTU
if r+1 == 6 and c+1 == 9: node = BA
if r+1 == 6 and c+1 == 12: node = GTU
if r+1 == 6 and c+1 == 15: node = BA
if r+1 == 9 and c+1 == 3: node = GTU
if r+1 == 9 and c+1 == 6: node = BA
if r+1 == 9 and c+1 == 9: node = INS
if r+1 == 9 and c+1 == 12: node = BA
if r+1 == 9 and c+1 == 15: node = GTU
if r+1 == 12 and c+1 == 3: node = BA
if r+1 == 12 and c+1 == 6: node = GTU
if r+1 == 12 and c+1 == 9: node = BA
if r+1 == 12 and c+1 == 12: node = GTU
if r+1 == 12 and c+1 == 15: node = BA
if r+1 == 14 and c+1 == 4: node = GTU
if r+1 == 15 and c+1 == 6: node = BA
if r+1 == 15 and c+1 == 9: node = GTU
if r+1 == 15 and c+1 == 12: node = BA
if r+1 == 14 and c+1 == 14: node = GTU
outStr += node_t.format(R,C,text=node[0])
outStr += node_link_t.format(R,C,link=node[1])
outStr += node_fill_t.format(R,C,fill=node[2])
outStr += fig_str.format(extra="",scale=1,altcap="The {0}BA burnable absorber configuration for cycle 2.".format(nba),caption="The {0}BA burnable absorber configuration for cycle 2. Blank locations denote fuel rods, \\textbf{{G}} denotes a guide tube location, \\textbf{{B}} denotes a burnable absorber rod, and \\textbf{{I}} denotes a guide tube position that might contain an instrument tube.".format(nba),label="ass_{0}ba_c2".format(nba),
source=r"\ref{num:sheet_BPs}")
with open(outp,'w') as fh:
fh.write(outStr)
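
# Illustrative sketch (added; not part of the original script): each of the
# repeated ``if r+1 == .. and c+1 == ..`` chains above could equivalently be
# written as a lookup keyed on the 1-based (row, column) position. The names
# below are hypothetical stand-ins for the node tuples (default, BA, GTU,
# INS) defined earlier in this script, and the 17x17 size is illustrative.
def _position_map_demo():
    fuel, ba, gtu, ins = "", "B", "G", "I"  # stand-in markers, not real nodes
    special = {(4, 4): ba, (6, 9): gtu, (9, 9): ins}  # subset of one layout
    # Equivalent of a per-cell if-chain: fall back to the fuel-rod node.
    return [[special.get((r + 1, c + 1), fuel) for c in range(17)]
            for r in range(17)]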
| 39.157549 | 441 | 0.523386 | 3,354 | 17,895 | 2.768336 | 0.049195 | 0.06462 | 0.08616 | 0.103393 | 0.903608 | 0.883576 | 0.879483 | 0.879483 | 0.870113 | 0.862359 | 0 | 0.085714 | 0.268511 | 17,895 | 456 | 442 | 39.243421 | 0.623606 | 0.017938 | 0 | 0.858434 | 0 | 0.036145 | 0.247721 | 0.051628 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006024 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8a617791a54858c79f3b1f2d936f325cdc1101b9 | 22,281 | py | Python | dohq_teamcity/api/investigation_api.py | DenKoren/teamcity | 69acb4d1402c316129b4602882a9cce2d55cf926 | ["MIT"] | 23 | 2018-10-19T07:28:45.000Z | 2021-11-12T12:46:09.000Z | dohq_teamcity/api/investigation_api.py | DenKoren/teamcity | 69acb4d1402c316129b4602882a9cce2d55cf926 | ["MIT"] | 31 | 2018-10-16T05:53:11.000Z | 2021-09-09T14:44:14.000Z | dohq_teamcity/api/investigation_api.py | DenKoren/teamcity | 69acb4d1402c316129b4602882a9cce2d55cf926 | ["MIT"] | 12 | 2018-10-28T23:00:17.000Z | 2021-09-07T12:07:13.000Z |
# coding: utf-8
"""
TeamCity REST API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 2018.1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
from dohq_teamcity.custom.base_model import TeamCityObject
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from dohq_teamcity.models.investigation import Investigation # noqa: F401,E501
from dohq_teamcity.models.investigations import Investigations # noqa: F401,E501
class InvestigationApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
base_name = 'Investigation'
def __init__(self, api_client=None):
self.api_client = api_client
def create_instance(self, **kwargs): # noqa: E501
"""create_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_instance(async_req=True)
>>> result = thread.get()
:param async_req: bool
:param Investigation body:
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__create_instance_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.__create_instance_with_http_info(**kwargs) # noqa: E501
return data
def create_instances(self, **kwargs): # noqa: E501
"""create_instances # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_instances(async_req=True)
>>> result = thread.get()
:param async_req: bool
:param Investigations body:
:param str fields:
:return: Investigations
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__create_instances_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.__create_instances_with_http_info(**kwargs) # noqa: E501
return data
def delete_instance(self, investigation_locator, **kwargs): # noqa: E501
"""delete_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_instance(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req: bool
:param str investigation_locator: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__delete_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
else:
(data) = self.__delete_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
return data
def get_investigations(self, **kwargs): # noqa: E501
"""get_investigations # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_investigations(async_req=True)
>>> result = thread.get()
:param async_req: bool
:param str locator:
:param str fields:
:return: Investigations
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__get_investigations_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.__get_investigations_with_http_info(**kwargs) # noqa: E501
return data
def replace_instance(self, investigation_locator, **kwargs): # noqa: E501
"""replace_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.replace_instance(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req: bool
:param str investigation_locator: (required)
:param Investigation body:
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__replace_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
else:
(data) = self.__replace_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
return data
def serve_instance(self, investigation_locator, **kwargs): # noqa: E501
"""serve_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.serve_instance(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req: bool
:param str investigation_locator: (required)
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.__serve_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
else:
(data) = self.__serve_instance_with_http_info(investigation_locator, **kwargs) # noqa: E501
return data
def __create_instance_with_http_info(self, **kwargs): # noqa: E501
"""create_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__create_instance_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param Investigation body:
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'fields'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_instance" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Investigation', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def __create_instances_with_http_info(self, **kwargs): # noqa: E501
"""create_instances # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__create_instances_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param Investigations body:
:param str fields:
:return: Investigations
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'fields'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_instances" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations/multiple', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Investigations', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def __delete_instance_with_http_info(self, investigation_locator, **kwargs): # noqa: E501
"""delete_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__delete_instance_with_http_info(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str investigation_locator: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['investigation_locator'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_instance" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'investigation_locator' is set
if ('investigation_locator' not in params or
params['investigation_locator'] is None):
raise ValueError("Missing the required parameter `investigation_locator` when calling `delete_instance`") # noqa: E501
collection_formats = {}
path_params = {}
if 'investigation_locator' in params:
if isinstance(params['investigation_locator'], TeamCityObject):
path_params['investigationLocator'] = params['investigation_locator'].locator_id
else:
path_params['investigationLocator'] = params['investigation_locator'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations/{investigationLocator}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def __get_investigations_with_http_info(self, **kwargs): # noqa: E501
"""get_investigations # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__get_investigations_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str locator:
:param str fields:
:return: Investigations
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['locator', 'fields'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_investigations" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'locator' in params:
query_params.append(('locator', params['locator'])) # noqa: E501
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Investigations', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def __replace_instance_with_http_info(self, investigation_locator, **kwargs): # noqa: E501
"""replace_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__replace_instance_with_http_info(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str investigation_locator: (required)
:param Investigation body:
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['investigation_locator', 'body', 'fields'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method replace_instance" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'investigation_locator' is set
if ('investigation_locator' not in params or
params['investigation_locator'] is None):
raise ValueError("Missing the required parameter `investigation_locator` when calling `replace_instance`") # noqa: E501
collection_formats = {}
path_params = {}
if 'investigation_locator' in params:
if isinstance(params['investigation_locator'], TeamCityObject):
path_params['investigationLocator'] = params['investigation_locator'].locator_id
else:
path_params['investigationLocator'] = params['investigation_locator'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations/{investigationLocator}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Investigation', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def __serve_instance_with_http_info(self, investigation_locator, **kwargs): # noqa: E501
"""serve_instance # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.__serve_instance_with_http_info(investigation_locator, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str investigation_locator: (required)
:param str fields:
:return: Investigation
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['investigation_locator', 'fields'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method serve_instance" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'investigation_locator' is set
if ('investigation_locator' not in params or
params['investigation_locator'] is None):
raise ValueError("Missing the required parameter `investigation_locator` when calling `serve_instance`") # noqa: E501
collection_formats = {}
path_params = {}
if 'investigation_locator' in params:
if isinstance(params['investigation_locator'], TeamCityObject):
path_params['investigationLocator'] = params['investigation_locator'].locator_id
else:
path_params['investigationLocator'] = params['investigation_locator'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/app/rest/investigations/{investigationLocator}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Investigation', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
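
# Usage sketch (added; not part of the generated module). A configured
# dohq_teamcity ApiClient is assumed in real use; the stub below only echoes
# the request so the example stays self-contained and runnable, and the
# locator string is illustrative.
class _EchoApiClient(object):
    def call_api(self, resource_path, method, *args, **kwargs):
        # A real client would perform the HTTP request here.
        return (resource_path, method, kwargs.get('response_type'))


def _investigation_api_demo():
    api = InvestigationApi(api_client=_EchoApiClient())
    # Synchronous call; passing async_req=True instead returns a thread whose
    # .get() yields the same result, as the docstrings above describe.
    return api.get_investigations(locator="assignee:(user:jdoe)")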
| 37.700508 | 132 | 0.612271 | 2,383 | 22,281 | 5.443139 | 0.067562 | 0.041323 | 0.025904 | 0.033305 | 0.94187 | 0.935471 | 0.935471 | 0.922982 | 0.920515 | 0.903323 | 0 | 0.014344 | 0.299134 | 22,281 | 590 | 133 | 37.764407 | 0.816278 | 0.297922 | 0 | 0.807453 | 1 | 0 | 0.189765 | 0.078322 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040373 | false | 0 | 0.018634 | 0 | 0.121118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8a72b8c65af69d6c4eb177b5c42ea52ccfb2e757 | 4,310 | py | Python | src/test/parser/template/nodes/test_condtype1.py | hiitsme123/python | e08309fe61fd5ed88cfb39e9f402613dd7e39269 | ["MIT"] | 5 | 2017-02-03T07:38:45.000Z | 2022-01-06T11:29:29.000Z | src/test/parser/template/nodes/test_condtype1.py | hiitsme123/python | e08309fe61fd5ed88cfb39e9f402613dd7e39269 | ["MIT"] | 8 | 2017-02-03T06:59:03.000Z | 2017-04-28T14:23:46.000Z | src/test/parser/template/nodes/test_condtype1.py | hiitsme123/python | e08309fe61fd5ed88cfb39e9f402613dd7e39269 | ["MIT"] | 8 | 2017-02-02T15:12:12.000Z | 2017-04-02T13:35:03.000Z |
import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.word import TemplateWordNode
from programy.parser.template.nodes.condtype1 import TemplateType1ConditionNode
from programy.dialog import Question
from test.parser.template.base import TemplateTestsBaseClass
class TemplateType1ConditionNodeTests(TemplateTestsBaseClass):
    def test_node_global_match(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
        self.assertIsNotNone(node)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        self.assertEqual(len(root.children), 1)
        self.bot.conversation(self.clientid)._predicates['name1'] = "value1"
        result = root.resolve(self.bot, self.clientid)
        self.assertIsNotNone(result)
        self.assertEqual(result, "Hello")

    def test_node_global_nomatch(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
        self.assertIsNotNone(node)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        self.assertEqual(len(root.children), 1)
        self.bot.conversation(self.clientid)._predicates['name1'] = "value2"
        result = root.resolve(self.bot, self.clientid)
        self.assertIsNotNone(result)
        self.assertEqual(result, "")

    def test_node_local_match(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateType1ConditionNode("var1", TemplateWordNode("value1"), local=True)
        self.assertIsNotNone(node)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        self.assertEqual(len(root.children), 1)
        question = Question.create_from_text("Hello")
        self.bot.conversation(self.clientid).record_dialog(question)
        self.bot.conversation(self.clientid).current_question().set_predicate("var1", "value1")
        result = root.resolve(self.bot, self.clientid)
        self.assertIsNotNone(result)
        self.assertEqual(result, "Hello")

    def test_node_local_nomatch(self):
        root = TemplateNode()
        self.assertIsNotNone(root)
        self.assertIsNotNone(root.children)
        self.assertEqual(len(root.children), 0)
        node = TemplateType1ConditionNode("var1", TemplateWordNode("value1"), local=True)
        self.assertIsNotNone(node)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        self.assertEqual(len(root.children), 1)
        question = Question.create_from_text("Hello")
        self.bot.conversation(self.clientid).record_dialog(question)
        self.bot.conversation(self.clientid).current_question().set_predicate("var1", "value2")
        result = root.resolve(self.bot, self.clientid)
        self.assertIsNotNone(result)
        self.assertEqual(result, "")

    def test_to_xml_global(self):
        root = TemplateNode()
        node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=False)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        xml = root.xml_tree(self.bot, self.clientid)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><condition name="name1"><value>value1</value>Hello</condition></template>', xml_str)

    def test_to_xml_local(self):
        root = TemplateNode()
        node = TemplateType1ConditionNode("name1", TemplateWordNode("value1"), local=True)
        node.append(TemplateWordNode("Hello"))
        root.append(node)
        xml = root.xml_tree(self.bot, self.clientid)
        self.assertIsNotNone(xml)
        xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
        self.assertEqual('<template><condition var="name1"><value>value1</value>Hello</condition></template>', xml_str)
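
# Added reference note (not part of the original test module): the type-1
# <condition> exercised above corresponds to AIML templates of the shape
#
#   <template>
#     <condition name="name1"><value>value1</value>Hello</condition>
#   </template>
#
# (or var="name1" for the local form), which resolves to "Hello" only when
# the predicate matches the value, exactly as the match/no-match tests check.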
| 37.478261 | 120 | 0.685383 | 455 | 4,310 | 6.413187 | 0.149451 | 0.117204 | 0.063057 | 0.060315 | 0.884167 | 0.852296 | 0.852296 | 0.850583 | 0.850583 | 0.760795 | 0 | 0.012912 | 0.191415 | 4,310 | 114 | 121 | 37.807018 | 0.82439 | 0 | 0 | 0.764706 | 0 | 0 | 0.079137 | 0.028545 | 0 | 0 | 0 | 0 | 0.376471 | 1 | 0.070588 | false | 0 | 0.070588 | 0 | 0.152941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8a909b842c2b69e547b64f9abff73e896a1f0946 | 14,262 | py | Python | test/test_ETLfinance_data.py | arturmesquitab/personal_finance_dashboard | d89913db6c45ac19dc1a26fc17312b0b0025dd7a | ["MIT"] | null | null | null | test/test_ETLfinance_data.py | arturmesquitab/personal_finance_dashboard | d89913db6c45ac19dc1a26fc17312b0b0025dd7a | ["MIT"] | null | null | null | test/test_ETLfinance_data.py | arturmesquitab/personal_finance_dashboard | d89913db6c45ac19dc1a26fc17312b0b0025dd7a | ["MIT"] | null | null | null |
import os
import pathlib as pl
import test.config as test_config
import unittest
from datetime import datetime
import pandas as pd
from pandas.testing import assert_frame_equal
from src.finance_pipeline.etl.etl_finance_data import ETLFinanceData
class TestETLfinance_data(unittest.TestCase):
    def setUp(self) -> None:
        self.etl = ETLFinanceData(
            seb_data_filename=test_config.TEST_INPUT_SEB_RAWDATA,
            seb_data_filedirectory=test_config.TEST_OUTPUT_FILEDIRECTORY,
            seb_data_schema_filename=test_config.SCHEMA_SEB_RAWDATA_FILENAME,
            seb_data_schema_filedirectory=test_config.SCHEMA_FILEDIRECTORY,
            amex_data_filename=test_config.TEST_INPUT_AMEX_RAWDATA,
            amex_data_filedirectory=test_config.TEST_OUTPUT_FILEDIRECTORY,
            amex_data_schema_filename=test_config.SCHEMA_AMEX_RAWDATA_FILENAME,
            amex_data_schema_filedirectory=test_config.SCHEMA_FILEDIRECTORY,
            output_filename=test_config.TEST_BALANCE_SHEET_FILENAME,
            output_filedirectory=test_config.TEST_OUTPUT_FILEDIRECTORY,
            start_date="2021-12-14",
            end_date="2021-12-28",
        )

    def tearDown(self) -> None:
        if pl.Path(self.etl.output_filepath).resolve().is_file():
            os.remove(self.etl.output_filepath)

    def test_seb_balance_sheet(self):
        data = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                190.0,
                "2021-12-16",
                "SEB",
                "2022-05-01",
            ],
            [
                "20211223LON44730",
                "LÖN",
                "INCOME",
                34467.0,
                "2021-12-23",
                "SEB",
                "2022-05-01",
            ],
            [
                "20211224SEBKO9400",
                "SEB KORT BANK AB",
                "BALANCE CLEARANCE",
                29.0,
                "2021-12-24",
                "SEB",
                "2022-05-01",
            ],
        ]
        columns = [
            "transaction_id",
            "description",
            "type_of_transaction",
            "amount_sek",
            "transaction_date",
            "data_source",
            "etl_rawdata_date",
        ]
        schema = {
            "transaction_id": "str",
            "description": "str",
            "type_of_transaction": "str",
            "amount_sek": "float",
            "transaction_date": "datetime64[D]",
            "data_source": "str",
            "etl_rawdata_date": "datetime64[D]",
        }
        df_expected = (
            pd.DataFrame(data=data, columns=columns)
            .astype(schema)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        df_output = (
            self.etl.seb_balance_sheet()
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        assert_frame_equal(df_expected, df_output)

    def test_amex_balance_sheet(self):
        data = [
            [
                "AT213570081000010003416",
                "BETALNING MOTTAGEN TACK",
                "BALANCE CLEARANCE",
                17568.35,
                "2021-12-23",
                "AMEX",
                "2022-06-01",
            ],
            [
                "AT213620085000010002467",
                "APPLE.COM/BILL HOLLYHILL",
                "INCOME",
                29.0,
                "2021-12-28",
                "AMEX",
                "2022-06-01",
            ],
        ]
        columns = [
            "transaction_id",
            "description",
            "type_of_transaction",
            "amount_sek",
            "transaction_date",
            "data_source",
            "etl_rawdata_date",
        ]
        schema = {
            "transaction_id": "str",
            "description": "str",
            "type_of_transaction": "str",
            "amount_sek": "float",
            "transaction_date": "datetime64[D]",
            "data_source": "str",
            "etl_rawdata_date": "datetime64[D]",
        }
        df_expected = (
            pd.DataFrame(data=data, columns=columns)
            .astype(schema)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        df_output = (
            self.etl.amex_balance_sheet()
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        assert_frame_equal(df_expected, df_output)

    def test_output_balance_sheet_data(self):
        data_input = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                190.0,
                "2021-12-16",
                "SEB",
                "2022-05-01",
            ],
            [
                "20211223LON44730",
                "LÖN",
                "INCOME",
                34467.0,
                "2021-12-23",
                "SEB",
                "2022-05-01",
            ],
            [
                "20211224SEBKO9400",
                "SEB KORT BANK AB",
                "BALANCE CLEARANCE",
                29.0,
                "2021-12-24",
                "SEB",
                "2022-05-01",
            ],
        ]
        columns_input = [
            "transaction_id",
            "description",
            "type_of_transaction",
            "amount_sek",
            "transaction_date",
            "data_source",
            "etl_rawdata_date",
        ]
        schema_input = {
            "transaction_id": "str",
            "description": "str",
            "type_of_transaction": "str",
            "amount_sek": "float",
            "transaction_date": "datetime64[D]",
            "data_source": "str",
            "etl_rawdata_date": "datetime64[D]",
        }
        df_input = pd.DataFrame(data=data_input, columns=columns_input).astype(
            schema_input
        )
        self.etl.output_balance_sheet_data(df_input)
        timenow = datetime.now().strftime("%Y-%m-%d %H:%M")
        data_output = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                190.0,
                "2021-12-16",
                "12-2021",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "20211223LON44730",
                "LÖN",
                "INCOME",
                34467.0,
                "2021-12-23",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "20211224SEBKO9400",
                "SEB KORT BANK AB",
                "BALANCE CLEARANCE",
                29.0,
                "2021-12-24",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
        ]
        columns_output = [
            "transaction_id",
            "description",
            "type_of_transaction",
            "amount_sek",
            "transaction_date",
            "month_reference",
            "data_source",
            "etl_rawdata_date",
            "etl_date",
        ]
        schema_output = {
            "transaction_id": "str",
            "description": "str",
            "type_of_transaction": "str",
            "amount_sek": "float",
            "transaction_date": "datetime64[D]",
            "month_reference": "str",
            "data_source": "str",
            "etl_rawdata_date": "datetime64[D]",
            "etl_date": "datetime64[D]",
        }
        df_expected = (
            pd.DataFrame(data=data_output, columns=columns_output)
            .astype(schema_output)
            .sort_values("transaction_date")
        )
        # Test if file exists
        path = self.etl.output_filepath
        if not pl.Path(path).resolve().is_file():
            raise AssertionError("File does not exist: %s" % str(path))
        # Test if data is the same as expected output
        df_output = (
            pd.read_csv(path).astype(schema_output).sort_values("transaction_date")
        )
        assert_frame_equal(df_expected, df_output)
        data_append_input = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                180.0,
                "2021-12-16",
                "SEB",
                "2022-05-01",
            ],
            [
                "AT213620085000010002467",
                "APPLE.COM/BILL HOLLYHILL",
                "INCOME",
                30.0,
                "2021-12-28",
                "AMEX",
                "2022-06-01",
            ],
        ]
        data_append_output = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                180.0,
                "2021-12-16",
                "12-2021",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "20211223LON44730",
                "LÖN",
                "INCOME",
                34467.0,
                "2021-12-23",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "20211224SEBKO9400",
                "SEB KORT BANK AB",
                "BALANCE CLEARANCE",
                29.0,
                "2021-12-24",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "AT213620085000010002467",
                "APPLE.COM/BILL HOLLYHILL",
                "INCOME",
                30.0,
                "2021-12-28",
                "1-2022",
                "AMEX",
                "2022-06-01",
                timenow,
            ],
        ]
        df_expected_append = (
            pd.DataFrame(data=data_append_output, columns=columns_output)
            .astype(schema_output)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        df_input_append = (
            pd.DataFrame(data=data_append_input, columns=columns_input)
            .astype(schema_input)
            .reset_index()
            .drop(columns="index")
        )
        self.etl.output_balance_sheet_data(df_input_append)
        # Test if file exists
        path_append = self.etl.output_filepath
        if not pl.Path(path_append).resolve().is_file():
            raise AssertionError("File does not exist: %s" % str(path_append))
        # Test if data is the same as expected output
        df_output_append = (
            pd.read_csv(path_append)
            .astype(schema_output)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        assert_frame_equal(df_expected_append, df_output_append)

    def test_run(self):
        timenow = datetime.now().strftime("%Y-%m-%d %H:%M")
        data_output = [
            [
                "20211216467398845",
                "46739438884",
                "EXPENSE",
                190.0,
                "2021-12-16",
                "12-2021",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "20211223LON44730",
                "LÖN",
                "INCOME",
                34467.0,
                "2021-12-23",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "AT213570081000010003416",
                "BETALNING MOTTAGEN TACK",
                "BALANCE CLEARANCE",
                17568.35,
                "2021-12-23",
                "1-2022",
                "AMEX",
                "2022-06-01",
                timenow,
            ],
            [
                "20211224SEBKO9400",
                "SEB KORT BANK AB",
                "BALANCE CLEARANCE",
                29.0,
                "2021-12-24",
                "1-2022",
                "SEB",
                "2022-05-01",
                timenow,
            ],
            [
                "AT213620085000010002467",
                "APPLE.COM/BILL HOLLYHILL",
                "INCOME",
                29.0,
                "2021-12-28",
                "1-2022",
                "AMEX",
                "2022-06-01",
                timenow,
            ],
        ]
        columns_output = [
            "transaction_id",
            "description",
            "type_of_transaction",
            "amount_sek",
            "transaction_date",
            "month_reference",
            "data_source",
            "etl_rawdata_date",
            "etl_date",
        ]
        schema_output = {
            "transaction_id": "str",
            "description": "str",
            "type_of_transaction": "str",
            "amount_sek": "float",
            "transaction_date": "datetime64[D]",
            "month_reference": "str",
            "data_source": "str",
            "etl_rawdata_date": "datetime64[D]",
            "etl_date": "datetime64[D]",
        }
        df_expected = (
            pd.DataFrame(data=data_output, columns=columns_output)
            .astype(schema_output)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        self.etl.run()
        # Test if file exists
        path = self.etl.output_filepath
        if not pl.Path(path).resolve().is_file():
            raise AssertionError("File does not exist: %s" % str(path))
        # Test if data is the same as expected output
        df_output = (
            pd.read_csv(path)
            .astype(schema_output)
            .sort_values("transaction_date")
            .reset_index()
            .drop(columns="index")
        )
        assert_frame_equal(df_expected, df_output)
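
# Added convenience entry point (not in the original module) so the suite can
# also be run directly with ``python test_ETLfinance_data.py``.
if __name__ == "__main__":
    unittest.main()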
| 28.297619 | 83 | 0.43353 | 1,174 | 14,262 | 5.018739 | 0.113288 | 0.02444 | 0.023761 | 0.029871 | 0.86558 | 0.857943 | 0.817549 | 0.754752 | 0.722505 | 0.716395 | 0 | 0.125 | 0.456458 | 14,262 | 503 | 84 | 28.353877 | 0.635062 | 0.013392 | 0 | 0.764835 | 0 | 0 | 0.223336 | 0.009812 | 0 | 0 | 0 | 0 | 0.01978 | 1 | 0.013187 | false | 0 | 0.017582 | 0 | 0.032967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8aa39cd07becb08bd47354a88ff1b8b46bb1ea70 | 69,080 | py | Python | test/python/circuit/test_extensions_standard.py | abhik-99/qiskit-terra | ad1680f54fecb415fa2131200365e47c6d00bbb1 | ["Apache-2.0"] | 1 | 2020-10-25T17:56:57.000Z | 2020-10-25T17:56:57.000Z | test/python/circuit/test_extensions_standard.py | abhik-99/qiskit-terra | ad1680f54fecb415fa2131200365e47c6d00bbb1 | ["Apache-2.0"] | null | null | null | test/python/circuit/test_extensions_standard.py | abhik-99/qiskit-terra | ad1680f54fecb415fa2131200365e47c6d00bbb1 | ["Apache-2.0"] | null | null | null |
# This code is part of Qiskit.
#
# (C) Copyright IBM 2017.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
# pylint: disable=missing-docstring
import unittest
import warnings
from inspect import signature
from ddt import ddt, data, unpack
from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister, execute
from qiskit.qasm import pi
from qiskit.exceptions import QiskitError
from qiskit.circuit.exceptions import CircuitError
from qiskit.test import QiskitTestCase
from qiskit.circuit import Gate, ControlledGate, ParameterVector
from qiskit import BasicAer
from qiskit.quantum_info.operators.predicates import matrix_equal, is_unitary_matrix
from qiskit.circuit.library import (
    HGate, CHGate, IGate, RGate, RXGate, CRXGate, RYGate, CRYGate, RZGate,
    CRZGate, SGate, SdgGate, CSwapGate, TGate, TdgGate, U1Gate, CU1Gate,
    U2Gate, U3Gate, CU3Gate, XGate, CXGate, CCXGate, YGate, CYGate,
    ZGate, CZGate
)
class TestStandard1Q(QiskitTestCase):
"""Standard Extension Test. Gates with a single Qubit"""
def setUp(self):
self.qr = QuantumRegister(3, "q")
self.qr2 = QuantumRegister(3, "r")
self.cr = ClassicalRegister(3, "c")
self.circuit = QuantumCircuit(self.qr, self.qr2, self.cr)
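
    # Added explanatory note (not in the original file): indexing a
    # QuantumCircuit yields (operation, qargs, cargs) tuples, so the
    # ``op, qargs, _ = self.circuit[0]`` pattern used throughout these tests
    # inspects the first appended instruction and its qubit arguments while
    # ignoring the classical-bit arguments.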
    def test_barrier(self):
        self.circuit.barrier(self.qr[1])
        self.assertEqual(len(self.circuit), 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'barrier')
        self.assertEqual(qargs, [self.qr[1]])

    def test_barrier_wires(self):
        self.circuit.barrier(1)
        self.assertEqual(len(self.circuit), 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'barrier')
        self.assertEqual(qargs, [self.qr[1]])

    def test_barrier_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.barrier, self.cr[0])
        self.assertRaises(CircuitError, qc.barrier, self.cr)
        self.assertRaises(CircuitError, qc.barrier, (self.qr, 'a'))
        self.assertRaises(CircuitError, qc.barrier, .0)

    def test_conditional_barrier_invalid(self):
        qc = self.circuit
        barrier = qc.barrier(self.qr)
        self.assertRaises(QiskitError, barrier.c_if, self.cr, 0)

    def test_barrier_reg(self):
        self.circuit.barrier(self.qr)
        self.assertEqual(len(self.circuit), 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'barrier')
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2]])

    def test_barrier_none(self):
        self.circuit.barrier()
        self.assertEqual(len(self.circuit), 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'barrier')
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2],
                                 self.qr2[0], self.qr2[1], self.qr2[2]])

    def test_ccx(self):
        self.circuit.ccx(self.qr[0], self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ccx')
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2]])

    def test_ccx_wires(self):
        self.circuit.ccx(0, 1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ccx')
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2]])

    def test_ccx_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.ccx, self.cr[0], self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.ccx, self.qr[0], self.qr[0], self.qr[2])
        self.assertRaises(CircuitError, qc.ccx, 0.0, self.qr[0], self.qr[2])
        self.assertRaises(CircuitError, qc.ccx, self.cr, self.qr, self.qr)
        self.assertRaises(CircuitError, qc.ccx, 'a', self.qr[1], self.qr[2])

    def test_ch(self):
        self.circuit.ch(self.qr[0], self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ch')
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_ch_wires(self):
        self.circuit.ch(0, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ch')
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_ch_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.ch, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.ch, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.ch, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.ch, (self.qr, 3), self.qr[0])
        self.assertRaises(CircuitError, qc.ch, self.cr, self.qr)
        self.assertRaises(CircuitError, qc.ch, 'a', self.qr[1])

    def test_crz(self):
        self.circuit.crz(1, self.qr[0], self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'crz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_cry(self):
        self.circuit.cry(1, self.qr[0], self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cry')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_crx(self):
        self.circuit.crx(1, self.qr[0], self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'crx')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_crz_wires(self):
        self.circuit.crz(1, 0, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'crz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_cry_wires(self):
        self.circuit.cry(1, 0, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cry')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_crx_wires(self):
        self.circuit.crx(1, 0, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'crx')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[0], self.qr[1]])

    def test_crz_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.crz, 0, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.crz, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.crz, 0, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.crz, self.qr[2], self.qr[1], self.qr[0])
        self.assertRaises(CircuitError, qc.crz, 0, self.qr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.crz, 0, (self.qr, 3), self.qr[1])
        self.assertRaises(CircuitError, qc.crz, 0, self.cr, self.qr)
        # TODO self.assertRaises(CircuitError, qc.crz, 'a', self.qr[1], self.qr[2])

    def test_cry_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cry, 0, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.cry, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cry, 0, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cry, self.qr[2], self.qr[1], self.qr[0])
        self.assertRaises(CircuitError, qc.cry, 0, self.qr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cry, 0, (self.qr, 3), self.qr[1])
        self.assertRaises(CircuitError, qc.cry, 0, self.cr, self.qr)
        # TODO self.assertRaises(CircuitError, qc.cry, 'a', self.qr[1], self.qr[2])

    def test_crx_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.crx, 0, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.crx, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.crx, 0, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.crx, self.qr[2], self.qr[1], self.qr[0])
        self.assertRaises(CircuitError, qc.crx, 0, self.qr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.crx, 0, (self.qr, 3), self.qr[1])
        self.assertRaises(CircuitError, qc.crx, 0, self.cr, self.qr)
        # TODO self.assertRaises(CircuitError, qc.crx, 'a', self.qr[1], self.qr[2])

    def test_cswap(self):
        self.circuit.cswap(self.qr[0], self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cswap')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2]])

    def test_cswap_wires(self):
        self.circuit.cswap(0, 1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cswap')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2]])

    def test_cswap_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cswap, self.cr[0], self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cswap, self.qr[1], self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cswap, self.qr[1], .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cswap, self.cr[0], self.cr[1], self.qr[0])
        self.assertRaises(CircuitError, qc.cswap, self.qr[0], self.qr[0], self.qr[1])
        self.assertRaises(CircuitError, qc.cswap, .0, self.qr[0], self.qr[1])
        self.assertRaises(CircuitError, qc.cswap, (self.qr, 3), self.qr[0], self.qr[1])
        self.assertRaises(CircuitError, qc.cswap, self.cr, self.qr[0], self.qr[1])
        self.assertRaises(CircuitError, qc.cswap, 'a', self.qr[1], self.qr[2])

    def test_cu1(self):
        self.circuit.cu1(1, self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cu1')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cu1_wires(self):
        self.circuit.cu1(1, 1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cu1')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cu1_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cu1, self.cr[0], self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cu1, 1, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cu1, self.qr[1], 0, self.qr[0])
        self.assertRaises(CircuitError, qc.cu1, 0, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.cu1, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cu1, 0, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cu1, self.qr[2], self.qr[1], self.qr[0])
        self.assertRaises(CircuitError, qc.cu1, 0, self.qr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cu1, 0, (self.qr, 3), self.qr[1])
        self.assertRaises(CircuitError, qc.cu1, 0, self.cr, self.qr)
        # TODO self.assertRaises(CircuitError, qc.cu1, 'a', self.qr[1], self.qr[2])

    def test_cu3(self):
        self.circuit.cu3(1, 2, 3, self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cu3')
        self.assertEqual(op.params, [1, 2, 3])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cu3_wires(self):
        self.circuit.cu3(1, 2, 3, 1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cu3')
        self.assertEqual(op.params, [1, 2, 3])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cu3_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cu3, 0, 0, self.qr[0], self.qr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, self.qr[1], 0, self.qr[0])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, 0, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, 0, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, 0, (self.qr, 3), self.qr[1])
        self.assertRaises(CircuitError, qc.cu3, 0, 0, 0, self.cr, self.qr)
        # TODO self.assertRaises(CircuitError, qc.cu3, 0, 0, 'a', self.qr[1], self.qr[2])

    def test_cx(self):
        self.circuit.cx(self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cx')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cx_wires(self):
        self.circuit.cx(1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cx')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cx_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cx, self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cx, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cx, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cx, (self.qr, 3), self.qr[0])
        self.assertRaises(CircuitError, qc.cx, self.cr, self.qr)
        self.assertRaises(CircuitError, qc.cx, 'a', self.qr[1])

    def test_cy(self):
        self.circuit.cy(self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cy')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cy_wires(self):
        self.circuit.cy(1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cy')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cy_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cy, self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cy, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cy, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cy, (self.qr, 3), self.qr[0])
        self.assertRaises(CircuitError, qc.cy, self.cr, self.qr)
        self.assertRaises(CircuitError, qc.cy, 'a', self.qr[1])

    def test_cz(self):
        self.circuit.cz(self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cz')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cz_wires(self):
        self.circuit.cz(1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'cz')
        self.assertEqual(op.params, [])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_cz_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.cz, self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.cz, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.cz, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.cz, (self.qr, 3), self.qr[0])
        self.assertRaises(CircuitError, qc.cz, self.cr, self.qr)
        self.assertRaises(CircuitError, qc.cz, 'a', self.qr[1])

    def test_h(self):
        self.circuit.h(self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'h')
        self.assertEqual(qargs, [self.qr[1]])

    def test_h_wires(self):
        self.circuit.h(1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'h')
        self.assertEqual(qargs, [self.qr[1]])

    def test_h_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.h, self.cr[0])
        self.assertRaises(CircuitError, qc.h, self.cr)
        self.assertRaises(CircuitError, qc.h, (self.qr, 3))
        self.assertRaises(CircuitError, qc.h, (self.qr, 'a'))
        self.assertRaises(CircuitError, qc.h, .0)

    def test_h_reg(self):
        instruction_set = self.circuit.h(self.qr)
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'h')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])

    def test_h_reg_inv(self):
        instruction_set = self.circuit.h(self.qr).inverse()
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'h')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])

    def test_iden(self):
        self.circuit.i(self.qr[1])
        op, _, _ = self.circuit[0]
        self.assertEqual(op.name, 'id')
        self.assertEqual(op.params, [])

    def test_iden_wires(self):
        self.circuit.i(1)
        op, _, _ = self.circuit[0]
        self.assertEqual(op.name, 'id')
        self.assertEqual(op.params, [])

    def test_iden_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.i, self.cr[0])
        self.assertRaises(CircuitError, qc.i, self.cr)
        self.assertRaises(CircuitError, qc.i, (self.qr, 3))
        self.assertRaises(CircuitError, qc.i, (self.qr, 'a'))
        self.assertRaises(CircuitError, qc.i, .0)

    def test_iden_reg(self):
        instruction_set = self.circuit.i(self.qr)
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'id')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])

    def test_iden_reg_inv(self):
        instruction_set = self.circuit.i(self.qr).inverse()
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'id')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])

    def test_rx(self):
        self.circuit.rx(1, self.qr[1])
        op, _, _ = self.circuit[0]
        self.assertEqual(op.name, 'rx')
        self.assertEqual(op.params, [1])

    def test_rx_wires(self):
        self.circuit.rx(1, 1)
        op, _, _ = self.circuit[0]
        self.assertEqual(op.name, 'rx')
        self.assertEqual(op.params, [1])

    def test_rx_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.rx, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.rx, self.qr[1], 0)
        self.assertRaises(CircuitError, qc.rx, 0, self.cr[0])
        self.assertRaises(CircuitError, qc.rx, 0, .0)
        self.assertRaises(CircuitError, qc.rx, self.qr[2], self.qr[1])
        self.assertRaises(CircuitError, qc.rx, 0, (self.qr, 3))
        self.assertRaises(CircuitError, qc.rx, 0, self.cr)
        # TODO self.assertRaises(CircuitError, qc.rx, 'a', self.qr[1])
        self.assertRaises(CircuitError, qc.rx, 0, 'a')

    def test_rx_reg(self):
        instruction_set = self.circuit.rx(1, self.qr)
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'rx')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
        self.assertEqual(instruction_set.instructions[2].params, [1])

    def test_rx_reg_inv(self):
        instruction_set = self.circuit.rx(1, self.qr).inverse()
        self.assertEqual(len(instruction_set.instructions), 3)
        self.assertEqual(instruction_set.instructions[0].name, 'rx')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
        self.assertEqual(instruction_set.instructions[2].params, [-1])

    def test_rx_pi(self):
        qc = self.circuit
        qc.rx(pi / 2, self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rx')
        self.assertEqual(op.params, [pi / 2])
        self.assertEqual(qargs, [self.qr[1]])

    def test_ry(self):
        self.circuit.ry(1, self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ry')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1]])

    def test_ry_wires(self):
        self.circuit.ry(1, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'ry')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1]])

    def test_ry_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.ry, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.ry, self.qr[1], 0)
        self.assertRaises(CircuitError, qc.ry, 0, self.cr[0])
        self.assertRaises(CircuitError, qc.ry, 0, .0)
        self.assertRaises(CircuitError, qc.ry, self.qr[2], self.qr[1])
        self.assertRaises(CircuitError, qc.ry, 0, (self.qr, 3))
        self.assertRaises(CircuitError, qc.ry, 0, self.cr)
        # TODO self.assertRaises(CircuitError, qc.ry, 'a', self.qr[1])
        self.assertRaises(CircuitError, qc.ry, 0, 'a')

    def test_ry_reg(self):
        instruction_set = self.circuit.ry(1, self.qr)
        self.assertEqual(instruction_set.instructions[0].name, 'ry')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
        self.assertEqual(instruction_set.instructions[2].params, [1])

    def test_ry_reg_inv(self):
        instruction_set = self.circuit.ry(1, self.qr).inverse()
        self.assertEqual(instruction_set.instructions[0].name, 'ry')
        self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
        self.assertEqual(instruction_set.instructions[2].params, [-1])

    def test_ry_pi(self):
        qc = self.circuit
        qc.ry(pi / 2, self.qr[1])
        op, _, _ = self.circuit[0]
        self.assertEqual(op.name, 'ry')
        self.assertEqual(op.params, [pi / 2])

    def test_rz(self):
        self.circuit.rz(1, self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1]])

    def test_rz_wires(self):
        self.circuit.rz(1, 1)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1]])

    def test_rz_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.rz, self.cr[0], self.cr[1])
        self.assertRaises(CircuitError, qc.rz, self.qr[1], 0)
        self.assertRaises(CircuitError, qc.rz, 0, self.cr[0])
        self.assertRaises(CircuitError, qc.rz, 0, .0)
        self.assertRaises(CircuitError, qc.rz, self.qr[2], self.qr[1])
        self.assertRaises(CircuitError, qc.rz, 0, (self.qr, 3))
        self.assertRaises(CircuitError, qc.rz, 0, self.cr)
        # TODO self.assertRaises(CircuitError, qc.rz, 'a', self.qr[1])
        self.assertRaises(CircuitError, qc.rz, 0, 'a')

    def test_rz_reg(self):
        instruction_set = self.circuit.rz(1, self.qr)
        self.assertEqual(instruction_set.instructions[0].name, 'rz')
        self.assertEqual(instruction_set.instructions[2].params, [1])

    def test_rz_reg_inv(self):
        instruction_set = self.circuit.rz(1, self.qr).inverse()
        self.assertEqual(instruction_set.instructions[0].name, 'rz')
        self.assertEqual(instruction_set.instructions[2].params, [-1])

    def test_rz_pi(self):
        self.circuit.rz(pi / 2, self.qr[1])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rz')
        self.assertEqual(op.params, [pi / 2])
        self.assertEqual(qargs, [self.qr[1]])

    def test_rzz(self):
        self.circuit.rzz(1, self.qr[1], self.qr[2])
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rzz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_rzz_wires(self):
        self.circuit.rzz(1, 1, 2)
        op, qargs, _ = self.circuit[0]
        self.assertEqual(op.name, 'rzz')
        self.assertEqual(op.params, [1])
        self.assertEqual(qargs, [self.qr[1], self.qr[2]])

    def test_rzz_invalid(self):
        qc = self.circuit
        self.assertRaises(CircuitError, qc.rzz, 1, self.cr[1], self.cr[2])
        self.assertRaises(CircuitError, qc.rzz, 1, self.qr[0], self.qr[0])
        self.assertRaises(CircuitError, qc.rzz, 1, .0, self.qr[0])
        self.assertRaises(CircuitError, qc.rzz, 1, (self.qr, 3), self.qr[0])
self.assertRaises(CircuitError, qc.rzz, 1, self.cr, self.qr)
self.assertRaises(CircuitError, qc.rzz, 1, 'a', self.qr[1])
self.assertRaises(CircuitError, qc.rzz, 0.1, self.cr[1], self.cr[2])
self.assertRaises(CircuitError, qc.rzz, 0.1, self.qr[0], self.qr[0])
def test_s(self):
self.circuit.s(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 's')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_s_wires(self):
self.circuit.s(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 's')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_s_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.s, self.cr[0])
self.assertRaises(CircuitError, qc.s, self.cr)
self.assertRaises(CircuitError, qc.s, (self.qr, 3))
self.assertRaises(CircuitError, qc.s, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.s, .0)
def test_s_reg(self):
instruction_set = self.circuit.s(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 's')
self.assertEqual(instruction_set.instructions[2].params, [])
def test_s_reg_inv(self):
instruction_set = self.circuit.s(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'sdg')
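# S is not self-inverse: inverting it yields its adjoint Sdg (and, in the
# tests below, inverting Sdg gives back S).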
self.assertEqual(instruction_set.instructions[2].params, [])
def test_sdg(self):
self.circuit.sdg(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'sdg')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_sdg_wires(self):
self.circuit.sdg(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'sdg')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_sdg_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.sdg, self.cr[0])
self.assertRaises(CircuitError, qc.sdg, self.cr)
self.assertRaises(CircuitError, qc.sdg, (self.qr, 3))
self.assertRaises(CircuitError, qc.sdg, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.sdg, .0)
def test_sdg_reg(self):
instruction_set = self.circuit.sdg(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'sdg')
self.assertEqual(instruction_set.instructions[2].params, [])
def test_sdg_reg_inv(self):
instruction_set = self.circuit.sdg(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 's')
self.assertEqual(instruction_set.instructions[2].params, [])
def test_swap(self):
self.circuit.swap(self.qr[1], self.qr[2])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'swap')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1], self.qr[2]])
def test_swap_wires(self):
self.circuit.swap(1, 2)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'swap')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1], self.qr[2]])
def test_swap_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.swap, self.cr[1], self.cr[2])
self.assertRaises(CircuitError, qc.swap, self.qr[0], self.qr[0])
self.assertRaises(CircuitError, qc.swap, .0, self.qr[0])
self.assertRaises(CircuitError, qc.swap, (self.qr, 3), self.qr[0])
self.assertRaises(CircuitError, qc.swap, self.cr, self.qr)
self.assertRaises(CircuitError, qc.swap, 'a', self.qr[1])
self.assertRaises(CircuitError, qc.swap, self.qr, self.qr2[[1, 2]])
self.assertRaises(CircuitError, qc.swap, self.qr[:2], self.qr2)
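# The last two cases broadcast registers/slices of unequal length (3 vs 2
# qubits), which cannot be paired up and therefore raise.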
def test_t(self):
self.circuit.t(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 't')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_t_wire(self):
self.circuit.t(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 't')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_t_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.t, self.cr[0])
self.assertRaises(CircuitError, qc.t, self.cr)
self.assertRaises(CircuitError, qc.t, (self.qr, 3))
self.assertRaises(CircuitError, qc.t, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.t, .0)
def test_t_reg(self):
instruction_set = self.circuit.t(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 't')
self.assertEqual(instruction_set.instructions[2].params, [])
def test_t_reg_inv(self):
instruction_set = self.circuit.t(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'tdg')
self.assertEqual(instruction_set.instructions[2].params, [])
def test_tdg(self):
self.circuit.tdg(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'tdg')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_tdg_wires(self):
self.circuit.tdg(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'tdg')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_tdg_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.tdg, self.cr[0])
self.assertRaises(CircuitError, qc.tdg, self.cr)
self.assertRaises(CircuitError, qc.tdg, (self.qr, 3))
self.assertRaises(CircuitError, qc.tdg, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.tdg, .0)
def test_tdg_reg(self):
instruction_set = self.circuit.tdg(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'tdg')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_tdg_reg_inv(self):
instruction_set = self.circuit.tdg(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 't')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_u1(self):
self.circuit.u1(1, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u1')
self.assertEqual(op.params, [1])
self.assertEqual(qargs, [self.qr[1]])
def test_u1_wires(self):
self.circuit.u1(1, 1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u1')
self.assertEqual(op.params, [1])
self.assertEqual(qargs, [self.qr[1]])
def test_u1_invalid(self):
qc = self.circuit
# CHECKME? self.assertRaises(CircuitError, qc.u1, self.cr[0], self.qr[0])
self.assertRaises(CircuitError, qc.u1, self.cr[0], self.cr[1])
self.assertRaises(CircuitError, qc.u1, self.qr[1], 0)
self.assertRaises(CircuitError, qc.u1, 0, self.cr[0])
self.assertRaises(CircuitError, qc.u1, 0, .0)
self.assertRaises(CircuitError, qc.u1, self.qr[2], self.qr[1])
self.assertRaises(CircuitError, qc.u1, 0, (self.qr, 3))
self.assertRaises(CircuitError, qc.u1, 0, self.cr)
# TODO self.assertRaises(CircuitError, qc.u1, 'a', self.qr[1])
self.assertRaises(CircuitError, qc.u1, 0, 'a')
def test_u1_reg(self):
instruction_set = self.circuit.u1(1, self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'u1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_u1_reg_inv(self):
instruction_set = self.circuit.u1(1, self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'u1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_u1_pi(self):
qc = self.circuit
qc.u1(pi / 2, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u1')
self.assertEqual(op.params, [pi / 2])
self.assertEqual(qargs, [self.qr[1]])
def test_u2(self):
self.circuit.u2(1, 2, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u2')
self.assertEqual(op.params, [1, 2])
self.assertEqual(qargs, [self.qr[1]])
def test_u2_wires(self):
self.circuit.u2(1, 2, 1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u2')
self.assertEqual(op.params, [1, 2])
self.assertEqual(qargs, [self.qr[1]])
def test_u2_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.u2, 0, self.cr[0], self.qr[0])
self.assertRaises(CircuitError, qc.u2, 0, self.cr[0], self.cr[1])
self.assertRaises(CircuitError, qc.u2, 0, self.qr[1], 0)
self.assertRaises(CircuitError, qc.u2, 0, 0, self.cr[0])
self.assertRaises(CircuitError, qc.u2, 0, 0, .0)
self.assertRaises(CircuitError, qc.u2, 0, self.qr[2], self.qr[1])
self.assertRaises(CircuitError, qc.u2, 0, 0, (self.qr, 3))
self.assertRaises(CircuitError, qc.u2, 0, 0, self.cr)
# TODO self.assertRaises(CircuitError, qc.u2, 0, 'a', self.qr[1])
self.assertRaises(CircuitError, qc.u2, 0, 0, 'a')
def test_u2_reg(self):
instruction_set = self.circuit.u2(1, 2, self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'u2')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [1, 2])
def test_u2_reg_inv(self):
instruction_set = self.circuit.u2(1, 2, self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'u2')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [-pi - 2, -1 + pi])
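# U2(phi, lam) equals U3(pi/2, phi, lam); its inverse, re-expressed as a U2,
# is U2(-lam - pi, -phi + pi), hence the shifted-and-swapped parameters above
# rather than simple negation.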
def test_u2_pi(self):
self.circuit.u2(pi / 2, 0.3 * pi, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u2')
self.assertEqual(op.params, [pi / 2, 0.3 * pi])
self.assertEqual(qargs, [self.qr[1]])
def test_u3(self):
self.circuit.u3(1, 2, 3, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u3')
self.assertEqual(op.params, [1, 2, 3])
self.assertEqual(qargs, [self.qr[1]])
def test_u3_wires(self):
self.circuit.u3(1, 2, 3, 1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u3')
self.assertEqual(op.params, [1, 2, 3])
self.assertEqual(qargs, [self.qr[1]])
def test_u3_invalid(self):
qc = self.circuit
# TODO self.assertRaises(CircuitError, qc.u3, 0, self.cr[0], self.qr[0])
self.assertRaises(CircuitError, qc.u3, 0, 0, self.cr[0], self.cr[1])
self.assertRaises(CircuitError, qc.u3, 0, 0, self.qr[1], 0)
self.assertRaises(CircuitError, qc.u3, 0, 0, 0, self.cr[0])
self.assertRaises(CircuitError, qc.u3, 0, 0, 0, .0)
self.assertRaises(CircuitError, qc.u3, 0, 0, self.qr[2], self.qr[1])
self.assertRaises(CircuitError, qc.u3, 0, 0, 0, (self.qr, 3))
self.assertRaises(CircuitError, qc.u3, 0, 0, 0, self.cr)
# TODO self.assertRaises(CircuitError, qc.u3, 0, 0, 'a', self.qr[1])
self.assertRaises(CircuitError, qc.u3, 0, 0, 0, 'a')
def test_u3_reg(self):
instruction_set = self.circuit.u3(1, 2, 3, self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'u3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [1, 2, 3])
def test_u3_reg_inv(self):
instruction_set = self.circuit.u3(1, 2, 3, self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'u3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1, -3, -2])
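# U3(theta, phi, lam) inverts to U3(-theta, -lam, -phi): theta negates while
# phi and lam negate and swap positions, giving [-1, -3, -2] for params (1, 2, 3).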
def test_u3_pi(self):
self.circuit.u3(pi, pi / 2, 0.3 * pi, self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'u3')
self.assertEqual(op.params, [pi, pi / 2, 0.3 * pi])
self.assertEqual(qargs, [self.qr[1]])
def test_x(self):
self.circuit.x(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'x')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_x_wires(self):
self.circuit.x(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'x')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_x_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.x, self.cr[0])
self.assertRaises(CircuitError, qc.x, self.cr)
self.assertRaises(CircuitError, qc.x, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.x, 0.0)
def test_x_reg(self):
instruction_set = self.circuit.x(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'x')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_x_reg_inv(self):
instruction_set = self.circuit.x(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'x')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_y(self):
self.circuit.y(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'y')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_y_wires(self):
self.circuit.y(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'y')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_y_invalid(self):
qc = self.circuit
self.assertRaises(CircuitError, qc.y, self.cr[0])
self.assertRaises(CircuitError, qc.y, self.cr)
self.assertRaises(CircuitError, qc.y, (self.qr, 'a'))
self.assertRaises(CircuitError, qc.y, 0.0)
def test_y_reg(self):
instruction_set = self.circuit.y(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'y')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_y_reg_inv(self):
instruction_set = self.circuit.y(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'y')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_z(self):
self.circuit.z(self.qr[1])
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'z')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_z_wires(self):
self.circuit.z(1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'z')
self.assertEqual(op.params, [])
self.assertEqual(qargs, [self.qr[1]])
def test_z_reg(self):
instruction_set = self.circuit.z(self.qr)
self.assertEqual(instruction_set.instructions[0].name, 'z')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_z_reg_inv(self):
instruction_set = self.circuit.z(self.qr).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'z')
self.assertEqual(instruction_set.qargs[1], [self.qr[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
class TestStandard2Q(QiskitTestCase):
"""Standard Extension Test. Gates with two Qubits"""
def setUp(self):
self.qr = QuantumRegister(3, "q")
self.qr2 = QuantumRegister(3, "r")
self.cr = ClassicalRegister(3, "c")
self.circuit = QuantumCircuit(self.qr, self.qr2, self.cr)
def test_barrier_reg_bit(self):
self.circuit.barrier(self.qr, self.qr2[0])
self.assertEqual(len(self.circuit), 1)
op, qargs, _ = self.circuit[0]
self.assertEqual(op.name, 'barrier')
self.assertEqual(qargs, [self.qr[0], self.qr[1], self.qr[2], self.qr2[0]])
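# Passing a full register plus a single bit flattens into one barrier over
# all four qubits.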
def test_ch_reg_reg(self):
instruction_set = self.circuit.ch(self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'ch')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_ch_reg_reg_inv(self):
instruction_set = self.circuit.ch(self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'ch')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_ch_reg_bit(self):
instruction_set = self.circuit.ch(self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'ch')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_ch_reg_bit_inv(self):
instruction_set = self.circuit.ch(self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'ch')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_ch_bit_reg(self):
instruction_set = self.circuit.ch(self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'ch')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_crz_reg_reg(self):
instruction_set = self.circuit.crz(1, self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crz_reg_reg_inv(self):
instruction_set = self.circuit.crz(1, self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_crz_reg_bit(self):
instruction_set = self.circuit.crz(1, self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crz_reg_bit_inv(self):
instruction_set = self.circuit.crz(1, self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_crz_bit_reg(self):
instruction_set = self.circuit.crz(1, self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crz_bit_reg_inv(self):
instruction_set = self.circuit.crz(1, self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cry_reg_reg(self):
instruction_set = self.circuit.cry(1, self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cry_reg_reg_inv(self):
instruction_set = self.circuit.cry(1, self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cry_reg_bit(self):
instruction_set = self.circuit.cry(1, self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cry_reg_bit_inv(self):
instruction_set = self.circuit.cry(1, self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cry_bit_reg(self):
instruction_set = self.circuit.cry(1, self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cry_bit_reg_inv(self):
instruction_set = self.circuit.cry(1, self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cry')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_crx_reg_reg(self):
instruction_set = self.circuit.crx(1, self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crx_reg_reg_inv(self):
instruction_set = self.circuit.crx(1, self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_crx_reg_bit(self):
instruction_set = self.circuit.crx(1, self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crx_reg_bit_inv(self):
instruction_set = self.circuit.crx(1, self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_crx_bit_reg(self):
instruction_set = self.circuit.crx(1, self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_crx_bit_reg_inv(self):
instruction_set = self.circuit.crx(1, self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'crx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cu1_reg_reg(self):
instruction_set = self.circuit.cu1(1, self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cu1_reg_reg_inv(self):
instruction_set = self.circuit.cu1(1, self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cu1_reg_bit(self):
instruction_set = self.circuit.cu1(1, self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cu1_reg_bit_inv(self):
instruction_set = self.circuit.cu1(1, self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cu1_bit_reg(self):
instruction_set = self.circuit.cu1(1, self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1])
def test_cu1_bit_reg_inv(self):
instruction_set = self.circuit.cu1(1, self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu1')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1])
def test_cu3_reg_reg(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1, 2, 3])
def test_cu3_reg_reg_inv(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1, -3, -2])
def test_cu3_reg_bit(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1, 2, 3])
def test_cu3_reg_bit_inv(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1, -3, -2])
def test_cu3_bit_reg(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [1, 2, 3])
def test_cu3_bit_reg_inv(self):
instruction_set = self.circuit.cu3(1, 2, 3, self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cu3')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [-1, -3, -2])
def test_cx_reg_reg(self):
instruction_set = self.circuit.cx(self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cx_reg_reg_inv(self):
instruction_set = self.circuit.cx(self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cx_reg_bit(self):
instruction_set = self.circuit.cx(self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cx_reg_bit_inv(self):
instruction_set = self.circuit.cx(self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cx_bit_reg(self):
instruction_set = self.circuit.cx(self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cx_bit_reg_inv(self):
instruction_set = self.circuit.cx(self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cx')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_reg_reg(self):
instruction_set = self.circuit.cy(self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_reg_reg_inv(self):
instruction_set = self.circuit.cy(self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_reg_bit(self):
instruction_set = self.circuit.cy(self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_reg_bit_inv(self):
instruction_set = self.circuit.cy(self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_bit_reg(self):
instruction_set = self.circuit.cy(self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cy_bit_reg_inv(self):
instruction_set = self.circuit.cy(self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cy')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_reg_reg(self):
instruction_set = self.circuit.cz(self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_reg_reg_inv(self):
instruction_set = self.circuit.cz(self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_reg_bit(self):
instruction_set = self.circuit.cz(self.qr, self.qr2[1])
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_reg_bit_inv(self):
instruction_set = self.circuit.cz(self.qr, self.qr2[1]).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_bit_reg(self):
instruction_set = self.circuit.cz(self.qr[1], self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cz_bit_reg_inv(self):
instruction_set = self.circuit.cz(self.qr[1], self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cz')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_swap_reg_reg(self):
instruction_set = self.circuit.swap(self.qr, self.qr2)
self.assertEqual(instruction_set.instructions[0].name, 'swap')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_swap_reg_reg_inv(self):
instruction_set = self.circuit.swap(self.qr, self.qr2).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'swap')
self.assertEqual(instruction_set.qargs[1], [self.qr[1], self.qr2[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
class TestStandard3Q(QiskitTestCase):
"""Standard Extension Test. Gates with three Qubits"""
def setUp(self):
self.qr = QuantumRegister(3, "q")
self.qr2 = QuantumRegister(3, "r")
self.qr3 = QuantumRegister(3, "s")
self.cr = ClassicalRegister(3, "c")
self.circuit = QuantumCircuit(self.qr, self.qr2, self.qr3, self.cr)
def test_ccx_reg_reg_reg(self):
instruction_set = self.circuit.ccx(self.qr, self.qr2, self.qr3)
self.assertEqual(instruction_set.instructions[0].name, 'ccx')
self.assertEqual(instruction_set.qargs[1],
[self.qr[1], self.qr2[1], self.qr3[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_ccx_reg_reg_inv(self):
instruction_set = self.circuit.ccx(self.qr, self.qr2, self.qr3).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'ccx')
self.assertEqual(instruction_set.qargs[1],
[self.qr[1], self.qr2[1], self.qr3[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cswap_reg_reg_reg(self):
instruction_set = self.circuit.cswap(self.qr, self.qr2, self.qr3)
self.assertEqual(instruction_set.instructions[0].name, 'cswap')
self.assertEqual(instruction_set.qargs[1],
[self.qr[1], self.qr2[1], self.qr3[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
def test_cswap_reg_reg_inv(self):
instruction_set = self.circuit.cswap(self.qr, self.qr2, self.qr3).inverse()
self.assertEqual(instruction_set.instructions[0].name, 'cswap')
self.assertEqual(instruction_set.qargs[1],
[self.qr[1], self.qr2[1], self.qr3[1]])
self.assertEqual(instruction_set.instructions[2].params, [])
class TestStandardMethods(QiskitTestCase):
"""Standard Extension Test."""
def test_to_matrix(self):
"""test gates implementing to_matrix generate matrix which matches
definition."""
from qiskit.circuit.library.standard_gates.ms import MSGate
params = [0.1 * (i + 1) for i in range(10)]
gate_class_list = Gate.__subclasses__() + ControlledGate.__subclasses__()
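# The BasicAer unitary simulator returns the full unitary implemented by the
# circuit, which is compared against the gate's to_matrix() below.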
simulator = BasicAer.get_backend('unitary_simulator')
for gate_class in gate_class_list:
sig = signature(gate_class)
if gate_class == MSGate:
# Due to the signature (num_qubits, theta, *, n_qubits=None), inspection detects
# 3 arguments, but really it's only 2. This branch can be removed once the
# deprecated n_qubits argument is no longer supported.
free_params = 2
else:
free_params = len(set(sig.parameters) - {'label'})
try:
gate = gate_class(*params[0:free_params])
except (CircuitError, QiskitError, AttributeError):
self.log.info(
'Cannot init gate with params only. Skipping %s',
gate_class)
continue
if gate.name in ['U', 'CX']:
continue
circ = QuantumCircuit(gate.num_qubits)
circ.append(gate, range(gate.num_qubits))
try:
gate_matrix = gate.to_matrix()
except CircuitError:
# gate doesn't implement to_matrix method: skip
self.log.info('to_matrix method FAILED for "%s" gate',
gate.name)
continue
definition_unitary = execute([circ], simulator).result().get_unitary()
with self.subTest(gate_class):
# TODO check for exact equality once BasicAer can handle global phase
self.assertTrue(matrix_equal(definition_unitary, gate_matrix, ignore_phase=True))
self.assertTrue(is_unitary_matrix(gate_matrix))
def test_to_matrix_op(self):
"""test gates implementing to_matrix generate matrix which matches
definition using Operator."""
from qiskit.quantum_info import Operator
from qiskit.circuit.library.standard_gates.ms import MSGate
params = [0.1 * i for i in range(10)]
gate_class_list = Gate.__subclasses__() + ControlledGate.__subclasses__()
for gate_class in gate_class_list:
sig = signature(gate_class)
if gate_class == MSGate:
# Due to the signature (num_qubits, theta, *, n_qubits=None), inspection detects
# 3 arguments, but really it's only 2. This branch can be removed once the
# deprecated n_qubits argument is no longer supported.
free_params = 2
else:
free_params = len(set(sig.parameters) - {'label'})
try:
gate = gate_class(*params[0:free_params])
except (CircuitError, QiskitError, AttributeError):
self.log.info(
'Cannot init gate with params only. Skipping %s',
gate_class)
continue
if gate.name in ['U', 'CX']:
continue
try:
gate_matrix = gate.to_matrix()
except CircuitError:
# gate doesn't implement to_matrix method: skip
self.log.info('to_matrix method FAILED for "%s" gate',
gate.name)
continue
if not hasattr(gate, 'definition') or not gate.definition:
continue
definition_unitary = Operator(gate.definition).data
self.assertTrue(matrix_equal(definition_unitary, gate_matrix))
self.assertTrue(is_unitary_matrix(gate_matrix))
@ddt
class TestQubitKeywordArgRenaming(QiskitTestCase):
"""Test renaming of qubit keyword args on standard instructions."""
# pylint: disable=bad-whitespace
@unpack
@data(
('h', HGate, 0, [('q', 'qubit')]),
('ch', CHGate, 0, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('id', IGate, 0, [('q', 'qubit')]),
('r', RGate, 2, [('q', 'qubit')]),
('rx', RXGate, 1, [('q', 'qubit')]),
('crx', CRXGate, 1, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('ry', RYGate, 1, [('q', 'qubit')]),
('cry', CRYGate, 1, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('rz', RZGate, 1, [('q', 'qubit')]),
('crz', CRZGate, 1, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('s', SGate, 0, [('q', 'qubit')]),
('sdg', SdgGate, 0, [('q', 'qubit')]),
('cswap',
CSwapGate,
0,
[('ctl', 'control_qubit'),
('tgt1', 'target_qubit1'),
('tgt2', 'target_qubit2')]),
('t', TGate, 0, [('q', 'qubit')]),
('tdg', TdgGate, 0, [('q', 'qubit')]),
('u1', U1Gate, 1, [('q', 'qubit')]),
('cu1', CU1Gate, 1, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('u2', U2Gate, 2, [('q', 'qubit')]),
('u3', U3Gate, 3, [('q', 'qubit')]),
('cu3', CU3Gate, 3, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('x', XGate, 0, [('q', 'qubit')]),
('cx', CXGate, 0, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('ccx',
CCXGate,
0,
[('ctl1', 'control_qubit1'),
('ctl2', 'control_qubit2'),
('tgt', 'target_qubit')]),
('y', YGate, 0, [('q', 'qubit')]),
('cy', CYGate, 0, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
('z', ZGate, 0, [('q', 'qubit')]),
('cz', CZGate, 0, [('ctl', 'control_qubit'), ('tgt', 'target_qubit')]),
)
# pylint: enable=bad-whitespace
def test_kwarg_deprecation(self, instr_name, inst_class, n_params, kwarg_map):
# Verify providing *args is unchanged
num_qubits = len(kwarg_map)
qr = QuantumRegister(num_qubits)
qc = QuantumCircuit(qr)
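# ParameterVector('theta', n) yields n symbolic parameters (theta[0], ...,
# theta[n-1]) that stand in for the gate's angle arguments in these checks.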
params = ParameterVector('theta', n_params)
getattr(qc, instr_name)(*params[:], *qr[:])
op, qargs, cargs = qc.data[0]
self.assertIsInstance(op, inst_class)
self.assertEqual(op.params, params[:])
self.assertEqual(qargs, qr[:])
self.assertEqual(cargs, [])
# Verify providing old_arg raises a DeprecationWarning
num_qubits = len(kwarg_map)
qr = QuantumRegister(num_qubits)
qc = QuantumCircuit(qr)
params = ParameterVector('theta', n_params)
with self.assertWarns(DeprecationWarning):
getattr(qc, instr_name)(*params[:],
**{keyword[0]: qubit
for keyword, qubit
in zip(kwarg_map, qr[:])})
op, qargs, cargs = qc.data[0]
self.assertIsInstance(op, inst_class)
self.assertEqual(op.params, params[:])
self.assertEqual(qargs, qr[:])
self.assertEqual(cargs, [])
# Verify providing new_arg does not raise a DeprecationWarning
num_qubits = len(kwarg_map)
qr = QuantumRegister(num_qubits)
qc = QuantumCircuit(qr)
params = ParameterVector('theta', n_params)
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
getattr(qc, instr_name)(*params[:],
**{keyword[1]: qubit
for keyword, qubit
in zip(kwarg_map, qr[:])})
self.assertEqual(len(w), 0)
op, qargs, cargs = qc.data[0]
self.assertIsInstance(op, inst_class)
self.assertEqual(op.params, params[:])
self.assertEqual(qargs, qr[:])
self.assertEqual(cargs, [])
if __name__ == '__main__':
unittest.main(verbosity=2)
| 44.88629 | 97 | 0.629748 | 9,394 | 69,080 | 4.522248 | 0.033638 | 0.072595 | 0.156066 | 0.174074 | 0.920343 | 0.906572 | 0.889106 | 0.84118 | 0.798479 | 0.731015 | 0 | 0.031084 | 0.212956 | 69,080 | 1,538 | 98 | 44.915475 | 0.75028 | 0.036986 | 0 | 0.511444 | 0 | 0 | 0.017942 | 0 | 0 | 0 | 0 | 0.0013 | 0.515391 | 1 | 0.15075 | false | 0 | 0.012628 | 0 | 0.167324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8ac73d9cf9e0f8190239eb70a9588e32acc449be | 2,601 | py | Python | test/test_user_permissions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | ["Apache-2.0"] | null | null | null | test/test_user_permissions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | ["Apache-2.0"] | null | null | null | test/test_user_permissions_api.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | ["Apache-2.0"] | null | null | null |
# coding: utf-8
"""
Octopus Server API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 2019.6.7+Branch.tags-2019.6.7.Sha.aa18dc6809953218c66f57eff7d26481d9b23d6a
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import octopus_deploy_swagger_client
from octopus_deploy_swagger_client.api.user_permissions_api import UserPermissionsApi  # noqa: E501
from octopus_deploy_swagger_client.rest import ApiException
class TestUserPermissionsApi(unittest.TestCase):
"""UserPermissionsApi unit test stubs"""
def setUp(self):
self.api = octopus_deploy_swagger_client.api.user_permissions_api.UserPermissionsApi()  # noqa: E501
def tearDown(self):
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_action(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_action
"""
pass
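# A sketch of how this stub could be exercised, assuming a client configured
# elsewhere with a server URL and API key (the method name is the generated
# operation under test):
#     result = self.api.custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_action()
#     self.assertIsNotNone(result)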
def test_custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_action_spaces(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_action_spaces
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_configuration_action(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_configuration_action
"""
pass
def test_custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_configuration_action_spaces(self):
"""Test case for custom_action_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_configuration_action_spaces
"""
pass
def test_file_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_export_action(self):
"""Test case for file_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_export_action
"""
pass
def test_file_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_export_action_spaces(self):
"""Test case for file_response_descriptor_octopus_server_web_api_actions_users_user_get_permissions_export_action_spaces
"""
pass
if __name__ == '__main__':
unittest.main()
| 36.633803 | 144 | 0.809689 | 327 | 2,601 | 5.831804 | 0.211009 | 0.088621 | 0.157315 | 0.195071 | 0.737284 | 0.737284 | 0.698479 | 0.658626 | 0.658626 | 0.658626 | 0 | 0.021457 | 0.139946 | 2,601 | 70 | 145 | 37.157143 | 0.831024 | 0.443676 | 0 | 0.291667 | 1 | 0 | 0.005789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.291667 | 0.208333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
8acb14c0cf6ecbe936ac0ace55d5c6d3f1ac5c67 | 62,429 | py | Python | sdk/purview/azure-purview-catalog/azure/purview/catalog/rest/types/_request_builders_py3.py | saikrishna563/azure-sdk-for-python | 2b280f5175aa690def433e34ec5f3be9e240b137 | ["MIT"] | null | null | null | sdk/purview/azure-purview-catalog/azure/purview/catalog/rest/types/_request_builders_py3.py | saikrishna563/azure-sdk-for-python | 2b280f5175aa690def433e34ec5f3be9e240b137 | ["MIT"] | null | null | null | sdk/purview/azure-purview-catalog/azure/purview/catalog/rest/types/_request_builders_py3.py | saikrishna563/azure-sdk-for-python | 2b280f5175aa690def433e34ec5f3be9e240b137 | ["MIT"] | null | null | null |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, Dict, IO, List, Optional, TYPE_CHECKING, Union
from azure.core.pipeline.transport._base import _format_url_section
from azure.purview.catalog.core.rest import HttpRequest
from msrest import Serializer
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any
_SERIALIZER = Serializer()
def build_get_classification_def_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the classification definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the classification.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/classificationdef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
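# A minimal usage sketch for these request builders, following the
# send_request flow referenced in the docstrings. The account name is a
# placeholder and azure-identity is assumed to be installed:
#
#     from azure.identity import DefaultAzureCredential
#     from azure.purview.catalog import PurviewCatalogClient
#
#     client = PurviewCatalogClient(
#         endpoint="https://<my-account>.purview.azure.com",  # placeholder account
#         credential=DefaultAzureCredential(),
#     )
#     request = build_get_classification_def_by_guid_request(guid="<classification-guid>")
#     response = client.send_request(request)
#     response.raise_for_status()
#     classification_def = response.json()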
def build_get_classification_def_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the classification definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the classification.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/classificationdef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_entity_definition_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the Entity definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the entity.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/entitydef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_entity_definition_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the entity definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the entity.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/entitydef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_enum_def_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the enum definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the enum.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/enumdef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_enum_def_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the enum definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the enum.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/enumdef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_relationship_def_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the relationship definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the relationship.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/relationshipdef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_relationship_def_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the relationship definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the relationship.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/relationshipdef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_struct_def_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the struct definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the struct.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/structdef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_struct_def_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the struct definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the struct.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/structdef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_type_definition_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the type definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the type.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_type_definition_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the type definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the type.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_delete_type_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Delete API for type identified by its name.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the type.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
"""
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
return HttpRequest(
method="DELETE",
url=url,
**kwargs
)
def build_get_all_type_definitions_request(
*,
include_term_template: Optional[bool] = False,
type: Optional[Union[str, "_models.Type"]] = None,
**kwargs: Any
) -> HttpRequest:
"""Get all type definitions in Atlas in bulk.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:keyword include_term_template: Whether to include termtemplatedef when returning all typedefs.
This is always true when the search filter type=term_template.
:paramtype include_term_template: bool
:keyword type: Typedef name to use as a search filter when getting typedefs.
:paramtype type: str or ~azure.purview.catalog.models.Type
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedefs')
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if include_term_template is not None:
query_parameters['includeTermTemplate'] = _SERIALIZER.query("include_term_template", include_term_template, 'bool')
if type is not None:
query_parameters['type'] = _SERIALIZER.query("type", type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
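# --- Editorial usage sketch (added for illustration; not generated code). ---
# The keyword-only arguments above are serialized as query parameters. The value
# "enum" for `type` is an assumption about ~azure.purview.catalog.models.Type;
# any typedef category accepted by the service is passed the same way.
def _example_list_enum_typedefs_request() -> HttpRequest:
    return build_get_all_type_definitions_request(
        include_term_template=False,  # becomes includeTermTemplate=false
        type="enum",                  # becomes type=enum
    )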
def build_create_type_definitions_request(
*,
json: Any = None,
content: Any = None,
**kwargs: Any
) -> HttpRequest:
"""Create all atlas type definitions in bulk, only new definitions will be created.
Any changes to the existing definitions will be discarded.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:keyword json: A composite wrapper object with corresponding lists of the type definition.
:paramtype json: Any
:keyword content: A composite wrapper object with corresponding lists of the type definition.
:paramtype content: Any
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your `json` input.
json = {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
# response body for status code(s): 200
response_body == {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
"""
content_type = kwargs.pop("content_type", None)
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedefs')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
json=json,
content=content,
**kwargs
)
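# --- Editorial usage sketch (added for illustration; not generated code). ---
# A minimal body following the JSON input template above. The "name" field is an
# assumption taken from the Atlas typedef model; the template only lists the
# optional fields, but a typedef cannot be created without a name.
def _example_create_enum_typedef_request() -> HttpRequest:
    body = {
        "enumDefs": [
            {
                "name": "data_quality_rating",  # hypothetical typedef name
                "defaultValue": "UNRATED",
                "elementDefs": [
                    {"ordinal": 0, "value": "UNRATED"},
                    {"ordinal": 1, "value": "APPROVED"},
                ],
            }
        ]
    }
    return build_create_type_definitions_request(
        json=body,
        content_type="application/json",  # popped from kwargs by the builder
    )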
def build_update_atlas_type_definitions_request(
*,
json: Any = None,
content: Any = None,
**kwargs: Any
) -> HttpRequest:
"""Update all types in bulk, changes detected in the type definitions would be persisted.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:keyword json: A composite object that captures all type definition changes.
:paramtype json: Any
:keyword content: A composite object that captures all type definition changes.
:paramtype content: Any
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your `json` input.
json = {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
# response body for status code(s): 200
response_body == {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
"""
content_type = kwargs.pop("content_type", None)
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedefs')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="PUT",
url=url,
headers=header_parameters,
json=json,
content=content,
**kwargs
)
def build_delete_type_definitions_request(
*,
json: Any = None,
content: Any = None,
**kwargs: Any
) -> HttpRequest:
"""Delete API for all types in bulk.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:keyword json: A composite object that captures all types to be deleted.
:paramtype json: Any
:keyword content: A composite object that captures all types to be deleted.
:paramtype content: Any
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your `json` input.
json = {
"classificationDefs": [
{
"entityTypes": [
"str (optional)"
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"entityDefs": [
{
"relationshipAttributeDefs": [
{
"isLegacyAttribute": "bool (optional)",
"relationshipTypeName": "str (optional)"
}
],
"subTypes": [
"str (optional)"
],
"superTypes": [
"str (optional)"
]
}
],
"enumDefs": [
{
"defaultValue": "str (optional)",
"elementDefs": [
{
"description": "str (optional)",
"ordinal": "float (optional)",
"value": "str (optional)"
}
]
}
],
"relationshipDefs": [
{
"endDef1": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"endDef2": {
"cardinality": "str (optional)",
"description": "str (optional)",
"isContainer": "bool (optional)",
"isLegacyAttribute": "bool (optional)",
"name": "str (optional)",
"type": "str (optional)"
},
"relationshipCategory": "str (optional)",
"relationshipLabel": "str (optional)"
}
],
"structDefs": [
{
"attributeDefs": [
{
"cardinality": "str (optional)",
"constraints": [
{
"params": {
"str": "object (optional)"
},
"type": "str (optional)"
}
],
"defaultValue": "str (optional)",
"description": "str (optional)",
"includeInNotification": "bool (optional)",
"isIndexable": "bool (optional)",
"isOptional": "bool (optional)",
"isUnique": "bool (optional)",
"name": "str (optional)",
"options": {
"str": "str (optional)"
},
"typeName": "str (optional)",
"valuesMaxCount": "int (optional)",
"valuesMinCount": "int (optional)"
}
]
}
],
"termTemplateDefs": [
{}
]
}
"""
content_type = kwargs.pop("content_type", None)
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedefs')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
return HttpRequest(
method="DELETE",
url=url,
headers=header_parameters,
json=json,
content=content,
**kwargs
)
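# --- Editorial note and sketch (added for illustration; not generated code). ---
# Unusually for HTTP, this bulk delete carries a JSON body on a DELETE request,
# which is why the builder accepts `json`/`content` and an optional Content-Type
# but sets no Accept header (no response-body example is documented above).
def _example_delete_typedefs_request(defs: Any) -> HttpRequest:
    return build_delete_type_definitions_request(
        json=defs,  # e.g. the composite object previously fetched via a GET
        content_type="application/json",
    )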
def build_list_type_definition_headers_request(
*,
include_term_template: Optional[bool] = False,
type: Optional[Union[str, "_models.Type"]] = None,
**kwargs: Any
) -> HttpRequest:
"""List all type definitions returned as a list of minimal information header.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:keyword include_term_template: Whether to include termtemplatedef when returning all typedefs.
This is always true when the search filter type=term_template.
:paramtype include_term_template: bool
:keyword type: Typedef name to use as a search filter when getting typedefs.
:paramtype type: str or ~azure.purview.catalog.models.Type
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == [
{
"category": "str (optional)",
"guid": "str (optional)",
"name": "str (optional)"
}
]
"""
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/atlas/v2/types/typedefs/headers')
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if include_term_template is not None:
query_parameters['includeTermTemplate'] = _SERIALIZER.query("include_term_template", include_term_template, 'bool')
if type is not None:
query_parameters['type'] = _SERIALIZER.query("type", type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_get_term_template_def_by_guid_request(
guid: str,
**kwargs: Any
) -> HttpRequest:
"""Get the term template definition for the given GUID.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param guid: The globally unique identifier of the term template.
:type guid: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {}
"""
api_version = "2021-05-01-preview"
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/types/termtemplatedef/guid/{guid}')
path_format_arguments = {
'guid': _SERIALIZER.url("guid", guid, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['api-version'] = _SERIALIZER.query("api_version", api_version, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_get_term_template_def_by_name_request(
name: str,
**kwargs: Any
) -> HttpRequest:
"""Get the term template definition by its name (unique).
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder into your code flow.
:param name: The name of the term template.
:type name: str
:return: Returns an :class:`~azure.purview.catalog.core.rest.HttpRequest` that you will pass to the client's `send_request` method.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this response into your code flow.
:rtype: ~azure.purview.catalog.core.rest.HttpRequest
Example:
.. code-block:: python
# response body for status code(s): 200
response_body == {}
"""
api_version = "2021-05-01-preview"
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/types/termtemplatedef/name/{name}')
path_format_arguments = {
'name': _SERIALIZER.url("name", name, 'str', max_length=4096, min_length=1),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['api-version'] = _SERIALIZER.query("api_version", api_version, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
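# --- Editorial note (added for illustration; not generated code). ---
# Unlike the /atlas/v2 builders above, the two term-template builders pin
# api_version to "2021-05-01-preview" and append it as a query parameter
# themselves, so callers never pass it explicitly.
def _example_term_template_request_url(name: str) -> str:
    # The returned URL already carries api-version=2021-05-01-preview.
    return build_get_term_template_def_by_name_request(name=name).url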
| 37.562575 | 135 | 0.464047 | 4,973 | 62,429 | 5.729338 | 0.049467 | 0.081461 | 0.028675 | 0.033097 | 0.96413 | 0.961779 | 0.957321 | 0.956584 | 0.951951 | 0.951109 | 0 | 0.005017 | 0.425283 | 62,429 | 1,661 | 136 | 37.58519 | 0.789097 | 0.703663 | 0 | 0.797927 | 0 | 0 | 0.154333 | 0.051259 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051813 | false | 0 | 0.012953 | 0 | 0.11658 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
76d6095ba2e1de4cb7ff7e982c33887b6325fc1d | 2,998 | py | Python | tests/test_icons.py | alex-oleshkevich/python-tabler-icons | 548a1c8737454fa40c552261e2964eb1e23bdb83 | [
"MIT"
] | null | null | null | tests/test_icons.py | alex-oleshkevich/python-tabler-icons | 548a1c8737454fa40c552261e2964eb1e23bdb83 | [
"MIT"
] | null | null | null | tests/test_icons.py | alex-oleshkevich/python-tabler-icons | 548a1c8737454fa40c552261e2964eb1e23bdb83 | [
"MIT"
] | null | null | null |
# noqa: E501
import pytest
from tabler_icons.icons import IconDoesNotExists, extract_icon, get_icon
def test_extracts_icon() -> None:
assert (
extract_icon('arrow-narrow-right')
== """
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-arrow-narrow-right" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<line x1="5" y1="12" x2="19" y2="12" />
<line x1="15" y1="16" x2="19" y2="12" />
<line x1="15" y1="8" x2="19" y2="12" />
</svg>
""".strip() #
)
def test_raises_for_missing_icon() -> None:
with pytest.raises(IconDoesNotExists, match='The icon _missing-icon does not exist.'):
extract_icon('_missing-icon')
def test_get_icon() -> None:
assert (
get_icon('arrow-narrow-right')
== """
<svg class="tabler-icon tabler-icon-arrow-narrow-right" width="20" height="20" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none" />
<line x1="5" y1="12" x2="19" y2="12" />
<line x1="15" y1="16" x2="19" y2="12" />
<line x1="15" y1="8" x2="19" y2="12" />
</svg>
""".strip()
)
def test_get_icon_with_custom_size() -> None:
assert (
get_icon('arrow-narrow-right', 24)
== """
<svg class="tabler-icon tabler-icon-arrow-narrow-right" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none" />
<line x1="5" y1="12" x2="19" y2="12" />
<line x1="15" y1="16" x2="19" y2="12" />
<line x1="15" y1="8" x2="19" y2="12" />
</svg>
""".strip()
)
def test_get_icon_with_custom_svg_attributes() -> None:
assert (
get_icon('arrow-narrow-right', 24, style="color: red")
== """
<svg class="tabler-icon tabler-icon-arrow-narrow-right" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round" style="color: red">
<path stroke="none" d="M0 0h24v24H0z" fill="none" />
<line x1="5" y1="12" x2="19" y2="12" />
<line x1="15" y1="16" x2="19" y2="12" />
<line x1="15" y1="8" x2="19" y2="12" />
</svg>
""".strip()
)
def test_get_icon_with_custom_path_attributes() -> None:
assert (
get_icon('arrow-narrow-right', 24, stroke_width=5)
== """
<svg class="tabler-icon tabler-icon-arrow-narrow-right" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none" stroke-width="5" />
<line x1="5" y1="12" x2="19" y2="12" />
<line x1="15" y1="16" x2="19" y2="12" />
<line x1="15" y1="8" x2="19" y2="12" />
</svg>
""".strip()
)
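# --- Editorial usage sketch (not part of the test suite). ---
# Grounded in the API exercised above: get_icon(name, size, **svg_attrs) returns
# an SVG string, and unknown names raise IconDoesNotExists (demonstrated for
# extract_icon; get_icon is assumed to propagate it, since it wraps extraction).
def render_icon_or_fallback(name: str, size: int = 20) -> str:
    try:
        return get_icon(name, size)
    except IconDoesNotExists:
        return get_icon('arrow-narrow-right', size)  # known-good fallback name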
| 37.475 | 237 | 0.623082 | 466 | 2,998 | 3.929185 | 0.152361 | 0.049153 | 0.049153 | 0.065538 | 0.798471 | 0.785909 | 0.785909 | 0.767886 | 0.748771 | 0.699618 | 0 | 0.10922 | 0.157105 | 2,998 | 79 | 238 | 37.949367 | 0.615354 | 0.003336 | 0 | 0.630769 | 0 | 0.092308 | 0.725628 | 0.163819 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.092308 | true | 0 | 0.030769 | 0 | 0.123077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
76de31ff999614d11b9aa2ba77bb49ed6b7d9b0b | 2,199 | py | Python | test/programytest/parser/template/graph_tests/test_datetime.py | ItsPhant/program-y | c2b211fcaf8cedc7d6d95a8ea9470a913efa1622 | [
"MIT"
] | null | null | null | test/programytest/parser/template/graph_tests/test_datetime.py | ItsPhant/program-y | c2b211fcaf8cedc7d6d95a8ea9470a913efa1622 | [
"MIT"
] | null | null | null | test/programytest/parser/template/graph_tests/test_datetime.py | ItsPhant/program-y | c2b211fcaf8cedc7d6d95a8ea9470a913efa1622 | [
"MIT"
] | 1 | 2020-02-21T17:58:05.000Z | 2020-02-21T17:58:05.000Z |
import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.date import TemplateDateNode
from programytest.parser.template.graph_tests.graph_test_client import TemplateGraphTestClient
class TemplateGraphDateTests(TemplateGraphTestClient):
def test_date_format_as_attrib(self):
template = ET.fromstring("""
<template>
<date format="%c" />
</template>
""")
ast = self.parser.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertEqual(len(ast.children), 1)
set_node = ast.children[0]
self.assertIsNotNone(set_node)
self.assertIsInstance(set_node, TemplateDateNode)
self.assertIsNotNone(ast.resolve(self.test_bot, self.test_clientid))
def test_date_format_as_attrib_full(self):
template = ET.fromstring("""
<template>
<date format="%c"></date>
</template>
""")
ast = self.parser.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertEqual(len(ast.children), 1)
set_node = ast.children[0]
self.assertIsNotNone(set_node)
self.assertIsInstance(set_node, TemplateDateNode)
self.assertIsNotNone(ast.resolve(self.test_bot, self.test_clientid))
def test_date_format_as_attrib_default(self):
template = ET.fromstring("""
<template>
<date/>
</template>
""")
ast = self.parser.parse_template_expression(template)
self.assertIsNotNone(ast)
self.assertIsInstance(ast, TemplateNode)
self.assertIsNotNone(ast.children)
self.assertEqual(len(ast.children), 1)
set_node = ast.children[0]
self.assertIsNotNone(set_node)
self.assertIsInstance(set_node, TemplateDateNode)
self.assertIsNotNone(ast.resolve(self.test_bot, self.test_clientid))
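# --- Editorial refactoring sketch (not part of the test suite). ---
# The three tests above differ only in how the <date> element is written; their
# shared assertions could be collapsed into one helper like this:
def _assert_date_template(test: TemplateGraphDateTests, template_xml: str) -> None:
    ast = test.parser.parse_template_expression(ET.fromstring(template_xml))
    test.assertIsNotNone(ast)
    test.assertIsInstance(ast, TemplateNode)
    test.assertIsNotNone(ast.children)
    test.assertEqual(len(ast.children), 1)
    set_node = ast.children[0]
    test.assertIsNotNone(set_node)
    test.assertIsInstance(set_node, TemplateDateNode)
    test.assertIsNotNone(ast.resolve(test.test_bot, test.test_clientid))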
| 35.467742 | 94 | 0.664848 | 226 | 2,199 | 6.300885 | 0.20354 | 0.160112 | 0.139045 | 0.035815 | 0.844101 | 0.800562 | 0.760534 | 0.760534 | 0.70014 | 0.70014 | 0 | 0.003569 | 0.235562 | 2,199 | 61 | 95 | 36.04918 | 0.843546 | 0 | 0 | 0.78 | 0 | 0 | 0.128753 | 0 | 0 | 0 | 0 | 0 | 0.42 | 1 | 0.06 | false | 0 | 0.08 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0a2609225c6a1e74eb801649a9dcb35a48a8f17c | 90 | py | Python | tests/conftest.py | bressanmarcos/pade-plus | b879a3c543f6c291a8779879efdc8119ce8ed0d5 | [
"MIT"
] | null | null | null | tests/conftest.py | bressanmarcos/pade-plus | b879a3c543f6c291a8779879efdc8119ce8ed0d5 | [
"MIT"
] | null | null | null | tests/conftest.py | bressanmarcos/pade-plus | b879a3c543f6c291a8779879efdc8119ce8ed0d5 | [
"MIT"
] | null | null | null |
from pade.plus.testing import start_loop_test
from pade.plus.testing import start_runtime
| 30 | 45 | 0.866667 | 15 | 90 | 5 | 0.6 | 0.213333 | 0.32 | 0.506667 | 0.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 90 | 2 | 46 | 45 | 0.914634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
0a2fda47811f341a9ad8a1c164ab267702499eb7 | 43 | py | Python | unicorn_fy/__init__.py | lingster/unicorn-fy | 4877bc85ca97b6ff9c75efdfc8e97d2156548130 | [
"MIT"
] | 19 | 2020-12-03T22:44:44.000Z | 2021-12-01T21:23:50.000Z | unicorn_fy/__init__.py | lingster/unicorn-fy | 4877bc85ca97b6ff9c75efdfc8e97d2156548130 | [
"MIT"
] | 8 | 2021-03-16T20:49:33.000Z | 2021-11-15T14:35:48.000Z | unicorn_fy/__init__.py | lingster/unicorn-fy | 4877bc85ca97b6ff9c75efdfc8e97d2156548130 | [
"MIT"
] | 7 | 2021-01-29T04:58:13.000Z | 2021-05-15T22:16:19.000Z | from unicorn_fy.unicorn_fy import UnicornFy | 43 | 43 | 0.906977 | 7 | 43 | 5.285714 | 0.714286 | 0.486486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0a53e1b07c02a04191553f7ed8bcefdea34b29ac | 6,761 | py | Python | mlops-template-gitlab/lambda_functions/lambda-seedcode-checkin-gitlab/gitlab/v4/objects/milestones.py | giuseppe-zappia/sagemaker-custom-project-templates | a160cf250dcabf8a9a14682e28d0a39df18e3a5c | [
"MIT-0"
] | 22 | 2021-08-24T13:43:55.000Z | 2022-03-25T06:18:19.000Z | mlops-template-gitlab/lambda_functions/lambda-seedcode-checkin-gitlab/gitlab/v4/objects/milestones.py | giuseppe-zappia/sagemaker-custom-project-templates | a160cf250dcabf8a9a14682e28d0a39df18e3a5c | [
"MIT-0"
] | 3 | 2021-09-09T00:40:56.000Z | 2022-01-26T10:53:30.000Z | mlops-template-gitlab/lambda_functions/lambda-seedcode-checkin-gitlab/gitlab/v4/objects/milestones.py | giuseppe-zappia/sagemaker-custom-project-templates | a160cf250dcabf8a9a14682e28d0a39df18e3a5c | [
"MIT-0"
] | 15 | 2021-08-19T23:53:24.000Z | 2022-03-28T22:26:04.000Z |
from gitlab import cli
from gitlab import exceptions as exc
from gitlab import types
from gitlab.base import RequiredOptional, RESTManager, RESTObject, RESTObjectList
from gitlab.mixins import CRUDMixin, ObjectDeleteMixin, SaveMixin
from .issues import GroupIssue, GroupIssueManager, ProjectIssue, ProjectIssueManager
from .merge_requests import (
GroupMergeRequest,
ProjectMergeRequest,
ProjectMergeRequestManager,
)
__all__ = [
"GroupMilestone",
"GroupMilestoneManager",
"ProjectMilestone",
"ProjectMilestoneManager",
]
class GroupMilestone(SaveMixin, ObjectDeleteMixin, RESTObject):
_short_print_attr = "title"
@cli.register_custom_action("GroupMilestone")
@exc.on_http_error(exc.GitlabListError)
def issues(self, **kwargs):
"""List issues related to this milestone.
Args:
all (bool): If True, return all the items, without pagination
per_page (int): Number of items to retrieve per request
page (int): ID of the page to return (starts with page 1)
as_list (bool): If set to False and no pagination option is
defined, return a generator instead of a list
**kwargs: Extra options to send to the server (e.g. sudo)
Raises:
GitlabAuthenticationError: If authentication is not correct
GitlabListError: If the list could not be retrieved
Returns:
RESTObjectList: The list of issues
"""
path = "%s/%s/issues" % (self.manager.path, self.get_id())
data_list = self.manager.gitlab.http_list(path, as_list=False, **kwargs)
manager = GroupIssueManager(self.manager.gitlab, parent=self.manager._parent)
# FIXME(gpocentek): the computed manager path is not correct
return RESTObjectList(manager, GroupIssue, data_list)
@cli.register_custom_action("GroupMilestone")
@exc.on_http_error(exc.GitlabListError)
def merge_requests(self, **kwargs):
"""List the merge requests related to this milestone.
Args:
all (bool): If True, return all the items, without pagination
per_page (int): Number of items to retrieve per request
page (int): ID of the page to return (starts with page 1)
as_list (bool): If set to False and no pagination option is
defined, return a generator instead of a list
**kwargs: Extra options to send to the server (e.g. sudo)
Raises:
GitlabAuthenticationError: If authentication is not correct
GitlabListError: If the list could not be retrieved
Returns:
RESTObjectList: The list of merge requests
"""
path = "%s/%s/merge_requests" % (self.manager.path, self.get_id())
data_list = self.manager.gitlab.http_list(path, as_list=False, **kwargs)
manager = GroupIssueManager(self.manager.gitlab, parent=self.manager._parent)
# FIXME(gpocentek): the computed manager path is not correct
return RESTObjectList(manager, GroupMergeRequest, data_list)
class GroupMilestoneManager(CRUDMixin, RESTManager):
_path = "/groups/%(group_id)s/milestones"
_obj_cls = GroupMilestone
_from_parent_attrs = {"group_id": "id"}
_create_attrs = RequiredOptional(
required=("title",), optional=("description", "due_date", "start_date")
)
_update_attrs = RequiredOptional(
optional=("title", "description", "due_date", "start_date", "state_event"),
)
_list_filters = ("iids", "state", "search")
_types = {"iids": types.ListAttribute}
class ProjectMilestone(SaveMixin, ObjectDeleteMixin, RESTObject):
_short_print_attr = "title"
@cli.register_custom_action("ProjectMilestone")
@exc.on_http_error(exc.GitlabListError)
def issues(self, **kwargs):
"""List issues related to this milestone.
Args:
all (bool): If True, return all the items, without pagination
per_page (int): Number of items to retrieve per request
page (int): ID of the page to return (starts with page 1)
as_list (bool): If set to False and no pagination option is
defined, return a generator instead of a list
**kwargs: Extra options to send to the server (e.g. sudo)
Raises:
GitlabAuthenticationError: If authentication is not correct
GitlabListError: If the list could not be retrieved
Returns:
RESTObjectList: The list of issues
"""
path = "%s/%s/issues" % (self.manager.path, self.get_id())
data_list = self.manager.gitlab.http_list(path, as_list=False, **kwargs)
manager = ProjectIssueManager(self.manager.gitlab, parent=self.manager._parent)
# FIXME(gpocentek): the computed manager path is not correct
return RESTObjectList(manager, ProjectIssue, data_list)
@cli.register_custom_action("ProjectMilestone")
@exc.on_http_error(exc.GitlabListError)
def merge_requests(self, **kwargs):
"""List the merge requests related to this milestone.
Args:
all (bool): If True, return all the items, without pagination
per_page (int): Number of items to retrieve per request
page (int): ID of the page to return (starts with page 1)
as_list (bool): If set to False and no pagination option is
defined, return a generator instead of a list
**kwargs: Extra options to send to the server (e.g. sudo)
Raises:
GitlabAuthenticationError: If authentication is not correct
GitlabListError: If the list could not be retrieved
Returns:
RESTObjectList: The list of merge requests
"""
path = "%s/%s/merge_requests" % (self.manager.path, self.get_id())
data_list = self.manager.gitlab.http_list(path, as_list=False, **kwargs)
manager = ProjectMergeRequestManager(
self.manager.gitlab, parent=self.manager._parent
)
# FIXME(gpocentek): the computed manager path is not correct
return RESTObjectList(manager, ProjectMergeRequest, data_list)
class ProjectMilestoneManager(CRUDMixin, RESTManager):
_path = "/projects/%(project_id)s/milestones"
_obj_cls = ProjectMilestone
_from_parent_attrs = {"project_id": "id"}
_create_attrs = RequiredOptional(
required=("title",),
optional=("description", "due_date", "start_date", "state_event"),
)
_update_attrs = RequiredOptional(
optional=("title", "description", "due_date", "start_date", "state_event"),
)
_list_filters = ("iids", "state", "search")
_types = {"iids": types.ListAttribute}
| 40.975758 | 87 | 0.665878 | 788 | 6,761 | 5.581218 | 0.163706 | 0.040018 | 0.021828 | 0.020919 | 0.819463 | 0.810823 | 0.807185 | 0.804911 | 0.804911 | 0.804911 | 0 | 0.000783 | 0.244195 | 6,761 | 164 | 88 | 41.22561 | 0.859883 | 0.391066 | 0 | 0.447368 | 0 | 0 | 0.136945 | 0.029948 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0.052632 | false | 0 | 0.092105 | 0 | 0.460526 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0a5676b6b77fe44490b3f92eafb496aa988a10ab | 45 | py | Python | __init__.py | channolanp/QtScript | 7c7f39b7018194b92c1b191bf0ccff3212d3ee53 | [
"MIT"
] | null | null | null | __init__.py | channolanp/QtScript | 7c7f39b7018194b92c1b191bf0ccff3212d3ee53 | [
"MIT"
] | null | null | null | __init__.py | channolanp/QtScript | 7c7f39b7018194b92c1b191bf0ccff3212d3ee53 | [
"MIT"
] | null | null | null |
from . import widgets
from . import QtScript
| 15 | 22 | 0.777778 | 6 | 45 | 5.833333 | 0.666667 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6a7721f218f533ed0efb6f9284bc5fb7623f62e8 | 31,352 | py | Python | examples/libs/rlottie/lv_example_rlottie_approve.py | embeddedt/lvgl | bc826c703f0dcf6474cc450f19c8dbeda4e84ac8 | [
"MIT"
] | 8 | 2022-02-11T08:20:49.000Z | 2022-03-22T06:19:59.000Z | examples/libs/rlottie/lv_example_rlottie_approve.py | embeddedt/lvgl | bc826c703f0dcf6474cc450f19c8dbeda4e84ac8 | [
"MIT"
] | 2 | 2022-03-22T03:22:45.000Z | 2022-03-22T06:09:13.000Z | examples/libs/rlottie/lv_example_rlottie_approve.py | embeddedt/lvgl | bc826c703f0dcf6474cc450f19c8dbeda4e84ac8 | [
"MIT"
] | 3 | 2021-07-25T15:22:44.000Z | 2022-01-07T13:47:59.000Z |
'''
The original JSON string is converted to a hex array because C99 only requires compilers to support string literals of up to 4095 characters.
lvgl/scripts/filetohex.py is used to convert a JSON file to a hex array. E.g.
./filetohex.py my_lottie.json > output.txt
If your compiler supports very long strings, you can do
const char * my_lottie = "JSON data here";
But be sure to replace all " characters with \".
'''
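# An editorial sketch of the conversion described in the docstring above; the
# exact output formatting of lvgl's scripts/filetohex.py may differ.
def _file_to_hex_array(path: str, per_line: int = 16) -> str:
    with open(path, "rb") as f:
        data = f.read()
    rows = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        # one "0xNN, 0xNN, ..." row per `per_line` bytes
        rows.append(", ".join("0x%02x" % b for b in chunk) + ",")
    return "\n".join(rows)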
lv_example_rlottie_approve = bytes([
0x7b, 0x22, 0x76, 0x22, 0x3a, 0x22, 0x34, 0x2e, 0x38, 0x2e, 0x30, 0x22, 0x2c, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x67, 0x22, 0x3a, 0x22, 0x4c, 0x6f, 0x74, 0x74, 0x69, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x73, 0x20, 0x41, 0x45, 0x20, 0x31, 0x2e, 0x30, 0x2e, 0x30, 0x22, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x22, 0x22, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x22, 0x22, 0x2c, 0x22, 0x64, 0x22, 0x3a, 0x22, 0x22, 0x2c, 0x22, 0x74, 0x63, 0x22, 0x3a, 0x22, 0x22, 0x7d, 0x2c, 0x22, 0x66, 0x72, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x69, 0x70, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6f, 0x70, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x77, 0x22, 0x3a, 0x37, 0x32, 0x30, 0x2c, 0x22, 0x68, 0x22, 0x3a, 0x37, 0x32, 0x30, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x22, 0x2c, 0x22, 0x64, 0x64, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x61, 0x73, 0x73, 0x65, 0x74, 0x73, 0x22, 0x3a, 0x5b, 0x5d, 0x2c, 0x22, 0x6c, 0x61, 0x79, 0x65, 0x72, 0x73, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x64, 0x64, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x20, 0x34, 0x22, 0x2c, 0x22, 0x73, 0x72, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x31, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x30, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x33, 0x33, 0x36, 0x2c, 0x33, 0x39, 0x36, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x7d, 0x2c, 0x22, 0x61, 0x6f, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x68, 0x61, 0x70, 0x65, 0x73, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x67, 0x72, 0x22, 0x2c, 0x22, 0x69, 0x74, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x68, 0x22, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x5b, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x5d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x5b, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x5d, 0x2c, 0x22, 0x76, 0x22, 0x3a, 0x5b, 0x5b, 0x2d, 0x31, 0x32, 0x33, 0x2c, 0x2d, 0x36, 0x36, 0x5d, 0x2c, 0x5b, 0x36, 0x2c, 0x34, 0x35, 0x5d, 0x2c, 0x5b, 0x33, 0x32, 0x31, 0x2c, 0x2d, 0x32, 0x36, 0x34, 0x5d, 0x5d, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x50, 0x61, 0x74, 0x68, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x53, 0x68, 0x61, 
0x70, 0x65, 0x20, 0x2d, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x74, 0x22, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x39, 0x38, 0x30, 0x33, 0x39, 0x32, 0x31, 0x35, 0x36, 0x38, 0x36, 0x2c, 0x30, 0x2e, 0x36, 0x38, 0x36, 0x32, 0x37, 0x34, 0x35, 0x30, 0x39, 0x38, 0x30, 0x34, 0x2c, 0x30, 0x2e, 0x33, 0x31, 0x33, 0x37, 0x32, 0x35,
0x34, 0x39, 0x30, 0x31, 0x39, 0x36, 0x2c, 0x31, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x77, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x35, 0x32, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6c, 0x63, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6c, 0x6a, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x61, 0x70, 0x68, 0x69, 0x63, 0x20, 0x2d, 0x20, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x22, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x37, 0x7d, 0x2c, 0x22, 0x73, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x73, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x66, 0x6f, 0x72, 0x6d, 0x22, 0x7d, 0x5d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6e, 0x70, 0x22, 0x3a, 0x33, 0x2c, 0x22, 0x63, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x6d, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x65, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x33, 0x36, 0x38, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x31, 0x5d, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x35, 0x31, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x22, 0x74, 0x22, 0x3a, 0x31, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x22, 0x3a, 0x34, 0x35, 0x2c, 0x22, 
0x73, 0x22, 0x3a, 0x5b, 0x39, 0x32, 0x5d, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6d, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x69, 0x6d, 0x20, 0x50, 0x61, 0x74, 0x68, 0x73, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65,
0x63, 0x74, 0x6f, 0x72, 0x20, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x20, 0x2d, 0x20, 0x54, 0x72, 0x69, 0x6d, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x70, 0x22, 0x3a, 0x31, 0x30, 0x2c, 0x22, 0x6f, 0x70, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x73, 0x74, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x7d, 0x2c, 0x7b, 0x22, 0x64, 0x64, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x20, 0x33, 0x22, 0x2c, 0x22, 0x73, 0x72, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x31, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x30, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x33, 0x33, 0x36, 0x2c, 0x33, 0x39, 0x36, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x7d, 0x2c, 0x22, 0x61, 0x6f, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x68, 0x61, 0x70, 0x65, 0x73, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x67, 0x72, 0x22, 0x2c, 0x22, 0x69, 0x74, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x68, 0x22, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x5b, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x5d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x5b, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x5d, 0x2c, 0x22, 0x76, 0x22, 0x3a, 0x5b, 0x5b, 0x2d, 0x31, 0x32, 0x33, 0x2c, 0x2d, 0x36, 0x36, 0x5d, 0x2c, 0x5b, 0x36, 0x2c, 0x34, 0x35, 0x5d, 0x2c, 0x5b, 0x33, 0x32, 0x31, 0x2c, 0x2d, 0x32, 0x36, 0x34, 0x5d, 0x5d, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x50, 0x61, 0x74, 0x68, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x2d, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x74, 0x22, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x31, 0x39, 0x36, 0x30, 0x37, 0x38, 0x34, 0x33, 0x31, 0x33, 0x37, 0x2c, 0x30, 0x2e, 0x35, 0x35, 0x36, 0x38, 0x36, 0x32, 0x37, 0x34, 0x35, 0x30, 0x39, 0x38, 0x2c, 0x30, 0x2e, 0x32, 0x33, 0x35, 0x32, 0x39, 0x34, 0x31, 0x31, 0x37, 0x36, 0x34, 0x37, 0x2c, 0x31, 0x5d, 
0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x77, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x34, 0x38, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6c, 0x63, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6c, 0x6a, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22,
0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x61, 0x70, 0x68, 0x69, 0x63, 0x20, 0x2d, 0x20, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x22, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x37, 0x7d, 0x2c, 0x22, 0x73, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x73, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x66, 0x6f, 0x72, 0x6d, 0x22, 0x7d, 0x5d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6e, 0x70, 0x22, 0x3a, 0x33, 0x2c, 0x22, 0x63, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x6d, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x65, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x33, 0x36, 0x38, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x31, 0x5d, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x35, 0x31, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x22, 0x74, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x22, 0x3a, 0x34, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x5d, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6d, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x69, 0x6d, 0x20, 0x50, 0x61, 0x74, 0x68, 0x73, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x46, 0x69, 0x6c, 0x74, 
0x65, 0x72, 0x20, 0x2d, 0x20, 0x54, 0x72, 0x69, 0x6d, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x70, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6f, 0x70, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x73, 0x74, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x7d, 0x2c, 0x7b, 0x22, 0x64, 0x64, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x33, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65,
0x20, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x20, 0x32, 0x22, 0x2c, 0x22, 0x73, 0x72, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x31, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x38, 0x37, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x30, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x33, 0x33, 0x36, 0x2c, 0x33, 0x36, 0x36, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x7d, 0x2c, 0x22, 0x61, 0x6f, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x68, 0x61, 0x70, 0x65, 0x73, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x67, 0x72, 0x22, 0x2c, 0x22, 0x69, 0x74, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x64, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x65, 0x6c, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x35, 0x32, 0x30, 0x2c, 0x35, 0x32, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x45, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x20, 0x50, 0x61, 0x74, 0x68, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x2d, 0x20, 0x45, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x74, 0x22, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x39, 0x38, 0x30, 0x33, 0x39, 0x32, 0x31, 0x35, 0x36, 0x38, 0x36, 0x2c, 0x30, 0x2e, 0x36, 0x38, 0x36, 0x32, 0x37, 0x34, 0x35, 0x30, 0x39, 0x38, 0x30, 0x34, 0x2c, 0x30, 0x2e, 0x33, 0x31, 0x33, 0x37, 0x32, 0x35, 0x34, 0x39, 0x30, 0x31, 0x39, 0x36, 0x2c, 0x31, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x77, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x35, 0x32, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6c, 0x63, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6c, 0x6a, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6c, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x61, 0x70, 0x68, 0x69, 0x63, 0x20, 0x2d, 0x20, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 
0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x22, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c,
0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x37, 0x7d, 0x2c, 0x22, 0x73, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x73, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x66, 0x6f, 0x72, 0x6d, 0x22, 0x7d, 0x5d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x45, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6e, 0x70, 0x22, 0x3a, 0x33, 0x2c, 0x22, 0x63, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x6d, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x65, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x33, 0x33, 0x37, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x31, 0x5d, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x33, 0x33, 0x34, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x22, 0x74, 0x22, 0x3a, 0x31, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x22, 0x3a, 0x34, 0x35, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x38, 0x30, 0x5d, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6d, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x69, 0x6d, 0x20, 0x50, 0x61, 0x74, 0x68, 0x73, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x20, 0x2d, 0x20, 0x54, 0x72, 0x69, 0x6d, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x70, 0x22, 0x3a, 0x31, 0x30, 0x2c, 0x22, 0x6f, 0x70, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x73, 0x74, 0x22, 0x3a, 0x31, 0x30, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x7d, 0x2c, 0x7b, 0x22, 0x64, 0x64, 0x64, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x73, 0x72, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x31, 0x7d, 0x2c, 0x22, 0x72, 0x22, 
0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x38, 0x37, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x30, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x33, 0x33, 0x36, 0x2c, 0x33, 0x36, 0x36, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d,
0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x7d, 0x2c, 0x22, 0x61, 0x6f, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x68, 0x61, 0x70, 0x65, 0x73, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x67, 0x72, 0x22, 0x2c, 0x22, 0x69, 0x74, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x64, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x65, 0x6c, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x35, 0x32, 0x30, 0x2c, 0x35, 0x32, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x45, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x20, 0x50, 0x61, 0x74, 0x68, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x53, 0x68, 0x61, 0x70, 0x65, 0x20, 0x2d, 0x20, 0x45, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x73, 0x74, 0x22, 0x2c, 0x22, 0x63, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x31, 0x39, 0x36, 0x30, 0x37, 0x38, 0x34, 0x33, 0x31, 0x33, 0x37, 0x2c, 0x30, 0x2e, 0x35, 0x35, 0x36, 0x38, 0x36, 0x32, 0x37, 0x34, 0x35, 0x30, 0x39, 0x38, 0x2c, 0x30, 0x2e, 0x32, 0x33, 0x35, 0x32, 0x39, 0x34, 0x31, 0x31, 0x37, 0x36, 0x34, 0x37, 0x2c, 0x31, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x77, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x34, 0x38, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6c, 0x63, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6c, 0x6a, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6c, 0x22, 0x3a, 0x34, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x61, 0x70, 0x68, 0x69, 0x63, 0x20, 0x2d, 0x20, 0x53, 0x74, 0x72, 0x6f, 0x6b, 0x65, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x22, 0x2c, 0x22, 0x70, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x30, 0x2c, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x31, 0x30, 0x30, 0x2c, 0x31, 0x30, 0x30, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x72, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x36, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x31, 
0x30, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x37, 0x7d, 0x2c, 0x22, 0x73, 0x6b, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x34, 0x7d, 0x2c, 0x22, 0x73, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x35, 0x7d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x66, 0x6f, 0x72, 0x6d, 0x22, 0x7d, 0x5d, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x45,
0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6e, 0x70, 0x22, 0x3a, 0x33, 0x2c, 0x22, 0x63, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x79, 0x22, 0x3a, 0x22, 0x74, 0x6d, 0x22, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x31, 0x7d, 0x2c, 0x22, 0x65, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x5b, 0x7b, 0x22, 0x69, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x33, 0x36, 0x38, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x31, 0x5d, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x78, 0x22, 0x3a, 0x5b, 0x30, 0x2e, 0x32, 0x35, 0x31, 0x5d, 0x2c, 0x22, 0x79, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x22, 0x74, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x30, 0x5d, 0x7d, 0x2c, 0x7b, 0x22, 0x74, 0x22, 0x3a, 0x34, 0x30, 0x2c, 0x22, 0x73, 0x22, 0x3a, 0x5b, 0x38, 0x34, 0x5d, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x7d, 0x2c, 0x22, 0x6f, 0x22, 0x3a, 0x7b, 0x22, 0x61, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6b, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x33, 0x7d, 0x2c, 0x22, 0x6d, 0x22, 0x3a, 0x31, 0x2c, 0x22, 0x69, 0x78, 0x22, 0x3a, 0x32, 0x2c, 0x22, 0x6e, 0x6d, 0x22, 0x3a, 0x22, 0x54, 0x72, 0x69, 0x6d, 0x20, 0x50, 0x61, 0x74, 0x68, 0x73, 0x20, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x6e, 0x22, 0x3a, 0x22, 0x41, 0x44, 0x42, 0x45, 0x20, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x20, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x20, 0x2d, 0x20, 0x54, 0x72, 0x69, 0x6d, 0x22, 0x2c, 0x22, 0x68, 0x64, 0x22, 0x3a, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x7d, 0x5d, 0x2c, 0x22, 0x69, 0x70, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x6f, 0x70, 0x22, 0x3a, 0x36, 0x30, 0x2c, 0x22, 0x73, 0x74, 0x22, 0x3a, 0x30, 0x2c, 0x22, 0x62, 0x6d, 0x22, 0x3a, 0x30, 0x7d, 0x5d, 0x2c, 0x22, 0x6d, 0x61, 0x72, 0x6b, 0x65, 0x72, 0x73, 0x22, 0x3a, 0x5b, 0x5d, 0x7d,
0x00
])
| 1,254.08 | 4,093 | 0.665954 | 5,211 | 31,352 | 4.005757 | 0.021493 | 0.208872 | 0.086232 | 0.09428 | 0.969052 | 0.963687 | 0.956788 | 0.954106 | 0.954106 | 0.950656 | 0 | 0.51904 | 0.169112 | 31,352 | 24 | 4,094 | 1,306.333333 | 0.282265 | 0.012344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.664211 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
6ab1894b5552d3483f8529e6f665c130c880a4cb | 98,103 | py | Python | slandroid.py | BadPramaya/Advance-Banner | cdf19c64c956150a0a45f5e499d5dc537d1d3da2 | ["MIT"] | 44 | 2020-12-19T10:05:51.000Z | 2022-03-26T01:26:10.000Z | slandroid.py | BadPramaya/Advance-Banner | cdf19c64c956150a0a45f5e499d5dc537d1d3da2 | ["MIT"] | null | null | null | slandroid.py | BadPramaya/Advance-Banner | cdf19c64c956150a0a45f5e499d5dc537d1d3da2 | ["MIT"] | 10 | 2021-09-04T08:37:05.000Z | 2022-03-26T01:25:57.000Z | #ENCODE BY CRYPTO
#YOU CAN TRY TO DECODE THIS, GOD BLESS
import gzip, marshal, zlib, base64, binascii, lzma
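# A minimal round trip showing the encoding this loader reverses; the names
# below are illustrative stand-ins, not part of the original script. The real
# blob passed to marshal.loads() is a marshal-serialized bytes object holding
# gzip data: marshal.loads() unwraps it, gzip.decompress() inflates it back to
# Python source, and exec() runs it. To inspect the real payload safely, write
# the decompressed bytes to a file for review instead of exec()ing them.
_demo_src = b"print('hello')"
_demo_blob = marshal.dumps(gzip.compress(_demo_src))
assert gzip.decompress(marshal.loads(_demo_blob)) == _demo_src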
try:
exec(gzip.decompress(marshal.loads(b's\xb1\x86\x00\x00\x1f\x8b\x08\x00\xc2mba\x02\xffl]YW\x14M\xb0|\xbf\xbf\x02\x10q\x03\xec}Q\x10\x17\x04\x05Q\x94E\xc0Q\xe9\xa5Z@\x16e\x17\x84\xdf~\'"\xa3\x9c\xef\x9es\x1fT\x84a\xa6\xbb\xba*\x97\xc8\xc8\xc8\xdd\x83_G\xc7\xa7Cuu\xe2\xb2d\xfc\xa0:>\xd9\xa9\xf6\xc7\xeb\xdd\xc3\xea\xa4\xd9\xdd\x1d\xffq\xb5\xfbk\xfcj\x7f\xb7\xfe\x9f\xd3\xe3?O\xdc\xa5k\xee\xeb5\x93\xfbGU{r\x1f/\x98l]st\xf0\xeb\xd8\x9d\x9c\xdc\xaf\xef\xf5.\xc3\xaewY\xd4\xbd\xcb\xa0\xe8\xff\tz\x97MtPW\xfd/\xa3\xdee\xd7\xf5z\xbd\xcb2}\xd5\xff\x0b?\n\x9a\xdee\xd5\xb8\xdee\xdb\x7f}\x9d\x1d\xf8/z\x97\xae\xed\xbf\x1a?\xa8\xfe\xcf\x0f\x0e\xfe\xbf\x97\xe6\xfdw\xeb\x7ff\xd7\xff\xb7\xaa\xba\xb0\xffw\xd9\xffi\xff\xad\xcbf\xb5\xff\xe3t\xb2\xff\xba\xe8m\xffBR\xbb(\x17\xf0J.\x83\xfe\x1bT\x05\xbe\x1e\xed\xff\x15\xd8\xbb\x94\xcd\x14\xbe\xf3\xa8\xff\xf2L\xd7\xdf\xff\xb7\xe8\xff\xdb\xea\xffe\x85\x17,>_\xea\xff\x88\xef4\x8c\xbb\x19\xeb\xbf@\xef\x1b\x06\x1b\xfd\xbf\xe2\xfe\x7f\xfao\x9ew\xf6\xa1U\xd4;\xed\x7ft\xff\xef\x9fX\x8b=[\x90\x12k\xc4k\xc1\x7f\x1a\xfc\xe7\x10\xcb\xb2\x8a\x8f\xda\xb0\xcf\x0b\xf1\x9aD\xeb\x89w\xc2\xc7\xf4\xaf\xa5\xed\xfa\xaf\nK{\x11^\xd0\xf8\x7f\xfb\xefU\xf4?>\xc8q-\xcf\xedZ\xc2\x08\xff\xb1\x17\xe1\xff\xad\xd3\x8b+\xfb\x84\x96\x9f\x12\xd9\xaa\xf0E\xc5\xa8\xbd\x85\xdd\xc5t\xff;\x95\xdd_]&\xfd[\xae\xd3!\xfbo\xa5\xb5\xcf\xbb\xa7\xf9\xf3\x1c\x97\xb3`k\x81M\xc0\xd7\xf7\xff-Z\xdc\xdfJ\xff\x8bhXW\x1c\xdc\xf6\xdf2\x19\\:\xee\x0b/\xab\xfaO0\x08\xed-\xf0D\xf0\x8c\xcb\xfa\x18\xaf\x8c\xdf\xf5\xdf\xb1\xff\x14\x9b\xf2\xaa\xff:\\h\xff\xf5!\xae1[\x1f|\x03\x9f\xd9`s\x94\xdf\x1f\xe3\xd3\xe7\xec\x1e\xaa\xb6\xfb\xb6\xf9\xa1\xff\x8d\xfe27\xcd\xa7\xb7\x13\xbc\xbc\x99\xfe_\xfd5\xec\x9a\xef\xfd\xa7\xc3U\xed\xbf\xa2\xff\xc7\xf5\xbfY\x94\xbd{\xb7X\xbb\xe2\x06\x7f\x9d\xd8*u\xfd\xdf/\xbag\xfd\xbf\x8a\xe3\xfe;\x15\x17\xb7\xf8\x12\x9f]\xfc\xe0\x97{\xf8\xf2\x88_\xbe\xc4\x97\xaf\xde<\x1aZ\xc4\xfd\x9d^\xae\xe0\x9f\xfd\xcb/\xb6\xe6!6dp\xa6\xf5h\xec\x01\xe1\x11\x17\xfc\x98\xcf\xfd+im\x07\x95\\0,x;\x84\x87\xfbB\x0f\xac\xb0\x9d\xd3\x14o\xec?]\x8b\xff\xe0\xa6\xfb/+c\xfb\x18W\x0f\xbd\xd3\xeel\xf5\\q\x04\\w\x8bg\xb2\xc1O8\x1e\xef\xbfm2aO\x03\x9f\xd8%\xb8\x83\xa6\xff?\xd7\xd8\x87\xf0\xdf\xe0S\xffG\xf1G\xbcf\x0b\xcf\x08\x0f\x03\x0f\xac\x08\xfcS\xed\x7f\xb8\xbb?d\x9f\xd5\x95x\x96\xa9\xfd\xb8\x8b\xff\xf4\xef\xde\xe1\xfc\xf7\xbfQ\x8d\xf7\x17\xb8q\xefm7\xd6\xd5z\xbc\xb0\x16\xdb3\xc4\xc77\xd51n1\xb5-Q\xe3\x96\xca\xe9\xdc>\x10G\xb5\xca\xb0\x03q\xd4\x0f\xecF\x8b\xfc\x11\xac\xc6}<\x9b\xe59;\xad]}mK\xd8\xb5\xd70<]\xdc\xdf6m\xd2?\xb8e{\xd3\xeb\x1f4\x87\x8f\xee_\x11\x8cF\xe0lo\xb5\xdd\xee_X\x8d\xfe\x1b\xe1-\xe2N\xff+\xf9\xbfs\xfe\xef%\xbe\xc4i\xe6\xff>\xd9\xff\xda\x08o\xd94\xb6\xaby\xae\x921\xac\xc8\x96}\xa7\x8adOxL3<\xed1\xfcu_G\\\x7f\xc2\xee\xc6\xacWU\xe8\x17\xf8\xcb!\xae\xf2\xc7\x9c=\x8a 
y\xaf%\x8fl\xa1\xebl\x08\x07\xe1\xb9Y\x14\xdc\x12NE\x11\xd6\xf6\x12\xaca\xdb\xfd\xc2wn\xd6\x86/\xfa?m\xdf\xe0\xa2\xdf`s\xf4\xff`\x7f\xd7\xa5m\xed\x0e\x8f<1\x13\xe3"\xfb7\xa4\x85*/pxvp\x838]\x91\xedY{\xeaW;of`\xbd\x1e\xd9B6\xe1a\x7f/\x15M\xff=\xeb\x10\xcbR\xf0p\xf7\xb7m\x8d\x87\x1c\xfc\xc4\xd3\xdb\xbc)\xec\xb0\xc2\xc8W0\xf4\xf1\xfa\xf7\'{\xf8Y\x80\xa7\xdaT\xfdsR&\xefGl\x1f\x96\xf1\xc8\xb6\x1d\xd6"\xb8\xc2u\xbe\x84]\xc0>\xc6z\xfd1\x9bW\xe6|\x10v\xe0\x1b3*A\xb0l\xfb\xbajn\xf1!Op}v\xb3u|kw^\xc9\x10\xf1\xae\xa2\'Z\xf0\xca\xae\x17K\x80-\x88\x17\x15\x05^Y\x9f\xc0\xef\xccc\x8b\xc9\xaf\x84\xda\x13Mz1\xcd\x8dt\x0f\x9f\x80/\xdd\x94\xb92,\x14\x17\xb8\xf1gz\x13\x9f\r#\xaee\xec/R\x7f\xa9B\x87O\xb9k\xdf\xe2G\xa6\xfa\xe5\xfe\xb5Uy\xbd\x91\xd9\x03mp\xab\xad\x1d\'\xdc\xaa\xab^\xcf\xca\xbaUX\x95e|n\xfe\xc4\x1cV\x80=\x8b\x1b\x84\xa5\xc6\x1e\x85KvAl7\x86s\xd0\xbf\xe1\xd3\xfe\xfei\xe2y\xbc\xd5\xac\xed\xa46\xb6\xa3\x06\xa7\x00\x8b\x18p\xdd\xce6fmq\xban\x1c\xa7\xa8\xc2\xe1\xaac\xdb/u\xfb\x07g\xdc\xad`\xc1\xeb\x0e\x16;\x9c\xc7\x85\xe3\x08\xe1\xf4\xe6\xfdM\\\xe7\x17\xddg<\xad;\xb8\nlfw\x82\xfb\xef\x07\x06A\xb6\x8dW_\x9f\xda\x1d\x86\xf5Y\xef\xde\xd5\xb9\x1d\x9b\xa6\x93\x91\xce&m9\xf00\xe0\xed\xbb\xfc\xb1\x99m\x18?\xb8\xb6"4\x13\xd0\xc8\x9b\xe1\xb7\xfc\xf7\x9c\xbe\x87;\xf8\xf7\xbaX\xc6\xce\xfd\xb1oT)\xfc5\xcc\x15\x0cg]\xff\xc1\xee~\x82g|\x89\xe5\xd8\xb0-\x86\xed\x86\xb7\x83\xe1v\xc1\xf3!;\xacx\x82\x8d\x1b\xb1\x9d\xe5\x8a!s\xc9\xd8\xb90"M\xfa\xf1\xd2,:6\'\x1c8\xcc}\x93\\\xdaV.\xc2\xb9]\x0b:J\xec\xcb`yfs\x16\xafw\xe6H\x9b\xe0\x07\x96\xcf\x1eM]\x1d\xe0p.\x99\xdbj\xd31;\xbaM\xb7q\xbb\xb9\xff\xa0\xff\xa3L\x01Hc?\x80\xc5\xc4\x0e\xafh\x1d\xcc\xcb\xc0\xab\xe5\xb6\x05p\x85U\xde?\xc7mi\xefX\xc5v\xfb\xa1\xb6A#S\xee\xaa\xbfs\xf8\xfb\xc6\xee\xc8%\xdc\xce\xd3\xb7\xf8\xf2\xb3\x99\x07\x97?kt\xe2\xf0.Ei\x1fS\x05\x1f\xecw\ng\xde\xbd\xa8/h\x00\xee\xd9\xc3,[3_\xfd\xe7\x88=\x95\xd0G\xbc\xc5\xf7\xf0p\xb2o7x\xe7\x99\xd8\xb6\x1b"G\x1c\x10X\x01\x98\xf0\xdab\xac\xd8\x96\xae\xa9\xbfcq6\xfe"^\xb0OG\xc8X\xd1\x8e\x7f\x82\x8b\x0b\x9f\xc2\x12\x8c\xdb\x81i\x02[`\xfej\x0e\'\\\x17\xfb\x9b\xfb\x9b\xc7\xf6\xa4\xbbt\xd3\xd6\x0e\xffi\xb0@%>\xb9\xf8\xa5\x80\x8c\x1b\xa0\xc6\xb2=\xb7\xcf\t\xc2\x1d;C\x88\xbc\xb0Gy\xc8\xf2\xf7\xb6?\xdb\xf4\xd7\x10\xaee\xc2\xae\xbc\xc0\x0f\xfd3/\xdc\xe5C|u\xbea1T\xdd.\xe2\x1e\x1f\xda\x96wEg\x1f\xc7\xb3\x8f0/]\x87}\xb8\xb65A\xb4\x04{UT\xf6o\x13\xed\xd8)\xc0\xfd\xd7n\xdc\xde\xa4U\xa0W\xb8c;3\xdc~\x05\xe3\xd4\xa7\x8a\x1cq\x8b\xf0\xf4vIX\x94\xfe\x8e\x82C\xc7\xd6\xc5\xd2\x87\xe1\x8bs\xf8\xdc_\xf6\x04Z\x1a\xa1\xf6t\x14\xfb\x05\xc7\xbdM\xb0\x93\x10\r\xa5sv\xa9\xb8\xec\x9a\xd71\x81\xb8\xf1\x16\x7f\xd7{\xb6\xae\xf01.\x85e\xaa\x0f\xf6\xdc\xf7\xe7{\x8a\xc9\x13\xb3\xabu3o\xa7\x94+\x85\xf7(\xfe\x9asCX\n{H\xc3\xdc\xd9\xabq\xbdm\xb3g\xc9A\x95\xecc\xcf=\xb3\x83\xddF\xcf\x10\xea9;R0\x9eapu\xb9k\xeb\xe9\x9a\xc9\xfeM\xb6\xc8!\xe02q\xbc\xc3x\xd9\xb6$\xd7;\x1c\xc4\xaa\x08,\xf0\xa7\x0c^\xdaj\xb5\xee;\xeej[O\x12\x99@Y\xda\xe3\xae\xbb\xef2$\xf9\x1b<\x93?\xb4\x9a\xfd\x9f\x0f\xe3\xbd\x86\x94X4\xb6\xf2x\x0cu<f+O\x7fW\xbb+\xfb\x88\xae\x81\xdfn\xfa\x16\xf6\x9e\xddC\xa1\x98\xbcl\xae\xb1\xdb\xb0\x88HiZ\xc6\x18\xaf\xcd\xf6\x172\xf7\xfd\xf8\x17\xc6?\xb9\xd1G\x05r-\xa1\x8efa\xc7\xcf\xc9\xd06\x0c.\xdf\x99\x8fc\xec\x1f\xdb/q\xf3d\xbdS\xfcn\xf2D\xa7\'nd\x9ek\x1d7\xc4\xea4\xfa\xe5\xb56nm\xe1\x9b\x8b\xbfYD\xe5\xb0\x98\xed\xfb\xcf{?\x8fp\xed\x138\xf1p2\x15w\xb9]@\x18!\x88\xc56\x84\xaf\n\xf3\x8ff\x8c\x1c\x16.\x08G\xec\x01\xe3\x02\x11,\xb5\xc1\xb3J\xe1<6[\xf5}\xc76jW\xbea\x064n\x06\x9b\x0e\xb3d&\x9a\r\x87\x1bf&a\x11\xc3\xd0\xde\xa7\xc0\xf2\xb92\xdb\xd9\x84A\x8f\x8fl\xf9
\xba\xf0Nw\xf5\x19\xbb\xf2\\\xd1"\x9cO\xc2h\n\xafX\xba\x86\xf5\xc1e\x16\x05\xd2\x93\xf4\xea;\xce!\x1eN\x82\x9d\x96<\x96%\x89O\xcd\xd2\xe3\xd9\x84\xd5\xa2\xfdN\x15\xfdL\xed\x04\xe3\x7f\x88\x88\xda\xf4\xe4\xad=\xe4\xce\x9dU\xb6\xf9a\x94\x8b\xf6\x87\xac3\xc2\x92\xf8\x0cG\xb3^\xe3\xe6>\x86u\xc1\x0ek\xd3g\xed\xe7;f\xf5z\xb6\x1cL\x8a\x15\xce\xbbd\x14\xd9k\x91:\xd9E\'o\xd3\xca\xd5;\x0b\x12x\xf0\xb4=\\\xb9y\x83\xef|\xc5\xa7W\x1b\xb1\x82\x03\xdcnU\xce\xaf\x9e\x99\xb9aT\xe0\x86-\xd6\xc0\xbat\xf5k\xf7\x0e\xbb\xeeJ\x17]\xcd\xe8\xd4\xe5;\xb6\x05\x10\xb61\xfa\xa3w\xfbj\x87\xd9%\x08\x9b\xead\xe9\x01L\x046L\xb4u\xc7\xb6aS|QB\xe2\x0c\'\xe8\xbb\x93{\xda-\xf13\x0b\xa6\x1b\x04x\xd8\xcc\x0c\x94\x10^F\xf1\x9cy\xf8 \x9fy\xb0\x8cC\xaf\xa8\n\x0f\x1f;.L\'?\xc2|na\x89~\xe1\x08v?\xefo\x7f\xffm+\x86\x83\xc2}\xef\x14\xec\xb7\x9f\x14\x8a\x84vhk\x86L\xe1\x04|w\xf2\x0bo\xf7\xc1\xc2x&\x92\xf1\xb9\xce$\x96\x08!\x1f\x83*,G\xfa\xc7"x\xa6q\xb5-\x7f\xd8\xbc\xb0\x1dB\xa7\x96(c\xb2\x90\xddlQ\xa5(\xbfb\xb8\x11)`l\x93U;\x84\x86$\x9c=\xd1J\xc3\x01\x15_\xb5\x05Z3\xb7\xad\xf2~\xa7\x08\x8a\tz\xf0\xc5\xb6:\x1dkfa\xb2\xab\x7fla\x07\xec\xdb\xa1\xec\x9f. \x14\xf9\xae\x99\xe6:_\x1c\x1c\xd8\x90)\xc3\xa9\x99\xc5\xa6y\xa5\\T;\xa1\xce\xce\x94\x05\xd5\x81\xd9\x08\xbc\x1e\t\xbb\x8b\xff\xe2"\x8e\x02\xdbS\x8c\xf4*[0\xfc\x18v\x9aA\x9e3/\xc2\x988\xef\x1d\xeb\x1a[\x1d\x9b@\xa7*CL\xd9=UF\x9d\xbe\xb33\x88e\xc1\xf6\xafCD\x13\xd9k<\xa6\xbf\xb2P\xf4L\xe3B\xadB\x8b\x91[\xf7\xd8n\xaa,\x97\xe1[\x8e\xe7\xd7\x94\x10T/\x04M\xa5v`\x10\xfb\xd5YE\x9bvl9$n\xb7\x94k*\x9b!\xbb\xc8 \xe3\xcf\x11?\x854\x96\x89\x0e\\\x88\xedk\xb9ny\x17\xd1\xe3\xccg{N\x0cI\xda\xb9/8\x1f\x8b\xe6Q\xdal\xca, \xbdr0+ \x07\xe6\xad\xcc\xd7\xcd\x1f\x04\xe1\x1d\xdbO8UA\x80@\x9a1\xb5{`+FG\xa7\x8bw\xca/\xdb\xf6\xf3\x87\x01h\xc5t\xab\xb4\xbd\xdb*\'D\x04\xe7:\x19\x0bD\x19p+\x85;d\xa8\xdc\xbf\xb6p\xd5\x96\xa2\xacF\xed\xa8 \xf6)\x83\xe4\x17\xfc{2\xa7\x1896\xb3\xd7\x84/?\xf3!\x12\x1d\xe0\x9f\xe4\xa9O\xf4\x1b{\xf4mjV\x19&\xa6C\x1eAgZ)\xc4\xc6\xedeG\x1brS\x052\x8cf|\xd3\x9e_\x19\x85\x16\xf7\xd6\x84D\xdeN\xc8\xae\xd3\xbe\x17ko\xf1V\xef\xde\xe0\x046\x07\x1670\r(\x7f\x99\x11"\x16\x97\x9e+\x0cB\xca\x92>V\xa8\x95]"3\x85\x11(\xab\x1c\xde>\xfd\xf2\x17\x1f\x11\x0eRJz\xcb\xdcn\x0e\xbb\xb6\x9fB\x9c\xcex\xfc\xaf)\xf3a\x1c\xeb\x05\xa5\xf8\xd1\xe1;y\xac"~\xbe\xb67\x8c\xad1\xf6\xd1\xde\x80\xf6\xc2)\xf6\x8a~,ag\xe0\xae\xdd7{ \x8c\x1b\x8b\xcf\x16\r1\x9f\x897p\xef\x1f\x15\xdf\xe1bx\xfc+\x0b\xd9\xfa\x9e\xf2\x90/\xbcGs\xe9\xec\x89\x84Bv\x98\xfe\xe7\x8a\nq=\xc93[[\xa2ixd\xed\xaa\x1d\x08\xec*lV\xf8\xf3\xaa\\\xb2\xd3\x8esT\x10-,\x17Za\x82\xb1,d\xfd\xd6\x1ez\xa9(\xbc\xcbO\xde_\x02\xaa@ P\xe9j]\xfaW~\xa6\x12(\x81\xa7\x9c\x1c\xca\x8b\x04\xb6\xa0\xfc\x13\xf7\x0e\xf7\x0c)b\n\x18\x99\x85\xc0S#\xe8\x83\x07\x96\xc1\xf8\xc6>Xh\x94\xdfU\x16[\x15\xe5\xab\xde1\xa0\x88\xee\xc9}\xe6\xb0X+<\xadh\x91\xdb\x06\xf6\rV\x0c\xe6\xa9\xc8\xa6\xa6\x94\x91D:\xdb\xf8\xba=)\x95F\xcb\xea4\x80\x96\xcax\x19g\xb59\xb8\xc2\x11\x9aW\xc8\x9f\x00\xed\x0b.e0\xb0\xd9b\xa6\xc6ou\x02S3\x0fX\x882\x90\x83F~\xd1%\xa5V\x01;\xa0\xaa>\xe1\x96f\x128\xc8j\xd8B\xd1:>\xfdm\xa6\xbbKG\x84\xdbuf\x04\xaad\xc5\x1c\n\xb66s\x9a\xe8\xabY]:$,p\xfb\xa8\xb4\xed\xdaf\x81m\'D\xf0u8\x15\x1b\xf6\x85\x1d\x1a*\xc8\xb5-\x81\xfb\x9d\x95\xff@zP<\xc0\xee\xfb\xdc\x0f1\xee}\xdbV 
T"P\x07zR\t\x82v\xddY\xef01\x0bL\xbfE\xd0\x10.3\xd8\xfe=i\xd7\xd2\xb6\xcf\xef;s\x8f\xdc)\xc0\x05\x10\xef\xe2\r\x9bt\xc9\x9d\x0eu8\x82\xd5c\xfb\xe0\x7f(o\x03\xec-\x83\x15l\xa6\xae\xdf\xd8\xeee\x90\x1f\xe3)b\x17V\xc9\xed\xd2\x90\xddO\xc3\xf8\x1a\x98R<a\xeb\x87\xe3\x85\x9d[3\x85\x9f\xc4\xf5\xbdW<\xdd\xd9\xef\xb8\xe4|<\xb4\x87_\xd5\xaf-\x03\x06\xf2\x8a\x1dG\xcc\xb7\xfe*\xcbV\xdb\x93C\x80I\xafW=\xbc\x9c\x85e\xfa\x88\xf7\x06\\\\+\x08\xeb\xaa\xe7B\xd8\x19C&v\xf6\xf0\x81\xae\x9a2\xf3W6\x82t\xf0\xb8\xc2\xbd!-&\x0e|\xffD\xf4,\xea\xe0\xf6L\x16\xe4\xfd\xe0\xae\xfa\x0f\xe9X\x98lmg\x1f?*\xa3i\xec\x825\xbb\xe02\xbd\x15v\x90Y\x84\xd4?s\xa78(\xc9\x90-7<\x02\x93s\xdcH\xbb\xa1\x0c\xac\xb0\xda\x85ez\xc8~\xaa\xd0\xee\xd9[\x18\x077Yv@\x13\xe0\x0b\x0b\xc4\x800\x89\xac;\xa4\xca*J\x96\x12\x98%\x7f\xb2\xfb\x81I.U\x1d\xa2\xa1W\xc1\x01O\xb9\xca^o\xe2\xefYX\xc5\x15\xdb<\xa16o\xc5S\xde;\xfd1%\xb85\xdd\xd7u\xd5\xab\xf6\xce5\xcd\x1d\x9e\x93C\xc6\xd3\xc8\xf1\xd1|:&\xb7\x0c-:\xbf\xbf\xaf\x94\x07t\xc2\xc9\x9c0\x11\xa7\xf00\xdb\xee^\x1c\xed\xda\xe1\xaa\x0b!\xbcu\xb66#\x88\x05\xe9^x=!\xc0\x1a\xbf\x19\xe5\x16\xe0\x06\xc5\xd33nc\xfb~\xc1B\x02\x96\x04e\x14V\x80\xe2]{e\x8bc\x16d\xef.\x9e\x99ea\xfcW\x8e\t\x88\xc9\x9e(Z\xd5\x8eu\xed\x05\x82`\xbcI\xdc>\xb3\x9f\x11\xddq\xefG\x15\x945\x1e\x10X\xb4M\xc5\xea\x86\xf9\x01<j\xd4\'xI\x9d\xadx\xe8\x96\xe1\xdf\xbf\xc0*\xadc\xcd\xce\x7f\x18\x9a\x83\xab\xac\xb0\x1cf\xf8/\xedd7\xd5\xa5\xdfp\x8e\x89\xcb\xdc\xfc\x18.a\xe5\xce\xedaM\xef{\xcc\xa3\xd3\xc3\x91X?\xb1\xa7W3\xc4\x0f\xed6\x027\xfci\x0f\x9b=;\xb4\xa5\xee\xbb\xa5Cs}.\x1a\x95_n7\xf1\xa0\x8aU\x05\x8cr\xe4\xdc\xbb\xa5\x9d"B\x93\xa9\x82\xbc\xea\xfb\x90\xed\x15\xd6\xc12\xda\x9e"}\xb5?+\x8b]\xe0\x9c\x07kf\xd3\x10\xe9\xb4\x05a\xae-a\x1e\xa1\xd9\xf9\xaa^0\xbfY\xcaC1\'\x86=\x8a~1^\xd2~\xc2\x9d\xa4[\xbb\xb6\xc3\x89\xea\x15\xca\n\n\x1f8\xa1\xc6\x94L\x1c\xca\x03\x04r\xa3\xed\x1b\x80\x0e)C\xc5Bf\x81\xf61\x90\x11\x88\xdf \x9d%H\x98\xda\x99mY\xeb\x04\xf8\x18tO*\xbbN\xd6\xeb\x8a\xc9\rDY3\x0b\xfb\xf7\r\x85\rP\x95D\x00\xdc\x86\xd8>\xed,\xd3\x94\x17v~Y\xf2`f\xcf{%\x1e=.0\xa3\xfbh\xfb\x99\xf7Vl\xd8m\x01Ye\x90\x12\xd1m\xe0\x19#\xf0\x83\xcd\x82!\xc7\xa3\x08\x92\xb3\x01L\xdc\xf7r\xa7\x96\x9f\x99mZ\xfed\x1f\xec\x8b^a:c\'\xc6\x01H\xe7\x11\x0b.T\xd8\xa9\xec(`\xebv\xe9W\xdc7w\xf9\x0en\x10_vnui\x12\xa9(,K\xed\xd6T\xa4\xfcw&\xec\x81\xf1\xdc\xa7/\xefb\xbb\x8f(\xc3n\x15O5\xe6\xca\\\xad\x08\x1d\xa8h[\xf2h\xc4,\x18\x8e\xe3.h&w\x0c\xcb\x0b\x8b\x83Y\x0bf\xeap>?=\xdc=\x85\x85\xab\xed\xa3\x91U`\x15\xf8l"\xd4:\x8a%[\xb2*\xdc\xc2q\x9e\xb2\xe0\xb1IB\xad@e\xd7\x08K\xe2\x8b{\xacw\xb3V\x07\xe3\x99\xad\xdaz\x87\t\xf6D\xb2F\xe4d\x85E7\xcb\x97\xb1\xaf\xc2`\x7fB1\xa1SY\x1a\xc6\xabL\xe7\x8f\xd6U\xbcr(%\xb9};\xf7\xb8\x06\xd8\xf2\xb6\xb3 \x8b\x95\x80b]\x85\xedLq\x0c\xb7\xf4\x0b\x83\xe9\xe0E\xf8\xaa\xea\x15\xcc\xc0\x8a\x1d\x19\x04\x13\xac\xf1\x13\x02\xbc\xf4\xe0\xb7\xe5\xffe<\xe9\xc1[V\x9d\xf1X\x92\x1d;g8\xbd8\x14\r\x13\xfe;\xd8E{\x02(\x91X\xb1\xce\x8eO\x8b\xf7Un)W\x94\x17\x97G\xaaK\xb5\x1f\xed 
\x84\xc1\xecc\\\x1f\xb2\x82pQ[\x80\xe0\xee+\xf8o\xd6p\xb1W\xd2\x89a\xa1\x8b\x05vd\x97\xad>0\x1fN[\xa3Zy\xa7\x1c\xb3\x8e\xe6\x1b\x9f\xf3-a3\xdf(\xd7\x17\x04\xd9\x05wG.`\x8b7\xe1)Q\xa0)~\xdbo\xb7\xc9\xda\xa1\xada\xd8\x9e\xd9C.\xf2\x85ee\x1d\xed\x9aY\xa7R\xa8\x8a\x955~\x85\xfao\xd1\xd4\xd3\xb6\xe7\xf9V\xb1\x8a\x14\xb8\xa8vZ\xa0\t\x9d\xecK\x7f\xa9\xc0\xf5P\xa7\x08\x8ag\xbf\xecm\xc3\xff\xc0!.\x8b\x0f\xec\x06\xeb:\xf7[\x0c{5X\xb2MZ\xa4\xdf\xe0\xde_\t\'`\xce\xb7pe\xfb\x8fV\x147H\x10\x06\x0f\x0c\xa1X\xe5\xb6\xe1$tkA\xb7\xb9\xbf\xf8\xad\xb1\x855\x80\x15\x7fTVC@\xd0!\x8c\xc22\x87\xe1\xc8C;ux\x1dMu\xf5\x1d&\xca=\xb4\x93\xd5%\x93J\xcc\x11\x99#\xbb+\xaa_\xb8\xfa\xb7\x1e\x920\x8b\xd2\xd5\x17+x\x82\x056\x07\xc3\xbe\xb6\xbaoAR\x91n3~ze[\x0e\x1b\xbf*\x84F\x84\x890xbf\x89\xc0\x08\\e\xf6\xf4\xb13\x07\x82\x03\xd4\xba1\xf8\x1d\xba\xed\xea\x9d\x05\xe5u\xb0\xf6\x18\x87\xc1\nQ\xa7f\xdd\xea`\xc6n\xb3\x16\xe6Q\x15O\x850e_U\xba\x0e\x87\xceaT\x9e\x0b\xcdm\xee\x08\xc9\xce\xc6\x8e`\x93?_\xda\xbe\x08\xc2\xf7\x08(Q\x80)\x94g7\x04\x81\xce\xec,49\x8a;L\xa9=\x8c\x8c\x7f\xebJ\xe8\x913\xb7T\x07\x93\xcf\xcc\x964,,tt\xd2\xf7V\x8e/\x14\xfe\x05\x8ft\xac\xf3\xd6\xac\x8aS\x82]\x910\xf3\xf7\xbb=u\xa2\x02a\xf5jxC\x90:s\xad\xd2\x96\x0b\x8f\xaa\xf1g8\xccb\xad"L\x00\xa9(*m\xb5X\xe6\x8a\x81\x0c\xaa\xd8H^\x88\xa5\xab\x18\xcb\x9a#V\xbb\xcdr\x0b0\xf9]\xeex\x04\xf2\xcd,\xfcx\x81J\x073\xce\xda<2\xabb\xd1\x1a\xbe\x88\xccg9EAa\x10||\xdb\xbbwf\x8f\x04\x19!\xde\xb4S&\xc1\x04"\xb1=K\xff`\xcb\xc8\x1a\x8f\xbb\x1eR\xf2\x86\xd8\x8byG\xfe\xeb\x9b\xbd\xbep\xcf\x95\x97\x128\x04\x9c\x14_\x8d}\xc1#B\xdc\x8c%\xc4\x17\x8c\x84C\x99>\x820\x1b\xf3w\x15d\x87\xb2#\x99N\xbb\xa2\\\xfar\xc6-\xb8\x12\xb8<\xfc2\x1da\x8e\x12|\x85l>\xbe\x98\xfda[9@\xdd\x04W\xce\x1c1\xd1\xe7\xf2\xc6[\xe5\xc5\xd1\x962\xc7JU;\xbc\x94\x07\x8b;\x1b\xd6/\x1e\x8f\xec[\xd8t\x15\xc1\xb8\xc7\x1f\xf6\x9f\xcb\xaa8{\xda\xf8\x97yMA\x97\xb7A\x02\xd2Qt`n\x93aL>\x85\xfd\xfd\xf0\xe2x\xe0\x82*\x1ehTW\x12a\xa3e\xfeC\xd1KH\xc8i\xc6V\xb9Jg?\xa8bR(aE\x9a\x11_=\xc5\xda\xed\x9c\xca\xe7g\x97\x82o\x11\xd7\xc4\xc3\xca\xa8T\xaa!\x07\x0b\xc6\xb0]\xbes\x19\x1f<\x90\xcb\xcb\x95\xfe\x83\xbc\xc2\x0c\x82U\xf2\t[\xa7&\xfe=|\xfe\r\x87\xf0\x1d\xd9bs\x93nz\x02H_\xf3\xfa\xf5\xa1\xaa\xa6Z\x81"\x0e\xcd\xff\xe0\x0c\x87\x0c\xfe\x10\xf1Ts\xda\x14\xb52\xd8\xc46[\xd3\xcc\t\x88\x0bvD\x98Hc\xbd\x93\xf0bW}91\xefX)\x95\xc5y\xcd\xc1\xf0r\xc5\xd8\xb6\xddQ\xff\x89\xf6\xccv[\x8cr\x18\t\xb4\x10\x0b\x8a)\x00\xea\xe6\xe4\\\xb4\x96t\x15\xe9\x03\x02\x9e=\xe5\xfb8z\xf9\xfdqag\xed_Tx\xba1\x9f\xd7>\xb6O\xaaJ\xa5\xf1\xa9\x19\xdbR\xe9T\x93\xab\x16\x84\x0c\x8e1>\xecn=}s\xa2\xcc\'6\xfb\xd16\xf7\xcdB99\x15\xc7\x9a\xd2c\xbb-\xc2\x84>^t\xd9\xc5_\x1fd)\xa7\x8b\x1f\x8a\x10\x86\xdaW\x19^o\xc1\x16\xbf7\x83\x1cD/e\xc1\x04\x086(\xa9!,+\x00w4\xed\xe9\xa7\xe5\xe2lneC\xa5\xe3\xb80\x03\xd1\xa9\xa4\xd8\xd1l\x1co\xc3c\x86vSU\xf7\x02\xfe-L\x15\xe2#t\xab\xdc\xd0+\x9d\x82\xec\xe9\x84\xd9\n\x849\x8eh\xf7\x99y\xcf*\t|\xcd*\xec\x1d?xO\xd4.\'\xff\xe7"K\x1f\xa9\x02\x9a\x08\xa3\xe8\x9a\x1d\x0b\xa7\x83\xe4\xb7\'\x9b\xc0\xaa\x85\xa0_%\x001\xe0\x15\xfaG\x07\xb5\x1f\xb8lZ\xefP\xa4\xa7Z$\x14\xf1<\xe8$\xeaO\xb0\x97/e\x9cp\xb6\x98\xf4\x94v\\\x1b\xe4fD\x153KS\xadBi{\x8e\x8f\xa0\xb9\x11P\x8e%l\x04S\x13VB\x16\xec\xd2#;\xa7a\xf8n\x1b\xef\xbf,\xf8\xaeU\xf4\x912\x10;x\x8b\x10\xefMk\x1fjq\xafBr\x1c\x03\xf0\x86pyp\x88]\x85\xf4\x0cN \xc7R\xf1\x12\xae\xa5N\x1as\x11,\x06\x87\xbay>\xf0\xee\xabB\tq\xfc\\\x1a,z\x80\x10{ \x9f\xc0\xfe)@\x8d\xc22s{V\'\xb8\x91a[\xefB\xe8H\x9d\xf9\x08\xdd\xb6\x12,@%\xd0\xb4RDQ\x05\x87o\x1f>\xb3\x03\x16 
t\xa8\x84\x8cxP\xa8\xcc\xf6F^\xe2\xa3\x83\xbbc\xf6\xd6\xb4\xef\xe13\x01\xad\x86\xab\x04>X\n\x94$\x16\xcbX\xb8k\xa5G\x82TB\xd7\xbeA=\xbe9\x82A\xfd`o\xd5\x85\xdfl\x8d\xed\xe6Y\xba@\xa0\xc8\x1c6S8.\x18\x9a\xac\xa5Jx\x1c\xee.\xcd\xed\\\xd3Z\x14r\x8b\xd1?B\x0c\x88\x13\x1d\x98\x02\x15BP\x94\x93\xca|\xed\xbb\xe5Yf\\*AS\x8c\xcf\x9b\xa5\x0f*\xeb\x16rLN\x10&\x89\x11\xe9\xf0\xe1\xda\x8d|[.&A\xc4\xcf\x88\x05\xca\xc0\x94\x81tWt\x9f_~\x12T\x95\x01\x9cC\xe8M7\x94\t\x94\xcd\x85\xa6\xd4\xaa\xd0V\xe2\xd3\xc4\xe6nJ\x96\xba\xa7\xd7ls\xd1\xa0t\x0b0\x8d\x04h\xbeN\x1b,\x11&\'\x8d\x90\xe8\xa6A\xf4\x9f}\x07\x7f\xaf\xdc\x97\xf9O\xe7^,\xe0\xb1\x9fc)\xc1+\xeb\x08W\xfc\x11\xde\xd6\xdc\xf5\x98\xfe\xcf\x8bJ\xf1b\xf5\xa8\xfc\xa1\x12\x03\x98I\x1d\n\xb3\r\xa2k\\Y!"\x107\x90\xee9\xa8\xcb3{\xca!\xf06\xe6\x97Ma\xc6\xad\x16m\xb31\xaa\xdc=%\xc9\xa9\x8ey +U\xcc\xc8u \xfe\x81{\xe3\xef\xc5\x1d\xe2\xe4\xf6\xc8\xf6\xb0\x8b\xdf\x17ry\xf5\xf27\x95~\xc3&\xf0\xae\xcc\xcc7/#\xfct@\xc3$\xfa-\xce7\xd1\xca\xd4l\x13"\n\xc6t\xa0\x7f1\xbe\n\xedY\x18yev\x11\x17\xb2\xad\x8cS\x95\xf8*\xfb\x8e\xa8vWID\\.\xfc\xc3\xaf\x11\xd4\x91i].\x8e\x1d\x0c8\x05\\*\xe6\xda\xbb\xb8\xd3\x05\x15+[\xe6\x96\xad]\x7f\x9b\xddU\x94\x17\xcc\x8bf\xd0*\xb9I\xbe,\xda\xd56\xb0@A7\x97\xa3&M\xc3\x81\x04& 1\xadY\xfa-\x1b/\x98\'t\xa7\xb0\n\xc1\xfb\x05\xdbE\xac@t\xa22\xb4\xc9\x91\x1d\xa6&\x19\xbb\xb8\xbe@\xf1\x96\x99@"\xdb\x13\xc8AGb\x87`\xfb\x13\xb3\x80\xab\x89\xb7\x7f\x98\x97\xae\x88\t\xee\x99\xe5m\xa3\xbb\xaaA\xe1\xd3\x9bt\xdb\x1e*\x11\xb3\xe8\xcd\x9e\xc5\x8e\x952\'\xac\x17\xde\x13\xae\x8bD\xd0\x8c{\x15\x99@\xb7\x86k}\x855\x98_f\xe4\x010\xb5\x8dV\x9e-\x8eXA0\xac6E\x12\xedl\x85j\xc1\xa6\x1d\x01\xe8Wvz\xba\x14u\xc8\xe4`\xeb1\xad\x88\xc0\x19\x97\x12\x13\xea\xdd\xbb\xb6\xeb\x84%%\x9dQ5$\xd6\x0f\xc3\xe0fG\xa6-jlS\xb9t!\xb5\xec\xdb\x13O\xc8at\xee\xab\n\xd0\x0c\xeb\xf6Gq\x88h\x12\xee<[\xd7n*\x7f\xe2%\x0f\xe0w\x00_\x91\xf3\x05\xfe\x1e!\x9d:\x9cz$\xa6\x0ci\xf9`\x1f\xb2d\x17\xbfVt\xec|\xa6\x8dw\xd9W\xd99\xd9\xc2G>\x10g\x18\xf6\x08\x0c[\x92\xbc\x0bE\xfd\x91j\xc00\xc6\xc5wa\xe7!H{ 
.\xe2NX\xd5\n\x94T\xd5\xb61BB\xccC:\x04\xb9\xea\x9e"\xa40\tt\xac"N?\xb1o\x13\xb7he\x01Xj&u\xc2\xc3`L\x07\xb9u\xe9b\xb4\x93\xc2h^\xb5\x11\xd5\xd8\x19m\xc5\xdb\xa2&\xe5\xe2vc\xfd\xb3}{\xc4LVI[\xdd\xb2o\x04`\x9eu\xc5"\x00[\x02\xe5\xc5\x1a>\x9dK\x96^\xa9\xd6\x1a.\xc9\xde\xb0~\x9f\xfe\xc2C]\xc2\x07\xb2B\x1c\xa9\xd2\xa5\xa2\x04]\xac\x8a{,\xa7\x92\x07wG\x15\x8bR\xc8\x03\xc9o\xd3\x02uX-*\x86\xed\xf2p\xbd8R\xbc\xf6\xae=?<5?E\xccU\x15\xa2\x10\x90?\xd2\x87*\x177\x83`i\xb6?g\x11\x0eW;E\xe5\xb3\x9c\xb4\xd8\x00\x0f\x86N\xa6\xb9\xf90\xba\x8d\xe5$\xde-\xf3\x81\x9f\xb0\xb4\xd16\x93j\x02\x10C\x9e\xdc\xa8\x16\xf8Vq\xb8\xfc\x06/\x07\x13\xa1\xe6\xdb2\xae\x05\x1a\x0e\xb8\x8a\xb5\x1fr\x0c\x9f\xca\xe1W"\x1d\xe4\x1e\xc3\xc7\x89\x19\x9d\xfb*\x92\x01LA\xb4!\xbe#XAM*\xf6P\x00\x87\x88\xed\xceR\x9a\x08D\x8c\x06\x94\x95\xe0r\t}D\xfb\xfb?\xf0\x91\xf2x\xd8!U\x07\xb8#~\xf6\xcdlF\x95\x9e\xe7\xb69J\xbd\xa0p\xbf\xe5\xa2T\xf4,\xdc\x0c\xe8\xfa\xe5g=v\xb3\xeb\xf7\x10|\xc4\xa7\xeb\xf2\xf1M\xf9:]^CY\x9f\xb1\x1d\x91\xee318\xd3\xf0\xc9\x83\xaf\xcaMy\xe3\'r9U\xef\x9e\xdcaU\xcapE\x96e\xc0\xe4\xb4\xc5\xb4\x0c\x87\xe8\xa0\xae\x12]C4\xcb6G\xc0\tF\xa73\x80/\xd0&\r\x9bgg/D\xbaK\xde\xc3\xdaeG\x93*\xf6\'\xaf\x97\xc9\xf1\xdd\x9dP\x91\'X5\x1f\x17\nOf\xbc\x9f\xd9\xb1\xab\x9cNR)\xce)N\xa7\x8b\x16u\xa0\x03ah\x8c\x10\x8fG\xf0\xfaI\x02\xa4?\xaf\x88\x11\xf3\xa48\xb1\x89\xd2K\x18\x94\xecZTnZ\xe3\xd1y\xed1\xec$\xe0,\xe4R\xf2DF\x8a%e\x7f\x19\x95\x91\x88\x85\x13\x8e\xbd\xd2\x94O\x1f{Z\xa0\x8a<)\xca\x97\x95H\xd5\xec\x0f\x81\xa7!\xc4V\xd4\xc3\xf7\xef\xa6\x13\x0f\x8e\xc5\xd1\xaf\x85qv\xaa\\V\xf5\x8fb\xd6\xcc\x91/\xb1z\x1c\x8f\x95\xf9\xcc~\xafv\xafp\x03\xe5\xe4o\xfbY\'\x0eXh\x19\xe8\x13\xd8\x1a\x10\xff"\x16?\xb6?\xdb\xde4C\x82\'\x8b<\xa6L~\xca\xack\xfb:\xeb\xd78\xfc\xfa\x11\x17{f\x8f\x81\xc0$\x82 \xe7\x9e\x8a\x0eO\x06_mO\x07K\xca~\x92|\x04\x04\xc6x\x08pY\xe7#\xaa\xe6hL\x8d*U5\xaf\x82\x90m\x9c{\xefDh\xeb~\x8d\x9e\x01\xf4\x0f~|3\xcbO\xec(\x10\xef\xb3\xd1S\xca\xafn\xf6}\x84\xe1\xb4kc\x9f\xfe\xdfZ\x8eRd\xb7\xb2\xdd\x95\x1a\xbf\xc8R#\xe9\xfbp\x8d\\\x9a\xce\xd3L\x9f]\x8f\x08\x8e\x8d\xbe\xef\xdaY"\x85,\xb1\xa3\xc2\xb2.~1\xbd\x981g\x87\xc5\xaf\xdb\x15\x14/\xebU\xe2\xb2\x8c\x18W\xef\xda\x0b+FAZ\xc3\x0e\xbd3\x1d\xae\xd1\xa1\xa8X\x90\\\xfdC\x81u\x86/\xb2O\xef^\x86J\x84\xb1y\xb1\'\x19"e\x0fdsq\xf3\xf1\xace\x02]\xf4I!^\xf1B@0\xf2H\x82\x14\xc0\xa2\x99\x9e\x96\xefU\x98\'\x12\xb2\x7f\xb2\xd3\xeb\xa1\xf5\xa9\xdb\xd2*6\xa3?\xb6I?\x03(\x16Le\xcbWv\xd5E\x88\xd4\x0f^\x8c\xc8\x92J3\x16ya\x17\x05\x83\xde\x9cR\x9d\x1d>Pg9(Z\xb28=\x0c\xcf\xf7\x80\x7fw\xa0\xb7gw\x7f>\xd8\xfbr2e\x96\x8c \x88\xfb$\xd2BN\xd2a\xb3=\xf7M\xe8[ 00\xf9\xa4\x8c\xbd\x91S!6)\xbaRK\x8c\xb0\x01\xfa\x0b\xc6\x9aS\xb5\xaa\xe4\xee\xfc"^\xa8~\x99L\x81:>F\t<\xf9fHNS=\x1eW\x8e\xafn"v\xe4\xa9\xa9\t\xafp\xc1\x1b@T\xae\x18\x05-\xa0\xd8\xb7\xb3\x0eK\xdbV\xeb\xf6 Bdv\x9d\xbe\xcb\x02\x1f:u\xea\xfc\x0f\x0e\xe1\x99\xf7D\xee\xbeb(X\xc0\xe4\xe5\xb4\xb2\x1a\xf9\x04r\xe5\xf2c|\xf3ZI\x9dr\xc7\xaa\xf98r\xf6w\x15\xc7fj\xcb\x02\x98\xb6\xda\xa8\x1f\xd8\xba\x05\xbe\x9b 
R_\x1e\x99s\xf7[\xff_\x1c\xab\n\xa4\xfa\xe6\x97r\xbcB\xd5#R\xd1\x9e\xaa\xba\xc7\xf6\xa2\x19\\\xd8\x92Yz\x1eb\xa7*\t^\x9ad\xbeK\x0e\x07y,\xb6m\x15\x84:\xd3]\xfd\xfaX\xc4Q\x05\x14\xc6%@\xcc\x96~d\x8a\xf5\xec5\\[\xf2;\x15\xee]\xf8\x90\x7f\x00P\xf3\x80!(g*\x87\x9a\t\xa9eHO\x82}rl?L\xb3D5\xbf\xee}\x14{\x10\xb6\xc5\xb5\x13\x1d\x8c\xe5\r\xf6-l\xbe\x11\x1c\xc7\x1ceT\x8e!\x11;\x93\xd5<_z\r\x94\x82rg\xc1\xf4\xc5\xa3\x7f7\xc4:\xf0\x19q)\xbeC(\x7f\x0c+\x89$\x94\x9b\xa5Q\x9bh\xe3\xa3Su\xd6$\x03+\xeb,\x7f\xf9\xf0\xb0\xb6\xc0!\xef\x8eiCN{\xa72/p7\x84\x81a\x85C\xd4\xf2\xd9\xb3\x84@\xac\xfe\xf3\xd5v\xb6\xb5}\xbc\xd0\t\x0f\xcf\xeb[{^\x84\x05\xd0\xf0\xd7\xe5(?\x86\xd1\x07+\x88\x84`\x81\xbb\xe6C.\xa6{Y<Yxj\xe7\xa7\xef\xae\xd4\xd7F\xf0=5\xca\xc0\xb1\x981\xfd\x04\xe1\xa5\xaa\x0c\x95vw\xbb\xa2\x8a3c\xf8O"\xbc\t\xe5\xeb\xc2\x1b\xdb\xc9\x81\xee\xb7dYY\xfc\x0c\xb2R\t?\x82\x9e^$$\xe7\x0f\x1b\xccW\xa4\x03\xf4\xa9Cfi}"f\xb7`\x85J\x16R\xe2\xa3O\xf6\xa4X,j\x06\x01;\xc1\xded\xff\xa1\x02|\xa4\x93U\x9d\xcf>\xb5\xebci"(v\x01\x93\xb5\xf3\xe8\x03\x0bQ*\xecv\xd4\xe6\x88\x9e\xca6\x94+/\xf2\xaf8B7\xf7\xd7\xc8M\x10)\xd8\xd5\x17kS\x96\xb9\xb9rB\x19\x02\xae\x0f\xd1<]q\xed]\x07\xf9\x1a\xc4\xd8\xb7\x9f\x1f(\x9f\'\x0c\xf6@\xb9\x99J\x1c\x0e\xb9.\x8fd0\x7f\xa2\x12K|b\xb1G\xa0\xca\x7f\xc3\xc6\x92=\xb9"\xefXA^aD\x9c\x89\xf9\x9a*\xf6!\xd9\x06\x95h7\xbb\xf4\xa9w8jF\x99\xcc\xbb\x86\xdcP4\xa5\x84\xfbd\xa4\xe1\xff\xcb\xa7\x84\xd2\xe7\xa6\xbe\xb2\x9ap8\xb0\xf4\xac\xdc\xe2D"\xb7.=\x9b\xb7\x8d\xff,|}\xa6\x84\xd0\x85\x17\xc2\x0e\xc5\xefb\xf6\x97\xc3\xa3\x1aw\x9eY\xc5O3\xe3%\xca\xb6\x05\xd2\xb9\x02\x05\xa6\xfei:\xec\x1dG\x80h\xdbv\xc2\xdfg\xad\xf05K\xdfb\x07\xea\xedq\xa2+\xc2\xf4\xd1\r.0\x8dV,\xe2\xa8\xf5\xd9E\xe0\x0e\xc8\xddy\x8b_\xf8\xbc\x8f]I\xbeA\x02\x86v8\xf9G\xce\x96G*%\xd9\xe1\x05\xdc\xc2\xae\xef\x1b\xb8\xb2\xcf\xedB\xd1p\xd9#\xf4\x1a\x86p\xc1V$@Q\xaed\x8bW\x90\x12\x008}\xbf\x8dG\x86\xf4\xbe\xbax\xa5\x82m\xb6)X,\x97\x05k\xe8\xb2\x0f\xc5w$0\xb0f\x0e\xa9t\x9b\x02$\xb0\x83\x8b\x0b\xec\x02|bzag\x94\x1b \xd2sCE\x9c\xa9\x1f\x19(\xaf\xf1\x99\xbbC\xfb\x1e\x17yk\xb6&W\xb6\t\xd0\x04G\xb6-V\x11S \xc9$\x94X\x89\xbb\x0f[\x17\xd5;x\x93\x08\xbe\x16E\xe3\x02\x8dE\x8c\xa8b\xbd\x81\xc0\xc7\xae\xbb\x141\xbe\x18%\x05\xf8\xf9s\xfb\x01\x11\xc6\xf0=L\xd3\xcb\xbd\xd7\xfb#\x13bTgb\x8a!\x8c\xa9\xc3\x1f\xaa\x1a\xb1\xc6\xb2\xf8|\x0f5\x9e\xf2\xcd\xce\xa0\xef\xb2\xbf\x82\x9fm\xa7\x97\xd5\xb9\xb2 \xb1\xa3\xb0JEx\xbb\xf3`\xc5{\xa3\x859\x99<\xb1\xf6H0\x8c\xde\xb5v\x1a}\x0bQ\x13\xabc=e6;\xa9C\xe7D\xa6\xc8\x85\xd1e\x15Q\n\xf8q6\xf2a\x83\x85(r\x87\xa0r\x94Y"\xd8\xbe\xbe\x1d\x11\xb2S\xfe\xfc\xad@\x92T\xd9L\xc4\xbdR}0\x15\x1a\xb9K\x05\xef\r\xd9\x82[\xbeM]yj\xa8&+\xa0ae\xf0h\xe7\xfbg\xd5\xb0[-[CB\xc1\xd8\x85~E\xe9\\\x98#=""^>\x94\xf9%\x7fJ\x11Z\xdd\xfa\xd6\xbd/{\x02\xe8\xd2)\xec\xc0\x13\xe5\x9f\xb5\xeahJ\x9fjUS\xfb\x8b\x7f\xc9>\xf8me\x14\xec\x01>\xb4\x05\xae\x04\xa0\x15\x8eI\x10\x19\x90l\xe3T\x1c\xde\x05\x9ew\x08d-V\xd6L\xe2\xba\xfa\xc8[e\xfd\xb5j\x85.\x03\xec\x92\x13\xdb\xc6\x1b\xa4\xad8W$v\xf5\x8e\x1fm\xab?\x89-\x06\xa7\xc3+\x02\'p@\x9a\xaf\x82\x93i;\xd9PA\x10 9Z\xb8\x7fw\xd0\x90MBb#\xca\x8c[\xc2\x92L\xaa\xbd 
\xbb\x92\x15d\x89)X\x17\x02(\nEQ/\xa2_\x82db\xa2\xd6oF\xb0>\xf5\xe4\xf1\xa6h\xeeh^\xac\xdc)\xf6\x95\xf8b\xac\x05%\xf6\xe0\xd8\x14\x15\xbe\xb1O(\x8bA\xcbe\x17\xdaB4\xb0\'\xa4y\x85\x8fb>\xae\xde\x03\xd1\\\x1aR\tB\x0f\xbf\xc5\xcbCS\x08O;\x15\x83\xd9.\xef^\x89Q\x1f\xb2\xd0*\xec\xad4JWQx\x04\x0f\x94\x8f\x10\xddQ\xf5]\x95\x03a\x1d\xd8\x8a\xde\te\x97\xde\x057\x11\x03\xf8\x07\xb0\r\xe8\xc3\x8e\xc6\xbf\xbe\x8c\xa7<\x97R\xfcf\xb6u\xab\xd1\xacV@\xc9p\xd6\'\x87*73\x9fJN\x97D=\x8d\xc4\x1d\x92\xde\x06.\xdb\xe3\x08\xe4\xa1\xc7\x0b\xe8@JA\xf6\xb0\xeb\xb5mS\nQ\'R\xc2ej\x7f\xda6\xa3ePW\x13\x8e\x0fC\xf5\x8c\x1d\xac\xeaqq\x9e\x03\x8f%\xca\xce\xd5<\x0f:\x06C\xbd\xf4\xaez\x1f\x9aG\x85\x90e\x86Y \x96g\xbf&\x14\xf8\xc3"\xd2Ngk\xac\xb8\xdd\xbf^\xd71\xe8T!h\'Y#D],;\xb1\xb5e\x81\xa9\xb55\x08\xb2\xdf#\xbe\x0cv\xabu\x14\xe2\xde(\xab!\xf3?Q+jr\xe2\xf9\x19\xc9+\xf1\x02:\xe1\xb4\xa2\'0|J\x1f\xa1\x9b\'\x17HHjZ\xfa\xcco("\xbb`\xc8\xd4\xea\x88\xed\x7f\xbb\xf7U\xe1p\xa7\xb06\xff\x88\xfbb\x87~\xfdTe\xbe\xe2%jV\x81\xc8\xd7x\xa4u\xf1LICk\xbfKK,\xd6\x05\xa9:$\x04\xed\x9d\x7fU\t\xbe\x1d\x94\xd4\x1b\xa9\x1e\xd4\xe9\xb4\xd8\xc7\xad\xfb\xb5\x0f\xc6Q\x94\xbd@\x0e\x05\xe8\xa1\x93:\t\x1bw\xc3\xa3\xe7B\x1f2\x95\x01U\xf4&\xd7\x03\xbb*\xf9#\xdaL\r\x8c\xae\x99#\x9fj\xdf<kA`\x07^/-\x06\x1c\xcfNe\xaa~\xa2\xf7\xfb\xae\x0c\x91\x93C\xab\xd6\xbe\xa9\xe0\x91y\xdc\x9c\xf4\x8e\xbc\x10G\x1c\xa1_\x97\\\x06\xea2\x83n@\x81\xa4\xcc\xd7\xbca\xac}\xf3o\xa9x\xbf\xe8n\xdfa\x13\xe5\xe7\x003\x19\x98\xc6\xa8pTjL\x08\xd5\xf1\xd5\xaa}\x9c\xe5\xdf\xe8V,\xfb2\x9b\xb6\xa4\x84\xb1\x1bkz7\x82m\xc5\xf2\xe6\xf9\xf7m\xea\xed\xa9 b\xac\x14\xe5\x1f2p\x8c\xdb\xed\xa1\xcdBn\x925\x98Q\x9e\xc1\x9e=\xc7\xb6\xfa5\xfa(\xfcWE;U\xa9\xca\xca\xd7\xb8a\xb7\'s\xd3\xc9}\x95\xe9k5\xac\x935\x84f\xd4\x08\x05\xefl\x98!+[\x8b_\xb0Q\xe8\\uS\x84\xf4D\'\xa8\x85\xa2f\xed:\x99\x81k\x81\x90G\x81X\x80K!\x19\nB\xe6\xb18\xb4\x08-*c\x12\x1d\xdc\x1dV2\xe2\xeb\x8d\xda\x17u\xb5qj_Q\xa6\xa2\x11\xf1 vj\xaf\xa2\xc5\x02\x91\xb9\xbd\xf3xLq\x7f\xb1w\xe4;A\x95\xa1\xd8\xcdm\xde\xee*\xe1\x0cpC!)\xce\x85\xbb#}\t\xf6\x11?\xc3\xd2\x8f(\xe1\xcb\x11T\xe7{\x12\x18B\xb8\xc6\x1a\x0e\\N\xf2\xf9\x96!\xaf{\xb7\xb4\xae\xdd\xdb\xaaJ\x1d)\xe9\xc9-\xa8\xf7\x19=\xf7\x96*\xa8\xa5Xem\xd3\x0f\x8bP\r\x04\xa7\xa7\x06\xbd\x99\xd5\xe1\xec\xd7\xa8\xbd\xd6#\xef!\xba\xb6\x1cz\xcaj\xadK\xd9\xa8\x8a\xc40\xa7\x05\xe8P?\x96\xf9@\xf9\x9bu\xd4\xe2\xcb\x92][\x15n!Y+/\x18\xcd\xa3f\x17\xfc\xb2O(\x05\xe9\xf8\x94\x8a\xa5\xecT\rZ\xa4\xed\x96\x97\x1f\x94\xe3\xb04\x9eI\x1a\x82t\x12&\xf1\xdb?[{^\xec\xb2\x8d\xd9\x08\xd8\xf2h\xc1W\xc7\xf7\xf5\xe6\xd2\xd9(\xd2k1\xcdR{\xda%0\x8a\xc2\t\x00+\xd0\x1a\x99\xbe\xfc\xb4\xa2L55\x13\xc1\x8aN;<\xba\x82sv\xf1\xd0v]]|\xdb\x19F`6\xa6\xd0H}\xa2l\x82l=T\xcf\xdd\xdd\xcc\xb3\xeb\xae>\x9c\xe2\xbfpPH\xd8j\t*\x85\x92\xb7h\xaa\xafx\x1c\xb5\x0c\x93K\xd6}\xd3\xbe.#\xb0\xe7\xd9\x84\xd3\xea\x7f\xc4/#\x02(R\x11\xe0\x19\xfb\xa0c\xacB\xb3E\x05\xf4\xa1i/\xe6\xce\xe4\xd6\tl|\xdcz\xe5k\xc6\xf0\x82\x9bj\xdb&\x99\xf6\x0f0\xbaX<\xc8\x12$LkH\x14\x02ET\x03\xad\xe0!\xa3\xd3\x98\xf9\xca{\xf5\xc5\xe6\xe2\xa2\xb3\x19\xf7x\x9e\x14\x1b\xdf\x843\xc8\xf7}\xa8\x1c\x88DZ\xa6jH,\xa0\x8fA\xc4\x93m\x83s*,6v\xee\x98\x94\xb1(\xb7\xbe\x07\x8eh\xf9\xe5\xd9\xdc\xe4\xcd\x1d\xb6\xe2\x1f\ttM\x86N\xbf\xe8\x90\x97f\xae;\xd1\x98K\xb8\xdc\xb0B\x9f_\xa9\xc0\x01oX\xe4\x1b\xaa\xc1\xa9\xf5\xab\xa8\x87\xd5\x9c\xdf\xde\\"_C\x0cD\xe3O\xbc\x18jV,U\xa6\x8aUM\x9b\xe4\x90\x15\xd4I\xf1\x0b\x8a\xf4\x8d\x0c.\xfbp^p[\x1c\xa8\xa4\x07&%\x89\xec%\x94\nB\xf7So(V\x0e+w\xd5\x12\xb1iTS\x1b\x10\xb7J\xb5\xbey\xfd\x0f 
\xdb\xd8m\x87\xf6-\xe2>\x9d\x0c\x03>\x16bR\xffJ\xfe"\x92WBL\xf8\x0bn\xe2\xddc-\x04\xde:\xc5\xa2\xa1\xd4\x8c{\xac\xebQ\x94\x87\x9a\xb7\x94\xaf\xf9\xa2\xfc\x81*db\xb3Q(-\x7fq\xa5\xf8\xaby\xa5\xf2`<\xe2\xfb \x11\xd3\xbcRsO\xf2zN-\xfb\xdd\xda\xe4\xb5=\x11\x17\xcf\xbd\x14[\xcd=\x9f\xd8\xdd\xd6mQ\x88dE\\\xac\\\xb8?I\xcf\xca\x13HaU\x8e\xc4\xd6D\x1cy\x96.\x88\x13\xe8=(l\xa0\x1e\x87\xa6\xbb/v\x9b8\xcd\xec\x9a\x88Y\xa4\xbcf\xa7+\xce\n\xd9*\x04\x89.\xcd\xaaU\xaaj\x13\xf8\xcaWTcM\x17^y\xbb\x89\xad<$F\x04)\x10\xd9\xf8\xfd\xdd\x05F7\xb8\x88!y\xeal\x81P\xfb[{\xb7\x7f@,\x8f\xd6,\x1b\xab\x96|\xfc8\xb1*3D\x9a\xe7C\xec\x0b\x84\x1fM\xbb1\xf3\xe5\x0bO\xcb=\xed7\xb1\xa8B\xb5W\x97\xa2\xcdU\xde\xa8Y\xce\xf6\xec#>\xff\xf7\x93\xd7\x9b\xd0\xbf\x08\x96\x0f\xb4\x19BF\x87U\xf8\x87\x9d6\xb3\x97\x88@\x1a\xe9\x94\x14\xd2\\k\x15{\xf7\x17\xedT\xfa\r\x04I\x0b\xa5\x88^u%\xbdz\xa1\xc7\xddH^\x03\xad\x9e\xf8\xcdR\x05Rn9\xc5"\x14\xf8 4\xc3T\x98\xbbeJ\tr6\x0ecp\xae_b\xff\xfa\xca\x1d6\x91\xa92\xdc\xbf\xd3\x9e\x98c|\xbfp\xe6\xe5w\x9cz\xa6\xa53O\x10\xd2e+\xb8\xa5\xec\xeb7\x9c\xc9F-\x13\xd8\xb8`\xcf\xb6\xc9\xfeU7\x14Tr\xe4\xc9;\xbb\xe3\xbe\xc7<\xfd9\x97\xcd\xcc\x02J\x00\xff%\xac\x97\x04o\x95\xf0\xacE\xf4\xc8\xb7yL\xe1\xe1\x8d\tH\xf9\'\xbatWi\x01\x13r\xea\r\x0c\xa9\x02\xd3\x8cX\xe8\xd5\x81u\xc7\x1f\x03`b\xfd\xa2UcH\xe8\xe5\xdc\xb0j\xf1\xfd\xb7ZL#\x15L\xe4\xefTxl)c\xb2o\xfe* \x83\x93<\xcby\xb5\xca8\x80[m\xe6;\xb8\xf18\xe6\xa1\x07\xd1M\x83\x98\x80\xb2\x0fN"K\xcf\x84\xb7:74+\x8eNw\x82\xc6\x84\xfa\x87\x1d\x90V\xfe\x88\rZ\xb9\x9e\x08!B\xd6\xf2Z=\x0e\xb7\xb0\xa0\xe8(9\x83\x93\r\xabC\x81ZlZ\x00\xbf"\xfe\x8c\xbf*5b\xb3c\xa5\x91\x0b\xe8l\x01\x9a\xda\xf8\xa2\xc2\xa9\xf8\xb1(\x937\xe5\xa3\xb3\xdbE\xd9\xfb\xffds\x85\xb8\x04!\x80\xb8F\x0cP*hE\xb7w\xc4\x1eb\xb4\xcc>\x0c\x1e\xa4\xdb;\x16dP\xba\x8a\x92|\xb0.tUx~\xcd\x17\xfa\xe2g&\xff5\x828\xaa}\xa7N{]\xa8\x0b\xd8"\x84\xf6\xe9\xfc\xfe\x13\xb3\x83^\xab\xac\xc9^\nw\xee\x94\x19\x93\xa4\xf4P(\x85\x1bU\xdc\xe8\xe3\xec\xa2\x16\x9b\x9b\xbc\x11\x91\x12*\xa1#\xbe\x9d\xb3c\xf7_\xa1Fo\x97\xcb=\x94\xd1\xbe\xb6\x17\x1d\xd7S\xd8\x02\xcf$g\xe7\x08\xa4Fx\x82q\xb4\xdco\xc8W\xd4\xd8\xf5\xf5\xbc]W\xdd\x81\x93\x99<\x129U\xcd\xd0A8>A\xae2T\xa4\x12*p\xb1\xd8\x89h \xfb|2\xf9R\x0c\xbe\xb6\x1c\xfe\x9b\x8e\xb5\xbd{\xf7=\xf5\xd5\x17\x03\x7f\xaf*\xc2\xc3=\x85\xefl\x93\x10\x1a\x90n%\t$\x88\xbb\xa2s)\x98\xb5\x9d\x1ah\x9d\xc7E\xb4k\x03U\xa6:t+0y\x87%\xeeN\x95Z\xa9\x0f\xfc_\x99\x8cA\xa7\xf2\x0f2-\x03X\xba\x02\x18\xb3u&#\x92q\xf1\xc1\xda\x90m\x8e\x16Bg\xec9\x95\xfaX\xa9w\xf2(,\xb3M\\GJ\xe8\xfb\xc9\xa6\x17\x82\xd3\xb6\x0f\xbc\xba$\x02~\xd6\x1e\x9d`n5q\xb4\xb4b}C\xb5\xe7\xa9\xa4S\x92)Q\xed\x9a=(\xec\xc8\xfa\x94\xcf\x89\xbb\x90\xfa\xc6\x00\xc8=\x14\xb1T\x11(\x82U\xef=\xfa\x0cD(\x9bQ\x1b[ \xf9#qG\t1\x94G\x8b\x92i3A\xac\xf7*\xfe4G\x9b2\x81\xd0\xf6rz\x14,*\xa7@\xd6\xfbg\x05\xc0\x89\x157\xdf{}\xb7O\x83\\\x89\x88\xaaX2F\x1c\x950W\xa7f\xed\x1a\x8c\xca\xd0\xfdX\xa0\x9d\x8ew\xf9\xb4^\x97\xfa-\xf5a\xb2\x94\x96.\x8bp\xd3x$F\x18z+\x8cC 
n+J\x1a\xcb\x849\xd412D\xd0\xd5\xf2\xfa@\xf6\xc1\x90\x97U\xc3AM=rD\xb7,\xa2\xa1\xf3\x82\x7fl\xe1\x96\\f\x13x\xc5\x0fx\x11d\xca!\x8a\xc0\xcd\x96$\x8f\xa8\xa16\xf3\xe2\xaf|\xbeX\xbbM\x82\x16Jc\x92\xb4\xf1\xf3G8\x1c\x7f\xbc\xed\x1b\xb2\xa3O\x18\xc9\xbd\xb0NSC{\xd8\x97\x1eH\x81\x82\x8d\xbb\x00J\x0c@Q\xd1\x9c\x1c\xf8\xf7\x12p\xad\xd5TV\xda\x13\xa3^j\x87d\xc3:\xb8w\x14\x9a\x85\x108m\xa4+\xc6(\x97\xd8\x88\x88\x04\xd4\xd7lsy\xf1\xf6\xed#\xa0\x1f\t\xd9\x1f\xad\x82\xa3V\x98P\xba\xb9\x8bEf/\x90*\x8bl>*\xdc[5\x13z\xcaB\x1b!A\x0eD4\xc1\xf9c]\xa5[\x14\xcb\xa9z\xad:J\x90\xec?\x81\x90d6*`:8\xb1\xa7\xd1\xb0p\x15\x8bO\xa1N)S\xcc\xfd\x81\xbba\xf0\x8f\x08\xab=\x9eS\xd3w#NP\xaa\xa3\x1c\xe4\xe8M6\xaa:]\x82\xb5\x90\x87\xb3\x04En\xedJ\xd9\xd6\xaf\xc6J\x876f\xa2B\xf1\x0f\xb9\xdbrM\x9b\x96M\x89\xe3\xf6E?$}\x05u\xccN"+\xd8\x8b|x\xe1\xa8\x85.!\x9a\x1c\x89:\xba\x98\xa9y\xbb?}WzIN\x9ax\xa9\x9a\xbd\x19\xa4\xb3\x8f\xf9\xb9\n2\x89\x92\xb4\xce\x9e\x94\xef\xff\xeb\x90\xdb\x96rk\xb5\x88\xc8!\xbb\x9f\x9d\xb2Z\xdf\\\x11v\xe9\x0cpWj\xb5\x05\xe1\xfd\xab\xd7\xea\x1c\x08\xb76l\xcf\xb0\xc8\xab&\t\xc7~?\xd6\xce\xdfc_N\xa9\xd8\x1c\xc9\x9b\xaa\xbcGAE[C\xb6[B3\x83\xa4\xa6f\xd0\x92\xc4v\xf3\xda\xfc\x13O(D.r\xf0C\xad\xedE\xc53\xbc;R\xa8*8\xba\xfe\xa9\xb25\xe5ao\xa6\xe5\xd2Y\xe4\x1e\xf2\xfa>\x83\xc3\xd9\xa99\x93\x9cF\xd7;\xbd4\xdb\xec\xc2\xdf\xb2\xab$\xf9\xb2\xb6|\x83O>\xf9r\xb3~H\xd8\xf3\r[\xf6z\xc7\xa4\xdf%\x8c,\x10\x1a\x85\xd7p\x03\xd8\xc4P\xbb\xaa\xd5\x8c\xd6D\x88)\xab\xabUf*\x81\x98\xa5\xbckDB\x89\xf4\x05\xeav}Z\xd5\xae\x88:\x04\x0c\xb6r\xb6\xa9\xadv\x12\r*\x16\xc8\xfaT\xc0G\x8e\xb4\xea A\x03\xbd\xce\xb0<T\xefI-:C\x05]\x05\xf2\xe4\x9d\xef\xb2\xc7\x15\xec\t\xa9/$\x90Q\xac\xc9\xbb\x88\x19YE\x1eH\xd0sM\x14H\x97[\xe8\x07\xaa\x7f\xc5\xb8d\xc49\xa14\xf3\xe8\x8c}\x87\x14\xea\xe9A\x12\x02\xf8(`\xb1\xc2c08\x10\xbf\x928\x04\r(\xdei+\x14\r\x9b\x1eM\xc4u\xbd\xac\xd6\xbc\\\xcd\xbbv\x81=\xf5\x8b5\xb6\x0e0\xe0\xdcV\xf9\x13J>\x1e\t\xfe\x8bb{\x9d:\xd9z\xf3\x089\x18A\xd6\xd9\xa0\xe7\xa2o\xb6N\x1f\x88\xed\xac\x9e\x0e\x97\x0f\xafP1G\x15g\x02\x0e\xf9L1\'\xdeG\xf9\xc0\x96\xa7\x95\xbe]+\xe4\xbb\x8bU\x92s\xc9"\xdc\xff\x97\xde\xf1\xd2\xd1\x1f\xbe\xe7\xa1r\xb76\x9b\x80\xb6t\xe0\xd6o\x04R\x936\xb7\xb1"P.z\xbc;\xf7\xe3Hy\x14\xd5C^^\x9e\xd8\x06q"8\x17\xa6b\xd3K\xfd.\xb0\xb3[6\x8f?\xd9S\xa6\x11\xc6S\xaa?n\xa39\xaf\x02\xe7\x18\x1djud\xf2mx\xd9\xb1H \xe4+\x10\x05\xf7J-\xacI\xbe\x13Y\xa8\xa0\xac\xae\xa4\xb6}?E\x99\xabB\xde\xa8|P\x7f\x11\x03\x11(Z\'%\x96\xea_r\x7fy\x04S\xd5\xfd\\\xdeMF\xa5b\xe2\xbc\x86\x08\xc9\x03H\xc3\x1d\x8b;>x\x11#\xd9z\xa9\xdf.2!\xdd\xb7]\x1a\x80\xde\x15\x96#?\xd5\xbf\x93\xaa\xed \xc9\xe7\xa6\xed\x8d\x89^\x85^\x90\xfb\xc1S\x91*T+g\x9f\xb3\x0f\xe6\x88\xcc\xec\xc8\x1b\xe11F_$\xa5\x11?:\\\xff\xfd\x17\xeb!e\n\xd6\xb3\x8a\xb1^od\xfa\xb5\x0f\x7f\x96%\xdbW\t\xec\xa0\xb6[\xf6sj\xfbT\xda\x16)%\x02\x87\xc4\xe2\x14\xdf\x85\xe6\x86N:Y3\x1b\xe5\x92\x17\xcb\xdfUd\x8f\x16~K\xfaR\xec\xae:@6\x86-\xd5\xaa\xccD\x8e\x19\xf1\xb7FU\x9f\xd6\xb78\xe0\xa0\xecm)\x82\t$\xf3l\x85\x04\\u\xb4L\x84;\xf8\xb34b\xcf\xaf\x16\xe9\x99\x95\xf6\xe65\x95\xbb\x1e\x8bd!<\x8f-p\xcd\x17\tEx\xca\x13{\xa7\'F(\x7f\xc1ex\xb5!1\xd9\xfa\xb4\xe3\xe6\xc5^\x8b\xc7o\xce\x86A\x13l\xca1\x16"W>!\xdbA9\xde\xd5\x8aV\xcb\xea\xda\xceS\x87$\xad\x0c\xde\x8a#)\xf2jQ|\xe8\xbf\x15\xd5\x90\xdf,?\x049#\xfc\xa0\xb2\x16\xb9\x85\xfb\x85\x993\x86\xc2\xe4$\x87_%<\xc5g\xf4\x9d*|\xda$\xe4\xda\x07\xe2\xd6%h\xddss*\xf9R\xcd\xf7X\x9d\x00\xc1@\xf9\x83\x8a\xf9xi\'^U\x9d|\x90\x8b+\xee\x9e\x0e0\xd0\xd0\x87\xe4\xa1\xf9#<\xa8.\xbch\xa7\xf4i\x10\r(E 
\xaa\xa5\x06@\x91j$\xa5\xe1\xe1\xde\xce\xe4\xdc\x8eo\x8aN-V\xcd;\x945\xa2\x9f\x03\x9d\xb6\x00\x91E\x0e]L/KO\x96<\x17c\xfc\xf9\x16b\xa3\x9ajf\xb1{\x07u\xf20.\xb7\xd4\xa6\x90\x9c\n\x8d\xca\xa5\xf7\xe5Y\xfeu\xefx\xf9\xe3\x07)>\x95J\xdc\xf9/\xf8+\xec\xdf\x06*\xd8\x88\x03D\'IQ\xa6\x19\x15\xd0\xdc\x13\xd4\xf8\x00\x1f\xfe\xab\xa9\xb6\xc5\x87\xbf\x17\n\x03)P\xf7DAa\x18\x9f\xef}\xe2\xd3\x94\xa8Y\x97\x0fy\x9a\xbc"\x94z\x0c\x88x\x0e\xa08\x82U\x05\xed\xbf\x11-\xdb\x1eb\xef\xf0\xec\x93P!#\x11\xbcTQ\x89-3\xcf\xc5-/\x064i\'}\xae.zN\xa5\x94tU\x1d7\x9e1Rl+cB\x10]\xfe\xd3\xf4\x84\x86J\xf5u\xe6\'B\x13\x97\xfbnB\xcfHTR\x95\n\x0f\x0c}r\xa1\xce\x94\xf8Z\x85\xc2h\xf3/\xf2\xd4|s\\\xca\x81\xb1Z((%|\x17i#\x10\xb9\xa69U\x19\x9e\x1e\x9d\x10FA\xfa\xcd\x0b\xcf]\xc6\xefP \x17\x08)\xfb\x13\xaa\xde=7\xb5\xab\x8a[\xa1+\x0cA\x06H\x86\x0e\r\xd9g\x82W?\xbd9\xb83-T$\x98`\x81\xf7T\xf2\x1b\x92j\xa2\xc9\xecDN\xff\'\xde\r\xf8\xa0\xbd|\x0b\t\x8d\x0e\x85\xc0n\xe8\xa1\xd4,\xd4\xad\xe5\x9a\xbb\x9a\x84\xe0<1\xb0P\xd4K\x0bK\xeaL-\xda$\xdcz\x99\x0e\xbd[\xd1\x15\xe3\x19T\x1f\xf0\x00\xc4\x83\xa4\x05\x01+\xa8\x06\xa3\xa2\xff$N\x99\x81\xe3\x94\xb2Z\'\xd6G\x08\x16\x06<\x19U\x92\xc8\x17#\xaa\xbf\xd5\xeb%\x07\xbd\xe3\xb9\x03\xd1\x84\x08 \xa7\xd2\xf8\xa8\xc4\xbc\x13W\xac\x90DS fF\xc5\x141R\xfa!\x98\x8fp\x16X[\xb5\x06fp\x97(xfVFF\xfe\xca\x07\x1b\xd4a\x83\x16Z\xdb\x8a\x1d\xd0y\x16Z\xa3\xe7\xb2%\x88S\x9b\x1c\xb8l\xff2\xadC\xa4~\xa1\x86Vf\xd5\xc5\x80EO\x16\xb9\xb4\xda\xa8V\x17\xbe\x1e\xbdV\xbd\x13\xb7P\xed\xbe\x16\xbe\x92zn\xdd\xef\xe7JCk2\x92\xbf\xa8*\xef\xc4\x90\xedT\x1fc\x8c\x9e<\xfb\xfc\x0c>4\x8b\xd0\x0bS\x14\'k\xe0\xb0w"\xb7\xb2\xf1\x1e\x84\x08\xd6?\xdb\xe3%A!\xd9@+\xb3\x7f\xfcz\x16\xb9Q\t D\x05\r\x8aS\x9d\xc6m\xb4nl\x82\x9c\xb8+>\xec\xd3\xdd=\xa5\xfc\x85\xfa4\x99\n\x8c\xdd\xaas\x81\x8f\x95\x0e\xfd\xaf0\xb1x}\xf9\xa92S\xd2\xb6\xe6\xd4\x7f\x18\xc5GK\xc8;\x9a\xc7b\xaa\x89\xc6\xe0\x04\xdb\x11 
\x14\x8eU\x10PQ\xf8D\xfaF\xe0[\x03\xc1\x13\xa9\xf3%Iy\x86\r\xd8g)\xb8\x92\x91\x90\x81\xb0\x19RV\x99\xc1o\xb4\xe1\xc1@\\\xb4c\x7fg\x987\x10x!]\xb5\xf4\xc9\x84\t\xe5\xf6\xbf1\xf5\xbb\x14y\xa7\xd9\xb6\xef\xdb\x08\x80\x0f\x02a\xd4gTK\xd5\xb4U@bC\x10\xa6\xc4\xd3a4\xb5\xa9\x16<F\xc9LR\x9eS\xc9w\x9c4V\xdf%\xa6\n\xbbo\x07\x87\xa6WY\x0e`R;\x8e\xaf\xa4\x02\xc1\x85s\x9e\xa8\xfb\xd4\xef\x90\xde\xbdC\xa14\xcd\xf6\x86D\t\xb0[\xd0\x84\xc7\xf8\x18\x1aP%Z\xcf\xfb\x1b\xfbX\xa3\tL+\xe6\xbb"\'\xc4\xdd\x8e\x8a\x8d\x85\xe4m\x93\x11\xa1b\xe1\xb2\x10Jb\xc2\x13x\xd8\x1bh\xb4O\xce}\xd4%"\xac\x82\x96\xae8Tg\x95xc\x95D\x96\x027&~\x99g\xad\x13\xf4\x14$C\x86_\xa6\x1c\x05\x856\xb6SE\'J\xb2k\xf1\xd0\xbbrJ;Z\xc1DS\xad\x8e\x10\xe9\xef^\xa03\x07\xe9\x07n\x8dA\x7f""r\x0bek\x16\x99!\x1c\xd9fo\xf0\x08\xd4\xa1f\xe4\x9ebJ\xb0\x1e\xd1\x80\x99?k\xa2\x92\xb1q=\xf9\xf5^)\x1bs+L\xe4AF\x88B\xabuj\xb6\x0fgA\xd0\x8cS.\x03#\xc5\xa5\xa8\x94\xbf\xa7\xc0\n;Z^\x9d\x0e\xf4\xa6C\xf5\xbeX\xe15\xd1$\x88\x06"\xaem;\xe9\xe7\x1cx\x9f\xf6\xe4;<K\x0bJ\x10[\xed\xc2G\xa9\x80\xf2v\xc7"\x82R\x15\xce\x16\x88m\x98\x8f\xbc\x16\x1d\x8c\x8d\xa3\xb9&\xde\xd4\x19\x8ec\x1d\xb8\xa3\t`/\x11x\xa4\x84+B\xedhAj}\xab\xe5!a5!\xb6h\x85(Ct\xd3\x15h{\x0c\x95\xe2W%\xbb\xca\xd0\x1e\x1d\xc3^F\x954\xdb\x89\xfe\x07\xbe]\xf0hY\xc1\x9a\x07\xb6\xe2\x81\xa2\x14\x15\x9b\n/\xa6M^\x14d]\xbb\xe8\x01qi<\x98|\xc8\x17)H\x9c\x1fQ\xad8\x10\xc7\x82\xcc@\x88\xd0W\xa0j0%`\xf2\xb9g\xae\xa9U\xb5\x8c\x8d\x8aL\xd8\xc0\xcfK/O\x7f\x83\xc0\x02\xd8&\xf8\x17x\xd7c/\x1e\x04\x8a\x95\x827+\xd8\xf9\xd23*\xbb\xf7`\xf4\x04\xf9\x10\xbb\xa9>\t\x87\x96\xba!E\xf7\x9c\x94\xa5\xfa\x1f\xdc{OY}\xdci\xf9JR\nnPe\xf5u\x92"\xf3\xba\xbe\xec\xf6:\xec\x9d\xbe\xdbG\xcb2\xc4\xb5\x18\x97\x95+x\x12e\xbes\x86\x988\x00\x82_\xd4\xa3\xb7\xf2\xd5\xed\x82\x1a\x98\x1a\x853\x0ca0\x8f\'C\xe3B\xb8{\xfcI\x10,!\xca\xacS\x9f\xba*\x84U\x05\x84\xa7Qi\x9d*\x19\xc1\xb1\x1a-\xfaW|x\xad\t\x07\x9e\xf9\x14B\xf0\xd2\x85O\xfe\xaf\x88!\xb5\x84\x99"^\nlA\x08F\xa4\xb6\xa50a$\x8a\x12\xe1\xa8\xf6\x8fyCvY\x07\xda]\xd8\t\xe8\'(\xc5$\xaa\xbd\xa8Q+7R\xed\x91$\x96\xe3\x828N\x83fx\x9a:\xbdh\x9bF\xc4\x13\xea\x9d\x98\xf3\x91\xa0\xfb\xf0|Z\x964\x97t\x81\xa2&\xa3\x00\xa2\x07\xa9\x19\x92\xec\x07\xb7\xe9\xa28"\xf5kY\xe1L-\xf2\x91\xd7\xd1z\x82\xe3V\x00\xa1\x8f\xf7\xaa\xf8\xa3\x04\nj\x96\xf229\xd6\xe6\xf0\x8d\x82\x9d\xb0>x\xa4\xa4\xa1y\x8a]9\xf1tYe\x83\xf8\xb7@\x13xN\xfa\xf8\x12\x13} \x1fL^@x\x86+\xab9\x1b`\xe9\xbe\xa7\xff\xa0\x13&[3\x83a\xca8\xe7\x03"_ 
7\xca{\xf1\xfa\xad\x9c$\xb4\xa6\x9d\xd7z"\xb6\xad\x10\xcb\xb1\xf1\xd6\x111\xc2q0\x9c\xd3\xdc)*\x13-\xd5\xd9\xd0\xa6_\x8b3\x15Y+\x07\xca\xd0U\x04wb\xcdPy\x0b]K\rz\xfd*\r\xe0h\xca\x95\xb5H}v\xd4tHt\xe5\x04\x99#{\xc6\xa5D(j\xaa\xb78\xc9@v\x92\x9c+\xf2\xb7\x9acF\xdbT\x8f\n\xf6\xa5T\xa3\xba\xd3\xcd]\xf9\x06u\x18\xbc\xf6\xc7G\xcf/\x8d\xdb\x87\x94(\x07\x16W\xfa\x19\'\x0cf\x94\xdc\xb4\xed\xb1\x14\xba\x18\x82\xa4pO\xcd\xd8\xaf\x1f\x85\xddC\x99\xdfW+\x82\xe2\x11\n\x0bH\xec\x80d\x95\xacw\xba\xfdI\xadF\x08\xa6SI\xf3\x06\xaa\xc6U\xb49\xc5\xc1\xf4\xb4o\xb4L\xcf\x9e\xcf\xa8I\xa2\xb5\xc7\xd6h\xbaO\xc9\tb\x84\xce\xbf\xdc\xd8*\xe4*F\xb8\x02;\x9dRH\xa8z\xbb\x0cL9\xb6\xf7\xd0\x81}\r/\x95\x06p\xfa@\xf6\x16r\xe5\xb8\xbeB\x852\xfa\xaf\xe4\xd1\x03\xf1\xb8\x0bQ\x1bj\x95\x1d\xa9\'9\xf7I\x00\xaf:\x8c\x9aV\xadB\xcc|:%\xad({\x1b\xf5\x97\x81U1\x87\xdaX\x83\x14\xad\x9a\x04\x0b\x11*\x90T3#\x0f{\x95;\xe4\xd0\xa7H\xf1\xfa\xe6\xb0\x94\x0bt\xcf\r\x99\xa3\xdf\xa4)MF\xf2Wax\xc2\xf1\x99\x99\x97/\xdb\x8d\xa7\xe2\\2\x90E\xf9\x98\x9a\xf2$D\xbc\x15\x84D\xf9\xe7\x1f\xd0\x90f\x98\xd6=\xc0\x1ey$\xf2U\xe6\x1b#w\xe9G\xa1\x18\x92\xadBL\x07\x0e\'T\xde\xd9&\xefq\xa8\x02\x15\x17kA\xf4\x84\xeb\xa1nK\xf4=~\xc3\xfbG\x9d\x9e\'\xa9[\x913\x8bE\x17\x10\xc0]`fHS\xcd|cGN\xafw\xdf\xab\x86\xeco\xa0\xa4\x1e\xb2\xa0\x87\x9d\x93^j\xa6\x9dS\xc6\xe4D\x91v\xf9\xf1\xa8X;\xf5A!\x98\xa3\x9dc\x07\x12\xe7\x91\x8c)\xbd\'\xe4q\xa9\x87\xce\x92\x96F\xc7q\xfe\x1cW\xbbu\xb3\xd8\x00\xa5\x90\x0f\xd6\xabj\x003E\xf2F\xbd@\xd9\xbc\xd6\xbe\x12\xa7?QL]Kx\xaf\x0ev\x95\xcfD\xe2\xe1\xe4{\xa07\xb0g!\x12\xc3\xb4\xa0\x08E\xfa\xdb\x02\xbf"\x81i\x8b\x9e\xf9\xb1&\x7f\xbe\xfe\xa7\xe7\x83!\xdf\xd6\xda\xef\xf7\xdfDe#\xbd\x96\nx\xed[\xa7\xf6\xdaN\xe87\xf7\xf0\x96\x1aO\x93\x17\xd2%\x08\xfc4\x8b\x98O\xf1\x9e\xb4S)\xb2\x13\n*\xaa\xeb\xbf\x83\xc1\x1c\x9d\xca-\xb4A\xf5\xa9t!\xa4\x87O\x12u}\xa1\xf8\n\xb0z\xaeJ\x8d\x13\xbc\xce\x8e\xd4R\x18\xaeuz\xe2X|\xf9O\xe7,\xea\xb0\xd6\xa1\x01\x8b\x05\x9d7\x97_\x9b\xf53\x85\xbaeaZ\xd4\xa0\x9f\xa3\xc4\x9e\xf0.\xe2=4\x1c\xad\n\x04\x9c\x12\x84=\x9c\xcfO(2TKc\xa5\xbc\x85]\x18\x04\xc6\xc1%,\xa7ES\xaa\xfd\xe4\x83-\x0b\x98\xea\xa4~\xa3`=\x17\x05F\xd1\x9f\x97\x00p\x9af\x18j,!I\x83\xe9\x13$\x06\x18\xd5P+\xe3\x0f5\x8d\xcbg\xf3\xb8\xd5\x86\x01NRm\xff\xe7\xdb\x9c\x91\x94\xf4NY\x9a\xddy6\xff[\xa1\x19gR\xbd\x95P@;(B\x91z]q~\xc5\xf0\xa5\x86\x9f\x94\xf1\x1c\x16\x0b>\xb9"\xca\xf1\xfb\xb5g`]j\xaebse\xa6\xa4.\xdf\x89\x08\xdbU#vl\x03\x95\xc7\xba\x88b\xae\x13\xd0\x08\r\xb9\x15#x\xaa2N\xd4\xc6\xd1i\xe0\x10%\x8ec\x0bih{\x9d\xe8\x9d-\xf8\xba\x84S\x9b\xef\x9f`\xe2\xf2o\xc2\xfc)\xf7\x83\xb4\x0e\x12a\xe4w#\x18\xad5\xc8\xac\x8b\xfc\xec\xab\x07\x1a\xdd\xc9\xa6\x0c\xcd\x02)%n[\xc6{\xd7\x93\xff9z\xd8w\x81x!a<-\xcap7\xa0\xeb\xbbX\x92\x13m=\xfcYN02\x0bFx#\xf6\x87\xab\x96U\xe2N[\xdd\x17\xa8\xde\x89I\x96hg\xb0\x82\x80 \xb1\xbb\xfe\xa2\xd06\xd33\xf1\n\xa8\r\x8b}\xecHX\x91xL\xd5I\x14\xba\x02\xd6\xec\x80\x8f\xd0RQWU\x0b\xd9\x88\xb3\x13\xd4\x93\x0fv\xdf\x8d\x89\xd0R\x8dI\xc0>\xb5"q\xef3\'\xc2Q\xa8\xf4D\xac\xf9N"\x1ea\xcc\xc2\x9b\x9c\xb0\x8d\r\x94\xba@Y\xffg\xb8\x07GS\xb4\xe7\x97\x1a\x14\x02?\xe5\xaaAgX\xa9\xe6`\x03\xfd\xce(\xec\x80,\xed_!<\x97\x8cQv\x07\x95\xe1fM\xf9l\xaa\xf8=\x13v\x8f\xe5\xea\xc8w-z\xa7GW\xb3qt\xb0\x81\xb2s\xd1M\xd6\x028\xdb\xbf>\xbe\x17l\x10\x0fFs\xc1\x13u\xc1\x0f\xb5w7>\x83\x06_\x85\x1e\x84s\xf9\xbe\xce\xf91x\xe2\xb7\xb4\x89\x9f\xce\xf5\xd8\\1[\x16\x99du\xea0bQ\x1d3A\xaaP\xf3\x00:\t\xbbp?\'\x83\xef5\xa8\x95\x17\xf9\x85\xa5\xc8u\x0c\xb6Z\x97\xb5\xd3\xdf\x07\x08s\x10/\xbf\x10\x17\xca\x8fg\x91 D\x10_^\x1e\xff\x87,\xd1\x08+W\xf3\x03\xd5|\xd9\xd2\xf3\x8bM\x87: 
a9\xf8\x91\x93\xfe-G~\x85\xea\x9d\xa1$\xef\xb0=:\x1e\x8a\xe8|qHYx\xec\x99\xe4\x7f\x85*;\xdfhr9\x18\x06\xcb\xa8\x92\x83m\x81\xa1\x95\xe2\x0f2\xdbR/\x16Mv\xf8\xf4\xb9\x9cQ\x88\xe6\xc4\xb6%\x8d\x16f1\xda\x1d\xd7\xa8<\x0c\x9b\xf3\x8a2A\xea\xad\xf0WeC\x9d\x1f\xc0:\'5\x80\\tE\x8e\x00\xd8U\x137\x1b"\x86l\xc9Hv\xc5\xf5\xa6\x13\x90\xee\x0f\xf9\xd7\x13\xbb\xa0\xce\x88\\\xa7\x07\x92Xk\xd8\xc8\x04\x08\'\xd9\x81D?f\xb7\xd4\xd5\x1f\th\xbb\xf8;N\xd4\x84\x99{\x82\xb6\xe8\xe5!\xa8B\xbf\x93LM\xd4\xeaS\xd1\xd9\xa5A\xea\x1b\xe2S\x99lU\x92I\x9er\x03\xc9)\x86\xf4\xa1\xf4\xec\xd8G\xc5\xd6\xbfqN\xb3\xa4_>\xb9\\~/-q\xb60\xfd\xc1.\x00o=\xd0\x0e\x0f\x9b\x1f\xd7^\x15U\xa5\t\xcc\xf0k\xe3\xe7w \x88\x91\xbe\x97\x9a\x88\xd4m\xe8\x81\xbb\xe2\xe6\xaf\xc8\xe6\xae\xc2\x03,\x87\x7fzqw\xb79\t\xee\x13 \x00\xc2%\xe5\xb17V\x8f\xaf|\xd3\xc0\xfbE%;\x9c>\xfcA\na\xf5\xf6r\xa52t\xa4\x81\x8dT|B\xd7\x00\x1en\x85I\x82u~\xc2J\xdd\xdf\xf5E\x88\xc8$$\x19~\x10V\xc0\x86\xf3K\x88a\xe7]?\x14>\xfd\xac\xb7\x91\xfb\xef\xc7\x17\x80W\xc3\xf8\xa4wO\xaa\xb7-(\xec\xa1\x17\x04a\xf0\xc7\xae\xc4\xcf\x82\xde\x93\x9b\x9eF\xd0\x05\xb9Z\xbb\xd8\xdf\x11\x1f\n\xd4\x81J*%\xe4\xe8X\x9f~{\x82`\x1a\xd4\xa1N\xf3$(\xa9R\xdbY(H\xf0x:\xa4\xc3\xe7$\x03\x1a\x86\xef\'\xa5\xa7\x05\x01\xaf \xf6#\x10*\xdf\xda\xa0\x9bd\xcfY{ \xd0\\MB\x9c\xf9\xd0\xa8M\xbb\xa0\xe8\x08\x0e\x18\xc6Z\x05\x9a\x0b\xdd*j\'"\xdbJ\x98\xcbf\x0bM|\xf2b\x1d\xd2Y\n\x03Be\x1b\x94\x8c\x7f\xa13\x9cy\x9c\x8a\nn\x87\xe2\x0c69\xfa\x83\x9a\xf8\xd9\xfc\x96W#\x9b4\x17Z\xa6\x9e\x89\x89\x84\x1aM/\x0c\xa0Sh\xc9U\xa0b\xb6\xaa%v\t\x87"|\xb7<\x90j\x01\xe5\x05\x97\xf2\x07\x97\xedPr^\x8d" \x9bT\x11\xcd\x88x\x8c\x06\xc9*\x96 M\xa6\x84\xb9\xbe\xfa\x0e~f\xfbV\xa2h\x8d\xea\t\xb0\xa1`\xaa\xfb~\x82*\xc1\xfcS\xe0\x02N\xd4\xabV\xa6\xd0\xe6\xa1>\xab\x9eH\x9cD\x81s\xa9\xbad\xa0\x08\x83\x88s\xae$(P\xeb]\xe7r\n\xa5\xcd\x8b\xe9\x1c\xb6\xd1`\x9e\x11;M\x92\xd9\x9d\x8f\x1e\x0f\xae\xff~\x93\xae"\x07\x9e\xbe\x11\xb2Vr\xbb\x7f\x17\xe9Z\xa4\x0c\x9bV|\x8d\xcf\xc5i\x85\xe40<cW\xdc\x89\xec\x88\xb5\x1a4\x19J=\xdci\xb7\xb2\xb9\xbe9g\x11\x00o\xbe+9#\xea\x80-\xaa\x0e\xa9\xe9\x0c\xb5\x06\xefP\xa8\x08\x88\x96Qv\xd6\xf7\xc4wIgs\xb4#\x85\xca\xca\xfb)\xb2\xb0\xde\xe6%j\x1c\t\xe2\xe1\n\xe2G\xad*\x83\x8dB\xf7\xda\x8d\xa1Y\x82A\xe8\x7fjA\xb57\xceJm*\x95\xeb*\xf4\x1bZ\xc1,e\xd1\xf1\xadz)\xea\xdc\xcf|}\xafd\x88\x03\xcb\xba\xd5k\xdf%"\xb5\xff\xb6\xc3\xc8\x8fd3\xef\x1d\x82\x92\x14Ly\xad\xde\x11\xf5y\xc4~Z\x10G%\xc5\xa6\xe5+%M\xf2N\xeb\x89MA\x06\x9d\xd7\xa2*\xed?\xa1\x1c\x9c/\x84TRT\x0e\x1dZ\xac\xdd\xea\xba\xb4I\x9abD\x04\xd6\x9a\xac\x9c\xe6?\xa6\x9f\xcd2\xe8\xad\xebrt\x0b\xb1\xbb\'\xda\xbc\xbc\xa1\xca1\xd4O\x13$\x1f$\xbe6\n\xe0s\xc9\xe6A\xb2\xdd\xcaK\x0f\xbcP\xce\x9e\xec7c\xbe\xd0\xd69\x88\xc6\x14\x1c\x89\xb6_\x89\xce_\xeb\xd9T\tF\xa3v^\xbd3y4\x7f\xf0p\xf6O\xa2\xb2u85h\x0f\xeaD\xea,\xf2W\xd0\x13\xce?\xfc\xa7\xef\xbc\x99~\xf4W\xad\xd2\xa1\x86\xb9\xd6\x03\x01\xe6:\xf9\xf2\xe5\xe3\xd8\x97W>\xb9x\xf2z_"\x15\xad\xe4G9\x13\x85QHw\xe1\xe7\x7f\x14av\x8d\x12\x1b\xbb\xa4%\xe4QV\xa8\xc67\x7f\x97?\xa3\x0b0\xd9\xd0!\xe4\xe7\xb4nzL\x9d\x10\x1d\x1a\xa6\xcd\xe1a.#&F\xb0>H.$\xfc\x08\xa1\xe4B\xcd\xa5\xa1o\x9cc\x91\xbd\x11G\xadt\xaa\xe3\x17\x12\xcf\x08\xd5\x89I+\x0e\xe5)\x8f\x80\x95\xa2\xc9Q1\x17\x146\xc7\xbc\xb9d\x8f\xec\xfeC\x0c\x10j7\x82\xcbmE\x1d\xd9\xbf\x01\x96\xf7\xc6\xe5\x14(\xa7\x93\xbd\xda\xbb~+X\t\xad\x80\xea\xa3\x85\xcb\xe9 PV\xa0\x86\x16\xc0\xd6\x90v\xc5@\xa5\x189gk\x9d\xc0 \xb2\x06\x8e\xc1\x9a\x8d\x11\x82@\xb2\xcf\xc6K\xed\xc8\xbb\x15\xbb\x13,>\xbc|\xeau@\xcb\xdaW\x92\xb0P\xbf\x05\xf77\xaa\t\x17\xc9\xf7\xebc\xfdn\xa6J\xa3_5\x15kJ9\x066\xb2\x17\xe6\xa5\x8e\xfd\x90 
\xd2{\x01\xd3A\xcf\xd4K\xd5T\x9a\xd7E\xc4\x80)\x0f\xb9\xab\x9d2\xd8J\x1d\xc7\x1c\xb6\x08\x11\x80zF\xd9\x1c\xe8\xcd\x8cq\xc9n\xcf\x07\x9d\x18\x15\x89\'\x8c\x00?YXZ\x8b\x82\xd9\xaa\xbd\x94b\xe9\xb9;P\xf5=\x99Z\xe7\xf7\xeey\xd6\r\xb95k\x13&F{%\xed\n\xf1\xbf\xb9\xa3\xa4uK\x16\x18t#J\x00\x97\x1cA\x12c\x9e\xaa\xa9\x8d\xde\xcc*:\xeeT\xc2)]\xba\xf4\xd1\xa2\xa7 \xd4`Zr\x98\xb1\xcd\x03\xb5\xdaS52}\xb21\x98AA\xd7\x8f\xa9\x116u\xe2\x11a4A\xd9\xe13\xb61\xbc\x91\xc4\xafQ\x7f\xba\x85)y7*\xcejy*\xc6\xb1NHU.\xdfS=\xfb\xb8\xaa\x8aw\xe8U/KK:{_\xb3\xe1\x17\xc8\x88\x1c\xe5r5\x1e\xcd\x8f\xb1fv\x90\x1d\xdfWE&\xf9\x0f\xa19g\x0b1\xd1\x83s\xaa\xd1\n\x95\xc0\x13\xce_\x01vO\x7f>\xcd\xc4\xc0\xed\x9aA\xb4\x1a\xaaP\x1bV\xe3\x9f>\x88_\x05]\xd7\xc0\xeb\xc4"#\xef\xfe\x05\x9dj+\xf3C\x1c8\x88\xebPS\xac\xc5\xf1n\xdd\x074\x0f\x83\xd2R\xf9*U\x7f\x91\x9e!Y\xe2\xee\xcb_\x80\xf3\x05\xae}!+_\xf2\xa8&k\xbd\xe3_\xb8\xf8}\xf4\x88\x13\xd0\x14\xaa\xc4\xa9s\xec#";\xfc@N\x97Q\xc2s\x8d\x00R([i\xf7v\xc0\x8f\xa8aXLmo\npK\'o\xd4\xb5\xd9\x01\xda\xab%\xa1\xc7\\\x18QR\xa9\xf6 \'A\xb9*\xdc@R\xdc"\xbdb\x83U36\x90\xa1Q\x8b\x96yv\x89!;\xe9&\x85\x99o\xfcv*I2U\xcb0K\x99Z\xde5\x10\xbet\xfb\x85\xc61\xc6\xe3\xbe\x98\xff\xf2\xaf\xc1\x17\x15\x0c\x08[\xd5Bs\xd6\x04<\xc3O\x9b\x83\x03\xdf\xa6\x9f\xa0H\x0c\x89E\x932\xec\x18\x1f\xc1\x07c\x94*\xa9\x7f$s\xc4\xcf\xa8\r)X\xa1\x8eXD\x878\xbd\xcbF_\x0f\xa9X\xcf\xbe\xec-\xcf\xe4\xfe(Z\x11K\x14\xaf\xb8\xde\x87\x92\r\xe8\xee\x90d;\xecG\xaf\r\xa6\x12SI;{\xfb\xd7\xed\xee\x08\x83\x90\xec\x81\xe7@S5(W\x13\x98\xb4!X\xa2\xa8\xa5n\xd4V\xd2\xde\xa5\xc4\x14!{/\x8f\x1co/~\x19\xa2P\x04\x10\x8e\xba4L\xadt\x9e\xbb\xa9\x08\xb2\x11\x1b\xa5\x1aL\xc0&\xd0\x98\xce"\xff\x83f\x89C\xa8\xd1\x960!\xd4>v\x87\xd2\xe7n\xd2\xbd\x89\xc1\xf8\xac\xa6^\x81\x02}\x90\x08\x01*\xb4\x15\x95\x87\x95\xaa\xbc\x97\x9a/\xd5\xa9<XjZ\x18u\xffY\xd9m\xec\xc05\xe5%<4\x04\xcb\x1b\x7f\x81\x05\xa5@\x1f\xa1\xc6Rd\x17\x7f\xd9.\xff\x16\x17\'\x82\xbf\xe4\xba\xf8\x18*\xf2m\x93/\xbd\xc3\xf9\xb9yI%p\xee\x11\xb99\xf9\xe4\xa6\xaa\xec\x14\x18\x7fZ>\xaa_#gkV/\x84p\x0b\xa3\xa92\xb0\x1f\xc2tjZ\x04\xae\x84M\x80_\xb6\xc9U\xe1\xe0\xde-\xf6[\xa4\x13\xb3\xaa\xae"\x15\xed\x9baT{\x82\xfc\xca\xeb\xcd\x89\xce\x18\xaeBs\x1d\xd3*\xd9oR\xbe\x1f\x91\x98R\xe4[\x7f%/\xcd\xd1vc\x92\x8e`!o\x1b3\x92j\xc1\x1a\x14t0\xda++I\xad\xd0i\x16\x86\xce\x04"e\x1bCc\xd4\x14T\x11\x88H:c\x95@\x94}\xea6{ZS-\xb9Lni\x88\xbc\x86\x1c\xa7\xd6M|\xf4SN\xb7\x04\xeb\xa8L\xc1#\x9c\x1d\x7f\xe2\xd0\xfa\x0f\xe6\x84\xd8\xc7,%\xfb2[\xa1\x14S\xed\x9fO\xb7x\x07~\x04\xda-\x01L9eb\xad?\xa2x\xa7:m\xe4a\xb5\xa6\x9f\xf9\x9e\x82\x84aS\x87\x17\xd5\x1a\x13\x8aR\xa4\xe8\xce\xb7\xa0\xb0?-\\\xf8\xac*\xb2\xf4\x93\x1b\xc1\xcc\xb5\x86\xe02\xda\x0b[\xc4<\xf52\x02\xee<{\xf9\x80"\x10\x8d\x86\xd4\xb1?\x87u7\xe1\x176\x98\x15\xed\xe9AQ\xde>\x12\xb8\xa1\xfe\xd3 \xdd\xb1\xe7Qj\xf4l\xc8\x82"\xe8N\x84\xf9\xb3\xbbk7\xe1`Nf\x95\x8d\xc9\xd5sz=\xfd\xe1\xda\x88(Ax$\x9c\x16\x17-\xbe\x84\x1a<\x81\x1c\xb4\x84t\xd0Tw\xa6\x1e\x06zFt\x83\x944?\x16\x19\x94J>\t\xfa\xb91\xc6\xb9\x90qq$c\x83CL\x98\x1a\xc2\x9b\x1d\r^;\xbd1\x90\xec%\xcb\x85\x13\x80(\xf0\xd7\xee}\x9d\x15\xf8\x1c@\x19\xac\xcb\x07Aq(y\x856\xf1t\x15|\x08\x93\xf8\xf7\x90j\xacc\xf4\xcfsF^\xf7\x1f\xf65\x98S\xd4\x8d3X\xcak\x90E\x82\x82\xa5_\x17D\xc3\x8f{\xc7c\xe2\xb7\xf9\xc9N\xbe2\xc5\xea\xd8\x88]qX0\x12d+\xc1\xea%\'C\xf8\xd2\xdc6\xe6\xd9\xa7?<s\xd0\x83\xcd\xcak\xa0\n\xe1=\x0b!\xdeptxd\xe2\x13\x10o\xf4e\x84E~t,:\\f\x82\x17&k\xc01jX.\xf6\x0e 
\xe5\xe4@\t\xb0\x9c\x18\x1cd\xc4l\xca\xfa\xe1sI!V\xceq\x86k4\x10\x06\xe5\x8c\x18\xd0\xea\x1dG\x92@u\x9f\xe0O7\xf3\\\xec!.2\xb1\xe5\x1f\xe4\x8b=<\x02\x9f\x8c\x81r$l\xad\x93\x02\x05\x05\x19(B\xc9G\xb5\xa9^\x18\xb5r\x94^\x89\xa79\x93\xacM\xbd+\xadDO\x8c1\xee\xe0\xb4\x0cI\xf3]\xc3LH\xaf\x85\x10r}_)\xa9\xa6\xae\x90\x8bW\x8e\xabz\x02\xc17W\x7f\xfe\xac\xd1\x0b\xb2\xe5m\xeb\x15r\x00"\xc4\x08\xc2\x19\x8d\xa0\xd5\x8c\xdc@\xb5]v\xbe\x17\x994"\xcd5fqC\xbc\xd0\n\x98cS\xce\xed{j\xcf7A\x9b\x10\x0fd<\x14G\xaa\x945\xcbJ\xb7\xb3_z2\xd5\xdb\xb1\xad\xb5\xc14\xa9P\x15\xa4\xfe\xd6\xbdG\xbfC\xb7\xf9D\xd4\xe6\xd8\xcb\xf1\xa9[\xc1iDh\x15\x9cRT2\x9fy)\x8d\x90rl\x95\xbd\xd2\xe9`\x86h\xa3^N\xb6:(R\xed\x12jD\x05\xd2\xdf\xaf\xc4\x82\xa0JH\xf2\xf90\xe8\x91\x10\x8e\x81D\xe1\xbb\x1b\xb1\x99l\xd0\x8c\xea\xb8\x85\x86u8*&\xe6\xdb\x1fjU\xde9+\xe6z\xf8\x9b*^\xf9\xc5\xa1\x17\xd0\xfc.m^\xe3\x05\xbe\xb1Oe(KR\xdc\x03\xaa\x06\xb0\xd5\xb2w\xb89\x866%\x9ee\xe0\xf2\xe4^\x83D\xc0\xcc)\xb0f\x9cBU\xd8\xa6\xfd"R\x13\x19\xefw{\xf7p\xfe\xea\x8f\n\xb0\xba\x85]\xc5\x19\xd9\xd4\x1f\xe9Q\x06q{~\xab\xc1\x96\xc9\x98j\xee\xf9<(\x9c\xed\xea\xe7E$\xa0\xed"\xc40\xb2\r\xcc\xa1\xcc4D\xab\xae\xbd>[9\x0fW\xb7\x90{\x15\xe1G\xe2\x94\xb3\x1a:\xa6n@\xcd%\xf6=\x10~z<G\xd2\xb4\xaf\xcf,\xd8\xb6\xe1`^\x7f\xd6\xf9)\xcb?\x07\x8d\x00ms\xf0L\x1e\x0f;#z,\xa4\xd1\xdbE?\x8f\x8c\xcc\xa17w\x96\xec\t\x93\xbe\xdd-\xd3au\xe9\xb2L>\xfa\xef(yTi\x8c\xb6\xef\x10%\x9f\xba\xc1\x94\xcch]\xa1N\xb6p#\xc2\\(*\n\x1f.Yv\xb9\x14 \\\x04\xd3\x1b\x93\xcc\xdf;\xbd>U\xb5\xb4\x11;\x96\x93+\xb2\xe7\x9e\x8f3\xfa\xc0\x0b\xea\xb3\xe1\xb4\xfb\xbe\x86K\xba/\xcb\x17j\xb0\x8c\xe7\x8e9\xd1\xd7\xd9\x07\xc4\n\xd2\xc2\x91\x04-DZhb\x8e&|q\xfbuF\x8929\xa5\xe8,-\xdf\x9f\xa9\x02DM\xd4"\xaa\x05Dc\xc8:\x8b-\xec.\xf2\xaal\xca\x0c\xaaVR\xb0\xee\xd8\x82\xcb.\x91\x81\xa1\xe0@\xe5\xfb\xd2\xa5\xc8\xe3\xb3\x12W\x8d\xa2\xf0Qy\xb1*\xf5\x9f7\x91\x0f\x88\xf2#%8\x10\xc8\xa8\xa9\xf0\x9a\xa7\x9eX\xf5gA\xa5\x19\xa7rV.\x01\xc3\xa6\x19\xf7\x04f\x1d\x9d\xe8\x8e$\x078\x17\x07{\x1a\x80\xa5\x89s\t\xea\xa0\x88\xd1]\xce\x07\x89\x04\xe51\x8e\xbfk\x01\xbd\x89o/\xbf\xd4\x84i\xd6\xfej\x1d\x9fXq"\x83*\xd2\x89\x85\xd3\x97\xaao\x10\xf5\xc2\x04U\x8e\x06\xa8\xee\xfc\x91\xbd\x93\xe28\xa9\xa3*\xc1\xd4\xa1\x9f\xb7\xb2\xbd\xa2M\x83\x90\xacU\xc8E\x84Ez\x8a\xa5\xb6b\xa0\xbe{\x8a\x91\x87{\xe6\xdbkA\x17\xce\xc8q\x97K\xd9_18\xb2\x1dM\xb3,\x7f\xad\xe1i\xe5\xe8\xd9\x82vv\xf8\x9f\xd6\xf7B\x90%\xe9\xe0~&\x82lX\x8d9\x85A\xe1\xaby\x0f\x04\xb9\xb2kd\\\x93&\x98h\x14*\xd4T\xdc\x90\xdf\xd1\x1e\x18\xf8>i\xdc"\xba\x17\xe9\xdb\x9d\x89tEs\x94\xc2\xceU\xa0m\xa2AwdUOK\x82\xa1\x7ft\xa1;\x12\xbc\x1b\x14\xcbX\xb4\xad8\xcf\x06\x1b\x84T5p\x1d\x8b\xf6,\x9b\xd1\x8c \x08)TQ\xba \x15\x88\x12\xe4]\xce\xad6" H\xaa\x8d\xd0R6\x91&\xe0\xb6Xu\xe6\xab\xddm)\xd2J\x1b\x8b\xf5Uhc\x15R\x18h<m\xb2\xe4h\xe7L\xe6\x80n\x07f\x0f\xc58>\x96t~G=\x95bN\x14j\xee\xb0\x81\x02\xb0\xe6\x93zF \x1b\x87Ri\t\x92\x11Murn\xe5\x95\xe6;I\\\x8bU6X\x04\x1b\x80|"\x9e2\xcf\x0e\xb5-\xee*\x95OT\xfe\xcdc)\xcc(\x91k\xd3\xa9\x87\xea\x12\xae\xe0\x9a\x18w\xf7-\x0b\xfc\x19\xc0.\xd3C\x1a\xdf\x98\xd0@B\xc6\r\x9d\x18\xa1R\xd0\x08\xdc\x94A68}s\x16\xa3\x98\x878y\xa1yK\xacG\x95\x8f?\xa7f\xd2\xcbx\xce=\x9fT\xa8 
\x1aY\x1dG{\x89\x9c}(\x89\x8dD\x12\xef,\x9eh\xb2Z\x83\xc4\x8bs\x7f\n\xf2\xf2\xe7\xc8\xe8\xa3\t\x8f\xd1\xce\xc4\xd1\xd1\x05*q\xa5\xa8\x816\xe7\xfc\xf6\x9d\xb4\x8c\xa3\xfft\xbcZD\xcc\xb2\xd1/5Gs,9\xe8\xc9!\x99\x18\xef\xb4\x92\x82\xda\xc9B\x08\xd1\x96Rg\x9b\xdb2\xd9PW$\xa2\xdeA\x04\xcfEOuWY\xef\xde\xba0\x15\x8c#i5f\xb9\xaa\xce\xe4\xe5\x14\xc8\x9az\x88\x18\t\xe4\xb0soN}\xb9\x16h*\xb6MQ\xfc\xba\x18\xcc\xb6\xa4\xfa\ry\x96_\xd4,"\x83\x10bvZ\xd9\x0e~\xaf\x82\x90\x04{\xad<\xd4UIc%S}6\x84\xe6Z\xeb\xd65\xb56l<$=\xba\xa5z[<\xbb\xa4y \x0c[\n\xbd0\x12Q\x96\x18\xe7\'\r_\xa9\xa1\x96\xe0\x8ac\x8e\xbb\xe5\x8d\xb1\xfc\xa3\xee\x19O\ni\xb3\x17\xffi\x19/o\xbe\x0b\xdfw\x13\xe8\xc5\xac\x1f\x89\x16\x0c\xd1y\x869\x81\x07\xcd\xa7|\xb3\xcb\xdf\x8f\x02O\xd4\x10VH\x9e\x8c\xda~\xc5\xc3\xde\xbd\xfb&\xe4{O\\b\xb3\x9a I\xd0\xa6\xa2\xf9\xab\x88\x0e9\xe6\xe7\xca\x1e\xaf\x1f>@\xf7\x1e\xab\x8d\xa7Q\xeb\xb3\xe93\xfc\xa1\xfe\x0f\xedm!\x11\x96D\xa5\xf0@`\x87\x97x\x03\x00\xcf6th\xeb\xb5)\x8cAq\xa6@\xc5=\xe2H\xa3\x9f\x7f\xf2\x02\xa1U\xad\xc1\x18e\xb4\xb5\xbeh\x1b\x93\x1e3i\xbd\xa68YJ\xd1\xc9\x8c\x80Z\xfa\xd6\x8fr&\x9c+\xf5e\xdfO\xe8\xf6\xe8\xea\x910ji\x1f\xd5n\xc5\x0f\xfb\xd9P\xe1H\x8a/U6\x18\x92\xd7*F*\xaa\xaf\xf3\x88\x0cB?\x14\x98\xc4\xeaU\x1f\x11\x06\xe5C\x10\x93\x0bqLBR\xba\x9bANT\x8a8\x15\x88o\xd4\xf1\xa4\xc0b\xe0\\\xf9M\xd8P\x06\x1e\xfc\xc9\xa0\x9b\xb5\x1c\xa8\xc9\x865\xe9\xc8Z\x9cNqH\xc0\xcd,\xc3S\x95\x97\x9a?^\xed\xb0\xd9\x04?\xbbvw\xe0\x9a\x8c\xf1\xbe):\xa5\xb4\xf9\xeal\x94\xedJ0# \xf2U\xc50P\xf0<\xf87ZX\xb4\xf2r\x90\xbd\xf4\xd7\xf4\x1e\x1e\xce\x8c\xa0\x1c\rch\x9a\xedQ\xb5\xe7\xd6\nP:\x07\xd9\x9eN\xbfG\xe8\xa0\x1c{\xd1;|:\xaa\x0e\\\xe7\xe3"/\x9e\x88\xad\xfdQ\x0eU\x13\xd1;\xbc[\xdd^\xb2\xed{O\x05\x06\xaf6T\xe4\x1f\xf9\xaa\xc3\xddI)\x15&J\xe1|\x18B=\xad\x97\xf2\t\x85J\xd3N\x1a\xfc\xb5\x06\x96\x95\n\x03\x19@`\xc8w\xa9\n-\x91t\rhm\xda\xab\x8f\xb6Uh !\x8cYh>\x14\x9bp\xda\xc5-f\xf7\x90\xe9n\x8b\xc9\xf7\xa4[\x81\x00@\x9c1\x91H\xb2\tS\xbf~8h\xc3d$\xcf\xe6)M\x89\xef\xd4\t\xcd\xc8\xa7\xd4hz\xc8\x18\xb1\x82\x1a]\xfdU\x88^\xef\xbf\x13\x92L(\xe8\x8b\n(\xee\x8d&\x011\\\xc8\xdf\xde\x8c\x7f\x11&Yd3\xa86\x11f\xf0Z\x96X\xab\xf0z\xa5w\x08\xfe?3A\x1b\x04\x7fO\r\xc0Y\xef\x18\xe7\xc1\xdc\xdb\x96\x07\xfeUj\xd68\xc1\xc0\xb7\xfd\xc3\xdf\x06\xe1\xaf\x17\x8an\xd4^XKq\xb0(\x96\x96\xc7\x97$-\xd2\rF+\x90\x1cCzsb\xab\x1e\x84w\x17\xd9\xdf\x0f\x17\xdb\x0e\xef\xa8\xda-Js\x89\xf1o$^\xf9\xd6\xa7\xd8\xa8b&g\xc9\xa6\xd1\x80\xc2\x16\xc5\xa9\'\xed\\\xd83r\x1afG\xbb\xc4\x89H\xe5/\xc68`\xc6\xe7\xf3>\xeb\xc5\xb2=@\xd3^(\x08\xab\xc4<\xb5\x10\xcc\x87F\\\xadP\xf7Wx\xca\xa2\x06I\x84\x9d\x17$\xd6\x8b\x98\x10\xc04\xa2\xdd\xabM~\x92\x0c\xfd\xdbO\xce\xd4\x90\x01\xfc"$\x07\xf9\xc4\xbc\xe4\x08q\xf1\xebA\x03e\x01\x83\xd6V\x0f\x95\x87\xff\x1b2 \xab\x15}\xa5\x90\xda\x1f{q\xab\n/\x9b\xd9\x02\xd6\xf9\xe0d;\x8c\x1dmT\xb6\xe7\t\xab\x8e~\n\x93\xec:\xb0\xa6 
\xe4T\x17\x03\xf6n\xd5\xae\xbf\xf3\x03\x08\xde\xc3\xbe\xe5\x9f6\xa2\xbb\xbb\x93;(\xce\xa6\xed`\xf4h\x05\xb8\xa5\xc8\x9f\xa9[(S\xc5\x06\xac\xd2Z\xcf\xb5\xf6\x1a\x93@\xfb\x9a`uG\xe15\x06b\xd5(\xceUb\xf3\x95\xe5\xa8\x18\x83\xfdK\xffr6-+Z?\xbe\xf0\nx\x17\xa2\x87\x11\xcf#An\x9d\xcd\xc2\xcb\x94\xc7\xd1D2\xcdO\t\x15,9\xc5Wuu\xff\xe75j\xd5\xf9\xf3+\r\x81\xaf\x03w+|\xc8I\xf74Di\xbas+{\xbcnd}\xed\x9d\x0fKr>\xf9\xed\xf1\n\x11\xee\xdf\x18-\xc3\xc6|*5\xe4\xa7\x1c\x0cz,\xfc?\xd4A\xcb\xdc\xaaz\xeadw\xe5\x8f5\xae\xb7o\xa8=\xdb\x02d\xf1"\xf6}.\x13w8\xfc\xe1Z\xb5H\x8a\xfe\x83\xe8\xd2\xb8\xd6K\x1c\xdc\x875h\xefB\xac\x83\x93\xd8:\x8f!J\x0c\x97e\xa3\x02\xc5V\xcc\xc5\xf6\r\xd6U=\xc4\x0f/g\x7fi\x9fpT\x80;W\x82,\xa4\xb6m\x9f?QM\x9f\x06\xf1@3\xc8\xebP\\CW\xfe\xdc\xb4\xfdb\rH\x13X&\xea>Pv\x19pM\xcb)\x11\xcd\x8c\xc8X\x95\x1f\xa5\xa3\x06\xcd@\n\x1be\xfbF\x18^\xa0\xe1\x055\x8a"\xf9s\x85\xe8bF0\xd5\x07q\x94\xcc%\xe4/D\x05\xd3\xe1\xb1\xb1\x95K\xed\x19V\x84\x17\xa4\xd8T\x8b\x02\xcc\t\xc3\xc90\x1b\xaeRu3Sw\xe0X~\xa6\xd6Ab\xe6\x8f\x83\tyH6\xa5gS\xbbL\xe3\xf6U\xfe)\xd5CP\x0f\x9a(\xba\xb8p_7\xc8\x80j}\xa3\xab0\xe1T\x0c\x12\x08\xb8Z\xd32R\xde\xf8\x11\xfb\xecT\x9fHN\x1fz\xd5V\xc2\xd9\xeb\xd2}\x8e5\xbaX\xc7"\x90\xd1.0\xc2\x89\x02.i$\xfc\xa1\x92l\x0c\xf7\xd6\xefvvU\xe4\x11\x8eD\x18Z\xfb\xfd@\x1dN^p\xff\xdf|wD\xef\xe1\xeaW\xde&\x87N\x9d\xab:\xf6\xcf)\xa7\xbanD\xaa\xf1\xf0\x95>9Q\xa5N\x94v\x829\xf0<\x84\r|sr\x9c\x0bj\x00S\xb8\xf0\xec\x8d\xda\x8f{\xef<\x0e\x0e\x1b\x0c\x96^\xa0\x81\xb5-\xc4\xb6+Y\x0b\xba\xc1\xe8j@\xa9q\xe0(8\x95Z\n\xe8\x88\xd0<5o^\xf9z\xd3\x15\x93\x96\xc4\x0f{\xc0\xfb03~\xf2[\xf2R\xe1\xc1\xb1&\xb2\x00\x05\xae\xc1\xbc\xaf"O\xbd\xc0;\\*\xe6\xc1\xc3\xea\x14\xb5r\xd7G\xcbl\xf3\xb0@=H\x1fn(\xfe\x0bUF\x89\x06\x03k\xca\x06\x031:PN\xb3fQzx\x0co\xd8{\xfc\r4kv\xaa\xa6\xacL%\xc8F\xa1\xddD\xe8\x82\xa07\xdbH>\xfbq6C\x8a\x17\x9a?\xcd\xa5\x8a\x03\xb9zS1!=\xaco2U\x19Z\x85\xfb~\x9c{\x13\xdejTQ!\xd5U63;\xca\xd8\t\xedc\xa5XUPW\xef`\xd0\x1b\x9b\xbf\xdd\x8ftZ!O\xf0\x99\xcf\xf3\xadP_\xb6*S+f]<\x01\x96)4\x92:4\xc1\xa3{\x94\xdc\x91\xe6,\x15%\t\xc6=\x9e\xb8\xe3\x05\x89\x94h\nc.:\xf6cb\xean\xa5\x99U\x15\xa8\x05$?\xf0\x89\xc5%\x9b\xccE=\x0eE\x02o1\xf9\xa0\xff\xa4\x8eE\x01\xa4\xdd\x87XW\xa1\x91\x9c\x9d\xd2\xd7\x063\x0cC\xafr\x96\x0e\xe6\x8a\x87\x10\x97\xb0\xae\x9d\xdf\x9a\xd6W\x97^4\x88\x17x|\xaej9\xa5\xd7\x0f%\x14\xc5qu\x88\x9d\xdd7oW~x\xca96\xf7;\xaf\xbc\x81A\xe1\xf8@\x921\xaa\x81"\x02\xcdz0\xcfI\x10\xf3\xa9\xe4\x97\x9b;\xbf\xe4\x0b\xc9\xfa\xf8\xa8v\x80t\xfb\xf0\xf5\xab}!\x0c\x01\xda\xc49[\xba\r}\r\xfei\xef\xde\xae\xa7\xfbM\x88\xc0L,\xf71D\xda1\xd4\xcb\xa1*\x19\x14c\xbdc4i\xe4\x8f\xbc\xa2\xd5\xee\x9d\xc5\x0f\xab*\xb4\x86J"\xd5G\x1d\xc47\xd22\x94\xc2\x94\x9f\x10\\\xbb\xe7k\x0f\xd6`]\x97\xfc\xf8\x12\x14\\*\xcd\x83+\xd3\xe9\x8f!\xed\xfa\x95HO \xd7\xb3S*\x1e\xd4uhyJQ\x89q\xfcY\xae\r\x14X\xe5~\x987\xb6;\xd6\xbc{\x92\n\xfcHtDHX\x98$t\xb01,\x93\xaf\x89\xd3m\x0e!8hWS0\x98\x1d\x9d\x0b\xc0\xbc;p\xb4\xc2\x97\xc3\xa7\xdc1\x01\x80\xe7\xa2\x03\x19:\xa5`\xef\xdd\x0f\x97*\x9e\x16\xb2Z\x18\xafB\xd8\xb2\x9a\xb9UK\x0f\x1a\x9bH&H\xf7\xf7\xbc\xaf\x10\xdd\xa0\n\xed\xd5\x8d\xbcI\x18/\xdd\x11!@\x1b\x83\x0c@\xad@\x98\xacI\xce&\x04o\x9apY\x8e\xc8\xd4\xebH\xb4-\x07\x1cs\x8a\xe2a=N\xbeC\xa53\xc6 
\xf0\x9d\xda!\xf8\xd0\xd0\xf0\x1a\xac\xf8\xc3\xa3\xb5\x0b\x00\xcb4\xd9D\x8evM7\xfdy\x07\x8c\xa2f\xc4\x0b_\xd0&o\x8b7+\xd5\x98VF\x8d\xbc\x99\xfc.T\x82\x12M{\xaa\xd3\x18"\xc2]\x0b\xc4\xb0R\x97.\r_\n\x899\xb6\xb06bA\xd7\xc5\x1d\xd1<\x18\x19\x06_A\xccq\x98-T\x01\x93fJM=\xb5\xc57\xfc<\xa0\x9e\xc1"\x828\x8e1f\xa1\xebzP\\\xe8\x02\x9fC\xa3\xfc\x8e\xba\x01e\xc1\x83\xbf\x9at\xc7\x94\xedA\xe2\xad\xe3\xb7JeL\xdcT\xb7\xb1\x01,\x03\xa5vr\x95@n\xe9\xd2Wb\x9cH>\xbd\xaa\xee\xd3\xa1\x9d\xba1\xd1\x97\xfc\x80\\\xeez(%\xf2;\xe4(\xdfWMN\x81G!u\x07\xa6\xb2\x0e\x8c\xba\xda\rO\x89;\th\x90\xe0\xb5[TR E\x87"}\xf0\x132\xc6\xd0b\xab[\xf4\x1b\xa6\x9f\xe9\x01_>\xc3\x12\x1ex\xf9\x93keB\x99\x85\xdd.\x97\x14e\xcd@\x8a\xf5\xe0_\x1a\xf5A)\xe1\xa5}%\x0b\xdd\x08\x1cw\'u\xa1Pyw\xa1\x14\xb8Kfz\xc7\xa8\xd2\xb2@\xe2`)\xd8\x1e\x14B\xd6\xbc\x0b\x9e\xbc\x1c(C\xd7\xf5\xfdS)J\xc5\x87\xef\xbf\x98kb\x12\x96\x8e\x0b$bD\x7f+\x91\x82dlS4)d\xc2\x0e\xdaI5\'\x0c\xcej\xccl#%=v\x91\x87\xe7\xaa\x11a\x8bsrn\xe0\xae\xa6\xd4\x11[\xfd\xc7 5\xb5\x9f*\xa7i\xc1Ew2\xae\xed\xe1\x07\xf2b\xd6.[\x06\xaa\xc6\x1f\x8c\x87\x92\x1b\xc4T\xa3\xa2}\xe4\x87aif\xab\xd4\xffx@U\x9c!\x8d4=P\xd9\x83\xc2\x9a\xc9\xcf\x8d\x01a\xadF\xdaJ\xf3\x04~:\xddD\x96\t\xcd7]\x83\x15\xf9\xf8J\xfa\x1d\xff\x1a\xc5\xc00#3-D7l+\x92X\x19x\xf5W\x15\xa7:Q\x87\xcaq?\xd0qV\xa3\xa48\xcccR\xac\xf7@\x88K\xa1^\x01IlV\xf1\xa0o\xba\xcd\xd2\xaf\xe8X\r\xe5\x08\x89\x12\xb7\x03\xd2^\x13@\x18\xb1\x9f\xec\x9fz\xc1\x00\\\xce\xf1:\xc0\xedl\xeb\x9bJ\xe8\x11\xa5\xa4[\x0e\x8cu \xfe3\xab\xaa\x1d\xc1\xecd0]\xb7\xa8WQ\xa1a\x8b&\xbb\xe6v\x10\x0c\x95\xb3jq\xf2*j\xd0\xa4\x0e\xd2;\xe8\xbd\xec(\x11\xfaV\x05\x99\xe6Z%X\xbd\xba@\xbd\x9f\x1d/\xd8l\x9d\xb2`X\xca\\\xcd\x1b\xd4e\xe5\xfa\xe5/\x95\xea\x17\x92.u\xec\xef\xa6d\xc1\xb9D\x0e\x99\x1f#\x0f\x8f\x08\xd4\xd4\xc5o\xb2\x0b\x8e\x80\x9b\xc5!L\x18Fc\xb7ir,\x913\x17@\xb3\xd7\xe7\x91|\x96N\xee2d\x82]*\xdb\xe0\xe83\x8an\xd5_\x15h\x87\xf2!\xf1-\xa0\xc5|q\xf4\xe1\x9cRRz:\xdf\xaa@E\xd5\xe0\xd5SaO\x9a\xe2\xd7\xd1/$\xef\xcd\x84\x05\xa8\xbcpno3\x82<3\xcc\x82/bt\xba\xa1\xde!\xef\x1c-6\xd1\xb1\x0eF\xf6\x8e\xf06[\xf9A c\x04\xdd\xca"7\x19\xaa\xb2fC\x8fUv\x91V\x15\xf5\xef\x92w\x7f\xef\x1eC\x15\xcf\xc5\xa8]\xd3\xc3k\xdes)1\x8eP\xd3\x7f*M\x9f4\x91#4\x807\xac\' \xafD\xd1\xd6Y\xc1\x8c\xc3\xf8^\x80\x92\x1b\xac\xef\xaa\xd9\xa2|\xaaN\x7f\x88\xe40\xc1\x01\x89\xb6\x80,\x96\xe9\xc5\xe9\x80F\x83\xca\x16\xd1\xe5f\x00\xaa:\x98\xf4\x02)ZS\xa0&\xeb\xd4\xb2\xe7\x8a\xaf\xef\x1e\xfc\xfc\xac\x1aO\xadc\x13cTd\xf5\x07\x03`\x8a\xb5\x83CYF\xb4\xb2\xb3\xbe\x94\xae\xeb\x14\t\\\xf0\xbaS\xac)e\x1d\xf8%`\xdb\xf4?\xf7\x10R \xd1\xfb\xbb\xdf\xfe\x0es\xea\xd1\xfd\xd3\x83?\xb2\xa9J\xae\xda\x18b\xf8\x86\x81f\xc2\xc2(e\x92&~ \xec\xd1`\x0f\xdbDu,\xfc\nUJ\x9fxR\xe1WQ\xdc\xf4\xba@\xb3"\xd9\xd1\x15\xa8K\x80#\x82\x14\x0f\x91\x05\xda\x1dI\xb8\xc2\xab[w*\n\xb3\x05\' !\x8e\xf09I\xfd\xaa6\x05\xf1GU!Z\xc52\x92Q\xe5\xf0\x14M0d\xb4O\x96`k\x17\xd2\xe4\x1f\x1fFG\xc2\xc5T\xe5o\xdb\xebo\xaaCv\x13\xb0\xf5\x85\xc7D4\x1d\x98\x858(0R3\x00\xd7\x86\x91\xcf\xb5\nI!\x88\x12\xad\xa4\xa0LU\x82\xf5M\x9cS\xb6#T@`2\xca\x1e"\xe0&N\x8c:.\x15\xe1\xd4lg\xf5\xb3v\xd0\x1c@\xeeDx\xe7|\xf6\x0c"\xb4\xc0\xacL\xa7k\xf6\x08\xf5\x99Nu\xc7Vc\xcc\xbc>\x81\x7f\xfa\x15\xa4\xb7}\xda\xd2\xb4/\xa4\xac\xc1.\xf5#O\xfa\xfc\xa84\x94\x9a\xac\xe4\xa4\x97\x03\xc9\xa4P 
\x08\x89\x192\xeeu}d\x8f\x9ee/\xb6\xa5`8I\xe5n\xe5\xb0\x0b1yY@&\x14\x1f.MI\xff\x0cV\xbcD\xffM|w\x19\x01W\xb4z&\x8e\x9a\xcd\xa4\x04\x81\xfd\x9f\xe4K"\xfc\xdb\x8aR\xbd5u%\xc6S\xa7\x9f\xe5\xfc\xd8\xb5\xf6\x1e\xba<\x95\xb45\xbdd\x7fU\xe5k\xbd\xdeO)\x0eq\x107\xe2\xae\xe6#\x9eWu!g\x89\xb6\x07\xd6`A\x0e\xaf0h%\x88\xb5\')\x9fG\xd0\x8b\xcc\x97\x94\xe3\x14\xe8\x9b\xa7F\xaf\x91C\x96\xaa\xfaUV\xdbP@B=\x99\xe6\xb7}\xc1\x12\x99\xe2\xffJIO\x97\xfb\xa1JPM\xec$\xe6K\xfd\x1d \xee\x81R."aT\xceq\x0fw\x8f\xb7\xee \x0e\x03)5$\xc3*T\x9f\x83\xb5P9/o,\x9a\xb6o\x1e\xc9D0\x15\x04\xd4\xc5\x83\xde\xda@\x92_\xec\xbc\x89)Q\xf9\x91\xbf&\x85\xcbZs\x80\xbd\xa6n#)\x8aF\xe5\xf0\xda\xc0\xd4\x9d\xb7\xd2\xe5\xd3x\xf6Z\xfa\xbf\x94zFV\x1c$\x82\xd5\x19\xc9Y\xe5\xe6\x9b\x94\xc4X\x86\x9b`\xe8\xdc\xca+t\xf5\xd5\xe2}]\xb5\xc7\xdcYa\x1bU$MX\xfc\x81\xf2\xcc\xd6\x96\xa9\x88\xfd\xd4\xd2\x1b<2\xc4\x94\xf1\xdc\xec\ngV\xb2Sd\xe6\xa7:\xcf\x1ap\x8c9\xe9\xad\xf3\xb1)K\xbd\x15\x1e3\x19\x8a\xad\x04\x198\xfa\xe5\xa1\xdaS4\xd5\xa8a_\x92{\xdf;\x1e\x99Eoc\xfe\x04(p\xab\xd9\xd5\x81&_\x16\xe9\x16\x1bC\x11\x1cFw\xb15\xc3+\xac\xb7\x1aS[4\x975\x1a\x91\xc8\xa1\x1c\x94Y\x06\xb2\xd9\xa8e\x9a\xad`\x90\x15s\xda\xba<\x89Q\xb3"\r\x1a\x16\x12\xab\x88\x88\x9aS\xb9\x94\xe1I\xba{\xb6N\x99\xf7\x9f\xfd\xb3t\xe8I\xe9\x7f\xb4\xff\x125mD\x90\x8dexJZ\x84\xd72b^\xa3\xae\xba \xf7\x8d\xca\xb7\x1eJ\xd3<\x18\xe7\xe9\x88\xf5`\xceA\xc3\x18\xf8\x83\x9a\xc5\xa3\x01\x0b\x83==\x14\xa4B\xa0\xcf\xd2e7\xaf$\x80\xc3\x19\x02\x02\x1c\xb3g\xe8kjn\xcd/\x90=\n\xc3\x06a\'&(j(d\xb4\xd8\xfeF^\x98N\xfa\x80\x16\xf7\xc4O\xfa\xf0\x92\xed\x00!5\xa6:\x91_M+xA\x92\x1b6\xfd\x98\x02\xd2\xdb`o4\x0fY\xe1\n~\xdf\x12\x80\xab\x04\x18\xb0u\x06\x08U{\x0bVt\xbc\x89\xbe\xeeLZ\xe2]q\x85|\xa3\x9aG\xa92Z\xfa\xfc]e\x80JE#\xd8|V\x8b\x1amL\x97\xbc\x1a{\xc1\x19\xdcb\x9f\x85\x92\xe6\xed\xfeM\n_:\xb0\xa8\xd3\xa1\x0c\xea\x14}\x85\x91fyU)9tW\xca\x15\x0bM\xe6!m\x0c\x9b\xb3@\xb7f\xd7\x0c\xa4\xac\x8brYQ\x1f\xb6O\xb6\xf4L\x8d\x1b\xf0\x94\x85\x86\xb6\x85*\xd4Q\x8d\x0e#d\xc8\xf1\xc8\xe6X\x054\xd1\\\x92\x08\xd6\xfe3\xda\xae\np\xee\xbay\r\x9d.\xdc\x82\xb0\x8f\\Ao\x1b\xffU\x06\x8e\x13\xda\xc0\x08\x96\x10#\xe8\xe2\xd5\x8d\x01\xc3\xb7\x14\x89\xba\x06\xad\xb4$o4\x14LM\x0e\xe5\xc6@o\xa3\xf43\xc4C\xf6\x12\x9e\xf89DDa7-]geI*;\x18CDl\xf2T\xb5B\x9e^\x0e\xb7@U:\x12;\x86\xfc?e,m\x1e]\x0f\xb0\xe6\x02\xb2\xc4\xae\xdc\xfcl\x8bW\xa4?\xd5\x08C`v\xe6\xeeZ\xca&\x02\xf5\xf37\x80\xf6\x8e\xb5`Rn\x0fQ\xfa&;9\x9a\xc2\tnN\xfd\xc8\xaa\xe7\x8bR\xb9Dq\xa1\xd4\th\xc5\xe3\xf7\x82\xd9\xa5\x92\xfa\xfe~\xfe\x92O\xedC\xb7\xaa\xa81E\xc6\x9f\xc8.V{/\xe7\xe7\xaa=\xa0FGv\x03\xd4\xab\x01g\'l\xa4\xf1\xd1\xb4\x98\xf5\xda\xa5\x07T\x0bd$\x96\xb3\xe1\xfb\x91\x84\xf2B?\x19\xce%\x93\x97B5s\x96Z\xaa\x9dA$[f/K\xe9\x00s\x9e\x1c*\xc6\x08\xc2+\xc9\xb9\xd7\xa2jt\xd9\x7f\xe6\xcd\x90\xedNV\xc0_A;\xba\xc5\x803\xca\xd5\x91W\x06\x9a\xd4\xd1\x06\x97{jS\xf2\x03!\xf9\'\x18Q\x81\x88\x8d\xc0w\x1f\x9c\xa9\xc2\x9d\xacs\xf6\xdd\x8c\xfa)2\xf2\xf1\xc4t)J\x959\x8b\xec\xa7Rvu\xe6\x84\xe9`hi\xdf\x1a\x1cK\xa3\xa4k\x9f\xaa\x8b\x9bX77Q\x97j&jG\x8e\x0e\x8b\\\xcfg/\x16\xd5?\xc6\x10\x05\x82\xa2\xd5\xb4\xe8 
u\xa1\x81\xd2\x95\xbb\x91\xc1\xca\x15\xfe2,9\xff\xa9\xf05W\xb2\xae\xaaT\xadi\xaf\x9d\xba\x88\xa8>\x03\xae\x17G"\x06h{&\x1b\xb4\xdc\x1cFY\x0f\xc2\xb3]\xf3\x94\xc3\xc8\xd5\x19\xe2\x88\xd4\xaf\xbc\xb39\xc2lV\x12\x95\xd3O\x1bk\xbdp^=\xffqZ^\xd4\x8f\xbe\x0b`\x0b\x99\x94\xb8\x0f\x92\x8a\xa5\xdc\x03\x1b\xf0\xaa\xf5\xc3u\xc5\xf2r4\xaez%\xd7M\x00o\x8a\x01\xca\x8b\x9fO93\x01\x85\xe9L\xf3,[\xc0am\xf3rA(t\xb6y\xe2\xa9\xe9R\xf2K,\x817u\xd8D8\xaeT\xad\x1b\xd57\xda|\xf7\xc1\xd2y\xf8*\xd4\x94\xe2T\x04\x83\xf0\xf3st\xb2\x94\x87^B\x88\x93\x81w\xbd>\x0f\\\xd7C\xcd\x8e\x00i\xb3\x94\xae\x16\xb3\xee\x0c2\x07\xd0|-\xd1\x8d\xe62F2\xc8\x82\xb3\x9fO\x94\xc6s\xb4^,\xa9@J\x9b@\xa0\x86U\xe1L\xdc.\x99\x0c\xe2\xf1\xb5\x0f1\x07\xea\x8d$\xa5\xe6\xd7\xd3?\x07\x93\xbe\xfd\xf4LZ?M\x9f\x82A&\xdf\x0f\xa3@\xac\xf5\xef\xaf\xde\xa8ash\xb8\'a\xc7\x92\x83c1\xd6\xd7\x85\x13\xca\xbb\xd2\x8bG\x9a\xa0&\xa1\xec\x00$\x04\x93\xaf\xe4\\\x16\x9b\xe7G\xb1\xd4\xf4\x85\xc0\x14\xa7f\xb1@)\xb0\xf3M\x1e\xad\x9eK\xc9\xf1f\x0bj\xfb*\xa4\xb5\xef\x04\x12\x15TH:W\xb3Zy\xd2\xeb\xa9\x95\x8fb}f\xfe\x1e\xa9\x10\xc7\x11p\x80\x9da\xf3(\x0e\x18\xefj?QeQ\xe3O\xdbB|\xb6&\xfc\xd2;\x04\xfb\xa1\xbcQ0 6]\x07\x12>~j\x92\xbf\x1dPM\xe7;\x8a\xd1=\xd3\x8f\xddw\xd6\xbc\xaa\xcd\xb2\xef\x8d9\x97\xe0\x93\x03\x90\x93\x91\xe1\xb2\x83\xda\x00\xbe\xec\x88\xe0\x08m*\xa1I\xe8\x85r\xd9*\x8b\x0eG>g\xd5%]\xf4\x165`\x8e\xa3\xf2\xbcK\xf6\xf0l\xa9\x83\xa6U_j#\xdd\xd1N\xd5a\xb6\xf6v"Qa~[\x13~\xba\xd5T\xd7Xdc\x1f/\x15dE\x9d\xfej4\t\xdcK\x8a\xb2\xdb\x9dxl\xb322\xa5\x08\x9fm\x12\x06!\x1f\x8bq\xcf2)\x85\x02\xa7\xa0\xb3W\x9e\x8c\xaa\x11\xa8U\x9c)\x1a>\xc1\x1cw)\xc6\x81\x13\xd9\xd7U?\xe0\x0f\xd8\xd6\x97BI\x92\x9e\xa8\xf0\xe2\x9d{\x83v\xf4\xd2\x01\xa7.\xa4(\xd3\x88\xc4L\t^D#Nip%\x89\xa4\x02\xf3\xd2+\xe5\xc5\xcc\xea@^w\xa9g\x0f\xd4\x92\x0bJ\x85]iVP\x95\x94p\xdb\xee\xf2#&\x08Q&F\x9de\xa5T\x13\xc86\x8c\xd1\xda\x90K\xaa\x8b\x93h\x91;\xe6\x8f\'t\xcf*\r\x94\x18\xc0\xed\xc5\xca\xf0\xcd\x10\x82G\xce\x81cD\xa6k\xb1$\x98\xaf\x8cT\xff1E\x84\xe5\xde\xe9\xa3\x9fJ\tJ\t\x82u\x03\xae\x133\xd5\xee\xf8\xdd/\xc5e\xb9\xef\xf0\x8a{\xa0#\xfaq\xb7\x95\t\x17p@\xfa\xd6\x16J\xe6\x18\x99\xd0\xd4\x0b\x08\x18\x8b[\x11.Rmk\x8a\xd2\x04\xbe9\x15\xea2O\xd4\xd4\xce\xd5\x9bV4\x9e\xbe\xd1\x8cPW\x9cN\xc3\x82\x15\x18-\x94M\xacC\xa7\xa0Y\xbe3\xa0R\xb2\xe9\xa7|\xe7G2|\xf0\x83\x07\x10\t\x00\x18\x0c\x03?\x8cE\x11w\xd1\\\xc1y6\xec\x16Mu8Z\xe2\x8d\x18\xcd\x90,)\xbb2\x8dY\x91\x90\xb3\xde=$j\xa0\xa8\xf2s\xa2\xf9\x0f\x82\xc8\xaa\x01BXK_\xd2Y\xff]\xc1\x91\x12\xa2\x1eQr\xd9\xcf\x05\x8f\xbd\x9c\xb2\x1f%t\xd5h,\nY-\xf4\xa5\xbf\xa3\x17\x1a,\xed\xfb\xd0;?\xf9\xa0\xf3Y\x11\x16\xb1Co\x10\x13%\xe0\xf4l\r\x8f\xa5\x11\xe2 jS\'\xaf\x91\x18\xc4G|B\xfaD\xaf\xea]v\x1b<\x01\xd9\xb1\xd2\x8e|k\xf7\x8fF\x86\xc0S\xb4\xa9\x04\xb5:\xc46\xa5\xef\xfe\xa0 
<\xa7\x94,m\xe1&\xd9\xcc\x0bl\x11:\xbb\xec\x86a\x10\xba\xa2\xe9\xb0M8@\xb8\x89b\xb5gO0t%]\'\xe8\x05L\x9bb\x9c\x05\xc0\xed\xf8\xdb\xa6\x88\x87\x8aFX\n\xd6h{\x9b\xa7y\xff\x8eW\xaf[}r\xfdG\xc7L,\xbe\xa2\xf2P\x1dk\xcdo\xe0\xdc\x00\x88\x95\xd1\xea\xfc\xa4rZ\x9dF\x92R\xbb\x8f\x93\xef\x15\x18x\xdf\x06\xd49\xef\xd8\x924\xf9\x8e\\\x0cl\x1c\r|)\xb2\x81\xc0\xb1\x83\x12"\x0b&($\xb5\xf1\x9d\xdd\xdb\x9dNX*\x07p\x96\xea\xffk\x14\xf4\xf9aU\xec\xa7\xc8\x8fQ\xfdH\x8ftWNL\x0b\x85\x99\xb5I\x83]ht\x8a\x1fG\xaaI\xdc\xd4\xbb(@H2\xf9\xec\x05/\x14Q\xb1\xd2\xf5\x06;\x85\x85U\x91k\xc3dy\xf8V\xf8\x06\xd0v\xb6\x806\x1b\x83\x89\x0f\x98\x02D.\xf3/Qz\xfc\xfc\x96\\L\x99V\xc2\x91D\xe1\xa0\x1e\xd3A\xfe\xbb\xd6T\x0e6\x8a\xa0\xb1\xa4\x88\x86\x85\x9ebs\x06\x7f\xe6V\x06\xb7\x16\x14H:\x10\x9e\xd4\x12x\t\x12\xe69\x1c$z2\xf3A\xdcT\x8c1/i\x8b@\xb9()\xda\x7fi+\xdf@\xcb\x984K\xcd3\xe11\xc2#)\xcf\x11X&\xcd#\xc4\x8a]3\xfd\t|\x92Hs{\x0b\xe5\xf2\xacH\x87\x07\xdd@"\xaf&z\x9a|\x8c\xffH\x1b\x92p:\x87{\xf7\x97\xa5\'\xa5\x006?n`WlH\xb1\x97\t\xc1\x8d\xddI\x81\xc2`\x80j/\xef\xc9\xdd\xfd*\x16~{K\x93y\x7f\x8cm\x7f\xb7\xc2\xe4\xd5\xf9\xdeB\xea\xac!\x85\xaa\xeb\xf5N\x86\x04u\x93\xc9\xcb\xd9\xe8\x7f\x14\x90w\xea\xbbbz)q\xbcJ(j@\xa72~\x93\xa8\x1a\\w;\xd2\xd1H}\x8f;\xa6\x89\xd4\xd1\xd6\xae\x1c\x1e\x052rIz\x16R\xbd-!\xda\xdc\x88D\xccv\xa9\xf6\xc3\xa8\x1f\xf4K\x11\x8ak\xb1|\x02u\xc4\x92\xc4Fa\xbcZ\n\xfdl\x97c\xa5\xbe\xf9@\xb5\x9f\xb9o\xab\x0b\x83\xa9\xaddx \xe4\'\x0b\xb0\xa1\xec9\x05Q\xa5\xc8\xd0yB\x94d\x8bm\xf0\xa34\xe6\x1a\xb4\x97\x1d\xca\xe6g\xd3v\xdam\xde\x18\xdc\xcb\x86\xd7+<^\xd14\x81\x10"\xdb.\xce\xc4\x9a\xae\xb7\xfe\xb2\\\x16[\x1eZ\xe5\x9a\x80\xc7\x89\x07\xecmz\xf0a\x8a\x19#U\x80\xee(\xd8\xcd8~n\xe6\xa1\xca\x96^\x10\x08W\xd9\x0fYOo\x90AT!\x84\x85|\xceQB\x97\x96\x1c\xbd\xe0\x99D/\x12\x0e\xa9\x82d\xbfs\xe8\x83oF\x16\x0f\'?\xf8\xa9\xf3\x14(9\xdd\xda;\xdfTSN9\xf4U$9NQi\xbd\xc2\x13\xe2\xe4\x06\x9c\xa2Bn\x89\xc8z\xa5\xa7\xe0e\x9b\xcb\x81H\x0f\xabU\xa5\xa8\x86\x898\xc5\xb5A\x0bN\xbfW\x08\xb9\xa5~=\xf8\xd2\x04A\xc3\xd8\xf3\xe8\xf1H~\x9f\xcd(\x81i<\x13\xad\xfc`>\xac*?\x0cKU%\xacw"u\xdc\x15~\xbe\xf9\xa8*0\xbe\xaa\\\xa9o\x86\xe3\xa8A\xe7D\x0f1\x1f\x1cUb\x1b\xf9\r\xaa\x11\x04\x9a\xd5\xc3\xacBrA]=\x18\xabns-EUi\xbd\x10\'\xc3\xadU\xb1gI\xa6\xf22`\xc5\xb80m\xa3\xc9/\x089\xf4\xed-\x81`B\xb5\x05\xf0\xb8U\xa7\xfbLZ\xa1\xa3\x9a\x1e\xec\x88C\xe1\x86\x85\xa32\x85;z\xf6u\xd0H\xc5\xa6\x9c`\xe6\xc7\xf9\xe2\x03)\xab@g\xa0\x93\xc2\x14\x87\x031\xe6\xc6\x19\xf5\xc3\xb2\x1c\x9b\x1d\x9b\'\x1fU\x8fm\xfcP\xb7a\xee\xf0\x89\x01\xd1\xbb\xd5\x14)\x87QFT\xd4M\xd3S\x14\x7f\xc3q\xec\xe8\xe6\xbb\x7f\xdc$imKM\xd7\x03P\x1c\xc7\xde@\xe7\xbel\xc6\xd0l\x07\xc5\xba\xceS,\xc5F\r\xdd`\x88\x92\x1f\xd1\x1bt+\xe40\xa2\x96\xd98\tR\xb5\x9ai\xa7y5aR\xf8m:\x05A&\xf7\xf6\xb5\x9f\xf9p\xa3\rQ\x9d\xa0\x9aZ\xaajA\xe4-\xdez\xfc~0\xe1\xbb\xf1d\xc6L\x00\xb2\xaa\xe2\x81\x06\xa0R\xf3\x07\xb7\xcf\xb2T\xe5\xcbn\xc2\xa8\xb8\x97\x8f\xbd\xa2\xe0\x1e:\xbd\xdc\xe5;\x7f\xca\xc5FhT\x9c&\xb3\xa1\x1bQ=;\xd6\x0c\xb4R\xf5\xd3\xa0\xf0\x93\x88q\xe4b?\x19\x00\xfa)A\xf3\xd9\x16\xab\xf2*\x94\xb18\x11&%0\xe7[\x92\xa5y\x95\x0e-K\xa6\xdc\'\xc7%\x84c\x18\xebgo\xd6}E\x93s{\xbc&wF\xa6\x11\xa7\xa1\xdd\xee]\xfd\x07\t\xab\xca\x8d\xb1/\x9a\x8d\x1c\r0\xfdR\x9c\xd4J\xedle%\x959\x16\xf9\xe23{\xdbP\xf3\x04\x9dW(B\xd9\xc1\xc7\xfa\x9d\x1e%G/\xfa\x89\x00\x9d{B\xb0\x88\x98\xf5_=\x0f\x8c\x05me\xea\xcc\x9b\xcciPy\xf5\xe1\\\xd4\xf3\x8e\x13\x13\x83\xdf"\n\xa9\xcd\xb7\x8b\x96\x84\x91\xa8\x86K\x9fZ\xadmI\xed\x84Q\xf16vru\xf2\xe3\x91\x9cM*\x15^I\xaa\x9b\xde\xf7]j\xec\xc0&c\xdc\xb3\xb5\xa3c\x0b\xed\x01\xf6@\x95;\xe8\x0e\x07\xb3\x1a\xc8\x95\xe0\x18\xe2\x
8b\xcd\xc1\xd8\xc8@b\x1bA\'\xf5\xbf\xc6\xb3q\x12\xf1\xf2\\\xb6>\xe8\x03\xf3\xb8r\xcdAv<R\x94T~Tz\xb60y\xa8?\xfd<\x1d?\xf9x\xd3B\xbb\xca\xd4\xfc`\x92B\x13?\xf97^t\xb4R\x91!\xf6\xc9\x0fGw\\\x0f\x06\x80Q\xca\x8c#!\xff\xb7\xab+\xed\x8a\xda\xf9\x9a\xef\x9fO\x81( \n\x98\x9ed&\t"\x88\x02\x82\x0b\xb8\xf1\x03tX\xba\xd3\t\x82l\x02"\x02\xf2\xd9\xffTu\xb5\xf1</<G\x10\x87\x99\xa4\xd3}o\xddZ\x8a`\\\xd5r\xfa\x13\xe6\x8dc\r\xd8\xcf\x82E\xa5\xc4p\xe6\xe0Y\x1e\x7f\x08{\xe2`\x1cRH)\x01\xe2r\x8c-7\xc5\xcdat\xd9\x80}\\\xf7X\x05\xb8<}\xc0\x14\xe8~\x7f\'\x05Q\xcdZ\x8a\xcea\xbb\'\x8aw\xd7c\xeac\x02jG\x16q<\x85~\xd1\x02$\x0b$\x8b\xd37O\xf7\xde\xb9\xe5\x17\'\xaa\xe0 \x99,s\xb1x]\xfe\xee\xecP\x95\xbd\xac\x82\xc8C\xe1-0\x8f\xe5\xe9\xd7\x13\xd1\x07\xa4?#\x91\x07\x9b\x12%!\x16\x8a\x02\xb0\xc9\xcc\xe5\xe3Sz\x13\x86\x82id\x8b\xb9\xcb\xf1\x98\x10\x02\x9c\xcbj\x90|\x9fg\xc07;p\x18\x85cY\xd8\xef\x07\xc1Q\x00\xd2\xc6\xc3\x01`\x19\xb23%J\x0c\xb5\x8a\xab\xcbe\t\xb5h\xb4H\xdf\x80AQ4\xb8\x7f\xd1\x1b}\xfc\x17\x98\xf4\x89<\xb9\xc8\xd4f\xb6\xc9\x8b\x9f$\xc9Bz\x90\xdfb\xfd\t\xed\x8d\x16M\x9ch\x1a\xf2\x11\xf5n#\x1e@&vO\x1b\xb2B_\x8a\xde\xd0\xae\x1e1@\xcc\x05\xa3S\xb3W\xe1\x0eS$\xaa\xd29I\x14\xf2UW\xdd\xe71\x96H\xbe\xd6\x0c\x9d)\xa4M\xe3*\x84\x88\x11M\x07=9\xd2\x07;rM\x17RU\x9av\xc7\xae\xc0$\xa0\xa7\x084\xb0\xc1]\x1c\xba\xd5\xc6\x7f\n%))&\xd5\x86\xb2\x1b\xd90\xa9\xa4\x0b\\nR\x06\x96\xc53)fvu\xb1\xc1,\xa1G1u\xf3\xeb\xdf.\xf4\x90`\xd8\x83;r\xb7\xd5\x1c\xad\xca\xcf\xa3ly\xce!\x12\xa5x\xb0\xc5[\x08\xb5a\xcd\xf47\x89\xd3\n!\x9eu\xfeG\x01^\xf6\xb4h\th\x15\xc5\xbcr\xad\x0c\xa3\xe8\x0b\x8d\xff\x18c\xd4[\xfd\xa0\xe1w$\xc22\xf7\xcb\xc4DN \xb4\x08\xbcK\x94\xbdE81\x1d\xfbM\xb8\x01\xde+\xcd|\xe4\xeb%m\x14xUkL\x0e\xd7q\xa7Yi!At\xa1\x1d)\x84\xd9\xbd\x10^\x14Mm\xa3\xb2\x8a\xcf\xe2\xaf[Il\t|\x89\xeb\xc4\x17\xef\xfd@\xc8\x06\x0b\xedtT\xa3\xa0\xe4\xdd\x8d\x18\xcd\t|\x97\x9c8]\x94|0Rjp\xbb\xcd\xe1\xa3\x88\x8e\x89\x88:e\x0by\xb2[tmU\x17Bf\xab\xcb\xcfI\x96\xd0\xe4\xa2\xc6D\xa1\xba\xa7\xba\x0c\x1b6\xf7y\xde}\xf0\x18\x93\x1f\x87*\x96\xf2~_\xa3W2nM`\\\x8eH\x01\x99\xcb\xa7\x15\xe7_g\x1f\x1fmAxe\xb3\xf4T\xfcu&\x8e\xba\xfe\xa9\xceI\xfe\x92\xec6z>\x90\x90f\xe3\xa2\x8d\xd4X\x99\x04\xcbb\x81\x04\x18\xe9\xe9\xefn\xef)\xad\xe5[\xa37\xc2\x86!_\x83\x11\x12\xbba\xb8(\x9e(\x1f\x15\x1f>{\xddQ"\xb8M&Z\xce\x1f\xc5\x0e\xd5\xb3\xa7\xfd\xfe\xe2C}\xc2J^VE\xe7\x0b\xb3!z\xc3\xbf\x04\xfc)\xaf\xc4\x8b\xfa\x96d\xd1\xa0\x98x\x86<\xe2\xabry\x95\x16\x05\x1d\x9a\x84x\xb9P\xd0s\xe8\xbf\x8b{0\xf53R\x95\xd2\xc0\xad\xba\x99\xfe(\xcc\x1d\x8eB\x8d\x1b[\xbb\xfe#\xfaw>\xbe\x85a\x19\x94\xe2F\tT^\xf4\x06\xe7\xf7$\xd4\xf61\x95\xd4\xac\x1f\xa8\x0e/3T*\x01\xb3Y]|\x10\x9a\xa0Sxp\xa1\xd5\xaf4dl\x92\xad\xd3\x93\x07\x9a.\xf7\xbe\xcc\x1dkH\xd8\x91\x85\x9d\x11c\xa76G\xb7_\xae\xf0\xaf\xc8\xe9\xf6\x82QC8\xaeT/\x8c}\xe9}\xfb\x8f\xbb\xe2\xd1\xa0L\x91\xc1l\xf6\n6\xaa\x01(V\t]I\xce4T\xc1\xd9\x92\xb8\xdbO\xf4\xf7\xdb]\x94w%g\xf1\x9f4<\xa0"$H\x9co_\xe8F\xdaq\x9cRV\x07{|JLq\xf6\xe8\xd7J\xd8\xefyze&<\xae\xb5("\xaa\x02\x02\x02\xc5\xd0\xa1g\x1a\x15\x95\xe6\xe3\x1b\xfe\x1e\x0eV6T\xa4\xd2/\x05\xf6,\xf5\xeb*l\xda\x91\xb1a"eXz\x9f\xba\x9a\xf9\xa0QV\xcc{\xa6\xb5\xdb\x92\x06\xab\xf2\x82\xae\xd4\xfc\x91OE\xb4\x03\x9f\x90\xb5]\x8f\xae\xb5\xdf\xa6\xd5T+\x80\xc4\xc3\xb1\xb3(\xce~\xd2\x98t\x82\xc1\xa9\x82{H\xab8\x96\xe4S\x8e\x1f\x04\xdbI\x1e\x83\xe5\x0b\xebH\xb7\x10f`\x96v\xec\xcf\x86\x7f\xe8R8\x95\x91yk\xaeP\xa3\xado\xa4HH\xaa\xd6E-Q|\x1d0\x00\x8cC,"\xebH\x9e\xc2\x0bC[\xd1\xc4t\xd1\x08l\x8b,e\x05\xd7U"\xda\x92}\x92\xb4\xef\x88I\xc1\xf5\xbc\x9aH\x16\xab3\xdd\xebW\xa1c\xa9\xb3\xcf\x0f7@e\x87\xef\xa5\x91\x83\x8c\xcfc-\x06\xe8.\xff
8\xafZ\xce\x81\x9cj\x94\xd0\x10\x1d\x99\x13Y"\xd6\xfe\xd3w\xd9\x02\xb1\x82uG\xa7\x9a D\x92\xb1c\xd6\xe4? i\xadH=\xab\xed\xb7\xd4]o\xa2\xf1c\xbep_\xf5\x15ID\xc8\xe7#\xf8\xd9C\xebiyr\xfc\xa7\xe5G\xa9\xcd\xc5\xef6%\xaf\x01*F\xbf\xbeZy\x8c1E<\x8d\xf6\xa8\x89h\xf5\xa5\x02R\xca \x17y+\x1a0]s\x96\xc5\x08\xf3\xc9\xfa\xa4\xb2\xaf\x93\x18\xd5:^,\xad\x89wZ\xbf\xde<\x14o-\xe3\xd8\xeb\xfb\x97\xf2S(K\\\x00\x93F\xa4V\xafZ&i\x99\xc7\xfb\xbfBe\xa8\xc6\xeeM \xfb\x80\xa8\x90|\x7f\xad}\x80\xf5\xef\xcaO\xd9\x87\x14\rBs\xb2\xc5+1\xc82!#T4\xc2_\xfcn\x7f\xdc\xd59h\xb1CF~)k\xcbN\xa7\x82\xddc*\tV#\xcepU\xb7.\x01\x0e\xcc\xc3Z\xb8\n\x8f`\x1c(5\x84\xbbNa\x15l\xa7\xbbz"\x0bQ\xfc\x95V\xd2\xf8\xd78\xdf\x9c\x849\xa4\x81e\xd8p]o\xff\x9d^\x90\xec\x95\xcfO\xbf\xc0(>G\xe1U\xd1\xa5^\xe6:\x04\xba!\xb9H`\x84\xf67\xab#\x9f\x91u>\x10\xc5"B\xb2\xaa\xe2\x8b\xee\x85\xfc\xa8#\x80\\\x13JB\xb2\x16\xcd-M\xa8\xa5\x0c4e\xf5?\xc5t\xf5\x97\xda\x87\xbf3\xf0\x1e:\x9e\x0e"j\xb095\xf9\xea\xf2\xeaM\x0c\xba\xc3l\xaf\xdcy\x12u52h,\xcc\xcf\xf1\xe7Q\x84{p%D\x834\xb4\xe6\xe0Lr>H\xb5*M\x8e\x9d\x9f\xc7\xf5\xdf\xbe\xdc}r\x1f\x02\x91z_^s\x942)\x9b\xc0\xd9\xa9\xf7\xb2V\xadyt>gR\xdcW,\xcf\xa8Wk\x94\xdc\xc5\x83\xcb\xbe\x92E\x04A2\x180$\xbd\xb99\xd5\x9c\x9c&\x97\xbd\x16\x82\xb8\xeb\xc1\xb6\xc5h\xa0\xcf\xa1\x7f\xa7k\x87\xd5i?\xc9K\x84N\xb2\xd8\x06\x81O\x93X\xc8*\x9e\xb4\xa9\xe9?\xd7\xea\x1c\xed\xee\xeb\xfeHg\xee\x07\x9dJ.\xf0.\xa7\xb5\xa5\xa0\x1b)\xd3\xff\xda\x98\xe6&\xd7\xf0\xdd\x95\xc7{\xda\xd1z1BH,(&\xd9\x88\x0bJg-\xbbM\xed\xf3}\x15\xa7\x1c\xbe3w\x12\xd6\xb2\x8c\x83\xa86E\x9b\xa5y\x10\x92\x9e=\x8c\xf2m,9!te\xce\x02\xed]\xae\xc5\xcfS\x0c%[\xbc\xde\x04\xae\x86\x13\xfc\xc8\x81\xf6X\x04\x1f\xd4\x1b\xd5\xc2\xee\xb8aT\xd1\xb0\xa1\x02\x16\x963\x1a6\x92g9M\xff\xbd\xa6\xa3D95V\x06\xb0\x9c\xdbC\x90[\xdb7[2\r\xb1\xf7\xce\xc5\xf20\xed^Qdoi6\x84hK\xfb@C\xb5\xa2?\x02_\x16\xabA\x93\x93T\xaeJ)\xbf\xc30\xc8Y\xc8*\x98Z\xce\x8d\xa6\xd7\x8a4\x88\xdb"j\xc9c(\x03\xf7\x95#\xae\x96\xa37-\x11\x99\x8dQ~~\x8d\xb1m\t\x86\x06\x9b\xb5\xe4\xc3\xf9\xfa\'M\x91\xbacOBQ\\t\xc3\x08\xe8\xfe\xca\x0f\x95\x16\xbc\x03^\x18Zy\xd6\x92J+1\x99\x0b<\xd2\x81g\x8b\x1f\xa5\x1dd\xd6\x1aD\x906\xa2|O\x0f\xe7=\x1b\xa5\x0ep\xd7\xe4\xfa\x89\xf8}\x87\xabS~g\xa5\xa8\xe6Vj\xe9\xc0p\xff\xf6\x17C;\x15\xf8\xe1\xb5\x8fg\x90\xb6G\xeb\xcdh\xbbQt\x06$*\xc2/\xf4k/#\xa9\xf1hV\xbd\xa0\xb8\xe8\x858kd\xec)R\x88\nQ?\xb9DR>\x8e\x11\xec\xa2\xec\xe8\nEm$\xd8\xd2\x8a\xe8\xad\xdee\x1a\xb9\xfb\x13\xd6G\x95\xed\xad\x87}\xc9T\xad{\x01\r\xa6\xa9[\xec\x1e|\x17\r^n\x87%\x9fq\x91\x00\x9c\x87\xae\xac\xfb<\x12F\\g-\x16\xcf\xf7\xa4.6OV\xfb#R\x8b\x14B\xe2\x9aH\xe6!\x07\x0cBd\n\'\x93=\xb9\x95\'\xd9\xfcg2\x90\xae\xe4\xb4\xd8U\x99\xadAae\xc2\xf7\xd8a\xd3\x9cr\x1e\x94\xa2\x02\xe7Vw\x13r(t\x9c\x15-Jyp>\x9c\x1ao\xedwM\x8af\xd9l\xc7\x94\xba\x92w\xa9\xc8\xbf^\xea\\/\xe4c\\c);\xd0\x169\xca\xf3L\x95\xfd\xa2\xca\x8c\xde\xa5\x9f\x84\x9e\x89 \xcf]\x0cy\n.\x8e\xa5I\x98f\x8d{&~>G\xa7O\xf6[W\x13~\x90f\x0f\x08W\xce\xf0\xcbI\x9c2\xc9\xcc\x0f4\xbb\xf6F\x8f\xad\x12\xca\x93\xder\\R\x04*\xb8/\xd2\x91\x9b\x07D\xa2\xfa1\x93\xeb{w\x9a\x88\xdc\xf7pw\xef\xfeG_\xbaf\xac\xb3lS\x1b\x1a\x1a\xddB\xe4\x08\xef\xe2\xd6\xf3\x16R\xdfr\xe0\xc7S\xcd\x1c(N\xce\x0f3\x11zT\n\x9b<\x1f\x8e\x9e\xfc 
y6\xb7\xa7\xfb+\x1ax\xb9\xb5\x03\t\x88}\x95Nm\x88\x13\xde\xab\xa4#\x95;\x88\xadr\xff\xf5\xa3\x0c\x90\xe8t\xf7l\xf6\xa0\xe5gE\xa1=\xd5\x9b\x05\xea!\xee\xdc\x95R\x88\x9cY\x93\x888\xbaop\x92\xca\xc5:\xaeBQ3W\x86[Tf\xb5\x8d\x8anT\xfb\xddm\xad#3"\x99\xf4\xa8Y\x1c\xd7FG\x14\nlL\xb8\x0f\xc6^\x9b\x9e>n\xf8H\xd5my\xef\xcd\t?\xf1\x91\x06\r\xd9/\x199$7b8U\x92<7\r"\xce|\x11\xef\x1e.]\x01\x85\x96y\x88\x137\xf0j\xfe\x88\xaaD\xd7\x91\x90\xc8M\xc3\x92\xc7\x92\xd9\xe1LO\xe9V0\x851\x10\x0c-\x12\xd5\x03\xa5\x9b\xd3\x00#F\r0o{\x19\xb3\xc1\xda}\x13T \xf26\xf5+\xe6*v\xd8.l.J\xfc\xa4\xf1L\xfd\xb5\x7f\xbe\x11\x81JS\x1c\x0f\xbe\x8d\xddKf\xdbd\xcd0\x92^Bu\x9c)\xf1\xc0\'\x9d\x85\xa7\x7f\x14\x1b\xd1\xa4\x8bE1\x86IG\xfe\x19\x8a\xf8\xec\xe7\xc9\xb6\x00\xaa\xf4\x8bV\\.\xd3\\9\xed\xb0\x1aLc\n\xcbG5\xda\x8a\xb9!\xfdJ\x81%\xf5\xdd\x05?\x02\xcf\xa6\x03\xbf\xf6B3&B\xc5i\x80\xc6\xa1)D\xff\x934\xc1\xca\x1b\xb6\x12\xe0\xe8\xf3f\xc2\xa1\xc3\'?\x91,\x02f\xe7}\x04\xf3f\xef\xdf\xc2J\xa63\x802&y\x1dy\xfa0\xc6\x81\xcd0s\xb9\xdc{16lk\xc8\xcaI,\x07\xaa\x0b\x1a\xef\xa4\x11\x96\xc1\x07\xfb\xa4O\x94\xaa\xb9\xcdC\x02\x18\x1f\xe6Wb\xc9\x85\x90r&G}x,\xdb\xfdZ\xb6`\x84\xcf.\xb0\x917;\xda\x02\x8d\x88Y6y\x13\xfd\'\xe5\xca\x00A~\xa5z\x8c\x9c\xb9J\xc5\x86\x8f\xe8\xa6\x971W.\xcd\x84C88\x15\x9ctP\x85\xd9B\xd4rV>\x06\xbabq\x15\xa4\xa4\xe7\x92\xe0\xa6<Y\xe7n\x03\xde\x15r\x1en5\xb7\xf4:\xfee\x05\xe0c\xf6\x11j\x1f\x03\x0c\xcb1*\xa9\xd8o#\x90\x1a\xd0\x9b\x92\xe2\xcbn\xfcR.>9v\xec\xfaWh\x02\xc9\x9fjdI\x03\xeb\x88\xf2\xeeQ\xed/\xb1\xaa\xea\x8fjsS\xaee#\xa8\xbe\xc8\x87\xdax\x16\xa7\xd6\xab`\x92.\xaa{g\xdf\\Cs\x8ag\xef/\xaa\x16\x95q\x08k\t \xe1\xfe\x8bxDn)>\x94|\x11en\x06\x17\x06f\t\xef\xb6\xbde\x91\x80\x9fO\x87\xe5Z\xf3\xa0ZV\x11&\x8ezdEB\xc6kMo\x88\x1f\xa3\x08& \x10\xc2\x8d\x0f\x04\xf7&\x7f\x03T\xd3o\xb6\x01\xcc\r{\x9a\xc9\xd6m\xd5\xea\xe3q\x18jO%\xed\xca\xe5\xd3\xe8d&Qy\x805\xf9K\x9eN\xf7\xde\xc0\xf3\xaaBF\x18E\x1b\xf9\xe3\xf7\x9aB)R\xaa\x08,\xf5sR\xbc)\xd1\xd9\xc9Vb\x0bJJ\x08\xc5\xbb\xe0B4\'r\x85b\x8dp\xa3\xdd\xa2\x89\xe6\x13\x8a\xb8\xf4\xca\x03\xe2\xe4\x94\xce?\xea\xe08\xac4Eoh\\W\xeb\xaec\xd6SEP!nlUk\xacEX\x1e=\x9cA\x91\x1e\x9dQ\xbd\x04Mu\r\xa3.V\xcf\x04\xec\x8b\xa3\xf7b\xb9\x13\xd8\x15\xeb)D\xdbg\x83\xb9F\xa1\x92\xb2\xd1\x0cW\xda\x920\\Gf\x15G\xae~q[-s#\xd3\xceB\x19\xa8\xb4YSd}\x18V>\xd0O\x92I\x9dt\x0f\xec<\x8e\xfa\x1eu\xb5_?\xd0\x8c\xfa\xfaL\xfc9\x1f\xc3h\x11\xb6Z\x17\xa3\x18\xbbV\x8fD\xedNL,D\x8f\xdb\xe1<-\xa9\x0b\x86\xacx5\xa7\xd1;\xa0\x8a\xce\xd5\xad\x8dM\xd9\x89\x8a\xb3"\x86\xc2\x94\x0c\xf5\xa91\xa7"\xef*\x85\x82\x9c<\xc6\xbc\x8e\x1c%!4\xc1\xa5\xfe\x94\x1e\xed=!m\xb1\xa7\x12\x10f\xb3\xdf\xb2\xdb\xc4v\xe64m\xe4\x9aC\xf5\x9f\x80cDN\'\x8f\xc7\xf7\x92%(\x01\xb7\xc8!\xe3\x80\x99\x8cs\xb2\x84\x8e\'4\x1b2\x0c\xca9\tJ3\xcd\x9135\xe5\t\xa3\xff\xb44\xd8\x0c\xc1I&tM\xf0\x84h\xec\x9f\x05MgK!\x14D\xb3m4\xe0\xc7\x96\x08)s\xa5\xf0,S\xc4:\xa7;L\x83B\r\x94\xca\xb1\xe7b\xf2d\xd7KO8a\x0c\x02iZ\x9f>\xd2\x03\x02\xfc\xbf2\x8f6\xf6^\xcaH.\x8d\x83}s\xff6\x92\xc2r9Ei>A\x89~7\xdc\x04\x9bn\xec?\x17\xd6I\xd7^j\x96W\xa0q\xf3\x83B\xe7\x94[NoA\xb9b\x18w8\xbb\xae\r\xac\x92TMc\x18\x03\xf6w\xa2\xfc\xb5:\x8d\xa66*.\x19\xecT\xd5\x91S\xfc}|\xf3V\xacWZV\xd3\x86F\x86\xd1w7\x1a=\x1b\x91\xe1l\xf1\x83\xcc\x87\xa5\xa5&\x02h\xc1PpV\x16q\xdcr\'4#P*L\x93\xca\xb0\x83\xce\x06\xe9y\x8c8\xd7\xa3\xc7\xf1>:\xbdB\xfe\x89I\xd6\x8e\xb08P@\x11E#\x1eVu\xefgd}\xc8\xf5\x8a\xa9y\xca\xb9\xd4\x13m\x83=\x10g\xd9\xcaea\xfd\xb8z\x9fw\x11tt3\xf3Q\\\xe4D\x9f\xa3\x13=\x02\'cZ\xc6\xd9\xa0|\xbc\xb8p\xf9PO\x88\x94\xe5\x15\x89X\xfa\x9b\x7f\xb2P"\x15MZ\xdc\xbc\xb9z(\xd2\xb3Lg8B,\x9f\x87)#\xcd\xca 
\x83n`\\k\xf9\xe6\xcds\x99:i\xf6\xe4\x92\xcd\xc7\xc7\xaa\x9b\xf3\xd6\x15\xcfEL\x94\xd9\xa9S\xb3\xd2\xba\x167\x9fhO\xc9T\xdc\x06\xff\x0f\xb8`\r\xe2\x16\x81\xbc\xea\xf9\xfd\xef2MH~~\x96\xf2\x9c\xf1\xee\x97jS\xab\x9f/\x81\x80A\xcaU\x88\x99\xd2d\xdf\xf1\xe4W\xdd?/V\xda\x88\x90B~rM>\xdej\xaf\xbd\xccU(\x8c\x14\t\xa5D\x1aE\xd3eZC\xba@G7\xd7\xde\xe2B10E\xd5\xcd\xe9\xa1}_\xdc\x10\xb9\x87X\xf1\xdaC\x94\xb4\x98\xb6>\x95\xf18Y\xd0wK\x95P\xc6\xe2!\x88\xe3\xc1g\x83(\xc6\x9b\xbd\xb1\xabu\xaa\x91\xac\x821\xf9Y9\xae\xf6b\x13q\xf7\x19\xde\xa3\x85\xf34:\x95\xee:\xc5\x03\xe5e\x0b\x9f\xd8h\xe2R\x88,\x88\xbcy*e\x90\xb3\xec\xaag\xc3\x10\x18\xc7\xac\xa2\xaa\xa4NSJ\xa5&\xf6\x7f\x86i\x87FNqp&\xe2 \xb5\x17T\xe4\xfdH\xa5kz00\x82\x0bq\x8d\xbd\x9dn(\x10+So\xa8\x8e8\x81>\xadi\xf67\xcf\x88\xa7\x0e\xe1q\x9d\x9eYE\x08\x94\x9d\x97W\x9a\xb6\xe8R<\x0c\xdb\xd9D+\x91\xeckzSk\\\xe3^\t\xefJ4\xa2\x08I\xc0\xe7\xe9\xafgx\xb8`\xcdD\x03\xf2`j\xa8%Y\xca~\xa8\xa1\xc7\xfaR\xfft}\x1c\xfdHS\xcdI\xf7a\xde\xcf+*\xd6\x0b\xdc\x82I_\x11\x19\xbe\xc9\xbdK\xa0\x14DN\xc0\x03Nt\x8fy\xdd\xca\x17\x90w\xf6\xe84<\xa9\x04\x8cL\xe5\xae\x97\xd4\xbf\xb0\xd8$\xdcGP\x00\xe8\xb6Q\xa0\xf0b\x97\x8d\x95\xd9\xbc\xbd\xe6\xf9\xc6\xe3\xea\x81\xd8\x0cb\xac\x96\xcd\xe6\xa86BS=\x0e\xdbi"T\xc1\xc8\xe3\xde\xe9\x88\xe4\x08A\x86,1\x92\xc05\xe7\x18\xc65\x0c5\xc2\xcf\x9b\xa1\xef\xf7A\xb1\xa9C\xe5q\xaam8n\xf2T<\x92I\x85\xb3\xdb?}\x17\xa8\xa9?84\x9d\x89\xae\x9fJ\xcf\xe2\xdc^N\t\xb5\xb4O\xb1\xee\xa1\xe3\x89,,bs\xc1\xf98Hj\xf146:s\xadoM\xd9m>\xbb\xa4\x883)\x16\xe8\xca_\x89\xefDv\xa2\x8b\x18\xa1\xa4y\xdd\xb6\xce*5\x00/\xeaiM\x84h\x86\xfaa\x1b\xb1\x13,\xa8X\x81"}\xa7d\xe3\xd3\x81\xe4\x8b\xf8\x84\x95\xc8O\x1c\t#F\xb6\xef*\x08\x98\x93K\x03(\x98n\x16f0<\xb86\x9f\x976\xda\x8b\x89FL\x86n\xaa\xbdw/\x95$Y~~\xff\xf2Zy@\x12\xafY-\xd0Z\xe60\xacOd\xdd\xea3\xa8\xe5\xd3-\x19%\xd0M\x1cS:\x06\xbb\xd3F2\xf2\xbd\\\x7f\x04\x81\x18\xbdU\x8d\xa5`\xebSW\xd3z^\x857\xd4H\x983\xe2y5\xe2\xbd\x04K4\xdc\xc6h\xf5o\x0e_?\x16\x97\xba\xa3=\xafS\x8b\xc2\x0e\x92"\xf9\t\xb8\t\x81\x8e\xf2u8\x92\xe8\xa7\x94XQ\x1c\xc5D\xbd\xd7\xe4:\xf54r\xea\xa9\x10\x8ct#\xd2?7\xb4\xe5hV\xc6\xd6\xafS\x93\x0e\xd5\x19\\_\xd8:$qzI\xc3\x82J#^y\xa1\x16\xb9\x11\xadN\xba\xf1\xc4\xbc\xd1\x9b p7\xae\xc6\xd7\x7f\'\xde\xf5\xdf\x97\xe8u\x90`\x1a\xcfs\xa8\x11\x01\x0c\xe0+\x08\x0be\x1e\xaf\x87F\xf9@\xc8\xd8\xa8 
\xa7&z\n\xd8\x9a\x11g\xa8\xa6\xf0\xd44JO\xa8\xad\xeca\x9d\xf2\xbf\xac\x82\xb4\xb0`+\x92\xdd\xb4\xe0\x9dF\x94\xbe\x89>q"\xac0\xa8\xa3\'/\x85\xe8\xd3@|j+b\x89\xe42h\xfa\x8d\xd0\x1fq6\xf3\x88\xde\\\xe8\r\xcb]\x95\xc0\xafS\xbb@\xcd/\xa4m|G\xdc&\xc1\x9f\xf2\xd0\x8c\xf3\x89\xedf\xd7\x1f\xb6\x1e\xeb\x10\xe1p\x99\xa2\xffQ\rP\xee\xb6\x01t\xb7\xf9;5\xf5\nE\xa2\xfa\xba\xbeV\x03l\xe4\x8d\'&\xbd\x13?\x9b\xdd\x8f_\x91\x86\xb4\xea\xd2\xe3\x99\xb2\xad\xc1\x93c\x19\xd1\xc6\xd3\x02y}N^P\x98\xee\x9f\xceRm0\xdd\x1af\x96\xddG\xac\x03\x97\xc4|\xa2\xd2y\xf7[@\x0eL\x86X\xb5\x02\x8eh|\xaak\x8a\xf3H\xc3\xe8\xf7\xd7\xfb#\x03\xe2\xa44\xb2V\x80\x91j\xe9\xb7\x98\xb2\xbct\xabY=Y:\x9fo\x88\x9e\x97\xcaA\x08\xb4\xfcI\x95g\xbe\x15\xa8\xf0\xe6\xc112\xc1`\xac\x84\xa9[Ym\xc5}\xa8zq\xac\x1a\x95\xa5&X*T`QA\xf4L\xc66\xa5\x0c\x97%>\x88z3Zeh\x92\xedlg\x8cm\x96\xe8\x9c\t\xdd\x9f\x92\xd8\x0br\xc5\xcd\xe0\xb2$\xb3\x0f\x050:\xf4\x94\x96!z\x1dao\x85_X\x9e\x12\xc8\x8d\xc5a\xbf*\xa3V\xd5}\x02\xc7\x08n\xbaY\x156V+W\x19\x03\xd14\xb9\xa3\x9e\x129\xcdo\x0b\x00\x82M\xe7&l\xae\xc4\x97\xd4\xc7\x19\xa9X\x82#\xb2\xf4\x16L\x15\x84\x19\x1dq\r\xea\x115}\xa67\xa6\x8d\xd6\xfapk\xb3\xc2\t\n\x1b\xfd2>l\xcdN\x1e\xdd\xf6\xfb;\xb4\x1b\x17q\x99L\x88\xb1q\xd9\xfcd?\x81G\xa4\x94\x9c\xf96\x01\xc7\xc9\xd7\x89\xeaGBp\xf0\x11\xb0\x91\xecl4d\xa3(\xa5s:t\x13\x0c&\x9e\xcf\xde+\t\xfcNe\x10\xd1U\x0c\x92J\xc2\x8e]\x88\x1dod=K\x03n#]H\xd5\x99\x9b\x0cOAP\x90M\xea\xb0\x94\\&\x12\x9a\xc9\xd3d\x9e\x83\xcc\x9c\xd8y\xa4L\x80+\xa3P\x01fn\x1c`\xd1\xd2qW\xde\xf6Uw\x8aF\xda_\xae\x16\x15\xbe\xd1\x83D;Q\x1cV\xd9\x11k\xc2\x1d3\xb3d\x9b\xb8\xce>\xbd]\'d\xf7J\x9b\xe5\xde\xb1n>m9&0;m\x04\xe5\x85\xd9|\xf9\xed\xe3E\xb8R\x89\xa1\x9b\xcd\xf1\xea\xa7\xa7z*\xba<\xc0\xab\xa5S\xd1\x13z\xa2\xc5\xf9\xb1(\x07\x92Z\xbaV\xfb\xc0\x9e \xdf\x17R\x1f#\xb9\xc8\x0b}\x13\xe3C\xd76\xc2\xb7\x83\xd6\xed\x83p\xe2\xbc\xcd\xc1\xf0\xb4\xa3)\xec\x87e\xad\x0b\xce\xd6K\xe9iU\xeb\x07\xe7\xb5\xb3i^\xd3\xbe\x80\xa7|yZ\xe0+#b\xe7"\x97\xad\xbc\x8d\n\xd27\x91h\xf81\x82\x800\xaf\x82\xe5\x13k\xb0\xec&\xecg^\x0cur\xaa\x05\xe3Q\xc1R\xfe\x16!\xacR\x81\xfa\x8f\xdb\x0c\xcd\xfe\x91\xdf \t\xfb\xb1\xc6:\xd2\xdc\x04\x0c\xcf\xbf\x9b\x98\x11\x89\xae\xf3GF;\xa6\x0c\xcf\xa4\x87j\x80\xaf\xdah\x8f-;\xb7\x91\x04\xa3\x96\x99o\xfe\xeb\x8a"a\x92\xec\xc5>\x0c\x19,\xc2\x06\x93uyS\x01\xa1\x0f\xf4\xedg\xf9}\xf2p\xae\x16\x17\xa7T1H\n\xd1\x14\xcf\x87\xb7\xef\x1f\xad\x88"C\xc4h\xe3\xa9\\\x07h\xdae\xba\xe2\xf7:\xb5\xf9\x06\xee\xa7\x1c\x9d5\x83\x7f\xf8O\xa7\xca-\xc6\xd5\tA\xf0\xa3{\xf2\xd5\x0c\xac\xf7\x07\xff\xc4\x17S^\xb3\xcc~\xeaJ\xa9\xd1>\xf9\xf5.\xa4\xb6\xf4)\xc5\xfb(S+\xe9-\x9b\xea\x06{\xd3L\x9c\xb5\xbe\x83|<\xc7p\xb7N\x1f8\xd2\xca\x7fo\xcbrA<\xdf\xc6\x8f\xca\xba\t\x11\x0b^\xa1\x93U\xcc1\x88~\xf4\xbc\xa5\x18z\x94\x92%$\x12\xc6\xfb\xce\x81f&8&\xb0\xc1\xf2i\x8c:g\';\x0f\x9b*\xfa\x89\xa9\x06\xe0o\xf8\x1cC\xe5\xf2\xe4I(\xb698\xe0\xf7m\xff\x1c\x82RZit\xa6\x87\xaeo\xeea)\xa8\xe1\t\xb1\xc7\xbf\xa7\x85\xd8\xcb\xde\x83\x009\xed\xc0\xe0\x80LF6\x04}\x95\xdd/\xc2jsI\xdc\r\xb1\xbf|\\\xd0\xb1a\xbb\xdf\x16$KCO\xefS\xe5\x9f\x16\xa4\xce\x9a\xed\xfao\xe8e\xe8HS\xed\xbf\xaa\x17\x03\x83\xb4Z;\xdb 
\xb7\x0b\x06\xa50@/:\xf6\x8a\xc9_p;!DFl\n\xbdm\xfaf\xb8P\x9fh\xc3\xf7\xabl\xe86&\x12\xc0\xef\xab\xfb\xf3R`\xb9\xcc\xfc\xabfW\x8fX!8PY\x9f\x9c\xc3x?\xa4A\x9dY\xd8\x80.\xd2\xae\xea\x1a\x93\xdd\xcc\xc2\x15\xe0Bu\xac\xb2\x1b\xee\x92\x0c\xe3\xa2\xce\xfc\xf8\xe9\x8c\x8e!\xa5:\xa2\xf6"y\xb8\xa3\xed(S\xdd\x1b\xbd\x80\x9b\xdc\xac\x0ci\x05\x03~&\xc6$\xb5IU\xef\xe8\xe2\xf8\xff\xda\xf8L\x97\xef\xe8\xf8\x8d\xaf\x9d\nU"\xb1q\xf1\xa3\x88|6\xda\xeba\xc3\xb8\xd5.QK\x07M\xbd\xb7BK\xca\x1e\xd8\xffY\xbe\xac{\xab\x8c\xc5\xa2\x1b\xe5\xc2\x99D08\x88\xb2\xdf\xc2\xd4\x12\x01\xaa\xb0g\xaadM[j\x08\xe6\x94\xb2V\xf6zV\x14k\xceA\xa6\xd1\x9b\xd7R\xc5\xbaZ\x14\x9a&;\xd1\xe1\xa8eo\xba\x9b8\xa7}\n\x0f\x91d\x17\x04\xc1\xfc\xfd\xe8\xee ~\xf4\xe8\x82\x0c\x967\x08\x97\xc4\xaeZfgs\xad\xa0\xd5`0UaZM\x15\xbcTb\x94\x81t\xdc\xafR5\xaa\x95\xfb<}\xa3\xf3\x18Y\xb0\xac+h\x85(!\xe5\xdc\xa5C\xaf\x15R\xa5N\xa3\xd4\x89\xdc\x98a\xd8/\x16\x83\x03+\xa3a\xe5F\x11\xb2S\xd7\\(\x9c\xaaJ\x9aY,\x94O\xe8:\xcd\xef\xc7\xd2\xaa\xb0X\x9e9\xda\xd4XM\xf1\xb6\x1ea\xb7\xe4\xc5\x82\x0c\xc0\x19|6\xc9-\xc6\xab\xbd\x15\xc3\xc6\xa8\x84\xa6\x90\xad!\x1e\x95\xff\xc2\xd9\xed\xc7\x97\xd4\xbd\xe3\xa8 {\xb9{_\xff\x97\xa7\x0f\xf6\x95\x0e\xa7\xaay\xd8\x8c\x0c\\\xe3H\xf8\xb1t\xb1\xe3\x1c2\xa6\xa0q\x12\xe3Y\n\xb0{Y\xc3\x11M(#m\xad\x06\xbd8\xc5\xde\xc1@\xb4\xc1\xd8\xd70\x1e\x9b\x86Wx\x86)\x93!\xf1\xf2\xa7p\x04\xc6\x0f\x0fi\x96\x91\x88\xa8c\x15\x88j,/~W\x85\x9eP\x13BL\xf6D\x13R\xa2q\x0fB\xfd\xe8\xf2\xcbR\xacr\xa7\xb3\xda\x14\xe9\xe9\xd9\xf2\xdb\xf8\x81\x05t\x8b\xde\x11\x82\xbfu\x90\x1b<\x8fa-\xb7\x19%\xb6s\xf8\xf3\x9e\x9a?\xc2rG\xba\xabu,\xa0N\x7f\x88\xc5U\x00b\x0c\x0b/\xfaE<9x$\x03r\x81\x0f\xaen\xa9\x0b\x94\x19\xd2\xaezJ\xd8\xb0\x84\xe7\xa6\xb6\xe3\xca)D\x8b^H\xfdA\x19~s\xaeh\xe7"\x848Q\x8dN\xecb\xfd\xa2\x06F\xe6\xc5\x9b\xb2\xc8\x1d$\xc4\x80\xf9\xe3\xdd\x1e\x8132\x1d\xfe2\x8bQ/b\xd1l\xca\xe4\xe2\x1eJ\x0e*\xb1\x99\x15Z\xae\xa0\x0f\x86\x96\xff\xaf\xa1I\x07\xf4\x10#\x8aC\x13I\x05\x04@)\x9e\xbe\xe0\xa9r\xf5\x04E\xab\x81\x8dz]>|.\x98T\x12b"\x0f\xe6\xc5\x82\xea0\xe5\x10\xd4Ilkg\xa0\x9c\xe9\xd1\xbd\x9a&z\xcb\xb7\xeaQJ\xdd7\x81A\x0eZC\x0eT\xb9\xf7\xe0\xa3",\xbd\xf0ck\x02\xac\x130\xd9\x923\xe8\x99\r[\x07d\xa3$\xe9\xd0\xed\x0f\xcd\xcbQH\x95\xb9\xf9\x94\xae\xfe%\xda\x8bC\x8d\x17\xab_\xc0V/\xb1\xa4!\xef\xd1s\xe32\xa2\xfe\xa9p\xbb\xde\x7f\xc9\xfa\xfdh\x8a\xa7\xa0+_\xcc\t\xd9\x03\x04W\xcaz\xde\xc1]7\x11\xae\x08]\xf3\x91\x86\xf4\x9cQ\xcf>Z\x89\xc9\xce\xa0\xccu\xc60\xb7I\x16~\x89\x86\'\xb2e\x9d\xef\xc6Se~\x87rZl\xd5\xcd3\x05\xb00Lv;\xdc\x9f\xb2z\xd0yz!\xa4Ys\xde8\xe3\x0e\xbe\x1f\xb5\xdcD\x82\xb1s_-]\x16uu\xb8\xd9\xefE{1\n\xbe\xf0Rn\xf7dv\xd5\r\xa7G\xd3\xdcH\x9dM3\x8f9U\x0e\xb6h\x07\xdf&\x86\xba\xeb\xb2\xbbx<\xeb\nx\x8c\x08\xc80&\x17x(\x14y\rp/^V\x1c\xd0\x8d(\xf5t\xd0\xcee\x91LEHuY\xc3\x01+,\xc1o\xa3R^\x92\xb1\xb1</\xb7\xdc\x8e<\x1e\xe9\xa3(OB\\\x98\\Y\xe8U\xca\xd0B\xd6\x90/\xf4\xeb\xfd\x0c\x98\xa7]F\xa2\xcd\xf3q:\xdd\xfc\xda\xe2\x84\xa5\xc40\xec\xf3\x9a\xe9A\xaa\xd9\xea\xd1\x89\xb7\n\xf8&\xa4\xed\xd7\x17^=\x15\xf5\xee\xdf\x88\xf8@9:\xa2\xc1\xe4\xac\xe6\x87j\xf8\xad\xb8\xbc<\xfb\xd0U\x929O\x1e\x03\xa0\xdaR\xb9\x88\x1et+\xa2\\t\x96)\x15eS\x88;\x17\xbd)\x03\x9a&\xb6D\x85z\xdc\xf4f1#\xca\xbe`}$\x8c$\xc4\xf8\x11\xf8ua7\xe5\x88W\xf6\xcaM\x9d\xe0B@\n*k\x9d\xa2\x17UdTj\x7fL\x84F\x95\xda{\xd7H\x9d\x1e\xcb\xbcTv\xfc\x9e\xd5J/\xec\xa5\t\xd8-da\x10\x1c\xbc\x94\xd1\x81EdC0zB\xadWI\xc6\x9d4\xf3N\xfc\xdc:\x13\x97\x90\xa5\xe6\x9c\xe6(@I\xbdrM\xef\x96\xf6#\x8d\x0b#5O\x03k\x0b\xb7\x15\xea\xd3\xefn\xc7H\x85#\xdf 
[unrecoverable escaped binary payload elided]')))
except Exception as b:
print(f'Error for : {b} ') | 8,918.454545 | 97,944 | 0.733372 | 22,470 | 98,103 | 3.196084 | 0.203293 | 0.000501 | 0.000251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230607 | 0.001692 | 98,103 | 11 | 97,945 | 8,918.454545 | 0.50268 | 0.000499 | 0 | 0 | 0 | 19.8 | 0.693329 | 0.690902 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ab3bda916793d420e9753f6d20d9af2f5b2bd01 | 5,055 | py | Python | tests/request_examples.py | irlrobot/pylexa | 200fbd6a792ffe414c2fbd5819d721544240576a | [
"Apache-2.0"
] | null | null | null | tests/request_examples.py | irlrobot/pylexa | 200fbd6a792ffe414c2fbd5819d721544240576a | [
"Apache-2.0"
] | null | null | null | tests/request_examples.py | irlrobot/pylexa | 200fbd6a792ffe414c2fbd5819d721544240576a | [
"Apache-2.0"
] | null | null | null | """
Example JSON requests to be used for tests
"""
def launch_request():
'''example launch request
https://developer.amazon.com/docs/custom-skills/request-types-reference.html#launchrequest-example
'''
return {
"version": "1.0",
"session": {
"new": True,
"sessionId": "amzn1.echo-api.session.0000000-0000-0000-0000-00000000000",
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"attributes": {},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
}
},
"context": {
"System": {
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
},
"device": {
"supportedInterfaces": {
"AudioPlayer": {}
}
}
},
"AudioPlayer": {
"offsetInMilliseconds": 0,
"playerActivity": "IDLE"
}
},
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.0000000-0000-0000-0000-00000000000",
"timestamp": "2015-05-13T12:34:56Z",
"locale": "string"
}
}
def intent_request():
'''example intent request
https://developer.amazon.com/docs/custom-skills/request-types-reference.html#intentrequest-example
'''
return {
"version": "1.0",
"session": {
"new": False,
"sessionId": "amzn1.echo-api.session.0000000-0000-0000-0000-00000000000",
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"attributes": {
"supportedHoroscopePeriods": {
"daily": True,
"weekly": False,
"monthly": False
}
},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
}
},
"context": {
"System": {
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
},
"device": {
"supportedInterfaces": {
"AudioPlayer": {}
}
}
},
"AudioPlayer": {
"offsetInMilliseconds": 0,
"playerActivity": "IDLE"
}
},
"request": {
"type": "IntentRequest",
"requestId": " amzn1.echo-api.request.0000000-0000-0000-0000-00000000000",
"timestamp": "2015-05-13T12:34:56Z",
"dialogState": "COMPLETED",
"locale": "string",
"intent": {
"name": "BLAH",
"confirmationStatus": "NONE",
"slots": {
"ZodiacSign": {
"name": "ZodiacSign",
"value": "virgo",
"confirmationStatus": "NONE"
}
}
}
}
}
def session_ended_request():
'''example session ended request
https://developer.amazon.com/docs/custom-skills/request-types-reference.html#sessionendedrequest-example
'''
return {
"version": "1.0",
"session": {
"new": False,
"sessionId": "amzn1.echo-api.session.0000000-0000-0000-0000-00000000000",
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"attributes": {
"supportedHoroscopePeriods": {
"daily": True,
"weekly": False,
"monthly": False
}
},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
}
},
"context": {
"System": {
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.000000-d0ed-0000-ad00-000000d00ebe"
},
"user": {
"userId": "amzn1.account.AM3B00000000000000000000000"
},
"device": {
"supportedInterfaces": {
"AudioPlayer": {}
}
}
},
"AudioPlayer": {
"offsetInMilliseconds": 0,
"playerActivity": "IDLE"
}
},
"request": {
"type": "SessionEndedRequest",
"requestId": "amzn1.echo-api.request.0000000-0000-0000-0000-00000000000",
"timestamp": "2015-05-13T12:34:56Z",
"reason": "USER_INITIATED",
"locale": "string"
}
}
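# --- hedged usage sketch (editor's addition, not part of the original
# fixtures): a minimal smoke check over the three builders above, using
# only values they themselves return.
if __name__ == "__main__":
    assert launch_request()["request"]["type"] == "LaunchRequest"
    assert intent_request()["request"]["intent"]["slots"]["ZodiacSign"]["value"] == "virgo"
    assert session_ended_request()["request"]["reason"] == "USER_INITIATED"
    print("example payloads look well-formed")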
| 31.397516 | 108 | 0.464293 | 348 | 5,055 | 6.729885 | 0.255747 | 0.046114 | 0.030743 | 0.048676 | 0.829206 | 0.829206 | 0.829206 | 0.815542 | 0.815542 | 0.815542 | 0 | 0.169276 | 0.393472 | 5,055 | 160 | 109 | 31.59375 | 0.594586 | 0.083284 | 0 | 0.572414 | 0 | 0 | 0.435646 | 0.213787 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02069 | true | 0 | 0 | 0 | 0.041379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ad9797bb799a582432c230e821f3ca7851dc6b2 | 37,662 | py | Python | vrchatapi/api/invite_api.py | vrchatapi/vrchatapi-python | afe5ec9fda298723e7408358473aafe343e27d18 | [
"MIT"
] | 8 | 2021-08-25T02:35:30.000Z | 2022-03-28T18:11:58.000Z | vrchatapi/api/invite_api.py | vrchatapi/vrchatapi-python | afe5ec9fda298723e7408358473aafe343e27d18 | [
"MIT"
] | 1 | 2022-03-18T20:29:30.000Z | 2022-03-18T20:35:05.000Z | vrchatapi/api/invite_api.py | vrchatapi/vrchatapi-python | afe5ec9fda298723e7408358473aafe343e27d18 | [
"MIT"
] | 1 | 2022-01-11T10:49:12.000Z | 2022-01-11T10:49:12.000Z | """
VRChat API Documentation
The version of the OpenAPI document: 1.6.8
Contact: me@ruby.js.org
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from vrchatapi.api_client import ApiClient, Endpoint as _Endpoint
from vrchatapi.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from vrchatapi.model.error import Error
from vrchatapi.model.invite_message import InviteMessage
from vrchatapi.model.invite_request import InviteRequest
from vrchatapi.model.invite_response import InviteResponse
from vrchatapi.model.notification import Notification
from vrchatapi.model.request_invite_request import RequestInviteRequest
from vrchatapi.model.update_invite_message_request import UpdateInviteMessageRequest
class InviteApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.get_invite_message_endpoint = _Endpoint(
settings={
'response_type': (InviteMessage,),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/message/{userId}/{messageType}/{slot}',
'operation_id': 'get_invite_message',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'user_id',
'message_type',
'slot',
],
'required': [
'user_id',
'message_type',
'slot',
],
'nullable': [
],
'enum': [
'message_type',
],
'validation': [
'slot',
]
},
root_map={
'validations': {
('slot',): {
'inclusive_maximum': 11,
'inclusive_minimum': 0,
},
},
'allowed_values': {
('message_type',): {
"MESSAGE": "message",
"RESPONSE": "response",
"REQUEST": "request",
"REQUESTRESPONSE": "requestResponse"
},
},
'openapi_types': {
'user_id':
(str,),
'message_type':
(str,),
'slot':
(int,),
},
'attribute_map': {
'user_id': 'userId',
'message_type': 'messageType',
'slot': 'slot',
},
'location_map': {
'user_id': 'path',
'message_type': 'path',
'slot': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_invite_messages_endpoint = _Endpoint(
settings={
'response_type': ([InviteMessage],),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/message/{userId}/{messageType}',
'operation_id': 'get_invite_messages',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'user_id',
'message_type',
],
'required': [
'user_id',
'message_type',
],
'nullable': [
],
'enum': [
'message_type',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('message_type',): {
"MESSAGE": "message",
"RESPONSE": "response",
"REQUEST": "request",
"REQUESTRESPONSE": "requestResponse"
},
},
'openapi_types': {
'user_id':
(str,),
'message_type':
(str,),
},
'attribute_map': {
'user_id': 'userId',
'message_type': 'messageType',
},
'location_map': {
'user_id': 'path',
'message_type': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.invite_user_endpoint = _Endpoint(
settings={
'response_type': (Notification,),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/invite/{userId}',
'operation_id': 'invite_user',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'user_id',
'invite_request',
],
'required': [
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'user_id':
(str,),
'invite_request':
(InviteRequest,),
},
'attribute_map': {
'user_id': 'userId',
},
'location_map': {
'user_id': 'path',
'invite_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.request_invite_endpoint = _Endpoint(
settings={
'response_type': (Notification,),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/requestInvite/{userId}',
'operation_id': 'request_invite',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'user_id',
'request_invite_request',
],
'required': [
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'user_id':
(str,),
'request_invite_request':
(RequestInviteRequest,),
},
'attribute_map': {
'user_id': 'userId',
},
'location_map': {
'user_id': 'path',
'request_invite_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.reset_invite_message_endpoint = _Endpoint(
settings={
'response_type': ([InviteMessage],),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/message/{userId}/{messageType}/{slot}',
'operation_id': 'reset_invite_message',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'user_id',
'message_type',
'slot',
],
'required': [
'user_id',
'message_type',
'slot',
],
'nullable': [
],
'enum': [
'message_type',
],
'validation': [
'slot',
]
},
root_map={
'validations': {
('slot',): {
'inclusive_maximum': 11,
'inclusive_minimum': 0,
},
},
'allowed_values': {
('message_type',): {
"MESSAGE": "message",
"RESPONSE": "response",
"REQUEST": "request",
"REQUESTRESPONSE": "requestResponse"
},
},
'openapi_types': {
'user_id':
(str,),
'message_type':
(str,),
'slot':
(int,),
},
'attribute_map': {
'user_id': 'userId',
'message_type': 'messageType',
'slot': 'slot',
},
'location_map': {
'user_id': 'path',
'message_type': 'path',
'slot': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.respond_invite_endpoint = _Endpoint(
settings={
'response_type': (Notification,),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/invite/{notificationId}/response',
'operation_id': 'respond_invite',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'notification_id',
'invite_response',
],
'required': [
'notification_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'notification_id':
(str,),
'invite_response':
(InviteResponse,),
},
'attribute_map': {
'notification_id': 'notificationId',
},
'location_map': {
'notification_id': 'path',
'invite_response': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_invite_message_endpoint = _Endpoint(
settings={
'response_type': ([InviteMessage],),
'auth': [
'apiKeyCookie',
'authCookie'
],
'endpoint_path': '/message/{userId}/{messageType}/{slot}',
'operation_id': 'update_invite_message',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'user_id',
'message_type',
'slot',
'update_invite_message_request',
],
'required': [
'user_id',
'message_type',
'slot',
],
'nullable': [
],
'enum': [
'message_type',
],
'validation': [
'slot',
]
},
root_map={
'validations': {
('slot',): {
'inclusive_maximum': 11,
'inclusive_minimum': 0,
},
},
'allowed_values': {
('message_type',): {
"MESSAGE": "message",
"RESPONSE": "response",
"REQUEST": "request",
"REQUESTRESPONSE": "requestResponse"
},
},
'openapi_types': {
'user_id':
(str,),
'message_type':
(str,),
'slot':
(int,),
'update_invite_message_request':
(UpdateInviteMessageRequest,),
},
'attribute_map': {
'user_id': 'userId',
'message_type': 'messageType',
'slot': 'slot',
},
'location_map': {
'user_id': 'path',
'message_type': 'path',
'slot': 'path',
'update_invite_message_request': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def get_invite_message(
self,
user_id,
message_type,
slot,
**kwargs
):
"""Get Invite Message # noqa: E501
        Returns a single Invite Message. This returns the same information as `getInviteMessages`, but for a single message. Admin Credentials are required to view messages of other users! Message type refers to a different collection of messages, used during different types of responses. * `message` = Message during a normal invite * `response` = Message when replying to a message * `request` = Message when requesting an invite * `requestResponse` = Message when replying to a request for invite # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_invite_message(user_id, message_type, slot, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
message_type (str):
slot (int):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
InviteMessage
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
kwargs['message_type'] = \
message_type
kwargs['slot'] = \
slot
return self.get_invite_message_endpoint.call_with_http_info(**kwargs)
def get_invite_messages(
self,
user_id,
message_type,
**kwargs
):
"""List Invite Messages # noqa: E501
Returns a list of all the users Invite Messages. Admin Credentials are required to view messages of other users! Message type refers to a different collection of messages, used during different types of responses. * `message` = Message during a normal invite * `response` = Message when replying to a message * `request` = Message when requesting an invite * `requestResponse` = Message when replying to a request for invite # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_invite_messages(user_id, message_type, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
message_type (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[InviteMessage]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
kwargs['message_type'] = \
message_type
return self.get_invite_messages_endpoint.call_with_http_info(**kwargs)
def invite_user(
self,
user_id,
**kwargs
):
"""Invite User # noqa: E501
Sends an invite to a user. Returns the Notification of type `invite` that was sent. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.invite_user(user_id, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
Keyword Args:
            invite_request (InviteRequest): Slot number of the Invite Message to use when inviting a user. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Notification
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
return self.invite_user_endpoint.call_with_http_info(**kwargs)
def request_invite(
self,
user_id,
**kwargs
):
"""Request Invite # noqa: E501
Requests an invite from a user. Returns the Notification of type `requestInvite` that was sent. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.request_invite(user_id, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
Keyword Args:
            request_invite_request (RequestInviteRequest): Slot number of the Request Message to use when requesting an invite. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Notification
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
return self.request_invite_endpoint.call_with_http_info(**kwargs)
def reset_invite_message(
self,
user_id,
message_type,
slot,
**kwargs
):
"""Reset Invite Message # noqa: E501
        Resets a single Invite Message back to its original message, and then returns a list of all of them. Admin Credentials are required to update messages of other users! Resetting a message respects the rate limit, so it is not possible to reset within the 60-minute countdown. Resetting does not, however, restart the 60-minute cooldown the way editing does, so it is possible to edit a message right after resetting it. Trying to edit a message before the cooldown timer expires results in a 429 \"Too Fast Error\". Message type refers to a different collection of messages, used during different types of responses. * `message` = Message during a normal invite * `response` = Message when replying to a message * `request` = Message when requesting an invite * `requestResponse` = Message when replying to a request for invite The DELETE endpoint does not require a request body. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.reset_invite_message(user_id, message_type, slot, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
message_type (str):
slot (int):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[InviteMessage]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
kwargs['message_type'] = \
message_type
kwargs['slot'] = \
slot
return self.reset_invite_message_endpoint.call_with_http_info(**kwargs)
def respond_invite(
self,
notification_id,
**kwargs
):
"""Respond Invite # noqa: E501
Respond to an invite request by sending a world invite to the requesting user. `:notificationId` is the ID of the requesting notification. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.respond_invite(notification_id, async_req=True)
>>> result = thread.get()
Args:
notification_id (str):
Keyword Args:
            invite_response (InviteResponse): Slot number of the Response Message to use when responding to a user. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Notification
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['notification_id'] = \
notification_id
return self.respond_invite_endpoint.call_with_http_info(**kwargs)
def update_invite_message(
self,
user_id,
message_type,
slot,
**kwargs
):
"""Update Invite Message # noqa: E501
Updates a single Invite Message and then returns a list of all of them. Admin Credentials are required to update messages of other users! Updating a message automatically sets the cooldown timer to 60 minutes. Trying to edit a message before the cooldown timer expires results in a 429 \"Too Fast Error\". Message type refers to a different collection of messages, used during different types of responses. * `message` = Message during a normal invite * `response` = Message when replying to a message * `request` = Message when requesting an invite * `requestResponse` = Message when replying to a request for invite # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_invite_message(user_id, message_type, slot, async_req=True)
>>> result = thread.get()
Args:
user_id (str):
message_type (str):
slot (int):
Keyword Args:
            update_invite_message_request (UpdateInviteMessageRequest): Message of what to set the invite message to. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[InviteMessage]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['user_id'] = \
user_id
kwargs['message_type'] = \
message_type
kwargs['slot'] = \
slot
return self.update_invite_message_endpoint.call_with_http_info(**kwargs)
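# --- hedged usage sketch (editor's addition, not part of the generated
# client): shows how a caller drives the endpoints defined above. It
# assumes an ApiClient already carrying valid apiKeyCookie/authCookie
# credentials; the user id below is a hypothetical placeholder.
def _example_list_invite_messages(api_client, user_id="usr_00000000-0000-0000-0000-000000000000"):
    invite_api = InviteApi(api_client)
    # synchronous call, mirroring the docstring examples above
    messages = invite_api.get_invite_messages(user_id, "message")
    # asynchronous variant: returns a thread whose .get() yields the result
    thread = invite_api.get_invite_messages(user_id, "message", async_req=True)
    return messages, thread.get()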
| 36.959764 | 898 | 0.489087 | 3,387 | 37,662 | 5.214054 | 0.080012 | 0.020385 | 0.020612 | 0.021404 | 0.842639 | 0.827237 | 0.825651 | 0.816251 | 0.803114 | 0.787656 | 0 | 0.00379 | 0.425522 | 37,662 | 1,018 | 899 | 36.996071 | 0.812442 | 0.358 | 0 | 0.691655 | 1 | 0 | 0.23505 | 0.037374 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011315 | false | 0 | 0.015559 | 0 | 0.03819 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0a9b1b31b1ff59605f35802aa786adcf493b088b | 2,553 | py | Python | {{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}/users/forms_allauth.py | tsantor/cookiecutter-django | 6aa4b4f2accb8ecb969189e0f54f8e490dbc262b | [
"BSD-3-Clause"
] | null | null | null | {{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}/users/forms_allauth.py | tsantor/cookiecutter-django | 6aa4b4f2accb8ecb969189e0f54f8e490dbc262b | [
"BSD-3-Clause"
] | 26 | 2021-02-01T08:37:50.000Z | 2022-02-22T20:59:39.000Z | {{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}/users/forms_allauth.py | tsantor/cookiecutter-django | 6aa4b4f2accb8ecb969189e0f54f8e490dbc262b | [
"BSD-3-Clause"
] | null | null | null | from allauth.account.forms import SignupForm
from allauth.socialaccount.forms import SignupForm as SocialSignupForm
from django import forms
class CustomSignupForm(SignupForm):
"""Override allauth default SignupForm."""
# Add our custom form fields to the ones that already exist
first_name = forms.CharField(max_length=30, label="First Name")
last_name = forms.CharField(max_length=30, label="Last Name")
address = forms.CharField(max_length=255)
address_line2 = forms.CharField(max_length=255, required=False)
city = forms.CharField(max_length=255)
state = forms.CharField(max_length=255)
zip_code = forms.CharField(max_length=255)
home_phone = forms.CharField(max_length=255)
opt_in = forms.BooleanField(required=False)
def signup(self, request, user):
user.first_name = self.cleaned_data["first_name"]
user.last_name = self.cleaned_data["last_name"]
user.address = self.cleaned_data["address"]
user.address_line2 = self.cleaned_data["address_line2"]
user.city = self.cleaned_data["city"]
user.state = self.cleaned_data["state"]
user.zip_code = self.cleaned_data["zip_code"]
user.home_phone = self.cleaned_data["home_phone"]
# user.opt_in = self.cleaned_data["opt_in"]
user.save()
return user
class CustomSocialSignupForm(SocialSignupForm):
"""Override allauth default SignupForm."""
# Add our custom form fields to the ones that already exist
first_name = forms.CharField(max_length=30, label="First Name")
last_name = forms.CharField(max_length=30, label="Last Name")
address = forms.CharField(max_length=255)
address_line2 = forms.CharField(max_length=255, required=False)
city = forms.CharField(max_length=255)
state = forms.CharField(max_length=255)
zip_code = forms.CharField(max_length=255)
home_phone = forms.CharField(max_length=255)
opt_in = forms.BooleanField(required=False)
def signup(self, request, user):
user.first_name = self.cleaned_data["first_name"]
user.last_name = self.cleaned_data["last_name"]
user.address = self.cleaned_data["address"]
user.address_line2 = self.cleaned_data["address_line2"]
user.city = self.cleaned_data["city"]
user.state = self.cleaned_data["state"]
user.zip_code = self.cleaned_data["zip_code"]
user.home_phone = self.cleaned_data["home_phone"]
# user.opt_in = self.cleaned_data["opt_in"]
user.save()
return user
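# --- hedged wiring sketch (editor's addition, not part of this module):
# django-allauth picks these classes up through settings. The dotted paths
# below are illustrative and must match wherever this module actually lives.
#
#   # settings.py
#   ACCOUNT_FORMS = {"signup": "myproject.users.forms_allauth.CustomSignupForm"}
#   SOCIALACCOUNT_FORMS = {"signup": "myproject.users.forms_allauth.CustomSocialSignupForm"}
#
# Design note: the two classes are identical apart from their base class;
# the shared fields and signup() body could live in one mixin used by both.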
| 36.471429 | 70 | 0.704661 | 338 | 2,553 | 5.115385 | 0.168639 | 0.114517 | 0.15616 | 0.21284 | 0.886061 | 0.886061 | 0.886061 | 0.886061 | 0.886061 | 0.886061 | 0 | 0.02405 | 0.185664 | 2,553 | 69 | 71 | 37 | 0.8076 | 0.107325 | 0 | 0.888889 | 0 | 0 | 0.075055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.066667 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
0aa71a8c1942907254266ed80bad5dfbcc94dabc | 198 | py | Python | snippets/template_backends/jinja2/globals/extensions/__init__.py | wizzzet/todo_backend | 58d27a639899514a3b10058cebb82c9b420a5bcc | [
"MIT"
] | null | null | null | snippets/template_backends/jinja2/globals/extensions/__init__.py | wizzzet/todo_backend | 58d27a639899514a3b10058cebb82c9b420a5bcc | [
"MIT"
] | null | null | null | snippets/template_backends/jinja2/globals/extensions/__init__.py | wizzzet/todo_backend | 58d27a639899514a3b10058cebb82c9b420a5bcc | [
"MIT"
] | null | null | null | from snippets.template_backends.jinja2.globals.extensions.cache import CacheExtension # NOQA
from snippets.template_backends.jinja2.globals.extensions.spaceless import SpacelessExtension # NOQA
| 66 | 102 | 0.858586 | 22 | 198 | 7.636364 | 0.590909 | 0.142857 | 0.238095 | 0.333333 | 0.607143 | 0.607143 | 0.607143 | 0 | 0 | 0 | 0 | 0.010989 | 0.080808 | 198 | 2 | 103 | 99 | 0.912088 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0adf0d4433732f0e202e568df64d3ee5fe2d2e1f | 221 | py | Python | jobscheduler/__init__.py | wenbobuaa/pykit | 43e38fe40297a1e7a9329bcf3db3554c7ca48ead | [
"MIT"
] | 2 | 2018-01-04T06:39:54.000Z | 2018-03-20T10:32:13.000Z | jobscheduler/__init__.py | wenbobuaa/pykit | 43e38fe40297a1e7a9329bcf3db3554c7ca48ead | [
"MIT"
] | 3 | 2018-10-15T06:08:28.000Z | 2018-12-03T12:07:06.000Z | jobscheduler/__init__.py | wenbobuaa/pykit | 43e38fe40297a1e7a9329bcf3db3554c7ca48ead | [
"MIT"
] | 2 | 2018-04-08T07:11:19.000Z | 2021-03-21T06:04:54.000Z | from .jobscheduler import (
JobExistError,
JobScheduler,
NextFireTimeError,
get_next_fire_time,
)
__all__ = [
'JobExistError',
'JobScheduler',
'NextFireTimeError',
'get_next_fire_time',
]
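# --- editor's note (not part of the original module): this is the usual
# package-level re-export pattern. It flattens the submodule API so callers
# can write, for example:
#
#   from jobscheduler import JobScheduler, get_next_fire_time
#
# and __all__ pins what `from jobscheduler import *` exposes.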
| 15.785714 | 27 | 0.674208 | 18 | 221 | 7.722222 | 0.555556 | 0.359712 | 0.604317 | 0.647482 | 0.820144 | 0.820144 | 0.820144 | 0 | 0 | 0 | 0 | 0 | 0.226244 | 221 | 13 | 28 | 17 | 0.812866 | 0 | 0 | 0 | 0 | 0 | 0.271493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0ae5379f85cd6c27d39ec2a9d7fa4cbb404e2c8f | 6,177 | py | Python | test/similarity_test.py | xiaohan2012/epitope-similarity | 60a3a342fa2aea5cc402b9cb0d3d7cf8260afc2e | [
"MIT"
] | null | null | null | test/similarity_test.py | xiaohan2012/epitope-similarity | 60a3a342fa2aea5cc402b9cb0d3d7cf8260afc2e | [
"MIT"
] | null | null | null | test/similarity_test.py | xiaohan2012/epitope-similarity | 60a3a342fa2aea5cc402b9cb0d3d7cf8260afc2e | [
"MIT"
] | null | null | null | from setting import *
import unittest, os
from get_fp import Complex
from Bio.PDB.PDBParser import PDBParser
from similarity import FPWithComplex, similarity_between
class SimilarityTest (unittest.TestCase):
def test_basic (self):
"""
nothing is specified
"""
path1 = DIRNAME + '/data/sample1.pdb'
path2 = DIRNAME + '/data/sample2.pdb'
p = PDBParser(PERMISSIVE=1)
query_struct = p.get_structure(os.path.basename (path1), path1)
against_struct = p.get_structure(os.path.basename (path2), path2)
query_complex = Complex (query_struct)
against_complex = Complex (against_struct)
query_complex.get_fp ()
against_complex.get_fp ()
query_fp_string = query_complex.fp2str ()
against_fp_string = against_complex.fp2str ()
query = FPWithComplex (query_complex, query_fp_string)
against = FPWithComplex (against_complex, against_fp_string)
score1, score2, score3 = similarity_between (query, against)
expected = {"score1": 118.00269647021572, "score3": 20, "score2": -8}
actual = {"score1": score1, "score3": score2, "score2": score3}
self.assertEqual (actual, expected)
def test_basic_with_epitope (self):
"""
epitope is specified
"""
path1 = DIRNAME + '/data/sample1.pdb'
path2 = DIRNAME + '/data/sample2.pdb'
p = PDBParser(PERMISSIVE=1)
query_struct = p.get_structure(os.path.basename (path1), path1)
against_struct = p.get_structure(os.path.basename (path2), path2)
query_complex = Complex (query_struct, epitope = [211,213,214,224,225,226,227,228,229])
against_complex = Complex (against_struct, epitope = [216,217,218,219,220,221])
query_complex.get_fp ()
against_complex.get_fp ()
query_fp_string = query_complex.fp2str ()
against_fp_string = against_complex.fp2str ()
query = FPWithComplex (query_complex, query_fp_string)
against = FPWithComplex (against_complex, against_fp_string)
score1, score2, score3 = similarity_between (query, against)
        expected = {'score1': 34.705754203703862, 'score2': 6, 'score3': 0}
actual = {"score1": score1, "score2": score2, "score3": score3}
self.assertEqual (actual, expected)
def test_basic_with_another_spinimage (self):
"""
non-default spinimage
"""
path1 = DIRNAME + '/data/sample1.pdb'
path2 = DIRNAME + '/data/sample2.pdb'
p = PDBParser(PERMISSIVE=1)
query_struct = p.get_structure(os.path.basename (path1), path1)
against_struct = p.get_structure(os.path.basename (path2), path2)
query_complex = Complex (query_struct)
against_complex = Complex (against_struct)
query_complex.get_fp (spin_image_radius_step=2, spin_image_height_step=2, sphere_radius_step=2)
against_complex.get_fp (spin_image_radius_step=2, spin_image_height_step=2, sphere_radius_step=2)
query_fp_string = query_complex.fp2str ()
against_fp_string = against_complex.fp2str ()
query = FPWithComplex (query_complex, query_fp_string)
against = FPWithComplex (against_complex, against_fp_string)
score1, score2, score3 = similarity_between (query, against)
        expected = {'score1': 129.68169758476202, 'score2': 20, 'score3': 5}
actual = {"score1": score1, "score2": score2, "score3": score3}
self.assertEqual (actual, expected)
def test_with_epitope_another_spinimage (self):
"""
Epitope is specified and non-default spinimage
"""
path1 = DIRNAME + '/data/sample1.pdb'
path2 = DIRNAME + '/data/sample2.pdb'
p = PDBParser(PERMISSIVE=1)
query_struct = p.get_structure(os.path.basename (path1), path1)
against_struct = p.get_structure(os.path.basename (path2), path2)
query_complex = Complex (query_struct, epitope = [211,213,214,224,225,226,227,228,229])
against_complex = Complex (against_struct, epitope = [216,217,218,219,220,221])
query_complex.get_fp (spin_image_radius_step=2, spin_image_height_step=2, sphere_radius_step=2)
against_complex.get_fp (spin_image_radius_step=2, spin_image_height_step=2, sphere_radius_step=2)
query_fp_string = query_complex.fp2str ()
against_fp_string = against_complex.fp2str ()
query = FPWithComplex (query_complex, query_fp_string)
against = FPWithComplex (against_complex, against_fp_string)
score1, score2, score3 = similarity_between (query, against)
        expected = {'score1': 35.771598481467343, 'score2': 6, 'score3': 2}
actual = {"score1": score1, "score2": score2, "score3": score3}
self.assertEqual (actual, expected)
def test_with_epitope_another_cutoff (self):
"""
the similarity calculation cutoff is set to 5
"""
path1 = DIRNAME + '/data/sample1.pdb'
path2 = DIRNAME + '/data/sample2.pdb'
p = PDBParser(PERMISSIVE=1)
query_struct = p.get_structure(os.path.basename (path1), path1)
against_struct = p.get_structure(os.path.basename (path2), path2)
query_complex = Complex (query_struct)
against_complex = Complex (against_struct)
query_complex.get_fp ()
against_complex.get_fp ()
query_fp_string = query_complex.fp2str ()
against_fp_string = against_complex.fp2str ()
query = FPWithComplex (query_complex, query_fp_string)
against = FPWithComplex (against_complex, against_fp_string)
score1, score2, score3 = similarity_between (query, against, cutoff = 5)
expected = {"score1": 119.75339423551459, "score3": -8, "score2": 20}
actual = {"score1": score1, "score2": score2, "score3": score3}
self.assertEqual (actual, expected)
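# --- hedged refactoring sketch (editor's addition, not part of the original
# suite): every test above repeats the same parse/fingerprint/compare
# pipeline. A helper like this, built only from names already imported in
# this module (and assuming Complex's `epitope` parameter defaults to None),
# would shrink each test to one call plus one assertEqual.
def _scores(path1, path2, epitope1=None, epitope2=None, fp_kwargs=None, cutoff=None):
    p = PDBParser(PERMISSIVE=1)
    structs = [p.get_structure(os.path.basename(path), path) for path in (path1, path2)]
    query_c, against_c = [Complex(s, epitope=e) for s, e in zip(structs, (epitope1, epitope2))]
    for c in (query_c, against_c):
        c.get_fp(**(fp_kwargs or {}))
    query = FPWithComplex(query_c, query_c.fp2str())
    against = FPWithComplex(against_c, against_c.fp2str())
    kwargs = {'cutoff': cutoff} if cutoff is not None else {}
    score1, score2, score3 = similarity_between(query, against, **kwargs)
    return {'score1': score1, 'score2': score2, 'score3': score3}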
if __name__ == '__main__':
unittest.main ()
| 37.210843 | 105 | 0.644326 | 699 | 6,177 | 5.432046 | 0.141631 | 0.063208 | 0.026337 | 0.05004 | 0.861996 | 0.861996 | 0.861996 | 0.861996 | 0.861996 | 0.846194 | 0 | 0.068541 | 0.251255 | 6,177 | 165 | 106 | 37.436364 | 0.752432 | 0.025417 | 0 | 0.795918 | 0 | 0 | 0.060647 | 0 | 0 | 0 | 0 | 0 | 0.05102 | 1 | 0.05102 | false | 0 | 0.05102 | 0 | 0.112245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0ae7bf8344b851ee21d6d924268f9baee1a99a3f | 163,171 | py | Python | service/src/gen/thrift/gen-py/cli_service/ttypes.py | exponea/hive | 460ea2040683c5fad0ab5b215b2d45946a2a44e2 | [
"Apache-2.0"
] | 4 | 2015-03-20T19:47:04.000Z | 2018-02-20T22:07:08.000Z | service/src/gen/thrift/gen-py/cli_service/ttypes.py | exponea/hive | 460ea2040683c5fad0ab5b215b2d45946a2a44e2 | [
"Apache-2.0"
] | null | null | null | service/src/gen/thrift/gen-py/cli_service/ttypes.py | exponea/hive | 460ea2040683c5fad0ab5b215b2d45946a2a44e2 | [
"Apache-2.0"
] | 7 | 2015-12-22T14:52:08.000Z | 2019-06-14T07:45:01.000Z | #
# Autogenerated by Thrift Compiler (0.7.0)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
from thrift.Thrift import *
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except:
fastbinary = None
class TProtocolVersion:
HIVE_CLI_SERVICE_PROTOCOL_V1 = 0
_VALUES_TO_NAMES = {
0: "HIVE_CLI_SERVICE_PROTOCOL_V1",
}
_NAMES_TO_VALUES = {
"HIVE_CLI_SERVICE_PROTOCOL_V1": 0,
}
class TTypeId:
  # Named TTypeId rather than TType: a class named TType here would shadow
  # thrift.Thrift.TType (star-imported above), whose STOP/I32/STRING/MAP
  # constants the thrift_spec tuples in the structs below rely on.
BOOLEAN_TYPE = 0
TINYINT_TYPE = 1
SMALLINT_TYPE = 2
INT_TYPE = 3
BIGINT_TYPE = 4
FLOAT_TYPE = 5
DOUBLE_TYPE = 6
STRING_TYPE = 7
TIMESTAMP_TYPE = 8
BINARY_TYPE = 9
ARRAY_TYPE = 10
MAP_TYPE = 11
STRUCT_TYPE = 12
UNION_TYPE = 13
USER_DEFINED_TYPE = 14
_VALUES_TO_NAMES = {
0: "BOOLEAN_TYPE",
1: "TINYINT_TYPE",
2: "SMALLINT_TYPE",
3: "INT_TYPE",
4: "BIGINT_TYPE",
5: "FLOAT_TYPE",
6: "DOUBLE_TYPE",
7: "STRING_TYPE",
8: "TIMESTAMP_TYPE",
9: "BINARY_TYPE",
10: "ARRAY_TYPE",
11: "MAP_TYPE",
12: "STRUCT_TYPE",
13: "UNION_TYPE",
14: "USER_DEFINED_TYPE",
}
_NAMES_TO_VALUES = {
"BOOLEAN_TYPE": 0,
"TINYINT_TYPE": 1,
"SMALLINT_TYPE": 2,
"INT_TYPE": 3,
"BIGINT_TYPE": 4,
"FLOAT_TYPE": 5,
"DOUBLE_TYPE": 6,
"STRING_TYPE": 7,
"TIMESTAMP_TYPE": 8,
"BINARY_TYPE": 9,
"ARRAY_TYPE": 10,
"MAP_TYPE": 11,
"STRUCT_TYPE": 12,
"UNION_TYPE": 13,
"USER_DEFINED_TYPE": 14,
}
class TStatusCode:
SUCCESS_STATUS = 0
SUCCESS_WITH_INFO_STATUS = 1
STILL_EXECUTING_STATUS = 2
ERROR_STATUS = 3
INVALID_HANDLE_STATUS = 4
_VALUES_TO_NAMES = {
0: "SUCCESS_STATUS",
1: "SUCCESS_WITH_INFO_STATUS",
2: "STILL_EXECUTING_STATUS",
3: "ERROR_STATUS",
4: "INVALID_HANDLE_STATUS",
}
_NAMES_TO_VALUES = {
"SUCCESS_STATUS": 0,
"SUCCESS_WITH_INFO_STATUS": 1,
"STILL_EXECUTING_STATUS": 2,
"ERROR_STATUS": 3,
"INVALID_HANDLE_STATUS": 4,
}
class TOperationState:
INITIALIZED_STATE = 0
RUNNING_STATE = 1
FINISHED_STATE = 2
CANCELED_STATE = 3
CLOSED_STATE = 4
ERROR_STATE = 5
UKNOWN_STATE = 6
_VALUES_TO_NAMES = {
0: "INITIALIZED_STATE",
1: "RUNNING_STATE",
2: "FINISHED_STATE",
3: "CANCELED_STATE",
4: "CLOSED_STATE",
5: "ERROR_STATE",
6: "UKNOWN_STATE",
}
_NAMES_TO_VALUES = {
"INITIALIZED_STATE": 0,
"RUNNING_STATE": 1,
"FINISHED_STATE": 2,
"CANCELED_STATE": 3,
"CLOSED_STATE": 4,
"ERROR_STATE": 5,
"UKNOWN_STATE": 6,
}
class TOperationType:
EXECUTE_STATEMENT = 0
GET_TYPE_INFO = 1
GET_CATALOGS = 2
GET_SCHEMAS = 3
GET_TABLES = 4
GET_TABLE_TYPES = 5
GET_COLUMNS = 6
GET_FUNCTIONS = 7
UNKNOWN = 8
_VALUES_TO_NAMES = {
0: "EXECUTE_STATEMENT",
1: "GET_TYPE_INFO",
2: "GET_CATALOGS",
3: "GET_SCHEMAS",
4: "GET_TABLES",
5: "GET_TABLE_TYPES",
6: "GET_COLUMNS",
7: "GET_FUNCTIONS",
8: "UNKNOWN",
}
_NAMES_TO_VALUES = {
"EXECUTE_STATEMENT": 0,
"GET_TYPE_INFO": 1,
"GET_CATALOGS": 2,
"GET_SCHEMAS": 3,
"GET_TABLES": 4,
"GET_TABLE_TYPES": 5,
"GET_COLUMNS": 6,
"GET_FUNCTIONS": 7,
"UNKNOWN": 8,
}
class TGetInfoType:
CLI_MAX_DRIVER_CONNECTIONS = 0
CLI_MAX_CONCURRENT_ACTIVITIES = 1
CLI_DATA_SOURCE_NAME = 2
CLI_FETCH_DIRECTION = 8
CLI_SERVER_NAME = 13
CLI_SEARCH_PATTERN_ESCAPE = 14
CLI_DBMS_NAME = 17
CLI_DBMS_VER = 18
CLI_ACCESSIBLE_TABLES = 19
CLI_ACCESSIBLE_PROCEDURES = 20
CLI_CURSOR_COMMIT_BEHAVIOR = 23
CLI_DATA_SOURCE_READ_ONLY = 25
CLI_DEFAULT_TXN_ISOLATION = 26
CLI_IDENTIFIER_CASE = 28
CLI_IDENTIFIER_QUOTE_CHAR = 29
CLI_MAX_COLUMN_NAME_LEN = 30
CLI_MAX_CURSOR_NAME_LEN = 31
CLI_MAX_SCHEMA_NAME_LEN = 32
CLI_MAX_CATALOG_NAME_LEN = 34
CLI_MAX_TABLE_NAME_LEN = 35
CLI_SCROLL_CONCURRENCY = 43
CLI_TXN_CAPABLE = 46
CLI_USER_NAME = 47
CLI_TXN_ISOLATION_OPTION = 72
CLI_INTEGRITY = 73
CLI_GETDATA_EXTENSIONS = 81
CLI_NULL_COLLATION = 85
CLI_ALTER_TABLE = 86
CLI_ORDER_BY_COLUMNS_IN_SELECT = 90
CLI_SPECIAL_CHARACTERS = 94
CLI_MAX_COLUMNS_IN_GROUP_BY = 97
CLI_MAX_COLUMNS_IN_INDEX = 98
CLI_MAX_COLUMNS_IN_ORDER_BY = 99
CLI_MAX_COLUMNS_IN_SELECT = 100
CLI_MAX_COLUMNS_IN_TABLE = 101
CLI_MAX_INDEX_SIZE = 102
CLI_MAX_ROW_SIZE = 104
CLI_MAX_STATEMENT_LEN = 105
CLI_MAX_TABLES_IN_SELECT = 106
CLI_MAX_USER_NAME_LEN = 107
CLI_OJ_CAPABILITIES = 115
CLI_XOPEN_CLI_YEAR = 10000
CLI_CURSOR_SENSITIVITY = 10001
CLI_DESCRIBE_PARAMETER = 10002
CLI_CATALOG_NAME = 10003
CLI_COLLATION_SEQ = 10004
CLI_MAX_IDENTIFIER_LEN = 10005
_VALUES_TO_NAMES = {
0: "CLI_MAX_DRIVER_CONNECTIONS",
1: "CLI_MAX_CONCURRENT_ACTIVITIES",
2: "CLI_DATA_SOURCE_NAME",
8: "CLI_FETCH_DIRECTION",
13: "CLI_SERVER_NAME",
14: "CLI_SEARCH_PATTERN_ESCAPE",
17: "CLI_DBMS_NAME",
18: "CLI_DBMS_VER",
19: "CLI_ACCESSIBLE_TABLES",
20: "CLI_ACCESSIBLE_PROCEDURES",
23: "CLI_CURSOR_COMMIT_BEHAVIOR",
25: "CLI_DATA_SOURCE_READ_ONLY",
26: "CLI_DEFAULT_TXN_ISOLATION",
28: "CLI_IDENTIFIER_CASE",
29: "CLI_IDENTIFIER_QUOTE_CHAR",
30: "CLI_MAX_COLUMN_NAME_LEN",
31: "CLI_MAX_CURSOR_NAME_LEN",
32: "CLI_MAX_SCHEMA_NAME_LEN",
34: "CLI_MAX_CATALOG_NAME_LEN",
35: "CLI_MAX_TABLE_NAME_LEN",
43: "CLI_SCROLL_CONCURRENCY",
46: "CLI_TXN_CAPABLE",
47: "CLI_USER_NAME",
72: "CLI_TXN_ISOLATION_OPTION",
73: "CLI_INTEGRITY",
81: "CLI_GETDATA_EXTENSIONS",
85: "CLI_NULL_COLLATION",
86: "CLI_ALTER_TABLE",
90: "CLI_ORDER_BY_COLUMNS_IN_SELECT",
94: "CLI_SPECIAL_CHARACTERS",
97: "CLI_MAX_COLUMNS_IN_GROUP_BY",
98: "CLI_MAX_COLUMNS_IN_INDEX",
99: "CLI_MAX_COLUMNS_IN_ORDER_BY",
100: "CLI_MAX_COLUMNS_IN_SELECT",
101: "CLI_MAX_COLUMNS_IN_TABLE",
102: "CLI_MAX_INDEX_SIZE",
104: "CLI_MAX_ROW_SIZE",
105: "CLI_MAX_STATEMENT_LEN",
106: "CLI_MAX_TABLES_IN_SELECT",
107: "CLI_MAX_USER_NAME_LEN",
115: "CLI_OJ_CAPABILITIES",
10000: "CLI_XOPEN_CLI_YEAR",
10001: "CLI_CURSOR_SENSITIVITY",
10002: "CLI_DESCRIBE_PARAMETER",
10003: "CLI_CATALOG_NAME",
10004: "CLI_COLLATION_SEQ",
10005: "CLI_MAX_IDENTIFIER_LEN",
}
_NAMES_TO_VALUES = {
"CLI_MAX_DRIVER_CONNECTIONS": 0,
"CLI_MAX_CONCURRENT_ACTIVITIES": 1,
"CLI_DATA_SOURCE_NAME": 2,
"CLI_FETCH_DIRECTION": 8,
"CLI_SERVER_NAME": 13,
"CLI_SEARCH_PATTERN_ESCAPE": 14,
"CLI_DBMS_NAME": 17,
"CLI_DBMS_VER": 18,
"CLI_ACCESSIBLE_TABLES": 19,
"CLI_ACCESSIBLE_PROCEDURES": 20,
"CLI_CURSOR_COMMIT_BEHAVIOR": 23,
"CLI_DATA_SOURCE_READ_ONLY": 25,
"CLI_DEFAULT_TXN_ISOLATION": 26,
"CLI_IDENTIFIER_CASE": 28,
"CLI_IDENTIFIER_QUOTE_CHAR": 29,
"CLI_MAX_COLUMN_NAME_LEN": 30,
"CLI_MAX_CURSOR_NAME_LEN": 31,
"CLI_MAX_SCHEMA_NAME_LEN": 32,
"CLI_MAX_CATALOG_NAME_LEN": 34,
"CLI_MAX_TABLE_NAME_LEN": 35,
"CLI_SCROLL_CONCURRENCY": 43,
"CLI_TXN_CAPABLE": 46,
"CLI_USER_NAME": 47,
"CLI_TXN_ISOLATION_OPTION": 72,
"CLI_INTEGRITY": 73,
"CLI_GETDATA_EXTENSIONS": 81,
"CLI_NULL_COLLATION": 85,
"CLI_ALTER_TABLE": 86,
"CLI_ORDER_BY_COLUMNS_IN_SELECT": 90,
"CLI_SPECIAL_CHARACTERS": 94,
"CLI_MAX_COLUMNS_IN_GROUP_BY": 97,
"CLI_MAX_COLUMNS_IN_INDEX": 98,
"CLI_MAX_COLUMNS_IN_ORDER_BY": 99,
"CLI_MAX_COLUMNS_IN_SELECT": 100,
"CLI_MAX_COLUMNS_IN_TABLE": 101,
"CLI_MAX_INDEX_SIZE": 102,
"CLI_MAX_ROW_SIZE": 104,
"CLI_MAX_STATEMENT_LEN": 105,
"CLI_MAX_TABLES_IN_SELECT": 106,
"CLI_MAX_USER_NAME_LEN": 107,
"CLI_OJ_CAPABILITIES": 115,
"CLI_XOPEN_CLI_YEAR": 10000,
"CLI_CURSOR_SENSITIVITY": 10001,
"CLI_DESCRIBE_PARAMETER": 10002,
"CLI_CATALOG_NAME": 10003,
"CLI_COLLATION_SEQ": 10004,
"CLI_MAX_IDENTIFIER_LEN": 10005,
}
class TFetchOrientation:
FETCH_NEXT = 0
FETCH_PRIOR = 1
FETCH_RELATIVE = 2
FETCH_ABSOLUTE = 3
FETCH_FIRST = 4
FETCH_LAST = 5
_VALUES_TO_NAMES = {
0: "FETCH_NEXT",
1: "FETCH_PRIOR",
2: "FETCH_RELATIVE",
3: "FETCH_ABSOLUTE",
4: "FETCH_FIRST",
5: "FETCH_LAST",
}
_NAMES_TO_VALUES = {
"FETCH_NEXT": 0,
"FETCH_PRIOR": 1,
"FETCH_RELATIVE": 2,
"FETCH_ABSOLUTE": 3,
"FETCH_FIRST": 4,
"FETCH_LAST": 5,
}
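# --- hedged usage sketch (editor's addition, not generated code): the enum
# classes above are plain int constants plus two lookup dicts, so mapping
# between wire values and names is direct dict access:
#
#   TOperationState._VALUES_TO_NAMES[TOperationState.RUNNING_STATE]  # -> "RUNNING_STATE"
#   TFetchOrientation._NAMES_TO_VALUES["FETCH_NEXT"]                 # -> 0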
class TPrimitiveTypeEntry:
"""
Attributes:
- type
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'type', None, None, ), # 1
)
def __init__(self, type=None,):
self.type = type
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.type = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TPrimitiveTypeEntry')
if self.type is not None:
oprot.writeFieldBegin('type', TType.I32, 1)
oprot.writeI32(self.type)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.type is None:
raise TProtocol.TProtocolException(message='Required field type is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
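# --- hedged round-trip sketch (editor's addition, not generated code):
# serializes one of these structs with the stock Thrift Python runtime.
# TTransport and TBinaryProtocol are imported at the top of this module;
# TMemoryBuffer is the standard in-memory transport.
def _example_roundtrip():
  wbuf = TTransport.TMemoryBuffer()
  TPrimitiveTypeEntry(type=TTypeId.STRING_TYPE).write(TBinaryProtocol.TBinaryProtocol(wbuf))
  rbuf = TTransport.TMemoryBuffer(wbuf.getvalue())
  decoded = TPrimitiveTypeEntry()
  decoded.read(TBinaryProtocol.TBinaryProtocol(rbuf))
  return decoded.type == TTypeId.STRING_TYPE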
class TArrayTypeEntry:
"""
Attributes:
- objectTypePtr
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'objectTypePtr', None, None, ), # 1
)
def __init__(self, objectTypePtr=None,):
self.objectTypePtr = objectTypePtr
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.objectTypePtr = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TArrayTypeEntry')
if self.objectTypePtr is not None:
oprot.writeFieldBegin('objectTypePtr', TType.I32, 1)
oprot.writeI32(self.objectTypePtr)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.objectTypePtr is None:
raise TProtocol.TProtocolException(message='Required field objectTypePtr is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TMapTypeEntry:
"""
Attributes:
- keyTypePtr
- valueTypePtr
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'keyTypePtr', None, None, ), # 1
(2, TType.I32, 'valueTypePtr', None, None, ), # 2
)
def __init__(self, keyTypePtr=None, valueTypePtr=None,):
self.keyTypePtr = keyTypePtr
self.valueTypePtr = valueTypePtr
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.keyTypePtr = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.valueTypePtr = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TMapTypeEntry')
if self.keyTypePtr is not None:
oprot.writeFieldBegin('keyTypePtr', TType.I32, 1)
oprot.writeI32(self.keyTypePtr)
oprot.writeFieldEnd()
if self.valueTypePtr is not None:
oprot.writeFieldBegin('valueTypePtr', TType.I32, 2)
oprot.writeI32(self.valueTypePtr)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.keyTypePtr is None:
raise TProtocol.TProtocolException(message='Required field keyTypePtr is unset!')
if self.valueTypePtr is None:
raise TProtocol.TProtocolException(message='Required field valueTypePtr is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TStructTypeEntry:
"""
Attributes:
- nameToTypePtr
"""
thrift_spec = (
None, # 0
(1, TType.MAP, 'nameToTypePtr', (TType.STRING,None,TType.I32,None), None, ), # 1
)
def __init__(self, nameToTypePtr=None,):
self.nameToTypePtr = nameToTypePtr
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.MAP:
self.nameToTypePtr = {}
(_ktype1, _vtype2, _size0 ) = iprot.readMapBegin()
for _i4 in xrange(_size0):
_key5 = iprot.readString();
_val6 = iprot.readI32();
self.nameToTypePtr[_key5] = _val6
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TStructTypeEntry')
if self.nameToTypePtr is not None:
oprot.writeFieldBegin('nameToTypePtr', TType.MAP, 1)
oprot.writeMapBegin(TType.STRING, TType.I32, len(self.nameToTypePtr))
for kiter7,viter8 in self.nameToTypePtr.items():
oprot.writeString(kiter7)
oprot.writeI32(viter8)
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.nameToTypePtr is None:
raise TProtocol.TProtocolException(message='Required field nameToTypePtr is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TUnionTypeEntry:
"""
Attributes:
- nameToTypePtr
"""
thrift_spec = (
None, # 0
(1, TType.MAP, 'nameToTypePtr', (TType.STRING,None,TType.I32,None), None, ), # 1
)
def __init__(self, nameToTypePtr=None,):
self.nameToTypePtr = nameToTypePtr
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.MAP:
self.nameToTypePtr = {}
(_ktype10, _vtype11, _size9 ) = iprot.readMapBegin()
for _i13 in xrange(_size9):
_key14 = iprot.readString();
_val15 = iprot.readI32();
self.nameToTypePtr[_key14] = _val15
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TUnionTypeEntry')
if self.nameToTypePtr is not None:
oprot.writeFieldBegin('nameToTypePtr', TType.MAP, 1)
oprot.writeMapBegin(TType.STRING, TType.I32, len(self.nameToTypePtr))
for kiter16,viter17 in self.nameToTypePtr.items():
oprot.writeString(kiter16)
oprot.writeI32(viter17)
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.nameToTypePtr is None:
raise TProtocol.TProtocolException(message='Required field nameToTypePtr is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TUserDefinedTypeEntry:
"""
Attributes:
- typeClassName
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'typeClassName', None, None, ), # 1
)
def __init__(self, typeClassName=None,):
self.typeClassName = typeClassName
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.typeClassName = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TUserDefinedTypeEntry')
if self.typeClassName is not None:
oprot.writeFieldBegin('typeClassName', TType.STRING, 1)
oprot.writeString(self.typeClassName)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.typeClassName is None:
raise TProtocol.TProtocolException(message='Required field typeClassName is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TTypeEntry:
"""
Attributes:
- primitiveEntry
- arrayEntry
- mapEntry
- structEntry
- unionEntry
- userDefinedTypeEntry
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'primitiveEntry', (TPrimitiveTypeEntry, TPrimitiveTypeEntry.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'arrayEntry', (TArrayTypeEntry, TArrayTypeEntry.thrift_spec), None, ), # 2
(3, TType.STRUCT, 'mapEntry', (TMapTypeEntry, TMapTypeEntry.thrift_spec), None, ), # 3
(4, TType.STRUCT, 'structEntry', (TStructTypeEntry, TStructTypeEntry.thrift_spec), None, ), # 4
(5, TType.STRUCT, 'unionEntry', (TUnionTypeEntry, TUnionTypeEntry.thrift_spec), None, ), # 5
(6, TType.STRUCT, 'userDefinedTypeEntry', (TUserDefinedTypeEntry, TUserDefinedTypeEntry.thrift_spec), None, ), # 6
)
def __init__(self, primitiveEntry=None, arrayEntry=None, mapEntry=None, structEntry=None, unionEntry=None, userDefinedTypeEntry=None,):
self.primitiveEntry = primitiveEntry
self.arrayEntry = arrayEntry
self.mapEntry = mapEntry
self.structEntry = structEntry
self.unionEntry = unionEntry
self.userDefinedTypeEntry = userDefinedTypeEntry
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.primitiveEntry = TPrimitiveTypeEntry()
self.primitiveEntry.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.arrayEntry = TArrayTypeEntry()
self.arrayEntry.read(iprot)
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRUCT:
self.mapEntry = TMapTypeEntry()
self.mapEntry.read(iprot)
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRUCT:
self.structEntry = TStructTypeEntry()
self.structEntry.read(iprot)
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.STRUCT:
self.unionEntry = TUnionTypeEntry()
self.unionEntry.read(iprot)
else:
iprot.skip(ftype)
elif fid == 6:
if ftype == TType.STRUCT:
self.userDefinedTypeEntry = TUserDefinedTypeEntry()
self.userDefinedTypeEntry.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TTypeEntry')
if self.primitiveEntry is not None:
oprot.writeFieldBegin('primitiveEntry', TType.STRUCT, 1)
self.primitiveEntry.write(oprot)
oprot.writeFieldEnd()
if self.arrayEntry is not None:
oprot.writeFieldBegin('arrayEntry', TType.STRUCT, 2)
self.arrayEntry.write(oprot)
oprot.writeFieldEnd()
if self.mapEntry is not None:
oprot.writeFieldBegin('mapEntry', TType.STRUCT, 3)
self.mapEntry.write(oprot)
oprot.writeFieldEnd()
if self.structEntry is not None:
oprot.writeFieldBegin('structEntry', TType.STRUCT, 4)
self.structEntry.write(oprot)
oprot.writeFieldEnd()
if self.unionEntry is not None:
oprot.writeFieldBegin('unionEntry', TType.STRUCT, 5)
self.unionEntry.write(oprot)
oprot.writeFieldEnd()
if self.userDefinedTypeEntry is not None:
oprot.writeFieldBegin('userDefinedTypeEntry', TType.STRUCT, 6)
self.userDefinedTypeEntry.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TTypeDesc:
"""
Attributes:
- types
"""
thrift_spec = (
None, # 0
(1, TType.LIST, 'types', (TType.STRUCT,(TTypeEntry, TTypeEntry.thrift_spec)), None, ), # 1
)
def __init__(self, types=None,):
self.types = types
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.LIST:
self.types = []
(_etype21, _size18) = iprot.readListBegin()
for _i22 in xrange(_size18):
_elem23 = TTypeEntry()
_elem23.read(iprot)
self.types.append(_elem23)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TTypeDesc')
if self.types is not None:
oprot.writeFieldBegin('types', TType.LIST, 1)
oprot.writeListBegin(TType.STRUCT, len(self.types))
for iter24 in self.types:
iter24.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.types is None:
raise TProtocol.TProtocolException(message='Required field types is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
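# Sketch of a nested type: the first element of 'types' is the top-level
# type, and objectTypePtr indexes back into the same list. Here index 1
# points at the STRING element type, giving array<string>. TTypeId is the
# enum defined earlier in this module.
def _example_array_of_string_type_desc():
  element = TTypeEntry(primitiveEntry=TPrimitiveTypeEntry(type=TTypeId.STRING_TYPE))
  array = TTypeEntry(arrayEntry=TArrayTypeEntry(objectTypePtr=1))
  return TTypeDesc(types=[array, element])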
class TColumnDesc:
"""
Attributes:
- columnName
- typeDesc
- position
- comment
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'columnName', None, None, ), # 1
(2, TType.STRUCT, 'typeDesc', (TTypeDesc, TTypeDesc.thrift_spec), None, ), # 2
(3, TType.I32, 'position', None, None, ), # 3
(4, TType.STRING, 'comment', None, None, ), # 4
)
def __init__(self, columnName=None, typeDesc=None, position=None, comment=None,):
self.columnName = columnName
self.typeDesc = typeDesc
self.position = position
self.comment = comment
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.columnName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.typeDesc = TTypeDesc()
self.typeDesc.read(iprot)
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.I32:
self.position = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRING:
self.comment = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TColumnDesc')
if self.columnName is not None:
oprot.writeFieldBegin('columnName', TType.STRING, 1)
oprot.writeString(self.columnName)
oprot.writeFieldEnd()
if self.typeDesc is not None:
oprot.writeFieldBegin('typeDesc', TType.STRUCT, 2)
self.typeDesc.write(oprot)
oprot.writeFieldEnd()
if self.position is not None:
oprot.writeFieldBegin('position', TType.I32, 3)
oprot.writeI32(self.position)
oprot.writeFieldEnd()
if self.comment is not None:
oprot.writeFieldBegin('comment', TType.STRING, 4)
oprot.writeString(self.comment)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.columnName is None:
raise TProtocol.TProtocolException(message='Required field columnName is unset!')
if self.typeDesc is None:
raise TProtocol.TProtocolException(message='Required field typeDesc is unset!')
if self.position is None:
raise TProtocol.TProtocolException(message='Required field position is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TTableSchema:
"""
Attributes:
- columns
"""
thrift_spec = (
None, # 0
(1, TType.LIST, 'columns', (TType.STRUCT,(TColumnDesc, TColumnDesc.thrift_spec)), None, ), # 1
)
def __init__(self, columns=None,):
self.columns = columns
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.LIST:
self.columns = []
(_etype28, _size25) = iprot.readListBegin()
for _i29 in xrange(_size25):
_elem30 = TColumnDesc()
_elem30.read(iprot)
self.columns.append(_elem30)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TTableSchema')
if self.columns is not None:
oprot.writeFieldBegin('columns', TType.LIST, 1)
oprot.writeListBegin(TType.STRUCT, len(self.columns))
for iter31 in self.columns:
iter31.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.columns is None:
raise TProtocol.TProtocolException(message='Required field columns is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
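# Sketch of a one-column result schema. The 1-based 'position' is an
# assumption based on how HiveServer2 appears to populate this field.
def _example_single_column_schema():
  int_type = TTypeDesc(types=[TTypeEntry(primitiveEntry=TPrimitiveTypeEntry(type=TTypeId.INT_TYPE))])
  return TTableSchema(columns=[TColumnDesc(columnName='id', typeDesc=int_type, position=1)])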
class TBoolValue:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.BOOL, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.BOOL:
self.value = iprot.readBool();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TBoolValue')
if self.value is not None:
oprot.writeFieldBegin('value', TType.BOOL, 1)
oprot.writeBool(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TByteValue:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.BYTE, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.BYTE:
self.value = iprot.readByte();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TByteValue')
if self.value is not None:
oprot.writeFieldBegin('value', TType.BYTE, 1)
oprot.writeByte(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TI16Value:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.I16, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I16:
self.value = iprot.readI16();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TI16Value')
if self.value is not None:
oprot.writeFieldBegin('value', TType.I16, 1)
oprot.writeI16(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TI32Value:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.value = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TI32Value')
if self.value is not None:
oprot.writeFieldBegin('value', TType.I32, 1)
oprot.writeI32(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TI64Value:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.I64, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I64:
self.value = iprot.readI64();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TI64Value')
if self.value is not None:
oprot.writeFieldBegin('value', TType.I64, 1)
oprot.writeI64(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TDoubleValue:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.DOUBLE, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.DOUBLE:
self.value = iprot.readDouble();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TDoubleValue')
if self.value is not None:
oprot.writeFieldBegin('value', TType.DOUBLE, 1)
oprot.writeDouble(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TStringValue:
"""
Attributes:
- value
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'value', None, None, ), # 1
)
def __init__(self, value=None,):
self.value = value
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.value = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TStringValue')
if self.value is not None:
oprot.writeFieldBegin('value', TType.STRING, 1)
oprot.writeString(self.value)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TColumn:
"""
Attributes:
- boolColumn
- byteColumn
- i16Column
- i32Column
- i64Column
- doubleColumn
- stringColumn
"""
thrift_spec = (
None, # 0
(1, TType.LIST, 'boolColumn', (TType.STRUCT,(TBoolValue, TBoolValue.thrift_spec)), None, ), # 1
(2, TType.LIST, 'byteColumn', (TType.STRUCT,(TByteValue, TByteValue.thrift_spec)), None, ), # 2
(3, TType.LIST, 'i16Column', (TType.STRUCT,(TI16Value, TI16Value.thrift_spec)), None, ), # 3
(4, TType.LIST, 'i32Column', (TType.STRUCT,(TI32Value, TI32Value.thrift_spec)), None, ), # 4
(5, TType.LIST, 'i64Column', (TType.STRUCT,(TI64Value, TI64Value.thrift_spec)), None, ), # 5
(6, TType.LIST, 'doubleColumn', (TType.STRUCT,(TDoubleValue, TDoubleValue.thrift_spec)), None, ), # 6
(7, TType.LIST, 'stringColumn', (TType.STRUCT,(TStringValue, TStringValue.thrift_spec)), None, ), # 7
)
def __init__(self, boolColumn=None, byteColumn=None, i16Column=None, i32Column=None, i64Column=None, doubleColumn=None, stringColumn=None,):
self.boolColumn = boolColumn
self.byteColumn = byteColumn
self.i16Column = i16Column
self.i32Column = i32Column
self.i64Column = i64Column
self.doubleColumn = doubleColumn
self.stringColumn = stringColumn
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.LIST:
self.boolColumn = []
(_etype35, _size32) = iprot.readListBegin()
for _i36 in xrange(_size32):
_elem37 = TBoolValue()
_elem37.read(iprot)
self.boolColumn.append(_elem37)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.LIST:
self.byteColumn = []
(_etype41, _size38) = iprot.readListBegin()
for _i42 in xrange(_size38):
_elem43 = TByteValue()
_elem43.read(iprot)
self.byteColumn.append(_elem43)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.LIST:
self.i16Column = []
(_etype47, _size44) = iprot.readListBegin()
for _i48 in xrange(_size44):
_elem49 = TI16Value()
_elem49.read(iprot)
self.i16Column.append(_elem49)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.LIST:
self.i32Column = []
(_etype53, _size50) = iprot.readListBegin()
for _i54 in xrange(_size50):
_elem55 = TI32Value()
_elem55.read(iprot)
self.i32Column.append(_elem55)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.LIST:
self.i64Column = []
(_etype59, _size56) = iprot.readListBegin()
for _i60 in xrange(_size56):
_elem61 = TI64Value()
_elem61.read(iprot)
self.i64Column.append(_elem61)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 6:
if ftype == TType.LIST:
self.doubleColumn = []
(_etype65, _size62) = iprot.readListBegin()
for _i66 in xrange(_size62):
_elem67 = TDoubleValue()
_elem67.read(iprot)
self.doubleColumn.append(_elem67)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 7:
if ftype == TType.LIST:
self.stringColumn = []
(_etype71, _size68) = iprot.readListBegin()
for _i72 in xrange(_size68):
_elem73 = TStringValue()
_elem73.read(iprot)
self.stringColumn.append(_elem73)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TColumn')
if self.boolColumn is not None:
oprot.writeFieldBegin('boolColumn', TType.LIST, 1)
oprot.writeListBegin(TType.STRUCT, len(self.boolColumn))
for iter74 in self.boolColumn:
iter74.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.byteColumn is not None:
oprot.writeFieldBegin('byteColumn', TType.LIST, 2)
oprot.writeListBegin(TType.STRUCT, len(self.byteColumn))
for iter75 in self.byteColumn:
iter75.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.i16Column is not None:
oprot.writeFieldBegin('i16Column', TType.LIST, 3)
oprot.writeListBegin(TType.STRUCT, len(self.i16Column))
for iter76 in self.i16Column:
iter76.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.i32Column is not None:
oprot.writeFieldBegin('i32Column', TType.LIST, 4)
oprot.writeListBegin(TType.STRUCT, len(self.i32Column))
for iter77 in self.i32Column:
iter77.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.i64Column is not None:
oprot.writeFieldBegin('i64Column', TType.LIST, 5)
oprot.writeListBegin(TType.STRUCT, len(self.i64Column))
for iter78 in self.i64Column:
iter78.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.doubleColumn is not None:
oprot.writeFieldBegin('doubleColumn', TType.LIST, 6)
oprot.writeListBegin(TType.STRUCT, len(self.doubleColumn))
for iter79 in self.doubleColumn:
iter79.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.stringColumn is not None:
oprot.writeFieldBegin('stringColumn', TType.LIST, 7)
oprot.writeListBegin(TType.STRUCT, len(self.stringColumn))
for iter80 in self.stringColumn:
iter80.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
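# TColumn is the column-oriented counterpart of the row-oriented
# TRow/TColumnValue structs below: one instance carries every value of a
# single result column, and exactly one of the seven list fields is
# expected to be set. A minimal sketch, with an unset wrapper standing in
# for SQL NULL:
def _example_bool_column():
  return TColumn(boolColumn=[TBoolValue(value=True), TBoolValue(), TBoolValue(value=False)])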
class TColumnValue:
"""
Attributes:
- boolVal
- byteVal
- i16Val
- i32Val
- i64Val
- doubleVal
- stringVal
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'boolVal', (TBoolValue, TBoolValue.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'byteVal', (TByteValue, TByteValue.thrift_spec), None, ), # 2
(3, TType.STRUCT, 'i16Val', (TI16Value, TI16Value.thrift_spec), None, ), # 3
(4, TType.STRUCT, 'i32Val', (TI32Value, TI32Value.thrift_spec), None, ), # 4
(5, TType.STRUCT, 'i64Val', (TI64Value, TI64Value.thrift_spec), None, ), # 5
(6, TType.STRUCT, 'doubleVal', (TDoubleValue, TDoubleValue.thrift_spec), None, ), # 6
(7, TType.STRUCT, 'stringVal', (TStringValue, TStringValue.thrift_spec), None, ), # 7
)
def __init__(self, boolVal=None, byteVal=None, i16Val=None, i32Val=None, i64Val=None, doubleVal=None, stringVal=None,):
self.boolVal = boolVal
self.byteVal = byteVal
self.i16Val = i16Val
self.i32Val = i32Val
self.i64Val = i64Val
self.doubleVal = doubleVal
self.stringVal = stringVal
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.boolVal = TBoolValue()
self.boolVal.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.byteVal = TByteValue()
self.byteVal.read(iprot)
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRUCT:
self.i16Val = TI16Value()
self.i16Val.read(iprot)
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRUCT:
self.i32Val = TI32Value()
self.i32Val.read(iprot)
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.STRUCT:
self.i64Val = TI64Value()
self.i64Val.read(iprot)
else:
iprot.skip(ftype)
elif fid == 6:
if ftype == TType.STRUCT:
self.doubleVal = TDoubleValue()
self.doubleVal.read(iprot)
else:
iprot.skip(ftype)
elif fid == 7:
if ftype == TType.STRUCT:
self.stringVal = TStringValue()
self.stringVal.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TColumnValue')
if self.boolVal is not None:
oprot.writeFieldBegin('boolVal', TType.STRUCT, 1)
self.boolVal.write(oprot)
oprot.writeFieldEnd()
if self.byteVal is not None:
oprot.writeFieldBegin('byteVal', TType.STRUCT, 2)
self.byteVal.write(oprot)
oprot.writeFieldEnd()
if self.i16Val is not None:
oprot.writeFieldBegin('i16Val', TType.STRUCT, 3)
self.i16Val.write(oprot)
oprot.writeFieldEnd()
if self.i32Val is not None:
oprot.writeFieldBegin('i32Val', TType.STRUCT, 4)
self.i32Val.write(oprot)
oprot.writeFieldEnd()
if self.i64Val is not None:
oprot.writeFieldBegin('i64Val', TType.STRUCT, 5)
self.i64Val.write(oprot)
oprot.writeFieldEnd()
if self.doubleVal is not None:
oprot.writeFieldBegin('doubleVal', TType.STRUCT, 6)
self.doubleVal.write(oprot)
oprot.writeFieldEnd()
if self.stringVal is not None:
oprot.writeFieldBegin('stringVal', TType.STRUCT, 7)
self.stringVal.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TRow:
"""
Attributes:
- colVals
"""
thrift_spec = (
None, # 0
(1, TType.LIST, 'colVals', (TType.STRUCT,(TColumnValue, TColumnValue.thrift_spec)), None, ), # 1
)
def __init__(self, colVals=None,):
self.colVals = colVals
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.LIST:
self.colVals = []
(_etype84, _size81) = iprot.readListBegin()
for _i85 in xrange(_size81):
_elem86 = TColumnValue()
_elem86.read(iprot)
self.colVals.append(_elem86)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TRow')
if self.colVals is not None:
oprot.writeFieldBegin('colVals', TType.LIST, 1)
oprot.writeListBegin(TType.STRUCT, len(self.colVals))
for iter87 in self.colVals:
iter87.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.colVals is None:
raise TProtocol.TProtocolException(message='Required field colVals is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
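# Sketch of one row in the row-oriented encoding: one TColumnValue per
# column, each with exactly one typed wrapper set.
def _example_row():
  return TRow(colVals=[
    TColumnValue(i32Val=TI32Value(value=42)),
    TColumnValue(stringVal=TStringValue(value='hello')),
  ])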
class TRowSet:
"""
Attributes:
- startRowOffset
- rows
- columns
"""
thrift_spec = (
None, # 0
(1, TType.I64, 'startRowOffset', None, None, ), # 1
(2, TType.LIST, 'rows', (TType.STRUCT,(TRow, TRow.thrift_spec)), None, ), # 2
(3, TType.LIST, 'columns', (TType.STRUCT,(TColumn, TColumn.thrift_spec)), None, ), # 3
)
def __init__(self, startRowOffset=None, rows=None, columns=None,):
self.startRowOffset = startRowOffset
self.rows = rows
self.columns = columns
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I64:
self.startRowOffset = iprot.readI64();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.LIST:
self.rows = []
(_etype91, _size88) = iprot.readListBegin()
for _i92 in xrange(_size88):
_elem93 = TRow()
_elem93.read(iprot)
self.rows.append(_elem93)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.LIST:
self.columns = []
(_etype97, _size94) = iprot.readListBegin()
for _i98 in xrange(_size94):
_elem99 = TColumn()
_elem99.read(iprot)
self.columns.append(_elem99)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TRowSet')
if self.startRowOffset is not None:
oprot.writeFieldBegin('startRowOffset', TType.I64, 1)
oprot.writeI64(self.startRowOffset)
oprot.writeFieldEnd()
if self.rows is not None:
oprot.writeFieldBegin('rows', TType.LIST, 2)
oprot.writeListBegin(TType.STRUCT, len(self.rows))
for iter100 in self.rows:
iter100.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.columns is not None:
oprot.writeFieldBegin('columns', TType.LIST, 3)
oprot.writeListBegin(TType.STRUCT, len(self.columns))
for iter101 in self.columns:
iter101.write(oprot)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.startRowOffset is None:
raise TProtocol.TProtocolException(message='Required field startRowOffset is unset!')
if self.rows is None:
raise TProtocol.TProtocolException(message='Required field rows is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
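# Sketch of a full serialization round trip through the pure-Python binary
# protocol, using the TTransport/TBinaryProtocol modules already imported
# at the top of this file (and the _example_row helper sketched above).
def _example_roundtrip_rowset():
  buf = TTransport.TMemoryBuffer()
  TRowSet(startRowOffset=0, rows=[_example_row()]).write(TBinaryProtocol.TBinaryProtocol(buf))
  decoded = TRowSet()
  decoded.read(TBinaryProtocol.TBinaryProtocol(TTransport.TMemoryBuffer(buf.getvalue())))
  return decoded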
class TStatus:
"""
Attributes:
- statusCode
- infoMessages
- sqlState
- errorCode
- errorMessage
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'statusCode', None, None, ), # 1
(2, TType.LIST, 'infoMessages', (TType.STRING,None), None, ), # 2
(3, TType.STRING, 'sqlState', None, None, ), # 3
(4, TType.I32, 'errorCode', None, None, ), # 4
(5, TType.STRING, 'errorMessage', None, None, ), # 5
)
def __init__(self, statusCode=None, infoMessages=None, sqlState=None, errorCode=None, errorMessage=None,):
self.statusCode = statusCode
self.infoMessages = infoMessages
self.sqlState = sqlState
self.errorCode = errorCode
self.errorMessage = errorMessage
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.statusCode = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.LIST:
self.infoMessages = []
(_etype105, _size102) = iprot.readListBegin()
for _i106 in xrange(_size102):
_elem107 = iprot.readString();
self.infoMessages.append(_elem107)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.sqlState = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.I32:
self.errorCode = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.STRING:
self.errorMessage = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TStatus')
if self.statusCode is not None:
oprot.writeFieldBegin('statusCode', TType.I32, 1)
oprot.writeI32(self.statusCode)
oprot.writeFieldEnd()
if self.infoMessages is not None:
oprot.writeFieldBegin('infoMessages', TType.LIST, 2)
oprot.writeListBegin(TType.STRING, len(self.infoMessages))
for iter108 in self.infoMessages:
oprot.writeString(iter108)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.sqlState is not None:
oprot.writeFieldBegin('sqlState', TType.STRING, 3)
oprot.writeString(self.sqlState)
oprot.writeFieldEnd()
if self.errorCode is not None:
oprot.writeFieldBegin('errorCode', TType.I32, 4)
oprot.writeI32(self.errorCode)
oprot.writeFieldEnd()
if self.errorMessage is not None:
oprot.writeFieldBegin('errorMessage', TType.STRING, 5)
oprot.writeString(self.errorMessage)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.statusCode is None:
raise TProtocol.TProtocolException(message='Required field statusCode is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
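# Sketch of the usual client-side status check; the success codes follow
# the TStatusCode enum defined earlier in this module.
def _example_check_status(status):
  if status.statusCode not in (TStatusCode.SUCCESS_STATUS, TStatusCode.SUCCESS_WITH_INFO_STATUS):
    raise Exception(status.errorMessage or 'remote call failed')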
class THandleIdentifier:
"""
Attributes:
- guid
- secret
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'guid', None, None, ), # 1
(2, TType.STRING, 'secret', None, None, ), # 2
)
def __init__(self, guid=None, secret=None,):
self.guid = guid
self.secret = secret
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.guid = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.secret = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('THandleIdentifier')
if self.guid is not None:
oprot.writeFieldBegin('guid', TType.STRING, 1)
oprot.writeString(self.guid)
oprot.writeFieldEnd()
if self.secret is not None:
oprot.writeFieldBegin('secret', TType.STRING, 2)
oprot.writeString(self.secret)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.guid is None:
raise TProtocol.TProtocolException(message='Required field guid is unset!')
if self.secret is None:
raise TProtocol.TProtocolException(message='Required field secret is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
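# Aside on THandleIdentifier above: guid and secret travel as binary STRING
# fields; in HiveServer2 they appear to be 16-byte values (a UUID plus a
# per-handle secret the client echoes back unchanged). The generated code
# does not enforce any length.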
class TSessionHandle:
"""
Attributes:
- sessionId
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionId', (THandleIdentifier, THandleIdentifier.thrift_spec), None, ), # 1
)
def __init__(self, sessionId=None,):
self.sessionId = sessionId
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionId = THandleIdentifier()
self.sessionId.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TSessionHandle')
if self.sessionId is not None:
oprot.writeFieldBegin('sessionId', TType.STRUCT, 1)
self.sessionId.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionId is None:
raise TProtocol.TProtocolException(message='Required field sessionId is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TOperationHandle:
"""
Attributes:
- operationId
- operationType
- hasResultSet
- modifiedRowCount
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationId', (THandleIdentifier, THandleIdentifier.thrift_spec), None, ), # 1
(2, TType.I32, 'operationType', None, None, ), # 2
(3, TType.BOOL, 'hasResultSet', None, None, ), # 3
(4, TType.DOUBLE, 'modifiedRowCount', None, None, ), # 4
)
def __init__(self, operationId=None, operationType=None, hasResultSet=None, modifiedRowCount=None,):
self.operationId = operationId
self.operationType = operationType
self.hasResultSet = hasResultSet
self.modifiedRowCount = modifiedRowCount
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationId = THandleIdentifier()
self.operationId.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.operationType = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.BOOL:
self.hasResultSet = iprot.readBool();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.DOUBLE:
self.modifiedRowCount = iprot.readDouble();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TOperationHandle')
if self.operationId is not None:
oprot.writeFieldBegin('operationId', TType.STRUCT, 1)
self.operationId.write(oprot)
oprot.writeFieldEnd()
if self.operationType is not None:
oprot.writeFieldBegin('operationType', TType.I32, 2)
oprot.writeI32(self.operationType)
oprot.writeFieldEnd()
if self.hasResultSet is not None:
oprot.writeFieldBegin('hasResultSet', TType.BOOL, 3)
oprot.writeBool(self.hasResultSet)
oprot.writeFieldEnd()
if self.modifiedRowCount is not None:
oprot.writeFieldBegin('modifiedRowCount', TType.DOUBLE, 4)
oprot.writeDouble(self.modifiedRowCount)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationId is None:
raise TProtocol.TProtocolException(message='Required field operationId is unset!')
if self.operationType is None:
raise TProtocol.TProtocolException(message='Required field operationType is unset!')
if self.hasResultSet is None:
raise TProtocol.TProtocolException(message='Required field hasResultSet is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
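# Aside on TOperationHandle above: modifiedRowCount is a DOUBLE rather than
# an integer, which lets servers report non-integral or negative values
# when the row count of a DML operation is unknown; it is left unset for
# operations that return a result set. This reading follows the upstream
# TCLIService IDL and is not enforced here.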
class TOpenSessionReq:
"""
Attributes:
- client_protocol
- username
- password
- configuration
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'client_protocol', None, 0, ), # 1
(2, TType.STRING, 'username', None, None, ), # 2
(3, TType.STRING, 'password', None, None, ), # 3
(4, TType.MAP, 'configuration', (TType.STRING,None,TType.STRING,None), None, ), # 4
)
def __init__(self, client_protocol=thrift_spec[1][4], username=None, password=None, configuration=None,):
self.client_protocol = client_protocol
self.username = username
self.password = password
self.configuration = configuration
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.client_protocol = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.username = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.password = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.MAP:
self.configuration = {}
(_ktype110, _vtype111, _size109 ) = iprot.readMapBegin()
for _i113 in xrange(_size109):
_key114 = iprot.readString();
_val115 = iprot.readString();
self.configuration[_key114] = _val115
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TOpenSessionReq')
if self.client_protocol is not None:
oprot.writeFieldBegin('client_protocol', TType.I32, 1)
oprot.writeI32(self.client_protocol)
oprot.writeFieldEnd()
if self.username is not None:
oprot.writeFieldBegin('username', TType.STRING, 2)
oprot.writeString(self.username)
oprot.writeFieldEnd()
if self.password is not None:
oprot.writeFieldBegin('password', TType.STRING, 3)
oprot.writeString(self.password)
oprot.writeFieldEnd()
if self.configuration is not None:
oprot.writeFieldBegin('configuration', TType.MAP, 4)
oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.configuration))
for kiter116,viter117 in self.configuration.items():
oprot.writeString(kiter116)
oprot.writeString(viter117)
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.client_protocol is None:
raise TProtocol.TProtocolException(message='Required field client_protocol is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
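# --- Illustrative usage sketch (not generator output) -----------------------
# A minimal round-trip for TOpenSessionReq through the plain (non-accelerated)
# binary protocol, assuming the TTransport and TBinaryProtocol modules already
# imported at the top of this file. write() emits only fields that are not
# None, and field 4 (configuration) travels as a STRING -> STRING map; the
# config key below is just an example.
def _example_open_session_roundtrip():
  req = TOpenSessionReq(username='hive', password='',
                        configuration={'hive.server2.example': 'value'})
  req.validate()  # passes: client_protocol carries a non-None default
  buf = TTransport.TMemoryBuffer()
  req.write(TBinaryProtocol.TBinaryProtocol(buf))
  decoded = TOpenSessionReq()
  decoded.read(TBinaryProtocol.TBinaryProtocol(
      TTransport.TMemoryBuffer(buf.getvalue())))
  assert decoded == req  # __eq__ compares the full __dict__
# -----------------------------------------------------------------------------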
class TOpenSessionResp:
"""
Attributes:
- status
- serverProtocolVersion
- sessionHandle
- configuration
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.I32, 'serverProtocolVersion', None, 0, ), # 2
(3, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 3
(4, TType.MAP, 'configuration', (TType.STRING,None,TType.STRING,None), None, ), # 4
)
def __init__(self, status=None, serverProtocolVersion=thrift_spec[2][4], sessionHandle=None, configuration=None,):
self.status = status
self.serverProtocolVersion = serverProtocolVersion
self.sessionHandle = sessionHandle
self.configuration = configuration
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.serverProtocolVersion = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.MAP:
self.configuration = {}
(_ktype119, _vtype120, _size118 ) = iprot.readMapBegin()
for _i122 in xrange(_size118):
_key123 = iprot.readString();
_val124 = iprot.readString();
self.configuration[_key123] = _val124
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TOpenSessionResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.serverProtocolVersion is not None:
oprot.writeFieldBegin('serverProtocolVersion', TType.I32, 2)
oprot.writeI32(self.serverProtocolVersion)
oprot.writeFieldEnd()
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 3)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.configuration is not None:
oprot.writeFieldBegin('configuration', TType.MAP, 4)
oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.configuration))
for kiter125,viter126 in self.configuration.items():
oprot.writeString(kiter125)
oprot.writeString(viter126)
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
if self.serverProtocolVersion is None:
raise TProtocol.TProtocolException(message='Required field serverProtocolVersion is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TCloseSessionReq:
"""
Attributes:
- sessionHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
)
def __init__(self, sessionHandle=None,):
self.sessionHandle = sessionHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCloseSessionReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TCloseSessionResp:
"""
Attributes:
- status
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
)
def __init__(self, status=None,):
self.status = status
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCloseSessionResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
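# --- Illustrative usage sketch (not generator output) -----------------------
# validate() enforces only the fields marked required in the IDL; everything
# else may legally stay None. TCloseSessionResp makes this easy to see, since
# status is its sole required field.
def _example_required_field_check():
  resp = TCloseSessionResp()  # status deliberately left unset
  try:
    resp.validate()
  except TProtocol.TProtocolException:
    pass  # raised with 'Required field status is unset!'
# -----------------------------------------------------------------------------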
class TGetInfoValue:
"""
Attributes:
- stringValue
- smallIntValue
- integerBitmask
- integerFlag
- binaryValue
- lenValue
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'stringValue', None, None, ), # 1
(2, TType.I16, 'smallIntValue', None, None, ), # 2
(3, TType.I32, 'integerBitmask', None, None, ), # 3
(4, TType.I32, 'integerFlag', None, None, ), # 4
(5, TType.I32, 'binaryValue', None, None, ), # 5
(6, TType.I64, 'lenValue', None, None, ), # 6
)
def __init__(self, stringValue=None, smallIntValue=None, integerBitmask=None, integerFlag=None, binaryValue=None, lenValue=None,):
self.stringValue = stringValue
self.smallIntValue = smallIntValue
self.integerBitmask = integerBitmask
self.integerFlag = integerFlag
self.binaryValue = binaryValue
self.lenValue = lenValue
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.stringValue = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I16:
self.smallIntValue = iprot.readI16();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.I32:
self.integerBitmask = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.I32:
self.integerFlag = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.I32:
self.binaryValue = iprot.readI32();
else:
iprot.skip(ftype)
elif fid == 6:
if ftype == TType.I64:
self.lenValue = iprot.readI64();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetInfoValue')
if self.stringValue is not None:
oprot.writeFieldBegin('stringValue', TType.STRING, 1)
oprot.writeString(self.stringValue)
oprot.writeFieldEnd()
if self.smallIntValue is not None:
oprot.writeFieldBegin('smallIntValue', TType.I16, 2)
oprot.writeI16(self.smallIntValue)
oprot.writeFieldEnd()
if self.integerBitmask is not None:
oprot.writeFieldBegin('integerBitmask', TType.I32, 3)
oprot.writeI32(self.integerBitmask)
oprot.writeFieldEnd()
if self.integerFlag is not None:
oprot.writeFieldBegin('integerFlag', TType.I32, 4)
oprot.writeI32(self.integerFlag)
oprot.writeFieldEnd()
if self.binaryValue is not None:
oprot.writeFieldBegin('binaryValue', TType.I32, 5)
oprot.writeI32(self.binaryValue)
oprot.writeFieldEnd()
if self.lenValue is not None:
oprot.writeFieldBegin('lenValue', TType.I64, 6)
oprot.writeI64(self.lenValue)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
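# --- Illustrative usage sketch (not generator output) -----------------------
# TGetInfoValue appears to correspond to a Thrift union in the IDL (an
# assumption inferred from its shape): callers conventionally set exactly one
# of the six fields, and validate() does not enforce that on its own. Because
# write() skips None fields, a round-tripped copy carries only the field that
# was set.
def _example_info_value_union():
  v = TGetInfoValue(stringValue='Apache Hive')
  v.validate()  # no required fields, so this always passes
  buf = TTransport.TMemoryBuffer()
  v.write(TBinaryProtocol.TBinaryProtocol(buf))
  copy = TGetInfoValue()
  copy.read(TBinaryProtocol.TBinaryProtocol(
      TTransport.TMemoryBuffer(buf.getvalue())))
  assert copy.stringValue == 'Apache Hive' and copy.lenValue is None
# -----------------------------------------------------------------------------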
class TGetInfoReq:
"""
Attributes:
- sessionHandle
- infoType
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.I32, 'infoType', None, None, ), # 2
)
def __init__(self, sessionHandle=None, infoType=None,):
self.sessionHandle = sessionHandle
self.infoType = infoType
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.infoType = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetInfoReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.infoType is not None:
oprot.writeFieldBegin('infoType', TType.I32, 2)
oprot.writeI32(self.infoType)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
if self.infoType is None:
raise TProtocol.TProtocolException(message='Required field infoType is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetInfoResp:
"""
Attributes:
- status
- infoValue
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'infoValue', (TGetInfoValue, TGetInfoValue.thrift_spec), None, ), # 2
)
def __init__(self, status=None, infoValue=None,):
self.status = status
self.infoValue = infoValue
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.infoValue = TGetInfoValue()
self.infoValue.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetInfoResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.infoValue is not None:
oprot.writeFieldBegin('infoValue', TType.STRUCT, 2)
self.infoValue.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
if self.infoValue is None:
raise TProtocol.TProtocolException(message='Required field infoValue is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TExecuteStatementReq:
"""
Attributes:
- sessionHandle
- statement
- confOverlay
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.STRING, 'statement', None, None, ), # 2
(3, TType.MAP, 'confOverlay', (TType.STRING,None,TType.STRING,None), None, ), # 3
)
def __init__(self, sessionHandle=None, statement=None, confOverlay=None,):
self.sessionHandle = sessionHandle
self.statement = statement
self.confOverlay = confOverlay
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.statement = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.MAP:
self.confOverlay = {}
(_ktype128, _vtype129, _size127 ) = iprot.readMapBegin()
for _i131 in xrange(_size127):
_key132 = iprot.readString();
_val133 = iprot.readString();
self.confOverlay[_key132] = _val133
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TExecuteStatementReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.statement is not None:
oprot.writeFieldBegin('statement', TType.STRING, 2)
oprot.writeString(self.statement)
oprot.writeFieldEnd()
if self.confOverlay is not None:
oprot.writeFieldBegin('confOverlay', TType.MAP, 3)
oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.confOverlay))
for kiter134,viter135 in self.confOverlay.items():
oprot.writeString(kiter134)
oprot.writeString(viter135)
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
if self.statement is None:
raise TProtocol.TProtocolException(message='Required field statement is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
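# --- Illustrative usage sketch (not generator output) -----------------------
# Building the central request in this file: sessionHandle and statement are
# required (see validate() above), while confOverlay is an optional
# per-statement STRING -> STRING configuration override map. The session
# handle comes from a prior TOpenSessionResp; the config key below is only an
# example.
def _example_execute_statement(session_handle):
  req = TExecuteStatementReq(
      sessionHandle=session_handle,
      statement='SELECT 1',
      confOverlay={'example.conf.key': 'value'})
  req.validate()  # raises if sessionHandle or statement is still None
  return req
# -----------------------------------------------------------------------------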
class TExecuteStatementResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TExecuteStatementResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetTypeInfoReq:
"""
Attributes:
- sessionHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
)
def __init__(self, sessionHandle=None,):
self.sessionHandle = sessionHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTypeInfoReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetTypeInfoResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTypeInfoResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetCatalogsReq:
"""
Attributes:
- sessionHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
)
def __init__(self, sessionHandle=None,):
self.sessionHandle = sessionHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetCatalogsReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetCatalogsResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetCatalogsResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
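# --- Illustrative usage sketch (not generator output) -----------------------
# TGetCatalogsResp, TGetSchemasResp, TGetTablesResp and the rest of the
# metadata family share one shape: a required status plus an optional
# operationHandle, so a single helper can check any of them. TStatusCode and
# its SUCCESS_STATUS member are assumed from the enums defined earlier in this
# module.
def _example_check_metadata_resp(resp):
  resp.validate()  # status is the only required field on these Resp structs
  if resp.status.statusCode != TStatusCode.SUCCESS_STATUS:
    raise RuntimeError('metadata call failed: %r' % resp.status)
  return resp.operationHandle  # may be None if the server sent no handle
# -----------------------------------------------------------------------------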
class TGetSchemasReq:
"""
Attributes:
- sessionHandle
- catalogName
- schemaName
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.STRING, 'catalogName', None, None, ), # 2
(3, TType.STRING, 'schemaName', None, None, ), # 3
)
def __init__(self, sessionHandle=None, catalogName=None, schemaName=None,):
self.sessionHandle = sessionHandle
self.catalogName = catalogName
self.schemaName = schemaName
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.catalogName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.schemaName = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetSchemasReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.catalogName is not None:
oprot.writeFieldBegin('catalogName', TType.STRING, 2)
oprot.writeString(self.catalogName)
oprot.writeFieldEnd()
if self.schemaName is not None:
oprot.writeFieldBegin('schemaName', TType.STRING, 3)
oprot.writeString(self.schemaName)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetSchemasResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetSchemasResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetTablesReq:
"""
Attributes:
- sessionHandle
- catalogName
- schemaName
- tableName
- tableTypes
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.STRING, 'catalogName', None, None, ), # 2
(3, TType.STRING, 'schemaName', None, None, ), # 3
(4, TType.STRING, 'tableName', None, None, ), # 4
(5, TType.LIST, 'tableTypes', (TType.STRING,None), None, ), # 5
)
def __init__(self, sessionHandle=None, catalogName=None, schemaName=None, tableName=None, tableTypes=None,):
self.sessionHandle = sessionHandle
self.catalogName = catalogName
self.schemaName = schemaName
self.tableName = tableName
self.tableTypes = tableTypes
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.catalogName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.schemaName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRING:
self.tableName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.LIST:
self.tableTypes = []
(_etype139, _size136) = iprot.readListBegin()
for _i140 in xrange(_size136):
_elem141 = iprot.readString();
self.tableTypes.append(_elem141)
iprot.readListEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTablesReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.catalogName is not None:
oprot.writeFieldBegin('catalogName', TType.STRING, 2)
oprot.writeString(self.catalogName)
oprot.writeFieldEnd()
if self.schemaName is not None:
oprot.writeFieldBegin('schemaName', TType.STRING, 3)
oprot.writeString(self.schemaName)
oprot.writeFieldEnd()
if self.tableName is not None:
oprot.writeFieldBegin('tableName', TType.STRING, 4)
oprot.writeString(self.tableName)
oprot.writeFieldEnd()
if self.tableTypes is not None:
oprot.writeFieldBegin('tableTypes', TType.LIST, 5)
oprot.writeListBegin(TType.STRING, len(self.tableTypes))
for iter142 in self.tableTypes:
oprot.writeString(iter142)
oprot.writeListEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
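# --- Illustrative usage sketch (not generator output) -----------------------
# TGetTablesReq carries the one LIST field in this excerpt: tableTypes is
# written element-by-element as STRING values under field id 5. Only
# sessionHandle is required; the name arguments act as filters and, by the
# usual JDBC-style metadata convention, accept search patterns such as '%'.
def _example_get_tables(session_handle):
  req = TGetTablesReq(sessionHandle=session_handle,
                      schemaName='default',
                      tableName='%',
                      tableTypes=['TABLE', 'VIEW'])
  req.validate()  # the filter fields may legally stay None
  return req
# -----------------------------------------------------------------------------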
class TGetTablesResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTablesResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetTableTypesReq:
"""
Attributes:
- sessionHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
)
def __init__(self, sessionHandle=None,):
self.sessionHandle = sessionHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTableTypesReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetTableTypesResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetTableTypesResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetColumnsReq:
"""
Attributes:
- sessionHandle
- catalogName
- schemaName
- tableName
- columnName
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.STRING, 'catalogName', None, None, ), # 2
(3, TType.STRING, 'schemaName', None, None, ), # 3
(4, TType.STRING, 'tableName', None, None, ), # 4
(5, TType.STRING, 'columnName', None, None, ), # 5
)
def __init__(self, sessionHandle=None, catalogName=None, schemaName=None, tableName=None, columnName=None,):
self.sessionHandle = sessionHandle
self.catalogName = catalogName
self.schemaName = schemaName
self.tableName = tableName
self.columnName = columnName
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.catalogName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.schemaName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRING:
self.tableName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 5:
if ftype == TType.STRING:
self.columnName = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetColumnsReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.catalogName is not None:
oprot.writeFieldBegin('catalogName', TType.STRING, 2)
oprot.writeString(self.catalogName)
oprot.writeFieldEnd()
if self.schemaName is not None:
oprot.writeFieldBegin('schemaName', TType.STRING, 3)
oprot.writeString(self.schemaName)
oprot.writeFieldEnd()
if self.tableName is not None:
oprot.writeFieldBegin('tableName', TType.STRING, 4)
oprot.writeString(self.tableName)
oprot.writeFieldEnd()
if self.columnName is not None:
oprot.writeFieldBegin('columnName', TType.STRING, 5)
oprot.writeString(self.columnName)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetColumnsResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetColumnsResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetFunctionsReq:
"""
Attributes:
- sessionHandle
- catalogName
- schemaName
- functionName
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'sessionHandle', (TSessionHandle, TSessionHandle.thrift_spec), None, ), # 1
(2, TType.STRING, 'catalogName', None, None, ), # 2
(3, TType.STRING, 'schemaName', None, None, ), # 3
(4, TType.STRING, 'functionName', None, None, ), # 4
)
def __init__(self, sessionHandle=None, catalogName=None, schemaName=None, functionName=None,):
self.sessionHandle = sessionHandle
self.catalogName = catalogName
self.schemaName = schemaName
self.functionName = functionName
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.sessionHandle = TSessionHandle()
self.sessionHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.catalogName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.schemaName = iprot.readString();
else:
iprot.skip(ftype)
elif fid == 4:
if ftype == TType.STRING:
self.functionName = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetFunctionsReq')
if self.sessionHandle is not None:
oprot.writeFieldBegin('sessionHandle', TType.STRUCT, 1)
self.sessionHandle.write(oprot)
oprot.writeFieldEnd()
if self.catalogName is not None:
oprot.writeFieldBegin('catalogName', TType.STRING, 2)
oprot.writeString(self.catalogName)
oprot.writeFieldEnd()
if self.schemaName is not None:
oprot.writeFieldBegin('schemaName', TType.STRING, 3)
oprot.writeString(self.schemaName)
oprot.writeFieldEnd()
if self.functionName is not None:
oprot.writeFieldBegin('functionName', TType.STRING, 4)
oprot.writeString(self.functionName)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.sessionHandle is None:
raise TProtocol.TProtocolException(message='Required field sessionHandle is unset!')
if self.functionName is None:
raise TProtocol.TProtocolException(message='Required field functionName is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
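# --- Illustrative usage sketch (not generator output) -----------------------
# Unlike the other metadata requests in this file, TGetFunctionsReq marks
# functionName as required (see validate() above), so a name or pattern must
# always be supplied even when listing everything.
def _example_get_functions(session_handle):
  req = TGetFunctionsReq(sessionHandle=session_handle, functionName='%')
  req.validate()  # would raise without functionName
  return req
# -----------------------------------------------------------------------------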
class TGetFunctionsResp:
"""
Attributes:
- status
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 2
)
def __init__(self, status=None, operationHandle=None,):
self.status = status
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetFunctionsResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 2)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetOperationStatusReq:
"""
Attributes:
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 1
)
def __init__(self, operationHandle=None,):
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetOperationStatusReq')
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 1)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationHandle is None:
raise TProtocol.TProtocolException(message='Required field operationHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetOperationStatusResp:
"""
Attributes:
- status
- operationState
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.I32, 'operationState', None, None, ), # 2
)
def __init__(self, status=None, operationState=None,):
self.status = status
self.operationState = operationState
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.operationState = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetOperationStatusResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.operationState is not None:
oprot.writeFieldBegin('operationState', TType.I32, 2)
oprot.writeI32(self.operationState)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
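# --- Illustrative usage sketch (not generator output) -----------------------
# A typical polling loop built from the two structs above. The client object
# with a GetOperationStatus() method and the TOperationState enum (with
# FINISHED_STATE / CANCELED_STATE / ERROR_STATE members) live outside this
# excerpt, so both are assumptions here.
def _example_poll_operation(client, operation_handle):
  import time
  while True:
    resp = client.GetOperationStatus(
        TGetOperationStatusReq(operationHandle=operation_handle))
    resp.validate()
    if resp.operationState in (TOperationState.FINISHED_STATE,
                               TOperationState.CANCELED_STATE,
                               TOperationState.ERROR_STATE):
      return resp.operationState
    time.sleep(0.5)
# -----------------------------------------------------------------------------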
class TCancelOperationReq:
"""
Attributes:
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 1
)
def __init__(self, operationHandle=None,):
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCancelOperationReq')
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 1)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationHandle is None:
raise TProtocol.TProtocolException(message='Required field operationHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TCancelOperationResp:
"""
Attributes:
- status
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
)
def __init__(self, status=None,):
self.status = status
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCancelOperationResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TCloseOperationReq:
"""
Attributes:
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 1
)
def __init__(self, operationHandle=None,):
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCloseOperationReq')
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 1)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationHandle is None:
raise TProtocol.TProtocolException(message='Required field operationHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TCloseOperationResp:
"""
Attributes:
- status
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
)
def __init__(self, status=None,):
self.status = status
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TCloseOperationResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetResultSetMetadataReq:
"""
Attributes:
- operationHandle
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 1
)
def __init__(self, operationHandle=None,):
self.operationHandle = operationHandle
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetResultSetMetadataReq')
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 1)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationHandle is None:
raise TProtocol.TProtocolException(message='Required field operationHandle is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TGetResultSetMetadataResp:
"""
Attributes:
- status
- schema
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'schema', (TTableSchema, TTableSchema.thrift_spec), None, ), # 2
)
def __init__(self, status=None, schema=None,):
self.status = status
self.schema = schema
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.schema = TTableSchema()
self.schema.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TGetResultSetMetadataResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.schema is not None:
oprot.writeFieldBegin('schema', TType.STRUCT, 2)
self.schema.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TFetchResultsReq:
"""
Attributes:
- operationHandle
- orientation
- maxRows
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'operationHandle', (TOperationHandle, TOperationHandle.thrift_spec), None, ), # 1
(2, TType.I32, 'orientation', None, 0, ), # 2
(3, TType.I64, 'maxRows', None, None, ), # 3
)
def __init__(self, operationHandle=None, orientation=thrift_spec[2][4], maxRows=None,):
self.operationHandle = operationHandle
self.orientation = orientation
self.maxRows = maxRows
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.operationHandle = TOperationHandle()
self.operationHandle.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.orientation = iprot.readI32()
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.I64:
self.maxRows = iprot.readI64()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TFetchResultsReq')
if self.operationHandle is not None:
oprot.writeFieldBegin('operationHandle', TType.STRUCT, 1)
self.operationHandle.write(oprot)
oprot.writeFieldEnd()
if self.orientation is not None:
oprot.writeFieldBegin('orientation', TType.I32, 2)
oprot.writeI32(self.orientation)
oprot.writeFieldEnd()
if self.maxRows is not None:
oprot.writeFieldBegin('maxRows', TType.I64, 3)
oprot.writeI64(self.maxRows)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.operationHandle is None:
raise TProtocol.TProtocolException(message='Required field operationHandle is unset!')
if self.orientation is None:
raise TProtocol.TProtocolException(message='Required field orientation is unset!')
if self.maxRows is None:
raise TProtocol.TProtocolException(message='Required field maxRows is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class TFetchResultsResp:
"""
Attributes:
- status
- hasMoreRows
- results
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'status', (TStatus, TStatus.thrift_spec), None, ), # 1
(2, TType.BOOL, 'hasMoreRows', None, None, ), # 2
(3, TType.STRUCT, 'results', (TRowSet, TRowSet.thrift_spec), None, ), # 3
)
def __init__(self, status=None, hasMoreRows=None, results=None,):
self.status = status
self.hasMoreRows = hasMoreRows
self.results = results
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.status = TStatus()
self.status.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.BOOL:
self.hasMoreRows = iprot.readBool()
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRUCT:
self.results = TRowSet()
self.results.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('TFetchResultsResp')
if self.status is not None:
oprot.writeFieldBegin('status', TType.STRUCT, 1)
self.status.write(oprot)
oprot.writeFieldEnd()
if self.hasMoreRows is not None:
oprot.writeFieldBegin('hasMoreRows', TType.BOOL, 2)
oprot.writeBool(self.hasMoreRows)
oprot.writeFieldEnd()
if self.results is not None:
oprot.writeFieldBegin('results', TType.STRUCT, 3)
self.results.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
if self.status is None:
raise TProtocol.TProtocolException(message='Required field status is unset!')
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.items()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
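# Illustrative sketch (not part of the generated code): round-tripping one of
# these structs through the plain (non-accelerated) binary protocol, using the
# TTransport and TBinaryProtocol modules this generated file already imports.
def _example_round_trip(handle):
    # Build a request; orientation falls back to its thrift_spec default (0).
    req = TFetchResultsReq(operationHandle=handle, maxRows=100)
    req.validate()  # raises TProtocolException if a required field is unset
    # Serialize into an in-memory transport.
    wbuf = TTransport.TMemoryBuffer()
    req.write(TBinaryProtocol.TBinaryProtocol(wbuf))
    # Parse the bytes back into a fresh struct; __eq__ compares __dict__.
    decoded = TFetchResultsReq()
    decoded.read(TBinaryProtocol.TBinaryProtocol(TTransport.TMemoryBuffer(wbuf.getvalue())))
    assert decoded == req
    return decoded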
| 32.285516 | 188 | 0.659829 | 18,519 | 163,171 | 5.571359 | 0.028835 | 0.03528 | 0.031664 | 0.032973 | 0.842483 | 0.813106 | 0.789321 | 0.780114 | 0.750698 | 0.744495 | 0 | 0.015267 | 0.224838 | 163,171 | 5,053 | 189 | 32.291906 | 0.800457 | 0.01849 | 0 | 0.722571 | 1 | 0 | 0.059581 | 0.011511 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098115 | false | 0.001692 | 0.000967 | 0.030691 | 0.238038 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7c1f8af7ebcc2d4bd76177e94e4df3f0325ceb45 | 7,174 | py | Python | tests/fixtures/test_auth.py | ArenaNetworks/dto-digitalmarketplace-api | d0d58924719d889503ed112b0d5801b528b0398c | ["MIT"] | null | null | null | tests/fixtures/test_auth.py | ArenaNetworks/dto-digitalmarketplace-api | d0d58924719d889503ed112b0d5801b528b0398c | ["MIT"] | null | null | null | tests/fixtures/test_auth.py | ArenaNetworks/dto-digitalmarketplace-api | d0d58924719d889503ed112b0d5801b528b0398c | ["MIT"] | 1 | 2021-08-23T06:05:06.000Z | 2021-08-23T06:05:06.000Z | import json
import pytest
from base64 import b64encode
def test_anonymous(client):
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/_protected')
assert res.status_code == 401
def test_authenticated(client, users):
client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au',
'password': 'testpassword'
}), content_type='application/json')
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.get('/2/_protected')
assert res.status_code == 200
def test_basic_auth(client, users):
header = b64encode('{}:{}'.format('test@digital.gov.au', 'testpassword').encode()).decode()
res = client.get('/2/_protected', headers={'Authorization': 'Basic {}'.format(header)})
assert res.status_code == 200
wrong_password = b64encode('{}:{}'.format('test@digital.gov.au', 'testpasswor').encode()).decode()
res = client.get('/2/_protected', headers={'Authorization': 'Basic {}'.format(wrong_password)})
assert res.status_code == 401
def test_valid_csrf(app, client):
app.config['CSRF_ENABLED'] = True
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
res = client.post('/2/_post', headers={'X-CSRFToken': data['csrfToken']})
assert res.status_code == 200
def test_invalid_csrf(app, client):
app.config['CSRF_ENABLED'] = True
res = client.post('/2/_post')
assert res.status_code == 400
def test_logout(client, users):
client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.get('/2/_protected')
assert res.status_code == 200
res = client.get('/2/logout')
assert res.status_code == 200
res = client.get('/2/_protected')
assert res.status_code == 401
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/logout')
assert res.status_code == 401
def test_login(client, users):
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au'
}), content_type='application/json')
assert res.status_code == 400
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au', 'password': 'testpasswor'
}), content_type='application/json')
assert res.status_code == 403
def test_api_key_generating_by_admin(client, users, admin_users):
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'testadmin@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.post('/2/generate-api-key/1')
assert res.status_code == 200
data = json.loads(res.get_data())
assert len(data['key']) == 64
def test_api_key_authentication(client, users, api_key):
key = api_key.key
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/ping', headers={'X-Api-Key': key})
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
def test_api_key_authentication_fails_on_non_api_key_resource(client, users, api_key):
key = api_key.key
res = client.get('/2/_protected', headers={'X-Api-Key': key})
assert res.status_code == 401
def test_api_key_authentication_fails_supplier_user(client, supplier_user):
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'j@examplecompany.biz', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.post('/2/generate-api-key/{}'.format(supplier_user.id))
assert res.status_code == 403
def test_api_key_authentication_fails_buyer_user(client, users):
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.post('/2/generate-api-key/7')
assert res.status_code == 403
def test_api_key_authentication_fails_bad_header(client, users, api_key):
key = api_key.key
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/ping', headers={'X-Apikey': key})
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/ping', headers={'X-Api-key': 'badkey'})
assert res.status_code == 403
def test_api_key_revocation(client, users, api_key):
key = api_key.key
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/ping', headers={'X-Api-Key': key})
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.post('/2/revoke-api-key/{}'.format(key))
assert res.status_code == 200
res = client.get('/2/ping', headers={'X-Api-Key': key})
assert res.status_code == 403
def test_api_key_revocation_by_admin(client, users, admin_users, api_key):
key = api_key.key
res = client.get('/2/ping')
data = json.loads(res.get_data(as_text=True))
assert not data['isAuthenticated']
res = client.get('/2/ping', headers={'X-Api-Key': key})
data = json.loads(res.get_data(as_text=True))
assert data['isAuthenticated']
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'testadmin@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.post('/2/revoke-api-key/{}'.format(key))
assert res.status_code == 200
res = client.get('/2/logout')
assert res.status_code == 200
res = client.get('/2/ping', headers={'X-Api-Key': key})
assert res.status_code == 403
def test_api_key_require_auth_decorator(client, users, api_key):
res = client.post('/2/login', data=json.dumps({
'emailAddress': 'test@digital.gov.au', 'password': 'testpassword'
}), content_type='application/json')
assert res.status_code == 200
res = client.get('/2/reports/brief/published')
assert res.status_code == 403
key = api_key.key
res = client.get('/2/reports/brief/published', headers={'X-Api-Key': key})
assert res.status_code == 200
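# Note on fixtures (descriptive, not original code): client, users,
# admin_users, supplier_user and api_key are pytest fixtures supplied by the
# project's conftest; api_key exposes the generated key string as api_key.key,
# which is how the tests above use it.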
| 30.922414 | 99 | 0.664343 | 996 | 7,174 | 4.626506 | 0.093373 | 0.087891 | 0.078125 | 0.084635 | 0.909288 | 0.883681 | 0.834418 | 0.8023 | 0.781467 | 0.720269 | 0 | 0.024966 | 0.168107 | 7,174 | 231 | 100 | 31.056277 | 0.747151 | 0 | 0 | 0.746835 | 0 | 0 | 0.2247 | 0.02286 | 0 | 0 | 0 | 0 | 0.28481 | 1 | 0.101266 | false | 0.075949 | 0.018987 | 0 | 0.120253 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
7cc33e20ddde8a8b762939d9ad76bfd3ab4e5e49 | 6,018 | py | Python | venv/Lib/site-packages/test/test_iam_token_manager.py | jo2hu6/home-assistant | 1f97943a97d511323f7bfb57facb3fe93840d726 | ["Apache-2.0"] | null | null | null | venv/Lib/site-packages/test/test_iam_token_manager.py | jo2hu6/home-assistant | 1f97943a97d511323f7bfb57facb3fe93840d726 | ["Apache-2.0"] | null | null | null | venv/Lib/site-packages/test/test_iam_token_manager.py | jo2hu6/home-assistant | 1f97943a97d511323f7bfb57facb3fe93840d726 | ["Apache-2.0"] | null | null | null | import responses
from ibm_cloud_sdk_core import IAMTokenManager
import time
import jwt
import json
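# Helper that mints a self-signed HS256 JWT shaped like an IAM access token,
# for tests that need a decodable token payload.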
def get_access_token():
access_token_layout = {
"username": "dummy",
"role": "Admin",
"permissions": [
"administrator",
"manage_catalog"
],
"sub": "admin",
"iss": "sss",
"aud": "sss",
"uid": "sss",
"iat": 3600,
"exp": int(time.time())
}
access_token = jwt.encode(access_token_layout, 'secret', algorithm='HS256', headers={'kid': '230498151c214b788dd97f22b85410a5'})
return access_token.decode('utf-8')
@responses.activate
def test_request_token_auth_default():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("apikey")
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers.get('Authorization') is None
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_ctor():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
default_auth_header = 'Basic Yng6Yng='
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("apikey", iam_url, 'foo', 'bar')
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers['Authorization'] != default_auth_header
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_ctor_client_id_only():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("iam_apikey", iam_url, 'foo')
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers.get('Authorization') is None
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_ctor_secret_only():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("iam_apikey", iam_url, None, 'bar')
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers.get('Authorization') is None
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_setter():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
default_auth_header = 'Basic Yng6Yng='
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("iam_apikey")
token_manager.set_client_id_and_secret('foo', 'bar')
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers['Authorization'] != default_auth_header
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_setter_client_id_only():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("iam_apikey")
token_manager.set_client_id_and_secret('foo', None)
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers.get('Authorization') is None
assert responses.calls[0].response.text == response
@responses.activate
def test_request_token_auth_in_setter_secret_only():
iam_url = "https://iam.cloud.ibm.com/identity/token"
response = """{
"access_token": "oAeisG8yqPY7sFR_x66Z15",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1524167011,
"refresh_token": "jy4gl91BQ"
}"""
responses.add(responses.POST, url=iam_url, body=response, status=200)
token_manager = IAMTokenManager("iam_apikey")
token_manager.set_client_id_and_secret(None, 'bar')
token_manager.set_headers({'user':'header'})
token_manager.request_token()
assert len(responses.calls) == 1
assert responses.calls[0].request.url == iam_url
assert responses.calls[0].request.headers.get('Authorization') is None
assert responses.calls[0].response.text == response
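# Illustrative sketch (not an original test): exercising the user-header path
# end to end. The leading underscore keeps pytest from collecting it; it
# assumes set_headers() merges the supplied headers into the token request,
# and the header name X-Trace-Id is hypothetical.
@responses.activate
def _sketch_request_token_with_user_headers():
    iam_url = "https://iam.cloud.ibm.com/identity/token"
    response = """{
        "access_token": "oAeisG8yqPY7sFR_x66Z15",
        "token_type": "Bearer",
        "expires_in": 3600,
        "expiration": 1524167011,
        "refresh_token": "jy4gl91BQ"
    }"""
    responses.add(responses.POST, url=iam_url, body=response, status=200)
    token_manager = IAMTokenManager("iam_apikey")
    token_manager.set_headers({'X-Trace-Id': 'abc123'})
    token_manager.request_token()
    assert len(responses.calls) == 1
    assert responses.calls[0].request.headers.get('X-Trace-Id') == 'abc123'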
| 35.192982 | 132 | 0.678963 | 712 | 6,018 | 5.505618 | 0.132022 | 0.1 | 0.107143 | 0.1125 | 0.894643 | 0.894643 | 0.894643 | 0.884439 | 0.884439 | 0.884439 | 0 | 0.049878 | 0.183782 | 6,018 | 170 | 133 | 35.4 | 0.748168 | 0 | 0 | 0.748299 | 0 | 0 | 0.324693 | 0.034397 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.054422 | false | 0 | 0.034014 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7cd5a42b4a3e51de298b99a5254c36bbc140ba6c | 11,088 | py | Python | tests/components/switch/test_mqtt.py | sara0871/laughing--barnacle- | 70412fc0ba42ccfe446c0c62e327eceeda56a2ab | ["Apache-2.0"] | 2 | 2020-12-06T23:15:21.000Z | 2021-03-20T20:21:03.000Z | tests/components/switch/test_mqtt.py | sara0871/https-wakatime.com-android-studio | 5a15b2c036b332c17d5f6a06664378e9273d684f | ["Apache-2.0"] | 3 | 2021-09-08T03:06:43.000Z | 2022-03-12T00:56:04.000Z | tests/components/switch/test_mqtt.py | sara0871/https-wakatime.com-android-studio | 5a15b2c036b332c17d5f6a06664378e9273d684f | ["Apache-2.0"] | 1 | 2021-02-22T01:56:28.000Z | 2021-02-22T01:56:28.000Z | """The tests for the MQTT switch platform."""
import unittest
from unittest.mock import patch
from homeassistant.setup import setup_component
from homeassistant.const import STATE_ON, STATE_OFF, STATE_UNAVAILABLE,\
ATTR_ASSUMED_STATE
import homeassistant.core as ha
import homeassistant.components.switch as switch
from tests.common import (
mock_mqtt_component, fire_mqtt_message, get_test_home_assistant, mock_coro)
class TestSwitchMQTT(unittest.TestCase):
"""Test the MQTT switch."""
def setUp(self): # pylint: disable=invalid-name
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.mock_publish = mock_mqtt_component(self.hass)
def tearDown(self): # pylint: disable=invalid-name
"""Stop everything that was started."""
self.hass.stop()
def test_controlling_state_via_topic(self):
"""Test the controlling state via topic."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'payload_on': 1,
'payload_off': 0
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
self.assertFalse(state.attributes.get(ATTR_ASSUMED_STATE))
fire_mqtt_message(self.hass, 'state-topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
fire_mqtt_message(self.hass, 'state-topic', '0')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
def test_sending_mqtt_commands_and_optimistic(self):
"""Test the sending MQTT commands in optimistic mode."""
fake_state = ha.State('switch.test', 'on')
with patch('homeassistant.components.switch.mqtt.async_get_last_state',
return_value=mock_coro(fake_state)):
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'command_topic': 'command-topic',
'payload_on': 'beer on',
'payload_off': 'beer off',
'qos': '2'
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
self.assertTrue(state.attributes.get(ATTR_ASSUMED_STATE))
switch.turn_on(self.hass, 'switch.test')
self.hass.block_till_done()
self.mock_publish.async_publish.assert_called_once_with(
'command-topic', 'beer on', 2, False)
self.mock_publish.async_publish.reset_mock()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
switch.turn_off(self.hass, 'switch.test')
self.hass.block_till_done()
self.mock_publish.async_publish.assert_called_once_with(
'command-topic', 'beer off', 2, False)
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
def test_controlling_state_via_topic_and_json_message(self):
"""Test the controlling state via topic and JSON message."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'payload_on': 'beer on',
'payload_off': 'beer off',
'value_template': '{{ value_json.val }}'
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
fire_mqtt_message(self.hass, 'state-topic', '{"val":"beer on"}')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
fire_mqtt_message(self.hass, 'state-topic', '{"val":"beer off"}')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
def test_controlling_availability(self):
"""Test the controlling state via topic."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'availability_topic': 'availability_topic',
'payload_on': 1,
'payload_off': 0,
'payload_available': 1,
'payload_not_available': 0
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
self.assertFalse(state.attributes.get(ATTR_ASSUMED_STATE))
fire_mqtt_message(self.hass, 'availability_topic', '0')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'state-topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
def test_default_availability_payload(self):
"""Test the availability payload."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'availability_topic': 'availability_topic',
'payload_on': 1,
'payload_off': 0
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', 'online')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
self.assertFalse(state.attributes.get(ATTR_ASSUMED_STATE))
fire_mqtt_message(self.hass, 'availability_topic', 'offline')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'state-topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', 'online')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
def test_custom_availability_payload(self):
"""Test the availability payload."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'availability_topic': 'availability_topic',
'payload_on': 1,
'payload_off': 0,
'payload_available': 'good',
'payload_not_available': 'nogood'
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', 'good')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
self.assertFalse(state.attributes.get(ATTR_ASSUMED_STATE))
fire_mqtt_message(self.hass, 'availability_topic', 'nogood')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'state-topic', '1')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_UNAVAILABLE, state.state)
fire_mqtt_message(self.hass, 'availability_topic', 'good')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
def test_custom_state_payload(self):
"""Test the state payload."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'state-topic',
'command_topic': 'command-topic',
'payload_on': 1,
'payload_off': 0,
'state_on': "HIGH",
'state_off': "LOW",
}
})
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
self.assertFalse(state.attributes.get(ATTR_ASSUMED_STATE))
fire_mqtt_message(self.hass, 'state-topic', 'HIGH')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_ON, state.state)
fire_mqtt_message(self.hass, 'state-topic', 'LOW')
self.hass.block_till_done()
state = self.hass.states.get('switch.test')
self.assertEqual(STATE_OFF, state.state)
def test_unique_id(self):
"""Test unique id option only creates one switch per unique_id."""
assert setup_component(self.hass, switch.DOMAIN, {
switch.DOMAIN: [{
'platform': 'mqtt',
'name': 'Test 1',
'state_topic': 'test-topic',
'command_topic': 'command-topic',
'unique_id': 'TOTALLY_UNIQUE'
}, {
'platform': 'mqtt',
'name': 'Test 2',
'state_topic': 'test-topic',
'command_topic': 'command-topic',
'unique_id': 'TOTALLY_UNIQUE'
}]
})
fire_mqtt_message(self.hass, 'test-topic', 'payload')
self.hass.block_till_done()
assert len(self.hass.states.async_entity_ids()) == 2
# Two entity states in total: the auto-created all-switches group plus the single switch de-duplicated by unique_id.
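# Illustrative helper (not part of the original suite): the fire-message /
# block / assert boilerplate repeated in the tests above, factored into a
# module-level function. Name and placement are hypothetical.
def assert_switch_state_after_message(test_case, topic, payload, expected_state):
    """Fire an MQTT message and assert the resulting state of switch.test."""
    fire_mqtt_message(test_case.hass, topic, payload)
    test_case.hass.block_till_done()
    state = test_case.hass.states.get('switch.test')
    test_case.assertEqual(expected_state, state.state)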
| 36.354098 | 79 | 0.594246 | 1,242 | 11,088 | 5.095008 | 0.099034 | 0.102402 | 0.064159 | 0.081068 | 0.81732 | 0.79725 | 0.786979 | 0.774968 | 0.774968 | 0.774968 | 0 | 0.003506 | 0.279672 | 11,088 | 304 | 80 | 36.473684 | 0.788782 | 0.052128 | 0 | 0.729258 | 0 | 0 | 0.178233 | 0.009476 | 0 | 0 | 0 | 0 | 0.19214 | 1 | 0.043668 | false | 0 | 0.030568 | 0 | 0.078603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7cf351b03ccc9fb18e0567dc9708f0816a20f4eb | 29,801 | py | Python | tests/integration/deploy/test_deploy_command.py | michael-k/aws-sam-cli | a8525fc8157d507c4b102477ded4d221deaed145 | ["BSD-2-Clause", "Apache-2.0"] | null | null | null | tests/integration/deploy/test_deploy_command.py | michael-k/aws-sam-cli | a8525fc8157d507c4b102477ded4d221deaed145 | ["BSD-2-Clause", "Apache-2.0"] | null | null | null | tests/integration/deploy/test_deploy_command.py | michael-k/aws-sam-cli | a8525fc8157d507c4b102477ded4d221deaed145 | ["BSD-2-Clause", "Apache-2.0"] | null | null | null | import os
import shutil
import tempfile
import uuid
import time
from unittest import skipIf
import boto3
import docker
from parameterized import parameterized
from samcli.lib.config.samconfig import DEFAULT_CONFIG_FILE_NAME
from samcli.lib.bootstrap.bootstrap import SAM_CLI_STACK_NAME
from tests.integration.deploy.deploy_integ_base import DeployIntegBase
from tests.integration.package.package_integ_base import PackageIntegBase
from tests.testing_utils import RUNNING_ON_CI, RUNNING_TEST_FOR_MASTER_ON_CI, RUN_BY_CANARY
from tests.testing_utils import CommandResult, run_command, run_command_with_input
# Deploy tests require credentials and CI/CD will only add credentials to the env if the PR is from the same repo.
# This is to restrict package tests to run outside of CI/CD, when the branch is not master or tests are not run by Canary
SKIP_DEPLOY_TESTS = RUNNING_ON_CI and RUNNING_TEST_FOR_MASTER_ON_CI and not RUN_BY_CANARY
CFN_SLEEP = 3
TIMEOUT = 300
CFN_PYTHON_VERSION_SUFFIX = os.environ.get("PYTHON_VERSION", "0.0.0").replace(".", "-")
@skipIf(SKIP_DEPLOY_TESTS, "Skip deploy tests in CI/CD only")
class TestDeploy(PackageIntegBase, DeployIntegBase):
@classmethod
def setUpClass(cls):
cls.docker_client = docker.from_env()
cls.local_images = [("alpine", "latest")]
# setup some images locally by pulling them.
for repo, tag in cls.local_images:
cls.docker_client.api.pull(repository=repo, tag=tag)
PackageIntegBase.setUpClass()
DeployIntegBase.setUpClass()
def setUp(self):
self.cf_client = boto3.client("cloudformation")
self.sns_arn = os.environ.get("AWS_SNS")
self.stack_names = []
time.sleep(CFN_SLEEP)
super().setUp()
def tearDown(self):
shutil.rmtree(os.path.join(os.getcwd(), ".aws-sam", "build"), ignore_errors=True)
for stack_name in self.stack_names:
# because of the termination protection, do not delete aws-sam-cli-managed-default stack
if stack_name != SAM_CLI_STACK_NAME:
self.cf_client.delete_stack(StackName=stack_name)
super().tearDown()
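# Note (descriptive, not original code): get_deploy_command_list and
# _method_to_stack_name come from DeployIntegBase; the latter presumably
# derives a unique, CloudFormation-safe stack name from the test id passed in.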
@parameterized.expand(["aws-serverless-function.yaml"])
def test_package_and_deploy_no_s3_bucket_all_args(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
with tempfile.NamedTemporaryFile(delete=False) as output_template_file:
# Package necessary artifacts.
package_command_list = self.get_command_list(
s3_bucket=self.s3_bucket.name, template=template_path, output_template_file=output_template_file.name
)
package_process = run_command(command_list=package_command_list)
self.assertEqual(package_process.process.returncode, 0)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Deploy and only show changeset.
deploy_command_list_no_execute = self.get_deploy_command_list(
template_file=output_template_file.name,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=True,
tags="integ=true clarity=yes foo_bar=baz",
)
deploy_process_no_execute = run_command(deploy_command_list_no_execute)
self.assertEqual(deploy_process_no_execute.process.returncode, 0)
# Deploy the given stack with the changeset.
deploy_command_list_execute = self.get_deploy_command_list(
template_file=output_template_file.name,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
tags="integ=true clarity=yes foo_bar=baz",
)
deploy_process = run_command(deploy_command_list_execute)
self.assertEqual(deploy_process.process.returncode, 0)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_no_package_and_deploy_with_s3_bucket_all_args(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand(["aws-serverless-function-image.yaml"])
def test_no_package_and_deploy_with_s3_bucket_all_args_image_repository(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
image_repository=self.ecr_repo_name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand([("Hello", "aws-serverless-function-image.yaml")])
def test_no_package_and_deploy_with_s3_bucket_all_args_image_repositories(self, resource_id, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
image_repositories=f"{resource_id}={self.ecr_repo_name}",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_no_package_and_deploy_with_s3_bucket_and_no_confirm_changeset(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = "a" + str(uuid.uuid4()).replace("-", "")[:10]
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_command_list.append("--no-confirm-changeset")
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_no_redeploy_on_same_built_artifacts(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
# Build project
build_command_list = self.get_minimal_build_command_list(template_file=template_path)
run_command(build_command_list)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Should result in a zero exit code.
deploy_command_list = self.get_deploy_command_list(
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
# ReBuild project, absolutely nothing has changed, will result in same build artifacts.
run_command(build_command_list)
# Re-deploy, this should cause an empty changeset error and not re-deploy.
# This will cause a non zero exit code.
deploy_process_execute = run_command(deploy_command_list)
# Does not cause a re-deploy
self.assertEqual(deploy_process_execute.process.returncode, 1)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_no_package_and_deploy_with_s3_bucket_all_args_confirm_changeset(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.s3_bucket.name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=True,
)
deploy_process_execute = run_command_with_input(deploy_command_list, "Y".encode())
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_without_s3_bucket(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
# Error asking for s3 bucket
self.assertEqual(deploy_process_execute.process.returncode, 1)
self.assertIn(
bytes(
f"S3 Bucket not specified, use --s3-bucket to specify a bucket name or run sam deploy --guided",
encoding="utf-8",
),
deploy_process_execute.stderr,
)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_without_stack_name(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 2)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_without_capabilities(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
s3_prefix="integ_deploy",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 1)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_without_template_file(self, template_file):
stack_name = self._method_to_stack_name(self.id())
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
stack_name=stack_name,
s3_prefix="integ_deploy",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
# Error template file not specified
self.assertEqual(deploy_process_execute.process.returncode, 1)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_with_s3_bucket_switch_region(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.bucket_name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
)
deploy_process_execute = run_command(deploy_command_list)
# Deploy should succeed
self.assertEqual(deploy_process_execute.process.returncode, 0)
# Try to deploy to another region.
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
s3_prefix="integ_deploy",
s3_bucket=self.bucket_name,
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
no_execute_changeset=False,
tags="integ=true clarity=yes foo_bar=baz",
confirm_changeset=False,
region="eu-west-2",
)
deploy_process_execute = run_command(deploy_command_list)
# Deploy should fail, asking for s3 bucket
self.assertEqual(deploy_process_execute.process.returncode, 1)
stderr = deploy_process_execute.stderr.strip()
self.assertIn(
bytes(
f"Error: Failed to create/update stack {stack_name} : "
f"deployment s3 bucket is in a different region, try sam deploy --guided",
encoding="utf-8",
),
stderr,
)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_twice_with_no_fail_on_empty_changeset(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
kwargs = {
"template_file": template_path,
"stack_name": stack_name,
"capabilities": "CAPABILITY_IAM",
"s3_prefix": "integ_deploy",
"s3_bucket": self.bucket_name,
"force_upload": True,
"notification_arns": self.sns_arn,
"parameter_overrides": "Parameter=Clarity",
"kms_key_id": self.kms_key,
"no_execute_changeset": False,
"tags": "integ=true clarity=yes foo_bar=baz",
"confirm_changeset": False,
}
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(**kwargs)
print("######################################")
print(deploy_command_list)
print("######################################")
deploy_process_execute = run_command(deploy_command_list)
# Deploy should succeed
self.assertEqual(deploy_process_execute.process.returncode, 0)
# Deploy with `--no-fail-on-empty-changeset` after deploying the same template first
deploy_command_list = self.get_deploy_command_list(fail_on_empty_changeset=False, **kwargs)
deploy_process_execute = run_command(deploy_command_list)
# Deploy should not fail
self.assertEqual(deploy_process_execute.process.returncode, 0)
stdout = deploy_process_execute.stdout.strip()
self.assertIn(bytes(f"No changes to deploy. Stack {stack_name} is up to date", encoding="utf-8"), stdout)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_twice_with_fail_on_empty_changeset(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
kwargs = {
"template_file": template_path,
"stack_name": stack_name,
"capabilities": "CAPABILITY_IAM",
"s3_prefix": "integ_deploy",
"s3_bucket": self.bucket_name,
"force_upload": True,
"notification_arns": self.sns_arn,
"parameter_overrides": "Parameter=Clarity",
"kms_key_id": self.kms_key,
"no_execute_changeset": False,
"tags": "integ=true clarity=yes foo_bar=baz",
"confirm_changeset": False,
}
deploy_command_list = self.get_deploy_command_list(**kwargs)
deploy_process_execute = run_command(deploy_command_list)
# Deploy should succeed
self.assertEqual(deploy_process_execute.process.returncode, 0)
# Deploy with `--fail-on-empty-changeset` after deploying the same template first
deploy_command_list = self.get_deploy_command_list(fail_on_empty_changeset=True, **kwargs)
deploy_process_execute = run_command(deploy_command_list)
# Deploy should fail on the empty change set
self.assertNotEqual(deploy_process_execute.process.returncode, 0)
stderr = deploy_process_execute.stderr.strip()
self.assertIn(bytes(f"Error: No changes to deploy. Stack {stack_name} is up to date", encoding="utf-8"), stderr)
@parameterized.expand(["aws-serverless-inline.yaml"])
def test_deploy_inline_no_package(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
deploy_command_list = self.get_deploy_command_list(
template_file=template_path, stack_name=stack_name, capabilities="CAPABILITY_IAM"
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_guided_zip(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
deploy_process_execute = run_command_with_input(
deploy_command_list, "{}\n\n\n\n\n\n\n\n\n".format(stack_name).encode()
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function-image.yaml"])
def test_deploy_guided_image(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
deploy_process_execute = run_command_with_input(
deploy_command_list, f"{stack_name}\n\n{self.ecr_repo_name}\n\n\ny\n\n\n\n\n\n".encode()
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_guided_set_parameter(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
deploy_process_execute = run_command_with_input(
deploy_command_list, "{}\n\nSuppliedParameter\n\n\n\n\n\n\n".format(stack_name).encode()
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_guided_set_capabilities(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
deploy_process_execute = run_command_with_input(
deploy_command_list,
"{}\n\nSuppliedParameter\n\nn\nCAPABILITY_IAM CAPABILITY_NAMED_IAM\n\n\n\n".format(stack_name).encode(),
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_guided_capabilities_default(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
# Set no for Allow SAM CLI IAM role creation, but allow default of ["CAPABILITY_IAM"] by just hitting the return key.
deploy_process_execute = run_command_with_input(
deploy_command_list, "{}\n\nSuppliedParameter\n\nn\n\n\n\n\n\n".format(stack_name).encode()
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_guided_set_confirm_changeset(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
# Package and Deploy in one go without confirming change set.
deploy_command_list = self.get_deploy_command_list(template_file=template_path, guided=True)
deploy_process_execute = run_command_with_input(
deploy_command_list, "{}\n\nSuppliedParameter\nY\n\nY\n\n\n\n".format(stack_name).encode()
)
# Deploy should succeed with a managed stack
self.assertEqual(deploy_process_execute.process.returncode, 0)
self.stack_names.append(SAM_CLI_STACK_NAME)
# Remove samconfig.toml
os.remove(self.test_data_path.joinpath(DEFAULT_CONFIG_FILE_NAME))
@parameterized.expand(["aws-serverless-function.yaml"])
def test_deploy_with_no_s3_bucket_set_resolve_s3(self, template_file):
template_path = self.test_data_path.joinpath(template_file)
stack_name = self._method_to_stack_name(self.id())
self.stack_names.append(stack_name)
deploy_command_list = self.get_deploy_command_list(
template_file=template_path,
stack_name=stack_name,
capabilities="CAPABILITY_IAM",
force_upload=True,
notification_arns=self.sns_arn,
parameter_overrides="Parameter=Clarity",
kms_key_id=self.kms_key,
tags="integ=true clarity=yes foo_bar=baz",
resolve_s3=True,
)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 0)
@parameterized.expand([("aws-serverless-function.yaml", "samconfig-invalid-syntax.toml")])
def test_deploy_with_invalid_config(self, template_file, config_file):
template_path = self.test_data_path.joinpath(template_file)
config_path = self.test_data_path.joinpath(config_file)
deploy_command_list = self.get_deploy_command_list(template_file=template_path, config_file=config_path)
deploy_process_execute = run_command(deploy_command_list)
self.assertEqual(deploy_process_execute.process.returncode, 1)
self.assertIn("Error reading configuration: Unexpected character", str(deploy_process_execute.stderr))
def _method_to_stack_name(self, method_name):
"""Method expects method name which can be a full path. Eg: test.integration.test_deploy_command.method_name"""
method_name = method_name.split(".")[-1]
return f"{method_name.replace('_', '-')}-{CFN_PYTHON_VERSION_SUFFIX}"
| 44.280832 | 125 | 0.685984 | 3,682 | 29,801 | 5.202064 | 0.077404 | 0.054036 | 0.074554 | 0.053879 | 0.836222 | 0.819463 | 0.801973 | 0.801973 | 0.791636 | 0.789235 | 0 | 0.004302 | 0.227845 | 29,801 | 672 | 126 | 44.346726 | 0.828082 | 0.095366 | 0 | 0.717391 | 0 | 0.003953 | 0.12019 | 0.041712 | 0 | 0 | 0 | 0 | 0.067194 | 1 | 0.05336 | false | 0 | 0.029644 | 0 | 0.086957 | 0.005929 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6b469c923b506e0d4ff9ddcced72f1befa1e333f | 4,323 | py | Python | terminusdb_client/tests/ans_doctype.py | LogicalDash/terminusdb-client-python | 7f13f77e60f891b1e6bd214ebf73ff7f75fcaff8 | [
"Apache-2.0"
] | 43 | 2020-06-12T23:44:17.000Z | 2022-03-12T15:18:55.000Z | terminusdb_client/tests/ans_doctype.py | LogicalDash/terminusdb-client-python | 7f13f77e60f891b1e6bd214ebf73ff7f75fcaff8 | [
"Apache-2.0"
] | 151 | 2020-06-12T20:23:05.000Z | 2022-03-29T20:38:35.000Z | terminusdb_client/tests/ans_doctype.py | LogicalDash/terminusdb-client-python | 7f13f77e60f891b1e6bd214ebf73ff7f75fcaff8 | [
"Apache-2.0"
] | 46 | 2020-06-16T20:51:21.000Z | 2022-03-17T18:11:46.000Z | import pytest
@pytest.fixture(scope="module")
def doctype_without():
return {
"@type": "And",
"and": [
{
"@type": "AddTriple",
"graph": "schema",
"object": {"@type": "NodeValue", "node": "owl:Class"},
"predicate": {"@type": "NodeValue", "node": "rdf:type"},
"subject": {"@type": "Value", "node": "Station"},
},
{
"@type": "AddTriple",
"graph": "schema",
"object": {
"@type": "NodeValue",
"node": "terminus:Document",
},
"predicate": {
"@type": "NodeValue",
"node": "rdfs:subClassOf",
},
"subject": {"@type": "Value", "node": "Station"},
},
],
}
@pytest.fixture(scope="module")
def doctype_with_label():
return {
"@type": "And",
"and": [
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {"@type": "NodeValue", "node": "rdf:type"},
"object": {"@type": "Value", "node": "owl:Class"},
"graph": "schema",
},
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {
"@type": "NodeValue",
"node": "rdfs:subClassOf",
},
"object": {
"@type": "Value",
"node": "terminus:Document",
},
"graph": "schema",
},
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {"@type": "NodeValue", "node": "rdfs:label"},
"object": {
"@type": "Value",
"data": {
"@value": "Station Object",
"@type": "xsd:string",
"@language": "en",
},
},
"graph": "schema",
},
],
}
@pytest.fixture(scope="module")
def doctype_with_des():
return {
"@type": "And",
"and": [
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {"@type": "NodeValue", "node": "rdf:type"},
"object": {"@type": "Value", "node": "owl:Class"},
"graph": "schema",
},
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {
"@type": "NodeValue",
"node": "rdfs:subClassOf",
},
"object": {
"@type": "Value",
"node": "terminus:Document",
},
"graph": "schema",
},
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {"@type": "NodeValue", "node": "rdfs:label"},
"object": {
"@type": "Value",
"data": {
"@value": "Station Object",
"@type": "xsd:string",
"@language": "en",
},
},
"graph": "schema",
},
{
"@type": "AddTriple",
"subject": {"@type": "NodeValue", "node": "Station"},
"predicate": {
"@type": "NodeValue",
"node": "rdfs:comment",
},
"object": {
"@type": "Value",
"data": {
"@value": "A bike station object.",
"@type": "xsd:string",
"@language": "en",
},
},
"graph": "schema",
},
],
}
| 32.261194 | 74 | 0.319685 | 246 | 4,323 | 5.597561 | 0.158537 | 0.169935 | 0.222222 | 0.169935 | 0.954248 | 0.897603 | 0.808279 | 0.753086 | 0.684822 | 0.65069 | 0 | 0 | 0.495952 | 4,323 | 133 | 75 | 32.503759 | 0.631941 | 0 | 0 | 0.677165 | 0 | 0 | 0.303493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023622 | true | 0 | 0.007874 | 0.023622 | 0.055118 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
86f8a4b9fb77a6c5a8240669b20ac6064cd92e61 | 106 | py | Python | transganformer/__init__.py | adam-mehdi/transganformer | bce0202fd45e921d2e2b372b96cc7cf64d39934a | [
"MIT"
] | 142 | 2021-03-11T03:52:09.000Z | 2022-03-26T21:23:18.000Z | transganformer/__init__.py | adam-mehdi/transganformer | bce0202fd45e921d2e2b372b96cc7cf64d39934a | [
"MIT"
] | 8 | 2021-03-11T10:53:16.000Z | 2021-05-13T21:39:22.000Z | transganformer/__init__.py | adam-mehdi/transganformer | bce0202fd45e921d2e2b372b96cc7cf64d39934a | [
"MIT"
] | 14 | 2021-03-19T07:21:34.000Z | 2022-01-03T11:09:34.000Z | from transganformer.transganformer import Transganformer, Generator, Discriminator, Trainer, NanException
| 53 | 105 | 0.877358 | 9 | 106 | 10.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 106 | 1 | 106 | 106 | 0.94898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
86f9baed3a72edb47d914634935f7df7560a3bfb | 58,014 | py | Python | uuv_control/uuv_trajectory_control/scripts/rov_mb_sm_controller1.py | Xiaoran807/uuv_simulator | 5273de462a83f4a86e1478d94146ceef08fe9f7d | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | uuv_control/uuv_trajectory_control/scripts/rov_mb_sm_controller1.py | Xiaoran807/uuv_simulator | 5273de462a83f4a86e1478d94146ceef08fe9f7d | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | uuv_control/uuv_trajectory_control/scripts/rov_mb_sm_controller1.py | Xiaoran807/uuv_simulator | 5273de462a83f4a86e1478d94146ceef08fe9f7d | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2016-2019 The UUV Simulator Authors.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import rospy
import numpy as np
from uuv_control_interfaces import DPPIDControllerBase
from uuv_control_msgs.srv import *
from uuv_control_interfaces.vehicle import cross_product_operator
from std_msgs.msg import Int32
class ROV_MB_SMController(DPPIDControllerBase):
"""
Model-based Feedback Linearization Controller
Reference:
Thor I. Fossen 2011
Handbook of Marine Craft Hydrodynamics and Motion Control
"""
_LABEL = 'Model-based Feedback Linearization Controller'
def __init__(self):
DPPIDControllerBase.__init__(self, True)
self._logger.info('Initializing: ' + self._LABEL)
# Lambda - Slope of the Sliding Surface
self._lambda = np.zeros(6)
# Rho Constant - Vector of positive terms ensuring the sliding-surface reaching condition
self._rho_constant = np.zeros(6)
# k - PD gain (P term = k * lambda , D term = k)
self._k = np.zeros(6)
# c - slope of arctan (the greater, the more similar with the sign function)
self._c = np.zeros(6)
# Adapt slope - Adaptation gains for estimating the upper bounds of
# uncertainties and disturbances
# adapt_slope = [proportional to surface distance, prop. to square
# of pose errors, prop. to square of velocity errors]
self._adapt_slope = np.zeros(3)
# Rho_0 - rho_adapt threshold for drift prevention
self._rho_0 = np.zeros(6)
# Drift prevent - Drift prevention slope
self._drift_prevent = 0
self._pid_control = np.zeros(6)
if rospy.has_param('~lambda'):
coefs = rospy.get_param('~lambda')
if len(coefs) == 6:
self._lambda = np.array(coefs)
else:
raise rospy.ROSException('lambda coefficients: 6 coefficients '
'needed')
print('lambda=', self._lambda)
if rospy.has_param('~rho_constant'):
coefs = rospy.get_param('~rho_constant')
if len(coefs) == 6:
self._rho_constant = np.array(coefs)
else:
raise rospy.ROSException('rho_constant coefficients: 6 coefficients '
'needed')
print('rho_constant=', self._rho_constant)
if rospy.has_param('~k'):
coefs = rospy.get_param('~k')
if len(coefs) == 6:
self._k = np.array(coefs)
else:
raise rospy.ROSException('k coefficients: 6 coefficients '
'needed')
print('k=', self._k)
if rospy.has_param('~c'):
coefs = rospy.get_param('~c')
if len(coefs) == 6:
self._c = np.array(coefs)
else:
raise rospy.ROSException('c coefficients: 6 coefficients '
'needed')
print('c=', self._c)
if rospy.has_param('~adapt_slope'):
coefs = rospy.get_param('~adapt_slope')
if len(coefs) == 3:
self._adapt_slope = np.array(coefs)
else:
raise rospy.ROSException('adapt_slope coefficients: 3 coefficients '
'needed')
print('adapt_slope=', self._adapt_slope)
if rospy.has_param('~rho_0'):
coefs = rospy.get_param('~rho_0')
if len(coefs) == 6:
self._rho_0 = np.array(coefs)
else:
raise rospy.ROSException('rho_0 coefficients: 6 coefficients '
'needed')
print('rho_0=', self._rho_0)
if rospy.has_param('~drift_prevent'):
scalar = rospy.get_param('~drift_prevent')
if not isinstance(scalar, list):
self._drift_prevent = scalar
else:
raise rospy.ROSException('drift_prevent needs to be a scalar value')
print('drift_prevent=', self._drift_prevent)
# Enable(1) / disable(0) integral term in the sliding surface
if rospy.has_param('~enable_integral_term'):
self._sliding_int = rospy.get_param('~enable_integral_term')
else:
self._sliding_int = 0
# Enable(1) / disable(0) adaptive uncertainty upper boundaries for
# robust control
if rospy.has_param('~adaptive_bounds'):
self._adaptive_bounds = rospy.get_param('~adaptive_bounds')
else:
self._adaptive_bounds = 1
# Enable(1) / disable(0) constant uncertainty upper boundaries for
# robust control
if rospy.has_param('~constant_bound'):
self._constant_bound = rospy.get_param('~constant_bound')
else:
self._constant_bound = 1
# Enable(1) / disable(0) equivalent control term
if rospy.has_param('~ctrl_eq'):
self._ctrl_eq = rospy.get_param('~ctrl_eq')
else:
self._ctrl_eq = 1
# Enable(1) / disable(0) linear control term
if rospy.has_param('~ctrl_lin'):
self._ctrl_lin = rospy.get_param('~ctrl_lin')
else:
self._ctrl_lin = 1
# Enable(1) / disable(0) robust control term
if rospy.has_param('~ctrl_robust'):
self._ctrl_robust = rospy.get_param('~ctrl_robust')
else:
self._ctrl_robust = 1
# Integrator component
self._int = np.zeros(6)
# Error for the vehicle pose
self._error_pose = np.zeros(6)
# Sliding Surface
self._s_b = np.zeros(6)
# Time derivative of the rotation matrix
self._rotBtoI_dot = np.zeros(shape=(3, 3), dtype=float)
# Linear acceleration estimation
self._accel_linear_estimate_b = np.zeros(3)
# Angular acceleration estimation
self._accel_angular_estimate_b = np.zeros(3)
# Acceleration estimation
self._accel_estimate_b = np.zeros(6)
# adaptive term of uncertainties upper bound estimation
self._rho_adapt = np.zeros(6)
# Upper bound for model uncertainties and disturbances
self._rho_total = np.zeros(6)
# Equivalent control
self._f_eq = np.zeros(6)
# Linear term of controller
self._f_lin = np.zeros(6)
# Robust control
self._f_robust = np.zeros(6)
# Total control
self._tau = np.zeros(6)
self.F_tau = np.zeros(6)
self._slidingSurface=np.zeros(6)
self._vel=np.zeros(3)
self._vehi=np.zeros(1)
self._services['set_mb_sm_controller_params'] = rospy.Service(
'set_mb_sm_controller_params',
SetMBSMControllerParams,
self.set_mb_sm_controller_params_callback)
self._services['get_mb_sm_controller_params'] = rospy.Service(
'get_mb_sm_controller_params',
GetMBSMControllerParams,
self.get_mb_sm_controller_params_callback)
self._is_init = True
self._logger.info(self._LABEL + ' ready')
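# One-step histories per axis (errors, positions, velocities, surfaces,
# bound estimates and forces), apparently kept for the finite-difference
# and delayed-control computations used in the controller variants below.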
self.error_up_prev=0
self.x_u_prev=0
self.x_u_vel_prev=0
self.vehicle_x_u_pos_prev=0
self.vehicle_x_u_vel_prev=0
self._error_up_pose=0
self.error_up_int=0
self.x_u_acc_transformed_prev=0
self.d_vel_error_up_prev=0
self.error_up_prev=0
self.s_u_prev=0
self.rho_u_prev=0
self.F_u_prev=0
self.vehicle_x_u_acc_transformed_prev=0
self.H_hat_u=0
self.error_up_int_prev=0
self.error_vp_prev=0
self.x_v_prev=0
self.x_v_vel_prev=0
self.vehicle_x_v_pos_prev=0
self.vehicle_x_v_vel_prev=0
self._error_vp_pose=0
self.error_vp_int=0
self.x_v_acc_transformed_prev=0
self.d_vel_error_vp_prev=0
self.error_vp_prev=0
self.s_v_prev=0
self.rho_v_prev=0
self.F_v_prev=0
self.vehicle_x_v_acc_transformed_prev=0
self.H_hat_v=0
self.error_vp_int_prev=0
self.error_wp_prev=0
self.x_w_prev=0
self.x_w_vel_prev=0
self.vehicle_x_w_pos_prev=0
self.vehicle_x_w_vel_prev=0
self._error_wp_pose=0
self.error_wp_int=0
self.x_w_acc_transformed_prev=0
self.d_vel_error_wp_prev=0
self.error_wp_prev=0
self.s_w_prev=0
self.rho_w_prev=0
self.F_w_prev=0
self.vehicle_x_w_acc_transformed_prev=0
self.H_hat_w=0
self.error_wp_int_prev=0
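# Up to six past samples of the vehicle and reference poses; these buffers
# are apparently consumed by the delayed-measurement controller variants
# kept commented out further below.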
self._vehicle_model_pose_x_prev=0
self._vehicle_model_pose_y_prev=0
self._vehicle_model_pose_z_prev=0
self._vehicle_model_pose_x_prev2=0
self._vehicle_model_pose_y_prev2=0
self._vehicle_model_pose_z_prev2=0
self._vehicle_model_pose_x_prev3=0
self._vehicle_model_pose_y_prev3=0
self._vehicle_model_pose_z_prev3=0
self._vehicle_model_pose_x_prev4=0
self._vehicle_model_pose_y_prev4=0
self._vehicle_model_pose_z_prev4=0
self._vehicle_model_pose_x_prev5=0
self._vehicle_model_pose_y_prev5=0
self._vehicle_model_pose_z_prev5=0
self._vehicle_model_pose_x_prev6=0
self._vehicle_model_pose_y_prev6=0
self._vehicle_model_pose_z_prev6=0
self._ref_model_pose_x_prev=0
self._ref_model_pose_y_prev=0
self._ref_model_pose_z_prev=0
self._ref_model_pose_x_prev2=0
self._ref_model_pose_y_prev2=0
self._ref_model_pose_z_prev2=0
self._ref_model_pose_x_prev3=0
self._ref_model_pose_y_prev3=0
self._ref_model_pose_z_prev3=0
self._ref_model_pose_x_prev4=0
self._ref_model_pose_y_prev4=0
self._ref_model_pose_z_prev4=0
self._ref_model_pose_x_prev5=0
self._ref_model_pose_y_prev5=0
self._ref_model_pose_z_prev5=0
self._ref_model_pose_x_prev6=0
self._ref_model_pose_y_prev6=0
self._ref_model_pose_z_prev6=0
self.vel_vehicle_prev=np.zeros(3)
self.vel_ref_prev=np.zeros(3)
self.acc_cal_fromVel_prev=np.zeros(3)
self._error_pose_prev=np.zeros(6)
self.rho_x_prev=3
self.rho_y_prev=3
self.rho_z_prev=3
self.rho_p_prev=3
self.rho_q_prev=3
self.rho_r_prev=3
self.F_x_prev=0
self.F_y_prev=0
self.F_z_prev=0
self.F_p_prev=0
self.F_q_prev=0
self.F_r_prev=0
self.acc_angular_prev=np.zeros(3)
self.error_linear_vel=np.array([0,0,0])
self.ref_pose_x_prev=0
self._tau_pid=np.zeros(6)
def _reset_controller(self):
super(ROV_MB_SMController, self)._reset_controller()
self._sliding_int = 0
self._adaptive_bounds = 0
self._constant_bound = 0
self._ctrl_eq = 0
self._ctrl_lin = 0
self._ctrl_robust = 0
self._prev_t = 0
self._int = np.zeros(6)
self._error_pose = np.zeros(6)
self._s_b = np.zeros(6)
self._rotBtoI_dot = np.zeros(shape=(3, 3), dtype=float)
self._accel_linear_estimate_b = np.zeros(3)
self._accel_angular_estimate_b = np.zeros(3)
self._accel_estimate_b = np.zeros(6)
self._rho_adapt = np.zeros(6)
self._rho_total = np.zeros(6)
self._f_eq = np.zeros(6)
self._f_lin = np.zeros(6)
self._f_robust = np.zeros(6)
self._tau = np.zeros(6)
self.F_tau = np.zeros(6)
self._slidingSurface=np.zeros(6)
self._vel=np.zeros(3)
self._vehi=np.zeros(1)
self._pid_control = np.zeros(6)
def set_mb_sm_controller_params_callback(self, request):
return SetMBSMControllerParamsResponse(True)
def get_mb_sm_controller_params_callback(self, request):
return GetMBSMControllerParamsResponse(
self._lambda.tolist(),
self._rho_constant.tolist(),
self._k.tolist(),
self._c.tolist(),
self._adapt_slope.tolist(),
self._rho_0.tolist(),
self._drift_prevent)
# Proposed control without delay, full 6 DOF
def update_controller(self):
if not self._is_init:
return False
t = rospy.Time.now().to_sec()
dt = t - self._prev_t
if self._prev_t < 0.0:
dt = 0.05
acc_linear_ref=(self.ref_boxVelocityLinear1-self.vel_ref_prev)/dt
self.vel_ref_prev=self.ref_boxVelocityLinear1
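# The two lines above form the reference linear acceleration by a backward
# difference of the reference velocity: a_ref[k] = (v_ref[k] - v_ref[k-1]) / dt.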
self._int += 0.5 * (self.error_pose_euler - self._error_pose) * self._dt
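# Incremental update of the `_int` term used in the sliding surface below.
# Note that `_int` is incremented a second time later in this method (via
# `_error_pose_prev`), so both updates feed the same accumulator.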
# Store current pose error
self._error_pose = self.error_pose_euler
# Get trajectory errors (reference - actual)
e_p_linear_b = self._errors['pos']
e_v_linear_b = self._errors['vel'][0:3]
e_p_angular_b = self.error_orientation_rpy
e_v_angular_b = self._errors['vel'][3:6]
e_p_b = np.hstack((e_p_linear_b, e_p_angular_b))
e_v_b = np.hstack((e_v_linear_b, e_v_angular_b))
# Alternative gain set with a larger H_hat contribution, kept commented out for reference:
# if t>8 and t<=85:
# kp_x=1.1
# kp_y=.9
# kp_z=1.5
# else:
# kp_x=.2
# kp_y=.2
# kp_z=.2
# ki_x=0.1
# kd_x=0.2
# mu_x=0.1
# ki_y=0.1
# kd_y=0.2
# mu_y=0.1
# ki_z=0.1
# kd_z=0.1
# mu_z=0.1
# kp_p=.5
# ki_p=0.1
# kd_p=0.1
# mu_p=0.1
# kp_q=.5
# ki_q=0.1
# kd_q=0.1
# mu_q=0.1
# kp_r=.5
# ki_r=0.1
# kd_r=0.1
# mu_r=0.1
# m_bar_x=2641#100, 2642
# m_bar_y=3083#300, 3084
# m_bar_z=2522#200, 5522
# m_bar_p=1400
# m_bar_q=1400
# m_bar_r=1400
# delta=1
# beta=1#.00001
# Ldelta_c_x=500
# Ldelta_c_y=500
# Ldelta_c_z=500
# Ldelta_c_p=500
# Ldelta_c_q=500
# Ldelta_c_r=500
# LDelta_d_x=500
# LDelta_d_y=500
# LDelta_d_z=500
# H_para_x=.8
# H_para_y=.8
# H_para_z=.2#0.7
# H_para_p=0
# H_para_q=0
# H_para_r=0
# delta_z=0.004
kp_x=.4
ki_x=0.1
kd_x=0.1
mu_x=.1
kp_y=.4
ki_y=0.1
kd_y=0.1
mu_y=.1
kp_z=.4
ki_z=0.1
kd_z=0.1
mu_z=0.1
kp_p=.5
ki_p=0.1
kd_p=0.1
mu_p=0.1
kp_q=.5
ki_q=0.1
kd_q=0.1
mu_q=0.1
kp_r=.5
ki_r=0.1
kd_r=0.1
mu_r=0.1
m_bar_x=2042#100, 2642
m_bar_y=2884#300, 3084
m_bar_z=2522#200, 5522
m_bar_p=1400
m_bar_q=1400
m_bar_r=1400
delta=10
beta=.1#.00001
Ldelta_c_x=2500
Ldelta_c_y=2500
Ldelta_c_z=3500
Ldelta_c_p=1500
Ldelta_c_q=1500
Ldelta_c_r=1500
LDelta_d_x=1000
LDelta_d_y=1000
LDelta_d_z=1000
H_para_x=.4
H_para_y=.4
H_para_z=0.2#0.7
H_para_p=0
H_para_q=0
H_para_r=0
self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
acc_angular=self._vehicle_model.to_SNAME(np.dot(self._vehicle_model.rotItoB, np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6])))
acc_cal_fromVel=(self._vehicle_model._vel[0:3]-self.vel_vehicle_prev)/dt
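# Rdot = S(omega) * R gives the time derivative of the body-to-inertial
# rotation (cross_product_operator builds the skew-symmetric matrix S(omega));
# acc_cal_fromVel is a backward difference of the measured body velocity.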
ref_pose_x=self.x_wg.data
self.ref_pose_x_prev=ref_pose_x
error_pose=self.error_pose_euler
self._int += 0.5 * (error_pose - self._error_pose_prev) * self._dt
s_x=kp_x*e_p_b[0]+ki_x*self._int[0]+kd_x*e_v_b[0]
s_y=kp_y*e_p_b[1]+ki_y*self._int[1]+kd_y*e_v_b[1]
s_z=kp_z*e_p_b[2]+ki_z*self._int[2]+kd_z*e_v_b[2]
s_p=kp_p*e_p_b[3]+ki_p*self._int[3]+kd_p*e_v_b[3]
s_q=kp_q*e_p_b[4]+ki_q*self._int[4]+kd_q*e_v_b[4]
s_r=kp_r*e_p_b[5]+ki_r*self._int[5]+kd_r*e_v_b[5]
S=np.hstack((s_x,s_y,s_z,s_p,s_q,s_r))
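# PID-type sliding surface per axis (structure of the lines above):
#   s_i = kp_i * e_p_i + ki_i * int_i + kd_i * e_v_i,  i in {x, y, z, p, q, r}
# On s_i = 0 this imposes second-order error dynamics whose response is
# shaped by the (kp, ki, kd) gains.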
rho_x=delta*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)/(m_bar_x-delta)
rho_y=delta*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)/(m_bar_y-delta)
rho_z=1*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)/(m_bar_z-delta)
rho_p=1*(self.rho_p_prev+kd_p*LDelta_d_z+kd_p*Ldelta_c_p)/(m_bar_p-delta)
rho_q=1*(self.rho_q_prev+kd_q*LDelta_d_z+kd_q*Ldelta_c_q)/(m_bar_q-delta)
rho_r=1*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)/(m_bar_r-delta)
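# Recursive update of the uncertainty bound rho used in the switching term:
#   rho[k] = g * (rho[k-1] + kd * LDelta_d + kd * Ldelta_c) / (m_bar - delta)
# with g = delta for the x/y channels and g = 1 for z/p/q/r. LDelta_d and
# Ldelta_c apparently act as Lipschitz-type bounds on the lumped
# disturbance and its estimation error.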
#rho_x=0.86
#rho_y=0.6
#v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+rho_x/kd_x*np.tanh(s_x/beta)
#v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+rho_y/kd_y*np.tanh(s_y/beta)
v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+1/mu_z/kd_z*s_z+rho_z/kd_z*np.tanh(s_z/beta)
v_p=0+kp_p/kd_p*e_v_b[3]+ki_p/kd_p*e_p_b[3]+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/beta)
v_q=0+kp_q/kd_q*e_v_b[4]+ki_q/kd_q*e_p_b[4]+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/beta)
v_r=0+kp_r/kd_r*e_v_b[5]+ki_r/kd_r*e_p_b[5]+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/beta)
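# Virtual acceleration command per axis (reaching law used above):
#   v = a_ref + (kp * e_v + ki * e_p + s / mu + rho * tanh(s / beta)) / kd
# tanh(s / beta) is a smooth stand-in for sign(s); smaller beta approaches
# the ideal switching law at the cost of more chattering.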
#self.h_hat_x=H_para_x*(self.F_x_prev-m_bar_x*self._linear_acceleration.x)
#self.h_hat_y=H_para_y*(self.F_y_prev-m_bar_y*self._linear_acceleration.y)
#self.h_hat_z=H_para_z*(self.F_z_prev-m_bar_z*acc[2])
#self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
#self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
#self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
self.h_hat_x=H_para_x*self.F_x_prev-m_bar_x*self._linear_acceleration.x
self.h_hat_y=H_para_y*self.F_y_prev-m_bar_y*self._linear_acceleration.y
self.h_hat_z=H_para_z*self.F_z_prev-m_bar_z*0
self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
H_hat=np.hstack((self.h_hat_x,self.h_hat_y,self.h_hat_z))
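# Lumped-disturbance estimate in the spirit of time-delay estimation:
#   h_hat = H_para * F_prev - m_bar * a_measured
# i.e. whatever the nominal single-mass model m_bar fails to explain. The
# z channel uses a zero acceleration measurement here, so h_hat_z reduces
# to a scaled copy of the previous z force; the angular channels scale the
# whole residual by their H_para gain instead.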
F_x=m_bar_x*v_x+self.h_hat_x
F_y=m_bar_y*v_y+self.h_hat_y
F_z=m_bar_z*v_z+self.h_hat_z
F_p=m_bar_p*v_p+self.h_hat_p
F_q=m_bar_q*v_q+self.h_hat_q
F_r=m_bar_r*v_r+self.h_hat_r
F=np.hstack((F_x,F_y,F_z,F_p,F_q,F_r))
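# Control force per axis: nominal inertia times the virtual acceleration
# plus the disturbance estimate, F = m_bar * v + h_hat.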
self._error_pose_prev = error_pose
self.rho_x_prev=rho_x
self.rho_y_prev=rho_y
self.rho_z_prev=rho_z
self.rho_p_prev=rho_p
self.rho_q_prev=rho_q
self.rho_r_prev=rho_r
self.F_x_prev=F_x
self.F_y_prev=F_y
self.F_z_prev=F_z
self.F_p_prev=F_p
self.F_q_prev=F_q
self.F_r_prev=F_r
self.acc_angular_prev=acc_angular
self.acc_cal_fromVel_prev=acc_cal_fromVel
self.vel_vehicle_prev=self._vehicle_model._vel[0:3]
self._slidingSurface=S
self._tau[0]=F_x
self._tau[1]=F_y
self._tau[2]=F_z
self._tau[3]=F_p
self._tau[4]=F_q
self._tau[5]=F_r
#self._tau[0]=self.F_tau[0]
#self._tau[1]=self.F_tau[1]
#self._tau[2]=self.F_tau[2]
#self._tau[3]=self.F_tau[3]
#self._tau[4]=self.F_tau[4]
#self._tau[5]=self.F_tau[5]
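# ------------------------------------------------------------------
# The commented blocks below are earlier variants of update_controller
# (a presentation tuning, a comparison against the Kim controller, a
# "don't touch" no-delay baseline and a delayed-measurement version),
# kept for reference.
# ------------------------------------------------------------------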
# for presentation
# def update_controller(self):
# if not self._is_init:
# return False
# t = rospy.Time.now().to_sec()
# dt = t - self._prev_t
# if self._prev_t < 0.0:
# dt = 0.05
#
# acc_linear_ref=(self.ref_boxVelocityLinear1-self.vel_ref_prev)/dt
# self.vel_ref_prev=self.ref_boxVelocityLinear1
#
#
#
# self._int += 0.5 * (self.error_pose_euler - self._error_pose) * self._dt
# # Store current pose error
# self._error_pose = self.error_pose_euler
# # Get trajectory errors (reference - actual)
# e_p_linear_b = self._errors['pos']
# e_v_linear_b = self._errors['vel'][0:3]
# e_p_angular_b = self.error_orientation_rpy
# e_v_angular_b = self._errors['vel'][3:6]
# e_p_b = np.hstack((e_p_linear_b, e_p_angular_b))
# e_v_b = np.hstack((e_v_linear_b, e_v_angular_b))
# Acceleration estimate
# self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
# self._accel_linear_estimate_b = np.dot(
# self._vehicle_model.rotItoB, (acc_linear_ref - \
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[0:3]))) + \
# np.multiply(self._lambda[0:3], e_v_linear_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[0:3]) / 4, e_p_linear_b)
# self._accel_angular_estimate_b = np.dot(self._vehicle_model.rotItoB, (np.zeros(3) -
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6]))) + \
# np.multiply(self._lambda[3:6], e_v_angular_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[3:6]) / 4,
# e_p_angular_b)
# self._accel_estimate_b = np.hstack((self._accel_linear_estimate_b, self._accel_angular_estimate_b))
# # Equivalent control
# acc = self._vehicle_model.to_SNAME(self._accel_estimate_b)
# kp_x=.7#.9
# ki_x=.1
# kd_x=.1
# mu_x=.1
# kp_y=.7#.9
# ki_y=.1
# kd_y=.1
# mu_y=.1
# kp_z=.4#.9
# ki_z=0.1
# kd_z=0.1
# mu_z=0.1
# kp_p=.5
# ki_p=0.1
# kd_p=0.1
# mu_p=0.1
# kp_q=.5
# ki_q=0.1
# kd_q=0.1
# mu_q=0.1
# kp_r=.5
# ki_r=0.1
# kd_r=0.1
# mu_r=0.1
# m_bar_x=2042#100, 2642
# m_bar_y=2884#300, 3084
# m_bar_z=2522#200, 5522
# m_bar_p=1400
# m_bar_q=1400
# m_bar_r=1400
# delta=1
# beta=.1#.00001
# Ldelta_c_x=2500
# Ldelta_c_y=2500
# Ldelta_c_z=1500
# Ldelta_c_p=1500
# Ldelta_c_q=1500
# Ldelta_c_r=1500
# LDelta_d_x=1000
# LDelta_d_y=1000
# LDelta_d_z=1000
# H_para_x=0#.4
# H_para_y=0#.4
# H_para_z=0#0.2
# H_para_p=0
# H_para_q=0
# H_para_r=0
# delta_z=0.001
# self._rotBtoI_dot = np.dot(cross_product_operator(self.vel_veh_prev1[3:6]), self._vehicle_model.rotBtoI)
# acc_angular=self._vehicle_model.to_SNAME(np.dot(self._vehicle_model.rotItoB, np.dot(self._rotBtoI_dot, self.vel_veh_prev1[3:6])))
# acc_cal_fromVel=(self.vel_veh_prev2[0:3]-self.vel_vehicle_prev)/dt
# ref_pose_x=self.x_wg.data
# self.ref_pose_x_prev=ref_pose_x
# error_pose=self.error_pose_euler
# self._int += 0.5 * (error_pose - self._error_pose_prev) * self._dt
# s_x=kp_x*e_p_b[0]+ki_x*self._int[0]+kd_x*e_v_b[0]
# s_y=kp_y*e_p_b[1]+ki_y*self._int[1]+kd_y*e_v_b[1]
# s_z=kp_z*e_p_b[2]+ki_z*self._int[2]+kd_z*e_v_b[2]
# s_p=kp_p*e_p_b[3]+ki_p*self._int[3]+kd_p*e_v_b[3]
# s_q=kp_q*e_p_b[4]+ki_q*self._int[4]+kd_q*e_v_b[4]
# s_r=kp_r*e_p_b[5]+ki_r*self._int[5]+kd_r*e_v_b[5]
# S=np.hstack((s_x,s_y,s_z,s_p,s_q,s_r))
#rho_x=delta*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)/(m_bar_x-delta)
#rho_y=delta*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)/(m_bar_y-delta)
#rho_z=delta*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)/(m_bar_z-delta)
# rho_p=delta*(self.rho_p_prev+kd_p*LDelta_d_z+kd_p*Ldelta_c_p)/(m_bar_p-delta)
# rho_q=delta*(self.rho_q_prev+kd_q*LDelta_d_z+kd_q*Ldelta_c_q)/(m_bar_q-delta)
# rho_r=delta*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)/(m_bar_r-delta)
# v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# v_p=0+kp_p/kd_p*e_v_b[3]+ki_p/kd_p*e_p_b[3]+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/1)
# v_q=0+kp_q/kd_q*e_v_b[4]+ki_q/kd_q*e_p_b[4]+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/1)
# v_r=0+kp_r/kd_r*e_v_b[5]+ki_r/kd_r*e_p_b[5]+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/1)
# rho_x=1#
# rho_y=1#
# rho_z=1#
# v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+1/mu_z/kd_z*s_z+rho_z/kd_z*np.tanh(s_z/beta)
#v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+rho_x/kd_x*np.tanh(s_x/beta)
#v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+rho_y/kd_y*np.tanh(s_y/beta)
#v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+rho_z/kd_z*np.tanh(s_z/beta)
# self.h_hat_x=H_para_x*self.F_x_prev-m_bar_x*self._linear_acceleration.x
# self.h_hat_y=H_para_y*self.F_y_prev-m_bar_y*self._linear_acceleration.y
# self.h_hat_z=H_para_z*self.F_z_prev-m_bar_z*acc[2]
# self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
# self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
# self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
# H_hat=np.hstack((self.h_hat_x,self.h_hat_y,self.h_hat_z))
# self._tau_pid = self.update_pid()
# F_x=m_bar_x*v_x+self.h_hat_x
# F_y=m_bar_y*v_y+self.h_hat_y
# F_z=m_bar_z*v_z+self.h_hat_z
# F_p=m_bar_p*v_p+self.h_hat_p
# F_q=m_bar_q*v_q+self.h_hat_q
# F_r=m_bar_r*v_r+self.h_hat_r
# F=np.hstack((F_x,F_y,F_z,F_p,F_q,F_r))
# self._error_pose_prev = error_pose
# self.rho_x_prev=rho_x
# self.rho_y_prev=rho_y
# self.rho_z_prev=rho_z
# self.rho_p_prev=rho_p
# self.rho_q_prev=rho_q
# self.rho_r_prev=rho_r
# self.F_x_prev=F_x
# self.F_y_prev=F_y
# self.F_z_prev=F_z
# self.F_p_prev=F_p
# self.F_q_prev=F_q
# self.F_r_prev=F_r
# self.acc_angular_prev=acc_angular
# self.acc_cal_fromVel_prev=acc_cal_fromVel
# self.vel_vehicle_prev=self.vel_veh_prev2[0:3]
# self._slidingSurface=S
# self._tau[0]=F_x
# self._tau[1]=F_y
# self._tau[2]=F_z
# self._tau[3]=F_p
# self._tau[4]=F_q
# self._tau[5]=F_r
#self._tau[0]=self.F_tau[0]
#self._tau[1]=self.F_tau[1]
#self._tau[2]=self.F_tau[2]
#self._tau[3]=self.F_tau[3]
#self._tau[4]=self.F_tau[4]
#self._tau[5]=self.F_tau[5]
# Proposed control, compared against the Kim controller; the same controller is used with and without delay
# def update_controller(self):
# if not self._is_init:
# return False
# t = rospy.Time.now().to_sec()
# dt = t - self._prev_t
# if self._prev_t < 0.0:
# dt = 0.05
# acc_linear_ref=(self.ref_boxVelocityLinear1-self.vel_ref_prev)/dt
# self.vel_ref_prev=self.ref_boxVelocityLinear1
# self._int += 0.5 * (self.error_pose_euler - self._error_pose) * self._dt
# # Store current pose error
# self._error_pose = self.error_pose_euler
# # Get trajectory errors (reference - actual)
# e_p_linear_b = self._errors['pos']
# e_v_linear_b = self._errors['vel'][0:3]
# e_p_angular_b = self.error_orientation_rpy
# e_v_angular_b = self._errors['vel'][3:6]
# e_p_b = np.hstack((e_p_linear_b, e_p_angular_b))
# e_v_b = np.hstack((e_v_linear_b, e_v_angular_b))
# # Compute sliding surface s wrt body frame
# self._s_b = -e_v_b - np.multiply(self._lambda, e_p_b) \
# - self._sliding_int * np.multiply(np.square(self._lambda)/4, self._int)
# # Acceleration estimate
# self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
# self._accel_linear_estimate_b = np.dot(
# self._vehicle_model.rotItoB, (acc_linear_ref - \
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[0:3]))) + \
# np.multiply(self._lambda[0:3], e_v_linear_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[0:3]) / 4, e_p_linear_b)
# self._accel_angular_estimate_b = np.dot(self._vehicle_model.rotItoB, (np.zeros(3) -
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6]))) + \
# np.multiply(self._lambda[3:6], e_v_angular_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[3:6]) / 4,
# e_p_angular_b)
# self._accel_estimate_b = np.hstack((self._accel_linear_estimate_b, self._accel_angular_estimate_b))
# # Equivalent control
# acc = self._vehicle_model.to_SNAME(self._accel_estimate_b)
# if t>3 and t<=85:
# kp_x=.7
# kp_y=.8
# kp_z=.7
# else:
# kp_x=.2
# kp_y=.2
# kp_z=.2
# ki_x=0.1
# kd_x=0.3
# mu_x=0.3
# ki_y=0.1
# kd_y=0.3
# mu_y=0.3
# ki_z=0.1
# kd_z=0.3
# mu_z=0.3
# kp_p=.5
# ki_p=0.1
# kd_p=0.1
# mu_p=0.1
# kp_q=.5
# ki_q=0.1
# kd_q=0.1
# mu_q=0.1
# kp_r=.5
# ki_r=0.1
# kd_r=0.1
# mu_r=0.1
# m_bar_x=2042#100, 2642
# m_bar_y=2884#300, 3084
# m_bar_z=2522#200, 5522
# m_bar_p=1400
# m_bar_q=1400
# m_bar_r=1400
# delta=1
# beta=1#.00001
# Ldelta_c_x=1500
# Ldelta_c_y=1500
# Ldelta_c_z=1500
# Ldelta_c_p=1500
# Ldelta_c_q=1500
# Ldelta_c_r=1500
# LDelta_d_x=1000
# LDelta_d_y=1000
# LDelta_d_z=1000
# H_para_x=.2
# H_para_y=.4
# H_para_z=0#0.7
# H_para_p=0
# H_para_q=0
# H_para_r=0
# delta_z=0.001
# self._rotBtoI_dot = np.dot(cross_product_operator(self.vel_veh_prev1[3:6]), self._vehicle_model.rotBtoI)
# acc_angular=self._vehicle_model.to_SNAME(np.dot(self._vehicle_model.rotItoB, np.dot(self._rotBtoI_dot, self.vel_veh_prev1[3:6])))
# acc_cal_fromVel=(self.vel_veh_prev2[0:3]-self.vel_vehicle_prev)/dt
# ref_pose_x=self.x_wg.data
# self.ref_pose_x_prev=ref_pose_x
# error_pose=self.error_pose_euler
# self._int += 0.5 * (error_pose - self._error_pose_prev) * self._dt
# s_x=kp_x*e_p_b[0]+ki_x*self._int[0]+kd_x*e_v_b[0]
# s_y=kp_y*e_p_b[1]+ki_y*self._int[1]+kd_y*e_v_b[1]
# s_z=kp_z*e_p_b[2]+ki_z*self._int[2]+kd_z*e_v_b[2]
# s_p=kp_p*e_p_b[3]+ki_p*self._int[3]+kd_p*e_v_b[3]
# s_q=kp_q*e_p_b[4]+ki_q*self._int[4]+kd_q*e_v_b[4]
# s_r=kp_r*e_p_b[5]+ki_r*self._int[5]+kd_r*e_v_b[5]
# S=np.hstack((s_x,s_y,s_z,s_p,s_q,s_r))
# rho_x=delta*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)/(m_bar_x-delta)
# rho_y=delta*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)/(m_bar_y-delta)
# rho_z=delta*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)/(m_bar_z-delta)
# rho_p=delta*(self.rho_p_prev+kd_p*LDelta_d_z+kd_p*Ldelta_c_p)/(m_bar_p-delta)
# rho_q=delta*(self.rho_q_prev+kd_q*LDelta_d_z+kd_q*Ldelta_c_q)/(m_bar_q-delta)
# rho_r=delta*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)/(m_bar_r-delta)
# v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+0*rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+0*rho_y/kd_y*np.tanh(s_y/beta)
# v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+1/mu_z/kd_z*s_z+0*rho_x/kd_z*np.tanh(s_z/beta)
# v_p=0+kp_p/kd_p*e_v_b[3]+ki_p/kd_p*e_p_b[3]+1/mu_p/kd_p*s_p+0*rho_p/kd_p*np.tanh(s_p/beta)
# v_q=0+kp_q/kd_q*e_v_b[4]+ki_q/kd_q*e_p_b[4]+1/mu_q/kd_q*s_q+0*rho_q/kd_q*np.tanh(s_q/beta)
# v_r=0+kp_r/kd_r*e_v_b[5]+ki_r/kd_r*e_p_b[5]+1/mu_r/kd_r*s_r+0*rho_r/kd_r*np.tanh(s_r/beta)
# rho_x=25#8, 12
# rho_y=25#8, 12
#rho_x=(kd_x*delta+m_bar_x*delta_z)/(kd_x*m_bar_x-kd_x*delta-m_bar_x*delta_z)*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)
#rho_y=(kd_y*delta+m_bar_y*delta_z)/(kd_y*m_bar_y-kd_y*delta-m_bar_y*delta_z)*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)
# rho_z=(kd_z*delta+m_bar_z*delta_z)/(kd_z*m_bar_z-kd_z*delta-m_bar_z*delta_z)*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)
# rho_p=(kd_p*delta+m_bar_p*delta_z)/(kd_p*m_bar_p-kd_p*delta-m_bar_p*delta_z)*(self.rho_p_prev+kd_p*LDelta_d_x+kd_p*Ldelta_c_p)
# rho_q=(kd_q*delta+m_bar_q*delta_z)/(kd_q*m_bar_q-kd_q*delta-m_bar_q*delta_z)*(self.rho_q_prev+kd_q*LDelta_d_y+kd_q*Ldelta_c_q)
# rho_r=(kd_r*delta+m_bar_r*delta_z)/(kd_r*m_bar_r-kd_r*delta-m_bar_r*delta_z)*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)
# v_x=acc_linear_ref[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# v_z=acc_linear_ref[2]+1/mu_z/kd_z*s_z+rho_z/kd_z*np.tanh(s_z/beta)
# v_p=0+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/beta)
# v_q=0+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/beta)
# v_r=0+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/beta)
# self.h_hat_x=H_para_x*self.F_x_prev-m_bar_x*self._linear_acceleration.x
# self.h_hat_y=H_para_y*self.F_y_prev-m_bar_y*self._linear_acceleration.y
# self.h_hat_z=H_para_z*self.F_z_prev-m_bar_z*acc[2]
# self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
# self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
# self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
# H_hat=np.hstack((self.h_hat_x,self.h_hat_y,self.h_hat_z))
# self._tau_pid = self.update_pid()
# F_x=m_bar_x*v_x+self.h_hat_x
# F_y=m_bar_y*v_y+self.h_hat_y
# F_z=m_bar_z*v_z+self.h_hat_z
# F_p=m_bar_p*v_p+self.h_hat_p
# F_q=m_bar_q*v_q+self.h_hat_q
# F_r=m_bar_r*v_r+self.h_hat_r
# F=np.hstack((F_x,F_y,F_z,F_p,F_q,F_r))
# self._error_pose_prev = error_pose
# self.rho_x_prev=rho_x
# self.rho_y_prev=rho_y
# self.rho_z_prev=rho_z
# self.rho_p_prev=rho_p
# self.rho_q_prev=rho_q
# self.rho_r_prev=rho_r
# self.F_x_prev=F_x
# self.F_y_prev=F_y
# self.F_z_prev=F_z
# self.F_p_prev=F_p
# self.F_q_prev=F_q
# self.F_r_prev=F_r
# self.acc_angular_prev=acc_angular
# self.acc_cal_fromVel_prev=acc_cal_fromVel
# self.vel_vehicle_prev=self.vel_veh_prev2[0:3]
# self._slidingSurface=S
# self._tau[0]=F_x
# self._tau[1]=F_y
# self._tau[2]=F_z
# self._tau[3]=F_p
# self._tau[4]=F_q
# self._tau[5]=F_r
#self._tau[0]=self.F_tau[0]
#self._tau[1]=self.F_tau[1]
#self._tau[2]=self.F_tau[2]
#self._tau[3]=self.F_tau[3]
#self._tau[4]=self.F_tau[4]
#self._tau[5]=self.F_tau[5]
# Don't touch: proposed control without delay, full 6 DOF
# def update_controller(self):
# if not self._is_init:
# return False
# t = rospy.Time.now().to_sec()
# dt = t - self._prev_t
# if self._prev_t < 0.0:
# dt = 0.05
#
# acc_linear_ref=(self.ref_boxVelocityLinear1-self.vel_ref_prev)/dt
# self.vel_ref_prev=self.ref_boxVelocityLinear1
#
#
#
# self._int += 0.5 * (self.error_pose_euler - self._error_pose) * self._dt
# # Store current pose error
# self._error_pose = self.error_pose_euler
# # Get trajectory errors (reference - actual)
# e_p_linear_b = self._errors['pos']
# e_v_linear_b = self._errors['vel'][0:3]
#
# e_p_angular_b = self.error_orientation_rpy
# e_v_angular_b = self._errors['vel'][3:6]
# e_p_b = np.hstack((e_p_linear_b, e_p_angular_b))
# e_v_b = np.hstack((e_v_linear_b, e_v_angular_b))
# # Compute sliding surface s wrt body frame
# self._s_b = -e_v_b - np.multiply(self._lambda, e_p_b) \
# - self._sliding_int * np.multiply(np.square(self._lambda)/4, self._int)
# # Acceleration estimate
# self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
# self._accel_linear_estimate_b = np.dot(
# self._vehicle_model.rotItoB, (acc_linear_ref - \
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[0:3]))) + \
# np.multiply(self._lambda[0:3], e_v_linear_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[0:3]) / 4, e_p_linear_b)
# self._accel_angular_estimate_b = np.dot(self._vehicle_model.rotItoB, (np.zeros(3) -
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6]))) + \
# np.multiply(self._lambda[3:6], e_v_angular_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[3:6]) / 4,
# e_p_angular_b)
# self._accel_estimate_b = np.hstack((self._accel_linear_estimate_b, self._accel_angular_estimate_b))
# # Equivalent control
# acc = self._vehicle_model.to_SNAME(self._accel_estimate_b)
# self._f_eq = self._vehicle_model.compute_force(acc, use_sname=False)
# # Linear control
# self._f_lin = - np.multiply(self._k, self._s_b)
# # Uncertainties / disturbances upper boundaries for robust control
# self._rho_total = self._adaptive_bounds * self._rho_adapt + self._constant_bound * self._rho_constant
# # Adaptation law
# self._rho_adapt = self._rho_adapt + \
# (self._adapt_slope[0] * np.abs(self._s_b) +
# (self._adapt_slope[1] * np.abs(self._s_b) * np.abs(e_p_b) * np.abs(e_p_b)) +
# (self._adapt_slope[2] * np.abs(self._s_b) * np.abs(e_v_b) * np.abs(e_v_b)) +
# self._drift_prevent * (self._rho_0 - self._rho_adapt)) * dt
# # Robust control
# self._f_robust = - np.multiply(self._rho_total, (2 / np.pi) * np.arctan(np.multiply(self._c, self._s_b)))
# # Compute required forces and torques wrt body frame
# self.F_tau = self._ctrl_eq * self._f_eq + self._ctrl_lin * self._f_lin + self._ctrl_robust * self._f_robust
#
#
# kp_x=1
# ki_x=0.1
# kd_x=0.1
# mu_x=0.1
#
# kp_y=1
# ki_y=0.1
# kd_y=0.1
# mu_y=0.1
#
# kp_z=1.2
# ki_z=0.1
# kd_z=0.1
# mu_z=0.1
#
# kp_p=.5
# ki_p=0.1
# kd_p=0.1
# mu_p=0.1
#
# kp_q=.5
# ki_q=0.1
# kd_q=0.1
# mu_q=0.1
#
# kp_r=.5
# ki_r=0.1
# kd_r=0.1
# mu_r=0.1
#
# m_bar_x=2042#100, 2642
# m_bar_y=2884#300, 3084
# m_bar_z=2522#200, 5522
# m_bar_p=1400
# m_bar_q=1400
# m_bar_r=1400
# delta=1
# beta=1#.00001
#
# Ldelta_c_x=2500
# Ldelta_c_y=2500
# Ldelta_c_z=3500
# Ldelta_c_p=1500
# Ldelta_c_q=1500
# Ldelta_c_r=1500
# LDelta_d_x=1000
# LDelta_d_y=1000
# LDelta_d_z=1000
#
# H_para_x=.4
# H_para_y=.4
# H_para_z=0.2#0.7
# H_para_p=0
# H_para_q=0
# H_para_r=0
# delta_z=0.004
#
# self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
# acc_angular=self._vehicle_model.to_SNAME(np.dot(self._vehicle_model.rotItoB, np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6])))
# acc_cal_fromVel=(self._vehicle_model._vel[0:3]-self.vel_vehicle_prev)/dt
#
#
#
# ref_pose_x=self.x_wg.data
# self.ref_pose_x_prev=ref_pose_x
#
#
# error_pose=self.error_pose_euler
# self._int += 0.5 * (error_pose - self._error_pose_prev) * self._dt
# s_x=kp_x*e_p_b[0]+ki_x*self._int[0]+kd_x*e_v_b[0]
# s_y=kp_y*e_p_b[1]+ki_y*self._int[1]+kd_y*e_v_b[1]
# s_z=kp_z*e_p_b[2]+ki_z*self._int[2]+kd_z*e_v_b[2]
#
# s_p=kp_p*e_p_b[3]+ki_p*self._int[3]+kd_p*e_v_b[3]
# s_q=kp_q*e_p_b[4]+ki_q*self._int[4]+kd_q*e_v_b[4]
# s_r=kp_r*e_p_b[5]+ki_r*self._int[5]+kd_r*e_v_b[5]
# S=np.hstack((s_x,s_y,s_z,s_p,s_q,s_r))
#
# rho_x=delta*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)/(m_bar_x-delta)
# rho_y=delta*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)/(m_bar_y-delta)
# rho_z=delta*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)/(m_bar_z-delta)
# rho_p=delta*(self.rho_p_prev+kd_p*LDelta_d_z+kd_p*Ldelta_c_p)/(m_bar_p-delta)
# rho_q=delta*(self.rho_q_prev+kd_q*LDelta_d_z+kd_q*Ldelta_c_q)/(m_bar_q-delta)
# rho_r=delta*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)/(m_bar_r-delta)
#
# v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+1/mu_z/kd_z*s_z+rho_x/kd_z*np.tanh(s_z/beta)
# v_p=0+kp_p/kd_p*e_v_b[3]+ki_p/kd_p*e_p_b[3]+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/beta)
# v_q=0+kp_q/kd_q*e_v_b[4]+ki_q/kd_q*e_p_b[4]+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/beta)
# v_r=0+kp_r/kd_r*e_v_b[5]+ki_r/kd_r*e_p_b[5]+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/beta)
#
#
#
# #rho_x=(kd_x*delta+m_bar_x*delta_z)/(kd_x*m_bar_x-kd_x*delta-m_bar_x*delta_z)*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)
# #rho_y=(kd_y*delta+m_bar_y*delta_z)/(kd_y*m_bar_y-kd_y*delta-m_bar_y*delta_z)*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)
# #rho_z=(kd_z*delta+m_bar_z*delta_z)/(kd_z*m_bar_z-kd_z*delta-m_bar_z*delta_z)*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)
#
# #rho_p=(kd_p*delta+m_bar_p*delta_z)/(kd_p*m_bar_p-kd_p*delta-m_bar_p*delta_z)*(self.rho_p_prev+kd_p*LDelta_d_x+kd_p*Ldelta_c_p)
# #rho_q=(kd_q*delta+m_bar_q*delta_z)/(kd_q*m_bar_q-kd_q*delta-m_bar_q*delta_z)*(self.rho_q_prev+kd_q*LDelta_d_y+kd_q*Ldelta_c_q)
# #rho_r=(kd_r*delta+m_bar_r*delta_z)/(kd_r*m_bar_r-kd_r*delta-m_bar_r*delta_z)*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)
#
#
#
# #v_x=acc_linear_ref[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# #v_y=acc_linear_ref[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# #v_z=acc_linear_ref[2]+1/mu_z/kd_z*s_z+rho_z/kd_z*np.tanh(s_z/beta)
#
# #v_p=0+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/beta)
# #v_q=0+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/beta)
# #v_r=0+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/beta)
#
#
#
#
# self.h_hat_x=H_para_x*(self.F_x_prev-m_bar_x*self._linear_acceleration.x)
# self.h_hat_y=H_para_y*(self.F_y_prev-m_bar_y*self._linear_acceleration.y)
# self.h_hat_z=H_para_z*(self.F_z_prev-m_bar_z*acc[2])
# self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
# self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
# self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
#
# H_hat=np.hstack((self.h_hat_x,self.h_hat_y,self.h_hat_z))
# F_x=m_bar_x*v_x+self.h_hat_x
# F_y=m_bar_y*v_y+self.h_hat_y
# F_z=m_bar_z*v_z+self.h_hat_z
# F_p=m_bar_p*v_p+self.h_hat_p
# F_q=m_bar_q*v_q+self.h_hat_q
# F_r=m_bar_r*v_r+self.h_hat_r
# F=np.hstack((F_x,F_y,F_z,F_p,F_q,F_r))
# self._error_pose_prev = error_pose
#
# self.rho_x_prev=rho_x
# self.rho_y_prev=rho_y
# self.rho_z_prev=rho_z
# self.rho_p_prev=rho_p
# self.rho_q_prev=rho_q
# self.rho_r_prev=rho_r
# self.F_x_prev=F_x
# self.F_y_prev=F_y
# self.F_z_prev=F_z
# self.F_p_prev=F_p
# self.F_q_prev=F_q
# self.F_r_prev=F_r
# self.acc_angular_prev=acc_angular
# self.acc_cal_fromVel_prev=acc_cal_fromVel
# self.vel_vehicle_prev=self._vehicle_model._vel[0:3]
# self._slidingSurface=S
# self._tau[0]=F_x
# self._tau[1]=F_y
# self._tau[2]=F_z
# self._tau[3]=F_p
# self._tau[4]=F_q
# self._tau[5]=F_r
# #self._tau[0]=self.F_tau[0]
# #self._tau[1]=self.F_tau[1]
# #self._tau[2]=self.F_tau[2]
# #self._tau[3]=self.F_tau[3]
# #self._tau[4]=self.F_tau[4]
# #self._tau[5]=self.F_tau[5]
# Proposed control with delay, full 6 DOF
# def update_controller(self):
# if not self._is_init:
# return False
# t = rospy.Time.now().to_sec()
# dt = t - self._prev_t
# if self._prev_t < 0.0:
# dt = 0.05
# acc_linear_ref=(self.ref_boxVelocityLinear1-self.vel_ref_prev)/dt
# self.vel_ref_prev=self.ref_boxVelocityLinear1
# self._int += 0.5 * (self.error_pose_euler - self._error_pose) * self._dt
# # Store current pose error
# self._error_pose = self.error_pose_euler
# # Get trajectory errors (reference - actual)
# e_p_linear_b = self._errors['pos']
# e_v_linear_b = self._errors['vel'][0:3]
# e_p_angular_b = self.error_orientation_rpy
# e_v_angular_b = self._errors['vel'][3:6]
# e_p_b = np.hstack((e_p_linear_b, e_p_angular_b))
# e_v_b = np.hstack((e_v_linear_b, e_v_angular_b))
# # Compute sliding surface s wrt body frame
# self._s_b = -e_v_b - np.multiply(self._lambda, e_p_b) \
# - self._sliding_int * np.multiply(np.square(self._lambda)/4, self._int)
# # Acceleration estimate
# self._rotBtoI_dot = np.dot(cross_product_operator(self._vehicle_model._vel[3:6]), self._vehicle_model.rotBtoI)
# self._accel_linear_estimate_b = np.dot(
# self._vehicle_model.rotItoB, (acc_linear_ref - \
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[0:3]))) + \
# np.multiply(self._lambda[0:3], e_v_linear_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[0:3]) / 4, e_p_linear_b)
# self._accel_angular_estimate_b = np.dot(self._vehicle_model.rotItoB, (np.zeros(3) -
# np.dot(self._rotBtoI_dot, self._vehicle_model._vel[3:6]))) + \
# np.multiply(self._lambda[3:6], e_v_angular_b) + \
# self._sliding_int * np.multiply(np.square(self._lambda[3:6]) / 4,
# e_p_angular_b)
# self._accel_estimate_b = np.hstack((self._accel_linear_estimate_b, self._accel_angular_estimate_b))
# # Equivalent control
# acc = self._vehicle_model.to_SNAME(self._accel_estimate_b)
# self._f_eq = self._vehicle_model.compute_force(acc, use_sname=False)
# # Linear control
# self._f_lin = - np.multiply(self._k, self._s_b)
# # Uncertainties / disturbances upper boundaries for robust control
# self._rho_total = self._adaptive_bounds * self._rho_adapt + self._constant_bound * self._rho_constant
# # Adaptation law
# self._rho_adapt = self._rho_adapt + \
# (self._adapt_slope[0] * np.abs(self._s_b) +
# (self._adapt_slope[1] * np.abs(self._s_b) * np.abs(e_p_b) * np.abs(e_p_b)) +
# (self._adapt_slope[2] * np.abs(self._s_b) * np.abs(e_v_b) * np.abs(e_v_b)) +
# self._drift_prevent * (self._rho_0 - self._rho_adapt)) * dt
# # Robust control
# self._f_robust = - np.multiply(self._rho_total, (2 / np.pi) * np.arctan(np.multiply(self._c, self._s_b)))
# # Compute required forces and torques wrt body frame
# self.F_tau = self._ctrl_eq * self._f_eq + self._ctrl_lin * self._f_lin + self._ctrl_robust * self._f_robust
# kp_x=.4
# ki_x=0.1
# kd_x=0.1
# mu_x=0.1
#
# kp_y=.4
# ki_y=0.1
# kd_y=0.1
# mu_y=0.1
#
# kp_z=.4
# ki_z=0.1
# kd_z=0.1
# mu_z=0.1
#
# kp_p=.5
# ki_p=0.1
# kd_p=0.1
# mu_p=0.1
#
# kp_q=.5
# ki_q=0.1
# kd_q=0.1
# mu_q=0.1
#
# kp_r=.5
# ki_r=0.1
# kd_r=0.1
# mu_r=0.1
#
# m_bar_x=2042#100, 2642
# m_bar_y=2884#300, 3084
# m_bar_z=2522#200, 5522
# m_bar_p=1400
# m_bar_q=1400
# m_bar_r=1400
# delta=1
# beta=1#.00001
#
# Ldelta_c_x=1500
# Ldelta_c_y=1500
# Ldelta_c_z=1500
# Ldelta_c_p=1500
# Ldelta_c_q=1500
# Ldelta_c_r=1500
# LDelta_d_x=1000
# LDelta_d_y=1000
# LDelta_d_z=1000
#
# H_para_x=.2
# H_para_y=.2
# H_para_z=0.2#0.7
# H_para_p=0
# H_para_q=0
# H_para_r=0
# delta_z=0.001
#
# self._rotBtoI_dot = np.dot(cross_product_operator(self.vel_veh_prev1[3:6]), self._vehicle_model.rotBtoI)
# acc_angular=self._vehicle_model.to_SNAME(np.dot(self._vehicle_model.rotItoB, np.dot(self._rotBtoI_dot, self.vel_veh_prev1[3:6])))
# acc_cal_fromVel=(self.vel_veh_prev2[0:3]-self.vel_vehicle_prev)/dt
#
# ref_pose_x=self.x_wg.data
# self.ref_pose_x_prev=ref_pose_x
# error_pose=self.error_pose_euler
# self._int += 0.5 * (error_pose - self._error_pose_prev) * self._dt
# s_x=kp_x*e_p_b[0]+ki_x*self._int[0]+kd_x*e_v_b[0]
# s_y=kp_y*e_p_b[1]+ki_y*self._int[1]+kd_y*e_v_b[1]
# s_z=kp_z*e_p_b[2]+ki_z*self._int[2]+kd_z*e_v_b[2]
# s_p=kp_p*e_p_b[3]+ki_p*self._int[3]+kd_p*e_v_b[3]
# s_q=kp_q*e_p_b[4]+ki_q*self._int[4]+kd_q*e_v_b[4]
# s_r=kp_r*e_p_b[5]+ki_r*self._int[5]+kd_r*e_v_b[5]
# S=np.hstack((s_x,s_y,s_z,s_p,s_q,s_r))
# rho_x=delta*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)/(m_bar_x-delta)
# rho_y=delta*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)/(m_bar_y-delta)
# rho_z=delta*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)/(m_bar_z-delta)
# rho_p=delta*(self.rho_p_prev+kd_p*LDelta_d_z+kd_p*Ldelta_c_p)/(m_bar_p-delta)
# rho_q=delta*(self.rho_q_prev+kd_q*LDelta_d_z+kd_q*Ldelta_c_q)/(m_bar_q-delta)
# rho_r=delta*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)/(m_bar_r-delta)
# v_x=acc_linear_ref[0]+kp_x/kd_x*e_v_b[0]+ki_x/kd_x*e_p_b[0]+1/mu_x/kd_x*s_x+0*rho_x/kd_x*np.tanh(s_x/beta)
# v_y=acc_linear_ref[1]+kp_y/kd_y*e_v_b[1]+ki_y/kd_y*e_p_b[1]+1/mu_y/kd_y*s_y+0*rho_y/kd_y*np.tanh(s_y/beta)
# v_z=acc_linear_ref[2]+kp_z/kd_z*e_v_b[2]+ki_z/kd_z*e_p_b[2]+1/mu_z/kd_z*s_z+0*rho_x/kd_z*np.tanh(s_z/beta)
# v_p=0+kp_p/kd_p*e_v_b[3]+ki_p/kd_p*e_p_b[3]+1/mu_p/kd_p*s_p+0*rho_p/kd_p*np.tanh(s_p/beta)
# v_q=0+kp_q/kd_q*e_v_b[4]+ki_q/kd_q*e_p_b[4]+1/mu_q/kd_q*s_q+0*rho_q/kd_q*np.tanh(s_q/beta)
# v_r=0+kp_r/kd_r*e_v_b[5]+ki_r/kd_r*e_p_b[5]+1/mu_r/kd_r*s_r+0*rho_r/kd_r*np.tanh(s_r/beta)
# #rho_x=(kd_x*delta+m_bar_x*delta_z)/(kd_x*m_bar_x-kd_x*delta-m_bar_x*delta_z)*(self.rho_x_prev+kd_x*LDelta_d_x+kd_x*Ldelta_c_x)
# #rho_y=(kd_y*delta+m_bar_y*delta_z)/(kd_y*m_bar_y-kd_y*delta-m_bar_y*delta_z)*(self.rho_y_prev+kd_y*LDelta_d_y+kd_y*Ldelta_c_y)
# #rho_z=(kd_z*delta+m_bar_z*delta_z)/(kd_z*m_bar_z-kd_z*delta-m_bar_z*delta_z)*(self.rho_z_prev+kd_z*LDelta_d_z+kd_z*Ldelta_c_z)
# #rho_p=(kd_p*delta+m_bar_p*delta_z)/(kd_p*m_bar_p-kd_p*delta-m_bar_p*delta_z)*(self.rho_p_prev+kd_p*LDelta_d_x+kd_p*Ldelta_c_p)
# #rho_q=(kd_q*delta+m_bar_q*delta_z)/(kd_q*m_bar_q-kd_q*delta-m_bar_q*delta_z)*(self.rho_q_prev+kd_q*LDelta_d_y+kd_q*Ldelta_c_q)
# #rho_r=(kd_r*delta+m_bar_r*delta_z)/(kd_r*m_bar_r-kd_r*delta-m_bar_r*delta_z)*(self.rho_r_prev+kd_r*LDelta_d_z+kd_r*Ldelta_c_r)
# #v_x=acc_linear_ref[0]+1/mu_x/kd_x*s_x+rho_x/kd_x*np.tanh(s_x/beta)
# #v_y=acc_linear_ref[1]+1/mu_y/kd_y*s_y+rho_y/kd_y*np.tanh(s_y/beta)
# #v_z=acc_linear_ref[2]+1/mu_z/kd_z*s_z+rho_z/kd_z*np.tanh(s_z/beta)
# #v_p=0+1/mu_p/kd_p*s_p+rho_p/kd_p*np.tanh(s_p/beta)
# #v_q=0+1/mu_q/kd_q*s_q+rho_q/kd_q*np.tanh(s_q/beta)
# #v_r=0+1/mu_r/kd_r*s_r+rho_r/kd_r*np.tanh(s_r/beta)
# self.h_hat_x=H_para_x*(self.F_x_prev-m_bar_x*self._linear_acceleration.x)
# self.h_hat_y=H_para_y*(self.F_y_prev-m_bar_y*self._linear_acceleration.y)
# self.h_hat_z=H_para_z*(self.F_z_prev-m_bar_z*acc[2])
# self.h_hat_p=H_para_p*(self.F_p_prev-m_bar_p*self._accel_angular_estimate_b[0])
# self.h_hat_q=H_para_q*(self.F_q_prev-m_bar_q*self._accel_angular_estimate_b[1])
# self.h_hat_r=H_para_r*(self.F_r_prev-m_bar_r*self._accel_angular_estimate_b[2])
# H_hat=np.hstack((self.h_hat_x,self.h_hat_y,self.h_hat_z))
#
# self._tau_pid = self.update_pid()
# F_x=m_bar_x*v_x+self.h_hat_x
# F_y=m_bar_y*v_y+self.h_hat_y
# F_z=m_bar_z*v_z+self.h_hat_z
# F_p=m_bar_p*v_p+self.h_hat_p
# F_q=m_bar_q*v_q+self.h_hat_q
# F_r=m_bar_r*v_r+self.h_hat_r
# F=np.hstack((F_x,F_y,F_z,F_p,F_q,F_r))
# self._error_pose_prev = error_pose
# self.rho_x_prev=rho_x
# self.rho_y_prev=rho_y
# self.rho_z_prev=rho_z
# self.rho_p_prev=rho_p
# self.rho_q_prev=rho_q
# self.rho_r_prev=rho_r
# self.F_x_prev=F_x
# self.F_y_prev=F_y
# self.F_z_prev=F_z
# self.F_p_prev=F_p
# self.F_q_prev=F_q
# self.F_r_prev=F_r
# self.acc_angular_prev=acc_angular
# self.acc_cal_fromVel_prev=acc_cal_fromVel
# self.vel_vehicle_prev=self.vel_veh_prev2[0:3]
# self._slidingSurface=S
# self._tau[0]=F_x
# self._tau[1]=F_y
# self._tau[2]=F_z
# self._tau[3]=F_p
# self._tau[4]=F_q
# self._tau[5]=F_r
# #self._tau[0]=self.F_tau[0]
# #self._tau[1]=self.F_tau[1]
# #self._tau[2]=self.F_tau[2]
# #self._tau[3]=self.F_tau[3]
# #self._tau[4]=self.F_tau[4]
# #self._tau[5]=self.F_tau[5]
        # self._slidingSurface = self._vehicle_model.restoring_forces
        # self._restoring = self._vehicle_model._g
        # self._MPara = self._vehicle_model._linear_damping
        # self._CPara = self._vehicle_model._C
        # self._DPara = self._vehicle_model._D
        self._velocity = self._vehicle_model._vel
        # NOTE: rho_x, rho_y and rho_z are defined only inside the commented-out
        # control block above, so referencing them here would raise a NameError;
        # the debug assignments and publishes below stay disabled until that
        # block is restored.
        # self._dt_ = rho_x
        # self._dt1_ = rho_y
        # self._dt_ = F_u
        # _tau and _slidingSurface are only updated in the commented block and
        # are assumed to be initialized elsewhere (e.g. in __init__).
        self.publish_control_wrench(self._tau)
        self.publish_slidingSurface(self._slidingSurface)
        # self.publish_restoring(self._restoring)
        # self.publish_ref_u(x_u)
        # self.publish_veh_u(self._vehicle_model._pose['pos'][0])
        # self.publish_error_up(error_up)
        # self.publish_surface_up(u_surface)
        # self.publish_force_up(f_surge)
        # self.pub_dt(rho_z)
        # self.pub_dt1(rho_y)
        # self.publish_MPara(self._MPara)
        # self.publish_CPara(self._CPara)
        # self.publish_DPara(self._DPara)
        # self.publish_vel(self._velocity)
        # self.publish_generalForce(self._generalForce)
        # self.publish_equivalentControl(self._f_eq)
        self._prev_t = t
if __name__ == '__main__':
    print('Starting Model-based Sliding Mode Controller')
    rospy.init_node('rov_mb_sm_controller')
    try:
        node = ROV_MB_SMController()
        rospy.spin()
    except rospy.ROSInterruptException:
        print('caught exception')
    print('exiting')
| 37.45255 | 144 | 0.613438 | 11,060 | 58,014 | 2.778571 | 0.033816 | 0.02421 | 0.021867 | 0.021867 | 0.840975 | 0.81888 | 0.765286 | 0.747649 | 0.730826 | 0.730337 | 0 | 0.040009 | 0.252939 | 58,014 | 1,548 | 145 | 37.476744 | 0.669059 | 0.663943 | 0 | 0.165049 | 0 | 0 | 0.049021 | 0.008179 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.01699 | null | null | 0.026699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81052e8cc7271b995e9cea44da679617517b8fee | 1,236 | py | Python | University - Team Projects/Technicom APG/script-conversion.py | mpuheim/Various | b96caabde036530329f0ebbe2e3f176dfe691d1c | [
"RSA-MD"
] | null | null | null | University - Team Projects/Technicom APG/script-conversion.py | mpuheim/Various | b96caabde036530329f0ebbe2e3f176dfe691d1c | [
"RSA-MD"
] | null | null | null | University - Team Projects/Technicom APG/script-conversion.py | mpuheim/Various | b96caabde036530329f0ebbe2e3f176dfe691d1c | [
"RSA-MD"
] | null | null | null | # simple script to test validity of file conversions
from modules import utils, fileio
# load customers table as CSV
tab = fileio.loadTable('data/customers.csv')
# save customers table as JSON
fileio.saveTable(tab,'customers.json')
# load customers table as JSON
tab = fileio.loadTable('customers.json')
# save customers table as CSV
fileio.saveTable(tab,'customers.csv')
# load customers table as CSV
tab = fileio.loadTable('customers.csv')
# save customers table as JSON
fileio.saveTable(tab,'customers_2.json')
# load customers table as JSON
tab = fileio.loadTable('customers_2.json')
# save customers table as CSV
fileio.saveTable(tab,'customers_2.csv')
# load purchases table as CSV
tab = fileio.loadTable('data/purchases.csv')
# save purchases table as JSON
fileio.saveTable(tab,'purchases.json')
# load purchases table as JSON
tab = fileio.loadTable('purchases.json')
# save purchases table as CSV
fileio.saveTable(tab,'purchases.csv')
# load purchases table as CSV
tab = fileio.loadTable('purchases.csv')
# save purchases table as JSON
fileio.saveTable(tab,'purchases_2.json')
# load purchases table as JSON
tab = fileio.loadTable('purchases_2.json')
# save purchases table as CSV
fileio.saveTable(tab,'purchases_2.csv')
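# Hedged extension (not part of the original script): check that the final
# round-tripped CSVs still load to the same table as the source data. This
# assumes fileio.loadTable returns a directly comparable value (for example
# a list of rows); adapt the comparison if the real return type differs.
for name in ('customers', 'purchases'):
    original = fileio.loadTable('data/%s.csv' % name)
    roundtrip = fileio.loadTable('%s_2.csv' % name)
    assert original == roundtrip, name + ' round trip changed the data'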
| 30.146341 | 53 | 0.772654 | 182 | 1,236 | 5.21978 | 0.148352 | 0.117895 | 0.134737 | 0.084211 | 0.915789 | 0.911579 | 0.911579 | 0.903158 | 0.816842 | 0.724211 | 0 | 0.005515 | 0.119741 | 1,236 | 40 | 54 | 30.9 | 0.866728 | 0.409385 | 0 | 0 | 0 | 0 | 0.33474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
812f3de3ebec8aba082eb652fabc78accb92a306 | 53,927 | py | Python | ixnetwork_restpy/testplatform/sessions/ixnetwork/topology/igmpquerier_38c883b0cec7ffb5405af90bf1b8cda5.py | OpenIxia/ixnetwork_restpy | f628db450573a104f327cf3c737ca25586e067ae | [
"MIT"
] | 20 | 2019-05-07T01:59:14.000Z | 2022-02-11T05:24:47.000Z | ixnetwork_restpy/testplatform/sessions/ixnetwork/topology/igmpquerier_38c883b0cec7ffb5405af90bf1b8cda5.py | OpenIxia/ixnetwork_restpy | f628db450573a104f327cf3c737ca25586e067ae | [
"MIT"
] | 60 | 2019-04-03T18:59:35.000Z | 2022-02-22T12:05:05.000Z | ixnetwork_restpy/testplatform/sessions/ixnetwork/topology/igmpquerier_38c883b0cec7ffb5405af90bf1b8cda5.py | OpenIxia/ixnetwork_restpy | f628db450573a104f327cf3c737ca25586e067ae | [
"MIT"
] | 13 | 2019-05-20T10:48:31.000Z | 2021-10-06T07:45:44.000Z | # MIT LICENSE
#
# Copyright 1997 - 2020 by IXIA Keysight
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from ixnetwork_restpy.base import Base
from ixnetwork_restpy.files import Files
from typing import List, Any, Union
class IgmpQuerier(Base):
    """IGMP Querier Configuration
    The IgmpQuerier class encapsulates a list of igmpQuerier resources that are managed by the user.
    A list of resources can be retrieved from the server using the IgmpQuerier.find() method.
    The list can be managed by using the IgmpQuerier.add() and IgmpQuerier.remove() methods.
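    A hedged usage sketch (the parent accessor is an assumption; the actual
    parent depends on where igmpQuerier sits in your NGPF stack):
        querier = ipv4.IgmpQuerier.add(Name='querier-1', Multiplier=1)
        querier.find(Name='^querier-1$')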
"""
__slots__ = ()
_SDM_NAME = 'igmpQuerier'
_SDM_ATT_MAP = {
'Active': 'active',
'ConnectedVia': 'connectedVia',
'Count': 'count',
'DescriptiveName': 'descriptiveName',
'DiscardLearntInfo': 'discardLearntInfo',
'Errors': 'errors',
'GeneralQueryInterval': 'generalQueryInterval',
'GeneralQueryResponseInterval': 'generalQueryResponseInterval',
'Multiplier': 'multiplier',
'Name': 'name',
'ProxyQuerier': 'proxyQuerier',
'RobustnessVariable': 'robustnessVariable',
'RouterAlert': 'routerAlert',
'SessionInfo': 'sessionInfo',
'SessionStatus': 'sessionStatus',
'SpecificQueryResponseInterval': 'specificQueryResponseInterval',
'SpecificQueryTransmissionCount': 'specificQueryTransmissionCount',
'StackedLayers': 'stackedLayers',
'StartupQueryCount': 'startupQueryCount',
'StateCounts': 'stateCounts',
'Status': 'status',
'SupportElection': 'supportElection',
'SupportOlderVersionHost': 'supportOlderVersionHost',
'SupportOlderVersionQuerier': 'supportOlderVersionQuerier',
'VersionType': 'versionType',
}
_SDM_ENUM_MAP = {
'status': ['configured', 'error', 'mixed', 'notStarted', 'started', 'starting', 'stopping'],
}
def __init__(self, parent, list_op=False):
super(IgmpQuerier, self).__init__(parent, list_op)
@property
def LearnedInfo(self):
"""
Returns
-------
- obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfo_ff4d5e5643a63bccb40b6cf64fc58100.LearnedInfo): An instance of the LearnedInfo class
Raises
------
- ServerError: The server has encountered an uncategorized error condition
"""
from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfo_ff4d5e5643a63bccb40b6cf64fc58100 import LearnedInfo
if self._properties.get('LearnedInfo', None) is not None:
return self._properties.get('LearnedInfo')
else:
return LearnedInfo(self)
@property
def Active(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Activate/Deactivate Configuration
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['Active']))
@property
def ConnectedVia(self):
# type: () -> List[str]
"""DEPRECATED
Returns
-------
- list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*]): List of layers this layer is used to connect with to the wire.
"""
return self._get_attribute(self._SDM_ATT_MAP['ConnectedVia'])
@ConnectedVia.setter
def ConnectedVia(self, value):
# type: (List[str]) -> None
self._set_attribute(self._SDM_ATT_MAP['ConnectedVia'], value)
@property
def Count(self):
# type: () -> int
"""
Returns
-------
- number: Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group.
"""
return self._get_attribute(self._SDM_ATT_MAP['Count'])
@property
def DescriptiveName(self):
# type: () -> str
"""
Returns
-------
- str: Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context.
"""
return self._get_attribute(self._SDM_ATT_MAP['DescriptiveName'])
@property
def DiscardLearntInfo(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Discard Learned Info
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['DiscardLearntInfo']))
@property
def Errors(self):
"""
Returns
-------
- list(dict(arg1:str[None | /api/v1/sessions/1/ixnetwork//.../*],arg2:list[str])): A list of errors that have occurred
"""
return self._get_attribute(self._SDM_ATT_MAP['Errors'])
@property
def GeneralQueryInterval(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): General Query Interval in seconds
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['GeneralQueryInterval']))
@property
def GeneralQueryResponseInterval(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): General Query Response Interval in milliseconds
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['GeneralQueryResponseInterval']))
@property
def Multiplier(self):
# type: () -> int
"""
Returns
-------
- number: Number of layer instances per parent instance (multiplier)
"""
return self._get_attribute(self._SDM_ATT_MAP['Multiplier'])
@Multiplier.setter
def Multiplier(self, value):
# type: (int) -> None
self._set_attribute(self._SDM_ATT_MAP['Multiplier'], value)
@property
def Name(self):
# type: () -> str
"""
Returns
-------
- str: Name of NGPF element, guaranteed to be unique in Scenario
"""
return self._get_attribute(self._SDM_ATT_MAP['Name'])
@Name.setter
def Name(self, value):
# type: (str) -> None
self._set_attribute(self._SDM_ATT_MAP['Name'], value)
@property
def ProxyQuerier(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Enable Proxy Querier
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['ProxyQuerier']))
@property
def RobustnessVariable(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Robustness Variable
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['RobustnessVariable']))
@property
def RouterAlert(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Router Alert
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['RouterAlert']))
@property
def SessionInfo(self):
# type: () -> List[str]
"""
Returns
-------
- list(str[noIfaceUp | up]): Logs additional information about the session state
"""
return self._get_attribute(self._SDM_ATT_MAP['SessionInfo'])
@property
def SessionStatus(self):
# type: () -> List[str]
"""
Returns
-------
- list(str[down | notStarted | up]): Current state of protocol session: Not Started - session negotiation not started, the session is not active yet. Down - actively trying to bring up a protocol session, but negotiation didn't successfully complete (yet). Up - session came up successfully.
"""
return self._get_attribute(self._SDM_ATT_MAP['SessionStatus'])
@property
def SpecificQueryResponseInterval(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Specific Query Response Interval in milliseconds
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['SpecificQueryResponseInterval']))
@property
def SpecificQueryTransmissionCount(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Specific Query Transmission Count
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['SpecificQueryTransmissionCount']))
@property
def StackedLayers(self):
# type: () -> List[str]
"""
Returns
-------
- list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*]): List of secondary (many to one) child layer protocols
"""
return self._get_attribute(self._SDM_ATT_MAP['StackedLayers'])
@StackedLayers.setter
def StackedLayers(self, value):
# type: (List[str]) -> None
self._set_attribute(self._SDM_ATT_MAP['StackedLayers'], value)
@property
def StartupQueryCount(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Startup Query Count
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['StartupQueryCount']))
@property
def StateCounts(self):
"""
Returns
-------
- dict(total:number,notStarted:number,down:number,up:number): A list of values that indicates the total number of sessions, the number of sessions not started, the number of sessions down and the number of sessions that are up
"""
return self._get_attribute(self._SDM_ATT_MAP['StateCounts'])
@property
def Status(self):
# type: () -> str
"""
Returns
-------
- str(configured | error | mixed | notStarted | started | starting | stopping): Running status of associated network element. Once in Started state, protocol sessions will begin to negotiate.
"""
return self._get_attribute(self._SDM_ATT_MAP['Status'])
@property
def SupportElection(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Support Election
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['SupportElection']))
@property
def SupportOlderVersionHost(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Support Older Version Host
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['SupportOlderVersionHost']))
@property
def SupportOlderVersionQuerier(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Support Older Version Querier
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['SupportOlderVersionQuerier']))
@property
def VersionType(self):
# type: () -> 'Multivalue'
"""
Returns
-------
- obj(ixnetwork_restpy.multivalue.Multivalue): Version
"""
from ixnetwork_restpy.multivalue import Multivalue
return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP['VersionType']))
def update(self, ConnectedVia=None, Multiplier=None, Name=None, StackedLayers=None):
# type: (List[str], int, str, List[str]) -> IgmpQuerier
"""Updates igmpQuerier resource on the server.
This method has some named parameters with a type: obj (Multivalue).
The Multivalue class has documentation that details the possible values for those named parameters.
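For example (an illustrative sketch; 'version2' is an assumed enum string), a multivalue attribute is typically set through the
Multivalue helpers rather than through update(): igmp_querier.VersionType.Single('version2')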
Args
----
- ConnectedVia (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of layers this layer is used to connect with to the wire.
- Multiplier (number): Number of layer instances per parent instance (multiplier)
- Name (str): Name of NGPF element, guaranteed to be unique in Scenario
- StackedLayers (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of secondary (many to one) child layer protocols
Raises
------
- ServerError: The server has encountered an uncategorized error condition
"""
return self._update(self._map_locals(self._SDM_ATT_MAP, locals()))
def add(self, ConnectedVia=None, Multiplier=None, Name=None, StackedLayers=None):
# type: (List[str], int, str, List[str]) -> IgmpQuerier
"""Adds a new igmpQuerier resource on the server and adds it to the container.
Args
----
- ConnectedVia (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of layers this layer is used to connect with to the wire.
- Multiplier (number): Number of layer instances per parent instance (multiplier)
- Name (str): Name of NGPF element, guaranteed to be unique in Scenario
- StackedLayers (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of secondary (many to one) child layer protocols
Returns
-------
- self: This instance with all currently retrieved igmpQuerier resources using find and the newly added igmpQuerier resources available through an iterator or index
Raises
------
- ServerError: The server has encountered an uncategorized error condition
"""
return self._create(self._map_locals(self._SDM_ATT_MAP, locals()))
def remove(self):
"""Deletes all the contained igmpQuerier resources in this instance from the server.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
self._delete()
def find(self, ConnectedVia=None, Count=None, DescriptiveName=None, Errors=None, Multiplier=None, Name=None, SessionInfo=None, SessionStatus=None, StackedLayers=None, StateCounts=None, Status=None):
"""Finds and retrieves igmpQuerier resources from the server.
All named parameters are evaluated on the server using regex. The named parameters can be used to selectively retrieve igmpQuerier resources from the server.
To retrieve an exact match ensure the parameter value starts with ^ and ends with $
By default the find method takes no parameters and will retrieve all igmpQuerier resources from the server.
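For example (illustrative), find(Name='^querier-1$') retrieves only the resource whose name is exactly querier-1.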
Args
----
- ConnectedVia (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of layers this layer is used to connect with to the wire.
- Count (number): Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group.
- DescriptiveName (str): Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context.
- Errors (list(dict(arg1:str[None | /api/v1/sessions/1/ixnetwork//.../*],arg2:list[str]))): A list of errors that have occurred
- Multiplier (number): Number of layer instances per parent instance (multiplier)
- Name (str): Name of NGPF element, guaranteed to be unique in Scenario
- SessionInfo (list(str[noIfaceUp | up])): Logs additional information about the session state
- SessionStatus (list(str[down | notStarted | up])): Current state of protocol session: Not Started - session negotiation not started, the session is not active yet. Down - actively trying to bring up a protocol session, but negotiation didn't successfully complete (yet). Up - session came up successfully.
- StackedLayers (list(str[None | /api/v1/sessions/1/ixnetwork/topology/.../*])): List of secondary (many to one) child layer protocols
- StateCounts (dict(total:number,notStarted:number,down:number,up:number)): A list of values that indicates the total number of sessions, the number of sessions not started, the number of sessions down and the number of sessions that are up
- Status (str(configured | error | mixed | notStarted | started | starting | stopping)): Running status of associated network element. Once in Started state, protocol sessions will begin to negotiate.
Returns
-------
- self: This instance with matching igmpQuerier resources retrieved from the server available through an iterator or index
Raises
------
- ServerError: The server has encountered an uncategorized error condition
"""
return self._select(self._map_locals(self._SDM_ATT_MAP, locals()))
def read(self, href):
"""Retrieves a single instance of igmpQuerier data from the server.
Args
----
- href (str): An href to the instance to be retrieved
Returns
-------
- self: This instance with the igmpQuerier resources from the server available through an iterator or index
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
return self._read(href)
def Abort(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the abort operation on the server.
Abort CPF control plane (equivalent to demoting to the kUnconfigured state).
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
abort(async_operation=bool)
---------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
abort(SessionIndices=list, async_operation=bool)
------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
abort(SessionIndices=string, async_operation=bool)
--------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
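An illustrative call (the variable name is an assumption): igmp_querier.Abort(SessionIndices='1-2', async_operation=False)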
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('abort', payload=payload, response_object=None)
def ClearAllLearnedInfoInClient(self, *args, **kwargs):
# type: (*Any, **Any) -> Union[List[str], None]
"""Executes the clearAllLearnedInfoInClient operation on the server.
Clears ALL routes from GUI grid for the selected BGP Peers.
clearAllLearnedInfoInClient(Arg2=list, async_operation=bool)list
----------------------------------------------------------------
- Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
- Returns list(str): ID to associate each async action invocation
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self.href }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('clearAllLearnedInfoInClient', payload=payload, response_object=None)
def GetLearnedInfo(self, *args, **kwargs):
# type: (*Any, **Any) -> Union[List[str], None]
"""Executes the getLearnedInfo operation on the server.
Gets all the info learned by this IGMP Querier.
getLearnedInfo(Arg2=list, async_operation=bool)list
---------------------------------------------------
- Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
- Returns list(str): ID to associate each async action invocation
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self.href }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('getLearnedInfo', payload=payload, response_object=None)
def IgmpGetLearnedInfo(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpGetLearnedInfo operation on the server.
Get Learned Info
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpGetLearnedInfo(async_operation=bool)
----------------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpGetLearnedInfo(SessionIndices=list, async_operation=bool)
-------------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpGetLearnedInfo(SessionIndices=string, async_operation=bool)
---------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpGetLearnedInfo', payload=payload, response_object=None)
def IgmpResumePeriodicGenQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpResumePeriodicGenQuery operation on the server.
Resume Periodic General Query
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpResumePeriodicGenQuery(async_operation=bool)
------------------------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpResumePeriodicGenQuery(SessionIndices=list, async_operation=bool)
---------------------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpResumePeriodicGenQuery(SessionIndices=string, async_operation=bool)
-----------------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpResumePeriodicGenQuery', payload=payload, response_object=None)
def IgmpSendSpecificQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpSendSpecificQuery operation on the server.
Send Specific Query
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpSendSpecificQuery(Start_group_address=string, Group_count=number, Start_source_address=string, Source_count=number, Source_increment_step=number, async_operation=bool)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Start_group_address (str): This parameter requires a start_group_address of type kString
- Group_count (number): This parameter requires a group_count of type kInteger
- Start_source_address (str): This parameter requires a start_source_address of type kString
- Source_count (number): This parameter requires a source_count of type kInteger
- Source_increment_step (number): This parameter requires a source_increment_step of type kInteger
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpSendSpecificQuery(Start_group_address=string, Group_count=number, Start_source_address=string, Source_count=number, Source_increment_step=number, SessionIndices=list, async_operation=bool)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Start_group_address (str): This parameter requires a start_group_address of type kString
- Group_count (number): This parameter requires a group_count of type kInteger
- Start_source_address (str): This parameter requires a start_source_address of type kString
- Source_count (number): This parameter requires a source_count of type kInteger
- Source_increment_step (number): This parameter requires a source_increment_step of type kInteger
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpSendSpecificQuery(SessionIndices=string, Start_group_address=string, Group_count=number, Start_source_address=string, Source_count=number, Source_increment_step=number, async_operation=bool)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- Start_group_address (str): This parameter requires a start_group_address of type kString
- Group_count (number): This parameter requires a group_count of type kInteger
- Start_source_address (str): This parameter requires a start_source_address of type kString
- Source_count (number): This parameter requires a source_count of type kInteger
- Source_increment_step (number): This parameter requires a source_increment_step of type kInteger
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpSendSpecificQuery', payload=payload, response_object=None)
def IgmpStartQuerier(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpStartQuerier operation on the server.
Start IGMP Querier
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpStartQuerier(async_operation=bool)
--------------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStartQuerier(SessionIndices=list, async_operation=bool)
-----------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStartQuerier(SessionIndices=string, async_operation=bool)
-------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpStartQuerier', payload=payload, response_object=None)
def IgmpStopPeriodicGenQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpStopPeriodicGenQuery operation on the server.
Stop Periodic General Query
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpStopPeriodicGenQuery(async_operation=bool)
----------------------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStopPeriodicGenQuery(SessionIndices=list, async_operation=bool)
-------------------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStopPeriodicGenQuery(SessionIndices=string, async_operation=bool)
---------------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpStopPeriodicGenQuery', payload=payload, response_object=None)
def IgmpStopQuerier(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the igmpStopQuerier operation on the server.
Stop IGMP Querier
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
igmpStopQuerier(async_operation=bool)
-------------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStopQuerier(SessionIndices=list, async_operation=bool)
----------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
igmpStopQuerier(SessionIndices=string, async_operation=bool)
------------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('igmpStopQuerier', payload=payload, response_object=None)
def RestartDown(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the restartDown operation on the server.
Stop and start interfaces and sessions that are in Down state.
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
restartDown(async_operation=bool)
---------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
restartDown(SessionIndices=list, async_operation=bool)
------------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
restartDown(SessionIndices=string, async_operation=bool)
--------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('restartDown', payload=payload, response_object=None)
def ResumePeriodicGenQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> Union[List[str], None]
"""Executes the resumePeriodicGenQuery operation on the server.
Resume Sending Periodic General Query
resumePeriodicGenQuery(Arg2=list, async_operation=bool)list
-----------------------------------------------------------
- Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
- Returns list(str): ID to associate each async action invocation
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self.href }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('resumePeriodicGenQuery', payload=payload, response_object=None)
def SendSpecificQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> Union[List[str], None]
"""Executes the sendSpecificQuery operation on the server.
Send Specific Query
sendSpecificQuery(Arg2=list, Arg3=string, Arg4=number, Arg5=string, Arg6=number, Arg7=number, async_operation=bool)list
-----------------------------------------------------------------------------------------------------------------------
- Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
- Arg3 (str): Start Group Address.
- Arg4 (number): Group Count.
- Arg5 (str): Start Source Address.
- Arg6 (number): Source Count.
- Arg7 (number): Source Increment Step.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
- Returns list(str): ID to associate each async action invocation
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self.href }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('sendSpecificQuery', payload=payload, response_object=None)
def Start(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the start operation on the server.
Start CPF control plane (equivalent to promoting to the negotiated state).
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
start(async_operation=bool)
---------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
start(SessionIndices=list, async_operation=bool)
------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
start(SessionIndices=string, async_operation=bool)
--------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('start', payload=payload, response_object=None)
def StartIGMP(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the startIGMP operation on the server.
Start IGMP protocol in selected interfaces
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
startIGMP(async_operation=bool)
-------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
startIGMP(SessionIndices=list, async_operation=bool)
----------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
startIGMP(SessionIndices=string, async_operation=bool)
------------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
startIGMP(Arg2=string, Arg3=list, async_operation=bool)
-------------------------------------------------------
- Arg2 (str): ID to associate each async action invocation
- Arg3 (list(number)): List of indices into the group range grid. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('startIGMP', payload=payload, response_object=None)
def Stop(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the stop operation on the server.
Stop CPF control plane (equivalent to demoting to the PreValidated-DoDDone state).
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
stop(async_operation=bool)
--------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
stop(SessionIndices=list, async_operation=bool)
-----------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
stop(SessionIndices=string, async_operation=bool)
-------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('stop', payload=payload, response_object=None)
def StopIGMP(self, *args, **kwargs):
# type: (*Any, **Any) -> None
"""Executes the stopIGMP operation on the server.
Stop IGMP protocol in selected interfaces
The IxNetwork model allows for multiple method Signatures with the same name while python does not.
stopIGMP(async_operation=bool)
------------------------------
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
stopIGMP(SessionIndices=list, async_operation=bool)
---------------------------------------------------
- SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
stopIGMP(SessionIndices=string, async_operation=bool)
-----------------------------------------------------
- SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
stopIGMP(Arg2=string, Arg3=list, async_operation=bool)
------------------------------------------------------
- Arg2 (str): ID to associate each async action invocation
- Arg3 (list(number)): List of indices into the group range grid. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('stopIGMP', payload=payload, response_object=None)
def StopPeriodicGenQuery(self, *args, **kwargs):
# type: (*Any, **Any) -> Union[List[str], None]
"""Executes the stopPeriodicGenQuery operation on the server.
Stop Sending Periodic General Query
stopPeriodicGenQuery(Arg2=list, async_operation=bool)list
---------------------------------------------------------
- Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
- async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete.
- Returns list(str): ID to associate each async action invocation
Raises
------
- NotFoundError: The requested resource does not exist on the server
- ServerError: The server has encountered an uncategorized error condition
"""
payload = { "Arg1": self.href }
for i in range(len(args)): payload['Arg%s' % (i + 2)] = args[i]
for item in kwargs.items(): payload[item[0]] = item[1]
return self._execute('stopPeriodicGenQuery', payload=payload, response_object=None)
def get_device_ids(self, PortNames=None, Active=None, DiscardLearntInfo=None, GeneralQueryInterval=None, GeneralQueryResponseInterval=None, ProxyQuerier=None, RobustnessVariable=None, RouterAlert=None, SpecificQueryResponseInterval=None, SpecificQueryTransmissionCount=None, StartupQueryCount=None, SupportElection=None, SupportOlderVersionHost=None, SupportOlderVersionQuerier=None, VersionType=None):
"""Base class infrastructure that gets a list of igmpQuerier device ids encapsulated by this object.
Use the optional regex parameters in the method to refine the list of device ids encapsulated by this object.
Args
----
- PortNames (str): optional regex of port names
- Active (str): optional regex of active
- DiscardLearntInfo (str): optional regex of discardLearntInfo
- GeneralQueryInterval (str): optional regex of generalQueryInterval
- GeneralQueryResponseInterval (str): optional regex of generalQueryResponseInterval
- ProxyQuerier (str): optional regex of proxyQuerier
- RobustnessVariable (str): optional regex of robustnessVariable
- RouterAlert (str): optional regex of routerAlert
- SpecificQueryResponseInterval (str): optional regex of specificQueryResponseInterval
- SpecificQueryTransmissionCount (str): optional regex of specificQueryTransmissionCount
- StartupQueryCount (str): optional regex of startupQueryCount
- SupportElection (str): optional regex of supportElection
- SupportOlderVersionHost (str): optional regex of supportOlderVersionHost
- SupportOlderVersionQuerier (str): optional regex of supportOlderVersionQuerier
- VersionType (str): optional regex of versionType
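An illustrative call (regex values are assumptions, not values taken from this module):
get_device_ids(PortNames='^Port 1$', VersionType='version2')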
Returns
-------
- list(int): A list of device ids that meets the regex criteria provided in the method parameters
Raises
------
- ServerError: The server has encountered an uncategorized error condition
"""
return self._get_ngpf_device_ids(locals())
| 52.714565 | 406 | 0.647338 | 6,049 | 53,927 | 5.701438 | 0.076046 | 0.034911 | 0.044885 | 0.028677 | 0.781286 | 0.765223 | 0.740953 | 0.7324 | 0.718946 | 0.679019 | 0 | 0.006479 | 0.221559 | 53,927 | 1,022 | 407 | 52.766145 | 0.815074 | 0.663156 | 0 | 0.374486 | 0 | 0 | 0.119948 | 0.038314 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.074074 | 0 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
d493300aa3a4503328d7687fbe4eb5c45378d9da | 25,263 | py | Python | tests/rule_based_profiler/bobby_user_workflow_fixture.py | roger-yu-ds/great_expectations | 563538e4babf934ca2eec409d6eef8c9b07da86d | [
"Apache-2.0"
] | null | null | null | tests/rule_based_profiler/bobby_user_workflow_fixture.py | roger-yu-ds/great_expectations | 563538e4babf934ca2eec409d6eef8c9b07da86d | [
"Apache-2.0"
] | null | null | null | tests/rule_based_profiler/bobby_user_workflow_fixture.py | roger-yu-ds/great_expectations | 563538e4babf934ca2eec409d6eef8c9b07da86d | [
"Apache-2.0"
] | null | null | null | from typing import List
import pytest
from great_expectations.core import ExpectationConfiguration, ExpectationSuite
# TODO: Move these fixtures to integration tests
from great_expectations.data_context.util import file_relative_path
@pytest.fixture
def bobby_columnar_table_multi_batch():
    """
    # TODO: <Alex>ALEX -- Add DocString</Alex>
    """
    verbose_profiler_config_file_path: str = file_relative_path(
        __file__, "bobby_user_workflow_verbose_profiler_config.yml"
    )
    verbose_profiler_config: str
    with open(verbose_profiler_config_file_path) as f:
        verbose_profiler_config = f.read()
    my_row_count_rule_expectation_configurations: List[ExpectationConfiguration] = [
ExpectationConfiguration(
**{
"kwargs": {"min_value": 6712, "max_value": 9288, "mostly": 1.0},
"expectation_type": "expect_table_row_count_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "table.row_count",
"domain_kwargs": {},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "VendorID",
"min_value": 1,
"max_value": 1,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "VendorID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "VendorID",
"min_value": 4,
"max_value": 4,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "VendorID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "passenger_count",
"min_value": -1,
"max_value": 2,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "passenger_count",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "passenger_count",
"min_value": 6,
"max_value": 6,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "passenger_count",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "trip_distance",
"min_value": 0.0,
"max_value": 0.0,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "trip_distance",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "trip_distance",
"min_value": 21.42,
"max_value": 74.05,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "trip_distance",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "RatecodeID",
"min_value": 1,
"max_value": 1,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "RatecodeID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "RatecodeID",
"min_value": 4,
"max_value": 7,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "RatecodeID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "PULocationID",
"min_value": 1,
"max_value": 1,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "PULocationID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "PULocationID",
"min_value": 265,
"max_value": 265,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "PULocationID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "DOLocationID",
"min_value": 1,
"max_value": 1,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "DOLocationID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "DOLocationID",
"min_value": 265,
"max_value": 265,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "DOLocationID",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "payment_type",
"min_value": 1,
"max_value": 1,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "payment_type",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "payment_type",
"min_value": 4,
"max_value": 4,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "payment_type",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "fare_amount",
"min_value": -76.43,
"max_value": 3.43,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "fare_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "fare_amount",
"min_value": -1982.49,
"max_value": 5201.49,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "fare_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "extra",
"min_value": -64.85,
"max_value": 27.14,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "extra",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "extra",
"min_value": 2.53,
"max_value": 8.97,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "extra",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "mta_tax",
"min_value": -0.5,
"max_value": -0.5,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "mta_tax",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "mta_tax",
"min_value": -28.66,
"max_value": 66.67,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "mta_tax",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "tip_amount",
"min_value": 0.0,
"max_value": 0.0,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "tip_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "tip_amount",
"min_value": 24.4,
"max_value": 97.3,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "tip_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "tolls_amount",
"min_value": 0.0,
"max_value": 0.0,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "tolls_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "tolls_amount",
"min_value": -351.05,
"max_value": 875.12,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "tolls_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "improvement_surcharge",
"min_value": -0.3,
"max_value": -0.3,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "improvement_surcharge",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "improvement_surcharge",
"min_value": 0.3,
"max_value": 0.3,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "improvement_surcharge",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "total_amount",
"min_value": -75.26,
"max_value": -1.84,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "total_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "total_amount",
"min_value": -1405.9,
"max_value": 4948.55,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "total_amount",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "congestion_surcharge",
"min_value": -4.47,
"max_value": 1.97,
"mostly": 1.0,
},
"expectation_type": "expect_column_min_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.min",
"domain_kwargs": {
"column": "congestion_surcharge",
},
},
"num_batches": 2,
},
},
},
),
ExpectationConfiguration(
**{
"kwargs": {
"column": "congestion_surcharge",
"min_value": -1.97,
"max_value": 4.47,
"mostly": 1.0,
},
"expectation_type": "expect_column_max_to_be_between",
"meta": {
"profiler_details": {
"metric_configuration": {
"metric_name": "column.max",
"domain_kwargs": {
"column": "congestion_surcharge",
},
},
"num_batches": 2,
},
},
},
),
]
expectation_configurations: List[ExpectationConfiguration] = []
expectation_configurations.extend(my_row_count_rule_expectation_configurations)
expectation_suite_name: str = "bobby_columnar_table_multi_batch"
expected_expectation_suite: ExpectationSuite = ExpectationSuite(
expectation_suite_name=expectation_suite_name
)
expectation_configuration: ExpectationConfiguration
for expectation_configuration in expectation_configurations:
expected_expectation_suite.add_expectation(expectation_configuration)
return {
"profiler_config": verbose_profiler_config,
"expected_expectation_suite_name": expectation_suite_name,
"expected_expectation_suite": expected_expectation_suite,
}
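# Consumption sketch (illustrative; the test below is an assumption, not part
# of this fixture module): pytest injects the fixture by name and receives the
# dict built above.
#
#     def test_bobby_profiler_output(bobby_columnar_table_multi_batch):
#         fixture = bobby_columnar_table_multi_batch
#         suite = fixture["expected_expectation_suite"]
#         assert suite.expectation_suite_name == (
#             fixture["expected_expectation_suite_name"]
#         )
#         # fixture["profiler_config"] holds the raw YAML read from
#         # bobby_user_workflow_verbose_profiler_config.yml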
| 35.1363 | 84 | 0.326367 | 1,376 | 25,263 | 5.614099 | 0.098837 | 0.093204 | 0.032104 | 0.076246 | 0.854628 | 0.83754 | 0.813333 | 0.813333 | 0.806472 | 0.785113 | 0 | 0.02323 | 0.570597 | 25,263 | 718 | 85 | 35.185237 | 0.688883 | 0.003563 | 0 | 0.642959 | 0 | 0 | 0.248957 | 0.047144 | 0 | 0 | 0 | 0.002786 | 0 | 1 | 0.001422 | false | 0.00569 | 0.00569 | 0 | 0.008535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d49caae6668b54c95eb4dfa0a91375959f2a3d7c | 95,523 | py | Python | python-package/lets_plot/plot/scale.py | IKrukov-HORIS/lets-plot | b772e4abcc4c715ef3c3a2e3db55abd4044f863f | [
"MIT"
] | null | null | null | python-package/lets_plot/plot/scale.py | IKrukov-HORIS/lets-plot | b772e4abcc4c715ef3c3a2e3db55abd4044f863f | [
"MIT"
] | null | null | null | python-package/lets_plot/plot/scale.py | IKrukov-HORIS/lets-plot | b772e4abcc4c715ef3c3a2e3db55abd4044f863f | [
"MIT"
] | null | null | null | #
# Copyright (c) 2019. JetBrains s.r.o.
# Use of this source code is governed by the MIT license that can be found in the LICENSE file.
#
from .core import FeatureSpec
from .util import as_boolean
#
# Scales
#
__all__ = ['scale_shape',
'scale_x_discrete', 'scale_y_discrete',
'scale_x_discrete_reversed', 'scale_y_discrete_reversed',
'scale_x_continuous', 'scale_y_continuous',
'scale_x_log10', 'scale_y_log10',
'scale_x_reverse', 'scale_y_reverse',
'scale_color_manual', 'scale_fill_manual', 'scale_size_manual',
'scale_shape_manual', 'scale_linetype_manual', 'scale_alpha_manual',
'scale_fill_gradient', 'scale_fill_continuous', 'scale_color_gradient', 'scale_color_continuous',
'scale_fill_gradient2', 'scale_color_gradient2',
'scale_fill_hue', 'scale_fill_discrete', 'scale_color_hue', 'scale_color_discrete',
'scale_fill_grey', 'scale_color_grey',
'scale_fill_brewer', 'scale_color_brewer',
'scale_x_datetime', 'scale_y_datetime', 'scale_x_time', 'scale_y_time',
'scale_alpha', 'scale_size', 'scale_size_area'
]
def scale_shape(solid=True, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, format=None):
"""
Scale for shapes.
Parameters
----------
solid : bool, default=True
If True (default), shapes are solid; if False, shapes are hollow.
name : str
The name of the scale - used as the axis label or the legend title.
breaks : list
A numeric vector of positions of ticks.
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by `guide_legend()` function or 'none' to hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Scale for shapes. A continuous variable cannot be mapped to shape.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
x = np.arange(10)
c = np.where(x < 5, 'a', 'b')
ggplot({'x': x, 'y': x, 'c': c}, aes('x', 'y')) + \\
geom_point(aes(shape='c'), size=5) + \\
scale_shape(solid=False, name='shapes')
"""
solid = as_boolean(solid, default=True)
return _scale('shape',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
solid=solid)
#
# Continuous Scales
#
def scale_x_continuous(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, trans=None, format=None):
"""
Continuous position scale x.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 7-8
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(42)
x = np.random.randint(-10, 10, size=100)
ggplot({'x': x}, aes(x='x')) + geom_bar(stat='bin', bins=8) + \\
scale_x_continuous(name='observations', breaks=[-9, -3, 3, 9], \\
limits=[-8, 11], expand=[.2], format='.1f')
"""
return _scale('x',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=trans,
format=format)
def scale_y_continuous(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, trans=None, format=None):
"""
Continuous position scale y.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 7-8
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(42)
x = np.random.randint(-10, 10, size=1000)
ggplot({'x': x}, aes(x='x')) + geom_bar(stat='bin', bins=4) + \\
scale_y_continuous(name='hundreds', breaks=[100, 200, 300, 400], \\
labels=['one', 'two', 'three', 'four'])
"""
return _scale('y',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=trans,
format=format)
def scale_x_log10(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Continuous position scale x where trans='log10'.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions of ticks.
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(42)
x = np.power(10, np.random.randint(9, size=100))
ggplot({'x': x}, aes(x='x')) + geom_bar() + scale_x_log10()
"""
return scale_x_continuous(name, breaks, labels, limits, expand, na_value, 'log10', format)
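# Equivalence sketch: as the return statement above shows, scale_x_log10() is
# shorthand for scale_x_continuous(trans='log10'); both lines below add the
# same scale specification.
#
#     p = ggplot({'x': [1, 10, 100, 1000]}, aes('x', 'x')) + geom_point()
#     p + scale_x_log10()
#     p + scale_x_continuous(trans='log10')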
def scale_y_log10(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Continuous position scales y where trans='log10'.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(42)
x = np.random.poisson(size=100)
ggplot({'x': x}, aes(x='x')) + geom_histogram() + scale_y_log10()
"""
return scale_y_continuous(name, breaks, labels, limits, expand, na_value, 'log10', format)
def scale_x_reverse(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Continuous position scale x where trans='reverse'.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point() + scale_x_reverse()
"""
return scale_x_continuous(name, breaks, labels, limits, expand, na_value, 'reverse', format)
def scale_y_reverse(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Continuous position scale y where trans='reverse'.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point() + scale_y_reverse(limits=[2, 6])
"""
return scale_y_continuous(name, breaks, labels, limits, expand, na_value, 'reverse', format)
#
# Discrete Scales
#
def scale_x_discrete(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, reverse=None, format=None):
"""
Discrete position scale x.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale and the default order of their display in guides.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0, additive = 0.6.
na_value
Missing values will be replaced with this value.
reverse : bool
When True the scale is reversed.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 7
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(43)
scores = {'rating': np.random.randint(3, 6, size=10)}
ggplot(scores, aes(x='rating')) + geom_bar() + \\
scale_x_discrete(name='rating', format='.1f')
"""
reverse = as_boolean(reverse, default=False)
return _scale('x',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
format=format,
#
discrete=True, reverse=reverse)
def scale_x_discrete_reversed(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Reversed discrete position scale x.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale and the default order of their display in guides.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0, additive = 0.6.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
from lets_plot import *
LetsPlot.setup_html()
data = {
'time': ['Lunch', 'Dinner', 'Night'],
'bill': [15.5, 18.13, 30],
}
ggplot(data, aes('time', 'bill')) + geom_bar(stat='identity') + \\
scale_x_discrete_reversed()
"""
return scale_x_discrete(name, breaks, labels, limits, expand, na_value, reverse=True, format=format)
def scale_y_discrete(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, reverse=None, format=None):
"""
Discrete position scale y.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale and the default order of their display in guides.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0, additive = 0.6.
na_value
Missing values will be replaced with this value.
reverse : bool
When True the scale is reversed.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
from lets_plot import *
LetsPlot.setup_html()
data = {
'time': ['Breakfast', 'Lunch', 'Dinner', 'Night'],
'bill': [3.25, 15.5, 18.3, 30],
}
ggplot(data, aes('bill', 'time')) + geom_point(size=5) + \\
scale_y_discrete(limits=['Lunch', 'Dinner', 'Night'])
"""
reverse = as_boolean(reverse, default=False)
return _scale('y',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
format=format,
#
discrete=True, reverse=reverse)
def scale_y_discrete_reversed(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Reversed discrete position scale y.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale and the default order of their display in guides.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
The vector size == 1 => only multiplicative expand (and additive expand by default).
Defaults: multiplicative = 0, additive = 0.6.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
from lets_plot import *
LetsPlot.setup_html()
data = {
'time': ['Breakfast', 'Lunch', 'Dinner', 'Night'],
'bill': [3.25, 15.5, 18.3, 30],
}
ggplot(data, aes('bill', 'time')) + geom_line() + \\
scale_y_discrete_reversed()
"""
return scale_y_discrete(name, breaks, labels, limits, expand, na_value, reverse=True, format=format)
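# Equivalence sketch: per the return statements above,
# scale_y_discrete_reversed() is shorthand for scale_y_discrete(reverse=True),
# and scale_x_discrete_reversed() likewise wraps scale_x_discrete.
#
#     data = {'time': ['Lunch', 'Dinner', 'Night'], 'bill': [15.5, 18.13, 30]}
#     p = ggplot(data, aes('bill', 'time')) + geom_point(size=5)
#     p + scale_y_discrete_reversed()
#     p + scale_y_discrete(reverse=True)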
#
# Manual Scales
#
def scale_color_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None,
format=None):
"""
Create your own discrete scale for color aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own color scale. Values are strings, encoding colors.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6-7
from lets_plot import *
LetsPlot.setup_html()
x = list(range(9))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(color='x'), shape=1, size=5) + \\
scale_color_manual(values=['red', 'green', 'blue'], \\
name='color', labels=['red', 'green', 'blue'])
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
def scale_fill_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, format=None):
"""
Create your own discrete scale for fill aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own color scale for fill aesthetic. Values are strings, encoding filling colors.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6-7
from lets_plot import *
LetsPlot.setup_html()
x = list(range(9))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(fill='x'), shape=21, size=5, color='black') + \\
scale_fill_manual(values=['green', 'yellow', 'red'], \\
name='color', labels=['green', 'yellow', 'red'])
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
def scale_size_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, format=None):
"""
Create your own discrete scale for size aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by `guide_legend()` function or 'none' to hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own discrete scale for size aesthetic. Values are numbers, defining sizes.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
x = np.arange(10)
c = np.where(x < 5, 'a', 'b')
ggplot({'x': x, 'y': x, 'c': c}, aes('x', 'y')) + \\
geom_point(aes(size='c'), shape=1) + \\
scale_size_manual(name='size', values=[5, 8])
"""
return _scale('size',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
def scale_shape_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None,
format=None):
"""
Create your own discrete scale for shape aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by `guide_legend()` function or 'none' to hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own discrete scale for shape aesthetic. Values are numbers, encoding shapes.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 8
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
x = np.arange(10)
c = np.where(x < 5, 'a', 'b')
ggplot({'x': x, 'y': x, 'c': c}, aes('x', 'y')) + \\
geom_point(aes(shape='c'), size=5) + \\
scale_shape_manual(values=[12, 13], name='shapes', labels=['12', '13'])
"""
return _scale('shape',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
def scale_linetype_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None,
format=None):
"""
Create your own discrete scale for line type aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by `guide_legend()` function or 'none' to hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own discrete scale for line type aesthetic.
Values are strings or numbers, encoding linetypes.
Available codes and names: 0 = 'blank', 1 = 'solid', 2 = 'dashed', 3 = 'dotted', 4 = 'dotdash',
5 = 'longdash', 6 = 'twodash'.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5-6
from lets_plot import *
LetsPlot.setup_html()
x = [-.3, -.1, .1, .3]
ggplot() + geom_hline(aes(yintercept=x, linetype=x), size=1) + \\
scale_linetype_manual(values=[3, 4, 5, 6],
labels=['dotted', 'dotdash', 'longdash', 'twodash'])
"""
return _scale('linetype',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
def scale_alpha_manual(values, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None,
format=None):
"""
Create your own discrete scale for alpha (transparency) aesthetic.
Parameters
----------
values : list of str
A set of aesthetic values to map data values to.
If this is a named vector, then the values will be matched based on the names.
If unnamed, values will be matched in order (usually alphabetical)
with the limits of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by `guide_legend()` function or 'none' to hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Create your own discrete scale for alpha (transparency) aesthetic.
Values should be taken from [0, 1] interval.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(alpha='x'), shape=21, size=5) + \\
scale_alpha_manual(values=[.2, .5, .9])
"""
return _scale('alpha',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=None,
format=format,
#
values=values)
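# Ordering sketch: per the docstrings above, unnamed `values` are matched in
# order (usually alphabetical) with the scale's limits, and `limits` can pin
# that order explicitly. The data below is illustrative.
#
#     data = {'g': ['b', 'a', 'c'], 'y': [1, 2, 3]}
#     (ggplot(data, aes('g', 'y')) + geom_point(aes(color='g'), size=5)
#      + scale_color_manual(values=['red', 'green', 'blue'],
#                           limits=['a', 'b', 'c']))
#     # 'a' -> red, 'b' -> green, 'c' -> blue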
#
# Gradient (continuous) Color Scales
#
def scale_fill_gradient(low=None, high=None, name=None, breaks=None, labels=None,
limits=None, na_value=None, guide=None, trans=None, format=None):
"""
Define smooth color gradient between two colors for fill aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
high : str
Color for high end of gradient.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define smooth gradient between two colors (defined by low and high) for filling color.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x')) + \\
scale_fill_gradient(low='#1a9641', high='#d7191c')
"""
return scale_fill_continuous(low, high, name, breaks, labels, limits, na_value, guide, trans, format)
def scale_fill_continuous(low=None, high=None, name=None, breaks=None, labels=None,
limits=None, na_value=None, guide=None, trans=None, format=None):
"""
Define smooth color gradient between two colors for fill aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
high : str
Color for high end of gradient.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define smooth gradient between two colors (defined by low and high) for filling color.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x')) + \\
scale_fill_continuous(low='#1a9641', high='#d7191c')
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
low=low, high=high,
scale_mapper_kind='color_gradient')
def scale_color_gradient(low=None, high=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Define smooth color gradient between two colors for color aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
high : str
Color for high end of gradient.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define smooth gradient between two colors (defined by low and high) for color aesthetic.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + \\
geom_tile(aes(x='x', color='x'), size=1.5, fill='white', width=.6, height=.6) + \\
scale_color_gradient(low='#1a9641', high='#d7191c', guide='legend')
"""
return scale_color_continuous(low, high, name, breaks, labels, limits, na_value, guide, trans, format)
def scale_color_continuous(low=None, high=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Define smooth color gradient between two colors for color aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
high : str
Color for high end of gradient.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(color='x'), shape=1, size=5) + \\
scale_color_continuous(low='#1a9641', high='#d7191c')
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
low=low, high=high,
scale_mapper_kind='color_gradient')
def scale_fill_gradient2(low=None, mid=None, high=None, midpoint=0, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Define diverging color gradient for fill aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
mid : str
Color for mid point.
high : str
Color for high end of gradient.
midpoint : float, default=0
The midpoint (in data value) of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define diverging color gradient for filling color. Default mid point is set to white color.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x')) + \\
scale_fill_gradient2(low='#2b83ba', mid='#ffffbf', high='#d7191c')
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
low=low, mid=mid, high=high,
midpoint=midpoint, scale_mapper_kind='color_gradient2')
def scale_color_gradient2(low=None, mid=None, high=None, midpoint=0, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Define diverging color gradient for color aesthetic.
Parameters
----------
low : str
Color for low end of gradient.
mid : str
Color for mid point.
high : str
Color for high end of gradient.
midpoint : float, default=0
The midpoint (in data value) of the scale.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define diverging color gradient for color aesthetic. Default mid point is set to white color.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + \\
geom_tile(aes(x='x', color='x'), size=1.5, fill='white', width=.6, height=.6) + \\
scale_color_gradient2(low='#2b83ba', mid='#ffffbf', high='#d7191c')
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
low=low, mid=mid, high=high,
midpoint=midpoint, scale_mapper_kind='color_gradient2')
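# Midpoint sketch (assumption: `midpoint` is given in data units, per the
# midpoint=0 defaults in the signatures above): values below the midpoint
# shade toward `low`, values above it toward `high`.
#
#     data = {'x': list(range(-16, 16))}
#     (ggplot(data) + geom_tile(aes(x='x', fill='x'))
#      + scale_fill_gradient2(low='#2b83ba', mid='#ffffbf', high='#d7191c',
#                             midpoint=4))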
def scale_fill_hue(h=None, c=None, l=None, h_start=None, direction=None, name=None, breaks=None, labels=None,
limits=None, na_value=None, guide=None, trans=None, format=None):
"""
Qualitative color scale with evenly spaced hues for fill aesthetic.
Parameters
----------
h : list
Range of hues (two numerics), in [0, 360].
c : int
Chroma (intensity of color); maximum value varies depending on the combination of hue and luminance.
l : int
Luminance (lightness), in [0, 100].
h_start : int
Shift of the hue start point.
direction : {1, -1}, default=1
Direction to travel around the color wheel, 1=clockwise, -1=counter-clockwise.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define qualitative color scale with evenly spaced hues for filling color aesthetic.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x')) + \\
scale_fill_hue(c=50, l=80, h=[0, 50])
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
h=h, c=c, l=l, h_start=h_start,
direction=direction, scale_mapper_kind='color_hue')
def scale_color_hue(h=None, c=None, l=None, h_start=None, direction=None, name=None, breaks=None, labels=None,
limits=None, na_value=None, guide=None, trans=None, format=None):
"""
Qualitative color scale with evenly spaced hues for color aesthetic.
Parameters
----------
h : list
Range of hues (two numerics), in [0, 360].
c : int
Chroma (intensity of color); the maximum value varies depending on the combination of hue and luminance.
l : int
Luminance (lightness), in [0, 100].
direction : {1, -1}, default=1
Direction to travel around the color wheel, 1=clockwise, -1=counter-clockwise.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Define qualitative color scale with evenly spaced hues for color aesthetic.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + \\
geom_tile(aes(x='x', color='x'), size=1.5, fill='white', width=.6, height=.6) + \\
scale_color_hue(c=20, l=90)
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
h=h, c=c, l=l, h_start=h_start,
direction=direction, scale_mapper_kind='color_hue')
def scale_fill_discrete(direction=None,
name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, format=None):
"""
Qualitative colors.
Defaults to the Brewer 'Set2' palette (or 'Set3' if the categories count > 8).
Parameters
----------
direction : {-1, 1}, default=1
Sets the order of colors in the scale. If 1, colors are as output by brewer palette.
If -1, the order of colors is reversed.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines a qualitative color scale for the filling color aesthetic, using the Brewer 'Set2' palette ('Set3' if the categories count > 8).
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 10
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(100)
n = 50
x = np.random.rand(n)
y = np.random.rand(n)
z = np.random.rand(n)
ggplot() + geom_point(aes(x, y, fill=z), shape=21, size=4, color='gray') + \\
scale_fill_discrete(guide='none')
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
format=format,
#
direction=direction, discrete=True)
def scale_color_discrete(direction=None,
name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, format=None):
"""
Qualitative colors.
Defaults to the Brewer 'Set2' palette (or 'Set3' if the categories count > 8).
Parameters
----------
direction : {1, -1}, default=1
Sets the order of colors in the scale. If 1, colors are as output by brewer palette.
If -1, the order of colors is reversed.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines a qualitative color scale for the color aesthetic, using the Brewer 'Set2' palette ('Set3' if the categories count > 8).
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 10
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(100)
n = 50
x = np.random.rand(n)
y = np.random.rand(n)
z = np.random.rand(n)
ggplot() + geom_point(aes(x, y, color=z), size=4) + \\
scale_color_discrete(guide='none')
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
format=format,
#
direction=direction, discrete=True)
def scale_fill_grey(start=None, end=None, direction=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Sequential grey color scale for fill aesthetic.
The palette is computed using HSV (hue, saturation, value) color model.
Parameters
----------
start : float
Gray value at low end of palette in range [0, 1].
end : float
Gray value at high end of palette in range [0, 1].
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines sequential grey color scale for filling color aesthetic.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x')) + \\
scale_fill_grey(start=.9, end=.1)
"""
start, end = _greyscale_check_parameters(start, end)
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
start=start, end=end,
direction=direction,
scale_mapper_kind='color_grey')
def scale_color_grey(start=None, end=None, direction=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Sequential grey color scale for color aesthetic.
The palette is computed using HSV (hue, saturation, value) color model.
Parameters
----------
start : float
Gray value at low end of palette in range [0, 1].
end : float
Gray value at high end of palette in range [0, 1].
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines sequential grey color scale for color aesthetic.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(color='x'), shape=15, size=5) + \\
scale_color_grey(start=.7, end=.2)
"""
start, end = _greyscale_check_parameters(start, end)
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
start=start, end=end,
direction=direction,
scale_mapper_kind='color_grey')
def _greyscale_check_parameters(start=None, end=None):
# Up to v.1.4.2 start/end values were in range [0,100]
# Since v.1.4.3 start/end values are in range [0,1]
if start is not None and not (0 <= start <= 1):
start = start / 100
print("WARN: Value of 'start' has been scaled down to range: [0,1] : {}".format(start))
if end is not None and not (0 <= end <= 1):
end = end / 100
print("WARN: Value of 'end' has been scaled down to range: [0,1] : {}".format(end))
if start is not None and not (0 <= start <= 1):
raise ValueError("Value of 'start' must be in range: [0,1] : {}".format(start))
if end is not None and not (0 <= end <= 1):
raise ValueError("Value of 'end' must be in range: [0,1] : {}".format(end))
return (start, end)
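# Illustrative sketch of the rescaling above (the values shown are hypothetical):
# _greyscale_check_parameters(start=90, end=10) # legacy [0,100] inputs
# -> prints two WARN messages and returns (0.9, 0.1)
# _greyscale_check_parameters(start=.9, end=.1) # inputs already in [0,1]
# -> returns (0.9, 0.1) unchanged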
def scale_fill_brewer(type=None, palette=None, direction=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Sequential, diverging and qualitative color scales from colorbrewer2.org for fill aesthetic.
Color schemes provided are particularly suited to display discrete values (levels of factors) on a map.
Parameters
----------
type : {'seq', 'div', 'qual'}
One of seq (sequential), div (diverging) or qual (qualitative) types of scales.
palette : str or int
If a string, will use that named palette. If a number, will index
into the list of palettes of appropriate type.
direction : {1, -1}, default=1
Sets the order of colors in the scale. If 1, colors are as output by brewer palette.
If -1, the order of colors is reversed.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines sequential, diverging and qualitative color scales from colorbrewer2.org for filling color aesthetic.
ColorBrewer provides sequential, diverging and qualitative color schemes which are particularly suited and
tested to display discrete values (levels of a factor) on a map. It is possible to smoothly interpolate 6 colors
from any palette to a continuous scale (6 colors per palette give nice gradients; more colors result in more
saturated colors which do not look as good).
However, the original color schemes (particularly the qualitative ones) were not intended for this and the
perceptual result is left to the appreciation of the user. See colorbrewer2.org for more information.
Palettes:
- Diverging : BrBG, PiYG, PRGn, PuOr, RdBu, RdGy, RdYlBu, RdYlGn, Spectral.
- Qualitative : Accent, Dark2, Paired, Pastel1, Pastel2, Set1, Set2, Set3.
- Sequential : Blues, BuGn, BuPu, GnBu, Greens, Greys, Oranges, OrRd, PuBu, PuBuGn, PuRd, Purples, RdPu, Reds, YlGn, YlGnBu, YlOrBr, YlOrRd.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 5
from lets_plot import *
LetsPlot.setup_html()
data = {'x': list(range(-16, 16))}
ggplot(data) + geom_tile(aes(x='x', fill='x'), color='white') + \\
scale_fill_brewer(type='seq', palette='YlGnBu')
"""
return _scale('fill',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
type=type, palette=palette,
direction=direction,
scale_mapper_kind='color_brewer')
def scale_color_brewer(type=None, palette=None, direction=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Sequential, diverging and qualitative color scales from colorbrewer2.org for color aesthetic.
Color schemes provided are particularly suited to display discrete values (levels of factors) on a map.
Parameters
----------
type : {'seq', 'div', 'qual'}
One of seq (sequential), div (diverging) or qual (qualitative) types of scales.
palette : str or int
If a string, will use that named palette. If a number, will index
into the list of palettes of appropriate type.
direction : {1, -1}, default=1
Sets the order of colors in the scale. If 1, colors are as output by brewer palette.
If -1, the order of colors is reversed.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
Continuous scale: a numeric vector of length two providing limits of the scale.
Discrete scale: a vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
Guide to use for this scale. It can either be a string ('colorbar', 'legend')
or a call to a guide function (`guide_colorbar()`, `guide_legend()`)
specifying additional arguments. 'none' will hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
Defines sequential, diverging and qualitative color scales from colorbrewer2.org for color aesthetic.
ColorBrewer provides sequential, diverging and qualitative color schemes which are particularly suited and
tested to display discrete values (levels of a factor) on a map. It is possible to smoothly interpolate 6 colors
from any palette to a continuous scale (6 colors per palette give nice gradients; more colors result in more
saturated colors which do not look as good).
However, the original color schemes (particularly the qualitative ones) were not intended for this and
the perceptual result is left to the appreciation of the user. See colorbrewer2.org for more information.
Palettes:
- Diverging : BrBG, PiYG, PRGn, PuOr, RdBu, RdGy, RdYlBu, RdYlGn, Spectral.
- Qualitative : Accent, Dark2, Paired, Pastel1, Pastel2, Set1, Set2, Set3.
- Sequential : Blues, BuGn, BuPu, GnBu, Greens, Greys, Oranges, OrRd, PuBu, PuBuGn, PuRd, Purples, RdPu, Reds, YlGn, YlGnBu, YlOrBr, YlOrRd.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 6
from lets_plot import *
LetsPlot.setup_html()
x = list(range(10))
ggplot({'x': x, 'y': x}, aes('x', 'y')) + \\
geom_point(aes(color='x'), shape=13, size=5) + \\
scale_color_brewer(type='qual', palette='Dark2', direction=-1)
"""
return _scale('color',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
type=type,
palette=palette,
direction=direction,
scale_mapper_kind='color_brewer')
#
# Date-time
#
def scale_x_datetime(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Position scale x for date/time data.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
If the vector has size 1, only the multiplicative expand is set (the additive expand uses its default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'%d.%m.%y' -> '06.08.19'
'%B %Y' -> 'August 2019'
'%a, %e %b %Y %H:%M:%S' -> 'Tue, 6 Aug 2019 04:46:35'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 12
import datetime as dt
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
n = 31
np.random.seed(42)
d = [dt.datetime(2021, 1, 1) + dt.timedelta(days=d)
for d in range(n)]
t = np.random.normal(loc=-5, scale=6, size=n)
ggplot({'d': d, 't': t}, aes('d', 't')) + \\
geom_histogram(aes(fill='t'), stat='identity', color='black') + \\
scale_x_datetime() + \\
scale_fill_gradient2(low='#2c7bb6', high='#d7191c')
"""
return _scale('x',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
format=format,
#
datetime=True)
def scale_y_datetime(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, format=None):
"""
Position scale y for date/time data.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list of two numbers
A numeric vector of length two giving multiplicative and additive expansion constants.
If the vector has size 1, only the multiplicative expand is set (the additive expand uses its default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'%d.%m.%y' -> '06.08.19'
'%B %Y' -> 'August 2019'
'%a, %e %b %Y %H:%M:%S' -> 'Tue, 6 Aug 2019 04:46:35'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 13
import datetime as dt
from lets_plot import *
LetsPlot.setup_html()
n = 12
rcount = lambda m: 1 if m < 2 else rcount(m - 1) + rcount(m - 2)
data = {
'date': [dt.datetime(2020, m, 1) for m in range(1, n + 1)],
'rabbits count': [rcount(m) for m in range(1, n + 1)],
}
ggplot(data) + \\
geom_segment(aes(x=[0] * n, y='date', xend='rabbits count', yend='date'), size=3, \\
tooltips=layer_tooltips().line('@|@{rabbits count}')) + \\
scale_y_datetime(format='%b') + \\
xlab('rabbits count')
"""
return _scale('y',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
format=format,
#
datetime=True)
def scale_x_time(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None):
"""
Position scale x for data representing "time delta" values expressed in milliseconds.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
If the vector has size 1, only the multiplicative expand is set (the additive expand uses its default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 12
import datetime as dt
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
n = 31
np.random.seed(42)
data = {
'time': [dt.timedelta(days=v).total_seconds() * 1000 for v in range(n)],
'value': np.random.normal(loc=-5, scale=6, size=n)
}
ggplot(data) + \\
geom_line(aes('time', 'value')) + \\
scale_x_time()
"""
return _scale('x',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
#
time=True)
def scale_y_time(name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None):
"""
Position scale y for data representing "time delta" values expressed in milliseconds.
Parameters
----------
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A numeric vector of length two providing limits of the scale.
expand : list
A numeric vector of length two giving multiplicative and additive expansion constants.
If the vector has size 1, only the multiplicative expand is set (the additive expand uses its default).
Defaults: multiplicative = 0.05, additive = 0.
na_value
Missing values will be replaced with this value.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 12
import datetime as dt
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
n = 31
np.random.seed(42)
data = {
'time': [dt.timedelta(days=v).total_seconds() * 1000 for v in range(n)],
'value': np.random.normal(loc=-5, scale=6, size=n)
}
ggplot(data) + \\
geom_line(aes('value', 'time')) + \\
scale_y_time()
"""
return _scale('y',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=expand,
na_value=na_value,
guide=None,
trans=None,
#
time=True)
#
# Range Scale (alpha and size)
#
def scale_alpha(range=None, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, trans=None,
format=None):
"""
Scale for alpha.
Parameters
----------
range : list
The range of the mapped aesthetics result.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by the `guide_legend()` function, or 'none' to hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 9
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(100)
x = np.random.normal(0, 1, 1000)
y = np.random.normal(0, 1, 1000)
ggplot({'x': x, 'y': y}, aes('x', 'y')) + \\
geom_point(aes(alpha='..density..'), stat='density2d', contour=False, n=30) + \\
scale_alpha(range=[.01, .99])
"""
return _scale('alpha',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
range=range)
def scale_size(range=None, name=None, breaks=None, labels=None, limits=None, na_value=None, guide=None, trans=None,
format=None):
"""
Scale for size.
Parameters
----------
range : list
The range of the mapped aesthetics result.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by the `guide_legend()` function, or 'none' to hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 10
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(100)
n = 50
x = np.random.rand(n)
y = np.random.rand(n)
area = np.power(np.random.randint(30, size=n), 2)
ggplot() + geom_point(aes(x, y, size=area), alpha=0.7) + \\
scale_size(range=[3, 13])
"""
return _scale('size',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
range=range)
def scale_size_area(max_size=None, name=None, breaks=None, labels=None, limits=None,
na_value=None, guide=None, trans=None, format=None):
"""
Continuous scale for size that maps 0 to 0.
Parameters
----------
max_size : float
The max size that is mapped to.
name : str
The name of the scale - used as the axis label or the legend title.
If None, the default, the name of the scale
is taken from the first mapping used for that aesthetic.
breaks : list
A numeric vector of positions (of ticks).
labels : list of str
A vector of labels (on ticks).
limits : list
A vector specifying the data range for the scale
and the default order of their display in guides.
na_value
Missing values will be replaced with this value.
guide
A result returned by the `guide_legend()` function, or 'none' to hide the guide.
trans : {'identity', 'log10', 'sqrt', 'reverse'}
Name of built-in transformation.
format : str
Defines the format for labels on the scale. The syntax resembles Python's:
'.2f' -> '12.45'
'Num {}' -> 'Num 12.456789'
'TTL: {.2f}$' -> 'TTL: 12.45$'
For more info see https://lets-plot.org/pages/formats.html.
Returns
-------
`FeatureSpec`
Scale specification.
Notes
-----
This method maps 0 data to 0 size. Useful in some stats such as count.
Examples
--------
.. jupyter-execute::
:linenos:
:emphasize-lines: 10
import numpy as np
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(100)
n = 50
x = np.random.rand(n)
y = np.random.rand(n)
area = np.power(np.random.uniform(0, 30, size=n), 2)
ggplot() + geom_point(aes(x, y, size=area), alpha=0.7) + \\
scale_size_area(max_size=15)
"""
return _scale('size',
name=name,
breaks=breaks,
labels=labels,
limits=limits,
expand=None,
na_value=na_value,
guide=guide,
trans=trans,
format=format,
#
max_size=max_size,
scale_mapper_kind='size_area')
def _scale(aesthetic, name=None, breaks=None, labels=None, limits=None, expand=None, na_value=None, guide=None,
trans=None, format=None, **other):
"""
Create a scale (discrete or continuous)
:param aesthetic
The name of the aesthetic that this scale works with
:param name
The name of the scale - used as the axis label or the legend title
:param breaks
A numeric vector of positions (of ticks)
:param labels
A vector of labels (on ticks)
:param limits
A numeric vector of length two providing limits of the scale.
:param expand
A numeric vector of length two giving multiplicative and additive expansion constants.
:param na_value
Value to use for missing values
:param guide
Type of legend. Use 'colorbar' for continuous color bar, or 'legend' for discrete values.
:param trans
Name of built-in transformation.
:param format
A string of the format for labels on the scale. Supported types are number and date/time.
:return:
"""
# flatten the 'other' sub-dictionary
args = locals().copy()
args.pop('other')
return FeatureSpec('scale', **args, **other)
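# Illustrative sketch of the locals() trick above (argument values are
# hypothetical): _scale('fill', name='n', low='red') first copies
# args = {'aesthetic': 'fill', 'name': 'n', 'breaks': None, ..., 'other': {'low': 'red'}},
# then pops 'other' and returns FeatureSpec('scale', **args, low='red'),
# i.e. a single flat kwargs dict with the scale args and mapper args merged.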
| 34.697784 | 144 | 0.578007 | 12,088 | 95,523 | 4.514808 | 0.04095 | 0.026532 | 0.020522 | 0.024627 | 0.928337 | 0.923317 | 0.921997 | 0.916867 | 0.913972 | 0.908218 | 0 | 0.020938 | 0.319515 | 95,523 | 2,752 | 145 | 34.710392 | 0.818652 | 0.699224 | 0 | 0.809843 | 0 | 0.004474 | 0.054543 | 0.00651 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089485 | false | 0 | 0.004474 | 0 | 0.183445 | 0.004474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d4f391db94c60427f80b22b768e154dde2d03eee | 68,436 | py | Python | fastpivot/test_pivot.py | SethEBaldwin/fastpivot | 950ca50105346180cb4c42aacdd9418473860aaf | ["MIT"] | null | null | null | fastpivot/test_pivot.py | SethEBaldwin/fastpivot | 950ca50105346180cb4c42aacdd9418473860aaf | ["MIT"] | null | null | null | fastpivot/test_pivot.py | SethEBaldwin/fastpivot | 950ca50105346180cb4c42aacdd9418473860aaf | ["MIT"] | null | null | null |
import pandas as pd
import numpy as np
import time
import datetime
import fastpivot.pivot as pivot
# NOTE on speed (see the bench_sum sketch below):
# this pivot tends to be faster than pandas when N_ROWS, N_COLS and N_IDX are large
# this pivot tends to be slightly faster than pandas with a single idx and col when N_COLS and N_IDX are small
# this pivot tends to be slower than pandas with multiple idx or col when N_ROWS is large and N_COLS, N_IDX are small
# N_ROWS = 4
# N_COLS = 2
# N_IDX = 2
# N_ROWS = 4
# N_COLS = 1
# N_IDX = 1
# N_ROWS = 1000000
# N_COLS = 100
# N_IDX = 10000
# slower here for single col, idx. faster for double
# N_ROWS = 1000000
# N_COLS = 500 # note: pandas can't handle 10000 or even 1000... but this pivot can
# N_IDX = 100
# N_ROWS = 1000000
# N_COLS = 10
# N_IDX = 10
# N_ROWS = 10000
# N_COLS = 100
# N_IDX = 100
# These values cause memory error (out of memory)
# N_ROWS = 1000000
# N_COLS = 1000
# N_IDX = 10000
# good speed ups for these parameters
N_ROWS = 100000
N_COLS = 1000
N_IDX = 1000
# N_ROWS = 2000000
# N_COLS = 1000
# N_IDX = 50000
# N_ROWS = 1000000
# N_COLS = 2000
# N_IDX = 50000
NAME_IDX = 'to_be_idx'
NAME_IDX2 = 'to_be_idx2'
NAME_COL = 'to_be_col'
NAME_COL2 = 'to_be_col2'
NAME_VALUE = 'value'
NAME_VALUE2 = 'value2'
print()
print('n_rows: {}'.format(N_ROWS))
print('n_columns: {}'.format(N_COLS))
print('n_idx: {}'.format(N_IDX))
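# Minimal benchmark sketch for the speed notes above. 'bench_sum' is a
# hypothetical helper (not part of the test suite or fastpivot's API); it
# times fastpivot against pandas on the same sum aggregation and returns
# both wall-clock durations in seconds.
def bench_sum(df):
    tick = time.perf_counter()
    pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
    t_fast = time.perf_counter() - tick
    tick = time.perf_counter()
    df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
    t_pandas = time.perf_counter() - tick
    return t_fast, t_pandas
# Usage (hypothetical): t_fast, t_pandas = bench_sum(gen_df())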
def gen_df():
col1 = ['idx{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = ['col{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col3])
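# NOTE: np.transpose on mixed-type columns yields an all-string array; the astype below restores the numeric dtype.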
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# print(df)
return df
def gen_df_int():
col1 = ['idx{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = ['col{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col3 = [x for x in np.random.randint(-10, 10, size=N_ROWS)]
data = np.transpose([col1, col2, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_VALUE] = df[NAME_VALUE].astype(np.int64)
# print(df)
return df
def gen_df_multiple_values():
col1 = ['idx{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = ['col{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
col4 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col3, col4])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE, NAME_VALUE2], index=range(len(data)))
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
df[NAME_VALUE2] = df[NAME_VALUE2].astype(np.float64)
# print(df)
return df
def gen_df_multiple_columns():
col1 = ['idx{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = ['col_x{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col4 = ['col_y{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col4, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_COL2, NAME_VALUE], index=range(len(data)))
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# print(df)
return df
def gen_df_multiple_index():
col1 = ['idx_x{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = ['ind_x{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col4 = ['col_y{}'.format(x) for x in np.random.randint(0, N_COLS, size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col4, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_IDX2, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# print(df)
return df
def test_pivot_median_int():
print()
print('test pivot median int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
def test_pivot_nan_index_dropnacolidx():
print()
print('test pivot nan index dropna_colidx=False')
df = gen_df()
df[NAME_IDX][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum', dropna_idxcol=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
def test_pivot_multiple_values_string_nunique_nan():
print()
print('test pivot multiple values string nunique_nan')
df = gen_df_multiple_columns()
df[NAME_COL2][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_COL2, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=NAME_COL, values=NAME_COL2, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
#assert is_equal
#assert is_equal_pd
def test_pivot_nan_column_dropnacolidx():
print()
print('test pivot nan column dropna_colidx=False')
df = gen_df()
df[NAME_COL][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum', dropna_idxcol=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
def test_pivot_nan_column_nodrop():
print()
print('test pivot nan column nodrop')
df = gen_df()
df[NAME_COL][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_datetime():
# NOTE: pandas fills missing sums with 0.0 automatically here, which is surprising behavior
print()
print('test pivot datetime')
col1 = [x for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in np.random.choice(a=['2016-10-28', '2016-11-04', '2016-12-23', '2017-01-15', '2017-02-05', '2017-03-26'], size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_IDX] = df[NAME_IDX].astype('category')
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# codes, uniques = df[NAME_IDX].factorize(sort=True)
# codes2, uniques2 = df[NAME_COL].factorize(sort=True)
# first_col_nans = set(range(N_IDX)) - {x[0] for x in zip(codes, codes2) if uniques2[x[1]] == datetime.datetime.strptime('2016-10-28', '%Y-%m-%d')}
# first_col_nans_list = sorted(list(first_col_nans))
# print(first_col_nans_list)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# pivot_cython.info()
# print(pivot_cython.loc[first_col_nans_list])
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# pivot_pandas.info()
# print(pivot_pandas.loc[first_col_nans_list])
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_date():
print()
print('test pivot date')
col1 = [x for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in np.random.choice(a=['2016-10-28', '2016-11-04', '2016-12-23', '2017-01-15', '2017-02-05', '2017-03-26'], size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_IDX] = df[NAME_IDX].astype('category')
df[NAME_COL] = df[NAME_COL].dt.date
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# pivot_cython.info()
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# pivot_pandas.info()
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_cat_bool():
print()
print('test pivot cat bool')
col1 = ['idx{}'.format(x) for x in np.random.randint(0, N_IDX, size=N_ROWS)]
col2 = [x for x in np.random.choice(a=[False, True], size=N_ROWS)]
col3 = [x for x in np.random.normal(size=N_ROWS)]
data = np.transpose([col1, col2, col3])
df = pd.DataFrame(data, columns=[NAME_IDX, NAME_COL, NAME_VALUE], index=range(len(data)))
df[NAME_IDX] = df[NAME_IDX].astype('category')
df[NAME_VALUE] = df[NAME_VALUE].astype(np.float64)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# pivot_cython.info()
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# pivot_pandas.info()
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_nunique_fillNone():
# TODO: better test (with actual nunique not equal to counts, and longer vectors per (i, j) pair)
print()
print('test pivot nunique fill none')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_nan_value():
print()
print('test pivot nan value')
df = gen_df()
df[NAME_VALUE][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_count_fillNone():
print()
print('test pivot count fill None')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_count_fillNone_str():
print()
print('test pivot count fill None with str')
df = gen_df_multiple_columns()
df[NAME_COL2][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_COL2, fill_value=None, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_COL2, fill_value=None, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_nan_value_fillna0():
print()
print('test pivot nan value fillna=0')
df = gen_df()
df[NAME_VALUE][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_nan_index():
print()
print('test pivot nan index')
df = gen_df()
df[NAME_IDX][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_nan_column():
print()
print('test pivot nan column')
df = gen_df()
df[NAME_COL][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# print(df)
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_values_list():
# inexplicably, pandas does not sort here.
# that would be fine if they didn't sort aggfunc and values in all other cases...
# this pivot will sort in all cases
print()
print('test pivot values list')
df = gen_df()
aggfunc_list = ['median', 'sum']
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc=aggfunc_list)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc=aggfunc_list)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_values_list_nan():
# inexplicably, pandas does not sort here.
# that would be fine if they didn't sort aggfunc and values in all other cases...
# this pivot will sort in all cases
print()
print('test pivot values list nan')
df = gen_df()
df[NAME_VALUE][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
aggfunc_list = ['max', 'mean']
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc=aggfunc_list)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc=aggfunc_list)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
# def test_pivot_multiple_values_list():
# print()
# print('test pivot multiple values list')
# df = gen_df_multiple_columns()
# aggfunc_dict = {NAME_COL2: 'count', NAME_VALUE: ['median', 'sum']}
# # time
# msg = 'cython'
# tick = time.perf_counter()
# pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# msg = 'pandas'
# tick = time.perf_counter()
# pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# # check results are equal
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
# assert is_equal_pd
# def test_pivot_multiple_values_list_nan():
# print()
# print('test pivot multiple values list nan')
# df = gen_df_multiple_columns()
# df[NAME_COL2][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# df[NAME_VALUE][np.random.choice(a=[False, True], size=N_ROWS)] = np.nan
# aggfunc_dict = {NAME_COL2: 'count', NAME_VALUE: ['min', 'median']}
# # time
# msg = 'cython'
# tick = time.perf_counter()
# pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_COL2, NAME_VALUE], fill_value=None, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# msg = 'pandas'
# tick = time.perf_counter()
# pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_COL2, NAME_VALUE], fill_value=None, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# # check results are equal
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
# assert is_equal_pd
def test_pivot_sum():
print()
print('test pivot sum')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
# compare to groupby hack
# msg = 'pandas groupby'
# tick = time.perf_counter()
# groupby_pandas = df.groupby([NAME_COL, NAME_IDX])[NAME_VALUE].sum().unstack(level=NAME_COL).fillna(0)
# print(msg, time.perf_counter() - tick)
# # print(groupby_pandas)
# assert (groupby_pandas.equals(pivot_pandas))
def test_pivot_sum_fillnan():
print()
print('test pivot sum fill nan')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=np.nan, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=np.nan, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
# def test_pivot_sum_silly():
# print()
# print('test pivot sum with index, columns list of single string')
# df = gen_df()
# # time
# msg = 'cython'
# tick = time.perf_counter()
# pivot_cython = pivot.pivot_table(df, index=[NAME_IDX], columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
# print(msg, time.perf_counter() - tick)
# # print(pivot_cython)
# msg = 'pandas'
# tick = time.perf_counter()
# pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
# print(msg, time.perf_counter() - tick)
# # print(pivot_pandas)
# # check results are equal
# is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
# print('componentwise equal: ', is_equal)
# epsilon = 1e-8
# within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
# print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
# assert within_epsilon
# assert is_equal
# assert is_equal_pd
# def test_pivot_multiple_values_string():
# print()
# print('test pivot multiple values string')
# df = gen_df_multiple_columns()
# aggfunc_dict = {NAME_COL2: 'count', NAME_VALUE: 'median'}
# # time
# msg = 'cython'
# tick = time.perf_counter()
# pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# # print(pivot_cython)
# msg = 'pandas'
# tick = time.perf_counter()
# pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
# print(msg, time.perf_counter() - tick)
# # print(pivot_pandas)
# # check results are equal
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
# assert is_equal_pd
def test_pivot_multiple_values_string_nunique():
print()
print('test pivot multiple values string nunique')
df = gen_df_multiple_columns()
aggfunc_dict = {NAME_COL2: 'nunique', NAME_VALUE: 'median'}
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_COL2, NAME_VALUE], fill_value=0, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
def test_pivot_multiple_values():
print()
print('test pivot multiple_values')
df = gen_df_multiple_values()
# time
aggfunc_dict = {NAME_VALUE: 'sum', NAME_VALUE2: 'min'}
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_VALUE, NAME_VALUE2], fill_value=0.0, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_VALUE, NAME_VALUE2], fill_value=0.0, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_multiple_values_fillNone():
print()
print('test pivot multiple values fillNone')
df = gen_df_multiple_values()
# time
aggfunc_dict = {NAME_VALUE: 'median', NAME_VALUE2: 'sum'}
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_VALUE, NAME_VALUE2], fill_value=None, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_VALUE, NAME_VALUE2], fill_value=None, aggfunc=aggfunc_dict)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_multiple_values_single_aggfunc():
print()
print('test pivot multiple_values format single aggfunc')
df = gen_df_multiple_values()
# time
    # a single aggfunc string ('sum') is applied to every value column below
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=[NAME_VALUE, NAME_VALUE2], fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=[NAME_VALUE, NAME_VALUE2], fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_sum_int():
print()
print('test pivot sum int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
def test_pivot_mean():
print()
print('test pivot mean')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_mean_fillNone():
print()
print('test pivot mean fill_value=None')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_mean_nodrop():
print()
print('test pivot mean fill_value=None, dropna=False')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='mean', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='mean', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_mean_int():
    # NOTE: pandas keeps the mean as int if all entries in the column are
    # ints, whereas this pivot_table always returns float; pd.equals is
    # therefore printed below for information but not asserted.
print()
print('test pivot mean int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
# pivot_cython.info()
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='mean')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# pivot_pandas.info()
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
def test_pivot_std():
print()
print('test pivot std')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
pivot_cython_numpy = pivot_cython.to_numpy()
pivot_pandas_numpy = pivot_pandas.to_numpy()
    same_nan = (np.isnan(pivot_cython_numpy) == np.isnan(pivot_pandas_numpy)).all()
print('same NaN: ', same_nan)
pivot_cython_numpy = np.nan_to_num(pivot_cython_numpy)
pivot_pandas_numpy = np.nan_to_num(pivot_pandas_numpy)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython_numpy - pivot_pandas_numpy) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal = (pivot_cython_numpy == pivot_pandas_numpy).all()
# print('componentwise equal: ', is_equal)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
assert same_nan
assert within_epsilon
# assert is_equal
# assert is_equal_pd
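
# NaN never compares equal to anything, including itself, so a naive
# (x == np.nan) mask is always all-False; np.isnan is required for the
# same_nan checks in these std tests:
#   np.nan == np.nan   # -> False
#   np.isnan(np.nan)   # -> True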
def test_pivot_std_int():
print()
print('test pivot std int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
pivot_cython_numpy = pivot_cython.to_numpy()
pivot_pandas_numpy = pivot_pandas.to_numpy()
    same_nan = (np.isnan(pivot_cython_numpy) == np.isnan(pivot_pandas_numpy)).all()
print('same NaN: ', same_nan)
pivot_cython_numpy = np.nan_to_num(pivot_cython_numpy)
pivot_pandas_numpy = np.nan_to_num(pivot_pandas_numpy)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython_numpy - pivot_pandas_numpy) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal = (pivot_cython_numpy == pivot_pandas_numpy).all()
# print('componentwise equal: ', is_equal)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
assert same_nan
assert within_epsilon
# assert is_equal
# assert is_equal_pd
def test_pivot_max():
print()
print('test pivot max')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_max_nodrop():
print()
print('test pivot max no drop')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='max', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='max', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_max_nan_fill_none():
print()
print('test pivot max fill None')
df = gen_df()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS), NAME_VALUE] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
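
# Note on the NaN injection above: df.loc[mask, NAME_VALUE] = np.nan is used
# rather than the chained df[NAME_VALUE][mask] = np.nan, which assigns into a
# temporary and is not guaranteed to modify df (it raises
# SettingWithCopyWarning and becomes a no-op under pandas copy-on-write).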
def test_pivot_max_nan_fill_nan():
print()
print('test pivot max fill nan')
df = gen_df()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS), NAME_VALUE] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=np.nan, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=np.nan, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_max_int():
print()
print('test pivot max int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='max')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
def test_pivot_min():
print()
print('test pivot min')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_min_nan_fill_none():
print()
print('test pivot min nan fill none')
df = gen_df()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS), NAME_VALUE] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_min_nan_fill_nan():
print()
print('test pivot min nan fill nan')
df = gen_df()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS), NAME_VALUE] = np.nan
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=np.nan, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=np.nan, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_min_int():
print()
print('test pivot min int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='min')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
def test_pivot_count():
print()
print('test pivot count')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='count')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
def test_pivot_nunique():
    # TODO: better test (with actual nunique not equal to counts, and longer
    # vectors per (i, j) pair); see the gen_df_few_distinct sketch after this test.
print()
print('test pivot nunique')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
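
# A sketch of data for a stronger nunique test, as the TODO in
# test_pivot_nunique notes (hypothetical helper, not yet used by the suite;
# it assumes the gen_df, NAME_VALUE and N_ROWS definitions from this module):
# restricting the value column to a small alphabet guarantees duplicate
# values within (index, column) cells, so nunique < count.
def gen_df_few_distinct(n_distinct=5):
    df = gen_df()
    # overwrite the value column with a few distinct levels so cells repeat
    df[NAME_VALUE] = np.random.randint(0, n_distinct, size=N_ROWS).astype(np.float64)
    return df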
def test_pivot_nunique_int():
    # TODO: better test (with actual nunique not equal to counts, and longer
    # vectors per (i, j) pair); see the gen_df_few_distinct sketch above.
print()
print('test pivot nunique int')
df = gen_df_int()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0, aggfunc='nunique')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
#assert is_equal_pd
def test_pivot_median():
print()
print('test pivot median')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_pivot_median_fillNone():
print()
print('test pivot median fill None')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
def test_pivot_sum_fill_none():
print()
print('test pivot sum with fill_value=None')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert is_equal_pd
# # def test_pivot_sum_fill_string():
# # print('test pivot sum with fill_value="Nothing!"')
# # df = gen_df()
# # # time
# # msg = 'cython'
# # tick = time.perf_counter()
# # pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value='Nothing!', aggfunc='sum')
# # print(msg, time.perf_counter() - tick)
# # # print(pivot_cython)
# # msg = 'pandas'
# # tick = time.perf_counter()
# # pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value='Nothing!', aggfunc='sum')
# # print(msg, time.perf_counter() - tick)
# # # print(pivot_pandas)
# # # check results are equal
# # is_equal_pd = pivot_cython.equals(pivot_pandas)
# # print('pd.equals: ', is_equal_pd)
# # assert is_equal_pd
def test_pivot_std_fill():
print()
print('test pivot std fill_value=0.0')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
pivot_cython_numpy = pivot_cython.to_numpy()
pivot_pandas_numpy = pivot_pandas.to_numpy()
    same_nan = (np.isnan(pivot_cython_numpy) == np.isnan(pivot_pandas_numpy)).all()
print('same NaN: ', same_nan)
pivot_cython_numpy = np.nan_to_num(pivot_cython_numpy)
pivot_pandas_numpy = np.nan_to_num(pivot_pandas_numpy)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython_numpy - pivot_pandas_numpy) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal = (pivot_cython_numpy == pivot_pandas_numpy).all()
# print('componentwise equal: ', is_equal)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
assert same_nan
assert within_epsilon
# assert is_equal
# assert is_equal_pd
def test_pivot_std_fillNone():
print()
print('test pivot std fill_value=None')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=None, aggfunc='std')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
pivot_cython_numpy = pivot_cython.to_numpy()
pivot_pandas_numpy = pivot_pandas.to_numpy()
    same_nan = (np.isnan(pivot_cython_numpy) == np.isnan(pivot_pandas_numpy)).all()
print('same NaN: ', same_nan)
pivot_cython_numpy = np.nan_to_num(pivot_cython_numpy)
pivot_pandas_numpy = np.nan_to_num(pivot_pandas_numpy)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython_numpy - pivot_pandas_numpy) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal = (pivot_cython_numpy == pivot_pandas_numpy).all()
# print('componentwise equal: ', is_equal)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
assert same_nan
assert within_epsilon
# assert is_equal
# assert is_equal_pd
def test_pivot_std_fill_nodrop():
print()
print('test pivot std fill_value=0.0 dropna=False')
df = gen_df()
# time
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='std', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL], values=NAME_VALUE, fill_value=0.0, aggfunc='std', dropna=False)
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
pivot_cython_numpy = pivot_cython.to_numpy()
pivot_pandas_numpy = pivot_pandas.to_numpy()
    same_nan = (np.isnan(pivot_cython_numpy) == np.isnan(pivot_pandas_numpy)).all()
print('same NaN: ', same_nan)
pivot_cython_numpy = np.nan_to_num(pivot_cython_numpy)
pivot_pandas_numpy = np.nan_to_num(pivot_pandas_numpy)
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython_numpy - pivot_pandas_numpy) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
# is_equal = (pivot_cython_numpy == pivot_pandas_numpy).all()
# print('componentwise equal: ', is_equal)
# is_equal_pd = pivot_cython.equals(pivot_pandas)
# print('pd.equals: ', is_equal_pd)
assert same_nan
assert within_epsilon
# assert is_equal
# assert is_equal_pd
def test_multiple_columns():
print()
print('test pivot sum with multiple columns')
df = gen_df_multiple_columns()
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_multiple_columns_nan():
print()
print('test pivot sum with multiple columns nan')
df = gen_df_multiple_columns()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS, p=[0.75, 0.25]), NAME_COL] = np.nan
    df.loc[np.random.choice(a=[False, True], size=N_ROWS, p=[0.75, 0.25]), NAME_COL2] = np.nan
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
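
# The p=[0.75, 0.25] probabilities above make roughly a quarter of each
# column key NaN; rows whose grouping keys are NaN are dropped from the
# pivot, which is the behaviour this test compares between implementations.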
def test_multiple_columns_median():
print()
print('test pivot median with multiple columns')
df = gen_df_multiple_columns()
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=NAME_IDX, columns=[NAME_COL, NAME_COL2], values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_multiple_index():
print()
print('test pivot sum with multiple index')
df = gen_df_multiple_index()
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_multiple_index_nan():
print()
print('test pivot sum with multiple index nan')
df = gen_df_multiple_index()
    df.loc[np.random.choice(a=[False, True], size=N_ROWS, p=[0.75, 0.25]), NAME_IDX] = np.nan
    df.loc[np.random.choice(a=[False, True], size=N_ROWS, p=[0.75, 0.25]), NAME_IDX2] = np.nan
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='sum')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
def test_multiple_index_median():
print()
print('test pivot median with multiple index')
df = gen_df_multiple_index()
msg = 'cython'
tick = time.perf_counter()
pivot_cython = pivot.pivot_table(df, index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_cython)
msg = 'pandas'
tick = time.perf_counter()
pivot_pandas = df.pivot_table(index=[NAME_IDX, NAME_IDX2], columns=NAME_COL, values=NAME_VALUE, fill_value=0.0, aggfunc='median')
print(msg, time.perf_counter() - tick)
# print(pivot_pandas)
# check results are equal
epsilon = 1e-8
within_epsilon = (np.absolute(pivot_cython.to_numpy() - pivot_pandas.to_numpy()) < epsilon).all()
print('componentwise within {} :'.format(epsilon), within_epsilon)
is_equal = (pivot_cython.to_numpy() == pivot_pandas.to_numpy()).all()
print('componentwise equal: ', is_equal)
is_equal_pd = pivot_cython.equals(pivot_pandas)
print('pd.equals: ', is_equal_pd)
assert within_epsilon
assert is_equal
assert is_equal_pd
# file: test/integration/component/test_affinity_groups.py (repo: saliven1970/cloudstack, license: Apache-2.0)
#!/usr/bin/env python
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from marvin.cloudstackTestCase import cloudstackTestCase, unittest
from marvin.cloudstackAPI import deleteAffinityGroup
from marvin.lib.utils import (cleanup_resources,
random_gen)
from marvin.lib.base import (Account,
ServiceOffering,
VirtualMachine,
AffinityGroup,
Domain)
from marvin.lib.common import (get_zone,
get_domain,
get_template,
list_virtual_machines,
wait_for_cleanup)
from nose.plugins.attrib import attr
class Services:
"""Test Account Services
"""
def __init__(self):
self.services = {
"domain": {
"name": "Domain",
},
"account": {
"email": "newtest@test.com",
"firstname": "Test",
"lastname": "User",
"username": "test",
# Random characters are appended for unique
# username
"password": "password",
},
"service_offering": {
"name": "Tiny Instance",
"displaytext": "Tiny Instance",
"cpunumber": 1,
"cpuspeed": 100,
# in MHz
"memory": 64,
# In MBs
},
"ostype": 'CentOS 5.3 (64-bit)',
"host_anti_affinity": {
"name": "",
"type": "host anti-affinity",
},
"virtual_machine" : {
},
"new_domain": {
"name": "New Domain",
},
"new_account": {
"email": "domain@test.com",
"firstname": "Domain",
"lastname": "Admin",
"username": "do_admin",
# Random characters are appended for unique
# username
"password": "password",
},
"new_account1": {
"email": "user@test.com",
"firstname": "User",
"lastname": "User",
"username": "user",
# Random characters are appended for unique
# username
"password": "password",
},
}
class TestCreateAffinityGroup(cloudstackTestCase):
"""
Test various scenarios for Create Affinity Group API
"""
@classmethod
def setUpClass(cls):
cls.testClient = super(TestCreateAffinityGroup, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.cleanup = []
def tearDown(self):
try:
# Clean up, terminate the created instance, volumes and snapshots
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestCreateAffinityGroup, cls).getClsTestClient().getApiClient()
#Clean up, terminate the created templates
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None, aff_grp_name=None):
if not api_client:
api_client = self.api_client
if not aff_grp:
aff_grp = self.services["host_anti_affinity"]
if not acc:
acc = self.account.name
if not domainid:
domainid = self.domain.id
if aff_grp_name is None:
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
else:
aff_grp["name"] = aff_grp_name
try:
return AffinityGroup.create(api_client, aff_grp, acc, domainid)
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_01_admin_create_aff_grp(self):
"""
Test create affinity group as admin
@return:
"""
aff_grp = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.account.name, domainid=self.account.domainid)
self.debug("Created Affinity Group: %s" % aff_grp.name)
list_aff_grps = AffinityGroup.list(self.api_client, id=aff_grp.id)
        self.assertTrue(isinstance(list_aff_grps, list) and len(list_aff_grps) > 0)
        self.assertEqual(list_aff_grps[0].id, aff_grp.id)
self.cleanup.append(aff_grp)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_02_doadmin_create_aff_grp(self):
"""
Test create affinity group as domain admin
@return:
"""
self.new_domain = Domain.create(self.api_client, self.services["new_domain"])
self.do_admin = Account.create(self.api_client, self.services["new_account"],
admin=True, domainid=self.new_domain.id)
self.cleanup.append(self.do_admin)
self.cleanup.append(self.new_domain)
domainapiclient = self.testClient.getUserApiClient(self.do_admin.name, self.new_domain.name, 2)
aff_grp = self.create_aff_grp(api_client=domainapiclient, aff_grp=self.services["host_anti_affinity"],
acc=self.do_admin.name, domainid=self.new_domain.id)
aff_grp.delete(domainapiclient)
#@attr(tags=["simulator", "basic", "advanced"])
@attr(tags=["vogxn", "simulator", "basic", "advanced"], required_hardware="false")
def test_03_user_create_aff_grp(self):
"""
Test create affinity group as user
@return:
"""
self.user = Account.create(self.api_client, self.services["new_account"],
domainid=self.domain.id)
self.cleanup.append(self.user)
userapiclient = self.testClient.getUserApiClient(self.user.name, self.domain.name)
aff_grp = self.create_aff_grp(api_client=userapiclient, aff_grp=self.services["host_anti_affinity"],
acc=self.user.name, domainid=self.domain.id)
aff_grp.delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_04_user_create_aff_grp_existing_name(self):
"""
Test create affinity group that exists (same name)
@return:
"""
self.user = Account.create(self.api_client, self.services["new_account"],
domainid=self.domain.id)
self.cleanup.append(self.user)
aff_grp = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user.name, domainid=self.domain.id)
with self.assertRaises(Exception):
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user.name, domainid=self.domain.id,
aff_grp_name = aff_grp.name)
self.debug("Deleted Affinity Group: %s" %aff_grp.name)
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_05_create_aff_grp_same_name_diff_acc(self):
"""
Test create affinity group with existing name but within different account
@return:
"""
self.user = Account.create(self.api_client, self.services["new_account"],
domainid=self.domain.id)
self.cleanup.append(self.user)
aff_grp = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user.name, domainid=self.domain.id)
        try:
            self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
        except Exception as e:
            self.fail("Creating an affinity group with the same name from a different account should succeed: %s" % e)
self.debug("Deleted Affinity Group: %s" %aff_grp.name)
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_06_create_aff_grp_nonexisting_type(self):
"""
Test create affinity group of non-existing type
@return:
"""
self.non_existing_aff_grp = {
"name": "TestAffGrp_HA",
"type": "Incorrect type",
}
with self.assertRaises(Exception):
self.create_aff_grp(aff_grp=self.non_existing_aff_grp)
class TestListAffinityGroups(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.testClient = super(TestListAffinityGroups, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
# Create multiple Affinity Groups
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.aff_grp = []
self.cleanup = []
def tearDown(self):
try:
self.api_client = super(TestListAffinityGroups, self).getClsTestClient().getApiClient()
#Clean up, terminate the created templates
cleanup_resources(self.api_client, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestListAffinityGroups, cls).getClsTestClient().getApiClient()
#Clean up, terminate the created templates
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None):
        if api_client is None:
            api_client = self.api_client
        if aff_grp is None:
            aff_grp = self.services["host_anti_affinity"]
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
try:
aff_grp = AffinityGroup.create(api_client,
aff_grp, acc, domainid)
self.aff_grp.append(aff_grp)
return aff_grp
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
def create_vm_in_aff_grps(self, ag_list, account_name=None, domain_id=None):
        if account_name is None:
            account_name = "admin"
        if domain_id is None:
            domain_id = self.domain.id
self.debug('Creating VM in AffinityGroup=%s' % ag_list[0])
vm = VirtualMachine.create(
self.api_client,
self.services["virtual_machine"],
accountid=account_name,
domainid=domain_id,
templateid=self.template.id,
serviceofferingid=self.service_offering.id,
affinitygroupnames=ag_list
)
self.debug('Created VM=%s in Affinity Group=%s' %
(vm.id, ag_list[0]))
list_vm = list_virtual_machines(self.api_client, id=vm.id)
self.assertEqual(isinstance(list_vm, list), True,
"Check list response returns a valid list")
self.assertNotEqual(len(list_vm),0,
"Check VM available in List Virtual Machines")
vm_response = list_vm[0]
self.assertEqual(vm_response.state, 'Running',
msg="VM is not in Running state")
return vm, vm_response.hostid
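
    # deployVirtualMachine accepts affinitygroupnames, so the VM is bound to
    # its anti-affinity groups at deploy time; hostid is returned for callers
    # that want to verify host placement across group members.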
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_01_list_aff_grps_for_vm(self):
"""
List affinity group for a vm
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client)
vm, hostid = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm.id)
self.assertEqual(list_aff_grps[0].name, self.aff_grp[0].name,
"Listing Affinity Group by VM id failed")
vm.delete(self.api_client)
#Wait for expunge interval to cleanup VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
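
    # wait_for_cleanup above sleeps for the duration implied by the
    # expunge.delay and expunge.interval global settings, so the expunge
    # thread has removed the VM before its affinity group is deleted.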
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_02_list_multiple_aff_grps_for_vm(self):
"""
List multiple affinity groups associated with a vm
"""
aff_grp_01 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
aff_grp_02 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
aff_grps_names = [self.aff_grp[0].name, self.aff_grp[1].name]
vm, hostid = self.create_vm_in_aff_grps(aff_grps_names, account_name=self.account.name, domain_id=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm.id)
list_aff_grps_names = [list_aff_grps[0].name, list_aff_grps[1].name]
aff_grps_names.sort()
list_aff_grps_names.sort()
self.assertEqual(aff_grps_names, list_aff_grps_names,
"One of the Affinity Groups is missing %s"
%list_aff_grps_names)
vm.delete(self.api_client)
#Wait for expunge interval to cleanup VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
aff_grp_01.delete(self.api_client)
aff_grp_02.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_03_list_aff_grps_by_id(self):
"""
List affinity groups by id
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
        print(self.aff_grp[0].__dict__)
list_aff_grps = AffinityGroup.list(self.api_client)
list_aff_grps = AffinityGroup.list(self.api_client, id=list_aff_grps[0].id)
self.assertEqual(list_aff_grps[0].name, self.aff_grp[0].name,
"Listing Affinity Group by VM id failed")
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_04_list_aff_grps_by_name(self):
"""
List Affinity Groups by name
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
name=self.aff_grp[0].name)
self.assertEqual(list_aff_grps[0].name, self.aff_grp[0].name,
"Listing Affinity Group by name failed")
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_05_list_aff_grps_by_non_existing_id(self):
"""
List Affinity Groups by non-existing id
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
id=1234)
self.assertEqual(list_aff_grps, None,
"Listing Affinity Group by non-existing id succeeded.")
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_06_list_aff_grps_by_non_existing_name(self):
"""
List Affinity Groups by non-existing name
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
name="NonexistingName")
self.assertEqual(list_aff_grps, None,
"Listing Affinity Group by non-existing name succeeded.")
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_07_list_all_vms_in_aff_grp(self):
"""
        Listing an affinity group should list all vms associated with that group
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm, hostid = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client, id=self.aff_grp[0].id)
self.assertEqual(list_aff_grps[0].name, self.aff_grp[0].name,
"Listing Affinity Group by id failed")
self.assertEqual(list_aff_grps[0].virtualmachineIds[0], vm.id,
"List affinity group response.virtualmachineIds for group: %s doesn't contain hostid : %s associated with the group"
%(self.aff_grp[0].name, vm.id)
)
vm.delete(self.api_client)
#Wait for expunge interval to cleanup VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
class TestDeleteAffinityGroups(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.testClient = super(TestDeleteAffinityGroups, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
# Create multiple Affinity Groups
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.aff_grp = []
self.cleanup = []
def tearDown(self):
try:
self.api_client = super(TestDeleteAffinityGroups,self).getClsTestClient().getApiClient()
#Clean up, terminate the created templates
cleanup_resources(self.api_client, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestDeleteAffinityGroups, cls).getClsTestClient().getApiClient()
#Clean up, terminate the created templates
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None):
        if api_client is None:
            api_client = self.api_client
        if aff_grp is None:
            aff_grp = self.services["host_anti_affinity"]
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
try:
return AffinityGroup.create(api_client, aff_grp, acc, domainid)
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
def create_vm_in_aff_grps(self, ag_list, account_name=None, domain_id=None):
        if account_name is None:
            account_name = "admin"
        if domain_id is None:
            domain_id = self.domain.id
self.debug('Creating VM in AffinityGroup=%s' % ag_list[0])
vm = VirtualMachine.create(
self.api_client,
self.services["virtual_machine"],
accountid=account_name,
domainid=domain_id,
templateid=self.template.id,
serviceofferingid=self.service_offering.id,
affinitygroupnames=ag_list
)
self.debug('Created VM=%s in Affinity Group=%s' %
(vm.id, ag_list[0]))
list_vm = list_virtual_machines(self.api_client, id=vm.id)
self.assertEqual(isinstance(list_vm, list), True,
"Check list response returns a valid list")
self.assertNotEqual(len(list_vm), 0,
"Check that the deployed VM appears in the list response")
vm_response = list_vm[0]
self.assertEqual(vm_response.state, 'Running',
msg="VM is not in Running state")
return vm, vm_response.hostid
def delete_aff_group(self, apiclient, **kwargs):
cmd = deleteAffinityGroup.deleteAffinityGroupCmd()
for k, v in kwargs.items():
    setattr(cmd, k, v)
return apiclient.deleteAffinityGroup(cmd)
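# Illustrative sketch (not part of the original suite): delete_aff_group above
# follows the test client's generic pattern of instantiating a command object
# and copying each keyword argument onto it with setattr before dispatching.
# The same pattern in isolation, using a stand-in object instead of a real
# CloudStack command class:
#
#     def _demo_build_cmd(**kwargs):
#         class _StubCmd:
#             pass
#         cmd = _StubCmd()
#         for k, v in kwargs.items():
#             setattr(cmd, k, v)  # e.g. cmd.id = "...", cmd.name = "..."
#         return cmd
#
#     _demo_build_cmd(name="aff_grp_abc123").name == "aff_grp_abc123"  # True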
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_01_delete_aff_grp_by_name(self):
"""
Delete Affinity Group by name
"""
aff_0 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
AffinityGroup.list(self.api_client, name=aff_0.name)
self.delete_aff_group(self.api_client, name=aff_0.name)
self.assertIsNone(AffinityGroup.list(self.api_client, name=aff_0.name))
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_02_delete_aff_grp_for_acc(self):
"""
Delete Affinity Group as admin for an account
"""
aff_0 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.account.name, domainid=self.domain.id)
aff_1 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.account.name, domainid=self.domain.id)
aff_0.delete(self.api_client)
with self.assertRaises(Exception):
self.create_vm_in_aff_grps([aff_0.name], account_name=self.account.name, domain_id=self.domain.id)
aff_1.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_03_delete_aff_grp_with_vms(self):
"""
Delete Affinity Group which has vms in it
"""
aff_0 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
aff_1 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm, hostid = self.create_vm_in_aff_grps([aff_0.name, aff_1.name], account_name=self.account.name, domain_id=self.domain.id)
aff_0.delete(self.api_client)
vm_list = list_virtual_machines(self.apiclient, id=vm.id)
self.assertIsNotNone(vm_list)
vm.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
aff_1.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_05_delete_aff_grp_id(self):
"""
Delete Affinity Group with id which does not belong to this user
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
aff_0 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user1.name,
domainid=self.domain.id)
self.user2 = Account.create(self.apiclient, self.services["new_account1"])
self.cleanup.append(self.user2)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user2.name,
DomainName=self.user2.domain,
type=0)
aff_1 = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
name=aff_0.name)
# Delete Affinity group belonging to different user by id
with self.assertRaises(Exception):
self.delete_aff_group(userapiclient, id=list_aff_grps[0].id)
#Cleanup
aff_0.delete(self.api_client)
aff_1.delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_06_delete_aff_grp_name(self):
"""
Delete Affinity Group by name which does not belong to this user
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
aff_0 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user1.name,
domainid=self.domain.id)
self.user2 = Account.create(self.apiclient, self.services["new_account1"])
self.cleanup.append(self.user2)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user2.name,
DomainName=self.user2.domain,
type=0)
aff_1 = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
name=aff_0.name)
# Delete Affinity group belonging to different user by name
with self.assertRaises(Exception):
self.delete_aff_group(userapiclient, name=list_aff_grps[0].name)
#Cleanup
aff_0.delete(self.api_client)
aff_1.delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_08_delete_aff_grp_by_id(self):
"""
Delete Affinity Group by id.
"""
aff_grp_1 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
aff_grp_2 = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
aff_grp_1.delete(self.api_client)
aff_grp_2.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_09_delete_aff_grp_root_admin(self):
"""
Root admin should be able to delete affinity group of other users
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
user1apiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp = self.create_aff_grp(api_client=user1apiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client)
self.assertNotEqual(list_aff_grps, [], "Admin not able to list Affinity "
"Groups of users")
aff_grp.delete(self.api_client)
class TestUpdateVMAffinityGroups(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.testClient = super(TestUpdateVMAffinityGroups, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.aff_grp = []
self.cleanup = []
def tearDown(self):
try:
self.api_client = super(TestUpdateVMAffinityGroups, self).getClsTestClient().getApiClient()
# Clean up any resources created by the test
cleanup_resources(self.api_client, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestUpdateVMAffinityGroups, cls).getClsTestClient().getApiClient()
# Clean up any resources created by the test class
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None):
if api_client is None:
api_client = self.api_client
if aff_grp is None:
aff_grp = self.services["host_anti_affinity"]
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
try:
self.aff_grp.append(AffinityGroup.create(api_client,
aff_grp, acc, domainid))
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
def create_vm_in_aff_grps(self, ag_list, account_name=None, domain_id=None):
if account_name is None:
account_name = "admin"
if domain_id is None:
domain_id = self.domain.id
self.debug('Creating VM in AffinityGroup=%s' % ag_list)
vm = VirtualMachine.create(
self.api_client,
self.services["virtual_machine"],
accountid=account_name,
domainid=domain_id,
templateid=self.template.id,
serviceofferingid=self.service_offering.id,
affinitygroupnames=ag_list
)
self.debug('Created VM=%s in Affinity Group=%s' %
(vm.id, ag_list))
list_vm = list_virtual_machines(self.api_client, id=vm.id)
self.assertEqual(isinstance(list_vm, list), True,
"Check list response returns a valid list")
self.assertNotEqual(len(list_vm), 0,
"Check that the deployed VM appears in the list response")
vm_response = list_vm[0]
self.assertEqual(vm_response.state, 'Running',
msg="VM is not in Running state")
return vm, vm_response.hostid
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_01_update_aff_grp_by_ids(self):
"""
Update the list of affinityGroups by using affinity groupids
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm1.stop(self.api_client)
list_aff_grps = AffinityGroup.list(self.api_client, account=self.account.name, domainid=self.domain.id)
self.assertEqual(len(list_aff_grps), 2, "2 affinity groups should be present")
vm1.update_affinity_group(self.api_client,
affinitygroupids=[list_aff_grps[0].id,
list_aff_grps[1].id])
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm1.id)
list_aff_grps_names = [list_aff_grps[0].name, list_aff_grps[1].name]
aff_grps_names = [self.aff_grp[0].name, self.aff_grp[1].name]
aff_grps_names.sort()
list_aff_grps_names.sort()
self.assertEqual(aff_grps_names, list_aff_grps_names,
"One of the Affinity Groups is missing %s"
%list_aff_grps_names)
vm1.start(self.api_client)
vm_status = VirtualMachine.list(self.api_client, id=vm1.id)
self.assertNotEqual(vm_status[0].hostid, hostid2, "The virtual machine "
"started on host %s violating the host anti-affinity "
"rule" % vm_status[0].hostid)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
for aff_grp in self.aff_grp:
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_02_update_aff_grp_by_names(self):
"""
Update the list of affinityGroups by using affinity groupnames
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm1.stop(self.api_client)
vm1.update_affinity_group(self.api_client,
affinitygroupnames=[self.aff_grp[0].name,
self.aff_grp[1].name])
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm1.id)
list_aff_grps_names = [list_aff_grps[0].name, list_aff_grps[1].name]
aff_grps_names = [self.aff_grp[0].name, self.aff_grp[1].name]
aff_grps_names.sort()
list_aff_grps_names.sort()
self.assertEqual(aff_grps_names, list_aff_grps_names,
"One of the Affinity Groups is missing %s"
%list_aff_grps_names)
vm1.start(self.api_client)
vm_status = VirtualMachine.list(self.api_client, id=vm1.id)
self.assertNotEqual(vm_status[0].hostid, hostid2, "The virtual machine "
"started on host %s violating the host anti-affinity "
"rule" % vm_status[0].hostid)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
for aff_grp in self.aff_grp:
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_03_update_aff_grp_for_vm_with_no_aff_grp(self):
"""
Update the list of affinityGroups for vm which is not associated
with any affinity groups.
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps([], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm1.stop(self.api_client)
vm1.update_affinity_group(self.api_client,
affinitygroupnames=[self.aff_grp[0].name])
vm1.start(self.api_client)
vm_status = VirtualMachine.list(self.api_client, id=vm1.id)
self.assertNotEqual(vm_status[0].hostid, hostid2, "The virtual machine "
"started on host %s violating the host anti-affinity "
"rule" % vm_status[0].hostid)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
aff_grps = [self.aff_grp[0], self.aff_grp[1]]
for aff_grp in aff_grps:
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost", "NotRun"])
def test_04_update_aff_grp_remove_all(self):
"""
Update the list of Affinity Groups to empty list
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
vm1, hostid1 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
aff_grps = [self.aff_grp[0], self.aff_grp[1]]
vm1.stop(self.api_client)
vm1.update_affinity_group(self.api_client, affinitygroupids = [])
vm1.start(self.api_client)
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm1.id)
self.assertEqual(list_aff_grps, [], "The affinity groups list is not empty")
vm1.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
for aff_grp in aff_grps:
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_05_update_aff_grp_on_running_vm(self):
"""
Update the list of Affinity Groups on running vm
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps([self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
aff_grps = [self.aff_grp[0], self.aff_grp[1]]
with self.assertRaises(Exception):
vm1.update_affinity_group(self.api_client, affinitygroupnames=[])
vm1.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
for aff_grp in aff_grps:
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost", "NotRun"])
def test_06_update_aff_grp_invalid_args(self):
"""
Update the list of Affinity Groups with either both args or none
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"])
vm1, hostid1 = self.create_vm_in_aff_grps([], account_name=self.account.name, domain_id=self.domain.id)
aff_grps = [self.aff_grp[0], self.aff_grp[1]]
vm1.stop(self.api_client)
with self.assertRaises(Exception):
vm1.update_affinity_group(self.api_client)
with self.assertRaises(Exception):
vm1.update_affinity_group(self.api_client, affinitygroupids=[self.aff_grp[0].id], affinitygroupnames=[self.aff_grp[1].name])
vm1.update_affinity_group(self.api_client, affinitygroupids=[])
vm1.delete(self.api_client)
# Can clean up the affinity groups now since none are set on the VM
for aff_grp in aff_grps:
aff_grp.delete(self.api_client)
class TestDeployVMAffinityGroups(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.testClient = super(TestDeployVMAffinityGroups, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.aff_grp = []
self.cleanup = []
def tearDown(self):
try:
self.api_client = super(TestDeployVMAffinityGroups, self).getClsTestClient().getApiClient()
# Clean up any resources created by the test
cleanup_resources(self.api_client, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestDeployVMAffinityGroups, cls).getClsTestClient().getApiClient()
# Clean up any resources created by the test class
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None):
if api_client is None:
api_client = self.api_client
if aff_grp is None:
aff_grp = self.services["host_anti_affinity"]
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
try:
self.aff_grp.append(AffinityGroup.create(api_client,
aff_grp, acc, domainid))
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
def create_vm_in_aff_grps(self, api_client=None, ag_list=None, ag_ids=None, account_name=None, domain_id=None):
if account_name is None:
account_name = "admin"
if domain_id is None:
domain_id = self.domain.id
if api_client is None:
api_client = self.api_client
self.debug('Creating VM in AffinityGroup=%s' % ag_list)
vm = VirtualMachine.create(
api_client,
self.services["virtual_machine"],
accountid=account_name,
domainid=domain_id,
templateid=self.template.id,
serviceofferingid=self.service_offering.id,
affinitygroupnames=ag_list,
affinitygroupids=ag_ids
)
self.debug('Created VM=%s in Affinity Group=%s' %
(vm.id, ag_list))
list_vm = list_virtual_machines(self.api_client, id=vm.id)
self.assertEqual(isinstance(list_vm, list), True,
"Check list response returns a valid list")
self.assertNotEqual(len(list_vm), 0,
"Check that the deployed VM appears in the list response")
vm_response = list_vm[0]
self.assertEqual(vm_response.state, 'Running',
msg="VM is not in Running state")
return vm, vm_response.hostid
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_01_deploy_vm_without_aff_grp(self):
"""
Deploy VM without affinity group
"""
vm1, hostid1 = self.create_vm_in_aff_grps(account_name=self.account.name, domain_id=self.domain.id)
vm1.delete(self.api_client)
# Wait for the expunge interval to clean up the VM
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_02_deploy_vm_by_aff_grp_name(self):
"""
Deploy VM by aff grp name
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm1.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_03_deploy_vm_by_aff_grp_id(self):
"""
Deploy VM by aff grp id
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
name=self.aff_grp[0].name, account=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_ids=[list_aff_grps[0].id], account_name=self.account.name, domain_id=self.domain.id)
vm1.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_04_deploy_vm_anti_affinity_group(self):
"""
test DeployVM in anti-affinity groups
deploy VM1 and VM2 in the same host-anti-affinity groups
Verify that the vms are deployed on separate hosts
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
self.assertNotEqual(hostid1, hostid2,
msg="Both VMs of affinity group %s are on the same host"
% self.aff_grp[0].name)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_05_deploy_vm_by_id(self):
"""
Deploy vms by affinity group id
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
name=self.aff_grp[0].name, account=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_ids=[list_aff_grps[0].id], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps(ag_ids=[list_aff_grps[0].id], account_name=self.account.name, domain_id=self.domain.id)
self.assertNotEqual(hostid1, hostid2,
msg="Both VMs of affinity group %s are on the same host"
% self.aff_grp[0].name)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_06_deploy_vm_aff_grp_of_other_user_by_name(self):
"""
Deploy vm in affinity group of another user by name
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user1.name,
domainid=self.domain.id)
self.user2 = Account.create(self.apiclient, self.services["new_account1"])
self.cleanup.append(self.user2)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user2.name,
DomainName=self.user2.domain,
type=0)
self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
with self.assertRaises(Exception):
vm1, hostid1 = self.create_vm_in_aff_grps(api_client=userapiclient,
ag_list=[self.aff_grp[0].name], account_name=self.account.name, domain_id=self.domain.id)
self.aff_grp[0].delete(self.api_client)
self.aff_grp[1].delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_07_deploy_vm_aff_grp_of_other_user_by_id(self):
"""
Deploy vm in affinity group of another user by id
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user1.name,
domainid=self.domain.id)
self.user2 = Account.create(self.apiclient, self.services["new_account1"])
self.cleanup.append(self.user2)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user2.name,
DomainName=self.user2.domain,
type=0)
self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client,
name=self.aff_grp[0].name)
# Deploy VM in Affinity group belonging to different user by id
with self.assertRaises(Exception):
vm1, hostid1 = self.create_vm_in_aff_grps(api_client=userapiclient,
ag_ids=[list_aff_grps[0].id], account_name=self.account.name, domain_id=self.domain.id)
self.aff_grp[0].delete(self.api_client)
self.aff_grp[1].delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_08_deploy_vm_multiple_aff_grps(self):
"""
Deploy vm in multiple affinity groups
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name,
self.aff_grp[1].name], account_name=self.account.name, domain_id=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm1.id)
aff_grps_names = [self.aff_grp[0].name, self.aff_grp[1].name]
list_aff_grps_names = [list_aff_grps[0].name, list_aff_grps[1].name]
aff_grps_names.sort()
list_aff_grps_names.sort()
self.assertEqual(aff_grps_names, list_aff_grps_names,
"One of the Affinity Groups is missing %s"
%list_aff_grps_names)
vm1.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
self.aff_grp[1].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_09_deploy_vm_multiple_aff_grps(self):
"""
Deploy multiple vms in multiple affinity groups
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
vm1, hostid1 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name,
self.aff_grp[1].name], account_name=self.account.name, domain_id=self.domain.id)
vm2, hostid2 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name,
self.aff_grp[1].name], account_name=self.account.name, domain_id=self.domain.id)
aff_grps_names = [self.aff_grp[0].name, self.aff_grp[1].name]
aff_grps_names.sort()
for vm in [vm1, vm2]:
list_aff_grps = AffinityGroup.list(self.api_client,
virtualmachineid=vm.id)
list_aff_grps_names = [list_aff_grps[0].name, list_aff_grps[1].name]
list_aff_grps_names.sort()
self.assertEqual(aff_grps_names, list_aff_grps_names,
"One of the Affinity Groups is missing %s"
%list_aff_grps_names)
vm1.delete(self.api_client)
vm2.delete(self.api_client)
wait_for_cleanup(self.apiclient, ["expunge.delay", "expunge.interval"])
self.aff_grp[0].delete(self.api_client)
self.aff_grp[1].delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_10_deploy_vm_by_aff_grp_name_and_id(self):
"""
Deploy VM by aff grp name and id
"""
self.create_aff_grp(aff_grp=self.services["host_anti_affinity"], acc=self.account.name, domainid=self.domain.id)
list_aff_grps = AffinityGroup.list(self.api_client,
name=self.aff_grp[0].name)
with self.assertRaises(Exception):
vm1, hostid1 = self.create_vm_in_aff_grps(ag_list=[self.aff_grp[0].name],
ag_ids=[list_aff_grps[0].id], account_name=self.account.name, domain_id=self.domain.id)
self.aff_grp[0].delete(self.api_client)
class TestAffinityGroupsAdminUser(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.testClient = super(TestAffinityGroupsAdminUser, cls).getClsTestClient()
cls.api_client = cls.testClient.getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client)
cls.zone = get_zone(cls.api_client, cls.testClient.getZoneForTests())
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["template"] = cls.template.id
cls.services["zoneid"] = cls.zone.id
cls._cleanup = []
cls.account = Account.create(
cls.api_client,
cls.services["account"],
domainid=cls.domain.id
)
cls._cleanup.append(cls.account)
cls.services["account"] = cls.account.name
cls.services["domainid"] = cls.domain.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup.append(cls.service_offering)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.aff_grp = []
self.cleanup = []
def tearDown(self):
try:
self.api_client = super(TestAffinityGroupsAdminUser, self).getClsTestClient().getApiClient()
# Clean up any resources created by the test
cleanup_resources(self.api_client, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
@classmethod
def tearDownClass(cls):
try:
cls.api_client = super(TestAffinityGroupsAdminUser, cls).getClsTestClient().getApiClient()
# Clean up any resources created by the test class
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
def create_aff_grp(self, api_client=None, aff_grp=None,
acc=None, domainid=None):
if api_client is None:
api_client = self.api_client
if aff_grp is None:
aff_grp = self.services["host_anti_affinity"]
aff_grp["name"] = "aff_grp_" + random_gen(size=6)
try:
return AffinityGroup.create(api_client, aff_grp, acc, domainid)
except Exception as e:
raise Exception("Error: Creation of Affinity Group failed : %s" %e)
def create_vm_in_aff_grps(self, api_client=None, ag_list=None, ag_ids=None, account_name=None, domain_id=None):
if account_name is None:
account_name = "admin"
if domain_id is None:
domain_id = self.domain.id
if api_client is None:
api_client = self.api_client
self.debug('Creating VM in AffinityGroup=%s' % ag_list)
vm = VirtualMachine.create(
api_client,
self.services["virtual_machine"],
templateid=self.template.id,
serviceofferingid=self.service_offering.id,
affinitygroupnames=ag_list,
affinitygroupids=ag_ids
)
self.debug('Created VM=%s in Affinity Group=%s' %
(vm.id, ag_list))
list_vm = list_virtual_machines(self.api_client, id=vm.id)
self.assertEqual(isinstance(list_vm, list), True,
"Check list response returns a valid list")
self.assertNotEqual(len(list_vm), 0,
"Check that the deployed VM appears in the list response")
vm_response = list_vm[0]
self.assertEqual(vm_response.state, 'Running',
msg="VM is not in Running state")
return vm, vm_response.hostid
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_01_deploy_vm_another_user(self):
"""
Deploy vm as Admin in Affinity Group belonging to regular user (should fail)
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
with self.assertRaises(Exception):
self.create_vm_in_aff_grps(api_client=self.apiclient, ag_list=[aff_grp.name])
aff_grp.delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced", "multihost"])
def test_02_create_aff_grp_user(self):
"""
Create Affinity Group as admin for regular user
"""
self.user = Account.create(self.api_client, self.services["new_account"],
domainid=self.domain.id)
self.cleanup.append(self.user)
aff_grp = self.create_aff_grp(aff_grp=self.services["host_anti_affinity"],
acc=self.user.name, domainid=self.domain.id)
aff_grp.delete(self.apiclient)
@attr(tags=["simulator", "basic", "advanced", "multihost"], required_hardware="false")
def test_03_list_aff_grp_all_users(self):
"""
List Affinity Groups as admin for all the users
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client)
self.assertNotEqual(list_aff_grps, [], "Admin not able to list Affinity "
"Groups of users")
aff_grp.delete(userapiclient)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_04_list_all_admin_aff_grp(self):
"""
List Affinity Groups belonging to admin user
"""
aff_grp1 = self.create_aff_grp(api_client=self.api_client,
aff_grp=self.services["host_anti_affinity"])
aff_grp2 = self.create_aff_grp(api_client=self.api_client,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client)
self.assertNotEqual(list_aff_grps, [], "Admin not able to list Affinity "
"Groups belonging to him")
grp_names = [aff_grp1.name, aff_grp2.name]
list_names = []
for grp in list_aff_grps:
list_names.append(grp.name)
for name in grp_names:
self.assertTrue(name in list_names,
"Listing affinity groups belonging to Admin didn't return group %s" %(name))
aff_grp1.delete(self.api_client)
aff_grp2.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_05_list_all_users_aff_grp(self):
"""
List Affinity Groups belonging to regular user passing account id and domain id
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp1 = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
aff_grp2 = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(self.api_client, accountId=self.user1.id, domainId=self.user1.domainid)
self.assertNotEqual(list_aff_grps, [], "Admin not able to list Affinity "
"Groups of users")
grp_names = [aff_grp1.name, aff_grp2.name]
list_names = []
for grp in list_aff_grps:
list_names.append(grp.name)
for name in grp_names:
self.assertTrue(name in list_names,
"Missing Group %s from listing" %(name))
aff_grp1.delete(self.api_client)
aff_grp2.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_06_list_all_users_aff_grp_by_id(self):
"""
List Affinity Groups belonging to regular user passing group id
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(userapiclient)
aff_grp_by_id = AffinityGroup.list(self.api_client, id=list_aff_grps[0].id)
self.assertNotEqual(aff_grp_by_id, [], "Admin not able to list Affinity "
"Groups of users")
self.assertEqual(len(aff_grp_by_id), 1, "%s affinity groups listed by admin with id %s. Expected 1"
%(len(aff_grp_by_id), list_aff_grps[0].id))
self.assertEqual(aff_grp_by_id[0].name, aff_grp.name,
"Incorrect name returned when listing user affinity groups as admin by id Expected : %s Got: %s"
%(aff_grp.name, aff_grp_by_id[0].name )
)
aff_grp.delete(self.api_client)
@attr(tags=["simulator", "basic", "advanced"], required_hardware="false")
def test_07_delete_aff_grp_of_other_user(self):
"""
Delete Affinity Group belonging to regular user
"""
self.user1 = Account.create(self.api_client,
self.services["new_account"])
self.cleanup.append(self.user1)
userapiclient = self.testClient.getUserApiClient(
UserName=self.user1.name,
DomainName=self.user1.domain,
type=0)
aff_grp = self.create_aff_grp(api_client=userapiclient,
aff_grp=self.services["host_anti_affinity"])
list_aff_grps = AffinityGroup.list(userapiclient)
aff_grp_by_id = AffinityGroup.list(self.api_client, id=list_aff_grps[0].id)
self.assertNotEqual(aff_grp_by_id, [], "Admin not able to list Affinity "
"Groups of users")
self.assertEqual(len(aff_grp_by_id), 1, "%s affinity groups listed by admin with id %s. Expected 1"
%(len(aff_grp_by_id), list_aff_grps[0].id))
self.assertEqual(aff_grp_by_id[0].name, aff_grp.name,
"Incorrect name returned when listing user affinity groups as admin by id Expected : %s Got: %s"
%(aff_grp.name, aff_grp_by_id[0].name )
)
aff_grp.delete(self.api_client)
| 41.229852 | 139 | 0.608477 | 8,674 | 72,647 | 4.869149 | 0.04208 | 0.051569 | 0.051711 | 0.028555 | 0.885664 | 0.871457 | 0.860258 | 0.847828 | 0.830496 | 0.802747 | 0 | 0.008988 | 0.28633 | 72,647 | 1,761 | 140 | 41.253265 | 0.805636 | 0.033312 | 0 | 0.780672 | 0 | 0.00084 | 0.118242 | 0.00039 | 0 | 0 | 0 | 0 | 0.052101 | 0 | null | null | 0.002521 | 0.005042 | null | null | 0.00084 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0788963a41643a9c8805ed39a051a8b11ae3f28d | 2,125 | py | Python | tests/integration/unit_test/test_unit_test_java8.py | aahung/aws-sam-cli-app-templates | fb44b0030d124e53ee4db42bc95240081e4dbbd8 | ["Apache-2.0"] | null | null | null | tests/integration/unit_test/test_unit_test_java8.py | aahung/aws-sam-cli-app-templates | fb44b0030d124e53ee4db42bc95240081e4dbbd8 | ["Apache-2.0"] | null | null | null | tests/integration/unit_test/test_unit_test_java8.py | aahung/aws-sam-cli-app-templates | fb44b0030d124e53ee4db42bc95240081e4dbbd8 | ["Apache-2.0"] | null | null | null |
from unittest import skip
from tests.integration.base import Base
class UnitTest_java8_cookiecutter_aws_sam_hello_java_gradle(Base.JavaUnitTestGradleBase):
directory = "java8/cookiecutter-aws-sam-hello-java-gradle"
code_directories = ["HelloWorldFunction"]
class UnitTest_java8_cookiecutter_aws_sam_hello_java_maven(Base.JavaUnitTestMavenBase):
directory = "java8/cookiecutter-aws-sam-hello-java-maven"
code_directories = ["HelloWorldFunction"]
class UnitTest_java8_cookiecutter_aws_sam_eventbridge_hello_java_gradle(Base.JavaUnitTestGradleBase):
directory = "java8/cookiecutter-aws-sam-eventbridge-hello-java-gradle"
code_directories = ["HelloWorldFunction"]
class UnitTest_java8_cookiecutter_aws_sam_eventbridge_hello_java_maven(Base.JavaUnitTestMavenBase):
directory = "java8/cookiecutter-aws-sam-eventbridge-hello-java-maven"
code_directories = ["HelloWorldFunction"]
@skip("eventbridge schema app seems not be able to build")
class UnitTest_java8_cookiecutter_aws_sam_eventbridge_schema_app_java_gradle(Base.JavaUnitTestGradleBase):
directory = "java8/cookiecutter-aws-sam-eventbridge-schema-app-java-gradle"
code_directories = ["HelloWorldFunction"]
@skip("eventbridge schema app seems not be able to build")
class UnitTest_java8_cookiecutter_aws_sam_eventbridge_schema_app_java_maven(Base.JavaUnitTestMavenBase):
directory = "java8/cookiecutter-aws-sam-eventbridge-schema-app-java-maven"
code_directories = ["HelloWorldFunction"]
class UnitTest_java8_cookiecutter_aws_sam_step_functions_sample_app_gradle(Base.JavaUnitTestGradleBase):
directory = "java8/cookiecutter-aws-sam-step-functions-sample-app-gradle"
code_directories = [
"functions/StockBuyer",
"functions/StockChecker",
"functions/StockSeller",
]
class UnitTest_java8_cookiecutter_aws_sam_step_functions_sample_app_maven(Base.JavaUnitTestMavenBase):
directory = "java8/cookiecutter-aws-sam-step-functions-sample-app-maven"
code_directories = [
"functions/StockBuyer",
"functions/StockChecker",
"functions/StockSeller",
]
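# Hedged example (not in the upstream file): covering an additional template
# needs only one more subclass carrying the same two attributes. The directory
# below is hypothetical and shown purely to illustrate the pattern, so it is
# left commented out.
#
# class UnitTest_java8_cookiecutter_aws_sam_hypothetical_app_gradle(Base.JavaUnitTestGradleBase):
#     directory = "java8/cookiecutter-aws-sam-hypothetical-app-gradle"
#     code_directories = ["HelloWorldFunction"]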
| 39.351852 | 106 | 0.801882 | 236 | 2,125 | 6.90678 | 0.15678 | 0.166871 | 0.196319 | 0.225767 | 0.965644 | 0.965644 | 0.965644 | 0.962577 | 0.808589 | 0.711656 | 0 | 0.008466 | 0.110588 | 2,125 | 53 | 107 | 40.09434 | 0.853968 | 0 | 0 | 0.444444 | 0 | 0 | 0.361412 | 0.245647 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
6aff90a295e046ef0e015ba15a86cbf89588da29 | 31,356 | py | Python | idaes/models/properties/modular_properties/phase_equil/bubble_dew.py | OOAmusat/idaes-pse | ae7d3bb8e372bc32822dcdcb75e9fd96b78da539 | ["RSA-MD"] | null | null | null | idaes/models/properties/modular_properties/phase_equil/bubble_dew.py | OOAmusat/idaes-pse | ae7d3bb8e372bc32822dcdcb75e9fd96b78da539 | ["RSA-MD"] | null | null | null | idaes/models/properties/modular_properties/phase_equil/bubble_dew.py | OOAmusat/idaes-pse | ae7d3bb8e372bc32822dcdcb75e9fd96b78da539 | ["RSA-MD"] | 1 | 2022-03-17T11:08:43.000Z | 2022-03-17T11:08:43.000Z |
#################################################################################
# The Institute for the Design of Advanced Energy Systems Integrated Platform
# Framework (IDAES IP) was produced under the DOE Institute for the
# Design of Advanced Energy Systems (IDAES), and is copyright (c) 2018-2021
# by the software owners: The Regents of the University of California, through
# Lawrence Berkeley National Laboratory, National Technology & Engineering
# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia University
# Research Corporation, et al. All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and
# license information.
#################################################################################
from pyomo.environ import Constraint
from idaes.models.properties.modular_properties.base.utility import (
get_method,
get_component_object as cobj,
)
import idaes.core.util.scaling as iscale
class IdealBubbleDew:
# -------------------------------------------------------------------------
# Bubble temperature methods
# This approach can only be used when both liquid and vapor phases use
# Ideal properties
# Henry's Law components also cause issues due to the need to (potentially)
# calculate concentrations at the bubble and dew points
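# For orientation, the ideal relations the constraints below encode are, at
# the bubble temperature,
#     sum_j x_j * Psat_j(Tbub) + sum_k x_k * H_k(Tbub) = P,   y_j = x_j * Psat_j(Tbub) / P
# A minimal numeric sketch of the same condition (illustrative only -- this
# class builds Pyomo constraints and never performs a numeric solve itself;
# the sketch assumes the root is bracketed by the initial interval):
#
#     def _demo_bubble_T(x, psat, P, T_lo=250.0, T_hi=600.0, tol=1e-8):
#         """Bisect sum_j x_j*psat_j(T) - P = 0 for an ideal (Raoult) mixture."""
#         def resid(T):
#             return sum(xj * pj(T) for xj, pj in zip(x, psat)) - P
#         for _ in range(200):
#             T_mid = 0.5 * (T_lo + T_hi)
#             if resid(T_lo) * resid(T_mid) <= 0.0:
#                 T_hi = T_mid  # root lies in the lower half
#             else:
#                 T_lo = T_mid  # root lies in the upper half
#             if T_hi - T_lo < tol:
#                 break
#         return 0.5 * (T_lo + T_hi)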
@staticmethod
def temperature_bubble(b):
try:
def rule_bubble_temp(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
return (
sum(
b.mole_frac_comp[j]
* get_method(b, "pressure_sat_comp", j)(
b, cobj(b, j), b.temperature_bubble[p1, p2]
)
for j in vl_comps
)
+ sum(
b.mole_frac_comp[j]
* b.params.get_component(j)
.config.henry_component[l_phase]["method"]
.return_expression(b, l_phase, j, b.temperature_bubble[p1, p2])
for j in henry_comps
)
- b.pressure
) == 0
b.eq_temperature_bubble = Constraint(
b.params._pe_pairs, rule=rule_bubble_temp
)
except AttributeError:
b.del_component(b.eq_temperature_bubble)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_bubble_temp(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
if j in vl_comps:
return b._mole_frac_tbub[p1, p2, j] * b.pressure == (
b.mole_frac_comp[j]
* get_method(b, "pressure_sat_comp", j)(
b, cobj(b, j), b.temperature_bubble[p1, p2]
)
)
elif j in henry_comps:
return b._mole_frac_tbub[p1, p2, j] * b.pressure == (
b.mole_frac_comp[j]
* b.params.get_component(j)
.config.henry_component[l_phase]["method"]
.return_expression(b, l_phase, j, b.temperature_bubble[p1, p2])
)
else:
return b._mole_frac_tbub[p1, p2, j] == 0
b.eq_mole_frac_tbub = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_mole_frac_bubble_temp
)
@staticmethod
def scale_temperature_bubble(b, overwrite=True):
sf_P = iscale.get_scaling_factor(b.pressure, default=1e-5, warning=True)
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
for j in b.component_list:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif v_only_comps != []:
continue
if j in vl_comps:
sf = sf_P * sf_mf
else:
sf = sf_mf
iscale.constraint_scaling_transform(
b.eq_temperature_bubble[pp[0], pp[1]], sf_P, overwrite=overwrite
)
iscale.constraint_scaling_transform(
b.eq_mole_frac_tbub[pp[0], pp[1], j], sf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Dew temperature methods
@staticmethod
def temperature_dew(b):
try:
def rule_dew_temp(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
return (
b.pressure
* (
sum(
b.mole_frac_comp[j]
/ get_method(b, "pressure_sat_comp", j)(
b, cobj(b, j), b.temperature_dew[p1, p2]
)
for j in vl_comps
)
+ sum(
b.mole_frac_comp[j]
/ b.params.get_component(j)
.config.henry_component[l_phase]["method"]
.return_expression(b, l_phase, j, b.temperature_dew[p1, p2])
for j in henry_comps
)
)
- 1
== 0
)
b.eq_temperature_dew = Constraint(b.params._pe_pairs, rule=rule_dew_temp)
except AttributeError:
b.del_component(b.eq_temperature_dew)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_dew_temp(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
if j in vl_comps:
return (
b._mole_frac_tdew[p1, p2, j]
* get_method(b, "pressure_sat_comp", j)(
b, cobj(b, j), b.temperature_dew[p1, p2]
)
== b.mole_frac_comp[j] * b.pressure
)
elif j in henry_comps:
return (
b._mole_frac_tdew[p1, p2, j]
* b.params.get_component(j)
.config.henry_component[l_phase]["method"]
.return_expression(b, l_phase, j, b.temperature_dew[p1, p2])
== b.mole_frac_comp[j] * b.pressure
)
else:
return b._mole_frac_tdew[p1, p2, j] == 0
b.eq_mole_frac_tdew = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_mole_frac_dew_temp
)
@staticmethod
def scale_temperature_dew(b, overwrite=True):
sf_P = iscale.get_scaling_factor(b.pressure, default=1e-5, warning=True)
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
for j in b.component_list:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif l_only_comps != []:
continue
if j in vl_comps:
sf = sf_P * sf_mf
else:
sf = sf_mf
# b.eq_temperature_dew is well-scaled by default
iscale.constraint_scaling_transform(
b.eq_mole_frac_tdew[pp[0], pp[1], j], sf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Bubble pressure methods
@staticmethod
def pressure_bubble(b):
try:
def rule_bubble_press(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
return b.pressure_bubble[p1, p2] == (
sum(b.mole_frac_comp[j] * b.pressure_sat_comp[j] for j in vl_comps)
+ sum(
b.mole_frac_comp[j] * b.henry[l_phase, j] for j in henry_comps
)
)
b.eq_pressure_bubble = Constraint(
b.params._pe_pairs, rule=rule_bubble_press
)
except AttributeError:
b.del_component(b.eq_pressure_bubble)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_bubble_press(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
if j in vl_comps:
return (
b._mole_frac_pbub[p1, p2, j] * b.pressure_bubble[p1, p2]
== b.mole_frac_comp[j] * b.pressure_sat_comp[j]
)
if j in henry_comps:
return (
b._mole_frac_pbub[p1, p2, j] * b.pressure_bubble[p1, p2]
== b.mole_frac_comp[j] * b.henry[l_phase, j]
)
else:
return b._mole_frac_pbub[p1, p2, j] == 0
b.eq_mole_frac_pbub = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_mole_frac_bubble_press
)
@staticmethod
def scale_pressure_bubble(b, overwrite=True):
sf_P = iscale.get_scaling_factor(b.pressure, default=1e-5, warning=True)
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
for j in b.component_list:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif v_only_comps != []:
continue
if j in vl_comps:
sf = sf_P * sf_mf
else:
sf = sf_mf
iscale.constraint_scaling_transform(
b.eq_pressure_bubble[pp[0], pp[1]], sf_P, overwrite=overwrite
)
iscale.constraint_scaling_transform(
b.eq_mole_frac_pbub[pp[0], pp[1], j], sf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Dew pressure methods
@staticmethod
def pressure_dew(b):
try:
def rule_dew_press(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
return 0 == 1 - b.pressure_dew[p1, p2] * (
sum(b.mole_frac_comp[j] / b.pressure_sat_comp[j] for j in vl_comps)
+ sum(
b.mole_frac_comp[j] / b.henry[l_phase, j] for j in henry_comps
)
)
b.eq_pressure_dew = Constraint(b.params._pe_pairs, rule=rule_dew_press)
except AttributeError:
b.del_component(b.eq_pressure_dew)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_dew_press(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
if j in vl_comps:
return (
b._mole_frac_pdew[p1, p2, j] * b.pressure_sat_comp[j]
== b.mole_frac_comp[j] * b.pressure_dew[p1, p2]
)
elif j in henry_comps:
return (
b._mole_frac_pdew[p1, p2, j] * b.henry[l_phase, j]
== b.mole_frac_comp[j] * b.pressure_dew[p1, p2]
)
else:
return b._mole_frac_pdew[p1, p2, j] == 0
b.eq_mole_frac_pdew = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_mole_frac_dew_press
)
@staticmethod
def scale_pressure_dew(b, overwrite=True):
sf_P = iscale.get_scaling_factor(b.pressure, default=1e-5, warning=True)
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
for j in b.component_list:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif l_only_comps != []:
continue
if j in vl_comps:
sf = sf_P * sf_mf
else:
sf = sf_mf
# b.eq_pressure_dew is well-scaled by default
iscale.constraint_scaling_transform(
b.eq_mole_frac_pdew[pp[0], pp[1], j], sf, overwrite=overwrite
)
class LogBubbleDew:
# -------------------------------------------------------------------------
# Bubble temperature methods
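# Unlike the Ideal class above, the bubble/dew state here is defined
# implicitly: for each condensable component j the two phases' log-fugacities
# are equated,
#     ln f_j^liq(T_bub, P, x) = ln f_j^vap(T_bub, P, y*)
# and the incipient-phase composition y* is closed with sum_j y*_j = 1 in the
# companion mole-fraction constraint. Any equation of state that implements
# log_fug_phase_comp_Tbub (and the Tdew/Pbub/Pdew variants) can be used; for
# an ideal pair this reduces to the Raoult/Henry relations written out above.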
@staticmethod
def temperature_bubble(b):
try:
def rule_bubble_temp(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
l_eos = b.params.get_phase(l_phase).config.equation_of_state
v_eos = b.params.get_phase(v_phase).config.equation_of_state
if j in vl_comps or j in henry_comps:
return l_eos.log_fug_phase_comp_Tbub(
b, l_phase, j, (p1, p2)
) == v_eos.log_fug_phase_comp_Tbub(b, v_phase, j, (p1, p2))
else:
return b._mole_frac_tbub[p1, p2, j] == 0
b.eq_temperature_bubble = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_bubble_temp
)
except AttributeError:
b.del_component(b.eq_temperature_bubble)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_bubble_temp(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
return 1 == (
sum(b._mole_frac_tbub[p1, p2, j] for j in vl_comps)
+ sum(b._mole_frac_tbub[p1, p2, j] for j in henry_comps)
)
b.eq_mole_frac_tbub = Constraint(
b.params._pe_pairs, rule=rule_mole_frac_bubble_temp
)
@staticmethod
def scale_temperature_bubble(b, overwrite=True):
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif v_only_comps != []:
continue
# Assume b.eq_temperature_bubble is well-scaled
iscale.constraint_scaling_transform(
b.eq_mole_frac_tbub[pp[0], pp[1]], sf_mf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Dew temperature methods
@staticmethod
def temperature_dew(b):
try:
def rule_dew_temp(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
l_eos = b.params.get_phase(l_phase).config.equation_of_state
v_eos = b.params.get_phase(v_phase).config.equation_of_state
if j in vl_comps or j in henry_comps:
return l_eos.log_fug_phase_comp_Tdew(
b, l_phase, j, (p1, p2)
) == v_eos.log_fug_phase_comp_Tdew(b, v_phase, j, (p1, p2))
else:
return b._mole_frac_tdew[p1, p2, j] == 0
b.eq_temperature_dew = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_dew_temp
)
except AttributeError:
b.del_component(b.eq_temperature_dew)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_dew_temp(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
return 1 == (
sum(b._mole_frac_tdew[p1, p2, j] for j in vl_comps)
+ sum(b._mole_frac_tdew[p1, p2, j] for j in henry_comps)
)
b.eq_mole_frac_tdew = Constraint(
b.params._pe_pairs, rule=rule_mole_frac_dew_temp
)
@staticmethod
def scale_temperature_dew(b, overwrite=True):
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif l_only_comps != []:
continue
# Assume b.eq_temperature_dew is well-scaled
iscale.constraint_scaling_transform(
b.eq_mole_frac_tdew[pp[0], pp[1]], sf_mf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Bubble pressure methods
@staticmethod
def pressure_bubble(b):
try:
def rule_bubble_press(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
l_eos = b.params.get_phase(l_phase).config.equation_of_state
v_eos = b.params.get_phase(v_phase).config.equation_of_state
if j in vl_comps or j in henry_comps:
return l_eos.log_fug_phase_comp_Pbub(
b, l_phase, j, (p1, p2)
) == v_eos.log_fug_phase_comp_Pbub(b, v_phase, j, (p1, p2))
else:
return b._mole_frac_pbub[p1, p2, j] == 0
b.eq_pressure_bubble = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_bubble_press
)
except AttributeError:
b.del_component(b.eq_pressure_bubble)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_bubble_press(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif v_only_comps != []:
# Non-condensables present, no bubble point
return Constraint.Skip
return 1 == (
sum(b._mole_frac_pbub[p1, p2, j] for j in vl_comps)
+ sum(b._mole_frac_pbub[p1, p2, j] for j in henry_comps)
)
b.eq_mole_frac_pbub = Constraint(
b.params._pe_pairs, rule=rule_mole_frac_bubble_press
)
@staticmethod
def scale_pressure_bubble(b, overwrite=True):
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif v_only_comps != []:
continue
# Assume b.eq_pressure_bubble is well-scaled
iscale.constraint_scaling_transform(
b.eq_mole_frac_pbub[pp[0], pp[1]], sf_mf, overwrite=overwrite
)
# -------------------------------------------------------------------------
# Dew pressure methods
@staticmethod
def pressure_dew(b):
try:
def rule_dew_press(b, p1, p2, j):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
l_eos = b.params.get_phase(l_phase).config.equation_of_state
v_eos = b.params.get_phase(v_phase).config.equation_of_state
if j in vl_comps or j in henry_comps:
return l_eos.log_fug_phase_comp_Pdew(
b, l_phase, j, (p1, p2)
) == v_eos.log_fug_phase_comp_Pdew(b, v_phase, j, (p1, p2))
else:
return b._mole_frac_pdew[p1, p2, j] == 0
b.eq_pressure_dew = Constraint(
b.params._pe_pairs, b.component_list, rule=rule_dew_press
)
except AttributeError:
b.del_component(b.eq_pressure_dew)
raise
# Don't need a try/except here, will pass if first constraint did
def rule_mole_frac_dew_press(b, p1, p2):
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, (p1, p2))
if l_phase is None or v_phase is None:
# Not a VLE pair
return Constraint.Skip
elif l_only_comps != []:
# Non-vaporisables present, no dew point
return Constraint.Skip
return 1 == (
sum(b._mole_frac_pdew[p1, p2, j] for j in vl_comps)
+ sum(b._mole_frac_pdew[p1, p2, j] for j in henry_comps)
)
b.eq_mole_frac_pdew = Constraint(
b.params._pe_pairs, rule=rule_mole_frac_dew_press
)
@staticmethod
def scale_pressure_dew(b, overwrite=True):
sf_mf = iscale.get_scaling_factor(b.mole_frac_comp, default=1e3, warning=True)
for pp in b.params._pe_pairs:
(
l_phase,
v_phase,
vl_comps,
henry_comps,
l_only_comps,
v_only_comps,
) = _valid_VL_component_list(b, pp)
if l_phase is None or v_phase is None:
continue
elif l_only_comps != []:
continue
# Assume b.eq_pressure_dew is well-scaled
iscale.constraint_scaling_transform(
b.eq_mole_frac_pdew[pp[0], pp[1]], sf_mf, overwrite=overwrite
)
def _valid_VL_component_list(blk, pp):
vl_comps = []
henry_comps = []
l_only_comps = []
v_only_comps = []
pparams = blk.params
l_phase = None
v_phase = None
if pparams.get_phase(pp[0]).is_liquid_phase():
l_phase = pp[0]
elif pparams.get_phase(pp[0]).is_vapor_phase():
v_phase = pp[0]
if pparams.get_phase(pp[1]).is_liquid_phase():
l_phase = pp[1]
elif pparams.get_phase(pp[1]).is_vapor_phase():
v_phase = pp[1]
# Only need to do this for V-L pairs, so check
if l_phase is not None and v_phase is not None:
for j in blk.params.component_list:
if (l_phase, j) in blk.phase_component_set and (
v_phase,
j,
) in blk.phase_component_set:
cobj = pparams.get_component(j)
if cobj.config.henry_component is not None and (
pp[0] in cobj.config.henry_component
or pp[1] in cobj.config.henry_component
):
henry_comps.append(j)
else:
vl_comps.append(j)
elif (l_phase, j) in blk.phase_component_set:
l_only_comps.append(j)
elif (v_phase, j) in blk.phase_component_set:
v_only_comps.append(j)
return l_phase, v_phase, vl_comps, henry_comps, l_only_comps, v_only_comps
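# A minimal usage sketch for the helper above (illustrative; the phase and
# component names are hypothetical): for a block blk with a ("Liq", "Vap")
# phase-equilibrium pair where "H2O" and "EtOH" appear in both phases, "CO2"
# appears in both but is declared as a henry_component of the pair, "NaCl"
# appears only in "Liq" and "N2" only in "Vap":
#
#     _valid_VL_component_list(blk, ("Liq", "Vap"))
#     # -> ("Liq", "Vap", ["H2O", "EtOH"], ["CO2"], ["NaCl"], ["N2"])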
| 35.87643 | 88 | 0.479876 | 3,605 | 31,356 | 3.866297 | 0.05742 | 0.045918 | 0.037882 | 0.031712 | 0.916559 | 0.909384 | 0.896398 | 0.889152 | 0.870354 | 0.859808 | 0 | 0.013074 | 0.426744 | 31,356 | 873 | 89 | 35.917526 | 0.762337 | 0.105211 | 0 | 0.715125 | 0 | 0 | 0.003307 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048458 | false | 0 | 0.004405 | 0 | 0.145374 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed0a8fc67c7dd335a99e12112fbc70fe1bcffaf0 | 10,975 | py | Python | mpf/tests/test_CreditsMode.py | pmansukhani/mpf | 0979965d24bcaba9423b43581c6a18b847b1b900 | [
"MIT"
] | null | null | null | mpf/tests/test_CreditsMode.py | pmansukhani/mpf | 0979965d24bcaba9423b43581c6a18b847b1b900 | [
"MIT"
] | null | null | null | mpf/tests/test_CreditsMode.py | pmansukhani/mpf | 0979965d24bcaba9423b43581c6a18b847b1b900 | [
"MIT"
] | null | null | null | from unittest.mock import MagicMock
from mpf.tests.MpfTestCase import MpfTestCase, test_config
class TestCreditsMode(MpfTestCase):
def getConfigFile(self):
return 'config.yaml'
def getMachinePath(self):
return 'tests/machine_files/credits/'
def start_game(self, should_work):
# shots only work in games so we have to do this a lot
self.machine.playfield.add_ball = MagicMock()
self.machine.ball_controller.num_balls_known = 3
self.hit_and_release_switch("s_start")
self.advance_time_and_run()
if should_work:
self.assertIsNotNone(self.machine.game)
self.machine.game.balls_in_play = 0
self.advance_time_and_run()
else:
self.assertIsNone(self.machine.game)
def start_two_player_game(self):
# game start should work
self.machine.playfield.add_ball = MagicMock()
self.machine.ball_controller.num_balls_known = 3
self.hit_and_release_switch("s_start")
self.advance_time_and_run()
self.assertIsNotNone(self.machine.game)
self.assertEqual(1, self.machine.game.num_players)
# add another player
self.hit_and_release_switch("s_start")
self.advance_time_and_run(1)
self.assertEqual(2, self.machine.game.num_players)
def stop_game(self):
# stop game
self.assertIsNotNone(self.machine.game)
self.machine.game.end_game()
self.advance_time_and_run()
self.assertIsNone(self.machine.game)
@test_config("config_freeplay.yaml")
def test_free_play_at_start(self):
self.assertEqual("FREE PLAY", self.machine.variables.get_machine_var('credits_string'))
self.assertFalse(self.machine.variables.is_machine_var("price_per_game_raw_0"))
self.assertFalse(self.machine.variables.is_machine_var("price_per_game_string_0"))
self.start_two_player_game()
def testToggleEvents(self):
self.assertTrue(self.machine.mode_controller.is_active('credits'))
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.post_event("toggle_credit_play")
self.assertEqual("FREE PLAY", self.machine.variables.get_machine_var('credits_string'))
self.post_event("toggle_credit_play")
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.start_game(False)
self.post_event("toggle_credit_play")
self.assertEqual("FREE PLAY", self.machine.variables.get_machine_var('credits_string'))
self.start_two_player_game()
self.stop_game()
self.post_event("enable_credit_play")
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.post_event("enable_credit_play")
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.post_event("enable_free_play")
self.assertEqual("FREE PLAY", self.machine.variables.get_machine_var('credits_string'))
self.post_event("enable_free_play")
self.assertEqual("FREE PLAY", self.machine.variables.get_machine_var('credits_string'))
def testCredits(self):
self.assertTrue(self.machine.mode_controller.is_active('credits'))
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
# no credits no game
self.start_game(False)
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 1/2", self.machine.variables.get_machine_var('credits_string'))
self.assertMachineVarEqual(0.5, "price_per_game_raw_0")
self.assertMachineVarEqual("1 CREDITS $0.5", "price_per_game_string_0")
self.assertMachineVarEqual(2, "price_per_game_raw_1")
self.assertMachineVarEqual("5 CREDITS $2.0", "price_per_game_string_1")
# not enough credits. no game
self.start_game(False)
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
# one is enough for a game
self.start_game(True)
self.stop_game()
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
# but only one
self.start_game(False)
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 2", self.machine.variables.get_machine_var('credits_string'))
# no more price tier after game
self.hit_and_release_switch("s_left_coin")
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 3", self.machine.variables.get_machine_var('credits_string'))
def testReplay(self):
# add coins
self.hit_and_release_switch("s_left_coin")
self.hit_and_release_switch("s_left_coin")
self.advance_time_and_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
# start game
self.start_game(True)
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
# no replay
self.stop_game()
# try again
self.hit_and_release_switch("s_left_coin")
self.hit_and_release_switch("s_left_coin")
self.advance_time_and_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
self.start_game(True)
# score 600k
self.machine.game.player.score = 600000
# replay credit on game end
self.stop_game()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
def testMorePlayers(self):
self.assertTrue(self.machine.mode_controller.is_active('credits'))
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.hit_and_release_switch("s_left_coin")
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
# one is enough for a game
self.machine.playfield.add_ball = MagicMock()
self.machine.ball_controller.num_balls_known = 3
self.hit_and_release_switch("s_start")
self.advance_time_and_run()
self.assertIsNotNone(self.machine.game)
# no more credits
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.assertEqual(1, self.machine.game.num_players)
# try to add another player
self.hit_and_release_switch("s_start")
# fails
self.assertEqual(1, self.machine.game.num_players)
# add credits
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 2", self.machine.variables.get_machine_var('credits_string'))
# try to add another player
self.hit_and_release_switch("s_start")
# works
self.assertEqual(2, self.machine.game.num_players)
self.machine_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
def testMaxCredits(self):
self.assertTrue(self.machine.mode_controller.is_active('credits'))
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 10", self.machine.variables.get_machine_var('credits_string'))
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 12", self.machine.variables.get_machine_var('credits_string'))
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 12", self.machine.variables.get_machine_var('credits_string'))
def testPricingTiers(self):
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 2", self.machine.variables.get_machine_var('credits_string'))
self.hit_and_release_switch("s_right_coin")
self.machine_run()
self.assertEqual("CREDITS 5", self.machine.variables.get_machine_var('credits_string'))
def testFractionalTimeout(self):
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 2 1/2", self.machine.variables.get_machine_var('credits_string'))
self.advance_time_and_run(60 * 15)
self.assertEqual("CREDITS 2", self.machine.variables.get_machine_var('credits_string'))
# but not during game
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 2 1/2", self.machine.variables.get_machine_var('credits_string'))
self.start_game(True)
self.advance_time_and_run(60 * 15)
self.stop_game()
self.machine_run()
self.assertEqual("CREDITS 1 1/2", self.machine.variables.get_machine_var('credits_string'))
# but timeout restarts
self.advance_time_and_run(60 * 15)
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
def testCreditTimeout(self):
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 2 1/2", self.machine.variables.get_machine_var('credits_string'))
self.advance_time_and_run(3600 * 2)
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
# but not during game
self.hit_and_release_switch("s_right_coin")
self.hit_and_release_switch("s_left_coin")
self.machine_run()
self.assertEqual("CREDITS 2 1/2", self.machine.variables.get_machine_var('credits_string'))
self.start_game(True)
self.advance_time_and_run(3600 * 2)
self.stop_game()
self.machine_run()
self.assertEqual("CREDITS 1 1/2", self.machine.variables.get_machine_var('credits_string'))
# but timeout restarts
self.advance_time_and_run(3600 * 2)
self.assertEqual("CREDITS 0", self.machine.variables.get_machine_var('credits_string'))
def testServiceCredits(self):
self.hit_and_release_switch("s_esc")
self.machine_run()
self.assertEqual("CREDITS 1", self.machine.variables.get_machine_var('credits_string'))
| 38.91844 | 99 | 0.694761 | 1,434 | 10,975 | 4.996513 | 0.09205 | 0.132031 | 0.120028 | 0.131612 | 0.883182 | 0.859595 | 0.839916 | 0.836008 | 0.804466 | 0.77739 | 0 | 0.012806 | 0.195991 | 10,975 | 281 | 100 | 39.05694 | 0.799184 | 0.045285 | 0 | 0.794595 | 0 | 0 | 0.162185 | 0.009281 | 0 | 0 | 0 | 0 | 0.335135 | 1 | 0.081081 | false | 0 | 0.010811 | 0.010811 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed1623b603ecff0149f184c52e7485f44b62dc8c | 14,501 | py | Python | ThirdParty/Twisted/twisted/pair/test/test_ip.py | OpenGeoscience/VTK | a373e975b9284a022f43a062ebf5042bb17b4e44 | [
"BSD-3-Clause"
] | 7 | 2015-04-28T13:26:11.000Z | 2020-02-09T17:01:04.000Z | ThirdParty/Twisted/twisted/pair/test/test_ip.py | OpenGeoscience/VTK | a373e975b9284a022f43a062ebf5042bb17b4e44 | [
"BSD-3-Clause"
] | 4 | 2017-02-19T23:58:13.000Z | 2019-11-01T15:31:22.000Z | ThirdParty/Twisted/twisted/pair/test/test_ip.py | OpenGeoscience/VTK | a373e975b9284a022f43a062ebf5042bb17b4e44 | [
"BSD-3-Clause"
] | 6 | 2017-02-13T09:11:02.000Z | 2021-06-29T11:22:18.000Z | # Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
#
from twisted.trial import unittest
from twisted.internet import protocol, reactor, error
from twisted.python import failure, components
from twisted.pair import ip, raw
from zope import interface
class MyProtocol:
interface.implements(raw.IRawDatagramProtocol)
def __init__(self, expecting):
self.expecting = list(expecting)
def datagramReceived(self, data, **kw):
assert self.expecting, 'Got a packet when not expecting anymore.'
expectData, expectKw = self.expecting.pop(0)
expectKwKeys = expectKw.keys(); expectKwKeys.sort()
kwKeys = kw.keys(); kwKeys.sort()
assert expectKwKeys == kwKeys, "Expected %r, got %r" % (expectKwKeys, kwKeys)
for k in expectKwKeys:
assert expectKw[k] == kw[k], "Expected %s=%r, got %r" % (k, expectKw[k], kw[k])
assert expectKw == kw, "Expected %r, got %r" % (expectKw, kw)
assert expectData == data, "Expected %r, got %r" % (expectData, data)
class IPTestCase(unittest.TestCase):
def testPacketParsing(self):
proto = ip.IPProtocol()
p1 = MyProtocol([
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
proto.addProto(0x0F, p1)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
assert not p1.expecting, \
'Should not expect any more packets, but still want %r' % p1.expecting
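    # A sketch of how the same 20-byte IPv4 header could be built with
    # struct.pack instead of string concatenation (illustrative only; the
    # format string and field grouping are assumptions based on the inline
    # comments above):
    #
    #     import struct
    #     header = struct.pack(
    #         '!BBHHHBBH4s4s',
    #         0x54,                  # ihl/version byte
    #         0x07,                  # tos
    #         0x001A,                # tot_len
    #         0xDEAD,                # fragment id
    #         0xBEEF,                # frag_off
    #         0xC0,                  # ttl
    #         0x0F,                  # protocol
    #         0x4645,                # the literal "FE" checksum bytes
    #         '\x05\x06\x07\x08',    # source address
    #         '\x01\x02\x03\x04')    # destination address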
def testMultiplePackets(self):
proto = ip.IPProtocol()
p1 = MyProtocol([
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
('quux', {
'partial': 1,
'dest': '5.4.3.2',
'source': '6.7.8.9',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
proto.addProto(0x0F, p1)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x06\x07\x08\x09" + "\x05\x04\x03\x02" + "quux",
partial=1,
dest='dummy',
source='dummy',
protocol='dummy',
)
assert not p1.expecting, \
'Should not expect any more packets, but still want %r' % p1.expecting
def testMultipleSameProtos(self):
proto = ip.IPProtocol()
p1 = MyProtocol([
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
p2 = MyProtocol([
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
proto.addProto(0x0F, p1)
proto.addProto(0x0F, p2)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
assert not p1.expecting, \
'Should not expect any more packets, but still want %r' % p1.expecting
assert not p2.expecting, \
'Should not expect any more packets, but still want %r' % p2.expecting
def testWrongProtoNotSeen(self):
proto = ip.IPProtocol()
p1 = MyProtocol([])
proto.addProto(1, p1)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
def testDemuxing(self):
proto = ip.IPProtocol()
p1 = MyProtocol([
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
('quux', {
'partial': 1,
'dest': '5.4.3.2',
'source': '6.7.8.9',
'protocol': 0x0F,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
proto.addProto(0x0F, p1)
p2 = MyProtocol([
('quux', {
'partial': 1,
'dest': '5.4.3.2',
'source': '6.7.8.9',
'protocol': 0x0A,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
('foobar', {
'partial': 0,
'dest': '1.2.3.4',
'source': '5.6.7.8',
'protocol': 0x0A,
'version': 4,
'ihl': 20,
'tos': 7,
'tot_len': 20+6,
'fragment_id': 0xDEAD,
'fragment_offset': 0x1EEF,
'dont_fragment': 0,
'more_fragments': 1,
'ttl': 0xC0,
}),
])
proto.addProto(0x0A, p2)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0A" #protocol
+ "FE" #checksum
+ "\x06\x07\x08\x09" + "\x05\x04\x03\x02" + "quux",
partial=1,
dest='dummy',
source='dummy',
protocol='dummy',
)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0F" #protocol
+ "FE" #checksum
+ "\x06\x07\x08\x09" + "\x05\x04\x03\x02" + "quux",
partial=1,
dest='dummy',
source='dummy',
protocol='dummy',
)
proto.datagramReceived("\x54" #ihl version
+ "\x07" #tos
+ "\x00\x1a" #tot_len
+ "\xDE\xAD" #id
+ "\xBE\xEF" #frag_off
+ "\xC0" #ttl
+ "\x0A" #protocol
+ "FE" #checksum
+ "\x05\x06\x07\x08" + "\x01\x02\x03\x04" + "foobar",
partial=0,
dest='dummy',
source='dummy',
protocol='dummy',
)
assert not p1.expecting, \
'Should not expect any more packets, but still want %r' % p1.expecting
assert not p2.expecting, \
'Should not expect any more packets, but still want %r' % p2.expecting
def testAddingBadProtos_WrongLevel(self):
"""Adding a wrong level protocol raises an exception."""
e = ip.IPProtocol()
try:
e.addProto(42, "silliness")
except components.CannotAdapt:
pass
else:
raise AssertionError, 'addProto must raise an exception for bad protocols'
def testAddingBadProtos_TooSmall(self):
"""Adding a protocol with a negative number raises an exception."""
e = ip.IPProtocol()
try:
e.addProto(-1, MyProtocol([]))
except TypeError, e:
if e.args == ('Added protocol must be positive or zero',):
pass
else:
raise
else:
raise AssertionError, 'addProto must raise an exception for bad protocols'
def testAddingBadProtos_TooBig(self):
"""Adding a protocol with a number >=2**32 raises an exception."""
e = ip.IPProtocol()
try:
e.addProto(2L**32, MyProtocol([]))
except TypeError, e:
if e.args == ('Added protocol must fit in 32 bits',):
pass
else:
raise
else:
raise AssertionError, 'addProto must raise an exception for bad protocols'
def testAddingBadProtos_TooBig2(self):
"""Adding a protocol with a number >=2**32 raises an exception."""
e = ip.IPProtocol()
try:
e.addProto(2L**32+1, MyProtocol([]))
except TypeError, e:
if e.args == ('Added protocol must fit in 32 bits',):
pass
else:
raise
else:
raise AssertionError, 'addProto must raise an exception for bad protocols'
| 34.691388 | 91 | 0.373698 | 1,207 | 14,501 | 4.43082 | 0.141674 | 0.020194 | 0.031414 | 0.040389 | 0.807966 | 0.807966 | 0.797307 | 0.797307 | 0.797307 | 0.781601 | 0 | 0.072444 | 0.505 | 14,501 | 417 | 92 | 34.77458 | 0.672611 | 0.035653 | 0 | 0.868132 | 0 | 0 | 0.19666 | 0 | 0 | 0 | 0.014942 | 0 | 0.041209 | 0 | null | null | 0.010989 | 0.013736 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
ed265becede1a85c12bbe8dad61fe8ce4522789f | 10,430 | py | Python | test/test_metrics.py | saurabhya/kornia | f2b4fe9fb32d99795783f25b5a4c561001783ebf | [
"ECL-2.0",
"Apache-2.0"
] | 418 | 2018-10-02T22:31:36.000Z | 2019-01-16T14:15:45.000Z | test/test_metrics.py | saurabhya/kornia | f2b4fe9fb32d99795783f25b5a4c561001783ebf | [
"ECL-2.0",
"Apache-2.0"
] | 94 | 2019-01-17T22:10:45.000Z | 2019-05-22T23:47:58.000Z | test/test_metrics.py | saurabhya/kornia | f2b4fe9fb32d99795783f25b5a4c561001783ebf | [
"ECL-2.0",
"Apache-2.0"
] | 25 | 2018-10-02T22:50:04.000Z | 2019-01-13T18:14:11.000Z | import pytest
import torch
import kornia
from kornia.testing import assert_close
class TestMeanIoU:
def test_two_classes_perfect(self, device, dtype):
batch_size = 1
num_classes = 2
actual = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long)
predicted = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long)
mean_iou = kornia.metrics.mean_iou(predicted, actual, num_classes)
mean_iou_real = torch.tensor([[1.0, 1.0]], device=device, dtype=torch.float32)
assert mean_iou.shape == (batch_size, num_classes)
assert_close(mean_iou, mean_iou_real)
def test_two_classes_perfect_batch2(self, device, dtype):
batch_size = 2
num_classes = 2
actual = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long).repeat(batch_size, 1)
predicted = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long).repeat(batch_size, 1)
mean_iou = kornia.metrics.mean_iou(predicted, actual, num_classes)
mean_iou_real = torch.tensor([[1.0, 1.0], [1.0, 1.0]], device=device, dtype=torch.float32)
assert mean_iou.shape == (batch_size, num_classes)
assert_close(mean_iou, mean_iou_real)
def test_two_classes(self, device, dtype):
batch_size = 1
num_classes = 2
actual = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long)
predicted = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 1]], device=device, dtype=torch.long)
mean_iou = kornia.metrics.mean_iou(predicted, actual, num_classes)
mean_iou_real = torch.tensor([[0.75, 0.80]], device=device, dtype=torch.float32)
assert mean_iou.shape == (batch_size, num_classes)
assert_close(mean_iou, mean_iou_real)
def test_four_classes_2d_perfect(self, device, dtype):
batch_size = 1
num_classes = 4
actual = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
mean_iou = kornia.metrics.mean_iou(predicted, actual, num_classes)
mean_iou_real = torch.tensor([[1.0, 1.0, 1.0, 1.0]], device=device, dtype=torch.float32)
assert mean_iou.shape == (batch_size, num_classes)
assert_close(mean_iou, mean_iou_real)
def test_four_classes_one_missing(self, device, dtype):
batch_size = 1
num_classes = 4
actual = torch.tensor(
[[[0, 0, 0, 0], [0, 0, 0, 0], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[3, 3, 2, 2], [3, 3, 2, 2], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
mean_iou = kornia.metrics.mean_iou(predicted, actual, num_classes)
mean_iou_real = torch.tensor([[0.0, 1.0, 0.5, 0.5]], device=device, dtype=torch.float32)
assert mean_iou.shape == (batch_size, num_classes)
assert_close(mean_iou, mean_iou_real)
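# Worked check (illustrative note): with per-class IoU = TP / (TP + FP + FN),
# class 2 in the test above has 4 true positives (the bottom-left block),
# 4 false positives (the top-right block) and 0 false negatives, giving
# 4 / (4 + 4 + 0) = 0.5 as expected; a class absent from both actual and
# predicted (class 1 here) is reported as 1.0.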
class TestConfusionMatrix:
def test_two_classes(self, device, dtype):
num_classes = 2
actual = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long)
predicted = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 1]], device=device, dtype=torch.long)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor([[[3, 1], [0, 4]]], device=device, dtype=torch.float32)
assert_close(conf_mat, conf_mat_real)
def test_two_classes_batch2(self, device, dtype):
batch_size = 2
num_classes = 2
actual = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], device=device, dtype=torch.long).repeat(batch_size, 1)
predicted = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 1]], device=device, dtype=torch.long).repeat(batch_size, 1)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor([[[3, 1], [0, 4]], [[3, 1], [0, 4]]], device=device, dtype=torch.float32)
assert_close(conf_mat, conf_mat_real)
def test_three_classes(self, device, dtype):
num_classes = 3
actual = torch.tensor([[2, 2, 0, 0, 1, 0, 0, 2, 1, 1, 0, 0, 1, 2, 1, 0]], device=device, dtype=torch.long)
predicted = torch.tensor([[2, 1, 0, 0, 0, 0, 0, 1, 0, 2, 2, 1, 0, 0, 2, 2]], device=device, dtype=torch.long)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor([[[4, 1, 2], [3, 0, 2], [1, 2, 1]]], device=device, dtype=torch.float32)
assert_close(conf_mat, conf_mat_real)
def test_four_classes_one_missing(self, device, dtype):
num_classes = 4
actual = torch.tensor([[3, 3, 1, 1, 2, 1, 1, 3, 2, 2, 1, 1, 2, 3, 2, 1]], device=device, dtype=torch.long)
predicted = torch.tensor([[3, 2, 1, 1, 1, 1, 1, 2, 1, 3, 3, 2, 1, 1, 3, 3]], device=device, dtype=torch.long)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor(
[[[0, 0, 0, 0], [0, 4, 1, 2], [0, 3, 0, 2], [0, 1, 2, 1]]], device=device, dtype=torch.float32
)
assert_close(conf_mat, conf_mat_real)
def test_three_classes_normalized(self, device, dtype):
num_classes = 3
normalized = True
actual = torch.tensor([[2, 2, 0, 0, 1, 0, 0, 2, 1, 1, 0, 0, 1, 2, 1, 0]], device=device, dtype=torch.long)
predicted = torch.tensor([[2, 1, 0, 0, 0, 0, 0, 1, 0, 2, 2, 1, 0, 0, 2, 2]], device=device, dtype=torch.long)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes, normalized)
conf_mat_real = torch.tensor(
[[[0.5000, 0.3333, 0.4000], [0.3750, 0.0000, 0.4000], [0.1250, 0.6667, 0.2000]]],
device=device,
dtype=torch.float32,
)
assert_close(conf_mat, conf_mat_real)
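# Worked check (illustrative note): the normalization is column-wise -- each
# entry of the raw matrix [[4, 1, 2], [3, 0, 2], [1, 2, 1]] is divided by its
# column sum (8, 3 and 5 respectively), e.g. 4/8 = 0.5 and 3/8 = 0.375 in the
# first column.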
def test_four_classes_2d_perfect(self, device, dtype):
num_classes = 4
actual = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor(
[[[4, 0, 0, 0], [0, 4, 0, 0], [0, 0, 4, 0], [0, 0, 0, 4]]], device=device, dtype=torch.float32
)
assert_close(conf_mat, conf_mat_real)
def test_four_classes_2d_one_class_nonperfect(self, device, dtype):
num_classes = 4
actual = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[0, 0, 1, 1], [0, 3, 0, 1], [2, 2, 1, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor(
[[[3, 0, 0, 1], [1, 3, 0, 0], [0, 0, 4, 0], [0, 1, 0, 3]]], device=device, dtype=torch.float32
)
assert_close(conf_mat, conf_mat_real)
def test_four_classes_2d_one_class_missing(self, device, dtype):
num_classes = 4
actual = torch.tensor(
[[[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[3, 3, 1, 1], [3, 3, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor(
[[[0, 0, 0, 4], [0, 4, 0, 0], [0, 0, 4, 0], [0, 0, 0, 4]]], device=device, dtype=torch.float32
)
assert_close(conf_mat, conf_mat_real)
def test_four_classes_2d_one_class_no_predicted(self, device, dtype):
num_classes = 4
actual = torch.tensor(
[[[0, 0, 0, 0], [0, 0, 0, 0], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
predicted = torch.tensor(
[[[3, 3, 2, 2], [3, 3, 2, 2], [2, 2, 3, 3], [2, 2, 3, 3]]], device=device, dtype=torch.long
)
conf_mat = kornia.metrics.confusion_matrix(predicted, actual, num_classes)
conf_mat_real = torch.tensor(
[[[0, 0, 4, 4], [0, 0, 0, 0], [0, 0, 4, 0], [0, 0, 0, 4]]], device=device, dtype=torch.float32
)
assert_close(conf_mat, conf_mat_real)
class TestPsnr:
def test_metric(self, device, dtype):
sample = torch.ones(1, device=device, dtype=dtype)
expected = torch.tensor(20.0, device=device, dtype=dtype)
actual = kornia.metrics.psnr(sample, 1.2 * sample, 2.0)
assert_close(actual, expected)
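# Worked check (illustrative note): the MSE between ones and 1.2 * ones is
# 0.2 ** 2 = 0.04, so PSNR = 10 * log10(max_val ** 2 / MSE)
# = 10 * log10(4.0 / 0.04) = 20 dB, matching the expected tensor above.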
class TestMeanAveragePrecision:
def test_smoke(self, device, dtype):
boxes = torch.tensor([[100, 50, 150, 100.]], device=device, dtype=dtype)
labels = torch.tensor([1], device=device, dtype=torch.long)
scores = torch.tensor([.7], device=device, dtype=dtype)
gt_boxes = torch.tensor([[100, 50, 150, 100.]], device=device, dtype=dtype)
gt_labels = torch.tensor([1], device=device, dtype=torch.long)
mean_ap = kornia.metrics.mean_average_precision(
[boxes], [labels], [scores], [gt_boxes], [gt_labels], 2)
assert_close(mean_ap[0], torch.tensor(1., device=device, dtype=dtype))
assert_close(mean_ap[1][1], 1.0)
def test_raise(self, device, dtype):
boxes = torch.tensor([[100, 50, 150, 100.]], device=device, dtype=dtype)
labels = torch.tensor([1], device=device, dtype=torch.long)
scores = torch.tensor([.7], device=device, dtype=dtype)
gt_boxes = torch.tensor([[100, 50, 150, 100.]], device=device, dtype=dtype)
gt_labels = torch.tensor([1], device=device, dtype=torch.long)
with pytest.raises(AssertionError):
_ = kornia.metrics.mean_average_precision(
boxes[0], [labels], [scores], [gt_boxes], [gt_labels], 2)
| 46.355556 | 117 | 0.591563 | 1,596 | 10,430 | 3.712406 | 0.054511 | 0.036118 | 0.028861 | 0.170802 | 0.906667 | 0.894515 | 0.862278 | 0.845232 | 0.845232 | 0.833418 | 0 | 0.085446 | 0.24372 | 10,430 | 224 | 118 | 46.5625 | 0.665695 | 0 | 0 | 0.596685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132597 | 1 | 0.093923 | false | 0 | 0.022099 | 0 | 0.138122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed26b2130368c78fe0cbd66c81b25e9aa8050878 | 119 | py | Python | 01_basic/exercise_020.py | sideroff/python-exercises | 6a9cc55735d977a71697204c734b3ade84a0c4fd | [
"MIT"
] | null | null | null | 01_basic/exercise_020.py | sideroff/python-exercises | 6a9cc55735d977a71697204c734b3ade84a0c4fd | [
"MIT"
] | 4 | 2020-03-24T18:00:07.000Z | 2021-06-02T00:51:22.000Z | 01_basic/exercise_020.py | sideroff/python-exercises | 6a9cc55735d977a71697204c734b3ade84a0c4fd | [
"MIT"
] | null | null | null | def copies_of_string(string, number_of_copies):
return string * number_of_copies
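# e.g. copies_of_string("asd", 6) returns "asdasdasdasdasdasd"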
print(copies_of_string("asd", 6)) | 29.75 | 47 | 0.789916 | 19 | 119 | 4.526316 | 0.473684 | 0.186047 | 0.325581 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.109244 | 119 | 4 | 48 | 29.75 | 0.801887 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
ed42316f4862d6d4bf6d506243ff399bd8b2b1a4 | 39,632 | py | Python | tasks-deploy/xortop/generate.py | irdkwmnsb/lkshl-ctf | e5c0200ddc8ba73df5f321b87b9763fb1bbaba57 | [
"MIT"
] | 3 | 2021-03-30T06:27:58.000Z | 2021-04-03T17:56:35.000Z | tasks-deploy/xortop/generate.py | irdkwmnsb/lkshl-ctf | e5c0200ddc8ba73df5f321b87b9763fb1bbaba57 | [
"MIT"
] | null | null | null | tasks-deploy/xortop/generate.py | irdkwmnsb/lkshl-ctf | e5c0200ddc8ba73df5f321b87b9763fb1bbaba57 | [
"MIT"
] | null | null | null | TITLE = "Xor - топ"
STATEMENT_TEMPLATE = '''
You are given code that encrypts the flag, and the result of running it. Recover the flag.
`
with open("output.txt", "w") as f:
key = 0 # some x 0<x<256
flag = "some string"
encrypted_flag = []
for i in range(len(flag)):
encrypted_flag.append(ord(flag[i]) ^ key)
encrypted_flag.reverse()
print(" ".join(str(e) for e in encrypted_flag), file=f)
`
output.txt:
`{0}`
'''
def generate(context):
participant = context['participant']
token = tokens[participant.id % len(tokens)]
return TaskStatement(TITLE, STATEMENT_TEMPLATE.format(token))
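# A minimal solver sketch (illustrative only; not used by the generator).
# Single-byte XOR is its own inverse, so a token can be decoded by undoing
# the reverse and brute-forcing the 255 possible keys; the "LKL{...}" flag
# format used as a stopping heuristic below is an assumption.
def _decode_token_sketch(token):
    nums = [int(e) for e in token.split()]
    nums.reverse()  # undo encrypted_flag.reverse()
    for key in range(1, 256):
        candidate = "".join(chr(n ^ key) for n in nums)
        if candidate.startswith("LKL{") and candidate.endswith("}"):
            return candidate
    return None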
tokens = ['147 175 166 128 219 221 158 223 217 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 221 167 150 150 159 139 159 158 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 223 132 172 166 215 222 156 180 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 129 189 135 183 143 219 167 218 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 152 187 220 136 162 162 170 223 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 172 157 185 166 165 137 135 185 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 186 137 136 171 186 139 218 188 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 187 187 131 220 216 223 159 172 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 190 183 165 170 172 159 183 219 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 159 135 139 151 215 162 134 148 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 216 150 216 137 183 155 220 141 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 173 129 128 141 150 150 165 215 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 153 184 220 169 219 129 160 155 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 191 221 153 191 214 214 166 169 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 180 167 138 143 190 169 150 217 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 186 163 143 191 166 222 173 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 221 175 221 164 190 136 151 135 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 221 216 140 143 187 131 185 154 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 138 162 133 141 222 184 170 
219 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 185 165 214 131 150 161 216 161 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 155 141 136 131 180 219 182 165 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 183 171 219 165 214 190 160 172 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 162 165 188 186 214 157 160 131 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 132 135 162 217 130 183 158 156 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 153 217 155 214 151 166 182 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 217 186 136 190 166 130 158 160 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 138 156 134 128 137 191 216 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 135 216 217 221 220 191 220 158 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 148 173 154 184 148 168 143 175 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 218 221 152 184 132 128 220 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 172 163 130 191 217 166 161 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 220 157 221 222 129 131 132 143 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 175 168 143 191 158 157 128 170 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 186 139 137 156 148 185 153 134 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 216 161 134 143 221 148 137 186 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 166 180 157 164 161 140 133 190 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 158 182 183 141 154 153 165 141 177 138 130 156 222 153 177 218 138 177 
128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 188 221 131 187 175 153 130 217 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 169 168 137 153 161 218 167 152 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 190 218 139 166 158 173 141 157 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 182 214 162 138 219 139 152 219 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 153 134 132 167 169 216 172 172 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 171 161 132 138 129 130 155 164 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 151 137 170 218 190 143 165 185 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 130 143 170 128 161 175 157 154 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 148 221 191 157 136 187 185 169 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 131 148 139 165 219 187 132 215 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 137 161 215 159 182 172 167 165 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 191 187 131 132 180 171 173 133 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 153 189 143 156 189 151 219 166 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 171 138 169 170 175 156 132 164 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 217 165 135 152 133 182 170 139 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 141 138 180 165 134 222 136 148 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 215 150 165 160 156 163 164 221 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 219 134 190 183 150 219 130 164 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 
154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 141 151 159 167 154 191 140 134 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 219 173 138 167 160 184 140 168 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 217 137 155 180 185 158 152 148 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 217 216 138 220 216 154 128 163 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 143 222 159 129 180 218 162 141 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 183 134 151 166 169 218 165 219 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 155 189 136 133 182 171 156 132 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 185 214 161 220 190 172 166 167 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 129 140 223 133 130 180 139 156 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 218 221 169 223 219 165 157 161 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 166 180 191 218 171 152 223 161 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 190 223 135 152 155 182 172 215 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 167 148 184 186 128 165 218 131 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 157 218 164 172 156 150 128 162 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 172 169 175 175 158 133 168 155 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 162 128 161 161 150 163 164 214 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 182 150 223 214 216 214 171 162 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 139 170 187 161 159 185 155 183 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 
221 134 186 149 162 165 162', '147 214 183 188 155 171 163 155 219 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 166 129 182 165 183 175 222 158 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 153 163 165 129 162 163 158 220 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 168 215 129 182 218 134 217 222 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 137 159 132 187 130 137 217 163 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 139 135 148 148 218 191 221 171 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 216 137 191 135 167 220 152 188 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 164 162 172 218 186 156 154 172 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 190 217 171 220 161 180 188 152 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 180 154 175 148 135 190 180 180 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 167 152 190 222 171 135 172 215 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 171 167 222 140 130 152 161 191 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 151 137 133 164 164 139 159 158 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 173 166 189 215 161 152 184 190 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 140 187 187 172 170 140 167 129 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 214 171 130 130 221 130 183 143 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 214 151 133 137 163 220 214 162 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 186 166 182 221 150 168 128 164 177 138 130 156 222 153 177 218 138 177 128 223 177 156 221 134 158 223 141 177 222 154 158 151 156 141 177 154 219 221 140 177 221 134 186 149 162 165 162', '147 150 183 
['…encoded data elided: the remainder of this content field was ~100 near-identical quoted sequences of space-separated integers (obfuscated/encoded byte values), unrecoverable as source text…'] | 1,651.333333 | 39,009 | 0.736677 | 9,683 | 39,632 | 3.014562 | 0.012186 | 0.083248 | 0.061665 | 0.08222 | 0.864234 | 0.864234 | 0.864234 | 0.864234 | 0.864234 | 0.864234 | 0 | 0.96383 | 0.245887 | 39,632 | 24 | 39,009 | 1,651.333333 | 0.012848 | 0 | 0 | 0 | 0 | 9.52381 | 0.974274 | 0.001464 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0 | 0 | 0.095238 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
ed47c1d1a74075ca2c346edfe4272d1145c6867e | 216 | py | Python | server/exception.py | VinhLoiIT/parcheesi | 54e62c189b0ef3784b6bc24a6110ce9dd620cc43 | [
"MIT"
] | 4 | 2021-05-20T01:50:22.000Z | 2021-08-01T19:23:37.000Z | server/exception.py | VinhLoiIT/parcheesi | 54e62c189b0ef3784b6bc24a6110ce9dd620cc43 | [
"MIT"
] | null | null | null | server/exception.py | VinhLoiIT/parcheesi | 54e62c189b0ef3784b6bc24a6110ce9dd620cc43 | [
"MIT"
] | null | null | null | class InvalidCommandException(Exception):
    """Raised when a received command string cannot be parsed into a valid command."""

    def __init__(self, command_str) -> None:
        super().__init__()
        self.command_str = command_str

    def __str__(self) -> str:
        # Return the offending command string so it can be reported verbatim.
        return self.command_str
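
# Usage sketch (editor's addition, not part of the original module). The
# parse_command helper and its command set are hypothetical; the point is that
# the exception carries the raw command string, so handlers can report exactly
# what the client sent:
#
#     def parse_command(command_str):
#         if command_str.split()[0] not in {"roll", "move", "quit"}:
#             raise InvalidCommandException(command_str)
#         return command_str
#
#     try:
#         parse_command("fly north")
#     except InvalidCommandException as exc:
#         print(f"invalid command: {exc}")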
| 27 | 44 | 0.666667 | 24 | 216 | 5.333333 | 0.458333 | 0.3125 | 0.328125 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231481 | 216 | 7 | 45 | 30.857143 | 0.771084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
ed4d8aa7e19be9749d4646298a8b5b3678d4d773 | 7,181 | py | Python | rock-paper-scissors.py | PedroEduardoSS/projetos-python | fbf2614a048d82902f99f3299e2244f34ea1025f | [
"MIT"
] | null | null | null | rock-paper-scissors.py | PedroEduardoSS/projetos-python | fbf2614a048d82902f99f3299e2244f34ea1025f | [
"MIT"
] | null | null | null | rock-paper-scissors.py | PedroEduardoSS/projetos-python | fbf2614a048d82902f99f3299e2244f34ea1025f | [
"MIT"
] | null | null | null | from dearpygui.core import *
from dearpygui.simple import *
from random import randint

set_main_window_size(400, 500)

# 0 = Pedra, 1 = Papel, 2 = Tesoura
ITEMS = ('Pedra', 'Papel', 'Tesoura')


def mostrar_resultado(player, computador, resultado):
    # Redraw the game window showing both choices and the outcome text.
    # This single helper replaces nine copy-pasted window blocks, one per
    # (computador, player) combination.
    with window("Pedra-Papel-Tesoura", width=380, height=450):
        set_window_pos("Pedra-Papel-Tesoura", 0, 0)
        add_same_line(spacing=80)
        add_text("VOCÊ")
        add_same_line(spacing=50)
        add_text("COMPUTADOR")
        add_spacing(count=5)
        add_same_line(spacing=80)
        add_text(f"{ITEMS[player]}")
        add_same_line(spacing=50)
        add_text(f"{ITEMS[computador]}")
        add_spacing(count=5)
        add_same_line(spacing=110)
        add_text(resultado)


def jogar(sender, data):
    computador = randint(0, 2)
    player = get_value("Objeto")
    # Validate the player's input first. In the original, this 'Inválido'
    # branch was unreachable because it tested computador (always 0-2)
    # instead of player, and out-of-range input fell into a losing branch.
    if player not in (0, 1, 2):
        add_text('Inválido')
        return
    if player == computador:
        resultado = 'Empate'
    elif (player - computador) % 3 == 1:
        # Each option beats the previous one in the Pedra -> Papel -> Tesoura cycle.
        resultado = 'Jogador ganhou!'
    else:
        resultado = 'Jogador Perdeu'
    mostrar_resultado(player, computador, resultado)


def tela_inicial():
    # Build (or rebuild) the start screen; deduplicated from two identical
    # copies in the original (inside rematch and at module level).
    with window("Pedra-Papel-Tesoura", width=380, height=450):
        set_window_pos("Pedra-Papel-Tesoura", 0, 0)
        add_text("Bem-vindo(a) ao jogo Pedra, Papel e Tesoura")
        add_text("No espaço abaixo: ")
        add_text("Digite 0 se quiser PEDRA")
        add_text("Digite 1 se quiser PAPEL")
        add_text("Digite 2 se quiser TESOURA")
        add_input_int("Objeto")
        add_button("Jogar", callback=jogar)
        add_same_line(spacing=10)
        add_button("Jogar novamente", callback=rematch)
        add_spacing(count=5)


def rematch(sender, data):
    delete_item("Pedra-Papel-Tesoura")
    tela_inicial()


tela_inicial()
start_dearpygui() | 39.240437 | 70 | 0.526807 | 862 | 7,181 | 4.149652 | 0.089327 | 0.109589 | 0.144535 | 0.236511 | 0.9234 | 0.9234 | 0.9234 | 0.9234 | 0.9234 | 0.9234 | 0 | 0.054688 | 0.358307 | 7,181 | 183 | 71 | 39.240437 | 0.721571 | 0 | 0 | 0.88 | 0 | 0 | 0.167224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011429 | false | 0 | 0.017143 | 0 | 0.028571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
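
A minimal, self-contained sketch (editor's addition) of the win/lose rule used in the refactored jogar above. The decide helper is hypothetical (it does not appear in the repository); it shows why a single modular comparison can replace the nine hand-written branches: Pedra (0), Papel (1) and Tesoura (2) form a cycle in which each option beats its predecessor.

def decide(player: int, computador: int) -> str:
    # 0 = Pedra, 1 = Papel, 2 = Tesoura; each option beats the previous one.
    if player == computador:
        return 'Empate'
    return 'Jogador ganhou!' if (player - computador) % 3 == 1 else 'Jogador Perdeu'

assert decide(1, 0) == 'Jogador ganhou!'  # Papel beats Pedra
assert decide(0, 2) == 'Jogador ganhou!'  # Pedra beats Tesoura
assert decide(0, 1) == 'Jogador Perdeu'   # Pedra loses to Papel
assert decide(2, 2) == 'Empate'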
ed52a42e234905926cc2fa802953f9cbe75d088e | 52,964 | py | Python | tests/test_custom_return_types.py | enavarro51/retworkx | 71e34d111623d1de2e4870a8227eddacfb3ade4c | [
"Apache-2.0"
] | null | null | null | tests/test_custom_return_types.py | enavarro51/retworkx | 71e34d111623d1de2e4870a8227eddacfb3ade4c | [
"Apache-2.0"
] | null | null | null | tests/test_custom_return_types.py | enavarro51/retworkx | 71e34d111623d1de2e4870a8227eddacfb3ade4c | [
"Apache-2.0"
] | 1 | 2022-03-24T05:00:30.000Z | 2022-03-24T05:00:30.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import pickle
import unittest
import retworkx
class TestBFSSuccessorsComparisons(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(retworkx.bfs_successors(self.dag, 0) == [("a", ["b"])])
def test__eq__not_match(self):
self.assertFalse(retworkx.bfs_successors(self.dag, 0) == [("b", ["c"])])
def test_eq_not_match_inner(self):
self.assertFalse(retworkx.bfs_successors(self.dag, 0) == [("a", ["c"])])
def test__eq__different_length(self):
self.assertFalse(retworkx.bfs_successors(self.dag, 0) == [("a", ["b"]), ("b", ["c"])])
def test__eq__invalid_type(self):
with self.assertRaises(TypeError):
retworkx.bfs_successors(self.dag, 0) == ["a"]
def test__ne__match(self):
self.assertFalse(retworkx.bfs_successors(self.dag, 0) != [("a", ["b"])])
def test__ne__not_match(self):
self.assertTrue(retworkx.bfs_successors(self.dag, 0) != [("b", ["c"])])
def test_ne_not_match_inner(self):
self.assertTrue(retworkx.bfs_successors(self.dag, 0) != [("a", ["c"])])
def test__ne__different_length(self):
self.assertTrue(retworkx.bfs_successors(self.dag, 0) != [("a", ["b"]), ("b", ["c"])])
def test__ne__invalid_type(self):
with self.assertRaises(TypeError):
retworkx.bfs_successors(self.dag, 0) != ["a"]
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.bfs_successors(self.dag, 0) > [("b", ["c"])]
def test_deepcopy(self):
bfs = retworkx.bfs_successors(self.dag, 0)
bfs_copy = copy.deepcopy(bfs)
self.assertEqual(bfs, bfs_copy)
def test_pickle(self):
bfs = retworkx.bfs_successors(self.dag, 0)
bfs_pickle = pickle.dumps(bfs)
bfs_copy = pickle.loads(bfs_pickle)
self.assertEqual(bfs, bfs_copy)
def test_str(self):
res = retworkx.bfs_successors(self.dag, 0)
self.assertEqual("BFSSuccessors[(a, [b])]", str(res))
def test_hash(self):
res = retworkx.bfs_successors(self.dag, 0)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_hash_invalid_type(self):
self.dag.add_child(0, [1, 2, 3], "edgy")
res = retworkx.bfs_successors(self.dag, 0)
with self.assertRaises(TypeError):
hash(res)
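
# Editor's sketch (not part of the original suite): BFSSuccessors, exercised
# above, behaves like a read-only sequence of (node, successors) tuples, so
# it supports indexing and unpacking just like the lists it is compared to:
#
#     dag = retworkx.PyDAG()
#     a = dag.add_node("a")
#     dag.add_child(a, "b", "Edgy")
#     node, successors = retworkx.bfs_successors(dag, 0)[0]
#     assert node == "a" and successors == ["b"]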
class TestNodeIndicesComparisons(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(self.dag.node_indexes() == [0, 1])
def test__eq__not_match(self):
self.assertFalse(self.dag.node_indexes() == [1, 2])
def test__eq__different_length(self):
self.assertFalse(self.dag.node_indexes() == [0, 1, 2, 3])
def test__eq__invalid_type(self):
with self.assertRaises(TypeError):
self.dag.node_indexes() == ["a", None]
def test__ne__match(self):
self.assertFalse(self.dag.node_indexes() != [0, 1])
def test__ne__not_match(self):
self.assertTrue(self.dag.node_indexes() != [1, 2])
def test__ne__different_length(self):
self.assertTrue(self.dag.node_indexes() != [0, 1, 2, 3])
def test__ne__invalid_type(self):
with self.assertRaises(TypeError):
self.dag.node_indexes() != ["a", None]
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.node_indexes() > [2, 1]
def test_deepcopy(self):
nodes = self.dag.node_indexes()
nodes_copy = copy.deepcopy(nodes)
self.assertEqual(nodes, nodes_copy)
def test_pickle(self):
nodes = self.dag.node_indexes()
nodes_pickle = pickle.dumps(nodes)
nodes_copy = pickle.loads(nodes_pickle)
self.assertEqual(nodes, nodes_copy)
def test_str(self):
res = self.dag.node_indexes()
self.assertEqual("NodeIndices[0, 1]", str(res))
def test_hash(self):
res = self.dag.node_indexes()
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
class TestNodesCountMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(retworkx.num_shortest_paths_unweighted(self.dag, 0) == {1: 1})
def test__eq__not_match_keys(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) == {2: 1})
def test__eq__not_match_values(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) == {1: 2})
def test__eq__different_length(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) == {1: 1, 2: 2})
def test_eq__same_type(self):
self.assertEqual(
retworkx.num_shortest_paths_unweighted(self.dag, 0),
retworkx.num_shortest_paths_unweighted(self.dag, 0),
)
def test__eq__invalid_type(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) == ["a", None])
def test__eq__invalid_inner_type(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) == {0: "a"})
def test__ne__match(self):
self.assertFalse(retworkx.num_shortest_paths_unweighted(self.dag, 0) != {1: 1})
def test__ne__not_match(self):
self.assertTrue(retworkx.num_shortest_paths_unweighted(self.dag, 0) != {2: 1})
def test__ne__not_match_values(self):
self.assertTrue(retworkx.num_shortest_paths_unweighted(self.dag, 0) != {1: 2})
def test__ne__different_length(self):
self.assertTrue(retworkx.num_shortest_paths_unweighted(self.dag, 0) != {1: 1, 2: 2})
def test__ne__invalid_type(self):
self.assertTrue(retworkx.num_shortest_paths_unweighted(self.dag, 0) != ["a", None])
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.num_shortest_paths_unweighted(self.dag, 0) > {1: 1}
def test_deepcopy(self):
paths = retworkx.num_shortest_paths_unweighted(self.dag, 0)
paths_copy = copy.deepcopy(paths)
self.assertEqual(paths, paths_copy)
def test_pickle(self):
paths = retworkx.num_shortest_paths_unweighted(self.dag, 0)
paths_pickle = pickle.dumps(paths)
paths_copy = pickle.loads(paths_pickle)
self.assertEqual(paths, paths_copy)
def test_str(self):
res = retworkx.num_shortest_paths_unweighted(self.dag, 0)
self.assertEqual("NodesCountMapping{1: 1}", str(res))
def test_hash(self):
res = retworkx.num_shortest_paths_unweighted(self.dag, 0)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.num_shortest_paths_unweighted(self.dag, 0)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.num_shortest_paths_unweighted(self.dag, 0).keys()
self.assertEqual([1], list(keys))
def test_values(self):
values = retworkx.num_shortest_paths_unweighted(self.dag, 0).values()
self.assertEqual([1], list(values))
def test_items(self):
items = retworkx.num_shortest_paths_unweighted(self.dag, 0).items()
self.assertEqual([(1, 1)], list(items))
def test_iter(self):
mapping_iter = iter(retworkx.num_shortest_paths_unweighted(self.dag, 0))
output = list(mapping_iter)
self.assertEqual(output, [1])
def test_contains(self):
res = retworkx.num_shortest_paths_unweighted(self.dag, 0)
self.assertIn(1, res)
def test_not_contains(self):
res = retworkx.num_shortest_paths_unweighted(self.dag, 0)
self.assertNotIn(0, res)
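
# Editor's sketch (not part of the original suite): NodesCountMapping supports
# the read-only mapping protocol exercised above (keys/values/items/iteration/
# membership), so converting it to a plain dict is straightforward:
#
#     counts = retworkx.num_shortest_paths_unweighted(dag, 0)
#     assert dict(counts.items()) == {1: 1}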
class TestEdgeIndicesComparisons(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDiGraph()
node_a = self.dag.add_node("a")
node_b = self.dag.add_child(node_a, "b", "Edgy")
self.dag.add_child(node_b, "c", "Super Edgy")
def test__eq__match(self):
self.assertTrue(self.dag.edge_indices() == [0, 1])
def test__eq__not_match(self):
self.assertFalse(self.dag.edge_indices() == [1, 2])
def test__eq__different_length(self):
self.assertFalse(self.dag.edge_indices() == [0, 1, 2, 3])
def test__eq__invalid_type(self):
with self.assertRaises(TypeError):
self.dag.edge_indices() == ["a", None]
def test__ne__match(self):
self.assertFalse(self.dag.edge_indices() != [0, 1])
def test__ne__not_match(self):
self.assertTrue(self.dag.edge_indices() != [1, 2])
def test__ne__different_length(self):
self.assertTrue(self.dag.edge_indices() != [0, 1, 2, 3])
def test__ne__invalid_type(self):
with self.assertRaises(TypeError):
self.dag.edge_indices() != ["a", None]
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.edge_indices() > [2, 1]
def test_deepcopy(self):
edges = self.dag.edge_indices()
edges_copy = copy.deepcopy(edges)
self.assertEqual(edges, edges_copy)
def test_pickle(self):
edges = self.dag.edge_indices()
edges_pickle = pickle.dumps(edges)
edges_copy = pickle.loads(edges_pickle)
self.assertEqual(edges, edges_copy)
def test_str(self):
res = self.dag.edge_indices()
self.assertEqual("EdgeIndices[0, 1]", str(res))
def test_hash(self):
res = self.dag.edge_indices()
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
class TestEdgeListComparisons(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(self.dag.edge_list() == [(0, 1)])
def test__eq__not_match(self):
self.assertFalse(self.dag.edge_list() == [(1, 2)])
def test__eq__different_length(self):
self.assertFalse(self.dag.edge_list() == [(0, 1), (2, 3)])
def test__eq__invalid_type(self):
self.assertFalse(self.dag.edge_list() == ["a", None])
def test__ne__match(self):
self.assertFalse(self.dag.edge_list() != [(0, 1)])
def test__ne__not_match(self):
self.assertTrue(self.dag.edge_list() != [(1, 2)])
def test__ne__different_length(self):
self.assertTrue(self.dag.edge_list() != [(0, 1), (2, 3)])
def test__ne__invalid_type(self):
self.assertTrue(self.dag.edge_list() != ["a", None])
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.edge_list() > [(2, 1)]
def test_deepcopy(self):
edges = self.dag.edge_list()
edges_copy = copy.deepcopy(edges)
self.assertEqual(edges, edges_copy)
def test_pickle(self):
edges = self.dag.edge_list()
edges_pickle = pickle.dumps(edges)
edges_copy = pickle.loads(edges_pickle)
self.assertEqual(edges, edges_copy)
def test_str(self):
res = self.dag.edge_list()
self.assertEqual("EdgeList[(0, 1)]", str(res))
def test_hash(self):
res = self.dag.edge_list()
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
class TestWeightedEdgeListComparisons(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(self.dag.weighted_edge_list() == [(0, 1, "Edgy")])
def test__eq__not_match(self):
self.assertFalse(self.dag.weighted_edge_list() == [(1, 2, None)])
def test__eq__different_length(self):
self.assertFalse(self.dag.weighted_edge_list() == [(0, 1, "Edgy"), (2, 3, "Not Edgy")])
def test__eq__invalid_type(self):
self.assertFalse(self.dag.weighted_edge_list() == ["a", None])
def test__ne__match(self):
self.assertFalse(self.dag.weighted_edge_list() != [(0, 1, "Edgy")])
def test__ne__not_match(self):
self.assertTrue(self.dag.weighted_edge_list() != [(1, 2, "Not Edgy")])
def test__ne__different_length(self):
        self.assertTrue(self.dag.weighted_edge_list() != [(0, 1, "Edgy"), (2, 3, "Not Edgy")])
def test__ne__invalid_type(self):
self.assertTrue(self.dag.weighted_edge_list() != ["a", None])
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.weighted_edge_list() > [(2, 1, "Not Edgy")]
def test_deepcopy(self):
edges = self.dag.weighted_edge_list()
edges_copy = copy.deepcopy(edges)
self.assertEqual(edges, edges_copy)
def test_pickle(self):
edges = self.dag.weighted_edge_list()
edges_pickle = pickle.dumps(edges)
edges_copy = pickle.loads(edges_pickle)
self.assertEqual(edges, edges_copy)
def test_str(self):
res = self.dag.weighted_edge_list()
self.assertEqual("WeightedEdgeList[(0, 1, Edgy)]", str(res))
def test_hash(self):
res = self.dag.weighted_edge_list()
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_hash_invalid_type(self):
self.dag.add_child(0, "c", ["edgy", "not_edgy"])
res = self.dag.weighted_edge_list()
with self.assertRaises(TypeError):
hash(res)
class TestPathMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
def test__eq__match(self):
self.assertTrue(retworkx.dijkstra_shortest_paths(self.dag, 0) == {1: [0, 1]})
def test__eq__not_match_keys(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) == {2: [0, 1]})
def test__eq__not_match_values(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) == {1: [0, 2]})
def test__eq__different_length(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) == {1: [0, 1], 2: [0, 2]})
def test_eq__same_type(self):
self.assertEqual(
retworkx.dijkstra_shortest_paths(self.dag, 0),
retworkx.dijkstra_shortest_paths(self.dag, 0),
)
def test__eq__invalid_type(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) == ["a", None])
def test__eq__invalid_inner_type(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) == {0: {"a": None}})
def test__ne__match(self):
self.assertFalse(retworkx.dijkstra_shortest_paths(self.dag, 0) != {1: [0, 1]})
def test__ne__not_match(self):
self.assertTrue(retworkx.dijkstra_shortest_paths(self.dag, 0) != {2: [0, 1]})
def test__ne__not_match_values(self):
self.assertTrue(retworkx.dijkstra_shortest_paths(self.dag, 0) != {1: [0, 2]})
def test__ne__different_length(self):
self.assertTrue(retworkx.dijkstra_shortest_paths(self.dag, 0) != {1: [0, 1], 2: [0, 2]})
def test__ne__invalid_type(self):
self.assertTrue(retworkx.dijkstra_shortest_paths(self.dag, 0) != ["a", None])
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.dijkstra_shortest_paths(self.dag, 0) > {1: [0, 2]}
def test_deepcopy(self):
paths = retworkx.dijkstra_shortest_paths(self.dag, 0)
paths_copy = copy.deepcopy(paths)
self.assertEqual(paths, paths_copy)
def test_pickle(self):
paths = retworkx.dijkstra_shortest_paths(self.dag, 0)
paths_pickle = pickle.dumps(paths)
paths_copy = pickle.loads(paths_pickle)
self.assertEqual(paths, paths_copy)
def test_str(self):
res = retworkx.dijkstra_shortest_paths(self.dag, 0)
self.assertEqual("PathMapping{1: [0, 1]}", str(res))
def test_hash(self):
res = retworkx.dijkstra_shortest_paths(self.dag, 0)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.dijkstra_shortest_paths(self.dag, 0)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.dijkstra_shortest_paths(self.dag, 0).keys()
self.assertEqual([1], list(keys))
def test_values(self):
values = retworkx.dijkstra_shortest_paths(self.dag, 0).values()
self.assertEqual([[0, 1]], list(values))
def test_items(self):
items = retworkx.dijkstra_shortest_paths(self.dag, 0).items()
self.assertEqual([(1, [0, 1])], list(items))
def test_iter(self):
mapping_iter = iter(retworkx.dijkstra_shortest_paths(self.dag, 0))
output = list(mapping_iter)
self.assertEqual(output, [1])
def test_contains(self):
res = retworkx.dijkstra_shortest_paths(self.dag, 0)
self.assertIn(1, res)
def test_not_contains(self):
res = retworkx.dijkstra_shortest_paths(self.dag, 0)
self.assertNotIn(0, res)
class TestPathLengthMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
self.fn = lambda _: 1.0
def test__eq__match(self):
self.assertTrue(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == {1: 1.0})
def test__eq__not_match_keys(self):
self.assertFalse(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == {2: 1.0})
def test__eq__not_match_values(self):
self.assertFalse(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == {1: 2.0})
def test__eq__different_length(self):
self.assertFalse(
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == {1: 1.0, 2: 2.0}
)
def test_eq__same_type(self):
self.assertEqual(
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn),
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn),
)
def test__eq__invalid_type(self):
self.assertFalse(
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == ["a", None]
)
def test__eq__invalid_inner_type(self):
self.assertFalse(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) == {0: "a"})
def test__ne__match(self):
self.assertFalse(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) != {1: 1.0})
def test__ne__not_match(self):
self.assertTrue(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) != {2: 1.0})
def test__ne__not_match_values(self):
self.assertTrue(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) != {1: 2.0})
def test__ne__different_length(self):
self.assertTrue(
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) != {1: 1.0, 2: 2.0}
)
def test__ne__invalid_type(self):
self.assertTrue(
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) != ["a", None]
)
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn) > {1: 1.0}
def test_deepcopy(self):
paths = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
paths_copy = copy.deepcopy(paths)
self.assertEqual(paths, paths_copy)
def test_pickle(self):
paths = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
paths_pickle = pickle.dumps(paths)
paths_copy = pickle.loads(paths_pickle)
self.assertEqual(paths, paths_copy)
def test_str(self):
res = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, lambda _: 3.14)
self.assertEqual("PathLengthMapping{1: 3.14}", str(res))
def test_hash(self):
res = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn).keys()
self.assertEqual([1], list(keys))
def test_values(self):
values = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn).values()
self.assertEqual([1.0], list(values))
def test_items(self):
items = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn).items()
self.assertEqual([(1, 1.0)], list(items))
def test_iter(self):
mapping_iter = iter(retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn))
output = list(mapping_iter)
self.assertEqual(output, [1])
def test_contains(self):
res = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
self.assertIn(1, res)
def test_not_contains(self):
res = retworkx.dijkstra_shortest_path_lengths(self.dag, 0, self.fn)
self.assertNotIn(0, res)
class TestPos2DMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDiGraph()
self.dag.add_node("a")
def test__eq__match(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertTrue(res == {0: (0.4883489113112722, 0.6545867364101975)})
def test__eq__not_match_keys(self):
self.assertFalse(retworkx.random_layout(self.dag, seed=10244242) == {2: 1.0})
def test__eq__not_match_values(self):
self.assertFalse(retworkx.random_layout(self.dag, seed=10244242) == {1: 2.0})
def test__eq__different_length(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertFalse(res == {1: 1.0, 2: 2.0})
def test_eq__same_type(self):
self.assertEqual(
retworkx.random_layout(self.dag, seed=10244242),
retworkx.random_layout(self.dag, seed=10244242),
)
def test__eq__invalid_type(self):
self.assertFalse(retworkx.random_layout(self.dag, seed=10244242) == {"a": None})
def test__ne__match(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertFalse(res != {0: (0.4883489113112722, 0.6545867364101975)})
def test__ne__not_match(self):
self.assertTrue(retworkx.random_layout(self.dag, seed=10244242) != {2: 1.0})
def test__ne__not_match_values(self):
self.assertTrue(retworkx.random_layout(self.dag, seed=10244242) != {1: 2.0})
def test__ne__different_length(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertTrue(res != {1: 1.0, 2: 2.0})
def test__ne__invalid_type(self):
self.assertTrue(retworkx.random_layout(self.dag, seed=10244242) != ["a", None])
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.random_layout(self.dag, seed=10244242) > {1: 1.0}
def test_deepcopy(self):
positions = retworkx.random_layout(self.dag)
positions_copy = copy.deepcopy(positions)
self.assertEqual(positions_copy, positions)
def test_pickle(self):
pos = retworkx.random_layout(self.dag)
pos_pickle = pickle.dumps(pos)
pos_copy = pickle.loads(pos_pickle)
self.assertEqual(pos, pos_copy)
def test_str(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertEqual(
"Pos2DMapping{0: [0.4883489113112722, 0.6545867364101975]}",
str(res),
)
def test_hash(self):
res = retworkx.random_layout(self.dag, seed=10244242)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.random_layout(self.dag, seed=10244242)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.random_layout(self.dag, seed=10244242).keys()
self.assertEqual([0], list(keys))
def test_values(self):
values = retworkx.random_layout(self.dag, seed=10244242).values()
expected = [[0.4883489113112722, 0.6545867364101975]]
self.assertEqual(expected, list(values))
def test_items(self):
items = retworkx.random_layout(self.dag, seed=10244242).items()
self.assertEqual([(0, [0.4883489113112722, 0.6545867364101975])], list(items))
def test_iter(self):
mapping_iter = iter(retworkx.random_layout(self.dag, seed=10244242))
output = list(mapping_iter)
self.assertEqual(output, [0])
def test_contains(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertIn(0, res)
def test_not_contains(self):
res = retworkx.random_layout(self.dag, seed=10244242)
self.assertNotIn(1, res)
class TestEdgeIndices(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDiGraph()
self.dag.add_node("a")
self.dag.add_child(0, "b", "edge")
def test__eq__match(self):
res = self.dag.edge_index_map()
self.assertTrue(res == {0: (0, 1, "edge")})
def test__eq__not_match_keys(self):
res = self.dag.edge_index_map()
self.assertFalse(res == {2: (0, 1, "edge")})
def test__eq__not_match_values(self):
res = self.dag.edge_index_map()
self.assertFalse(res == {0: (1, 2, "edge")})
self.assertFalse(res == {0: (0, 1, "not edge")})
def test__eq__different_length(self):
res = self.dag.edge_index_map()
self.assertFalse(res == {1: (0, 1, "edge"), 0: (0, 1, "double edge")})
def test_eq__same_type(self):
self.assertEqual(self.dag.edge_index_map(), self.dag.edge_index_map())
def test__eq__invalid_type(self):
res = self.dag.edge_index_map()
self.assertFalse(res == {"a": ("a", "b", "c")})
def test__ne__match(self):
res = self.dag.edge_index_map()
self.assertFalse(res != {0: (0, 1, "edge")})
def test__ne__not_match(self):
res = self.dag.edge_index_map()
        self.assertTrue(res != {2: (0, 1, "edge")})
def test__ne__not_match_values(self):
res = self.dag.edge_index_map()
        self.assertTrue(res != {0: (0, 2, "edge")})
def test__ne__different_length(self):
res = self.dag.edge_index_map()
self.assertTrue(res != {1: (0, 1, "double edge"), 0: (0, 1, "edge")})
def test__ne__invalid_type(self):
res = self.dag.edge_index_map()
self.assertTrue(res != {"a": ("a", "b", "c")})
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.edge_index_map() > {0: (0, 1, "edge")}
def test_deepcopy(self):
edge_map = self.dag.edge_index_map()
edge_map_copy = copy.deepcopy(edge_map)
self.assertEqual(edge_map_copy, edge_map)
def test_pickle(self):
edge_map = self.dag.edge_index_map()
edge_map_pickle = pickle.dumps(edge_map)
edge_map_copy = pickle.loads(edge_map_pickle)
self.assertEqual(edge_map, edge_map_copy)
def test_str(self):
res = self.dag.edge_index_map()
self.assertEqual(
"EdgeIndexMap{0: (0, 1, edge)}",
str(res),
)
def test_hash(self):
res = self.dag.edge_index_map()
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = self.dag.edge_index_map()
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = self.dag.edge_index_map().keys()
self.assertEqual([0], list(keys))
def test_values(self):
values = self.dag.edge_index_map().values()
expected = [(0, 1, "edge")]
self.assertEqual(expected, list(values))
def test_items(self):
items = self.dag.edge_index_map().items()
self.assertEqual([(0, (0, 1, "edge"))], list(items))
def test_iter(self):
mapping_iter = iter(self.dag.edge_index_map())
output = list(mapping_iter)
self.assertEqual(output, [0])
def test_contains(self):
res = self.dag.edge_index_map()
self.assertIn(0, res)
def test_not_contains(self):
res = self.dag.edge_index_map()
self.assertNotIn(1, res)
class TestAllPairsPathMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
self.fn = lambda _: 1.0
def test__eq__match(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {0: {1: [0, 1]}, 1: {}}
)
def test__eq__not_match_keys(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {2: {2: [0, 1]}, 1: {}}
)
def test__eq__not_match_values(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {0: {1: [0, 2]}, 1: {}}
)
def test__eq__different_length(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {1: [0, 1], 2: [0, 2]}
)
def test_eq__same_type(self):
self.assertEqual(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn),
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn),
)
def test__eq__invalid_type(self):
self.assertFalse(retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {"a": []})
def test__eq__invalid_inner_type(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) == {0: {1: None}}
)
def test__ne__match(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) != {0: {1: [0, 1]}, 1: {}}
)
def test__ne__not_match(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) != {2: [0, 1]}
)
def test__ne__not_match_values(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) != {1: [0, 2]}
)
def test__ne__different_length(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) != {1: [0, 1], 2: [0, 2]}
)
def test__ne__invalid_type(self):
self.assertTrue(retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) != {"a": {}})
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn) > {1: [0, 2]}
def test_deepcopy(self):
paths = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
paths_copy = copy.deepcopy(paths)
self.assertEqual(paths, paths_copy)
def test_pickle(self):
paths = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
paths_pickle = pickle.dumps(paths)
paths_copy = pickle.loads(paths_pickle)
self.assertEqual(paths, paths_copy)
def test_str(self):
res = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
# Since run in parallel the order is not deterministic
expected_valid = [
"AllPairsPathMapping{1: PathMapping{}, 0: PathMapping{1: [0, 1]}}",
"AllPairsPathMapping{0: PathMapping{1: [0, 1]}, 1: PathMapping{}}",
]
self.assertIn(str(res), expected_valid)
def test_hash(self):
res = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn).keys()
self.assertEqual([0, 1], list(sorted(keys)))
def test_values(self):
values = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn).values()
# Since run in parallel the order is not deterministic
expected_valid = [[{1: [0, 1]}, {}], [{}, {1: [0, 1]}]]
self.assertIn(list(values), expected_valid)
def test_items(self):
items = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn).items()
# Since run in parallel the order is not deterministic
expected_valid = [
[(0, {1: [0, 1]}), (1, {})],
[(1, {}), (0, {1: [0, 1]})],
]
self.assertIn(list(items), expected_valid)
def test_iter(self):
mapping_iter = iter(retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn))
output = list(sorted(mapping_iter))
self.assertEqual(output, [0, 1])
def test_contains(self):
res = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
self.assertIn(1, res)
def test_not_contains(self):
res = retworkx.all_pairs_dijkstra_shortest_paths(self.dag, self.fn)
self.assertNotIn(2, res)
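
# Editor's sketch (not part of the original suite): all_pairs_dijkstra_shortest_paths
# computes the per-source searches in parallel, so the mapping's iteration
# order is not deterministic -- hence the tests above accept every valid
# ordering. Sorting the keys yields a stable traversal:
#
#     paths = retworkx.all_pairs_dijkstra_shortest_paths(dag, lambda _: 1.0)
#     for source in sorted(paths.keys()):
#         print(source, dict(paths[source].items()))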
class TestAllPairsPathLengthMapping(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
node_a = self.dag.add_node("a")
self.dag.add_child(node_a, "b", "Edgy")
self.fn = lambda _: 1.0
def test__eq__match(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {0: {1: 1.0}, 1: {}}
)
def test__eq__not_match_keys(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {1: {2: 1.0}}
)
def test__eq__not_match_values(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {0: {2: 2.0}}
)
def test__eq__different_length(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {0: {1: 1.0, 2: 2.0}}
)
def test_eq__same_type(self):
self.assertEqual(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn),
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn),
)
def test__eq__invalid_type(self):
self.assertFalse(retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {"a": 2})
def test__eq__invalid_inner_type(self):
self.assertFalse(retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) == {0: "a"})
def test__ne__match(self):
self.assertFalse(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) != {0: {1: 1.0}, 1: {}}
)
def test__ne__not_match(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) != {0: {2: 1.0}}
)
def test__ne__not_match_values(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) != {0: {1: 2.0}}
)
def test__ne__different_length(self):
self.assertTrue(
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
!= {0: {1: 1.0}, 2: {1: 2.0}}
)
def test__ne__invalid_type(self):
self.assertTrue(retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) != {1: []})
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn) > {1: 1.0}
def test_deepcopy(self):
paths = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
paths_copy = copy.deepcopy(paths)
self.assertEqual(paths, paths_copy)
def test_pickle(self):
paths = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
paths_pickle = pickle.dumps(paths)
paths_copy = pickle.loads(paths_pickle)
self.assertEqual(paths, paths_copy)
def test_str(self):
res = retworkx.all_pairs_dijkstra_path_lengths(self.dag, lambda _: 3.14)
# Since all_pairs_dijkstra_path_lengths() runs in parallel, the order of
# the output is non-deterministic
valid_values = [
"AllPairsPathLengthMapping{1: PathLengthMapping{}, " "0: PathLengthMapping{1: 3.14}}",
"AllPairsPathLengthMapping{"
"0: PathLengthMapping{1: 3.14}, "
"1: PathLengthMapping{}}",
]
self.assertIn(str(res), valid_values)
def test_hash(self):
res = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn).keys()
self.assertEqual([0, 1], list(sorted(keys)))
def test_values(self):
values = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn).values()
# Since this runs in parallel, the order is not deterministic
valid_expected = [[{}, {1: 1.0}], [{1: 1.0}, {}]]
self.assertIn(list(values), valid_expected)
def test_items(self):
items = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn).items()
# Since this runs in parallel, the order is not deterministic
valid_expected = [[(0, {1: 1.0}), (1, {})], [(1, {}), (0, {1: 1.0})]]
self.assertIn(list(items), valid_expected)
def test_iter(self):
mapping_iter = iter(retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn))
output = list(sorted(mapping_iter))
self.assertEqual(output, [0, 1])
def test_contains(self):
res = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
self.assertIn(0, res)
def test_not_contains(self):
res = retworkx.all_pairs_dijkstra_path_lengths(self.dag, self.fn)
self.assertNotIn(2, res)
class TestNodeMap(unittest.TestCase):
def setUp(self):
self.dag = retworkx.PyDAG()
self.dag.add_node("a")
self.in_dag = retworkx.generators.directed_path_graph(1)
def test__eq__match(self):
self.assertTrue(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) == {0: 1}
)
def test__eq__not_match_keys(self):
self.assertFalse(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) == {2: 1}
)
def test__eq__not_match_values(self):
self.assertFalse(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) == {0: 2}
)
def test__eq__different_length(self):
self.assertFalse(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
== {0: 1, 1: 2}
)
def test__eq__same_type(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
self.assertEqual(res, res)
def test__ne__match(self):
self.assertFalse(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) != {0: 1}
)
def test__ne__not_match(self):
self.assertTrue(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) != {2: 2}
)
def test__ne__not_match_values(self):
self.assertTrue(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) != {0: 2}
)
def test__ne__different_length(self):
self.assertTrue(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
!= {0: 1, 1: 2}
)
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None) > {1: 2}
def test__len__(self):
in_dag = retworkx.generators.directed_grid_graph(5, 5)
node_map = self.dag.substitute_node_with_subgraph(0, in_dag, lambda *args: None)
self.assertEqual(25, len(node_map))
def test_deepcopy(self):
node_map = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
node_map_copy = copy.deepcopy(node_map)
self.assertEqual(node_map, node_map_copy)
def test_pickle(self):
node_map = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
node_map_pickle = pickle.dumps(node_map)
node_map_copy = pickle.loads(node_map_pickle)
self.assertEqual(node_map, node_map_copy)
def test_str(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
self.assertEqual("NodeMap{0: 1}", str(res))
def test_hash(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
hash_res = hash(res)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(res))
def test_index_error(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
with self.assertRaises(IndexError):
res[42]
def test_keys(self):
keys = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None).keys()
self.assertEqual([0], list(keys))
def test_values(self):
values = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None).values()
self.assertEqual([1], list(values))
def test_items(self):
items = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None).items()
self.assertEqual([(0, 1)], list(items))
def test_iter(self):
mapping_iter = iter(
self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
)
output = list(mapping_iter)
self.assertEqual(output, [0])
def test_contains(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
self.assertIn(0, res)
def test_not_contains(self):
res = self.dag.substitute_node_with_subgraph(0, self.in_dag, lambda *args: None)
self.assertNotIn(2, res)
def test_iter_stable_for_same_obj(self):
graph = retworkx.PyDiGraph()
graph.add_node(0)
in_graph = retworkx.generators.directed_path_graph(5)
res = self.dag.substitute_node_with_subgraph(0, in_graph, lambda *args: None)
first_iter = list(iter(res))
second_iter = list(iter(res))
third_iter = list(iter(res))
self.assertEqual(first_iter, second_iter)
self.assertEqual(first_iter, third_iter)
class TestChainsComparisons(unittest.TestCase):
def setUp(self):
self.graph = retworkx.generators.cycle_graph(3)
self.chains = retworkx.chain_decomposition(self.graph)
def test__eq__match(self):
self.assertTrue(self.chains == [[(0, 2), (2, 1), (1, 0)]])
def test__eq__not_match(self):
self.assertFalse(self.chains == [[(0, 2), (2, 1), (2, 0)]])
def test__eq__different_length(self):
self.assertFalse(self.chains == [[(0, 2)]])
def test__eq__invalid_type(self):
with self.assertRaises(TypeError):
self.chains == [0]
def test__ne__match(self):
self.assertFalse(self.chains != [[(0, 2), (2, 1), (1, 0)]])
def test__ne__not_match(self):
self.assertTrue(self.chains != [[(0, 2), (2, 1), (2, 0)]])
def test__ne__different_length(self):
self.assertTrue(self.chains != [[(0, 2)]])
def test__ne__invalid_type(self):
with self.assertRaises(TypeError):
self.chains != [0]
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.chains > [[(0, 2)]]
def test_deepcopy(self):
chains_copy = copy.deepcopy(self.chains)
self.assertEqual(self.chains, chains_copy)
def test_pickle(self):
chains_pickle = pickle.dumps(self.chains)
chains_copy = pickle.loads(chains_pickle)
self.assertEqual(self.chains, chains_copy)
def test_str(self):
self.assertEqual("Chains[EdgeList[(0, 2), (2, 1), (1, 0)]]", str(self.chains))
def test_hash(self):
hash_res = hash(self.chains)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(self.chains))
class TestProductNodeMap(unittest.TestCase):
def setUp(self):
self.first = retworkx.PyGraph()
self.first.add_node("a0")
self.first.add_node("a1")
self.second = retworkx.PyGraph()
self.second.add_node("b")
_, self.node_map = retworkx.graph_cartesian_product(self.first, self.second)
def test__eq__match(self):
self.assertTrue(self.node_map == {(0, 0): 0, (1, 0): 1})
def test__eq__not_match_keys(self):
self.assertFalse(self.node_map == {(0, 0): 0, (2, 0): 1})
def test__eq__not_match_values(self):
self.assertFalse(self.node_map == {(0, 0): 0, (1, 0): 2})
def test__eq__different_length(self):
self.assertFalse(self.node_map == {(0, 0): 0})
def test__eq__same_type(self):
_, res = retworkx.graph_cartesian_product(self.first, self.second)
self.assertEqual(self.node_map, res)
def test__ne__match(self):
self.assertFalse(self.node_map != {(0, 0): 0, (1, 0): 1})
def test__ne__not_match(self):
self.assertTrue(self.node_map != {(0, 0): 0, (2, 0): 1})
def test__ne__not_match_values(self):
self.assertTrue(self.node_map != {(0, 0): 0, (1, 0): 2})
def test__ne__different_length(self):
self.assertTrue(self.node_map != {(0, 0): 0})
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.node_map > {1: 2}
def test__len__(self):
self.assertEqual(2, len(self.node_map))
def test_deepcopy(self):
node_map_copy = copy.deepcopy(self.node_map)
self.assertEqual(self.node_map, node_map_copy)
def test_pickle(self):
node_map_pickle = pickle.dumps(self.node_map)
node_map_copy = pickle.loads(node_map_pickle)
self.assertEqual(self.node_map, node_map_copy)
def test_str(self):
valid_str_output = [
"ProductNodeMap{(0, 0): 0, (1, 0): 1}",
"ProductNodeMap{(1, 0): 1, (0, 0): 0}",
]
self.assertTrue(str(self.node_map) in valid_str_output)
def test_hash(self):
hash_res = hash(self.node_map)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(self.node_map))
def test_index_error(self):
with self.assertRaises(IndexError):
self.node_map[(1, 1)]
def test_keys(self):
keys = self.node_map.keys()
self.assertEqual(set([(0, 0), (1, 0)]), set(keys))
def test_values(self):
values = self.node_map.values()
self.assertEqual(set([0, 1]), set(values))
def test_items(self):
items = self.node_map.items()
self.assertEqual(set([((0, 0), 0), ((1, 0), 1)]), set(items))
def test_iter(self):
mapping_iter = iter(self.node_map)
output = set(mapping_iter)
self.assertEqual(output, set([(0, 0), (1, 0)]))
def test_contains(self):
self.assertIn((0, 0), self.node_map)
def test_not_contains(self):
self.assertNotIn((1, 1), self.node_map)
class TestBiconnectedComponentsMap(unittest.TestCase):
def setUp(self):
self.graph = retworkx.generators.path_graph(3)
self.bicon_map = retworkx.biconnected_components(self.graph)
def test__eq__match(self):
self.assertTrue(self.bicon_map == {(0, 1): 1, (1, 2): 0})
def test__eq__not_match_keys(self):
self.assertFalse(self.bicon_map == {(0, 0): 1, (2, 0): 0})
def test__eq__not_match_values(self):
self.assertFalse(self.bicon_map == {(0, 1): 2, (1, 2): 0})
def test__eq__different_length(self):
self.assertFalse(self.bicon_map == {(0, 1): 1})
def test__eq__same_type(self):
res = retworkx.biconnected_components(self.graph)
self.assertEqual(self.bicon_map, res)
def test__ne__match(self):
self.assertFalse(self.bicon_map != {(0, 1): 1, (1, 2): 0})
def test__ne__not_match(self):
self.assertTrue(self.bicon_map != {(0, 2): 1, (1, 2): 0})
def test__ne__not_match_values(self):
self.assertTrue(self.bicon_map != {(0, 1): 0, (1, 2): 0})
def test__ne__different_length(self):
self.assertTrue(self.bicon_map != {(0, 1): 1})
def test__gt__not_implemented(self):
with self.assertRaises(NotImplementedError):
self.bicon_map > {1: 2}
def test__len__(self):
self.assertEqual(2, len(self.bicon_map))
def test_deepcopy(self):
bicon_map_copy = copy.deepcopy(self.bicon_map)
self.assertEqual(self.bicon_map, bicon_map_copy)
def test_pickle(self):
bicon_map_pickle = pickle.dumps(self.bicon_map)
bicon_map_copy = pickle.loads(bicon_map_pickle)
self.assertEqual(self.bicon_map, bicon_map_copy)
def test_str(self):
valid_str_output = [
"BiconnectedComponents{(0, 1): 1, (1, 2): 0}",
"BiconnectedComponents{(1, 2): 0, (0, 1): 1}",
]
self.assertTrue(str(self.bicon_map) in valid_str_output)
def test_hash(self):
hash_res = hash(self.bicon_map)
self.assertIsInstance(hash_res, int)
# Assert hash is stable
self.assertEqual(hash_res, hash(self.bicon_map))
def test_index_error(self):
with self.assertRaises(IndexError):
self.bicon_map[(1, 1)]
def test_keys(self):
keys = self.bicon_map.keys()
self.assertEqual(set([(0, 1), (1, 2)]), set(keys))
def test_values(self):
values = self.bicon_map.values()
self.assertEqual(set([0, 1]), set(values))
def test_items(self):
items = self.bicon_map.items()
self.assertEqual(set([((0, 1), 1), ((1, 2), 0)]), set(items))
def test_iter(self):
mapping_iter = iter(self.bicon_map)
output = set(mapping_iter)
self.assertEqual(output, set([(0, 1), (1, 2)]))
def test_contains(self):
self.assertIn((0, 1), self.bicon_map)
def test_not_contains(self):
self.assertNotIn((0, 2), self.bicon_map)
| 35.49866 | 100 | 0.64389 | 7,132 | 52,964 | 4.493831 | 0.030987 | 0.068799 | 0.022715 | 0.039002 | 0.906833 | 0.877816 | 0.858846 | 0.829423 | 0.791201 | 0.755164 | 0 | 0.028754 | 0.22385 | 52,964 | 1,491 | 101 | 35.522468 | 0.7509 | 0.023337 | 0 | 0.635624 | 0 | 0 | 0.019829 | 0.002863 | 0 | 0 | 0 | 0 | 0.301085 | 1 | 0.299277 | false | 0 | 0.003617 | 0 | 0.31736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed5d2d7719276b06bbbefa6a6dd9257bc4e308c8 | 115 | py | Python | archivebox/core/welcome_message.py | TrAyZeN/ArchiveBox | 88cc75a0457859a63b06854e353b053c730b3752 | [
"MIT"
] | 6,340 | 2018-12-20T21:12:13.000Z | 2020-11-23T02:39:32.000Z | archivebox/core/welcome_message.py | TrAyZeN/ArchiveBox | 88cc75a0457859a63b06854e353b053c730b3752 | [
"MIT"
] | 388 | 2018-12-20T07:58:08.000Z | 2020-11-23T03:20:36.000Z | archivebox/core/welcome_message.py | TrAyZeN/ArchiveBox | 88cc75a0457859a63b06854e353b053c730b3752 | [
"MIT"
] | 439 | 2018-12-21T21:51:47.000Z | 2020-11-21T21:21:35.000Z | from archivebox.logging_util import log_shell_welcome_msg
if __name__ == '__main__':
log_shell_welcome_msg()
| 19.166667 | 57 | 0.8 | 16 | 115 | 4.8125 | 0.75 | 0.207792 | 0.38961 | 0.467532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 115 | 5 | 58 | 23 | 0.77 | 0 | 0 | 0 | 0 | 0 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
ed7f478156a31f4f13031e490826d9c38bd30e4a | 611,071 | py | Python | python/createAndAnalyzeStiffMatrixRVEs.py | LucaDiStasio/thinPlyMechanics | 813bdeef7e07db6b7830d41fcca198f8dd2eb3cf | [
"Apache-2.0"
] | 7 | 2018-06-04T10:15:30.000Z | 2021-09-04T03:53:54.000Z | python/createAndAnalyzeStiffMatrixRVEs.py | LucaDiStasio/thinPlyMechanics | 813bdeef7e07db6b7830d41fcca198f8dd2eb3cf | [
"Apache-2.0"
] | 123 | 2017-09-07T14:05:04.000Z | 2018-06-21T12:01:30.000Z | python/createAndAnalyzeStiffMatrixRVEs.py | LucaDiStasio/thinPlyMechanics | 813bdeef7e07db6b7830d41fcca198f8dd2eb3cf | [
"Apache-2.0"
] | 3 | 2017-08-09T19:20:39.000Z | 2020-12-14T20:55:44.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
'''
=====================================================================================
Copyright (c) 2016-2019 Université de Lorraine or Luleå tekniska universitet
Author: Luca Di Stasio <luca.distasio@gmail.com>
<luca.distasio@ingpec.eu>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the distribution
Neither the name of the Université de Lorraine or Luleå tekniska universitet
nor the names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
=====================================================================================
DESCRIPTION
Tested with Abaqus Python 2.6 (64-bit) distribution in Windows 7.
abaqus CAE noGUI=C:/02_Local-folder/01_Luca/01_WD/thinPlyMechanics/python/createAndAnalyzeStiffMatrixRVEs.py -- -dir C:/Users/lucad/OneDrive/01_Luca/07_DocMASE/07_Data/03_FEM/InputData/StiffMatrix -data inputRVEdataEfreeL%1-LPC.deck -iterables inputRVEiterablesEfreeL%1-LPC.deck -plot inputRVEplot &&
'''
import sys, os
import math
import numpy as np
import subprocess
from os.path import isfile, join, exists
from platform import platform,system
from shutil import copyfile
import sqlite3
import locale
import ast
from datetime import datetime
from time import strftime, sleep
import timeit
from abaqus import *
from abaqusConstants import *
import section
import regionToolset
import displayGroupMdbToolset as dgm
import part
import material
import assembly
import step
import interaction
import load
import mesh
import optimization
import job
import sketch
import visualization
import xyPlot
import displayGroupOdbToolset as dgo
import connectorBehavior
from odbAccess import *
from odbMaterial import *
from odbSection import *
#===============================================================================#
#===============================================================================#
# I/O functions
#===============================================================================#
#===============================================================================#
#===============================================================================#
# SHELL
#===============================================================================#
def printHelp():
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('*****************************************************************************************************')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' CREATION AND ANALYSIS OF RVEs/RUCs WITH FEM IN ABAQUS')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' by')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' Luca Di Stasio, 2016-2019')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('*****************************************************************************************************')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('Program syntax:')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('abaqus cae noGUI=createAndAnalyzeRVEs.py -- -dir/-directory <input file directory> -data <RVE base data> -iterables <parameters for iterations> -plot <parameters for plotting> -debug')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('Mandatory arguments:')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('-dir/-directory <input file directory> ===> full/path/to/folder/without/closing/slash')
print >> sys.__stdout__,('-data <RVE base data> ===> full/path/to/file/without/closing/slash')
print >> sys.__stdout__,('-iterables <parameters for iterations> ===> full/path/to/file/without/extension')
print >> sys.__stdout__,('-plot <parameters for plotting> ===> full/path/to/file/without/extension')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('Optional arguments:')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('-debug ===> debug mode active')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
sys.exit()
#===============================================================================#
# DECK file
#===============================================================================#
def fillDataDictionary(dataDict,inputKeywords,inputValue):
if len(inputKeywords)>1:
branchDict = dataDict.setdefault(inputKeywords[0],{})
fillDataDictionary(branchDict,inputKeywords[1:],inputValue)
else:
dataDict[inputKeywords[0]] = inputValue
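# A minimal usage sketch for fillDataDictionary (illustrative only: the
# keyword chain and value below are hypothetical, not taken from an actual
# input deck). The function walks the keyword list recursively, creating
# nested dictionaries as needed.
def exampleFillDataDictionary():
    data = {}
    fillDataDictionary(data,['geometry','fiber','radius'],1.0)
    # data is now {'geometry': {'fiber': {'radius': 1.0}}}
    return data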
#===============================================================================#
# CSV files
#===============================================================================#
def createCSVfile(dir,filename,titleline=None):
if len(filename.split('.'))<2:
filename += '.csv'
with open(join(dir,filename),'w') as csv:
if titleline != None:
csv.write(titleline.replace('\n','') + '\n')
else:
csv.write('# Automatically created on ' + datetime.now().strftime('%d/%m/%Y') + ' at ' + datetime.now().strftime('%H:%M:%S') + '\n')
def appendCSVfile(dir,filename,data):
# data is a list of lists
# each list is written to a row
# no check is made on data consistency
if len(filename.split('.'))<2:
filename += '.csv'
with open(join(dir,filename),'a') as csv:
for row in data:
line = ''
for v,value in enumerate(row):
if v>0:
line += ', '
line += str(value)
csv.write(line + '\n')
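# A minimal usage sketch for the CSV helpers (illustrative only: the output
# directory is hypothetical). createCSVfile writes the title line (or an
# automatic timestamp line if none is given), appendCSVfile adds one row
# per inner list.
def exampleCSVhelpers(outDir):
    createCSVfile(outDir,'results','param, value')
    appendCSVfile(outDir,'results',[[0.1,3.5],[0.2,7.2]])
    # results.csv now contains the title line followed by 0.1, 3.5 and 0.2, 7.2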
#===============================================================================#
# ABAQUS input files
#===============================================================================#
def createABQinpfile(path):
with open(path,'w') as fi:
fi.write('** ABAQUS INPUT FILE' + '\n')
fi.write('** Automatically created on ' + datetime.now().strftime('%d/%m/%Y') + ' at ' + datetime.now().strftime('%H:%M:%S') + '\n')
fi.write('**' + '\n')
fi.write('**==============================================================================' + '\n')
fi.write('** Copyright (c) 2016-2018 Universite de Lorraine & Lulea tekniska universitet' + '\n')
fi.write('** Author: Luca Di Stasio <luca.distasio@gmail.com>' + '\n')
fi.write('** <luca.distasio@ingpec.eu>' + '\n')
fi.write('**' + '\n')
fi.write('** Redistribution and use in source and binary forms, with or without' + '\n')
fi.write('** modification, are permitted provided that the following conditions are met:' + '\n')
fi.write('**' + '\n')
fi.write('** Redistributions of source code must retain the above copyright' + '\n')
fi.write('** notice, this list of conditions and the following disclaimer.' + '\n')
fi.write('** Redistributions in binary form must reproduce the above copyright' + '\n')
fi.write('** notice, this list of conditions and the following disclaimer in' + '\n')
fi.write('** the documentation and/or other materials provided with the distribution' + '\n')
fi.write('** Neither the name of the Universite de Lorraine or Lulea tekniska universitet' + '\n')
fi.write('** nor the names of its contributors may be used to endorse or promote products' + '\n')
fi.write('** derived from this software without specific prior written permission.' + '\n')
fi.write('**' + '\n')
fi.write('** THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"' + '\n')
fi.write('** AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE' + '\n')
fi.write('** IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE' + '\n')
fi.write('** ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE' + '\n')
fi.write('** LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR' + '\n')
fi.write('** CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF' + '\n')
fi.write('** SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS' + '\n')
fi.write('** INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN' + '\n')
fi.write('** CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)' + '\n')
fi.write('** ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE' + '\n')
fi.write('** POSSIBILITY OF SUCH DAMAGE.' + '\n')
fi.write('**==============================================================================' + '\n')
fi.write('**' + '\n')
def readNodesFromInpFile(inpfullpath,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading nodes and saving to dictionary ...',True)
allnodes = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
allnodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
#writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Stored node ' + str(int(line.replace('\n','').split(',')[0])) + ' with coordinates (' + str(float(line.replace('\n','').split(',')[1])) + ', ' + str(float(line.replace('\n','').split(',')[2])) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Node section ends at line ' + str(l),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'No more nodes to read',True)
store = False
break
elif store == True:
allnodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
#writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Stored node ' + str(int(line.replace('\n','').split(',')[0])) + ' with coordinates (' + str(float(line.replace('\n','').split(',')[1])) + ', ' + str(float(line.replace('\n','').split(',')[2])) + ')',True)
elif ('*Node' in line or '*NODE' in line) and len(inpfilelines[l+1].replace('\n','').split(','))==3:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Node section starts at line ' + str(l),True)
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return allnodes
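# The parser above targets the 2D node section of an Abaqus input file,
# i.e. a block of the form (illustrative snippet, not from an actual model):
#
#     *NODE
#      1, 0.0, 0.0
#      2, 1.0, 0.0
#      3, 1.0, 1.0
#     *ELEMENT, TYPE=CPE4
#
# which would be returned as {1: [0.0, 0.0], 2: [1.0, 0.0], 3: [1.0, 1.0]}.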
def readQuadsFromInpFile(inpfullpath,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading quadrilateral elements and saving to dictionary ...',True)
allquads = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
quadIndex = int(line.replace('\n','').split(',')[0])
allquads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
allquads[quadIndex].append(int(node))
store = False
break
elif store == True:
quadIndex = int(line.replace('\n','').split(',')[0])
allquads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
allquads[quadIndex].append(int(node))
elif ('*Element, type=CPE8' in line or '*ELEMENT, type=CPE8' in line or '*Element, type=CPE4' in line or '*ELEMENT, type=CPE4' in line) and (len(inpfilelines[l+1].replace('\n','').split(','))==5 or len(inpfilelines[l+1].replace('\n','').split(','))==9):
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Quadrilateral elements section starts at line ' + str(l),True)
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return allquads
def readNodesetFromInpFile(inpfullpath,name,expLength,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
if expLength>1:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading node set ' + name + ' and saving to list ...',True)
nodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
nodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
nodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in [name.lower(),name.upper()]:
store = True
else:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading node set ' + name + ' and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in [name.lower(),name.upper()]:
nodeset = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return nodeset
def readElementsetFromInpFile(inpfullpath,name,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading element set ' + name + ' and saving to list ...',True)
elementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
elementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
elementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in [name.lower(),name.upper()]:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return elementset
def readNodesFromNodesInpFile(inpfullpath,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading nodes from included input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
allnodes = {}
for line in inpfilelines[1:]:
allnodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return allnodes
def writeNodesToNodesInpFile(inpfullpath,allnodes,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Writing nodes to included input file ...',True)
with open(inpfullpath,'w') as inp:
inp.write('*NODE' + '\n')
for key in allnodes.keys():
inp.write(' ' + str(key) + ', ' + str(allnodes[key][0]) + ', ' + str(allnodes[key][1]) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def readQuadsFromQuadsInpFile(inpfullpath,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Reading quads from included input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
allquads = {}
for line in inpfilelines:
id = int(line.replace('\n','').split(',')[0])
nodes = line.replace('\n','').split(',')[1:]
nodesId = []
for node in nodes:
nodesId.append(int(node))
allquads[id] = nodesId
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
return allquads
def writeQuadsToQuadsInpFile(inpfullpath,allquads,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Writing quads to included input file ...',True)
with open(inpfullpath,'w') as inp:
inp.write('*ELEMENT, TYPE=CPE' + str(int(len(allquads[allquads.keys()[0]]))) + '\n')
for key in allquads.keys():
line = ' ' + str(key)
for node in allquads[key]:
line += ', ' + str(node)
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
#===============================================================================#
# Log files
#===============================================================================#
def writeLineToLogFile(logFileFullPath,mode,line,toScreen):
with open(logFileFullPath,mode) as log:
log.write(line + '\n')
if toScreen:
print >> sys.__stdout__,(line + '\n')
def skipLineToLogFile(logFileFullPath,mode,toScreen):
with open(logFileFullPath,mode) as log:
log.write('\n')
if toScreen:
print >> sys.__stdout__,('\n')
def writeTitleSepLineToLogFile(logFileFullPath,mode,toScreen):
with open(logFileFullPath,mode) as log:
log.write('===============================================================================================\n')
if toScreen:
print >> sys.__stdout__,('===============================================================================================\n')
def writeTitleSecToLogFile(logFileFullPath,mode,title,toScreen):
writeTitleSepLineToLogFile(logFileFullPath,mode,toScreen)
writeTitleSepLineToLogFile(logFileFullPath,'a',toScreen)
skipLineToLogFile(logFileFullPath,'a',toScreen)
writeLineToLogFile(logFileFullPath,'a',title,toScreen)
skipLineToLogFile(logFileFullPath,'a',toScreen)
writeLineToLogFile(logFileFullPath,'a','Starting on ' + datetime.now().strftime('%Y-%m-%d') + ' at ' + datetime.now().strftime('%H:%M:%S'),toScreen)
skipLineToLogFile(logFileFullPath,'a',toScreen)
writeLineToLogFile(logFileFullPath,'a','Platform: ' + platform(),toScreen)
skipLineToLogFile(logFileFullPath,'a',toScreen)
writeTitleSepLineToLogFile(logFileFullPath,'a',toScreen)
writeTitleSepLineToLogFile(logFileFullPath,'a',toScreen)
skipLineToLogFile(logFileFullPath,'a',toScreen)
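# A minimal usage sketch for the logging helpers (illustrative only: the log
# file path is hypothetical). writeTitleSecToLogFile opens the log with a
# title banner, subsequent messages are appended with writeLineToLogFile.
def exampleLogHelpers(logpath):
    writeTitleSecToLogFile(logpath,'w','RVE CREATION AND ANALYSIS',True)
    writeLineToLogFile(logpath,'a','Starting mesh generation ...',True)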
def writeErrorToLogFile(logFileFullPath,mode,exc,err,toScreen):
with open(logFileFullPath,mode) as log:
log.write('!!! ----------------------------------------------------------------------------------------!!!\n')
log.write('\n')
log.write(' AN ERROR OCCURED\n')
log.write('\n')
log.write(' -------------------------\n')
log.write('\n')
log.write(str(exc) + '\n')
log.write(str(err) + '\n')
log.write('\n')
log.write('Terminating program\n')
log.write('\n')
log.write('!!! ----------------------------------------------------------------------------------------!!!\n')
log.write('\n')
if toScreen:
print >> sys.__stdout__,('!!! ----------------------------------------------------------------------------------------!!!\n')
print >> sys.__stdout__,('\n')
print >> sys.__stdout__,(' AN ERROR OCCURED\n')
print >> sys.__stdout__,('\n')
print >> sys.__stdout__,(' -------------------------\n')
print >> sys.__stdout__,('\n')
print >> sys.__stdout__,(str(exc) + '\n')
print >> sys.__stdout__,(str(err) + '\n')
print >> sys.__stdout__,('\n')
print >> sys.__stdout__,('Terminating program\n')
print >> sys.__stdout__,('\n')
print >> sys.__stdout__,('!!! ----------------------------------------------------------------------------------------!!!\n')
print >> sys.__stdout__,('\n')
#===============================================================================#
# Latex files
#===============================================================================#
def createLatexFile(folder,filename,documentclass,options=''):
if not exists(folder):
os.makedirs(folder)
with open(join(folder,filename + '.tex'),'w') as tex:
if options!='':
tex.write('\\documentclass[' + options + ']{' + documentclass + '}\n')
else:
tex.write('\\documentclass{' + documentclass + '}\n')
tex.write('\n')
def writeLatexPackages(folder,filename,packages,options):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('% Packages and basic declarations\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('\n')
for i,package in enumerate(packages):
if options[i]!='':
tex.write('\\usepackage[' + options[i] + ']{' + package + '}\n')
else:
tex.write('\\usepackage{' + package + '}\n')
tex.write('\n')
def writeLatexDocumentStarts(folder,filename):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('% DOCUMENT STARTS\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('\n')
tex.write('\\begin{document}\n')
tex.write('\n')
def writeLatexDocumentEnds(folder,filename):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('\\end{document}\n')
tex.write('\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('% DOCUMENT ENDS\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('%----------------------------------------------------------------------------------------------%\n')
tex.write('\n')
def writeLatexTikzPicStarts(folder,filename,options=''):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('%Tikz picture starts%\n')
tex.write('\n')
if options!='':
tex.write('\\begin{tikzpicture}[' + options + ']\n')
else:
tex.write('\\begin{tikzpicture}\n')
def writeLatexTikzPicEnds(folder,filename):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('\\end{tikzpicture}\n')
tex.write('%Tikz picture ends%\n')
tex.write('\n')
def writeLatexTikzAxisStarts(folder,filename,options):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('%Tikz axis starts%\n')
tex.write('\n')
if options!='':
tex.write('\\begin{axis}[' + options + ']\n')
else:
tex.write('\\begin{axis}\n')
def writeLatexTikzAxisEnds(folder,filename):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('\\end{axis}\n')
tex.write('%Tikz axis ends%\n')
tex.write('\n')
def writeLatexAddPlotTable(folder,filename,data,options):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\n')
tex.write('\\addplot')
if options!='':
tex.write('[' + options + ']\n')
tex.write('table{\n')
for element in data:
tex.write(str(element[0]) + ' ' + str(element[1]) + '\n')
tex.write('};\n')
def writeLatexSinglePlot(folder,filename,data,axoptions,dataoptions,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: writeLatexSinglePlot(folder,filename,data,axoptions,dataoptions,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create latex file',True)
createLatexFile(folder,filename,'standalone')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Write latex packages',True)
writeLatexPackages(folder,filename,['inputenc','pgfplots','tikz'],['utf8','',''])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Document starts',True)
writeLatexDocumentStarts(folder,filename)
writeLatexTikzPicStarts(folder,filename,'')
writeLatexTikzAxisStarts(folder,filename,axoptions)
writeLatexAddPlotTable(folder,filename,data,dataoptions)
writeLatexTikzAxisEnds(folder,filename)
writeLatexTikzPicEnds(folder,filename)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Document ends',True)
writeLatexDocumentEnds(folder,filename)
if 'Windows' in system():
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create Windows command file',True)
cmdfile = join(folder,'runlatex.cmd')
with open(cmdfile,'w') as cmd:
cmd.write('\n')
cmd.write('CD ' + folder + '\n')
cmd.write('\n')
cmd.write('pdflatex ' + join(folder,filename.split('.')[0] + '.tex') + ' -job-name=' + filename.split('.')[0] + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Executing Windows command file...',True)
try:
subprocess.call('cmd.exe /C ' + cmdfile)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(error),True)
sys.exc_clear()
elif 'Linux' in system():
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create Linux bash file',True)
bashfile = join(folder,'runlatex.sh')
with open(bashfile,'w') as bsh:
bsh.write('#!/bin/bash\n')
bsh.write('\n')
bsh.write('cd ' + folder + '\n')
bsh.write('\n')
bsh.write('pdflatex ' + join(folder,filename.split('.')[0] + '.tex') + ' -job-name=' + filename.split('.')[0] + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Executing Linux bash file...',True)
try:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Change permissions to ' + bashfile,True)
os.chmod(bashfile, 0o755)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Run bash file',True)
subprocess.call(bashfile)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(error),True)
sys.exc_clear()
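# A minimal usage sketch for writeLatexSinglePlot (illustrative only: folder,
# file name and data are hypothetical). It assembles a standalone pgfplots
# document from the (x,y) pairs and compiles it with pdflatex through the
# platform-specific script created above.
def exampleWriteLatexSinglePlot(plotDir,logpath):
    data = [[0.0,0.0],[1.0,2.0],[2.0,4.0]]
    writeLatexSinglePlot(plotDir,'linearPlot',data,'xlabel={$x$},ylabel={$y$}','red,smooth',logpath,'','    ')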
def writeLatexMultiplePlots(folder,filename,data,axoptions,dataoptions,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: writeLatexMultiplePlots(folder,filename,data,axoptions,dataoptions,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create latex file',True)
createLatexFile(folder,filename,'standalone')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Write latex packages',True)
writeLatexPackages(folder,filename,['inputenc','pgfplots','tikz'],['utf8','',''])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Document starts',True)
writeLatexDocumentStarts(folder,filename)
writeLatexTikzPicStarts(folder,filename,'')
writeLatexTikzAxisStarts(folder,filename,axoptions)
for k,datum in enumerate(data):
writeLatexAddPlotTable(folder,filename,datum,dataoptions[k])
writeLatexTikzAxisEnds(folder,filename)
writeLatexTikzPicEnds(folder,filename)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Document ends',True)
writeLatexDocumentEnds(folder,filename)
if 'Windows' in system():
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create Windows command file',True)
cmdfile = join(folder,'runlatex.cmd')
with open(cmdfile,'w') as cmd:
cmd.write('\n')
cmd.write('CD ' + folder + '\n')
cmd.write('\n')
cmd.write('pdflatex ' + join(folder,filename.split('.')[0] + '.tex') + ' -job-name=' + filename.split('.')[0] + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Executing Windows command file...',True)
try:
subprocess.call('cmd.exe /C ' + cmdfile)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(error),True)
sys.exc_clear()
elif 'Linux' in system():
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create Linux bash file',True)
bashfile = join(folder,'runlatex.sh')
with open(bashfile,'w') as bsh:
bsh.write('#!/bin/bash\n')
bsh.write('\n')
bsh.write('cd ' + folder + '\n')
bsh.write('\n')
bsh.write('pdflatex ' + join(folder,filename.split('.')[0] + '.tex') + ' -job-name=' + filename.split('.')[0] + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Executing Linux bash file...',True)
try:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Change permissions to ' + bashfile,True)
os.chmod(bashfile, 0o755)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Run bash file',True)
subprocess.call(bashfile)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + str(error),True)
sys.exc_clear()
def writeLatexGenericCommand(folder,filename,command,options,arguments):
with open(join(folder,filename + '.tex'),'a') as tex:
if options!='' and arguments!='':
tex.write('\\'+ command +'[' + options + ']{' + arguments + '}\n')
elif arguments!='':
tex.write('\\'+ command +'{' + arguments + '}\n')
else:
tex.write('\\'+ command + '\n')
tex.write('\n')
def writeLatexCustomLine(folder,filename,line):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write(line + '\n')
def writeLatexSetLength(folder,filename,length,value):
with open(join(folder,filename + '.tex'),'a') as tex:
tex.write('\\setlength' +'{' + '\\' + length + '}' +'{' + value + '}\n')
#===============================================================================#
#===============================================================================#
# General purpose functions
#===============================================================================#
#===============================================================================#
def rotateStress2D(sigXX,sigYY,tauXY,theta):
sig11 = sigXX*np.cos(theta)*np.cos(theta)+sigYY*np.sin(theta)*np.sin(theta)+2*tauXY*np.sin(theta)*np.cos(theta)
sig22 = sigXX*np.sin(theta)*np.sin(theta)+sigYY*np.cos(theta)*np.cos(theta)-2*tauXY*np.sin(theta)*np.cos(theta)
tau12 = (sigYY-sigXX)*np.sin(theta)*np.cos(theta)+tauXY*(np.cos(theta)*np.cos(theta)-np.sin(theta)*np.sin(theta))
return sig11,sig22,tau12
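# Sanity check for rotateStress2D (illustrative): rotating a pure uniaxial
# stress state sigXX by theta = 45 degrees must give sig11 = sig22 = sigXX/2
# and tau12 = -sigXX/2, in agreement with the plane transformation formulas
# implemented above.
def checkRotateStress2D():
    sig11,sig22,tau12 = rotateStress2D(100.0,0.0,0.0,0.25*np.pi)
    # expected, up to round-off: sig11 = 50.0, sig22 = 50.0, tau12 = -50.0
    return abs(sig11-50.0)<1e-8 and abs(sig22-50.0)<1e-8 and abs(tau12+50.0)<1e-8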
#===============================================================================#
#===============================================================================#
# Data extraction functions
#===============================================================================#
#===============================================================================#
def getPerfs(wd,sims,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: getPerfs(wd,sims,logfilepath,baselogindent,logindent)',True)
perf = []
perf.append(['PROJECT NAME','NUMBER OF CPUS [-]','USER TIME [s]','SYSTEM TIME [s]','USER TIME/TOTAL CPU TIME [%]','SYSTEM TIME/TOTAL CPU TIME [%]','TOTAL CPU TIME [s]','WALLCLOCK TIME [s]','WALLCLOCK TIME [m]','WALLCLOCK TIME [h]','WALLCLOCK TIME/TOTAL CPU TIME [%]','ESTIMATED FLOATING POINT OPERATIONS PER ITERATION [-]','MINIMUM REQUIRED MEMORY [MB]','MEMORY TO MINIMIZE I/O [MB]','TOTAL NUMBER OF ELEMENTS [-]','NUMBER OF ELEMENTS DEFINED BY THE USER [-]','NUMBER OF ELEMENTS DEFINED BY THE PROGRAM [-]','TOTAL NUMBER OF NODES [-]','NUMBER OF NODES DEFINED BY THE USER [-]','NUMBER OF NODES DEFINED BY THE PROGRAM [-]','TOTAL NUMBER OF VARIABLES [-]'])
print('')
for sim in sims:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Extract performances for simulation ' + sim,True)
usertime = 0
systemtime = 0
totalcpu = 0
wallclock = 0
floatops = 0
minMemory = 0
minIOmemory = 0
totEl = 0
userEl = 0
progEl = 0
totN = 0
userN = 0
progN = 0
totVar = 0
cpus = 0
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'In DAT file',True)
if exists(join(wd,sim+'.dat')):
with open(join(wd,sim+'.dat'),'r') as dat:
lines = dat.readlines()
for l,line in enumerate(lines):
if 'JOB TIME SUMMARY' in line:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - JOB TIME SUMMARY',True)
for subline in lines[l:]:
if 'USER TIME' in subline:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - USER TIME',True)
usertime = float(subline.split('=')[1])
elif 'SYSTEM TIME' in subline:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - SYSTEM TIME',True)
systemtime = float(subline.split('=')[1])
elif 'TOTAL CPU TIME' in subline:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - TOTAL CPU TIME',True)
totalcpu = float(subline.split('=')[1])
elif 'WALLCLOCK TIME' in subline:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - WALLCLOCK TIME',True)
wallclock = float(subline.split('=')[1])
elif 'M E M O R Y   E S T I M A T E' in line:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - MEMORY ESTIMATE',True)
values = lines[l+6].replace('\n','').split(' ')
while '' in values:
values.remove('')
floatops = float(values[1])
minMemory = float(values[2])
minIOmemory = float(values[3])
elif 'P R O B L E M   S I Z E' in line:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - PROBLEM SIZE',True)
words = lines[l+3].replace('\n','').split(' ')
while '' in words:
words.remove('')
totEl = int(words[-1])
words = lines[l+4].split(' ')
while '' in words:
words.remove('')
userEl = int(words[-1])
words = lines[l+5].split(' ')
while '' in words:
words.remove('')
progEl = int(words[-1])
words = lines[l+6].split(' ')
while '' in words:
words.remove('')
totN = int(words[-1])
words = lines[l+7].split(' ')
while '' in words:
words.remove('')
userN = int(words[-1])
words = lines[l+8].split(' ')
while '' in words:
words.remove('')
progN = int(words[-1])
words = lines[l+9].split(' ')
while '' in words:
words.remove('')
totVar = int(words[-1])
if exists(join(wd,sim+'.msg')):
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'In MSG file',True)
with open(join(wd,sim+'.msg'),'r') as msg:
lines = msg.readlines()
for line in lines:
if 'USING THE DIRECT SOLVER WITH' in line:
words = line.replace('\n','').split(' ')
while '' in words:
words.remove('')
cpus = int(words[words.index('PROCESSORS')-1])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' - PROCESSORS',True)
perf.append([sim,cpus,usertime,systemtime,usertime/totalcpu,systemtime/totalcpu,totalcpu,wallclock,wallclock/60.,wallclock/3600.,wallclock/totalcpu,floatops,minMemory,minIOmemory,totEl,userEl,progEl,totN,userN,progN,totVar])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Exiting function: getPerfs(wd,sims,logfilepath,baselogindent,logindent)',True)
return perf
def getFrame(odbObj,step,frame):
return odbObj.steps[odbObj.steps.keys()[step]].frames[frame]
def getFirstAndLastFrame(odbObj,step):
return getFrame(odbObj,step,0),getFrame(odbObj,step,-1)
def getFirstAndLastFrameLastStep(odbObj):
first, last = getFirstAndLastFrame(odbObj,-1)
return first, last
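# A minimal usage sketch for the frame helpers (illustrative only: the .odb
# path is hypothetical). openOdb is available through the odbAccess import
# at the top of this script.
def exampleGetLastFrame(odbPath):
    odbObj = openOdb(path=odbPath)
    firstFrame,lastFrame = getFirstAndLastFrameLastStep(odbObj)
    return firstFrame,lastFrame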
def getSingleNodeSet(odbObj,part,nodeSet):
if part==None:
result = odbObj.rootAssembly.nodeSets[nodeSet]
else:
result = odbObj.rootAssembly.instances[part].nodeSets[nodeSet]
return result
def getSingleElementSet(odbObj,part,elementSet):
if part==None:
result = odbObj.rootAssembly.elementSets[elementSet]
else:
result = odbObj.rootAssembly.instances[part].elementSets[elementSet]
return result
def getSingleSetNodeCoordinates(odbObj,step,frame,part,nodeSet):
frameObj = getFrame(odbObj,step,frame)
allCoords = frameObj.fieldOutputs['COORD'].getSubset(position=NODAL)
coords = allCoords.getSubset(region=odbObj.rootAssembly.instances[part].nodeSets[nodeSet])
return coords
def getMultipleSetsNodeCoordinates(odbObj,nodeSets):
coords = {}
for set in nodeSets:
step = set[0]
frame = set[1]
part = set[2]
nodeSet = set[3]
coords[nodeSet] = getSingleSetNodeCoordinates(odbObj,step,frame,part,nodeSet)
return coords
def extractAndSaveNodesCoordinates(odbObj,nodeSetsData,folder,filename,ext):
nodeSets = getMultipleSetsNodeCoordinates(odbObj,nodeSetsData)
with open(join(folder,filename + ext),'w') as csv:
if len(nodeSets[nodeSetsData[0][3]].values[0].data)==1:
string = 'X'
elif len(nodeSets[nodeSetsData[0][3]].values[0].data)==2:
string = 'X, Y'
elif len(nodeSets[nodeSetsData[0][3]].values[0].data)==3:
string = 'X, Y, Z'
csv.write('DATA\n')
csv.write('NODE SET' + ', ' + 'NODE TYPE, NODE LABEL, ' + string + '\n')
for set in nodeSetsData:
for value in nodeSets[set[3]].values:
line = ''
line = set[3] + ', ' + 'NODAL' + ', ' + str(value.nodeLabel)
for datum in value.data:
line += ', ' + str(datum)
csv.write(line + '\n')
def getAllNodes(odbObj,step,frameN):
allNodes = {}
frame = getFrame(odbObj,step,frameN)
nodesCoords = frame.fieldOutputs['COORD'].getSubset(position=NODAL)
for value in nodesCoords.values:
components = []
for component in value.data:
components.append(component)
allNodes[str(value.nodeLabel)] = components
return allNodes
def getAndSaveAllNodes(odbObj,step,frameN,folder,filename,ext):
allNodes = {}
frame = getFrame(odbObj,step,frameN)
nodesCoords = frame.fieldOutputs['COORD'].getSubset(position=NODAL)
for value in nodesCoords.values:
components = []
for component in value.data:
components.append(component)
allNodes[str(value.nodeLabel)] = components
with open(join(folder,filename + ext),'w') as csv:
if len(nodesCoords.values[0].data)==1:
string = 'X'
elif len(nodesCoords.values[0].data)==2:
string = 'X, Y'
elif len(nodesCoords.values[0].data)==3:
string = 'X, Y, Z'
csv.write('DATA\n')
csv.write('NODE TYPE, NODE LABEL, ' + string + '\n')
for value in nodesCoords.values:
line = ''
line = 'NODAL' + ', ' + str(value.nodeLabel)
for datum in value.data:
line += ', ' + str(datum)
csv.write(line + '\n')
return allNodes
def getAllIntPoints(odbObj,step,frameN):
allIntPoints = {}
frame = getFrame(odbObj,step,frameN)
intpointCoords = frame.fieldOutputs['COORD'].getSubset(position=INTEGRATION_POINT)
for value in intpointCoords.values:
components = []
for component in value.data:
components.append(component)
allIntPoints[str(value.elementLabel)+'-'+str(value.integrationPoint)] = components
return allIntPoints
def getAndSaveAllIntPoints(odbObj,step,frameN,folder,filename,ext):
allIntPoints = {}
frame = getFrame(odbObj,step,frameN)
intpointCoords = frame.fieldOutputs['COORD'].getSubset(position=INTEGRATION_POINT)
for value in intpointCoords.values:
components = []
for component in value.data:
components.append(component)
allIntPoints[str(value.elementLabel)+'-'+str(value.integrationPoint)] = components
with open(join(folder,filename + ext),'w') as csv:
if len(intpointCoords.values[0].data)==1:
string = 'X'
elif len(intpointCoords.values[0].data)==2:
string = 'X, Y'
elif len(intpointCoords.values[0].data)==3:
string = 'X, Y, Z'
csv.write('DATA\n')
csv.write('NODE TYPE, NODE LABEL, ' + string + '\n')
for value in intpointCoords.values:
line = ''
line = 'INTEGRATION_POINT' + ', ' + str(value.elementLabel)+'-'+str(value.integrationPoint)
for datum in value.data:
line += ', ' + str(datum)
csv.write(line + '\n')
return allIntPoints
def getFieldOutput(odbObj,step,frame,fieldOutput,subset=None,pos=None):
frame = getFrame(odbObj,step,frame)
if subset!=None:
if pos==1:
out = frame.fieldOutputs[fieldOutput].getSubset(region=subset,position=INTEGRATION_POINT)
elif pos==2:
out = frame.fieldOutputs[fieldOutput].getSubset(region=subset,position=NODAL)
elif pos==3:
out = frame.fieldOutputs[fieldOutput].getSubset(region=subset,position=ELEMENT_NODAL)
elif pos==4:
out = frame.fieldOutputs[fieldOutput].getSubset(region=subset,position=CENTROID)
else:
out = frame.fieldOutputs[fieldOutput].getSubset(region=subset)
else:
out = frame.fieldOutputs[fieldOutput]
return out
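# Usage sketch: the integer pos selects the extraction position --
# 1 = INTEGRATION_POINT, 2 = NODAL, 3 = ELEMENT_NODAL, 4 = CENTROID; any other
# value keeps the subset at its native position, and subset=None returns the
# whole field. someNodeSet below is a hypothetical odb set object.
#
# stressField = getFieldOutput(odbObj,'Load-Step',-1,'S',subset=someNodeSet,pos=1)
# for v in stressField.values:
#     print v.elementLabel, v.data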
def extractAndSaveFieldOutput(odbObj,step,frameN,folder,filename,ext,fieldOutput,subset=None,pos=None):
    nodes = getAllNodes(odbObj,step,frameN)
    intpoints = getAllIntPoints(odbObj,step,frameN)
    # delegate subset/position handling to getFieldOutput to avoid duplicating its logic
    out = getFieldOutput(odbObj,step,frameN,fieldOutput,subset,pos)
with open(join(folder,filename + ext),'w') as csv:
if fieldOutput== 'U' or fieldOutput=='RF':
if len(out.values[0].data)==1:
string = 'X, ' + fieldOutput + '1'
elif len(out.values[0].data)==2:
string = 'X, Y, ' + fieldOutput + '1' + ', ' + fieldOutput + '2'
elif len(out.values[0].data)==3:
string = 'X, Y, Z, ' + fieldOutput + '1' + ', ' + fieldOutput + '2' + ', ' + fieldOutput + '3'
elif fieldOutput== 'S' or fieldOutput=='EE':
if len(out.values[0].data)==2:
string = 'X, ' + fieldOutput + '11' + ', ' + fieldOutput + '12'
elif len(out.values[0].data)==4:
string = 'X, Y, ' + fieldOutput + '11' + ', ' + fieldOutput + '22' + ', ' + fieldOutput + '33' + ', ' + fieldOutput + '12'
elif len(out.values[0].data)==6:
string = 'X, Y, Z, ' + fieldOutput + '11' + ', ' + fieldOutput + '22' + ', ' + fieldOutput + '33' + ', ' + fieldOutput + '12' + ', ' + fieldOutput + '13' + ', ' + fieldOutput + '23'
csv.write('HEAT MAP\n')
        csv.write('POINT TYPE, POINT LABEL, ' + string + '\n')
for value in out.values:
if 'NODAL' in str(value.position):
line = ''
line = 'NODAL' + ', ' + str(value.nodeLabel)
for datum in nodes[str(value.nodeLabel)]:
line += ', ' + str(datum)
for datum in value.data:
line += ', ' + str(datum)
csv.write(line + '\n')
elif 'INTEGRATION_POINT' in str(value.position):
line = ''
line = 'INTEGRATION_POINT' + ', ' + str(value.elementLabel)+'-'+str(value.integrationPoint)
for datum in intpoints[str(value.elementLabel)+'-'+str(value.integrationPoint)]:
line += ', ' + str(datum)
for datum in value.data:
line += ', ' + str(datum)
csv.write(line + '\n')
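# Usage sketch (hypothetical paths): write the stress field of the last frame,
# together with the coordinates of its nodes/integration points, to a
# heat-map CSV for the whole model.
#
# extractAndSaveFieldOutput(odbObj,'Load-Step',-1,'C:/results','stress','.csv','S')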
def getDispVsReactionOnBoundarySubset(odbObj,step,frame,part,subset,component):
    nodeSet = getSingleNodeSet(odbObj,part,subset)
    disp = getFieldOutput(odbObj,step,frame,'U',nodeSet)
countdisp = 0
meandisp = 0
for value in disp.values:
countdisp += 1
meandisp += value.data[component]
meandisp /= countdisp
    force = getFieldOutput(odbObj,step,frame,'RF',nodeSet)
totalforce = 0
for value in force.values:
totalforce += value.data[component]
return meandisp,totalforce
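# Usage sketch: mean applied displacement and total reaction force along x
# (component 0) on a boundary node set, e.g. to build one point of a
# load-displacement curve per ODB.
#
# u,F = getDispVsReactionOnBoundarySubset(odbObj,'Load-Step',-1,'PART-1','RIGHTSIDE',0)
# print 'mean u1 = ' + str(u) + ', total RF1 = ' + str(F)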
def getJintegrals(wd,sim,ncontours,stepN):
with open(join(wd,sim + '.dat'),'r') as dat:
lines = dat.readlines()
for l,line in enumerate(lines):
if 'S T E P ' + str(stepN) + ' S T A T I C A N A L Y S I S' in line:
stepStart = l
values = []
for l,line in enumerate(lines):
if 'J - I N T E G R A L E S T I M A T E S' in line and l>stepStart:
            # use float division, otherwise Python 2 integer division makes
            # np.ceil miss the last, partially filled row of 5 contour values
            for n in range(1,int(np.ceil(ncontours/5.0))+1):
                if n>1:
                    temp = filter(lambda x: x!=' ' and x!='', lines[l+6+int(np.ceil(ncontours/5.0))+n].replace('\n','').split(' '))
                else:
                    temp = filter(lambda x: x!=' ' and x!='', lines[l+6+int(np.ceil(ncontours/5.0))+n].replace('\n','').split(' '))[2:]
for value in temp:
values.append(float(value))
break
return values
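# Usage sketch: the parser above assumes the .dat J-integral table lists the
# contour values in rows of 5; the returned list runs from the innermost to
# the outermost contour.
#
# Js = getJintegrals('C:/results','Job-1',10,1)
# for c,J in enumerate(Js):
#     print 'contour ' + str(c+1) + ': J = ' + str(J)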
#===============================================================================#
#===============================================================================#
# Data reporting functions
#===============================================================================#
#===============================================================================#
def writePerfToFile(od,outfile,performanceslist):
with open(join(od,outfile),'w') as csv:
for performances in performanceslist:
line = ''
for i,performance in enumerate(performances):
if i>0:
line += ','
line += str(performance)
csv.write(line + '\n')
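# Usage sketch: performanceslist can be any list of rows; every element of a
# row is stringified and comma-joined, so header rows and numeric rows mix freely.
#
# writePerfToFile('C:/results','timing.csv',[['phase','time [s]'],
#                                            ['mesh',12.5],
#                                            ['solve',341.2]])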
#===============================================================================#
#===============================================================================#
# Model creation functions
#===============================================================================#
#===============================================================================#
def reportSketchGeomElements(sketchGeometry,sketchVertices,logfilepath,baselogindent,logindent):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'The sketch has ' + str(len(sketchGeometry)) + ' geometric elements',True)
for key in sketchGeometry.keys():
        writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'sketchGeometry[' + str(key) + '] = ' + str(sketchGeometry[key]),True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'The sketch has ' + str(len(sketchVertices)) + ' vertices',True)
for key in sketchVertices.keys():
        writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'sketchVertices[' + str(key) + '] = ' + str(sketchVertices[key]),True)
def defineSetOfVerticesByBoundingSphere(modelpart,Cx,Cy,Cz,R,setName,logfile,indent,toScreen):
setOfVertices = modelpart.vertices.getByBoundingSphere(center=(Cx,Cy,Cz),radius=R)
modelpart.Set(vertices=setOfVertices, name=setName)
writeLineToLogFile(logfile,'a',indent + '-- ' + setName,toScreen)
def defineSetOfEdgesByClosestPoints(modelpart,Ax,Ay,Az,Bx,By,Bz,setName,logfile,indent,toScreen):
setOfEdges = modelpart.edges.getClosest(coordinates=((Ax,Ay,Az),(Bx,By,Bz),))[0][0]
modelpart.Set(edges = modelpart.edges[setOfEdges.index:setOfEdges.index+1], name=setName)
writeLineToLogFile(logfile,'a',indent + '-- ' + setName,toScreen)
def defineSetOfFacesByFindAt(modelpart,Ax,Ay,Az,setName,logfile,indent,toScreen):
setOfFaces = modelpart.faces.findAt(coordinates=(Ax,Ay,Az))
modelpart.Set(faces = modelpart.faces[setOfFaces.index:setOfFaces.index+1], name=setName)
writeLineToLogFile(logfile,'a',indent + '-- ' + setName,toScreen)
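# Usage sketch (hypothetical geometry): the three helpers above select,
# respectively, the vertices inside a bounding sphere, the edge closest to a
# pair of points, and the face found at a point, and register each as a named set.
#
# defineSetOfVerticesByBoundingSphere(part,0.0,0.0,0.0,0.01,'ORIGIN',logfile,'  ',True)
# defineSetOfEdgesByClosestPoints(part,-1.0,0.0,0.0,1.0,0.0,0.0,'LOWERSIDE',logfile,'  ',True)
# defineSetOfFacesByFindAt(part,0.5,0.5,0.0,'MATRIX',logfile,'  ',True)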
def create2Drectanglesketch(currentmodel,partName,partDimensionality,partType,sizeOfSheet,Ax,Ay,Bx,By,Cx,Cy,Dx,Dy,Clabel,Dlabel,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: create2Drectanglesketch(currentmodel,partName,partDimensionality,partType,sizeOfSheet,Ax,Ay,Bx,By,Cx,Cy,Dx,Dy,Clabel,Dlabel,logfilepath,baselogindent,logindent)',True)
# create sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Initialize sketch to draw the external shape of the RVE ...',True)
currentsketch = currentmodel.ConstrainedSketch(name='__profile__',sheetSize=sizeOfSheet)
currentsketch.setPrimaryObject(option=STANDALONE)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create rectangle
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw a rectangle ...',True)
currentsketch.rectangle(point1=(Ax, Ay), point2=(Bx,By))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# set dimension labels
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Set dimension labels ...',True)
v = currentsketch.vertices
currentsketch.ObliqueDimension(vertex1=v[0], vertex2=v[1], textPoint=(Cx,Cy), value=Clabel)
currentsketch.ObliqueDimension(vertex1=v[1], vertex2=v[2], textPoint=(Dx,Dy), value=Dlabel)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# assign to part
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign sketch geometry to the part ...',True)
    currentpart = currentmodel.Part(name=partName,dimensionality=partDimensionality,type=partType)
    currentpart = currentmodel.parts[partName]
    currentpart.BaseShell(sketch=currentsketch)
currentsketch.unsetPrimaryObject()
del currentmodel.sketches['__profile__']
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def create2DRVEregion(currentmodel,rvetype,L,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
    writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: create2DRVEregion(currentmodel,rvetype,L,logfilepath,baselogindent,logindent)',True)
# initialize parameters
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Initialize parameters ...',True)
if rvetype=='quarter':
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Model: quarter of RVE',True)
sizeOfSheet = 2*L
Ax = 0.0
Ay = 0.0
Bx = L
By = L
Cx = 0.5*L
Cy = 1.1*L
Dx = 1.1*L
Dy = 0.5*L
Clabel = L
Dlabel = L
elif rvetype=='half':
        writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Model: half of RVE',True)
sizeOfSheet = 3*L
Ax = -L
Ay = 0.0
Bx = L
By = L
Cx = 1.1*L
Cy = 0.5*L
Dx = 0.0
Dy = 1.1*L
Clabel = L
Dlabel = 2*L
else:
        writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Model: full RVE',True)
sizeOfSheet = 3*L
Ax = -L
Ay = -L
Bx = L
By = L
Cx = -1.1*L
Cy = 0.0
Dx = 0.0
Dy = 1.1*L
Clabel = 2*L
Dlabel = 2*L
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' sheet size = ' + str(sizeOfSheet),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Ax = ' + str(Ax),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Ay = ' + str(Ay),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Bx = ' + str(Bx),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' By = ' + str(By),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Cx = ' + str(Cx),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Cy = ' + str(Cy),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Dx = ' + str(Dx),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Dy = ' + str(Dy),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Clabel = ' + str(Clabel),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' Dlabel = ' + str(Dlabel),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Calling function: create2Drectanglesketch(currentmodel,partName,partDimensionality,partType,sizeOfSheet,Ax,Ay,Bx,By,Cx,Cy,Dx,Dy,Clabel,Dlabel,logfilepath,baselogindent,logindent)',True)
create2Drectanglesketch(currentmodel,'RVE',TWO_D_PLANAR,DEFORMABLE_BODY,sizeOfSheet,Ax,Ay,Bx,By,Cx,Cy,Dx,Dy,Clabel,Dlabel,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Successfully returned from function: create2Drectanglesketch(currentmodel,partName,partDimensionality,partType,sizeOfSheet,Ax,Ay,Bx,By,Cx,Cy,Dx,Dy,Clabel,Dlabel,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
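# Usage sketch: rvetype selects how much of the RVE is modeled ('quarter',
# 'half', anything else gives the full RVE) and L is the RVE half-width; the
# sketch corners and dimension labels are derived accordingly.
#
# create2DRVEregion(model,'quarter',1.0,logfilepath,'',' '*4)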
def add2DSymmCrack(currentmodel,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: add2DSymmCrack()',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def add2DFullCrack(currentmodel,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: add2DFullCrack()',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def add2DFiberSection(currentpart,currentmodel,planeToSketch,fiber,L,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: add2DFiberSection()',True)
# create geometrical transform to draw partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create geometrical transform to draw partition sketch ...',True)
transformToSketch = currentpart.MakeSketchTransform(sketchPlane=planeToSketch, sketchPlaneSide=SIDE1, origin=(0.0,0.0,0.0))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create sketch ...',True)
    fiberSketch = currentmodel.ConstrainedSketch(name='fiberSketch',sheetSize=3*L, gridSpacing=L/100.0, transform=transformToSketch)
    fiberSketch = currentmodel.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create reference to geometrical objects (faces, edges and vertices) of the partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create reference to geometrical objects of the partition sketch ...',True)
fiberGeometry = fiberSketch.geometry
fiberVertices = fiberSketch.vertices
fiberSketch.setPrimaryObject(option=SUPERIMPOSE)
    reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# Project reference onto sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Project reference onto sketch ...',True)
currentpart.projectReferencesOntoSketch(sketch=fiberSketch, filter=COPLANAR_EDGES)
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw fiber
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fiber ...',True)
fiberSketch.ArcByCenterEnds(center=(fiber['center'][0], fiber['center'][1]), point1=(fiber['center'][0]+fiber['Rf']*np.cos(fiber['arcStart']*np.pi/180.0), fiber['center'][1]+fiber['Rf']*np.sin(fiber['arcStart']*np.pi/180.0)), point2=(fiber['center'][0]+fiber['Rf']*np.cos(fiber['arcStop']*np.pi/180.0), fiber['center'][1]+fiber['Rf']*np.sin(fiber['arcStop']*np.pi/180.0)), direction=CLOCKWISE) # fiberGeometry[6]
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
    writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Identify indices of fiber and its center point ...',True)
    # keep track of the highest geometry key, so that the indices of entities
    # drawn later (circular sections, construction lines, radial segments) can
    # be recovered by incrementing it
    lastGeometryKey = max(fiberGeometry.keys())
    for key in fiberVertices.keys():
        if fiberVertices[key]['coords'][0]==0.0 and fiberVertices[key]['coords'][1]==0.0:
            fiberOriginIndex = key
if fiber['isCracked']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'A DEBOND is present at the fiber/matrix interface',True)
regionRadiuses = [fiber['R1'],fiber['R2'],fiber['R3'],fiber['R4']]
circsectionsIndeces = []
for R in regionRadiuses:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw circular section with R = ' + str(R) + ' ...',True)
fiberSketch.ArcByCenterEnds(center=(fiber['center'][0], fiber['center'][1]), point1=(fiber['center'][0]+R*np.cos(fiber['arcStart']*np.pi/180.0), fiber['center'][1]+R*np.sin(fiber['arcStart']*np.pi/180.0)), point2=(fiber['center'][0]+R*np.cos(fiber['arcStop']*np.pi/180.0), fiber['center'][1]+R*np.sin(fiber['arcStop']*np.pi/180.0)), direction=CLOCKWISE) # fiberGeometry[6]
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
lastGeometryKey += 1
circsectionsIndeces.append(lastGeometryKey)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if len(fiber['cracks'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'There are ' + str(len(fiber['cracks'])) + ' cracks',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'There is 1 crack',True)
for cNum,crackKey in enumerate(fiber['cracks'].keys()):
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Crack number ' + str(cNum),True)
crack = fiber['cracks'][crackKey]
angles = [crack['theta']+crack['deltatheta']]
if crack['isMeasured']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS SUBJECT TO MEASUREMENTS',True)
angles.append(crack['theta']+crack['deltatheta']-crack['deltapsi'])
angles.append(crack['theta']+crack['deltatheta']+crack['deltapsi'])
angles.append(crack['theta']+crack['deltatheta']+crack['deltapsi']+crack['deltaphi'])
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS NOT SUBJECT TO MEASUREMENTS',True)
if not crack['isSymm']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS NOT SYMMETRIC',True)
angles.append(crack['theta']-crack['deltatheta']-crack['deltapsi']-crack['deltaphi'])
angles.append(crack['theta']-crack['deltatheta']-crack['deltapsi'])
angles.append(crack['theta']-crack['deltatheta']+crack['deltapsi'])
angles.append(crack['theta']-crack['deltatheta'])
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS SYMMETRIC',True)
constructionLinesIndeces = []
for angle in angles:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw construction line at = ' + str(angle) + ' deg',True)
fiberSketch.ConstructionLine(point1=(fiber['center'][0], fiber['center'][1]), angle=angle)
lastGeometryKey += 1
constructionLinesIndeces.append(lastGeometryKey)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[fiberOriginIndex], entity2=fiberGeometry[lastGeometryKey],addUndoState=False)
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw segment at = ' + str(angle) + ' deg',True)
                # angles are stored in degrees, convert to radians for numpy trigonometry
                Ax = fiber['center'][0] + fiber['R2']*np.cos(angle*np.pi/180.0)
                Ay = fiber['center'][1] + fiber['R2']*np.sin(angle*np.pi/180.0)
                Bx = fiber['center'][0] + fiber['R3']*np.cos(angle*np.pi/180.0)
                By = fiber['center'][1] + fiber['R3']*np.sin(angle*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By))
lastGeometryKey += 1
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[circsectionsIndeces[1]], entity2=fiberGeometry[lastGeometryKey],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[circsectionsIndeces[1]],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[circsectionsIndeces[2]],addUndoState=False)
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
else:
        writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'NO DEBOND is present at the fiber/matrix interface',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign partition sketch to part ...',True)
    pickedFaces = currentpart.faces.findAt(coordinates=(0.5*L, 0.5*L, 0))
    currentpart.PartitionFaceBySketch(faces=pickedFaces, sketch=fiberSketch)
    fiberSketch.unsetPrimaryObject()
    del currentmodel.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def add2DFullFiber(currentpart,currentmodel,planeToSketch,fiber,L,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: add2DFullFiber()',True)
# create geometrical transform to draw partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create geometrical transform to draw partition sketch ...',True)
transformToSketch = currentpart.MakeSketchTransform(sketchPlane=planeToSketch, sketchPlaneSide=SIDE1, origin=(0.0,0.0,0.0))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create sketch ...',True)
    fiberSketch = currentmodel.ConstrainedSketch(name='fiberSketch',sheetSize=3*L, gridSpacing=L/100.0, transform=transformToSketch)
    fiberSketch = currentmodel.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create reference to geometrical objects (faces, edges and vertices) of the partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create reference to geometrical objects of the partition sketch ...',True)
fiberGeometry = fiberSketch.geometry
fiberVertices = fiberSketch.vertices
fiberSketch.setPrimaryObject(option=SUPERIMPOSE)
    reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# Project reference onto sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Project reference onto sketch ...',True)
currentpart.projectReferencesOntoSketch(sketch=fiberSketch, filter=COPLANAR_EDGES)
    reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw fiber
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fiber ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Fiber',True)
fiberSketch.CircleByCenterPerimeter(center=(fiber['center'][0], fiber['center'][1]), point1=(fiber['center'][0]+fiber['Rf']*np.cos(45.0*np.pi/180.0), fiber['center'][1]+fiber['Rf']*np.sin(45.0*np.pi/180.0)))
    reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
    writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Identify indices of fiber and its center point ...',True)
    # keep track of the highest geometry key, so that the indices of entities
    # drawn later (circular sections, construction lines, radial segments) can
    # be recovered by incrementing it
    lastGeometryKey = max(fiberGeometry.keys())
    for key in fiberVertices.keys():
        if fiberVertices[key]['coords'][0]==0.0 and fiberVertices[key]['coords'][1]==0.0:
            fiberOriginIndex = key
if fiber['isCracked']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'A DEBOND is present at the fiber/matrix interface',True)
regionRadiuses = [fiber['R1'],fiber['R2'],fiber['R3'],fiber['R4']]
circsectionsIndeces = []
for R in regionRadiuses:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw circular section with R = ' + str(R) + ' ...',True)
fiberSketch.CircleByCenterPerimeter(center=(fiber['center'][0], fiber['center'][1]), point1=(fiber['center'][0]+R*np.cos(45.0*np.pi/180.0), fiber['center'][1]+R*np.sin(45.0*np.pi/180.0)))
reportSketchGeomElements(fiberGeometry,fiberVertices,logfilepath,baselogindent + 2*logindent,logindent)
lastGeometryKey += 1
circsectionsIndeces.append(lastGeometryKey)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if len(fiber['cracks'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'There are ' + str(len(fiber['cracks'])) + ' cracks',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'There is 1 crack',True)
for cNum,crackKey in enumerate(fiber['cracks'].keys()):
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Crack number ' + str(cNum),True)
crack = fiber['cracks'][crackKey]
angles = [crack['theta']+crack['deltatheta']]
if crack['isMeasured']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS SUBJECT TO MEASUREMENTS',True)
angles.append(crack['theta']+crack['deltatheta']-crack['deltapsi'])
angles.append(crack['theta']+crack['deltatheta']+crack['deltapsi'])
angles.append(crack['theta']+crack['deltatheta']+crack['deltapsi']+crack['deltaphi'])
angles.append(crack['theta']-crack['deltatheta']-crack['deltapsi']-crack['deltaphi'])
angles.append(crack['theta']-crack['deltatheta']-crack['deltapsi'])
angles.append(crack['theta']-crack['deltatheta']+crack['deltapsi'])
angles.append(crack['theta']-crack['deltatheta'])
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The crack IS NOT SUBJECT TO MEASUREMENTS',True)
constructionLinesIndeces = []
for angle in angles:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw construction line at = ' + str(angle) + ' deg',True)
fiberSketch.ConstructionLine(point1=(fiber['center'][0], fiber['center'][1]), angle=angle)
lastGeometryKey += 1
constructionLinesIndeces.append(lastGeometryKey)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[fiberOriginIndex], entity2=fiberGeometry[lastGeometryKey],addUndoState=False)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw segment at = ' + str(angle) + ' deg',True)
                # angles are stored in degrees, convert to radians for numpy trigonometry
                Ax = fiber['center'][0] + fiber['R2']*np.cos(angle*np.pi/180.0)
                Ay = fiber['center'][1] + fiber['R2']*np.sin(angle*np.pi/180.0)
                Bx = fiber['center'][0] + fiber['R3']*np.cos(angle*np.pi/180.0)
                By = fiber['center'][1] + fiber['R3']*np.sin(angle*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By))
lastGeometryKey += 1
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[circsectionsIndeces[1]], entity2=fiberGeometry[lastGeometryKey],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[circsectionsIndeces[1]],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[circsectionsIndeces[2]],addUndoState=False)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'NO DEBOND is present at the fiber/matrix interface',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign partition sketch to part ...',True)
    pickedFaces = currentpart.faces.findAt(coordinates=(0.5*L, 0.5*L, 0))
    currentpart.PartitionFaceBySketch(faces=pickedFaces, sketch=fiberSketch)
    fiberSketch.unsetPrimaryObject()
    del currentmodel.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
def addMaterial(currentmodel,material,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: addMaterial(currentmodel,material,logfilepath,baselogindent,logindent)',True)
currentmodel.Material(name=material['name'])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'MATERIAL: ' + material['name'],True)
try:
values = material['elastic']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
if v>0 and v%8==0:
tuplelist.append(tuple(valuelist))
valuelist = []
valuelist.append(value)
tuplelist.append(tuple(valuelist))
        currentmodel.materials[material['name']].Elastic(type=material['elastic']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' ELASTIC',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + line,True)
except Exception:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' NO ELASTIC PROPERTY',True)
sys.exc_clear()
try:
values = material['density']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
if v>0 and v%8==0:
tuplelist.append(tuple(valuelist))
valuelist = []
valuelist.append(value)
tuplelist.append(tuple(valuelist))
        currentmodel.materials[material['name']].Density(table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' DENSITY',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + line,True)
    except Exception:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' NO DENSITY PROPERTY',True)
sys.exc_clear()
try:
values = material['thermalexpansion']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
if v>0 and v%8==0:
tuplelist.append(tuple(valuelist))
valuelist = []
valuelist.append(value)
tuplelist.append(tuple(valuelist))
        currentmodel.materials[material['name']].Expansion(type=material['thermalexpansion']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' THERMAL EXPANSION',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + line,True)
    except Exception:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' NO THERMAL EXPANSION PROPERTY',True)
sys.exc_clear()
try:
values = material['thermalconductivity']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
if v>0 and v%8==0:
tuplelist.append(tuple(valuelist))
valuelist = []
valuelist.append(value)
tuplelist.append(tuple(valuelist))
        currentmodel.materials[material['name']].Conductivity(type=material['thermalconductivity']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' THERMAL CONDUCTIVITY',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + line,True)
    except Exception:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + ' NO THERMAL CONDUCTIVITY PROPERTY',True)
sys.exc_clear()
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
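# Usage sketch (hypothetical values): the material dictionary carries one
# sub-dictionary per property; 'values' sequences longer than 8 entries are
# split into rows of 8, as required by the Abaqus table format.
#
# glassFiber = {'name':'glassFiber',
#               'elastic':{'type':ISOTROPIC,'values':[70000.0,0.2]},
#               'density':{'values':[2.5e-09]}}
# addMaterial(model,glassFiber,logfilepath,'',' '*4)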
def applyBC(currentmodel,bc,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: applyBC(currentmodel,bc,logfilepath,baselogindent,logindent)',True)
if bc['type'] in ['YSYMM','ysymm','Ysymm','ySymm']:
        currentmodel.YsymmBC(name=bc['name'], createStepName='Load-Step',
            region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[bc['set']], localCsys=None)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
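# Usage sketch: only Y-symmetry conditions are currently handled; bc['set']
# must be a set of the instance 'RVE-assembly' and the step 'Load-Step' must
# already exist in the model.
#
# applyBC(model,{'name':'SymmBC','type':'YSYMM','set':'LOWERSIDE'},logfilepath,'',' '*4)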
def applyLoad(currentmodel,parameters,load,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: applyLoad(currentmodel,load,logfilepath,baselogindent,logindent)',True)
    if load['type'] in ['appliedstrain','appliedStrain','Applied Strain','applied strain']:
        # an applied strain is converted to a displacement through the RVE half-width L
        L = parameters['geometry']['L']
        # ASSUMPTION: the loading direction is read from load['direction']; the
        # original code re-tested load['type'], which can never match both a
        # load-type string and a direction string
        if load['direction'] in ['x','X','1']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u1=load['value']*L, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
        elif load['direction'] in ['y','Y','2']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u2=load['value']*L, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
        elif load['direction'] in ['z','Z','3']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u3=load['value']*L, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
    elif load['type'] in ['applieddisplacement','appliedDisplacement','Applied Displacement','applied displacement']:
        if load['direction'] in ['x','X','1']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u1=load['value'], amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
        elif load['direction'] in ['y','Y','2']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u2=load['value'], amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
        elif load['direction'] in ['z','Z','3']:
            currentmodel.DisplacementBC(name=load['name'],createStepName='Load-Step',region=currentmodel.rootAssembly.instances['RVE-assembly'].sets[load['set']], u3=load['value'], amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
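# Usage sketch (the 'direction' key follows the assumption documented in
# applyLoad above): a 1% strain applied along x on the right side of the RVE.
#
# load = {'name':'strain-x','type':'appliedstrain','direction':'x',
#         'set':'RIGHTSIDE','value':0.01}
# applyLoad(model,parameters,load,logfilepath,'',' '*4)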
def assignMeshControls(thisModel,assemblyName,setName,elementShape,controls,logfile,indent,toScreen):
thisModel.rootAssembly.setMeshControls(regions=(thisModel.rootAssembly.instances[assemblyName].sets[setName].faces), elemShape=elementShape, technique=controls)
writeLineToLogFile(logfile,'a',indent + '-- ' + setName,toScreen)
def seedEdgeByNumber(thisModel,assemblyName,setName,seedsNumber,seedsConstraint,logfile,indent,toScreen):
thisModel.rootAssembly.seedEdgeByNumber(edges=(thisModel.rootAssembly.instances[assemblyName].sets[setName].edges), number=int(seedsNumber), constraint=seedsConstraint)
writeLineToLogFile(logfile,'a',indent + '-- ' + setName,toScreen)
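# Usage sketch: quad-dominated structured meshing on the matrix faces and a
# fixed number of seeds on a boundary edge set (constants from abaqusConstants).
#
# assignMeshControls(model,'RVE-assembly','MATRIX',QUAD_DOMINATED,STRUCTURED,logfile,'  ',True)
# seedEdgeByNumber(model,'RVE-assembly','LOWERSIDE',20,FINER,logfile,'  ',True)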
def listGeomElements(logfilepath,baselogindent,logindent,fiberGeometry,fiberVertices):
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'The sketch has ' + str(len(fiberGeometry)) + ' geometric elements',True)
for key in fiberGeometry.keys():
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'fiberGeometry[' + str(key) + '] = ' + str(fiberGeometry[key]),True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'The sketch has ' + str(len(fiberVertices)) + ' vertices',True)
for key in fiberVertices.keys():
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'fiberVertices[' + str(key) + '] = ' + str(fiberVertices[key]),True)
def createRVE(parameters,logfilepath,baselogindent,logindent):
#===============================================================================#
# Parameters
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: createRVE(parameters,logfilepath,logindent)',True)
# assign most used parameters to variables
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Read and assign most used parameters to variables ...',True)
baselogindent += logindent
wd = parameters['input']['wd']
caefilename = parameters['input']['caefilename'].split('.')[0] + '.cae'
modelname = parameters['input']['modelname']
L = parameters['geometry']['L']
Rf = parameters['geometry']['Rf']
if 'full' in parameters['geometry']['fiber']['type']:
CornerAy = -L
elif 'half' in parameters['geometry']['fiber']['type']:
CornerAy = 0.0
elif 'quarter' in parameters['geometry']['fiber']['type']:
CornerAy = 0.0
else:
CornerAy = 0.0
if 'boundingPly' in parameters['BC']['northSide']['type'] and 'adjacentFibers' in parameters['BC']['northSide']['type']:
nFibers = parameters['BC']['northSide']['nFibers']
tRatio = parameters['BC']['northSide']['tRatio']
Lply = nFibers*(2*L)
Ludply = tRatio*(2*(L + Lply))
CornerBy = L + Lply + Ludply
elif 'boundingPly' in parameters['BC']['northSide']['type'] or 'adjacentFibers' in parameters['BC']['northSide']['type']:
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
tRatio = parameters['BC']['northSide']['nFibers']
else:
tRatio = parameters['BC']['northSide']['tRatio']
Lply = tRatio*(2*L)
CornerBy = L + Lply
else:
CornerBy = L
if 'boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type'] and 'adjacentFibers' in parameters['BC']['rightSide']['type'] and 'adjacentFibers' in parameters['BC']['leftSide']['type']:
if 'quarter' in parameters['geometry']['fiber']['type']:
skipLineToLogFile(logfilepath,'a',True)
writeErrorToLogFile(logfilepath,'a','GEOMETRY','Clashing geometric requirements: asked for quarter fiber and for material on the left side. Review and select the appropriate.',True)
sys.exit(2)
wRatioRight = parameters['BC']['rightSide']['wRatio']
wRatioLeft = parameters['BC']['leftSide']['wRatio']
nFibersRight = parameters['BC']['rightSide']['nFibers']
nFibersLeft = parameters['BC']['leftSide']['nFibers']
wRightPly = nFibersRight*(2*L)
wLeftPly = nFibersLeft*(2*L)
wRightHPly = wRatioRight*(wRightPly+wLeftPly+2*L)
wLeftHPly = wRatioLeft*(wRightPly+wLeftPly+2*L)
CornerAx = -(L+wLeftPly+wLeftHPly)
CornerBx = L+wRightPly+wRightHPly
elif ('boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type']) or ('adjacentFibers' in parameters['BC']['rightSide']['type'] and 'adjacentFibers' in parameters['BC']['leftSide']['type']):
if 'quarter' in parameters['geometry']['fiber']['type']:
skipLineToLogFile(logfilepath,'a',True)
writeErrorToLogFile(logfilepath,'a','GEOMETRY','Clashing geometric requirements: asked for quarter fiber and for material on the left side. Review and select the appropriate.',True)
sys.exit(2)
if 'boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type']:
wRatioRight = parameters['BC']['rightSide']['wRatio']
wRatioLeft = parameters['BC']['leftSide']['wRatio']
            # assign to wRightPly/wLeftPly, which are the names used by the
            # corner computation below and by the summary logging
            wRightPly = wRatioRight*(2*L)
            wLeftPly = wRatioLeft*(2*L)
else:
wRatioRight = parameters['BC']['rightSide']['nFibers']
wRatioLeft = parameters['BC']['leftSide']['nFibers']
wRightPly = wRatioRight*(2*L)
wLeftPly = wRatioLeft*(2*L)
CornerAx = -(L+wLeftPly)
CornerBx = L+wRightPly
elif 'boundingPly' in parameters['BC']['rightSide']['type'] and 'adjacentFibers' in parameters['BC']['rightSide']['type']:
wRatioRight = parameters['BC']['rightSide']['wRatio']
nFibersRight = parameters['BC']['rightSide']['nFibers']
wRightPly = nFibersRight*(2*L)
        wRightHPly = wRatioRight*(wRightPly+2*L)
CornerAx = -L
CornerBx = L+wRightPly+wRightHPly
elif 'boundingPly' in parameters['BC']['leftSide']['type'] and 'adjacentFibers' in parameters['BC']['leftSide']['type']:
if 'quarter' in parameters['geometry']['fiber']['type']:
skipLineToLogFile(logfilepath,'a',True)
writeErrorToLogFile(logfilepath,'a','GEOMETRY','Clashing geometric requirements: asked for quarter fiber and for material on the left side. Review and select the appropriate.',True)
sys.exit(2)
wRatioLeft = parameters['BC']['leftSide']['wRatio']
nFibersLeft = parameters['BC']['leftSide']['nFibers']
wLeftPly = nFibersLeft*(2*L)
        wLeftHPly = wRatioLeft*(wLeftPly+2*L)
CornerAx = -(L+wLeftPly+wLeftHPly)
CornerBx = L
elif 'boundingPly' in parameters['BC']['rightSide']['type'] or 'adjacentFibers' in parameters['BC']['rightSide']['type']:
if 'boundingPly' in parameters['BC']['rightSide']['type']:
wRatioRight = parameters['BC']['rightSide']['wRatio']
else:
            wRatioRight = parameters['BC']['rightSide']['nFibers']
wRightPly = wRatioRight*(2*L)
CornerAx = -L
CornerBx = L+wRightPly
elif 'boundingPly' in parameters['BC']['leftSide']['type'] or 'adjacentFibers' in parameters['BC']['leftSide']['type']:
if 'quarter' in parameters['geometry']['fiber']['type']:
skipLineToLogFile(logfilepath,'a',True)
writeErrorToLogFile(logfilepath,'a','GEOMETRY','Clashing geometric requirements: asked for quarter fiber and for material on the left side. Review and select the appropriate.',True)
sys.exit(2)
if 'boundingPly' in parameters['BC']['leftSide']['type']:
wRatioLeft = parameters['BC']['leftSide']['wRatio']
else:
            wRatioLeft = parameters['BC']['leftSide']['nFibers']
wLeftPly = wRatioLeft*(2*L)
CornerAx = -(L+wLeftPly)
CornerBx = L
else:
CornerBx = L
if 'quarter' in parameters['geometry']['fiber']['type']:
CornerAx = 0.0
else:
CornerAx = -L
theta = parameters['geometry']['theta'] # in degrees !!!
deltatheta = parameters['geometry']['deltatheta'] # in degrees !!!
if np.abs(theta)>0.0:
if theta-deltatheta<=0.0:
skipLineToLogFile(logfilepath,'a',True)
            writeErrorToLogFile(logfilepath,'a','GEOMETRY','The provided debond geometry is not correct: the debond ends at or below the symmetry line at 0 degrees',True)
sys.exit(2)
elif theta+deltatheta>=180.0:
skipLineToLogFile(logfilepath,'a',True)
            writeErrorToLogFile(logfilepath,'a','GEOMETRY','The provided debond geometry is not correct: the debond ends at or beyond the symmetry line at 180 degrees',True)
sys.exit(2)
deltapsi = parameters['mesh']['size']['deltapsi'] # in degrees !!!
deltaphi = parameters['mesh']['size']['deltaphi'] # in degrees !!!
delta = parameters['mesh']['size']['delta'] # in degrees !!!
minElNum = parameters['mesh']['elements']['minElNum']
if ((theta+deltatheta-deltapsi)<=0.0 or (theta+deltatheta-deltapsi)/delta<minElNum) and ((theta+deltatheta+deltapsi+deltaphi)>=180.0 or (180.0-(theta+deltatheta+deltapsi+deltaphi))/delta<minElNum):
deltapsi = 0.6*((180.0-(theta+deltatheta))-np.max([0.5*(theta+deltatheta),0.1*(180.0-(theta+deltatheta)),minElNum*delta]))
deltaphi = 0.4*((180.0-(theta+deltatheta))-np.max([0.5*(theta+deltatheta),0.1*(180.0-(theta+deltatheta)),minElNum*delta]))
elif (theta+deltatheta-deltapsi)<=0.0 or (theta+deltatheta-deltapsi)/delta<minElNum:
deltapsi = (theta+deltatheta) - np.max([0.5*(theta+deltatheta),minElNum*delta])
elif (theta+deltatheta+deltapsi+deltaphi)>=180.0 or (180.0-(theta+deltatheta+deltapsi+deltaphi))/delta<minElNum:
deltapsi = 0.6*((180.0-(theta+deltatheta))-np.max([0.1*(180.0-(theta+deltatheta)),minElNum*delta]))
deltaphi = 0.4*((180.0-(theta+deltatheta))-np.max([0.1*(180.0-(theta+deltatheta)),minElNum*delta]))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Working directory: ' + wd,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'CAE database name: ' + caefilename,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Model name: ' + modelname,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'L: ' + str(L),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Rf: ' + str(Rf),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'L/Rf: ' + str(L/Rf),True)
if 'boundingPly' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Lply: ' + str(Lply),True)
if 'boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'wRightPly: ' + str(wRightPly),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'wLeftPly: ' + str(wLeftPly),True)
elif 'boundingPly' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'wRightPly: ' + str(wRightPly),True)
elif 'boundingPly' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'wLeftPly: ' + str(wLeftPly),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'theta: ' + str(theta),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'deltatheta: ' + str(deltatheta),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'deltapsi: ' + str(deltapsi),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'deltaphi: ' + str(deltaphi),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'delta: ' + str(delta),True)
    writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'minElNum: ' + str(minElNum),True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
#===============================================================================#
# Model database creation
#===============================================================================#
# if CAE database exists, open it; otherwise create new one
caefullpath = join(wd,caefilename)
if isfile(caefullpath):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'CAE database already exists. Opening it ...',True)
openMdb(caefullpath)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'CAE database does not exist. Creating it ...',True)
mdb.saveAs(caefullpath)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
# create and assign model object to variable for lighter code
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Creating model ' + modelname + ' ...',True)
mdb.Model(name=modelname)
model = mdb.models[modelname]
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
#===============================================================================#
# Parts creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Creating part ...',True)
# create sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Initialize sketch to draw the external shape of the RVE ...',True)
RVEsketch = model.ConstrainedSketch(name='__profile__',
sheetSize=3*L)
RVEsketch.setPrimaryObject(option=STANDALONE)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create rectangle
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw a rectangle ...',True)
RVEsketch.rectangle(point1=(CornerAx,CornerAy), point2=(CornerBx,CornerBy))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# set dimension labels
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Set dimension labels ...',True)
v = RVEsketch.vertices
    # the vertical edge runs from CornerAy to CornerBy, so its dimension is the difference
    RVEsketch.ObliqueDimension(vertex1=v[0], vertex2=v[1], textPoint=(1.1*CornerAx,0.5*CornerBy), value=(CornerBy-CornerAy))
RVEsketch.ObliqueDimension(vertex1=v[1], vertex2=v[2], textPoint=(0.0,1.1*CornerBy), value=(-CornerAx+CornerBx))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# assign to part
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign sketch geometry to the part ...',True)
RVEpart = model.Part(name='RVE',dimensionality=TWO_D_PLANAR,type=DEFORMABLE_BODY)
RVEpart = model.parts['RVE']
RVEpart.BaseShell(sketch=RVEsketch)
RVEsketch.unsetPrimaryObject()
del model.sketches['__profile__']
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create reference to geometrical objects (faces, edges and vertices) of the part
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create reference to geometrical objects of the part ...',True)
RVEfaces = RVEpart.faces
RVEedges = RVEpart.edges
RVEvertices = RVEpart.vertices
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create geometrical transform to draw partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create geometrical transform to draw partition sketch ...',True)
transformToSketch = RVEpart.MakeSketchTransform(sketchPlane=RVEfaces[0], sketchPlaneSide=SIDE1, origin=(0.0,0.0,0.0))
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create sketch ...',True)
fiberSketch = model.ConstrainedSketch(name='fiberSketch',sheetSize=3*L, gridSpacing=L/100.0, transform=transformToSketch)
fiberSketch = model.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# create reference to geometrical objects (faces, edges and vertices) of the partition sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create reference to geometrical objects of the partition sketch ...',True)
fiberGeometry = fiberSketch.geometry
fiberVertices = fiberSketch.vertices
fiberSketch.setPrimaryObject(option=SUPERIMPOSE)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# Project reference onto sketch
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Project reference onto sketch ...',True)
RVEpart.projectReferencesOntoSketch(sketch=fiberSketch, filter=COPLANAR_EDGES)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw fiber and circular sections for mesh generation
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fiber and circular sections for mesh generation ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Fiber',True)
if 'full' in parameters['geometry']['fiber']['type']:
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-Rf, 0.0))
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.75*Rf',True)
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-0.75*Rf, 0.0)) # fiberGeometry[7]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.5*Rf',True)
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-0.5*Rf, 0.0)) # fiberGeometry[8]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.25*Rf',True)
if L>2*Rf:
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-1.25*Rf, 0.0)) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-1.5*Rf, 0.0)) # fiberGeometry[10]
else:
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-(Rf+0.25*(L-Rf)), 0.0)) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0), point1=(-(Rf+0.5*(L-Rf)), 0.0)) # fiberGeometry[10]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
elif 'half' in parameters['geometry']['fiber']['type']:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-Rf, 0.0), point2=(Rf,0.0), direction=CLOCKWISE) # fiberGeometry[6]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.75*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-0.75*Rf, 0.0), point2=(0.75*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[7]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-0.5*Rf, 0.0), point2=(0.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[8]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.25*Rf',True)
if L>2*Rf:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-1.25*Rf, 0.0), point2=(1.25*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-1.5*Rf, 0.0), point2=(1.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[10]
else:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-(Rf+0.25*(L-Rf)), 0.0), point2=((Rf+0.25*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-(Rf+0.5*(L-Rf)), 0.0), point2=((Rf+0.5*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[10]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
elif 'quarter' in parameters['geometry']['fiber']['type']:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+Rf), point2=(Rf,0.0), direction=CLOCKWISE) # fiberGeometry[6]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.75*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+0.75*Rf), point2=(0.75*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[7]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+0.5*Rf), point2=(0.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[8]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.25*Rf',True)
if L>2*Rf:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+1.25*Rf), point2=(1.25*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+1.5*Rf), point2=(1.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[10]
else:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+(Rf+0.25*(L-Rf))), point2=((Rf+0.25*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(0.0, 0.0+(Rf+0.5*(L-Rf))), point2=((Rf+0.5*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[10]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
else:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-Rf, 0.0), point2=(Rf,0.0), direction=CLOCKWISE) # fiberGeometry[6]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.75*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-0.75*Rf, 0.0), point2=(0.75*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[7]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 0.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-0.5*Rf, 0.0), point2=(0.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[8]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.25*Rf',True)
if L>2*Rf:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-1.25*Rf, 0.0), point2=(1.25*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-1.5*Rf, 0.0), point2=(1.5*Rf,0.0), direction=CLOCKWISE) # fiberGeometry[10]
else:
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-(Rf+0.25*(L-Rf)), 0.0), point2=((Rf+0.25*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[9]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Arc at 1.5*Rf',True)
fiberSketch.ArcByCenterEnds(center=(0.0, 0.0), point1=(-(Rf+0.5*(L-Rf)), 0.0), point2=((Rf+0.5*(L-Rf)),0.0), direction=CLOCKWISE) # fiberGeometry[10]
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# calculate angles for construction lines
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Calculate angles for construction lines ...',True)
alpha = theta + deltatheta - deltapsi
beta = theta + deltatheta + deltapsi
gamma = theta + deltatheta + deltapsi + deltaphi
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw construction lines
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw construction lines ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(theta+deltatheta) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=(theta+deltatheta)) # fiberGeometry[11]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[11],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(alpha) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=alpha) # fiberGeometry[12]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[12],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(beta) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=beta) # fiberGeometry[13]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[13],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(gamma) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=gamma) # fiberGeometry[14]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[14],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw angular sections to identify the crack and for mesh generation
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw angular sections to identify the crack and for mesh generation ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute internal and external radii ...',True)
Rint = 0.75*Rf
if L>2*Rf:
Rext = 1.25*Rf
else:
Rext = Rf+0.25*(L-Rf)
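# Rint sits on the 0.75*Rf arc (fiberGeometry[7]) and Rext on the fourth circle
# (fiberGeometry[9]): 1.25*Rf for a large RVE, shrunk to Rf+0.25*(L-Rf) when
# L<=2*Rf, apparently to keep the crack-tip partition inside the cell.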
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create first circular section ...',True)
Ax = Rint*np.cos(alpha*np.pi/180.0)
Ay = 0.0+Rint*np.sin(alpha*np.pi/180.0)
Bx = Rext*np.cos(alpha*np.pi/180.0)
By = 0.0+Rext*np.sin(alpha*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[15]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[15],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[15], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[16], entity2=fiberGeometry[9],addUndoState=False)
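# Each radial cut runs between the inner and outer arcs, with endpoints from the
# usual polar-to-Cartesian mapping:
#   A = (Rint*cos(angle), Rint*sin(angle)),  B = (Rext*cos(angle), Rext*sin(angle))
# The perpendicular/coincident constraints then lock the segment radially onto
# fiberGeometry[7] (inner arc) and fiberGeometry[9] (outer arc).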
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create second circular section ...',True)
Ax = Rint*np.cos((theta+deltatheta)*np.pi/180.0)
Ay = 0.0+Rint*np.sin((theta+deltatheta)*np.pi/180.0)
Bx = Rext*np.cos((theta+deltatheta)*np.pi/180.0)
By = 0.0+Rext*np.sin((theta+deltatheta)*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[16]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[16],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[17], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[18], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create third circular section ...',True)
Ax = Rint*np.cos(beta*np.pi/180.0)
Ay = 0.0+Rint*np.sin(beta*np.pi/180.0)
Bx = Rext*np.cos(beta*np.pi/180.0)
By = 0.0+Rext*np.sin(beta*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[17]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[17],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[19], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[20], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create fourth circular section ...',True)
Ax = Rint*np.cos(gamma*np.pi/180.0)
Ay = 0.0+Rint*np.sin(gamma*np.pi/180.0)
Bx = Rext*np.cos(gamma*np.pi/180.0)
By = 0.0+Rext*np.sin(gamma*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[18]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[18],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[21], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[22], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# if theta != 0 or the fiber surface is fully modeled, construct the second crack tip
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Construct second crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Calculate angles for construction lines ...',True)
alpha = theta - deltatheta + deltapsi
beta = theta - deltatheta - deltapsi
gamma = theta - deltatheta - deltapsi - deltaphi
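# The lower tip mirrors the upper one: deltatheta, deltapsi and deltaphi enter
# with opposite signs, so the refined fan is reflected about the crack bisector
# at theta.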
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw construction lines
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw construction lines ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(theta-deltatheta) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=(theta-deltatheta)) # fiberGeometry[19]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[19],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(alpha) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=alpha) # fiberGeometry[20]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[20],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(beta) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=beta) # fiberGeometry[21]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[21],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Construction line at ' + str(gamma) + ' deg',True)
fiberSketch.ConstructionLine(point1=(0.0, 0.0), angle=gamma) # fiberGeometry[22]
fiberSketch.CoincidentConstraint(entity1=fiberVertices[6], entity2=fiberGeometry[22],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# draw angular sections to identify the crack and for mesh generation
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw angular sections to identify the crack and for mesh generation ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute internal and external radii ...',True)
Rint = 0.75*Rf
if L>2*Rf:
Rext = 1.25*Rf
else:
Rext = Rf+0.25*(L-Rf)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create first circular section ...',True)
Ax = Rint*np.cos(alpha*np.pi/180.0)
Ay = 0.0+Rint*np.sin(alpha*np.pi/180.0)
Bx = Rext*np.cos(alpha*np.pi/180.0)
By = 0.0+Rext*np.sin(alpha*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[23]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[23],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create second circular section ...',True)
# second tip: this section sits at theta-deltatheta (not theta+deltatheta)
Ax = Rint*np.cos((theta-deltatheta)*np.pi/180.0)
Ay = 0.0+Rint*np.sin((theta-deltatheta)*np.pi/180.0)
Bx = Rext*np.cos((theta-deltatheta)*np.pi/180.0)
By = 0.0+Rext*np.sin((theta-deltatheta)*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[24]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[24],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create third circular section ...',True)
Ax = Rint*np.cos(beta*np.pi/180.0)
Ay = 0.0+Rint*np.sin(beta*np.pi/180.0)
Bx = Rext*np.cos(beta*np.pi/180.0)
By = 0.0+Rext*np.sin(beta*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[25]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[25],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
#raw_input()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create fourth circular section ...',True)
Ax = Rint*np.cos(gamma*np.pi/180.0)
Ay = 0.0+Rint*np.sin(gamma*np.pi/180.0)
Bx = Rext*np.cos(gamma*np.pi/180.0)
By = 0.0+Rext*np.sin(gamma*np.pi/180.0)
fiberSketch.Line(point1=(Ax,Ay),point2=(Bx,By)) # fiberGeometry[26]
fiberSketch.PerpendicularConstraint(entity1=fiberGeometry[7], entity2=fiberGeometry[26],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-2], entity2=fiberGeometry[7],addUndoState=False)
fiberSketch.CoincidentConstraint(entity1=fiberVertices[-1], entity2=fiberGeometry[9],addUndoState=False)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
# if bounding ply is present, draw interface line
if 'boundingPly' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw upper ply interface line ...',True)
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
fiberSketch.Line(point1=(CornerAx,L+Lply),point2=(CornerBx,L+Lply))
else:
fiberSketch.Line(point1=(CornerAx,L),point2=(CornerBx,L))
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if 'boundingPly' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw ply right interface line ...',True)
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
fiberSketch.Line(point1=(CornerBx-wRightHPly,0.0),point2=(CornerBx-wRightHPly,L+Lply))
else:
fiberSketch.Line(point1=(CornerBx-wRightHPly,0.0),point2=(CornerBx-wRightHPly,L))
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if 'boundingPly' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw ply left interface line ...',True)
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
fiberSketch.Line(point1=(CornerAx+wLeftHPly,0.0),point2=(CornerAx+wLeftHPly,L+Lply))
else:
fiberSketch.Line(point1=(CornerAx+wLeftHPly,0.0),point2=(CornerAx+wLeftHPly,L))
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fibers above ...',True)
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
fiberSketch.CircleByCenterPerimeter(center=(0.0, 0.0+(nFiber+1)*2*L), point1=(Rf, 0.0+(nFiber+1)*2*L))
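# Adjacent fibers are laid on a square array of pitch 2*L (the cell size), so
# the n-th neighbouring fiber centre is offset by (n+1)*2*L from the origin.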
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for mFiber in range(0,parameters['BC']['rightSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
fiberSketch.CircleByCenterPerimeter(center=((mFiber+1)*2*L, 0.0+(nFiber+1)*2*L), point1=((mFiber+1)*2*L+Rf, 0.0+(nFiber+1)*2*L))
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
for mFiber in range(0,parameters['BC']['leftSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
fiberSketch.CircleByCenterPerimeter(center=(-(mFiber+1)*2*L, 0.0+(nFiber+1)*2*L), point1=(-(mFiber+1)*2*L+Rf, 0.0+(nFiber+1)*2*L))
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fibers to the right ...',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
for nFiber in range(0,parameters['BC']['rightSide']['nFibers']):
fiberSketch.CircleByCenterPerimeter(center=((nFiber+1)*2*L, 0.0), point1=((nFiber+1)*2*L-Rf, 0.0))
else:
for nFiber in range(0,parameters['BC']['rightSide']['nFibers']):
fiberSketch.ArcByCenterEnds(center=((nFiber+1)*2*L, 0.0), point1=((nFiber+1)*2*L-Rf, 0.0), point2=((nFiber+1)*2*L+Rf,0.0), direction=CLOCKWISE)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Draw fibers to the left ...',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
for nFiber in range(0,parameters['BC']['leftSide']['nFibers']):
fiberSketch.CircleByCenterPerimeter(center=(-(nFiber+1)*2*L, 0.0), point1=(-(nFiber+1)*2*L-Rf, 0.0))
else:
for nFiber in range(0,parameters['BC']['leftSide']['nFibers']):
fiberSketch.ArcByCenterEnds(center=(-(nFiber+1)*2*L, 0.0), point1=(-(nFiber+1)*2*L-Rf, 0.0), point2=(-(nFiber+1)*2*L+Rf,0.0), direction=CLOCKWISE)
listGeomElements(logfilepath,baselogindent+2*logindent,logindent,fiberGeometry,fiberVertices)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign partition sketch to part ...',True)
pickedFaces = RVEfaces.findAt(coordinates=(0.0, 0.5*L, 0))
RVEpart.PartitionFaceBySketch(faces=pickedFaces, sketch=fiberSketch)
fiberSketch.unsetPrimaryObject()
del model.sketches['fiberSketch']
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
mdb.save()
#-------------------#
#                   #
#    create sets    #
#                   #
#-------------------#
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create sets ...',True)
# create reference to geometric elements for lighter code
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create reference to geometric elements of the part ...',True)
RVEvertices = RVEpart.vertices
RVEedges = RVEpart.edges
RVEfaces = RVEpart.faces
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The part has ' + str(len(RVEvertices)) + ' vertices',True)
for e,element in enumerate(RVEvertices):
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'RVEvertices[' + str(e) + '] = ' + str(element),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The part has ' + str(len(RVEedges)) + ' edges',True)
for e,element in enumerate(RVEedges):
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'RVEedges[' + str(e) + '] = ' + str(element),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'The part has ' + str(len(RVEfaces)) + ' faces',True)
for e,element in enumerate(RVEfaces):
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'RVEfaces[' + str(e) + '] = ' + str(element),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# sets of vertices
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sets of vertices',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
defineSetOfVerticesByBoundingSphere(RVEpart,Rf*np.cos((theta+deltatheta)*np.pi/180),Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,0.0001*Rf,'CRACKTIPUP',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,Rf*np.cos((theta-deltatheta)*np.pi/180),Rf*np.sin((theta-deltatheta)*np.pi/180),0.0,0.0001*Rf,'CRACKTIPLOW',logfilepath,baselogindent + 4*logindent,True)
else:
defineSetOfVerticesByBoundingSphere(RVEpart,Rf*np.cos((theta+deltatheta)*np.pi/180),Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,0.0001*Rf,'CRACKTIP',logfilepath,baselogindent + 4*logindent,True)
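# Crack-tip vertices are captured with a tight bounding sphere (radius 1e-4*Rf)
# centred at the analytical tip location Rf*(cos(tipAngle), sin(tipAngle));
# the corner sets below use an even tighter tolerance of 1e-5*Rf.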
defineSetOfVerticesByBoundingSphere(RVEpart,CornerBx,CornerBy,0.0,0.00001*Rf,'NE-CORNER',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,CornerAx,CornerBy,0.0,0.00001*Rf,'NW-CORNER',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,CornerBx,0.0,0.0,0.00001*Rf,'SE-CORNER',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,CornerAx,0.0,0.0,0.00001*Rf,'SW-CORNER',logfilepath,baselogindent + 4*logindent,True)
if 'boundingPly' in parameters['BC']['northSide']['type']:
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
defineSetOfVerticesByBoundingSphere(RVEpart,CornerBx,L+Lply,0.0,0.00001*Rf,'PLYINTERFACE-NE-CORNER',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,CornerAx,L+Lply,0.0,0.00001*Rf,'PLYINTERFACE-NW-CORNER',logfilepath,baselogindent + 4*logindent,True)
else:
defineSetOfVerticesByBoundingSphere(RVEpart,CornerBx,L,0.0,0.00001*Rf,'PLYINTERFACE-NE-CORNER',logfilepath,baselogindent + 4*logindent,True)
defineSetOfVerticesByBoundingSphere(RVEpart,CornerAx,L,0.0,0.00001*Rf,'PLYINTERFACE-NW-CORNER',logfilepath,baselogindent + 4*logindent,True)
if 'boundingPly' in parameters['BC']['rightSide']['type']:
defineSetOfVerticesByBoundingSphere(RVEpart,L,L,0.0,0.00001*Rf,'RIGHTPLYINTERFACE-N-CORNER',logfilepath,baselogindent + 4*logindent,True)
if 'boundingPly' in parameters['BC']['leftSide']['type']:
defineSetOfVerticesByBoundingSphere(RVEpart,-L,L,0.0,0.00001*Rf,'LEFTPLYINTERFACE-N-CORNER',logfilepath,baselogindent + 4*logindent,True)
if 'structuralModel' in parameters['mesh']['elements'].keys():
if 'generalizedPlaneStrain' in parameters['mesh']['elements']['structuralModel']:
defineSetOfVerticesByBoundingSphere(RVEpart,0.0,-50.0,0.0,0.00001,'GPE-REF',logfilepath,baselogindent + 4*logindent,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# sets of edges
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sets of edges',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
alphaup = theta + deltatheta - deltapsi
betaup = theta + deltatheta + deltapsi
gammaup = theta + deltatheta + deltapsi + deltaphi
alphalow = theta - deltatheta + deltapsi
betalow = theta - deltatheta - deltapsi
gammalow = theta - deltatheta - deltapsi - deltaphi # sign of deltaphi mirrors the upper tip (cf. gammas below)
setsOfEdgesData = [[0.99*Rf*np.cos(theta*np.pi/180),0.99*Rf*np.sin(theta*np.pi/180),0.0,1.01*Rf*np.cos(theta*np.pi/180),1.01*Rf*np.sin(theta*np.pi/180),0.0,'CRACK-CENTER'], [0.99*Rf*np.cos((alphalow-0.5*deltapsi)*np.pi/180),0.99*Rf*np.sin((alphalow-0.5*deltapsi)*np.pi/180),0.0,1.01*Rf*np.cos((alphalow-0.5*deltapsi)*np.pi/180),1.01*Rf*np.sin((alphalow-0.5*deltapsi)*np.pi/180),0.0,'CRACK-LOWER'], [0.99*Rf*np.cos((alphaup+0.5*deltapsi)*np.pi/180),0.99*Rf*np.sin((alphaup+0.5*deltapsi)*np.pi/180),0.0,1.01*Rf*np.cos((alphaup+0.5*deltapsi)*np.pi/180),1.01*Rf*np.sin((alphaup+0.5*deltapsi)*np.pi/180),0.0,'CRACK-UPPER']]
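# Each entry reads [x1,y1,z1,x2,y2,z2,setName]: a pair of probe points placed
# just inside (0.99*Rf) and just outside (1.01*Rf) the fiber surface, bracketing
# the debonded arc segment that defineSetOfEdgesByClosestPoints should pick up.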
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
RVEpart.SetByBoolean(name='CRACK', sets=[RVEpart.sets['CRACK-CENTER'],RVEpart.sets['CRACK-LOWER'],RVEpart.sets['CRACK-UPPER']])
else:
alpha = theta + deltatheta - deltapsi
beta = theta + deltatheta + deltapsi
gamma = theta + deltatheta + deltapsi + deltaphi
setsOfEdgesData = [[0.99*Rf*np.cos(0.5*alpha*np.pi/180),0.99*Rf*np.sin(0.5*alpha*np.pi/180),0.0,1.01*Rf*np.cos(0.5*alpha*np.pi/180),1.01*Rf*np.sin(0.5*alpha*np.pi/180),0.0,'CRACK-LOWER'], [0.99*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),0.99*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,1.01*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),1.01*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,'CRACK-UPPER']]
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
RVEpart.SetByBoolean(name='CRACK', sets=[RVEpart.sets['CRACK-LOWER'],RVEpart.sets['CRACK-UPPER']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- CRACK',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
lowerSideSets = []
setsOfEdgesData = [[0.001*Rf,-L+0.001,0.0,0.001*Rf,-L-0.001,0.0,'LOWERSIDE-CENTER']]
if 'boundingPly' in parameters['BC']['rightSide']['type']:
setsOfEdgesData.append([0.99*CornerBx,0.001,0.0,0.99*CornerBx,-0.001,0.0,'LOWERSIDE-RIGHT-HOMOGENIZED-PLY'])
if 'boundingPly' in parameters['BC']['leftSide']['type']:
setsOfEdgesData.append([0.99*CornerAx,0.001,0.0,0.99*CornerAx,-0.001,0.0,'LOWERSIDE-LEFT-HOMOGENIZED-PLY'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
lowerSideSets.append(RVEpart.sets[setOfEdgesData[-1]])
RVEpart.SetByBoolean(name='LOWERSIDE', sets=lowerSideSets)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE',True)
else:
setsOfEdgesData = [[0.001*Rf,0.001,0.0,0.001*Rf,-0.001,0.0,'LOWERSIDE-CENTER'],
[0.65*Rf,0.001,0.0,0.65*Rf,-0.001,0.0,'LOWERSIDE-FIRSTRING-RIGHT'],
[0.99*L,0.001,0.0,0.99*L,-0.001,0.0,'LOWERSIDE-MATRIXBULK-RIGHT']]
if 'half' in parameters['geometry']['fiber']['type']:
setsOfEdgesData.append([-0.65*Rf,0.001,0.0,-0.65*Rf,-0.001,0.0,'LOWERSIDE-FIRSTRING-LEFT'])
setsOfEdgesData.append([-0.99*L,0.001,0.0,-0.99*L,-0.001,0.0,'LOWERSIDE-MATRIXBULK-LEFT'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
if 'half' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='LOWERSIDE-FIRSTRING', sets=[RVEpart.sets['LOWERSIDE-FIRSTRING-RIGHT'],RVEpart.sets['LOWERSIDE-FIRSTRING-LEFT']])
else:
RVEpart.SetByBoolean(name='LOWERSIDE-FIRSTRING', sets=[RVEpart.sets['LOWERSIDE-FIRSTRING-RIGHT']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE-FIRSTRING',True)
setsOfEdgesData = [[0.85*Rf,0.001,0.0,0.85*Rf,-0.001,0.0,'LOWERSIDE-SECONDRING-RIGHT']]
if 'half' in parameters['geometry']['fiber']['type']:
setsOfEdgesData.append([-0.85*Rf,0.001,0.0,-0.85*Rf,-0.001,0.0,'LOWERSIDE-SECONDRING-LEFT'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
if 'half' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='LOWERSIDE-SECONDRING', sets=[RVEpart.sets['LOWERSIDE-SECONDRING-RIGHT'],RVEpart.sets['LOWERSIDE-SECONDRING-LEFT']])
else:
RVEpart.SetByBoolean(name='LOWERSIDE-SECONDRING', sets=[RVEpart.sets['LOWERSIDE-SECONDRING-RIGHT']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE-SECONDRING',True)
if L>2*Rf:
R1 = (1+0.5*0.25)*Rf
R2 = (1.25+0.5*0.25)*Rf
else:
R1 = Rf+0.5*0.25*(L-Rf)
R2 = Rf+1.5*0.25*(L-Rf)
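# R1 and R2 are the mid-radii of the third and fourth lower-side rings, halfway
# between consecutive partition circles. For illustration, with L>2*Rf the
# circles sit at Rf, 1.25*Rf and 1.5*Rf, giving R1 = 1.125*Rf and R2 = 1.375*Rf.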
setsOfEdgesData = [[R1,0.001,0.0,R1,-0.001,0.0,'LOWERSIDE-THIRDRING-RIGHT']]
if 'half' in parameters['geometry']['fiber']['type']:
setsOfEdgesData.append([-R1,0.001,0.0,-R1,-0.001,0.0,'LOWERSIDE-THIRDRING-LEFT'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
if 'half' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='LOWERSIDE-THIRDRING', sets=[RVEpart.sets['LOWERSIDE-THIRDRING-RIGHT'],RVEpart.sets['LOWERSIDE-THIRDRING-LEFT']])
else:
RVEpart.SetByBoolean(name='LOWERSIDE-THIRDRING', sets=[RVEpart.sets['LOWERSIDE-THIRDRING-RIGHT']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE-THIRDRING',True)
setsOfEdgesData = [[R2,0.001,0.0,R2,-0.001,0.0,'LOWERSIDE-FOURTHRING-RIGHT']]
if 'half' in parameters['geometry']['fiber']['type']:
setsOfEdgesData.append([-R2,0.001,0.0,-R2,-0.001,0.0,'LOWERSIDE-FOURTHRING-LEFT'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
if 'half' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='LOWERSIDE-FOURTHRING', sets=[RVEpart.sets['LOWERSIDE-FOURTHRING-RIGHT'],RVEpart.sets['LOWERSIDE-FOURTHRING-LEFT']])
else:
RVEpart.SetByBoolean(name='LOWERSIDE-FOURTHRING', sets=[RVEpart.sets['LOWERSIDE-FOURTHRING-RIGHT']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE-FOURTHRING',True)
lowerSideSets = [RVEpart.sets['LOWERSIDE-CENTER'],RVEpart.sets['LOWERSIDE-FIRSTRING'],RVEpart.sets['LOWERSIDE-SECONDRING'],RVEpart.sets['LOWERSIDE-THIRDRING'],RVEpart.sets['LOWERSIDE-FOURTHRING'],RVEpart.sets['LOWERSIDE-MATRIXBULK-RIGHT']] + ([RVEpart.sets['LOWERSIDE-MATRIXBULK-LEFT']] if 'half' in parameters['geometry']['fiber']['type'] else []) # the LEFT bulk set exists only for half-fiber models
setsOfEdgesData = []
if 'boundingPly' in parameters['BC']['rightSide']['type']:
setsOfEdgesData.append([0.99*CornerBx,0.001,0.0,0.99*CornerBx,-0.001,0.0,'LOWERSIDE-RIGHT-HOMOGENIZED-PLY'])
if 'boundingPly' in parameters['BC']['leftSide']['type']:
setsOfEdgesData.append([0.99*CornerAx,0.001,0.0,0.99*CornerAx,-0.001,0.0,'LOWERSIDE-LEFT-HOMOGENIZED-PLY'])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for nFiber in range(0,parameters['BC']['rightSide']['nFibers']):
setsOfEdgesData.append([(nFiber+1)*2*L,0.001,0.0,(nFiber+1)*2*L,-0.001,0.0,'LOWERSIDE-RIGHT-FIBER'+str(nFiber+1)])
setsOfEdgesData.append([(nFiber+1)*2*L+1.01*Rf,0.001,0.0,(nFiber+1)*2*L+1.01*Rf,-0.001,0.0,'LOWERSIDE-RIGHT-FIBER'+str(nFiber+1)+'-RIGHTMAT'])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
for nFiber in range(0,parameters['BC']['leftSide']['nFibers']):
setsOfEdgesData.append([-(nFiber+1)*2*L,0.001,0.0,-(nFiber+1)*2*L,-0.001,0.0,'LOWERSIDE-LEFT-FIBER'+str(nFiber+1)])
setsOfEdgesData.append([-(nFiber+1)*2*L-1.01*Rf,0.001,0.0,-(nFiber+1)*2*L-1.01*Rf,-0.001,0.0,'LOWERSIDE-LEFT-FIBER'+str(nFiber+1)+'-LEFTMAT'])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
lowerSideSets.append(RVEpart.sets[setOfEdgesData[-1]])
RVEpart.SetByBoolean(name='LOWERSIDE', sets=lowerSideSets)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LOWERSIDE',True)
setsOfEdgesData = [[0.49*Rf*np.cos((theta+deltatheta)*np.pi/180),0.49*Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,0.51*Rf*np.cos((theta+deltatheta)*np.pi/180),0.51*Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,'FIRSTCIRCLE']]
if L>2*Rf:
setsOfEdgesData.append([1.49*Rf*np.cos((theta+deltatheta)*np.pi/180),1.49*Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,1.51*Rf*np.cos((theta+deltatheta)*np.pi/180),1.51*Rf*np.sin((theta+deltatheta)*np.pi/180),0.0,'FIFTHCIRCLE'])
else:
setsOfEdgesData.append([(Rf+0.49*(L-Rf))*np.cos((theta+deltatheta)*np.pi/180),(Rf+0.49*(L-Rf))*np.sin((theta+deltatheta)*np.pi/180),0.0,(Rf+0.51*(L-Rf))*np.cos((theta+deltatheta)*np.pi/180),(Rf+0.51*(L-Rf))*np.sin((theta+deltatheta)*np.pi/180),0.0,'FIFTHCIRCLE'])
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
ctNames = ['CTUP','CTLOW']
circleNames = ['SECOND','THIRD','FOURTH']
alphas = [theta + deltatheta - deltapsi,theta - deltatheta + deltapsi]
ctAngles = [theta + deltatheta,theta - deltatheta]
betas = [theta + deltatheta + deltapsi,theta - deltatheta - deltapsi]
gammas = [theta + deltatheta + deltapsi + deltaphi,theta - deltatheta - deltapsi - deltaphi]
incs = [1.0,-1.0]
if L>2*Rf:
R4 = 1.25*Rf
else:
R4 = Rf+0.25*(L-Rf)
radiuses = [0.75*Rf,Rf,R4]
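# For every partition circle crossing the refined sector (0.75*Rf, Rf and R4)
# and for each crack tip, probe points are nudged by +/-1 deg (incs) off the
# bounding radial lines, so each arc segment between two partitions receives
# its own set.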
for rIndex,rValue in enumerate(radiuses):
for aIndex,aValue in enumerate(alphas):
setsOfEdgesData.append([0.99*rValue*np.cos((alphas[aIndex]+incs[aIndex])*np.pi/180),0.99*rValue*np.sin((alphas[aIndex]+incs[aIndex])*np.pi/180),0.0,1.01*rValue*np.cos((alphas[aIndex]+incs[aIndex])*np.pi/180),1.01*rValue*np.sin((alphas[aIndex]+incs[aIndex])*np.pi/180),0.0,circleNames[rIndex]+'CIRCLE-UPPERCRACK-'+ctNames[aIndex]])
setsOfEdgesData.append([0.99*rValue*np.cos((ctAngles[aIndex]+incs[aIndex])*np.pi/180),0.99*rValue*np.sin((ctAngles[aIndex]+incs[aIndex])*np.pi/180),0.0,1.01*rValue*np.cos((ctAngles[aIndex]+incs[aIndex])*np.pi/180),1.01*rValue*np.sin((ctAngles[aIndex]+incs[aIndex])*np.pi/180),0.0,circleNames[rIndex]+'CIRCLE-FIRSTBOUNDED-'+ctNames[aIndex]])
setsOfEdgesData.append([0.99*rValue*np.cos((betas[aIndex]+incs[aIndex])*np.pi/180),0.99*rValue*np.sin((betas[aIndex]+incs[aIndex])*np.pi/180),0.0,1.01*rValue*np.cos((betas[aIndex]+incs[aIndex])*np.pi/180),1.01*rValue*np.sin((betas[aIndex]+incs[aIndex])*np.pi/180),0.0,circleNames[rIndex]+'CIRCLE-SECONDBOUNDED-'+ctNames[aIndex]])
setsOfEdgesData.append([0.99*rValue*np.cos(theta*np.pi/180),0.99*rValue*np.sin(theta*np.pi/180),0.0,1.01*rValue*np.cos(theta*np.pi/180),1.01*rValue*np.sin(theta*np.pi/180),0.0,circleNames[rIndex]+'CIRCLE-CENTERCRACK'])
setsOfEdgesData.append([0.99*rValue*np.cos(1.025*gammas[0]*np.pi/180),0.99*rValue*np.sin(1.025*gammas[0]*np.pi/180),0.0,1.01*rValue*np.cos(1.025*gammas[0]*np.pi/180),1.01*rValue*np.sin(1.025*gammas[0]*np.pi/180),0.0,circleNames[rIndex]+'CIRCLE-RESTBOUNDED'])
for aIndex,aValue in enumerate(alphas):
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*alphas[aIndex]*np.pi/180),0.85*Rf*np.sin(0.99*alphas[aIndex]*np.pi/180),0.0,0.85*Rf*np.cos(1.01*alphas[aIndex]*np.pi/180),0.85*Rf*np.sin(1.01*alphas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-FIRSTFIBER-'+ctNames[aIndex]])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*alphas[aIndex]*np.pi/180),1.05*Rf*np.sin(0.99*alphas[aIndex]*np.pi/180),0.0,1.05*Rf*np.cos(1.01*alphas[aIndex]*np.pi/180),1.05*Rf*np.sin(1.01*alphas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-FIRSTMATRIX-'+ctNames[aIndex]])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*ctAngles[aIndex]*np.pi/180),0.85*Rf*np.sin(0.99*ctAngles[aIndex]*np.pi/180),0.0,0.85*Rf*np.cos(1.01*ctAngles[aIndex]*np.pi/180),0.85*Rf*np.sin(1.01*ctAngles[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-SECONDFIBER-'+ctNames[aIndex]])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*ctAngles[aIndex]*np.pi/180),1.05*Rf*np.sin(0.99*ctAngles[aIndex]*np.pi/180),0.0,1.05*Rf*np.cos(1.01*ctAngles[aIndex]*np.pi/180),1.05*Rf*np.sin(1.01*ctAngles[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-SECONDMATRIX-'+ctNames[aIndex]])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*betas[aIndex]*np.pi/180),0.85*Rf*np.sin(0.99*betas[aIndex]*np.pi/180),0.0,0.85*Rf*np.cos(1.01*betas[aIndex]*np.pi/180),0.85*Rf*np.sin(1.01*betas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-THIRDFIBER-'+ctNames[aIndex]])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*betas[aIndex]*np.pi/180),1.05*Rf*np.sin(0.99*betas[aIndex]*np.pi/180),0.0,1.05*Rf*np.cos(1.01*betas[aIndex]*np.pi/180),1.05*Rf*np.sin(1.01*betas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-THIRDMATRIX-'+ctNames[aIndex]])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*gammas[aIndex]*np.pi/180),0.85*Rf*np.sin(0.99*gammas[aIndex]*np.pi/180),0.0,0.85*Rf*np.cos(1.01*gammas[aIndex]*np.pi/180),0.85*Rf*np.sin(1.01*gammas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-FOURTHFIBER-'+ctNames[aIndex]])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*gammas[aIndex]*np.pi/180),1.05*Rf*np.sin(0.99*gammas[aIndex]*np.pi/180),0.0,1.05*Rf*np.cos(1.01*gammas[aIndex]*np.pi/180),1.05*Rf*np.sin(1.01*gammas[aIndex]*np.pi/180),0.0,'TRANSVERSALCUT-FOURTHMATRIX-'+ctNames[aIndex]])
else:
alpha = theta + deltatheta - deltapsi
beta = theta + deltatheta + deltapsi
gamma = theta + deltatheta + deltapsi + deltaphi
setsOfEdgesData.append([0.74*Rf*np.cos(0.5*alpha*np.pi/180),0.74*Rf*np.sin(0.5*alpha*np.pi/180),0.0,0.76*Rf*np.cos(0.5*alpha*np.pi/180),0.76*Rf*np.sin(0.5*alpha*np.pi/180),0.0,'SECONDCIRCLE-LOWERCRACK'])
setsOfEdgesData.append([0.74*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),0.74*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,0.76*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),0.76*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,'SECONDCIRCLE-UPPERCRACK'])
setsOfEdgesData.append([0.74*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.74*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,0.76*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.76*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,'SECONDCIRCLE-FIRSTBOUNDED'])
setsOfEdgesData.append([0.74*Rf*np.cos((beta+0.5*deltaphi)*np.pi/180),0.74*Rf*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,0.76*Rf*np.cos((beta+0.5*deltaphi)*np.pi/180),0.76*Rf*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,'SECONDCIRCLE-SECONDBOUNDED'])
setsOfEdgesData.append([0.74*Rf*np.cos(1.025*gamma*np.pi/180),0.74*Rf*np.sin(1.025*gamma*np.pi/180),0.0,0.76*Rf*np.cos(1.025*gamma*np.pi/180),0.76*Rf*np.sin(1.025*gamma*np.pi/180),0.0,'SECONDCIRCLE-RESTBOUNDED'])
setsOfEdgesData.append([0.99*Rf*np.cos(0.5*alpha*np.pi/180),0.99*Rf*np.sin(0.5*alpha*np.pi/180),0.0,1.01*Rf*np.cos(0.5*alpha*np.pi/180),1.01*Rf*np.sin(0.5*alpha*np.pi/180),0.0,'THIRDCIRCLE-LOWERCRACK'])
setsOfEdgesData.append([0.99*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),0.99*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,1.01*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180),1.01*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,'THIRDCIRCLE-UPPERCRACK'])
setsOfEdgesData.append([0.99*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.99*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,1.01*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),1.01*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,'THIRDCIRCLE-FIRSTBOUNDED'])
setsOfEdgesData.append([0.99*Rf*np.cos((beta+0.5*deltaphi)*np.pi/180),0.99*Rf*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,1.01*Rf*np.cos((beta+0.5*deltaphi)*np.pi/180),1.01*Rf*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,'THIRDCIRCLE-SECONDBOUNDED'])
setsOfEdgesData.append([0.99*Rf*np.cos((gamma+0.5*(180.0-gamma))*np.pi/180),0.99*Rf*np.sin((gamma+0.5*(180.0-gamma))*np.pi/180),0.0,1.01*Rf*np.cos((gamma+0.5*(180.0-gamma))*np.pi/180),1.01*Rf*np.sin((gamma+0.5*(180.0-gamma))*np.pi/180),0.0,'THIRDCIRCLE-RESTBOUNDED'])
if L>2*Rf:
R4 = 1.25*Rf
else:
R4 = Rf+0.25*(L-Rf)
setsOfEdgesData.append([0.99*R4*np.cos(0.5*alpha*np.pi/180),0.99*R4*np.sin(0.5*alpha*np.pi/180),0.0,1.01*R4*np.cos(0.5*alpha*np.pi/180),1.01*R4*np.sin(0.5*alpha*np.pi/180),0.0,'FOURTHCIRCLE-LOWERCRACK'])
setsOfEdgesData.append([0.99*R4*np.cos((alpha+0.5*deltapsi)*np.pi/180),0.99*R4*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,1.01*R4*np.cos((alpha+0.5*deltapsi)*np.pi/180),1.01*R4*np.sin((alpha+0.5*deltapsi)*np.pi/180),0.0,'FOURTHCIRCLE-UPPERCRACK'])
setsOfEdgesData.append([0.99*R4*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.99*R4*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,1.01*R4*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180),1.01*R4*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180),0.0,'FOURTHCIRCLE-FIRSTBOUNDED'])
setsOfEdgesData.append([0.99*R4*np.cos((beta+0.5*deltaphi)*np.pi/180),0.99*R4*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,1.01*R4*np.cos((beta+0.5*deltaphi)*np.pi/180),1.01*R4*np.sin((beta+0.5*deltaphi)*np.pi/180),0.0,'FOURTHCIRCLE-SECONDBOUNDED'])
setsOfEdgesData.append([0.99*R4*np.cos((gamma+0.5*(180.0-gamma))*np.pi/180),0.99*R4*np.sin((gamma+0.5*(180.0-gamma))*np.pi/180),0.0,1.01*R4*np.cos((gamma+0.5*(180.0-gamma))*np.pi/180),1.01*R4*np.sin((gamma+0.5*(180.0-gamma))*np.pi/180),0.0,'FOURTHCIRCLE-RESTBOUNDED'])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*alpha*np.pi/180),0.85*Rf*np.sin(0.99*alpha*np.pi/180),0.0,0.85*Rf*np.cos(1.01*alpha*np.pi/180),0.85*Rf*np.sin(1.01*alpha*np.pi/180),0.0,'TRANSVERSALCUT-FIRSTFIBER'])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*alpha*np.pi/180),1.05*Rf*np.sin(0.99*alpha*np.pi/180),0.0,1.05*Rf*np.cos(1.01*alpha*np.pi/180),1.05*Rf*np.sin(1.01*alpha*np.pi/180),0.0,'TRANSVERSALCUT-FIRSTMATRIX'])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*(theta+deltatheta)*np.pi/180),0.85*Rf*np.sin(0.99*(theta+deltatheta)*np.pi/180),0.0,0.85*Rf*np.cos(1.01*(theta+deltatheta)*np.pi/180),0.85*Rf*np.sin(1.01*(theta+deltatheta)*np.pi/180),0.0,'TRANSVERSALCUT-SECONDFIBER'])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*(theta+deltatheta)*np.pi/180),1.05*Rf*np.sin(0.99*(theta+deltatheta)*np.pi/180),0.0,1.05*Rf*np.cos(1.01*(theta+deltatheta)*np.pi/180),1.05*Rf*np.sin(1.01*(theta+deltatheta)*np.pi/180),0.0,'TRANSVERSALCUT-SECONDMATRIX'])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*beta*np.pi/180),0.85*Rf*np.sin(0.99*beta*np.pi/180),0.0,0.85*Rf*np.cos(1.01*beta*np.pi/180),0.85*Rf*np.sin(1.01*beta*np.pi/180),0.0,'TRANSVERSALCUT-THIRDFIBER'])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*beta*np.pi/180),1.05*Rf*np.sin(0.99*beta*np.pi/180),0.0,1.05*Rf*np.cos(1.01*beta*np.pi/180),1.05*Rf*np.sin(1.01*beta*np.pi/180),0.0,'TRANSVERSALCUT-THIRDMATRIX'])
setsOfEdgesData.append([0.85*Rf*np.cos(0.99*gamma*np.pi/180),0.85*Rf*np.sin(0.99*gamma*np.pi/180),0.0,0.85*Rf*np.cos(1.01*gamma*np.pi/180),0.85*Rf*np.sin(1.01*gamma*np.pi/180),0.0,'TRANSVERSALCUT-FOURTHFIBER'])
setsOfEdgesData.append([1.05*Rf*np.cos(0.99*gamma*np.pi/180),1.05*Rf*np.sin(0.99*gamma*np.pi/180),0.0,1.05*Rf*np.cos(1.01*gamma*np.pi/180),1.05*Rf*np.sin(1.01*gamma*np.pi/180),0.0,'TRANSVERSALCUT-FOURTHMATRIX'])
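# The radial (transversal) cuts are sampled at 0.85*Rf on the fiber side and
# 1.05*Rf on the matrix side, with the probe pair bracketing each partition
# line at 0.99 and 1.01 times its angle.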
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
setsOfEdgesData = []
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='SECONDCIRCLE', sets=[RVEpart.sets['SECONDCIRCLE-CENTERCRACK'],RVEpart.sets['SECONDCIRCLE-UPPERCRACK-CTUP'],RVEpart.sets['SECONDCIRCLE-FIRSTBOUNDED-CTUP'],RVEpart.sets['SECONDCIRCLE-SECONDBOUNDED-CTUP'],RVEpart.sets['SECONDCIRCLE-UPPERCRACK-CTLOW'],RVEpart.sets['SECONDCIRCLE-FIRSTBOUNDED-CTLOW'],RVEpart.sets['SECONDCIRCLE-SECONDBOUNDED-CTLOW'],RVEpart.sets['SECONDCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- SECONDCIRCLE',True)
RVEpart.SetByBoolean(name='THIRDCIRCLE', sets=[RVEpart.sets['THIRDCIRCLE-CENTERCRACK'],RVEpart.sets['THIRDCIRCLE-UPPERCRACK-CTUP'],RVEpart.sets['THIRDCIRCLE-FIRSTBOUNDED-CTUP'],RVEpart.sets['THIRDCIRCLE-SECONDBOUNDED-CTUP'],RVEpart.sets['THIRDCIRCLE-UPPERCRACK-CTLOW'],RVEpart.sets['THIRDCIRCLE-FIRSTBOUNDED-CTLOW'],RVEpart.sets['THIRDCIRCLE-SECONDBOUNDED-CTLOW'],RVEpart.sets['THIRDCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- THIRDCIRCLE',True)
RVEpart.SetByBoolean(name='FOURTHCIRCLE', sets=[RVEpart.sets['FOURTHCIRCLE-CENTERCRACK'],RVEpart.sets['FOURTHCIRCLE-UPPERCRACK-CTUP'],RVEpart.sets['FOURTHCIRCLE-FIRSTBOUNDED-CTUP'],RVEpart.sets['FOURTHCIRCLE-SECONDBOUNDED-CTUP'],RVEpart.sets['FOURTHCIRCLE-UPPERCRACK-CTLOW'],RVEpart.sets['FOURTHCIRCLE-FIRSTBOUNDED-CTLOW'],RVEpart.sets['FOURTHCIRCLE-SECONDBOUNDED-CTLOW'],RVEpart.sets['FOURTHCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- FOURTHCIRCLE',True)
else:
RVEpart.SetByBoolean(name='SECONDCIRCLE', sets=[RVEpart.sets['SECONDCIRCLE-LOWERCRACK'],RVEpart.sets['SECONDCIRCLE-UPPERCRACK'],RVEpart.sets['SECONDCIRCLE-FIRSTBOUNDED'],RVEpart.sets['SECONDCIRCLE-SECONDBOUNDED'],RVEpart.sets['SECONDCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- SECONDCIRCLE',True)
RVEpart.SetByBoolean(name='THIRDCIRCLE', sets=[RVEpart.sets['THIRDCIRCLE-LOWERCRACK'],RVEpart.sets['THIRDCIRCLE-UPPERCRACK'],RVEpart.sets['THIRDCIRCLE-FIRSTBOUNDED'],RVEpart.sets['THIRDCIRCLE-SECONDBOUNDED'],RVEpart.sets['THIRDCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- THIRDCIRCLE',True)
RVEpart.SetByBoolean(name='FOURTHCIRCLE', sets=[RVEpart.sets['FOURTHCIRCLE-LOWERCRACK'],RVEpart.sets['FOURTHCIRCLE-UPPERCRACK'],RVEpart.sets['FOURTHCIRCLE-FIRSTBOUNDED'],RVEpart.sets['FOURTHCIRCLE-SECONDBOUNDED'],RVEpart.sets['FOURTHCIRCLE-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- FOURTHCIRCLE',True)
if ('boundingPly' in parameters['BC']['rightSide']['type'] or 'boundingPly' in parameters['BC']['leftSide']['type']) and not 'boundingPly' in parameters['BC']['northSide']['type']:
setsOfEdgesData.append([0.0,0.99999*CornerBy,0.0,0.0,1.00001*CornerBy,0.0,'CENTER-RUC-UPPERSIDE'])
if 'boundingPly' in parameters['BC']['rightSide']['type']:
setsOfEdgesData.append([0.99999*CornerBx,0.99999*CornerBy,0.0,0.99999*CornerBx,1.00001*CornerBy,0.0,'RIGHT-HOMOPLY-UPPERSIDE'])
if 'boundingPly' in parameters['BC']['leftSide']['type']:
setsOfEdgesData.append([0.99999*CornerAx,0.99999*CornerBy,0.0,0.99999*CornerAx,1.00001*CornerBy,0.0,'LEFT-HOMOPLY-UPPERSIDE'])
else:
setsOfEdgesData.append([0.0,0.99999*CornerBy,0.0,0.0,1.00001*CornerBy,0.0,'UPPERSIDE'])
if 'boundingPly' in parameters['BC']['northSide']['type']:
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
setsOfEdgesData.append([0.001,0.99999*(L+Lply),0.0,0.001,1.00001*(L+Lply),0.0,'PLYINTERFACE'])
setsOfEdgesData.append([0.99999*CornerBx,0.5*L,0.0,1.00001*CornerBx,0.5*L,0.0,'LOWER-RIGHTSIDE'])
setsOfEdgesData.append([0.99999*CornerAx,0.5*L,0.0,1.00001*CornerAx,0.5*L,0.0,'LOWER-LEFTSIDE'])
setsOfEdgesData.append([0.99999*CornerBx,(L+Lply)+0.5*Ludply,0.0,1.00001*CornerBx,(L+Lply)+0.5*Ludply,0.0,'UPPER-RIGHTSIDE'])
setsOfEdgesData.append([0.99999*CornerAx,(L+Lply)+0.5*Ludply,0.0,1.00001*CornerAx,(L+Lply)+0.5*Ludply,0.0,'UPPER-LEFTSIDE'])
else:
setsOfEdgesData.append([0.001,0.99999*L,0.0,0.001,1.00001*L,0.0,'PLYINTERFACE'])
setsOfEdgesData.append([0.99999*CornerBx,0.5*L,0.0,1.00001*CornerBx,0.5*L,0.0,'LOWER-RIGHTSIDE'])
setsOfEdgesData.append([0.99999*CornerAx,0.5*L,0.0,1.00001*CornerAx,0.5*L,0.0,'LOWER-LEFTSIDE'])
setsOfEdgesData.append([0.99999*CornerBx,L+0.5*Lply,0.0,1.00001*CornerBx,L+0.5*Lply,0.0,'UPPER-RIGHTSIDE'])
setsOfEdgesData.append([0.99999*CornerAx,L+0.5*Lply,0.0,1.00001*CornerAx,L+0.5*Lply,0.0,'UPPER-LEFTSIDE'])
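# With a bounding ply on the north side the lateral edges are split at the ply
# interface, hence the separate LOWER-*/UPPER-* side sets that are merged back
# into RIGHTSIDE and LEFTSIDE further below.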
else:
setsOfEdgesData.append([0.99999*CornerBx,0.5*L,0.0,1.00001*CornerBx,0.5*L,0.0,'RIGHTSIDE'])
setsOfEdgesData.append([0.99999*CornerAx,0.5*L,0.0,1.00001*CornerAx,0.5*L,0.0,'LEFTSIDE'])
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfEdgesData.append([0.99*Rf,(nFiber+1)*2*L,0.0,1.01*Rf,(nFiber+1)*2*L,0.0,'INTERFACE-UPPER-FIBER-C'+str(nFiber+1)])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for mFiber in range(0,parameters['BC']['rightSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfEdgesData.append([(mFiber+1)*2*L+0.99*Rf,(nFiber+1)*2*L,0.0,(mFiber+1)*2*L+1.01*Rf,(nFiber+1)*2*L,0.0,'INTERFACE-UPPER-FIBER-R'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers']))])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
Nfibers = parameters['BC']['northSide']['nFibers']
for mFiber in range(0,parameters['BC']['leftSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfEdgesData.append([-(mFiber+1)*2*L+0.99*Rf,(nFiber+1)*2*L,0.0,-(mFiber+1)*2*L+1.01*Rf,(nFiber+1)*2*L,0.0,'INTERFACE-UPPER-FIBER-L'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers']))])
for setOfEdgesData in setsOfEdgesData:
defineSetOfEdgesByClosestPoints(RVEpart,setOfEdgesData[0],setOfEdgesData[1],setOfEdgesData[2],setOfEdgesData[3],setOfEdgesData[4],setOfEdgesData[5],setOfEdgesData[-1],logfilepath,baselogindent + 4*logindent,True)
setsOfEdgesData = []
if ('boundingPly' in parameters['BC']['rightSide']['type'] or 'boundingPly' in parameters['BC']['leftSide']['type']) and not 'boundingPly' in parameters['BC']['northSide']['type']:
if 'boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type']:
RVEpart.SetByBoolean(name='UPPERSIDE', sets=[RVEpart.sets['CENTER-RUC-UPPERSIDE'],RVEpart.sets['RIGHT-HOMOPLY-UPPERSIDE'],RVEpart.sets['LEFT-HOMOPLY-UPPERSIDE']])
elif 'boundingPly' in parameters['BC']['rightSide']['type']:
RVEpart.SetByBoolean(name='UPPERSIDE', sets=[RVEpart.sets['CENTER-RUC-UPPERSIDE'],RVEpart.sets['RIGHT-HOMOPLY-UPPERSIDE']])
elif 'boundingPly' in parameters['BC']['leftSide']['type']:
RVEpart.SetByBoolean(name='UPPERSIDE', sets=[RVEpart.sets['CENTER-RUC-UPPERSIDE'],RVEpart.sets['LEFT-HOMOPLY-UPPERSIDE']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- UPPERSIDE',True)
if 'boundingPly' in parameters['BC']['northSide']['type']:
RVEpart.SetByBoolean(name='RIGHTSIDE', sets=[RVEpart.sets['LOWER-RIGHTSIDE'],RVEpart.sets['UPPER-RIGHTSIDE']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- RIGHTSIDE',True)
RVEpart.SetByBoolean(name='LEFTSIDE', sets=[RVEpart.sets['LOWER-LEFTSIDE'],RVEpart.sets['UPPER-LEFTSIDE']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- LEFTSIDE',True)
# sets of faces
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sets of faces',True)
setsOfFacesData = [[0.01*Rf, 0.25*Rf, 0,'FIBER-CENTER'],
[0.0, 0.65*Rf, 0,'FIBER-INTERMEDIATEANNULUS']]
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
setsOfFacesData.append([0.85*Rf*np.cos(theta*np.pi/180), 0.85*Rf*np.sin(theta*np.pi/180), 0,'FIBER-EXTANNULUS-CENTERCRACK'])
setsOfFacesData.append([0.85*Rf*np.cos((theta+deltatheta-0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((theta+deltatheta-0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-UPPERCRACK-CTUP'])
setsOfFacesData.append([0.85*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP'])
setsOfFacesData.append([0.85*Rf*np.cos((theta+deltatheta+deltapsi+0.5*deltaphi)*np.pi/180), 0.85*Rf*np.sin((theta+deltatheta+deltapsi+0.5*deltaphi)*np.pi/180), 0,'FIBER-EXTANNULUS-SECONDBOUNDED-CTUP'])
setsOfFacesData.append([0.85*Rf*np.cos((theta-deltatheta+0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((theta-deltatheta+0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-UPPERCRACK-CTLOW'])
setsOfFacesData.append([0.85*Rf*np.cos((theta-deltatheta-0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((theta-deltatheta-0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW'])
setsOfFacesData.append([0.85*Rf*np.cos((theta-deltatheta-deltapsi-0.5*deltaphi)*np.pi/180), 0.85*Rf*np.sin((theta-deltatheta-deltapsi-0.5*deltaphi)*np.pi/180), 0,'FIBER-EXTANNULUS-SECONDBOUNDED-CTLOW'])
setsOfFacesData.append([0.85*Rf*np.cos((theta+deltatheta+deltapsi+deltaphi+1.0)*np.pi/180), 0.85*Rf*np.sin((theta+deltatheta+deltapsi+deltaphi+1.0)*np.pi/180), 0,'FIBER-EXTANNULUS-RESTBOUNDED'])
else:
alpha = theta + deltatheta - deltapsi
beta = theta + deltatheta + deltapsi
gamma = theta + deltatheta + deltapsi + deltaphi
setsOfFacesData.append([0.85*Rf*np.cos(0.5*alpha*np.pi/180), 0.85*Rf*np.sin(0.5*alpha*np.pi/180), 0,'FIBER-EXTANNULUS-LOWERCRACK'])
setsOfFacesData.append([0.85*Rf*np.cos((alpha+0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((alpha+0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-UPPERCRACK'])
setsOfFacesData.append([0.85*Rf*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0.85*Rf*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0,'FIBER-EXTANNULUS-FIRSTBOUNDED'])
setsOfFacesData.append([0.85*Rf*np.cos((beta+0.5*deltaphi)*np.pi/180), 0.85*Rf*np.sin((beta+0.5*deltaphi)*np.pi/180), 0,'FIBER-EXTANNULUS-SECONDBOUNDED'])
setsOfFacesData.append([0.85*Rf*np.cos((gamma+0.5*(180-gamma))*np.pi/180), 0.85*Rf*np.sin((gamma+0.5*(180-gamma))*np.pi/180), 0,'FIBER-EXTANNULUS-RESTBOUNDED'])
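# Face sets are identified by a single interior point passed to findAt; 0.85*Rf
# lies midway through the external fiber annulus bounded by the 0.75*Rf
# partition circle and the fiber surface at Rf.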
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='FIBER-EXTANNULUS', sets=[RVEpart.sets['FIBER-EXTANNULUS-CENTERCRACK'],RVEpart.sets['FIBER-EXTANNULUS-UPPERCRACK-CTUP'],RVEpart.sets['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP'],RVEpart.sets['FIBER-EXTANNULUS-SECONDBOUNDED-CTUP'],RVEpart.sets['FIBER-EXTANNULUS-UPPERCRACK-CTLOW'],RVEpart.sets['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW'],RVEpart.sets['FIBER-EXTANNULUS-SECONDBOUNDED-CTLOW'],RVEpart.sets['FIBER-EXTANNULUS-RESTBOUNDED']])
else:
RVEpart.SetByBoolean(name='FIBER-EXTANNULUS', sets=[RVEpart.sets['FIBER-EXTANNULUS-LOWERCRACK'],RVEpart.sets['FIBER-EXTANNULUS-UPPERCRACK'],RVEpart.sets['FIBER-EXTANNULUS-FIRSTBOUNDED'],RVEpart.sets['FIBER-EXTANNULUS-SECONDBOUNDED'],RVEpart.sets['FIBER-EXTANNULUS-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- FIBER-EXTANNULUS',True)
RVEpart.SetByBoolean(name='FIBER', sets=[RVEpart.sets['FIBER-CENTER'],RVEpart.sets['FIBER-INTERMEDIATEANNULUS'],RVEpart.sets['FIBER-EXTANNULUS']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- FIBER',True)
if L>2*Rf:
R1 = (1+0.5*0.25)*Rf
R2 = (1.25+0.5*0.25)*Rf
else:
R1 = Rf+0.5*0.25*(L-Rf)
R2 = Rf+1.5*0.25*(L-Rf)
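# Same ring mid-radii as for the lower-side edge sets above, recomputed here for
# the sampling points of the matrix annulus faces.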
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
setsOfFacesData = []
setsOfFacesData.append([0.85*R1*np.cos(theta*np.pi/180), 0.85*R1*np.sin(theta*np.pi/180), 0,'MATRIX-EXTANNULUS-CENTERCRACK'])
setsOfFacesData.append([0.85*R1*np.cos((theta+deltatheta-0.5*deltapsi)*np.pi/180), 0.85*R1*np.sin((theta+deltatheta-0.5*deltapsi)*np.pi/180), 0,'MATRIX-EXTANNULUS-UPPERCRACK-CTUP'])
setsOfFacesData.append([0.85*R1*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0.85*R1*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0,'MATRIX-EXTANNULUS-FIRSTBOUNDED-CTUP'])
setsOfFacesData.append([0.85*R1*np.cos((theta+deltatheta+deltapsi+0.5*deltaphi)*np.pi/180), 0.85*R1*np.sin((theta+deltatheta+deltapsi+0.5*deltaphi)*np.pi/180), 0,'MATRIX-EXTANNULUS-SECONDBOUNDED-CTUP'])
setsOfFacesData.append([0.85*R1*np.cos((theta-deltatheta+0.5*deltapsi)*np.pi/180), 0.85*R1*np.sin((theta-deltatheta+0.5*deltapsi)*np.pi/180), 0,'MATRIX-EXTANNULUS-UPPERCRACK-CTLOW'])
setsOfFacesData.append([0.85*R1*np.cos((theta-deltatheta-0.5*deltapsi)*np.pi/180), 0.85*R1*np.sin((theta-deltatheta-0.5*deltapsi)*np.pi/180), 0,'MATRIX-EXTANNULUS-FIRSTBOUNDED-CTLOW'])
setsOfFacesData.append([0.85*R1*np.cos((theta-deltatheta-deltapsi-0.5*deltaphi)*np.pi/180), 0.85*R1*np.sin((theta-deltatheta-deltapsi-0.5*deltaphi)*np.pi/180), 0,'MATRIX-EXTANNULUS-SECONDBOUNDED-CTLOW'])
setsOfFacesData.append([0.85*R1*np.cos((theta+deltatheta+deltapsi+deltaphi+1.0)*np.pi/180), 0.85*R1*np.sin((theta+deltatheta+deltapsi+deltaphi+1.0)*np.pi/180), 0,'MATRIX-EXTANNULUS-RESTBOUNDED'])
else:
alpha = theta + deltatheta - deltapsi
beta = theta + deltatheta + deltapsi
gamma = theta + deltatheta + deltapsi + deltaphi
setsOfFacesData = [[R1*np.cos(0.5*alpha*np.pi/180), R1*np.sin(0.5*alpha*np.pi/180), 0,'MATRIX-INTANNULUS-LOWERCRACK'],
[R1*np.cos((alpha+0.5*deltapsi)*np.pi/180), R1*np.sin((alpha+0.5*deltapsi)*np.pi/180), 0,'MATRIX-INTANNULUS-UPPERCRACK'],
[R1*np.cos((theta+deltatheta+0.5*deltapsi)*np.pi/180), R1*np.sin((theta+deltatheta+0.5*deltapsi)*np.pi/180), 0,'MATRIX-INTANNULUS-FIRSTBOUNDED'],
[R1*np.cos((beta+0.5*deltaphi)*np.pi/180), R1*np.sin((beta+0.5*deltaphi)*np.pi/180), 0,'MATRIX-INTANNULUS-SECONDBOUNDED'],
[R1*np.cos((gamma+0.5*(180-gamma))*np.pi/180), R1*np.sin((gamma+0.5*(180-gamma))*np.pi/180), 0,'MATRIX-INTANNULUS-RESTBOUNDED']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
RVEpart.SetByBoolean(name='MATRIX-EXTANNULUS', sets=[RVEpart.sets['MATRIX-EXTANNULUS-CENTERCRACK'],RVEpart.sets['MATRIX-EXTANNULUS-UPPERCRACK-CTUP'],RVEpart.sets['MATRIX-EXTANNULUS-FIRSTBOUNDED-CTUP'],RVEpart.sets['MATRIX-EXTANNULUS-SECONDBOUNDED-CTUP'],RVEpart.sets['MATRIX-EXTANNULUS-UPPERCRACK-CTLOW'],RVEpart.sets['MATRIX-EXTANNULUS-FIRSTBOUNDED-CTLOW'],RVEpart.sets['MATRIX-EXTANNULUS-SECONDBOUNDED-CTLOW'],RVEpart.sets['MATRIX-EXTANNULUS-RESTBOUNDED']])
else:
RVEpart.SetByBoolean(name='MATRIX-INTANNULUS', sets=[RVEpart.sets['MATRIX-INTANNULUS-LOWERCRACK'],RVEpart.sets['MATRIX-INTANNULUS-UPPERCRACK'],RVEpart.sets['MATRIX-INTANNULUS-FIRSTBOUNDED'],RVEpart.sets['MATRIX-INTANNULUS-SECONDBOUNDED'],RVEpart.sets['MATRIX-INTANNULUS-RESTBOUNDED']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- MATRIX-EXTANNULUS/MATRIX-INTANNULUS',True)
setsOfFacesData = [[0.0, R2, 0,'MATRIX-INTERMEDIATEANNULUS'],
[0.975*L, 0.975*L, 0,'MATRIX-BODY']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
RVEpart.SetByBoolean(name='MATRIX', sets=[RVEpart.sets['MATRIX-BODY'],RVEpart.sets['MATRIX-INTERMEDIATEANNULUS'],RVEpart.sets['MATRIX-EXTANNULUS' if (np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']) else 'MATRIX-INTANNULUS']]) # only one of the two annulus sets is created in the branch above
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- MATRIX',True)
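# defineSetOfFacesByFindAt is a project helper used throughout this section; conceptually it
# wraps ABAQUS' findAt() to build a named face set from a probe point, along the lines of the
# following illustrative sketch (argument names and logging are assumptions, not the actual code):
# def defineSetOfFacesByFindAt(part,x,y,z,setName,logfilepath,indent,toScreen):
#     faces = part.faces.findAt(((x,y,z),))
#     part.Set(faces=faces, name=setName)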
if 'boundingPly' in parameters['BC']['northSide']['type']:
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
setsOfFacesData = [[0.975*L, 0.975*(L+Lply+Ludply), 0,'BOUNDING-PLY']]
else:
setsOfFacesData = [[0.975*L, 0.975*(L+Lply), 0,'BOUNDING-PLY']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
if 'boundingPly' in parameters['BC']['rightSide']['type'] and 'boundingPly' in parameters['BC']['leftSide']['type']:
setsOfFacesData = [[0.975*CornerBx, 0.5*L, 0,'RIGHT-HOMOGENIZED-CROSSPLY'],
[0.975*CornerAx, 0.5*L, 0,'LEFT-HOMOGENIZED-CROSSPLY']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
RVEpart.SetByBoolean(name='HOMOGENIZED-CROSSPLY', sets=[RVEpart.sets['RIGHT-HOMOGENIZED-CROSSPLY'],RVEpart.sets['LEFT-HOMOGENIZED-CROSSPLY']])
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- HOMOGENIZED-CROSSPLY',True)
elif 'boundingPly' in parameters['BC']['rightSide']['type']:
setsOfFacesData = [[0.975*CornerBx, 0.5*L, 0,'RIGHT-HOMOGENIZED-CROSSPLY']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
elif 'boundingPly' in parameters['BC']['leftSide']['type']:
setsOfFacesData = [[0.975*CornerAx, 0.5*L, 0,'LEFT-HOMOGENIZED-CROSSPLY']]
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
setsOfFacesData = []
booleanSets = []
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfFacesData.append([0.0, (nFiber+1)*2*L, 0.0,'UPPER-FIBER-C'+str(nFiber+1)])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for mFiber in range(0,parameters['BC']['rightSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfFacesData.append([(mFiber+1)*2*L, (nFiber+1)*2*L, 0.0,'UPPER-FIBER-R'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers']))])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
for mFiber in range(0,parameters['BC']['leftSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
setsOfFacesData.append([-(mFiber+1)*2*L, (nFiber+1)*2*L, 0.0,'UPPER-FIBER-L'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers']))])
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
booleanSets.append(RVEpart.sets[setOfFacesData[-1]])
RVEpart.SetByBoolean(name='UPPER-FIBERS', sets=booleanSets)
setsOfFacesData = []
booleanSets = []
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for nFiber in range(0,parameters['BC']['rightSide']['nFibers']):
setsOfFacesData.append([(nFiber+1)*2*L, 0.25*Rf, 0.0,'RIGHT-FIBER'+str(nFiber+1)])
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
booleanSets.append(RVEpart.sets[setOfFacesData[-1]])
RVEpart.SetByBoolean(name='RIGHT-FIBERS', sets=booleanSets)
setsOfFacesData = []
booleanSets = []
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
for nFiber in range(0,parameters['BC']['leftSide']['nFibers']):
setsOfFacesData.append([-(nFiber+1)*2*L, 0.25*Rf, 0.0,'LEFT-FIBER'+str(nFiber+1)])
for setOfFacesData in setsOfFacesData:
defineSetOfFacesByFindAt(RVEpart,setOfFacesData[0],setOfFacesData[1],setOfFacesData[2],setOfFacesData[-1],logfilepath,baselogindent + 4*logindent,True)
booleanSets.append(RVEpart.sets[setOfFacesData[-1]])
RVEpart.SetByBoolean(name='LEFT-FIBERS', sets=booleanSets)
booleanSets = [RVEpart.sets['FIBER'],RVEpart.sets['MATRIX']]
if 'boundingPly' in parameters['BC']['northSide']['type']:
booleanSets.append(RVEpart.sets['BOUNDING-PLY'])
if 'boundingPly' in parameters['BC']['rightSide']['type']:
booleanSets.append(RVEpart.sets['RIGHT-HOMOGENIZED-CROSSPLY'])
if 'boundingPly' in parameters['BC']['leftSide']['type']:
booleanSets.append(RVEpart.sets['LEFT-HOMOGENIZED-CROSSPLY'])
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
booleanSets.append(RVEpart.sets['UPPER-FIBERS'])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
booleanSets.append(RVEpart.sets['RIGHT-FIBERS'])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
booleanSets.append(RVEpart.sets['LEFT-FIBERS'])
RVEpart.SetByBoolean(name='RVE', sets=booleanSets)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + '-- RVE',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# sets of cells (none, i.e. 2D geometry)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Material Orientation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Creating reference system for material orientation ...',True)
RVEpart.DatumCsysByThreePoints(name='refOrientation',coordSysType=CARTESIAN,origin=(0.0,0.0,0.0),point1=(1.0,0.0,0.0),point2=(1.0,1.0,0.0))
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Assigning material orientation to FIBER ...',True)
RVEpart.MaterialOrientation(orientationType=SYSTEM,region=RVEpart.sets['FIBER'],localCsys=RVEpart.datums[RVEpart.features['refOrientation'].id])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Assigning material orientation to MATRIX ...',True)
RVEpart.MaterialOrientation(orientationType=SYSTEM,region=RVEpart.sets['MATRIX'],localCsys=RVEpart.datums[RVEpart.features['refOrientation'].id])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
if 'boundingPly' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Assigning material orientation to BOUNDING-PLY ...',True)
RVEpart.MaterialOrientation(orientationType=SYSTEM,region=RVEpart.sets['BOUNDING-PLY'],localCsys=RVEpart.datums[RVEpart.features['refOrientation'].id])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
if 'boundingPly' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Assigning material orientation to RIGHT-HOMOGENIZED-CROSSPLY ...',True)
RVEpart.MaterialOrientation(orientationType=SYSTEM,region=RVEpart.sets['RIGHT-HOMOGENIZED-CROSSPLY'],localCsys=RVEpart.datums[RVEpart.features['refOrientation'].id])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
if 'boundingPly' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Assigning material orientation to LEFT-HOMOGENIZED-CROSSPLY ...',True)
RVEpart.MaterialOrientation(orientationType=SYSTEM,region=RVEpart.sets['LEFT-HOMOGENIZED-CROSSPLY'],localCsys=RVEpart.datums[RVEpart.features['refOrientation'].id])
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + '... done.',True)
#===============================================================================#
# Materials creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating materials ...',True)
for material in parameters['materials'].values():
mdb.models[modelname].Material(name=material['name'])
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'MATERIAL: ' + material['name'],True)
try:
values = material['elastic']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
valuelist.append(value)
tuplelist.append(tuple(valuelist))
mdb.models[modelname].materials[material['name']].Elastic(type=material['elastic']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' ELASTIC',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + line,True)
except Exception, error:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' NO ELASTIC PROPERTY',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(error),True)
#sys.exit(2)
sys.exc_clear()
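# For reference, Elastic() expects table as a tuple containing one tuple of constants, e.g.
# (hedged examples of standard ABAQUS input, not taken from this file):
#   type=ISOTROPIC              -> table=((E, nu),)
#   type=ENGINEERING_CONSTANTS  -> table=((E1, E2, E3, nu12, nu13, nu23, G12, G13, G23),)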
try:
values = material['density']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
valuelist.append(value)
tuplelist.append(tuple(valuelist))
mdb.models[modelname].materials[material['name']].Density(table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' DENSITY',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + line,True)
except Exception, error:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' NO DENSITY PROPERTY',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(error),True)
sys.exc_clear()
try:
values = material['thermalexpansion']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
valuelist.append(value)
tuplelist.append(tuple(valuelist))
mdb.models[modelname].materials[material['name']].Expansion(type=material['thermalexpansion']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' THERMAL EXPANSION',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + line,True)
except Exception, error:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' NO THERMAL EXPANSION PROPERTY',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(error),True)
sys.exc_clear()
try:
values = material['thermalconductivity']['values']
tuplelist = []
valuelist = []
for v,value in enumerate(values):
valuelist.append(value)
tuplelist.append(tuple(valuelist))
mdb.models[modelname].materials[material['name']].Conductivity(type=material['thermalconductivity']['type'],table=tuple(tuplelist))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' THERMAL CONDUCTIVITY',True)
line = ' '
for v,value in enumerate(values):
if v>0:
line += ', '
line += str(value)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + line,True)
except Exception, error:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' NO THERMAL CONDUCTIVITY PROPERTY',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + str(error),True)
sys.exc_clear()
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Sections creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating sections ...',True)
for section in parameters['sections'].values():
if 'structuralModel' in parameters['mesh']['elements'].keys():
if 'generalizedPlaneStrain' in parameters['mesh']['elements']['structuralModel']:
mdb.models[modelname].PEGSection(name=section['name'],material=section['material'], thickness=section['thickness'], wedgeAngle1=0.0, wedgeAngle2=0.0)
if 'homogeneoussolidsection' in section['type'].lower().replace(' ',''):
mdb.models[modelname].HomogeneousSolidSection(name=section['name'],material=section['material'], thickness=section['thickness'])
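# An entry of parameters['sections'] is expected to provide at least the keys consumed above,
# e.g. (illustrative values, not taken from the input files):
# parameters['sections']['matrix'] = {'name':'matrixSection','type':'HomogeneousSolidSection',
#                                     'material':'matrix','thickness':1.0}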
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Sections assignment
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Making section assignments ...',True)
for sectionRegion in parameters['sectionRegions'].values():
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- ' + sectionRegion['name'],True)
RVEpart.SectionAssignment(region=RVEpart.sets[sectionRegion['set']], sectionName=sectionRegion['name'], offset=sectionRegion['offsetValue'],offsetType=sectionRegion['offsetType'], offsetField=sectionRegion['offsetField'],thicknessAssignment=sectionRegion['thicknessAssignment'])
# p.SectionAssignment(region=region, sectionName='MatrixSection', offset=0.0,
# offsetType=MIDDLE_SURFACE, offsetField='',
# thicknessAssignment=FROM_SECTION)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Instance creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating instance ...',True)
model.rootAssembly.DatumCsysByDefault(CARTESIAN)
model.rootAssembly.Instance(name='RVE-assembly', part=RVEpart, dependent=OFF)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Step creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating step ...',True)
for step in parameters['steps'].values():
model.StaticStep(name=step['name'], previous=step['previous'],minInc=step['minimumIncrement'])
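# Each step entry must carry the keys consumed above, e.g. (illustrative):
# parameters['steps']['load'] = {'name':'Load-Step','previous':'Initial','minimumIncrement':1e-10}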
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Boundary conditions
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assigning boundary conditions ...',True)
# symmetry boundary conditions on the RVE sides, depending on the fiber geometry type
if 'full' in parameters['geometry']['fiber']['type']:
for step in parameters['steps'].values():
if 'symmetric' in parameters['BC']['northSide']['type']:
model.YsymmBC(name='NorthSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['UPPERSIDE'], localCsys=None)
if 'symmetric' in parameters['BC']['southSide']['type']:
model.YsymmBC(name='SouthSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LOWERSIDE'], localCsys=None)
if 'symmetric' in parameters['BC']['rightSide']['type']:
model.XsymmBC(name='RightSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['RIGHTSIDE'], localCsys=None)
if 'symmetric' in parameters['BC']['leftSide']['type']:
model.XsymmBC(name='LeftSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LEFTSIDE'], localCsys=None)
elif 'half' in parameters['geometry']['fiber']['type']:
for step in parameters['steps'].values():
model.YsymmBC(name='SymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LOWERSIDE'], localCsys=None)
if 'symmetric' in parameters['BC']['rightSide']['type']:
model.XsymmBC(name='RightSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['RIGHTSIDE'], localCsys=None)
if 'symmetric' in parameters['BC']['leftSide']['type']:
model.XsymmBC(name='LeftSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LEFTSIDE'], localCsys=None)
elif 'quarter' in parameters['geometry']['fiber']['type']:
for step in parameters['steps'].values():
model.YsymmBC(name='LowerSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LOWERSIDE'], localCsys=None)
model.XsymmBC(name='LeftSymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LEFTSIDE'], localCsys=None)
else:
for step in parameters['steps'].values():
model.YsymmBC(name='SymmetryBound', createStepName=step['name'],region=model.rootAssembly.instances['RVE-assembly'].sets['LOWERSIDE'], localCsys=None)
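# Recall the ABAQUS symmetry shortcuts used above: YsymmBC enforces u2=0 (plus the relevant
# rotations in 3D) on a plane of constant y, XsymmBC enforces u1=0 on a plane of constant x.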
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Applied load
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assigning loads ...',True)
for load in parameters['loads'].values():
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Apply ' + load['type'] + ' on ' + load['set'] + ' set',True)
if 'appliedstrain' in load['type'].lower().replace(' ',''):
if 'right' in load['set'].lower():
model.DisplacementBC(name=load['name'],createStepName=load['stepName'],region=model.rootAssembly.instances['RVE-assembly'].sets[load['set']], u1=load['value'][0]*CornerBx, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
elif 'left' in load['set'].lower():
model.DisplacementBC(name=load['name'],createStepName=load['stepName'],region=model.rootAssembly.instances['RVE-assembly'].sets[load['set']], u1=load['value'][0]*CornerAx, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
elif 'upper' in load['set'].lower():
model.DisplacementBC(name=load['name'],createStepName=load['stepName'],region=model.rootAssembly.instances['RVE-assembly'].sets[load['set']], u2=load['value'][1]*CornerBy, amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
elif 'applieddisplacement' in load['type'].lower().replace(' ',''):
model.DisplacementBC(name=load['name'],createStepName=load['stepName'],region=model.rootAssembly.instances['RVE-assembly'].sets[load['set']], u1=load['value'][0], amplitude=UNSET, fixed=OFF, distributionType=UNIFORM, fieldName='',localCsys=None)
elif 'temperature' in load['type'].lower():
model.TemperatureBC(name=load['name'],createStepName=load['stepName'],region=model.rootAssembly.instances['RVE-assembly'].sets[load['set']], magnitude=load['value'],distributionType=UNIFORM)
# elif 'appliedstress' in load['type'] or 'appliedStress' in load['type'] or 'Applied Stress' in load['type'] or 'applied stress' in load['type']:
#
# elif 'appliedforce' in load['type'] or 'appliedForce' in load['type'] or 'Applied Force' in load['type'] or 'applied Force' in load['type']:
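# An example load entry matching the keys consumed above (illustrative, not from the input files):
# parameters['loads']['tension'] = {'name':'tension','type':'appliedStrain','set':'RIGHTSIDE',
#                                   'stepName':'Load-Step','value':[0.01,0.0,0.0]}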
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Crack
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating cracks ...',True)
# assign seam
model.rootAssembly.engineeringFeatures.assignSeam(regions=model.rootAssembly.instances['RVE-assembly'].sets['CRACK'])
if 'inverseSquareRoot' in parameters['singularity']['type']:
midNodePos = 0.25
else:
midNodePos = 0.5
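# midNodePosition=0.25 moves the midside nodes of the crack-tip elements to the quarter point,
# which reproduces the 1/sqrt(r) stress singularity; 0.5 keeps standard quadratic elements.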
# contour integral
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
xC = Rf*np.cos((theta+deltatheta)*np.pi/180)
yC = Rf*np.sin((theta+deltatheta)*np.pi/180)
xA = Rf*np.cos((theta+1.025*deltatheta)*np.pi/180)
yA = -xC*(xA-xC)/yC + yC
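# (xA,yA) is built so that (xA-xC, yA-yC) is orthogonal to the radius (xC,yC): the dot product
# xC*(xA-xC) + yC*(yA-yC) vanishes by construction, i.e. the q-vector is tangent to the
# fiber/matrix interface and the virtual crack extension follows the debond arc.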
model.rootAssembly.engineeringFeatures.ContourIntegral(name='DebondUp',symmetric=OFF,crackFront=model.rootAssembly.instances['RVE-assembly'].sets['CRACK'],crackTip=model.rootAssembly.instances['RVE-assembly'].sets['CRACKTIPUP'],extensionDirectionMethod=Q_VECTORS, qVectors=(((xC,yC,0.0),(xA,yA,0.0)), ), midNodePosition=midNodePos, collapsedElementAtTip=NONE)
xC = Rf*np.cos((theta-deltatheta)*np.pi/180)
yC = Rf*np.sin((theta-deltatheta)*np.pi/180)
xA = Rf*np.cos((theta-1.025*deltatheta)*np.pi/180)
yA = -xC*(xA-xC)/yC + yC
model.rootAssembly.engineeringFeatures.ContourIntegral(name='DebondLow',symmetric=OFF,crackFront=model.rootAssembly.instances['RVE-assembly'].sets['CRACK'],crackTip=model.rootAssembly.instances['RVE-assembly'].sets['CRACKTIPLOW'],extensionDirectionMethod=Q_VECTORS, qVectors=(((xC,yC,0.0),(xA,yA,0.0)), ), midNodePosition=midNodePos, collapsedElementAtTip=NONE)
else:
xC = Rf*np.cos((theta+deltatheta)*np.pi/180)
yC = Rf*np.sin((theta+deltatheta)*np.pi/180)
xA = Rf*np.cos((theta+1.025*deltatheta)*np.pi/180)
yA = -xC*(xA-xC)/yC + yC
model.rootAssembly.engineeringFeatures.ContourIntegral(name='Debond',symmetric=OFF,crackFront=model.rootAssembly.instances['RVE-assembly'].sets['CRACK'],crackTip=model.rootAssembly.instances['RVE-assembly'].sets['CRACKTIP'],extensionDirectionMethod=Q_VECTORS, qVectors=(((xC,yC,0.0),(xA,yA,0.0)), ), midNodePosition=midNodePos, collapsedElementAtTip=NONE)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Mesh
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating mesh ...',True)
nTangential = np.floor(deltapsi/delta)
nRadialFiber = np.floor(0.25/(delta*np.pi/180.0))
nTangential1 = np.floor(deltaphi/parameters['mesh']['size']['delta2'])
nTangential2 = np.floor((180-(theta+deltatheta+deltapsi+deltaphi))/parameters['mesh']['size']['delta3'])
nTangential3 = np.floor(alpha/parameters['mesh']['size']['delta1'])
#nRadialFiber1 = np.floor(0.25/parameters['mesh']['size']['delta3'])
if L>2*Rf:
nRadialMatrix = np.floor(0.25/(delta*np.pi/180.0))
#nRadialMatrix1 = np.floor(0.25/parameters['mesh']['size']['delta3'])
else:
nRadialMatrix = np.floor(0.25*(L-Rf)/(delta*np.pi/180.0))
#nRadialMatrix1 = np.floor(0.25*(L-Rf)/(Rf*parameters['mesh']['size']['delta3']))
if nTangential<parameters['Jintegral']['numberOfContours'] or nRadialFiber<parameters['Jintegral']['numberOfContours'] or nRadialMatrix<parameters['Jintegral']['numberOfContours']:
parameters['Jintegral']['numberOfContours'] = int(np.floor(np.min([nTangential,nRadialFiber,nRadialMatrix])) - 1)
writeErrorToLogFile(logfilepath,'a','MESH SIZE','The provided element size around the crack tip is incompatible with the requested number of contour integrals.\nThe contour integral option in ABAQUS is available only for quadrilateral and hexahedral elements.\nThe number of contours requested will be automatically adjusted to ' + str(parameters['Jintegral']['numberOfContours']),True)
# assign mesh controls
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assigning mesh controls ...',True)
regionSets = [['FIBER-CENTER',QUAD_DOMINATED,FREE],
['FIBER-INTERMEDIATEANNULUS',QUAD_DOMINATED,FREE],
['FIBER-EXTANNULUS-RESTBOUNDED',QUAD_DOMINATED,FREE],
['MATRIX-INTANNULUS-RESTBOUNDED',TRI,FREE],
['MATRIX-INTERMEDIATEANNULUS',TRI,FREE],
['MATRIX-BODY',QUAD_DOMINATED,FREE]]
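# Structured QUAD meshes are enforced in the rings surrounding the crack tips because the
# ABAQUS contour-integral evaluation is only available on quadrilateral/hexahedral elements
# (see the MESH SIZE warning above); free TRI/QUAD_DOMINATED meshing is used elsewhere.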
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
regionSets.append(['FIBER-EXTANNULUS-CRACK',QUAD_DOMINATED,FREE])
regionSets.append(['FIBER-EXTANNULUS-UPPERCRACK-CTUP',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-SECONDBOUNDED-CTUP',QUAD_DOMINATED,FREE])
regionSets.append(['FIBER-EXTANNULUS-UPPERCRACK-CTLOW',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-SECONDBOUNDED-CTLOW',QUAD_DOMINATED,FREE])
regionSets.append(['MATRIX-INTANNULUS-CRACK',QUAD_DOMINATED,FREE])
regionSets.append(['MATRIX-INTANNULUS-UPPERCRACK-CTUP',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-SECONDBOUNDED-CTUP',TRI,FREE])
regionSets.append(['MATRIX-INTANNULUS-UPPERCRACK-CTLOW',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-SECONDBOUNDED-CTLOW',TRI,FREE])
else:
regionSets.append(['FIBER-EXTANNULUS-LOWERCRACK',QUAD_DOMINATED,FREE])
regionSets.append(['FIBER-EXTANNULUS-UPPERCRACK',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-FIRSTBOUNDED',QUAD,STRUCTURED])
regionSets.append(['FIBER-EXTANNULUS-SECONDBOUNDED',QUAD_DOMINATED,FREE])
regionSets.append(['MATRIX-INTANNULUS-LOWERCRACK',QUAD_DOMINATED,FREE])
regionSets.append(['MATRIX-INTANNULUS-UPPERCRACK',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-FIRSTBOUNDED',QUAD,STRUCTURED])
regionSets.append(['MATRIX-INTANNULUS-SECONDBOUNDED',TRI,FREE])
if 'boundingPly' in parameters['BC']['northSide']['type']:
regionSets.append(['BOUNDING-PLY',QUAD_DOMINATED,FREE])
if 'boundingPly' in parameters['BC']['rightSide']['type']:
regionSets.append(['RIGHT-HOMOGENIZED-CROSSPLY',QUAD_DOMINATED,FREE])
if 'boundingPly' in parameters['BC']['leftSide']['type']:
regionSets.append(['LEFT-HOMOGENIZED-CROSSPLY',QUAD_DOMINATED,FREE])
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
regionSets.append(['UPPER-FIBERS',QUAD_DOMINATED,FREE])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
regionSets.append(['RIGHT-FIBERS',QUAD_DOMINATED,FREE])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
regionSets.append(['LEFT-FIBERS',QUAD_DOMINATED,FREE])
for regionSet in regionSets:
assignMeshControls(model,'RVE-assembly',regionSet[0],regionSet[1],regionSet[2],logfilepath,baselogindent + 3*logindent,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# assign seeds
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Seeding edges ...',True)
regionSets = [['FIRSTCIRCLE',18],
['FIFTHCIRCLE',90]]
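# seedEdgeByNumber is a project helper; a minimal sketch of what it presumably wraps
# (argument names are assumptions):
# def seedEdgeByNumber(model,instanceName,setName,nEls,constraint,logfilepath,indent,toScreen):
#     edges = model.rootAssembly.instances[instanceName].sets[setName].edges
#     model.rootAssembly.seedEdgeByNumber(edges=edges, number=int(nEls), constraint=constraint)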
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
regionSets.append(['SECONDCIRCLE-UPPERCRACK-CTUP',nTangential])
regionSets.append(['SECONDCIRCLE-FIRSTBOUNDED-CTUP',nTangential])
regionSets.append(['THIRDCIRCLE-UPPERCRACK-CTUP',nTangential])
regionSets.append(['THIRDCIRCLE-FIRSTBOUNDED-CTUP',nTangential])
regionSets.append(['FOURTHCIRCLE-UPPERCRACK-CTUP',nTangential])
regionSets.append(['FOURTHCIRCLE-FIRSTBOUNDED-CTUP',nTangential])
regionSets.append(['SECONDCIRCLE-UPPERCRACK-CTLOW',nTangential])
regionSets.append(['SECONDCIRCLE-FIRSTBOUNDED-CTLOW',nTangential])
regionSets.append(['THIRDCIRCLE-UPPERCRACK-CTLOW',nTangential])
regionSets.append(['THIRDCIRCLE-FIRSTBOUNDED-CTLOW',nTangential])
regionSets.append(['FOURTHCIRCLE-UPPERCRACK-CTLOW',nTangential])
regionSets.append(['FOURTHCIRCLE-FIRSTBOUNDED-CTLOW',nTangential])
regionSets.append(['TRANSVERSALCUT-FIRSTFIBER-CTUP',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FIRSTMATRIX-CTUP',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-SECONDFIBER-CTUP',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-SECONDMATRIX-CTUP',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-THIRDFIBER-CTUP',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-THIRDMATRIX-CTUP',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-FIRSTFIBER-CTLOW',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FIRSTMATRIX-CTLOW',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-SECONDFIBER-CTLOW',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-SECONDMATRIX-CTLOW',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-THIRDFIBER-CTLOW',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-THIRDMATRIX-CTLOW',nRadialMatrix])
regionSets.append(['SECONDCIRCLE-SECONDBOUNDED-CTUP',nTangential1])
regionSets.append(['SECONDCIRCLE-SECONDBOUNDED-CTLOW',nTangential1])
regionSets.append(['SECONDCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['THIRDCIRCLE-SECONDBOUNDED-CTUP',nTangential1])
regionSets.append(['THIRDCIRCLE-SECONDBOUNDED-CTLOW',nTangential1])
regionSets.append(['THIRDCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['FOURTHCIRCLE-SECONDBOUNDED-CTUP',nTangential1])
regionSets.append(['FOURTHCIRCLE-SECONDBOUNDED-CTLOW',nTangential1])
regionSets.append(['FOURTHCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['TRANSVERSALCUT-FOURTHFIBER-CTUP',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FOURTHMATRIX-CTUP',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-FOURTHFIBER-CTLOW',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FOURTHMATRIX-CTLOW',nRadialMatrix])
regionSets.append(['SECONDCIRCLE-CRACK',nTangential3])
regionSets.append(['THIRDCIRCLE-CRACK',nTangential3])
regionSets.append(['FOURTHCIRCLE-CRACK',nTangential3])
if 'full' not in parameters['geometry']['fiber']['type']:
regionSets.append(['LOWERSIDE-CENTER',6])
regionSets.append(['LOWERSIDE-SECONDRING-RIGHT',nRadialFiber])
regionSets.append(['LOWERSIDE-THIRDRING-RIGHT',nRadialMatrix])
else:
regionSets.append(['SECONDCIRCLE-UPPERCRACK',nTangential])
regionSets.append(['SECONDCIRCLE-FIRSTBOUNDED',nTangential])
regionSets.append(['THIRDCIRCLE-UPPERCRACK',nTangential])
regionSets.append(['THIRDCIRCLE-FIRSTBOUNDED',nTangential])
regionSets.append(['FOURTHCIRCLE-UPPERCRACK',nTangential])
regionSets.append(['FOURTHCIRCLE-FIRSTBOUNDED',nTangential])
regionSets.append(['TRANSVERSALCUT-FIRSTFIBER',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FIRSTMATRIX',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-SECONDFIBER',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-SECONDMATRIX',nRadialMatrix])
regionSets.append(['TRANSVERSALCUT-THIRDFIBER',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-THIRDMATRIX',nRadialMatrix])
regionSets.append(['LOWERSIDE-SECONDRING-RIGHT',nRadialFiber])
regionSets.append(['LOWERSIDE-THIRDRING-RIGHT',nRadialMatrix])
regionSets.append(['SECONDCIRCLE-SECONDBOUNDED',nTangential1])
regionSets.append(['SECONDCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['THIRDCIRCLE-SECONDBOUNDED',nTangential1])
regionSets.append(['THIRDCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['FOURTHCIRCLE-SECONDBOUNDED',nTangential1])
regionSets.append(['FOURTHCIRCLE-RESTBOUNDED',nTangential2])
regionSets.append(['TRANSVERSALCUT-FOURTHFIBER',nRadialFiber])
regionSets.append(['TRANSVERSALCUT-FOURTHMATRIX',nRadialMatrix])
regionSets.append(['SECONDCIRCLE-LOWERCRACK',nTangential3])
regionSets.append(['THIRDCIRCLE-LOWERCRACK',nTangential3])
regionSets.append(['FOURTHCIRCLE-LOWERCRACK',nTangential3])
nFibersHorizontal = 1
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
nFibersHorizontal += parameters['BC']['rightSide']['nFibers']
for nFiber in range(0,parameters['BC']['rightSide']['nFibers']):
regionSets.append(['LOWERSIDE-RIGHT-FIBER'+str(nFiber+1),10])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
nFibersHorizontal += parameters['BC']['leftSide']['nFibers']
for nFiber in range(0,parameters['BC']['leftSide']['nFibers']):
regionSets.append(['LOWERSIDE-LEFT-FIBER'+str(nFiber+1),10])
regionSets.append(['UPPERSIDE',30*nFibersHorizontal])
if 'boundingPly' in parameters['BC']['northSide']['type']:
if 'adjacentFibers' in parameters['BC']['northSide']['type'] and parameters['BC']['northSide']['nFibers']>10:
regionSets.append(['RIGHTSIDE',int(np.floor(30*(1+10*math.log10(parameters['BC']['northSide']['nFibers']))))])
regionSets.append(['LEFTSIDE',int(np.floor(30*(1+10*math.log10(parameters['BC']['northSide']['nFibers']))))])
elif 'adjacentFibers' in parameters['BC']['northSide']['type'] and 'adjacentFibers' in parameters['BC']['rightSide']['type'] and 'adjacentFibers' in parameters['BC']['leftSide']['type'] and parameters['BC']['rightSide']['nFibers']>10 and parameters['BC']['leftSide']['nFibers']>10:
regionSets.append(['RIGHTSIDE',int(np.floor(30*(1+5*math.log10(parameters['BC']['northSide']['nFibers']))))])
regionSets.append(['LEFTSIDE',int(np.floor(30*(1+5*math.log10(parameters['BC']['northSide']['nFibers']))))])
else:
regionSets.append(['LOWER-RIGHTSIDE',30])
regionSets.append(['LOWER-LEFTSIDE',30])
regionSets.append(['UPPER-RIGHTSIDE',int(np.floor(30*(1+math.log10(tRatio))))])
regionSets.append(['UPPER-LEFTSIDE',int(np.floor(30*(1+math.log10(tRatio))))])
elif 'adjacentFibers' in parameters['BC']['northSide']['type'] and parameters['BC']['northSide']['nFibers']>10:
regionSets.append(['RIGHTSIDE',int(np.floor(30*(1+10*math.log10(parameters['BC']['northSide']['nFibers']))))])
regionSets.append(['LEFTSIDE',int(np.floor(30*(1+10*math.log10(parameters['BC']['northSide']['nFibers']))))])
elif 'adjacentFibers' in parameters['BC']['northSide']['type'] and 'adjacentFibers' in parameters['BC']['rightSide']['type'] and 'adjacentFibers' in parameters['BC']['leftSide']['type'] and parameters['BC']['rightSide']['nFibers']>10 and parameters['BC']['leftSide']['nFibers']>10:
regionSets.append(['RIGHTSIDE',int(np.floor(30*(1+5*math.log10(parameters['BC']['northSide']['nFibers']))))])
regionSets.append(['LEFTSIDE',int(np.floor(30*(1+5*math.log10(parameters['BC']['northSide']['nFibers']))))])
else:
regionSets.append(['RIGHTSIDE',30])
regionSets.append(['LEFTSIDE',30])
if 'adjacentFibers' in parameters['BC']['northSide']['type']:
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
regionSets.append(['INTERFACE-UPPER-FIBER-C'+str(nFiber+1),72])
if 'adjacentFibers' in parameters['BC']['rightSide']['type']:
for mFiber in range(0,parameters['BC']['rightSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
regionSets.append(['INTERFACE-UPPER-FIBER-R'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers'])),72])
if 'adjacentFibers' in parameters['BC']['leftSide']['type']:
for mFiber in range(0,parameters['BC']['leftSide']['nFibers']):
for nFiber in range(0,parameters['BC']['northSide']['nFibers']):
regionSets.append(['INTERFACE-UPPER-FIBER-L'+str(int(nFiber+1+mFiber*parameters['BC']['northSide']['nFibers'])),72])
for regionSet in regionSets:
seedEdgeByNumber(model,'RVE-assembly',regionSet[0],regionSet[1],FINER,logfilepath,baselogindent + 3*logindent,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# select element type
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Selecting and assigning element types ...',True)
if 'structuralModel' in parameters['mesh']['elements'].keys():
if 'generalizedPlaneStrain' in parameters['mesh']['elements']['structuralModel']:
if 'first' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPEG4, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPEG3, elemLibrary=STANDARD)
elif 'second' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPEG8, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPEG6, elemLibrary=STANDARD)
#elif 'generalizedPlaneStress' in parameters['mesh']['elements']['structuralModel']:
# if 'first' in parameters['mesh']['elements']['order']:
# elemType1 = mesh.ElemType(elemCode=CPE4, elemLibrary=STANDARD)
# elemType2 = mesh.ElemType(elemCode=CPE3, elemLibrary=STANDARD)
# elif 'second' in parameters['mesh']['elements']['order']:
# elemType1 = mesh.ElemType(elemCode=CPE8, elemLibrary=STANDARD)
# elemType2 = mesh.ElemType(elemCode=CPE6, elemLibrary=STANDARD)
elif 'planeStrain' in parameters['mesh']['elements']['structuralModel']:
if 'first' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPE4, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPE3, elemLibrary=STANDARD)
elif 'second' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPE8, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPE6, elemLibrary=STANDARD)
elif 'planeStress' in parameters['mesh']['elements']['structuralModel']:
if 'first' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPS4, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPS3, elemLibrary=STANDARD)
elif 'second' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPS8, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPS6, elemLibrary=STANDARD)
else:
if 'first' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPE4, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPE3, elemLibrary=STANDARD)
elif 'second' in parameters['mesh']['elements']['order']:
elemType1 = mesh.ElemType(elemCode=CPE8, elemLibrary=STANDARD)
elemType2 = mesh.ElemType(elemCode=CPE6, elemLibrary=STANDARD)
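# Element code legend: CPEG* = generalized plane strain, CPE* = plane strain, CPS* = plane
# stress; the trailing 4/3 denote first-order quads/tris, 8/6 their second-order counterparts.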
model.rootAssembly.setElementType(regions=(model.rootAssembly.instances['RVE-assembly'].sets['RVE']), elemTypes=(elemType1, elemType2))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# mesh part
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Meshing part ...',True)
localStart = timeit.default_timer()
model.rootAssembly.generateMesh(regions=(model.rootAssembly.instances['RVE-assembly'],))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Mesh creation time: ' + str(timeit.default_timer() - localStart) + ' [s]',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
mdb.save()
# extract mesh statistics
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Extracting mesh statistics ...',True)
meshStats = model.rootAssembly.getMeshStats(regions=(model.rootAssembly.instances['RVE-assembly'],))
modelData = {}
modelData['numNodes'] = meshStats.numNodes
modelData['numQuads'] = meshStats.numQuadElems
modelData['numTris'] = meshStats.numTriElems
modelData['numEls'] = meshStats.numQuadElems + meshStats.numTriElems
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Output
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating output requests ...',True)
# field output
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Field output ...',True)
for step in parameters['steps'].values():
model.FieldOutputRequest(name='F-Output-1',createStepName=step['name'],variables=('U','RF','S','E','EE','COORD',))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
# history output
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'History output ...',True)
for step in parameters['steps'].values():
model.HistoryOutputRequest(name='H-Output-1',createStepName=step['name'])
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
model.historyOutputRequests['H-Output-1'].setValues(contourIntegral='DebondUp',sectionPoints=DEFAULT,rebar=EXCLUDE,numberOfContours=parameters['Jintegral']['numberOfContours'])
# create the second history output request before configuring it (only H-Output-1 exists so far)
model.HistoryOutputRequest(name='H-Output-2',createStepName=step['name'])
model.historyOutputRequests['H-Output-2'].setValues(contourIntegral='DebondLow',sectionPoints=DEFAULT,rebar=EXCLUDE,numberOfContours=parameters['Jintegral']['numberOfContours'])
else:
model.historyOutputRequests['H-Output-1'].setValues(contourIntegral='Debond',sectionPoints=DEFAULT,rebar=EXCLUDE,numberOfContours=parameters['Jintegral']['numberOfContours'])
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#===============================================================================#
# Job creation
#===============================================================================#
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating and submitting job ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Set job name',True)
modelData['jobname'] = 'Job-Jintegral-' + modelname
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create job with name ' + modelData['jobname'],True)
mdb.Job(name=modelData['jobname'], model=modelname, description='', type=ANALYSIS, atTime=None, waitMinutes=0, waitHours=0, queue=None, memory=99, memoryUnits=PERCENTAGE, getMemoryFromAnalysis=True, explicitPrecision=SINGLE, nodalOutputPrecision=SINGLE, echoPrint=ON, modelPrint=ON, contactPrint=ON, historyPrint=ON, userSubroutine='',scratch='', multiprocessingMode=DEFAULT, numCpus=parameters['solver']['cpus'], numDomains=parameters['solver']['cpus'],numGPUs=0) # numDomains must be a multiple of numCpus, so it is tied to the requested cpu count
mdb.save()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write input file',True)
localStart = timeit.default_timer()
#mdb.jobs['Job-' + modelname].submit(consistencyChecking=OFF)
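# Only the input file is written here: the analysis is meant to be run on the modified
# VCCT/J-integral deck produced by modifyRVEinputfile() below, so submit() stays disabled.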
mdb.jobs[modelData['jobname']].writeInput(consistencyChecking=OFF)
mdb.jobs[modelData['jobname']].waitForCompletion()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Job time: ' + str(timeit.default_timer() - localStart) + ' [s]',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Closing database ...',True)
mdb.save()
mdb.close()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Exiting function: createRVE(parameters,logfilepath,logindent)',True)
return modelData
def modifyRVEinputfile(parameters,mdbData,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: modifyRVEinputfile(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
skipLineToLogFile(logfilepath,'a',True)
theta = parameters['geometry']['theta']
# odb name and path
#odbname = mdbData['jobname'] + '.odb'
#odbfullpath = join(parameters['wd'],odbname)
# input file name and path
inpname = mdbData['jobname'] + '.inp'
inpfullpath = join(parameters['input']['wd'],inpname)
# modified input file name
modinpname = 'Job-VCCTandJintegral-' + parameters['input']['modelname'] + '.inp'
modinpfullpath = join(parameters['input']['wd'],modinpname)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Working directory: ' + parameters['input']['wd'],True)
#writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ODB database name: ' + odbname,True)
#writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ODB database full path: ' + join(parameters['wd'],odbname),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Input file name: ' + inpname,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Input file full path: ' + join(parameters['input']['wd'],inpname),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Modified input file name: ' + modinpname,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Modified input file full path: ' + join(parameters['input']['wd'],modinpname),True)
createABQinpfile(modinpname)
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading nodes and saving to dictionary ...',True)
nodes = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
nodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
store = False
break
elif store == True:
nodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
elif ('*Node' in line or '*NODE' in line) and len(inpfilelines[l+1].replace('\n','').split(','))==3:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
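# The scan above relies on the flat .inp layout written by ABAQUS, where node definitions
# follow the keyword line until the next '*' card, e.g.:
# *Node
#       1,  0.25,  0.0
#       2,  0.50,  0.0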
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading quadrilateral elements and saving to dictionary ...',True)
quads = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
quadIndex = int(line.replace('\n','').split(',')[0])
quads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
quads[quadIndex].append(int(node))
store = False
break
elif store == True:
quadIndex = int(line.replace('\n','').split(',')[0])
quads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
quads[quadIndex].append(int(node))
elif ('*Element, type=CPE8' in line or '*ELEMENT, type=CPE8' in line or '*Element, type=CPE4' in line or '*ELEMENT, type=CPE4' in line) and (len(inpfilelines[l+1].replace('\n','').split(','))==5 or len(inpfilelines[l+1].replace('\n','').split(','))==9):
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
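# Quadrilateral connectivities follow the same pattern, one element per line:
# *Element, type=CPE8
#   eid, n1, n2, n3, n4, n5, n6, n7, n8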
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack tip sets and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIPUP','cracktipup']:
cracktipupIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIPLOW','cracktiplow']:
cracktiplowIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack tip set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIP','cracktip']:
cracktipIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack faces node set and saving to list ...',True)
crackfacesNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACK','crack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
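# The node-set readers below all repeat the same scan; a generic helper could express the
# pattern once (illustrative sketch only, kept as a comment to preserve the original flow):
# def readNset(inpfilelines,names):
#     indices, store = [], False
#     for l,line in enumerate(inpfilelines):
#         if store:
#             indices += [int(i) for i in line.replace('\n','').split(',') if i.strip()!='']
#             if '*' in inpfilelines[l+1]:
#                 break
#         elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in names:
#             store = True
#     return indices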
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north side node set and saving to list ...',True)
northSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
northSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
northSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['UPPERSIDE','upperside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading right side node set and saving to list ...',True)
rightSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
rightSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
rightSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['RIGHTSIDE','rightside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading left side node set and saving to list ...',True)
leftSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
leftSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
leftSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['LEFTSIDE','leftside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north-east corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['NE-CORNER','ne-corner']:
northeastIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north-west corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['NW-CORNER','nw-corner']:
northwestIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading south-east corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['SE-CORNER','se-corner']:
southeastIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading south-west corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['SW-CORNER','sw-corner']:
southwestIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack faces element set and saving to list ...',True)
crackfacesElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACK','crack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading fiber node set and saving to list ...',True)
fiberNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER','fiber']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading matrix node set and saving to list ...',True)
matrixNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX','matrix']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading fiber element set and saving to list ...',True)
fiberElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER','fiber']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading matrix element set and saving to list ...',True)
matrixElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX','matrix']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK-CTUP and saving to list ...',True)
fiberExtannUppcrackCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTUP','fiber-extannulus-uppercrack-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTUP','fiber-extannulus-uppercrack-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK-CTLOW and saving to list ...',True)
fiberExtannUppcrackCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTLOW','fiber-extannulus-uppercrack-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTLOW','fiber-extannulus-uppercrack-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK and saving to list ...',True)
fiberExtannUppcrackElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK','fiber-extannulus-uppercrack'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK','fiber-extannulus-uppercrack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
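# The GENERATE variant handled above stores 'first, last, increment' on a single
# data line instead of explicit labels. A minimal sketch of the expansion used in
# the branches above (hypothetical helper, not used below):
def expandGenerateSketch(dataline):
    first, last, delta = [int(v) for v in dataline.replace('\n', '').split(',')[:3]]
    return list(range(first, last + delta, delta))
# e.g. expandGenerateSketch('1, 7, 2\n') -> [1, 3, 5, 7]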
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP and saving to list ...',True)
fiberExtannFirstbounCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP','fiber-extannulus-firstbounded-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP','fiber-extannulus-firstbounded-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW and saving to list ...',True)
fiberExtannFirstbounCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW','fiber-extannulus-firstbounded-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW','fiber-extannulus-firstbounded-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED and saving to list ...',True)
fiberExtannFirstbounElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED','fiber-extannulus-firstbounded'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED','fiber-extannulus-firstbounded']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK-CTUP and saving to list ...',True)
matrixIntannUppcrackCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTUP','matrix-intannulus-uppercrack-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTUP','matrix-intannulus-uppercrack-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK-CTLOW and saving to list ...',True)
matrixIntannUppcrackCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTLOW','matrix-intannulus-uppercrack-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTLOW','matrix-intannulus-uppercrack-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK and saving to list ...',True)
matrixIntannUppcrackElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK','matrix-intannulus-uppercrack'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK','matrix-intannulus-uppercrack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP and saving to list ...',True)
matrixIntannFirstbounCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP','matrix-intannulus-firstbounded-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP','matrix-intannulus-firstbounded-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW and saving to list ...',True)
matrixIntannFirstbounCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW','matrix-intannulus-firstbounded-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW','matrix-intannulus-firstbounded-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED and saving to list ...',True)
matrixIntannFirstbounElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED','matrix-intannulus-firstbounded'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED','matrix-intannulus-firstbounded']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-WITHOUT-CORNERS ...',True)
northSideWithoutCornersNodeset = []
for node in northSideNodeset:
if not node in [northeastIndex,northwestIndex]:
northSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-CENTER ...',True)
for node in northSideNodeset:
if nodes[node][0]==0.0:
northSideCenter = node
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Node ' + str(northSideCenter) + ' is at the center of the NORTH boundary',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-POSSIDE ...',True)
northSidePosSide = []
for node in northSideNodeset:
if nodes[node][0]>0.0:
northSidePosSide.append(node)
northSidePosSideCoords = [nodes[i][0] for i in northSidePosSide]
northSidePosSide = np.array(northSidePosSide)[np.argsort(northSidePosSideCoords)].tolist()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Set northSidePosSide contains ' + str(len(northSidePosSide)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-NEGSIDE ...',True)
northSideNegSide = []
for node in northSideNodeset:
if nodes[node][0]<0.0:
northSideNegSide.append(node)
northSideNegSideCoords = [nodes[i][0] for i in northSideNegSide]
northSideNegSide = np.array(northSideNegSide)[np.argsort(northSideNegSideCoords)].tolist()
northSideNegSide = northSideNegSide[::-1]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Set northSideNegSide contains ' + str(len(northSideNegSide)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
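# Both boundary lists above are ordered with np.argsort on the x-coordinates:
# ascending on the positive side and reversed on the negative side, so that both
# run outward from the center. A minimal sketch of the idiom (hypothetical helper,
# not used below):
def sortNodesByCoordSketch(nodeIds, coords, descending=False):
    ordered = np.array(nodeIds)[np.argsort(coords)].tolist()
    return ordered[::-1] if descending else ordered
# e.g. sortNodesByCoordSketch([12, 7, 31], [0.5, 1.5, 1.0]) -> [12, 31, 7]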
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set RIGHT-SIDE-WITHOUT-CORNERS ...',True)
rightSideWithoutCornersNodeset = []
for node in rightSideNodeset:
if not node in [northeastIndex,southeastIndex]:
rightSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set LEFT-SIDE-WITHOUT-CORNERS ...',True)
leftSideWithoutCornersNodeset = []
for node in leftSideNodeset:
if not node in [southwestIndex,northwestIndex]:
leftSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Insert new coincident node(s) at the crack tip and create dummy node(s) ...',True)
numNodes = mdbData['numNodes']
numEls = mdbData['numEls']
numQuads = mdbData['numQuads']
numTris = mdbData['numTris']
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of nodes = ' + str(numNodes),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of elements = ' + str(numEls),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of quadrilateral elements = ' + str(numQuads),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of triangular elements = ' + str(numTris),True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Index of current crack tip nodes: ' + str(cracktipUPIndex) + ', ' + str(cracktipLOWIndex),True)
matrixCracktipUPIndex = numNodes + 1000
cracktipUPDummyIndex = numNodes + 1000 + 1
matrixCracktipLOWIndex = numNodes + 1000 + 50
cracktipLOWDummyIndex = numNodes + 1000 + 50 + 1
nodes[matrixCracktipUPIndex] = [nodes[cracktipUPIndex][0],nodes[cracktipUPIndex][1]]
nodes[cracktipUPDummyIndex] = [-5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
nodes[matrixCracktipLOWIndex] = [nodes[cracktipLOWIndex][0],nodes[cracktipLOWIndex][1]]
nodes[cracktipLOWDummyIndex] = [-5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix upper crack tip node with index ' + str(matrixCracktipUPIndex) + ' and coordinates (' + str(nodes[cracktipUPIndex][0]) + ', '+ str(nodes[cracktipUPIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix upper crack tip dummy node with index ' + str(cracktipUPDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix lower crack tip node with index ' + str(matrixCracktipLOWIndex) + ' and coordinates (' + str(nodes[cracktipLOWIndex][0]) + ', '+ str(nodes[cracktipLOWIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix lower crack tip dummy node with index ' + str(cracktipLOWDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-20*parameters['geometry']['Rf']) + ')',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Index of current crack tip node: ' + str(cracktipIndex),True)
matrixCracktipIndex = numNodes + 1000
cracktipDummyIndex = numNodes + 1000 + 1
nodes[matrixCracktipIndex] = [nodes[cracktipIndex][0],nodes[cracktipIndex][1]]
nodes[cracktipDummyIndex] = [-5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix crack tip node with index ' + str(matrixCracktipIndex) + ' and coordinates (' + str(nodes[cracktipIndex][0]) + ', '+ str(nodes[cracktipIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix dummy node with index ' + str(cracktipDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the upper crack tip',True)
fiberElswithCracktipUP = []
matrixElswithCracktipUP = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
fiberElswithCracktipUP.append(element)
firstdebondedFiberElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounCtUpElementset)-1,-1,-1):
element = fiberExtannFirstbounCtUpElementset[e]
if element in quads.keys():
if cracktipUPIndex in quads[element]:
fiberElswithCracktipUP.append(element)
firstboundedFiberElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
matrixElswithCracktipUP.append(element)
firstdebondedMatrixElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
matrixElswithCracktipUP.append(element)
firstboundedMatrixElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the lower crack tip',True)
fiberElswithCracktipLOW = []
matrixElswithCracktipLOW = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
fiberElswithCracktipLOW.append(element)
firstdebondedFiberElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounCtLowElementset)-1,-1,-1):
element = fiberExtannFirstbounCtLowElementset[e]
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
fiberElswithCracktipLOW.append(element)
firstboundedFiberElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
matrixElswithCracktipLOW.append(element)
firstdebondedMatrixElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
matrixElswithCracktipLOW.append(element)
firstboundedMatrixElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the crack tip',True)
fiberElswithCracktip = []
matrixElswithCracktip = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
fiberElswithCracktip.append(element)
firstdebondedFiberEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounElementset)-1,-1,-1):
element = fiberExtannFirstbounElementset[e]
if element in quads.keys():
if cracktipIndex in quads[element]:
fiberElswithCracktip.append(element)
firstboundedFiberEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
matrixElswithCracktip.append(element)
firstdebondedMatrixEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
matrixElswithCracktip.append(element)
firstboundedMatrixEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Second order elements are used',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
matrixFirstBehindCracktipUPIndex = numNodes + 1000 + 2
firstBehindCracktipUPDummyIndex = numNodes + 1000 + 3
matrixFirstBehindCracktipLOWIndex = numNodes + 1000 + 50 + 2
firstBehindCracktipLOWDummyIndex = numNodes + 1000 + 50 + 3
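# NOTE: under the 'inverseSquareRoot' singularity option, second-behind-crack-tip
# indices are referenced further below but are not defined in this section. The
# following definitions are an assumption that extends the +2/+3 offset convention
# used above for the first-behind nodes:
matrixSecondBehindCracktipUPIndex = numNodes + 1000 + 4
secondBehindCracktipUPDummyIndex = numNodes + 1000 + 5
matrixSecondBehindCracktipLOWIndex = numNodes + 1000 + 50 + 4
secondBehindCracktipLOWDummyIndex = numNodes + 1000 + 50 + 5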
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind upper crack tip node with index ' + str(matrixFirstBehindCracktipUPIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating upper crack tip dummy node with index ' + str(firstBehindCracktipUPDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind lower crack tip node with index ' + str(matrixFirstBehindCracktipLOWIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating lower crack tip dummy node with index ' + str(firstBehindCracktipLOWDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bonded upper crack tip elements on fiber and matrix',True)
commonNodesUP = []
fiberElnodesUP = quads[firstboundedFiberElUP]
matrixElnodesUP = quads[firstboundedMatrixElUP]
for node in fiberElnodesUP:
if node in matrixElnodesUP:
commonNodesUP.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodesUP)==3:
break
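# The loop above collects the nodes shared by the bonded fiber and matrix elements
# at the tip (three for a second-order edge). An equivalent, loop-free sketch of the
# same intersection (hypothetical helper, not used here):
def commonNodesSketch(quads, elA, elB):
    bNodes = set(quads[elB])
    return [n for n in quads[elA] if n in bNodes]  # preserves elA's node ordering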
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bonded nodes from upper cracktip',True)
distancesUP = []
for node in commonNodesUP:
if node != cracktipUPIndex:
distancesUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
else:
distancesUP.append(0.0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipUPIndex = commonNodesUP[np.argsort(distancesUP)[-2]] # argsort is ascending, so [-2] is the mid-distance node, i.e. the first node behind the tip
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipUPIndex = commonNodesUP[np.argsort(distancesUP)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix upper crack tip node with index ' + str(matrixFirstBehindCracktipUPIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipUPIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipUPIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating upper crack tip dummy node with index ' + str(firstBehindCracktipUPDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipUPIndex] = [nodes[fiberFirstBehindCracktipUPIndex][0],nodes[fiberFirstBehindCracktipUPIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipUPIndex] = [nodes[fiberSecondBehindCracktipUPIndex][0],nodes[fiberSecondBehindCracktipUPIndex][1]]
nodes[firstBehindCracktipUPDummyIndex] = [5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipUPDummyIndex] = [5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bonded lower crack tip elements on fiber and matrix',True)
commonNodesLOW = []
fiberElnodesLOW = quads[firstboundedFiberElLOW]
matrixElnodesLOW = quads[firstboundedMatrixElLOW]
for node in fiberElnodesLOW:
if node in matrixElnodesLOW:
commonNodesLOW.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodesLOW)==3:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bonded nodes from lower cracktip',True)
distancesLOW = []
for node in commonNodesLOW:
if node != cracktipLOWIndex:
distancesLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
else:
distancesLOW.append(0.0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipLOWIndex = commonNodesLOW[np.argsort(distancesLOW)[-2]] # argsort is ascending, so [-2] is the mid-distance node, i.e. the first node behind the tip
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipLOWIndex = commonNodesLOW[np.argsort(distancesLOW)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix lower crack tip node with index ' + str(matrixFirstBehindCracktipLOWIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipLOWIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipLOWIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating lower crack tip dummy node with index ' + str(firstBehindCracktipLOWDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-20*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipLOWIndex] = [nodes[fiberFirstBehindCracktipLOWIndex][0],nodes[fiberFirstBehindCracktipLOWIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipLOWIndex] = [nodes[fiberSecondBehindCracktipLOWIndex][0],nodes[fiberSecondBehindCracktipLOWIndex][1]]
nodes[firstBehindCracktipLOWDummyIndex] = [5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipLOWDummyIndex] = [5*parameters['geometry']['Rf'],-40*parameters['geometry']['Rf']]
else:
matrixFirstBehindCracktipIndex = numNodes + 1000 + 2
firstBehindCracktipDummyIndex = numNodes + 1000 + 3
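# NOTE: as in the branch above, the second-behind indices used under the
# 'inverseSquareRoot' option are not defined in this section; the following
# definitions are an assumption extending the established offset convention:
matrixSecondBehindCracktipIndex = numNodes + 1000 + 4
secondBehindCracktipDummyIndex = numNodes + 1000 + 5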
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind crack tip node with index ' + str(matrixFirstBehindCracktipIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix dummy node with index ' + str(firstBehindCracktipDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bonded crack tip elements on fiber and matrix',True)
commonNodes = []
fiberElnodes = quads[firstboundedFiberEl]
matrixElnodes = quads[firstboundedMatrixEl]
for node in fiberElnodes:
if node in matrixElnodes:
commonNodes.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodes)==3:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bonded nodes from cracktip',True)
distances = []
for node in commonNodes:
if node != cracktipIndex:
distances.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
else:
distances.append(0.0)
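# The Euclidean distances above are spelled out termwise; a minimal helper sketch
# of the same computation (hypothetical, not used below):
def nodeDistanceSketch(nodes, a, b):
    return np.sqrt((nodes[a][0] - nodes[b][0])**2 + (nodes[a][1] - nodes[b][1])**2)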
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipIndex = commonNodes[np.argsort(distances)[-2]] # argsort is ascending, so [-2] is the mid-distance node, i.e. the first node behind the tip
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipIndex = commonNodes[np.argsort(distances)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix crack tip node with index ' + str(matrixFirstBehindCracktipIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix dummy node with index ' + str(firstBehindCracktipDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipIndex] = [nodes[fiberFirstBehindCracktipIndex][0],nodes[fiberFirstBehindCracktipIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipIndex] = [nodes[fiberSecondBehindCracktipIndex][0],nodes[fiberSecondBehindCracktipIndex][1]]
nodes[firstBehindCracktipDummyIndex] = [5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipDummyIndex] = [5*parameters['geometry']['Rf'],-40*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Identify nodes on crack faces for displacement measurements ...',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the upper crack tip',True)
nodesAroundCracktipUP = quads[firstdebondedFiberElUP]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeasUP = []
for node in nodesAroundCracktipUP:
if node in crackfacesNodeset and node!=cracktipUPIndex:
nodesFiberDisplacementMeasUP.append(node)
if len(nodesFiberDisplacementMeasUP)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeasUP)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesFiberDisplacementMeasUP = []
for node in nodesFiberDisplacementMeasUP:
distancesFiberDisplacementMeasUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the upper crack tip',True)
nodesAroundCracktipUP = quads[firstdebondedMatrixElUP]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeasUP = []
for node in nodesAroundCracktipUP:
if node in crackfacesNodeset and node!=cracktipUPIndex:
nodesMatrixDisplacementMeasUP.append(node)
if len(nodesMatrixDisplacementMeasUP)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeasUP)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from upper cracktip',True)
distancesMatrixDisplacementMeasUP = []
for node in nodesMatrixDisplacementMeasUP:
distancesMatrixDisplacementMeasUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndecesUP = np.argsort(distancesFiberDisplacementMeasUP)
sortedMatrixDistanceIndecesUP = np.argsort(distancesMatrixDisplacementMeasUP)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndecesUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndecesUP),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-1]]
firstBehindCracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-2]]
cracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-1]]
firstBehindCracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndexUP),True)
else:
cracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-1]]
cracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the lower crack tip',True)
nodesAroundCracktipLOW = quads[firstdebondedFiberElLOW]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeasLOW = []
for node in nodesAroundCracktipLOW:
if node in crackfacesNodeset and node!=cracktipLOWIndex:
nodesFiberDisplacementMeasLOW.append(node)
if len(nodesFiberDisplacementMeasLOW)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeasLOW)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesFiberDisplacementMeasLOW = []
for node in nodesFiberDisplacementMeasLOW:
distancesFiberDisplacementMeasLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the lower crack tip',True)
nodesAroundCracktipLOW = quads[firstdebondedMatrixElLOW]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeasLOW = []
for node in nodesAroundCracktipLOW:
if node in crackfacesNodeset and node!=cracktipLOWIndex:
nodesMatrixDisplacementMeasLOW.append(node)
if len(nodesMatrixDisplacementMeasLOW)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeasLOW)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from lower cracktip',True)
distancesMatrixDisplacementMeasLOW = []
for node in nodesMatrixDisplacementMeasLOW:
distancesMatrixDisplacementMeasLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndecesLOW = np.argsort(distancesFiberDisplacementMeasLOW)
sortedMatrixDistanceIndecesLOW = np.argsort(distancesMatrixDisplacementMeasLOW)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndecesLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndecesLOW),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-1]]
firstBehindCracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-2]]
cracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-1]]
firstBehindCracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndexLOW),True)
else:
cracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-1]]
cracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-1]]
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the crack tip',True)
nodesAroundCracktip = quads[firstdebondedFiberEl]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeas = []
for node in nodesAroundCracktip:
if node in crackfacesNodeset and node!=cracktipIndex:
nodesFiberDisplacementMeas.append(node)
if len(nodesFiberDisplacementMeas)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeas)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesFiberDisplacementMeas = []
for node in nodesFiberDisplacementMeas:
distancesFiberDisplacementMeas.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the crack tip',True)
nodesAroundCracktip = quads[firstdebondedMatrixEl]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeas = []
for node in nodesAroundCracktip:
if node in crackfacesNodeset and node!=cracktipIndex:
nodesMatrixDisplacementMeas.append(node)
if len(nodesMatrixDisplacementMeas)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeas)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesMatrixDisplacementMeas = []
for node in nodesMatrixDisplacementMeas:
distancesMatrixDisplacementMeas.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndeces = np.argsort(distancesFiberDisplacementMeas)
sortedMatrixDistanceIndeces = np.argsort(distancesMatrixDisplacementMeas)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndeces),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndeces),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-1]]
firstBehindCracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-2]]
cracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-1]]
firstBehindCracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndex),True)
else:
cracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-1]]
cracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at upper crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == cracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixCracktipUPIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind upper crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == fiberFirstBehindCracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixFirstBehindCracktipUPIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == fiberSecondBehindCracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixSecondBehindCracktipUPIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new upper crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixElUP]):
if node == cracktipUPIndex:
quads[firstdebondedMatrixElUP][n] = matrixCracktipUPIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at lower crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == cracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixCracktipLOWIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind lower crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == fiberFirstBehindCracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixFirstBehindCracktipLOWIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == fiberSecondBehindCracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixSecondBehindCracktipLOWIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new lower crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixElLOW]):
if node == cracktipLOWIndex:
quads[firstdebondedMatrixElLOW][n] = matrixCracktipLOWIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == cracktipIndex:
quads[firstboundedMatrixEl][n] = matrixCracktipIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == fiberFirstBehindCracktipIndex:
quads[firstboundedMatrixEl][n] = matrixFirstBehindCracktipIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == fiberSecondBehindCracktipIndex:
quads[firstboundedMatrixEl][n] = matrixSecondBehindCracktipIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixEl]):
if node == cracktipIndex:
quads[firstdebondedMatrixEl][n] = matrixCracktipIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
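# Each reassignment above rewrites one node label inside one element's connectivity.
# A minimal sketch of the recurring idiom as a helper (hypothetical, not used here):
def replaceNodeInElementSketch(quads, element, oldIndex, newIndex):
    for n, node in enumerate(quads[element]):
        if node == oldIndex:
            quads[element][n] = newIndex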
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Find set of debonded elements on fiber and on matrix ...',True)
crackfaceFiberElementset = []
crackfaceMatrixElementset = []
for element in crackfacesElementset:
if element in fiberElementset:
crackfaceFiberElementset.append(element)
else:
crackfaceMatrixElementset.append(element)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Find set of debonded nodes on fiber and on matrix ...',True)
crackfaceFiberNodeset = []
crackfaceMatrixNodeset = []
for node in crackfacesNodeset:
if node in fiberNodeset:
crackfaceFiberNodeset.append(node)
else:
crackfaceMatrixNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Writing new input file ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify node section ...',True)
started = False
for l,line in enumerate(inpfilelines):
if started and '*' in line:
nodeSecStop = l-1
break
elif ('*Node' in line or '*NODE' in line) and len(inpfilelines[l+1].replace('\n','').split(',')) == 3:
nodeSecStart = l
started = True
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Node section begins at line ' + str(nodeSecStart) + ' and ends at line ' + str(nodeSecStop),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify quadrilateral element section ...',True)
started = False
for l,line in enumerate(inpfilelines):
if started and '*' in line:
elementSecStop = l-1
break
elif ('*Element, type=CPE8' in line or '*ELEMENT, type=CPE8' in line or '*Element, type=CPE4' in line or '*ELEMENT, type=CPE4' in line) and (len(inpfilelines[l+1].replace('\n','').split(','))==5 or len(inpfilelines[l+1].replace('\n','').split(','))==9):
elementSecStart = l
started = True
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Element section begins at line ' + str(elementSecStart) + ' and ends at line ' + str(elementSecStop),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
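# The node and element sections above are located by matching their keyword line and
# stopping at the next line that opens a keyword. A generic sketch of the same scan
# (hypothetical helper, assuming zero-based line indices and that the section exists):
def findSectionSketch(lines, isHeader):
    start, stop = None, None
    started = False
    for l, line in enumerate(lines):
        if started and '*' in line:
            stop = l - 1
            break
        elif isHeader(l, line):
            start = l
            started = True
    return start, stop
# e.g. findSectionSketch(inpfilelines, lambda l, line: '*NODE' in line.upper())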
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify end of assembly section ...',True)
for l,line in enumerate(inpfilelines):
if '*End Assembly' in line or '*END ASSEMBLY' in line:
endAssembly = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of thermal step section ...',True)
for l,line in enumerate(inpfilelines):
if '*Step, name=Temp-Step' in line or '*STEP, NAME=TEMP-STEP' in line:
startTempStep = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of mechanical step section ...',True)
for l,line in enumerate(inpfilelines):
if '*Step, name=Load-Step' in line or '*STEP, NAME=LOAD-STEP' in line:
startLoadStep = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of thermal contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if ('*CONTOUR INTEGRAL' in line or '*Contour Integral' in line) and l>startTempStep and l<startLoadStep:
startTempCI = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of mechanical contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if ('*CONTOUR INTEGRAL' in line or '*Contour Integral' in line) and l>startLoadStep:
startLoadCI = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of boundary conditions section ...',True)
for l,line in enumerate(inpfilelines):
if '** BOUNDARY CONDITIONS' in line or '** Boundary Conditions' in line:
startBC = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if '*CONTOUR INTEGRAL' in line or '*Contour Integral' in line:
startCI = l
endCI = l+1
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[:nodeSecStart]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write nodes ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NODE' + '\n')
for node in nodes.keys():
line = str(node)
for coord in nodes[node]:
line += ', ' + str(coord)
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[nodeSecStop+1:elementSecStart]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write quadrilateral elements ...',True)
with open(modinpfullpath,'a') as inp:
inp.write(inpfilelines[elementSecStart])
for quad in quads.keys():
line = str(quad)
for node in quads[quad]:
line += ', ' + str(node)
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[elementSecStop+1:endAssembly]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write crack faces node and element sets ...',True)
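# Label-chunking used for all set listings below: a data line is flushed after
# every eighth label (the flushed line actually carries nine labels, since the
# triggering entry is appended before the write). Abaqus *NSET/*ELSET data
# lines accept up to 16 labels, so this stays within the format limit.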
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKFACE-NODES, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(crackfaceFiberNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKFACE-NODES, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(crackfaceMatrixNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*ELSET, ELSET=FIBER-CRACKFACE-ELEMENTS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,element in enumerate(crackfaceFiberElementset):
if n>0 and n%8==0:
line += ' ' + str(element)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(element) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*ELSET, ELSET=MATRIX-CRACKFACE-ELEMENTS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,element in enumerate(crackfaceMatrixElementset):
if n>0 and n%8==0:
line += ' ' + str(element)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(element) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write VCCT and J-integral node sets ...',True)
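# Two naming schemes follow: with a non-zero fiber orientation (theta) or a
# 'full' fiber there are two crack tips (UP/LOW), each with its own crack-tip,
# displacement-measurement and dummy-node sets; otherwise a single CRACKTIP
# family is written. The FIRSTBOUNDED/SECONDBOUNDED sets (second-order
# elements / inverse-square-root singularity) name the first and second bonded
# nodes behind the crack tip.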
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKTIPUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPUP-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPIndex) + ', ' + str(matrixCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPUP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPUP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPLOW-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWIndex) + ', ' + str(matrixCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPLOW-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndexLOW) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPLOW-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndexLOW) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEASUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEASUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEASLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndexLOW) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEASLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndexLOW) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWDummyIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIRSTBOUNDEDUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=FIRSTBOUNDEDLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipLOWDummyIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=SECONDBOUNDEDUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=SECONDBOUNDEDLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipLOWDummyIndex) + '\n')
else:
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKTIP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIP-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipIndex) + ', ' + str(matrixCracktipIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipDummyIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIRSTBOUNDED-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipDummyIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=SECONDBOUNDED-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipDummyIndex) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write right side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=RIGHTSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(rightSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write left side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=LEFTSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(leftSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write corner and north side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=SOUTHWEST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(southwestIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=SOUTHEAST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(southeastIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-NECORNER, INSTANCE=RVE-assembly' + '\n')
line = ' ' + str(northwestIndex) + ','
for n,node in enumerate(northSideWithoutCornersNodeset):
if (n+1)%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-NWCORNER, INSTANCE=RVE-assembly' + '\n')
line = ' ' + str(northeastIndex) + ','
for n,node in enumerate(northSideWithoutCornersNodeset):
if (n+1)%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHWEST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northwestIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHEAST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northeastIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-CENTER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northSideCenter) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-POSSIDE, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSidePosSide):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-NEGSIDE, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSideNegSide):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'ulinearCoupling' in parameters['BC']['northSide']['type'] or 'vkinCouplingmeanside' in parameters['BC']['northSide']['type']:
with open(modinpfullpath,'a') as inp:
for n,node in enumerate(northSideWithoutCornersNodeset):
inp.write('*NSET, NSET=NORTHSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
if 'antisymmetry' in parameters['BC']['northSide']['type']:
with open(modinpfullpath,'a') as inp:
for n,node in enumerate(northSidePosSide):
inp.write('*NSET, NSET=NORTHSIDE-POSSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
for n,node in enumerate(northSideNegSide):
inp.write('*NSET, NSET=NORTHSIDE-NEGSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write equation definitions ...',True)
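# Each *EQUATION below ties a fiber-side DOF to its matrix-side twin through a
# dummy node: u_fiber - u_matrix - u_dummy = 0 for DOFs 1 and 2. The dummy
# nodes are encastred in the step definitions further down, so the crack-tip
# node pairs behave as bonded while the reaction forces at the dummy nodes
# expose the tie forces needed for VCCT.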
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPUP,1,1,MATRIX-CRACKTIPUP,1,-1,CRACKTIPUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPLOW,1,1,MATRIX-CRACKTIPLOW,1,-1,CRACKTIPLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPUP,2,1,MATRIX-CRACKTIPUP,2,-1,CRACKTIPUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPLOW,2,1,MATRIX-CRACKTIPLOW,2,-1,CRACKTIPLOW-DUMMY-NODE,2,-1' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDUP,1,1,MATRIX-NODE-FIRSTBOUNDEDUP,1,-1,FIRSTBOUNDEDUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDLOW,1,1,MATRIX-NODE-FIRSTBOUNDEDLOW,1,-1,FIRSTBOUNDEDLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDUP,2,1,MATRIX-NODE-FIRSTBOUNDEDUP,2,-1,FIRSTBOUNDEDUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDLOW,2,1,MATRIX-NODE-FIRSTBOUNDEDLOW,2,-1,FIRSTBOUNDEDLOW-DUMMY-NODE,2,-1' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDUP,1,1,MATRIX-NODE-SECONDBOUNDEDUP,1,-1,SECONDBOUNDEDUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDLOW,1,1,MATRIX-NODE-SECONDBOUNDEDLOW,1,-1,SECONDBOUNDEDLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDUP,2,1,MATRIX-NODE-SECONDBOUNDEDUP,2,-1,SECONDBOUNDEDUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDLOW,2,1,MATRIX-NODE-SECONDBOUNDEDLOW,2,-1,SECONDBOUNDEDLOW-DUMMY-NODE,2,-1' + '\n')
else:
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIP,1,1,MATRIX-CRACKTIP,1,-1,CRACKTIP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIP,2,1,MATRIX-CRACKTIP,2,-1,CRACKTIP-DUMMY-NODE,2,-1' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDED,1,1,MATRIX-NODE-FIRSTBOUNDED,1,-1,FIRSTBOUNDED-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDED,2,1,MATRIX-NODE-FIRSTBOUNDED,2,-1,FIRSTBOUNDED-DUMMY-NODE,2,-1' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDED,1,1,MATRIX-NODE-SECONDBOUNDED,1,-1,SECONDBOUNDED-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDED,2,1,MATRIX-NODE-SECONDBOUNDED,2,-1,SECONDBOUNDED-DUMMY-NODE,2,-1' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
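# North-side boundary-condition variants (selected via
# parameters['BC']['northSide']['type']):
#   vgeomCoupling           -> SLIDER MPC along the upper side
#   vkinrightCoupling       -> *KINEMATIC COUPLING, ref node = NE corner
#   vkinleftCoupling        -> *KINEMATIC COUPLING, ref node = NW corner
#   vkinCouplingmeancorners -> equations tying v to the NW/NE corner mean
#   vkinCouplingmeanside    -> equations tying v to the mean over all nodes
#   antisymmetry            -> pairwise +/- equations about the side center
#   ulinearCoupling         -> u varies linearly with x via the NE corner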
if 'vgeomCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: geometric coupling',True)
with open(modinpfullpath,'a') as inp:
inp.write('*MPC' + '\n')
inp.write(' SLIDER, UPPERSIDE-WITHOUT-CORNERS, NORTHWEST-CORNER, NORTHEAST-CORNER' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinrightCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: kinematic coupling with north-east corner as reference node',True)
with open(modinpfullpath,'a') as inp:
inp.write('*KINEMATIC COUPLING, REF NODE = NORTHEAST-CORNER' + '\n')
inp.write(' UPPERSIDE-WITHOUT-NECORNER, 2' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinleftCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: kinematic coupling with north-west corner as reference node',True)
with open(modinpfullpath,'a') as inp:
inp.write('*KINEMATIC COUPLING, REF NODE = NORTHWEST-CORNER' + '\n')
inp.write(' UPPERSIDE-WITHOUT-NWCORNER, 2' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinCouplingmeancorners' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: nw and ne vertical displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' NORTHWEST-CORNER, 2, 1, NORTHEAST-CORNER, 2, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' UPPERSIDE-WITHOUT-CORNERS, 2, 1, NORTHWEST-CORNER, 2, -0.5, NORTHEAST-CORNER, 2, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinCouplingmeanside' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: mean vertical displacement over all nodes is taken as reference',True)
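# One equation per north-side node (corners included): the coefficient on the
# node itself is -nEq*(1-1/nEq) = -(nEq-1) and every other node gets +1, i.e.
# sum over m != n of v_m minus (nEq-1)*v_n = 0, which constrains each vertical
# displacement to the mean of the others. Data lines are flushed every four
# terms, matching Abaqus's four-terms-per-line limit for *EQUATION.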
with open(modinpfullpath,'a') as inp:
nEq = len(northSideWithoutCornersNodeset)+2
inp.write('*EQUATION' + '\n')
for n in range(0,nEq):
inp.write(' ' + str(nEq) + '\n')
line = ''
for m in range(0,nEq):
if m==n:
coeff = -nEq*(1.0-1.0/nEq)
else:
coeff = 1.0
if m==0:
nodeName = 'NORTHWEST-CORNER'
elif m==1:
nodeName = 'NORTHEAST-CORNER'
else:
nodeName = 'NORTHSIDE-N'+ str(m+1-2)
line += ' ' + nodeName + ', 2, ' + str(coeff) + ','
if m>0 and (m+1)%4==0:
line += '\n'
inp.write(line)
line = ''
if len(line)>0:
line += '\n'
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'antisymmetry' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: antisymmetry',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
for n,node in enumerate(northSidePosSide):
inp.write(' 3' + '\n')
inp.write(' NORTHSIDE-POSSIDE-N'+ str(n+1) +', 2, 1, NORTHSIDE-NEGSIDE-N'+ str(n+1) +', 2, 1, NORTHSIDE-CENTER, 2, -2' + '\n')
inp.write(' 2' + '\n')
inp.write(' NORTHSIDE-POSSIDE-N'+ str(n+1) +', 1, 1, NORTHSIDE-NEGSIDE-N'+ str(n+1) +', 1, 1' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'ulinearCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: applied linear horizontal displacement',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
for n,node in enumerate(northSideWithoutCornersNodeset):
inp.write(' 2' + '\n')
inp.write(' NORTHSIDE-N'+ str(n+1) +', 1, 1, NORTHEAST-CORNER, 1, ' + str(-nodes[node][0]/nodes[northeastIndex][0]) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'vkinCouplingmeancorners' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on RIGHT side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: ne and se horizontal displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' SOUTHEAST-CORNER, 1, 1, NORTHEAST-CORNER, 1, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' RIGHTSIDE-WITHOUT-CORNERS, 1, 1, SOUTHEAST-CORNER, 1, -0.5, NORTHEAST-CORNER, 1, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'vkinCouplingmeancorners' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on LEFT side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: nw and sw horizontal displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' SOUTHWEST-CORNER, 1, 1, NORTHWEST-CORNER, 1, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' LEFTSIDE-WITHOUT-CORNERS, 1, 1, SOUTHWEST-CORNER, 1, -0.5, NORTHWEST-CORNER, 1, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write surface definitions ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*SURFACE, NAME=FiberSurface, TYPE=ELEMENT' + '\n')
inp.write(' FIBER-CRACKFACE-ELEMENTS' + '\n')
inp.write('*SURFACE, NAME=MatrixSurface, TYPE=ELEMENT' + '\n')
inp.write(' MATRIX-CRACKFACE-ELEMENTS' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write end assembly ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*End Assembly' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write contact interaction ...',True)
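# Small-sliding contact between the matrix and fiber crack faces; the ' 1.0'
# data line after *SURFACE INTERACTION is the out-of-plane surface thickness.
# The interaction optionally carries a Coulomb friction model, either capped
# by TAUMAX or defined by the static friction coefficient alone.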
with open(modinpfullpath,'a') as inp:
inp.write('*CONTACT PAIR, INTERACTION=CrackFacesContact, SMALL SLIDING' + '\n')
inp.write(' MatrixSurface, FiberSurface' + '\n')
inp.write('*SURFACE INTERACTION, NAME=CrackFacesContact' + '\n')
inp.write(' 1.0' + '\n')
if 'static' in parameters['surface']['friction']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Static friction (Coulomb model) is present between crack faces',True)
with open(modinpfullpath,'a') as inp:
if 'maxtau' in parameters['surface']['friction']['type']:
inp.write('*FRICTION, TAUMAX=' + str(parameters['surface']['friction']['maxtau']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 5*logindent + 'Maximum tangential stress = ' + str(parameters['surface']['friction']['maxtau']) + ' [MPa]',True)
else:
inp.write('*FRICTION' + '\n')
inp.write(' ' + str(parameters['surface']['friction']['static']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 5*logindent + 'Static friction coefficient = ' + str(parameters['surface']['friction']['static']) + ' [-]',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
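# Two splice paths follow. With both a thermal and a mechanical step, the
# original file is copied around Temp-Step and Load-Step, re-applying the
# dummy-node ENCASTRE boundary conditions and the reduced contour integral in
# each step; with a single step, the copy is spliced at the boundary-condition
# and contour-integral markers found earlier (startBC, startCI/endCI).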
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endAssembly+1:startTempStep+2]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startTempStep+2:startTempCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write J-integral over reduced contours ...',True)
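# Re-emit the *CONTOUR INTEGRAL keyword with the crack name and contour count
# recovered from the original line, but with an explicit crack-tip node set
# and crack-extension direction q = (-sin(deltatheta), cos(deltatheta)),
# i.e. the tangent to the circular fiber/matrix interface at the crack tip.
# Note that the set name CRACKTIP-CONTOURINTEGRAL is used here even on the
# two-tip (UP/LOW) path, where the sets written above are named
# CRACKTIPUP-/CRACKTIPLOW-CONTOURINTEGRAL.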
crackName = inpfilelines[startTempCI].replace('\n','').split(',')[1].split('=')[1]
nContours = inpfilelines[startTempCI].replace('\n','').split(',')[2].split('=')[1]
qx = -np.sin(parameters['geometry']['deltatheta']*np.pi/180.0)
qy = np.cos(parameters['geometry']['deltatheta']*np.pi/180.0)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTOUR INTEGRAL, CRACK NAME=' + crackName + ', CONTOURS=' + nContours + '\n')
inp.write(' ' + 'CRACKTIP-CONTOURINTEGRAL, ' + str(qx) + ', ' + str(qy) + ', 0.0' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startTempCI+2:startLoadStep+2]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write loads ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** LOADS' + '\n')
inp.write('**' + '\n')
for load in parameters['loads'].values():
if 'appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']:
inp.write('*DSLOAD, OP=MOD' + '\n')
inp.write(' ' + load['set'] + ', P, ' + str(load['value']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startLoadStep+2:startLoadCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write J-integral over reduced contours ...',True)
crackName = inpfilelines[startLoadCI].replace('\n','').split(',')[1].split('=')[1]
nContours = inpfilelines[startLoadCI].replace('\n','').split(',')[2].split('=')[1]
qx = -np.sin(parameters['geometry']['deltatheta']*np.pi/180.0)
qy = np.cos(parameters['geometry']['deltatheta']*np.pi/180.0)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTOUR INTEGRAL, CRACK NAME=' + crackName + ', CONTOURS=' + nContours + '\n')
inp.write(' ' + 'CRACKTIP-CONTOURINTEGRAL, ' + str(qx) + ', ' + str(qy) + ', 0.0' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startLoadCI+2:]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endAssembly+1:startBC]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write loads ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** LOADS' + '\n')
inp.write('**' + '\n')
for load in parameters['loads'].values():
if 'appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']:
inp.write('*DSLOAD, OP=MOD' + '\n')
inp.write(' ' + load['set'] + ', P, ' + str(load['value']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startBC+1:startCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write J-integral over reduced contours ...',True)
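# Same reduced-contour rewrite as in the multi-step branch above, applied to
# the single *CONTOUR INTEGRAL section located earlier via startCI/endCI.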
crackName = inpfilelines[startCI].replace('\n','').split(',')[1].split('=')[1]
nContours = inpfilelines[startCI].replace('\n','').split(',')[2].split('=')[1]
qx = -np.sin(parameters['geometry']['deltatheta']*np.pi/180.0)
qy = np.cos(parameters['geometry']['deltatheta']*np.pi/180.0)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTOUR INTEGRAL, CRACK NAME=' + crackName + ', CONTOURS=' + nContours + '\n')
inp.write(' ' + 'CRACKTIP-CONTOURINTEGRAL, ' + str(qx) + ', ' + str(qy) + ', 0.0' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endCI+1:]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
return modinpname
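# ---------------------------------------------------------------------------
# modifyRVEinputfilePerturbationStep
# Variant of the input-file rewriting above, specialized for a perturbation
# step: it re-reads the mesh (nodes, CPE4/CPE8 quads) and the named node and
# element sets from the original .inp before writing
# Job-Perturbation-<modelname>.inp.
# ---------------------------------------------------------------------------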
def modifyRVEinputfilePerturbationStep(parameters,mdbData,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: modifyRVEinputfilePerturbationStep(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
skipLineToLogFile(logfilepath,'a',True)
theta = parameters['geometry']['theta']
# odb name and path
#odbname = mdbData['jobname'] + '.odb'
#odbfullpath = join(parameters['wd'],odbname)
# input file name and path
inpname = mdbData['jobname'] + '.inp'
inpfullpath = join(parameters['input']['wd'],inpname)
# modified input file name
modinpname = 'Job-Perturbation-' + parameters['input']['modelname'] + '.inp'
modinpfullpath = join(parameters['input']['wd'],modinpname)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Working directory: ' + parameters['input']['wd'],True)
#writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ODB database name: ' + odbname,True)
#writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'ODB database full path: ' + join(parameters['wd'],odbname),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Input file name: ' + inpname,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Input file full path: ' + join(parameters['input']['wd'],inpname),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Modified input file name: ' + modinpname,True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Modified input file full path: ' + join(parameters['input']['wd'],modinpname),True)
createABQinpfile(modinpname)
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading content of original input file ...',True)
with open(inpfullpath,'r') as inp:
inpfilelines = inp.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
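# Parsing idiom used throughout the rest of this function: a 'store' flag is
# armed when the relevant keyword line is matched, data lines are accumulated
# while it is set, and the look-ahead "'*' in inpfilelines[l+1]" stores the
# current (last) data line before breaking out of the scan.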
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading nodes and saving to dictionary ...',True)
nodes = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
nodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
store = False
break
elif store == True:
nodes[int(line.replace('\n','').split(',')[0])] = [float(line.replace('\n','').split(',')[1]),float(line.replace('\n','').split(',')[2])]
elif ('*Node' in line or '*NODE' in line) and len(inpfilelines[l+1].replace('\n','').split(','))==3:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading quadrilateral elements and saving to dictionary ...',True)
quads = {}
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
quadIndex = int(line.replace('\n','').split(',')[0])
quads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
quads[quadIndex].append(int(node))
store = False
break
elif store == True:
quadIndex = int(line.replace('\n','').split(',')[0])
quads[quadIndex] = []
for node in line.replace('\n','').split(',')[1:]:
quads[quadIndex].append(int(node))
elif ('*Element, type=CPE8' in line or '*ELEMENT, type=CPE8' in line or '*Element, type=CPE4' in line or '*ELEMENT, type=CPE4' in line) and (len(inpfilelines[l+1].replace('\n','').split(','))==5 or len(inpfilelines[l+1].replace('\n','').split(','))==9):
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack tip sets and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIPUP','cracktipup']:
cracktipupIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIPLOW','cracktiplow']:
cracktiplowIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack tip set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACKTIP','cracktip']:
cracktipIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack faces node set and saving to list ...',True)
crackfacesNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACK','crack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north side node set and saving to list ...',True)
northSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
northSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
northSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['UPPERSIDE','upperside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading right side node set and saving to list ...',True)
rightSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
rightSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
rightSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['RIGHTSIDE','rightside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading left side node set and saving to list ...',True)
leftSideNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
leftSideNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
leftSideNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['LEFTSIDE','leftside']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
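# A possible refactor (sketch only, not used below): the repeated set-reading
# loops could be factored into a single helper, e.g.
#
#   def readLabelSet(inpfilelines, keywords, setnames):
#       labels = []
#       store = False
#       for l, line in enumerate(inpfilelines):
#           data = line.replace('\n', '').split(',')
#           if store:
#               labels += [int(v) for v in data if v.strip() != '']
#               if '*' in inpfilelines[l + 1]:
#                   break
#           elif any(k in line for k in keywords) and \
#                data[1].split('=')[1] in setnames:
#               store = True
#       return labels
#
# The bodies below are kept verbatim to avoid altering tested behavior.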
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north-east corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['NE-CORNER','ne-corner']:
northeastIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading north-west corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['NW-CORNER','nw-corner']:
northwestIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading south-east corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['SE-CORNER','se-corner']:
southeastIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading south-west corner node set and saving to variable ...',True)
for l,line in enumerate(inpfilelines):
if ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['SW-CORNER','sw-corner']:
southwestIndex = int(inpfilelines[l+1].replace('\n','').split(',')[0])
break
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading crack faces element set and saving to list ...',True)
crackfacesElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
crackfacesElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['CRACK','crack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading fiber node set and saving to list ...',True)
fiberNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER','fiber']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading matrix node set and saving to list ...',True)
matrixNodeset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixNodeset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixNodeset.append(int(index))
elif ('*Nset' in line or '*NSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX','matrix']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading fiber element set and saving to list ...',True)
fiberElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER','fiber']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading matrix element set and saving to list ...',True)
matrixElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX','matrix']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
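# The remaining element sets may be written in Abaqus's compact GENERATE form
# ('*Elset, elset=NAME, generate' followed by a 'first, last, increment' data
# line); those are expanded with range(startEl,endEl+deltaEl,deltaEl). Plain
# comma-separated listings are accumulated with the same store-flag idiom as
# above.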
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK-CTUP and saving to list ...',True)
fiberExtannUppcrackCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTUP','fiber-extannulus-uppercrack-ctup'] and len(line.replace('\n','').split(','))>2 and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTUP','fiber-extannulus-uppercrack-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK-CTLOW and saving to list ...',True)
fiberExtannUppcrackCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTLOW','fiber-extannulus-uppercrack-ctlow'] and len(line.replace('\n','').split(','))>2 and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK-CTLOW','fiber-extannulus-uppercrack-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-UPPERCRACK and saving to list ...',True)
fiberExtannUppcrackElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannUppcrackElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK','fiber-extannulus-uppercrack'] and len(line.replace('\n','').split(','))>2 and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannUppcrackElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-UPPERCRACK','fiber-extannulus-uppercrack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP and saving to list ...',True)
fiberExtannFirstbounCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP','fiber-extannulus-firstbounded-ctup'] and len(line.replace('\n','').split(','))>2 and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTUP','fiber-extannulus-firstbounded-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW and saving to list ...',True)
fiberExtannFirstbounCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW','fiber-extannulus-firstbounded-ctlow'] and len(line.replace('\n','').split(','))>2 and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED-CTLOW','fiber-extannulus-firstbounded-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set FIBER-EXTANNULUS-FIRSTBOUNDED and saving to list ...',True)
fiberExtannFirstbounElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
fiberExtannFirstbounElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED','fiber-extannulus-firstbounded'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
fiberExtannFirstbounElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['FIBER-EXTANNULUS-FIRSTBOUNDED','fiber-extannulus-firstbounded']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK-CTUP and saving to list ...',True)
matrixIntannUppcrackCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTUP','matrix-intannulus-uppercrack-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTUP','matrix-intannulus-uppercrack-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK-CTLOW and saving to list ...',True)
matrixIntannUppcrackCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTLOW','matrix-intannulus-uppercrack-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK-CTLOW','matrix-intannulus-uppercrack-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-UPPERCRACK and saving to list ...',True)
matrixIntannUppcrackElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannUppcrackElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK','matrix-intannulus-uppercrack'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannUppcrackElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-UPPERCRACK','matrix-intannulus-uppercrack']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP and saving to list ...',True)
matrixIntannFirstbounCtUpElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtUpElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtUpElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP','matrix-intannulus-firstbounded-ctup'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounCtUpElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTUP','matrix-intannulus-firstbounded-ctup']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW and saving to list ...',True)
matrixIntannFirstbounCtLowElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtLowElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounCtLowElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW','matrix-intannulus-firstbounded-ctlow'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounCtLowElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED-CTLOW','matrix-intannulus-firstbounded-ctlow']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Reading element set MATRIX-INTANNULUS-FIRSTBOUNDED and saving to list ...',True)
matrixIntannFirstbounElementset = []
store = False
for l,line in enumerate(inpfilelines):
if store == True and '*' in inpfilelines[l+1]:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounElementset.append(int(index))
store = False
break
elif store == True:
for index in line.replace('\n','').split(','):
if index!='' and index!=' ':
matrixIntannFirstbounElementset.append(int(index))
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED','matrix-intannulus-firstbounded'] and line.replace('\n','').split(',')[2].replace(' ','') in ['GENERATE','generate']:
store = False
startEl = int(inpfilelines[l+1].replace('\n','').split(',')[0])
endEl = int(inpfilelines[l+1].replace('\n','').split(',')[1])
deltaEl = int(inpfilelines[l+1].replace('\n','').split(',')[2])
for index in range(startEl,endEl+deltaEl,deltaEl):
matrixIntannFirstbounElementset.append(index)
break
elif ('*Elset' in line or '*ELSET' in line) and line.replace('\n','').split(',')[1].split('=')[1] in ['MATRIX-INTANNULUS-FIRSTBOUNDED','matrix-intannulus-firstbounded']:
store = True
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
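# At this point all crack-neighborhood element sets (fiber external annulus
# and matrix internal annulus, split into CTUP/CTLOW halves when two crack
# tips exist) are in memory; the blocks below build the boundary node sets.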
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-WITHOUT-CORNERS ...',True)
northSideWithoutCornersNodeset = []
for node in northSideNodeset:
if node not in [northeastIndex,northwestIndex]:
northSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
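# NORTH-SIDE-CENTER is found by the exact comparison nodes[node][0]==0.0,
# which assumes the mesh generator places a node exactly on the x=0 line; a
# tolerance test (e.g. np.abs(nodes[node][0])<tol) would be more robust, but
# exact placement appears to be guaranteed upstream.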
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-CENTER ...',True)
for node in northSideNodeset:
if nodes[node][0]==0.0:
northSideCenter = node
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Node ' + str(northSideCenter) + ' is at the center of the NORTH boundary',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-POSSIDE ...',True)
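# Nodes with positive x are sorted by their x coordinate via np.argsort; the
# negative-x list below is sorted and then reversed, so both lists run from
# the centerline outwards toward the corners.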
northSidePosSide = []
for node in northSideNodeset:
if nodes[node][0]>0.0:
northSidePosSide.append(node)
northSidePosSideCoords = [nodes[i][0] for i in northSidePosSide]
northSidePosSide = np.array(northSidePosSide)[np.argsort(northSidePosSideCoords)].tolist()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Set northSidePosSide contains ' + str(len(northSidePosSide)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set NORTH-SIDE-NEGSIDE ...',True)
northSideNegSide = []
for node in northSideNodeset:
if nodes[node][0]<0.0:
northSideNegSide.append(node)
northSideNegSideCoords = [nodes[i][0] for i in northSideNegSide]
northSideNegSide = np.array(northSideNegSide)[np.argsort(northSideNegSideCoords)].tolist()
northSideNegSide = northSideNegSide[::-1]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Set northSideNegSide contains ' + str(len(northSideNegSide)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set RIGHT-SIDE-WITHOUT-CORNERS ...',True)
rightSideWithoutCornersNodeset = []
for node in rightSideNodeset:
if node not in [northeastIndex,southeastIndex]:
rightSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Create node set LEFT-SIDE-WITHOUT-CORNERS ...',True)
leftSideWithoutCornersNodeset = []
for node in leftSideNodeset:
if node not in [southwestIndex,northwestIndex]:
leftSideWithoutCornersNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Insert new coincident node(s) at the crack tip and create dummy node(s) ...',True)
numNodes = mdbData['numNodes']
numEls = mdbData['numEls']
numQuads = mdbData['numQuads']
numTris = mdbData['numTris']
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of nodes = ' + str(numNodes),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of elements = ' + str(numEls),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of quadrilateral elements = ' + str(numQuads),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Total number of triangular elements = ' + str(numTris),True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Index of current crack tip nodes: ' + str(cracktipUPIndex) + ', ' + str(cracktipLOWIndex),True)
matrixCracktipUPIndex = numNodes + 1000
cracktipUPDummyIndex = numNodes + 1000 + 1
matrixCracktipLOWIndex = numNodes + 1000 + 50
cracktipLOWDummyIndex = numNodes + 1000 + 50 + 1
nodes[matrixCracktipUPIndex] = [nodes[cracktipUPIndex][0],nodes[cracktipUPIndex][1]]
nodes[cracktipUPDummyIndex] = [-5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
nodes[matrixCracktipLOWIndex] = [nodes[cracktipLOWIndex][0],nodes[cracktipLOWIndex][1]]
nodes[cracktipLOWDummyIndex] = [-5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix upper crack tip node with index ' + str(matrixCracktipUPIndex) + ' and coordinates (' + str(nodes[cracktipUPIndex][0]) + ', '+ str(nodes[cracktipUPIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix upper crack tip dummy node with index ' + str(cracktipUPDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix lower crack tip node with index ' + str(matrixCracktipLOWIndex) + ' and coordinates (' + str(nodes[cracktipLOWIndex][0]) + ', '+ str(nodes[cracktipLOWIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix lower crack tip dummy node with index ' + str(cracktipLOWDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-20*parameters['geometry']['Rf']) + ')',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Index of current crack tip node: ' + str(cracktipIndex),True)
matrixCracktipIndex = numNodes + 1000
cracktipDummyIndex = numNodes + 1000 + 1
nodes[matrixCracktipIndex] = [nodes[cracktipIndex][0],nodes[cracktipIndex][1]]
nodes[cracktipDummyIndex] = [-5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix crack tip node with index ' + str(matrixCracktipIndex) + ' and coordinates (' + str(nodes[cracktipIndex][0]) + ', '+ str(nodes[cracktipIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix dummy node with index ' + str(cracktipDummyIndex)+ ' and coordinates (' + str(-5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
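# Node splitting for the crack tip: a coincident duplicate of each tip node
# is created for the matrix side (same coordinates, new label), plus a dummy
# node placed far outside the model. The +1000 and +50 label offsets simply
# leave gaps in the numbering and assume no existing label exceeds
# numNodes+1000; the dummy nodes are presumably tied to the real tip nodes
# later (e.g. through *EQUATION constraints) so that their reactions report
# the crack tip forces for VCCT.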
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the upper crack tip',True)
fiberElswithCracktipUP = []
matrixElswithCracktipUP = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
fiberElswithCracktipUP.append(element)
firstdebondedFiberElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounCtUpElementset)-1,-1,-1):
element = fiberExtannFirstbounCtUpElementset[e]
if element in quads.keys():
if cracktipUPIndex in quads[element]:
fiberElswithCracktipUP.append(element)
firstboundedFiberElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
matrixElswithCracktipUP.append(element)
firstdebondedMatrixElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounCtUpElementset:
if element in quads.keys():
if cracktipUPIndex in quads[element]:
matrixElswithCracktipUP.append(element)
firstboundedMatrixElUP = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the lower crack tip',True)
fiberElswithCracktipLOW = []
matrixElswithCracktipLOW = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
fiberElswithCracktipLOW.append(element)
firstdebondedFiberElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounCtLowElementset)-1,-1,-1):
element = fiberExtannFirstbounCtLowElementset[e]
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
fiberElswithCracktipLOW.append(element)
firstboundedFiberElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
matrixElswithCracktipLOW.append(element)
firstdebondedMatrixElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounCtLowElementset:
if element in quads.keys():
if cracktipLOWIndex in quads[element]:
matrixElswithCracktipLOW.append(element)
firstboundedMatrixElLOW = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Searching for elements connected to the crack tip',True)
fiberElswithCracktip = []
matrixElswithCracktip = []
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On fiber',True)
for element in fiberExtannUppcrackElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
fiberElswithCracktip.append(element)
firstdebondedFiberEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for e in range(len(fiberExtannFirstbounElementset)-1,-1,-1):
element = fiberExtannFirstbounElementset[e]
if element in quads.keys():
if cracktipIndex in quads[element]:
fiberElswithCracktip.append(element)
firstboundedFiberEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' On matrix',True)
for element in matrixIntannUppcrackElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
matrixElswithCracktip.append(element)
firstdebondedMatrixEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Debonded element: ' + str(element),True)
break
for element in matrixIntannFirstbounElementset:
if element in quads.keys():
if cracktipIndex in quads[element]:
matrixElswithCracktip.append(element)
firstboundedMatrixEl = element
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - Bonded element: ' + str(element),True)
break
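# The loops above pick, for each side of the interface, the first debonded
# and the first bonded quadrilateral containing the tip node (the bonded
# fiber set is walked in reverse so that its tip-adjacent element is found
# first). These are the only elements whose connectivity must be retouched
# when the tip node is duplicated.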
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Second order elements are used',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
matrixFirstBehindCracktipUPIndex = numNodes + 1000 + 2
firstBehindCracktipUPDummyIndex = numNodes + 1000 + 3
matrixFirstBehindCracktipLOWIndex = numNodes + 1000 + 50 + 2
firstBehindCracktipLOWDummyIndex = numNodes + 1000 + 50 + 3
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind upper crack tip node with index ' + str(matrixFirstBehindCracktipUPIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind upper crack tip dummy node with index ' + str(firstBehindCracktipUPDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind lower crack tip node with index ' + str(matrixFirstBehindCracktipLOWIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind lower crack tip dummy node with index ' + str(firstBehindCracktipLOWDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bounded upper crack tip elements on fiber and matrix',True)
commonNodesUP = []
fiberElnodesUP = quads[firstboundedFiberElUP]
matrixElnodesUP = quads[firstboundedMatrixElUP]
for node in fiberElnodesUP:
if node in matrixElnodesUP:
commonNodesUP.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodesUP)==3:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bounded nodes from upper cracktip',True)
distancesUP = []
for node in commonNodesUP:
if node != cracktipUPIndex:
distancesUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
else:
distancesUP.append(0.0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipUPIndex = commonNodesUP[np.argsort(distancesUP)[-2]] # argsort sorts from smallest to largest
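# NOTE: for the 'inverseSquareRoot' singularity the branches below also use
# matrixSecondBehindCracktipUP/LOWIndex and secondBehindCracktipUP/LOWDummyIndex;
# those labels are not defined in this block and are assumed to be assigned
# elsewhere in the script, with offsets analogous to the first-behind nodes.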
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipUPIndex = commonNodesUP[np.argsort(distancesUP)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind upper crack tip node with index ' + str(matrixFirstBehindCracktipUPIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipUPIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipUPIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind upper crack tip dummy node with index ' + str(firstBehindCracktipUPDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipUPIndex] = [nodes[fiberFirstBehindCracktipUPIndex][0],nodes[fiberFirstBehindCracktipUPIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipUPIndex] = [nodes[fiberSecondBehindCracktipUPIndex][0],nodes[fiberSecondBehindCracktipUPIndex][1]]
nodes[firstBehindCracktipUPDummyIndex] = [5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipUPDummyIndex] = [5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bounded lower crack tip elements on fiber and matrix',True)
commonNodesLOW = []
fiberElnodesLOW = quads[firstboundedFiberElLOW]
matrixElnodesLOW = quads[firstboundedMatrixElLOW]
for node in fiberElnodesLOW:
if node in matrixElnodesLOW:
commonNodesLOW.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodesLOW)==3:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bounded nodes from lower cracktip',True)
distancesLOW = []
for node in commonNodesLOW:
if node != cracktipLOWIndex:
distancesLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
else:
distancesLOW.append(0.0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipLOWIndex = commonNodesLOW[np.argsort(distancesLOW)[-2]] # argsort sorts from smallest to largest
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipLOWIndex = commonNodesLOW[np.argsort(distancesLOW)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind lower crack tip node with index ' + str(matrixFirstBehindCracktipLOWIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipLOWIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipLOWIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind lower crack tip dummy node with index ' + str(firstBehindCracktipLOWDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-20*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipLOWIndex] = [nodes[fiberFirstBehindCracktipLOWIndex][0],nodes[fiberFirstBehindCracktipLOWIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipLOWIndex] = [nodes[fiberSecondBehindCracktipLOWIndex][0],nodes[fiberSecondBehindCracktipLOWIndex][1]]
nodes[firstBehindCracktipLOWDummyIndex] = [5*parameters['geometry']['Rf'],-20*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipLOWDummyIndex] = [5*parameters['geometry']['Rf'],-40*parameters['geometry']['Rf']]
else:
matrixFirstBehindCracktipIndex = numNodes + 1000 + 2
firstBehindCracktipDummyIndex = numNodes + 1000 + 3
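# As in the two-tip branch above, matrixSecondBehindCracktipIndex and
# secondBehindCracktipDummyIndex used in the 'inverseSquareRoot' case are
# assumed to be defined elsewhere in the script.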
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind crack tip node with index ' + str(matrixFirstBehindCracktipIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind crack tip dummy node with index ' + str(firstBehindCracktipDummyIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find common nodes of bounded crack tip elements on fiber and matrix',True)
commonNodes = []
fiberElnodes = quads[firstboundedFiberEl]
matrixElnodes = quads[firstboundedMatrixEl]
for node in fiberElnodes:
if node in matrixElnodes:
commonNodes.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + ' - node ' + str(node),True)
if len(commonNodes)==3:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of bounded nodes from cracktip',True)
distances = []
for node in commonNodes:
if node != cracktipIndex:
distances.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
else:
distances.append(0.0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Reordering labels based on distances',True)
fiberFirstBehindCracktipIndex = commonNodes[np.argsort(distances)[-2]] # argsort sorts from smallest to largest
if 'inverseSquareRoot' in parameters['singularity']['type']:
fiberSecondBehindCracktipIndex = commonNodes[np.argsort(distances)[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating matrix first behind crack tip node with index ' + str(matrixFirstBehindCracktipIndex) + ' and coordinates (' + str(nodes[fiberFirstBehindCracktipIndex][0]) + ', '+ str(nodes[fiberFirstBehindCracktipIndex][1]) + ')',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Creating first behind crack tip dummy node with index ' + str(firstBehindCracktipDummyIndex)+ ' and coordinates (' + str(5*parameters['geometry']['Rf']) + ', '+ str(-10*parameters['geometry']['Rf']) + ')',True)
nodes[matrixFirstBehindCracktipIndex] = [nodes[fiberFirstBehindCracktipIndex][0],nodes[fiberFirstBehindCracktipIndex][1]]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[matrixSecondBehindCracktipIndex] = [nodes[fiberSecondBehindCracktipIndex][0],nodes[fiberSecondBehindCracktipIndex][1]]
nodes[firstBehindCracktipDummyIndex] = [5*parameters['geometry']['Rf'],-10*parameters['geometry']['Rf']]
if 'inverseSquareRoot' in parameters['singularity']['type']:
nodes[secondBehindCracktipDummyIndex] = [5*parameters['geometry']['Rf'],-40*parameters['geometry']['Rf']]
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Identify nodes on crack faces for displacement measurements ...',True)
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the upper crack tip',True)
nodesAroundCracktipUP = quads[firstdebondedFiberElUP]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeasUP = []
for node in nodesAroundCracktipUP:
if node in crackfacesNodeset and node!=cracktipUPIndex:
nodesFiberDisplacementMeasUP.append(node)
if len(nodesFiberDisplacementMeasUP)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeasUP)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from upper cracktip',True)
distancesFiberDisplacementMeasUP = []
for node in nodesFiberDisplacementMeasUP:
distancesFiberDisplacementMeasUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the upper crack tip',True)
nodesAroundCracktipUP = quads[firstdebondedMatrixElUP]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeasUP = []
for node in nodesAroundCracktipUP:
if node in crackfacesNodeset and node!=cracktipUPIndex:
nodesMatrixDisplacementMeasUP.append(node)
if len(nodesMatrixDisplacementMeasUP)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeasUP)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from upper cracktip',True)
distancesMatrixDisplacementMeasUP = []
for node in nodesMatrixDisplacementMeasUP:
distancesMatrixDisplacementMeasUP.append(np.sqrt((nodes[node][0]-nodes[cracktipUPIndex][0])*(nodes[node][0]-nodes[cracktipUPIndex][0])+(nodes[node][1]-nodes[cracktipUPIndex][1])*(nodes[node][1]-nodes[cracktipUPIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndecesUP = np.argsort(distancesFiberDisplacementMeasUP)
sortedMatrixDistanceIndecesUP = np.argsort(distancesMatrixDisplacementMeasUP)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndecesUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndecesUP),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-1]]
firstBehindCracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-2]]
cracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-1]]
firstBehindCracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndexUP),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndexUP),True)
else:
cracktipFiberDispMeasIndexUP = nodesFiberDisplacementMeasUP[sortedFiberDistanceIndecesUP[-1]]
cracktipMatrixDispMeasIndexUP = nodesMatrixDisplacementMeasUP[sortedMatrixDistanceIndecesUP[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the lower crack tip',True)
nodesAroundCracktipLOW = quads[firstdebondedFiberElLOW]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeasLOW = []
for node in nodesAroundCracktipLOW:
if node in crackfacesNodeset and node!=cracktipLOWIndex:
nodesFiberDisplacementMeasLOW.append(node)
if len(nodesFiberDisplacementMeasLOW)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeasLOW)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from lower cracktip',True)
distancesFiberDisplacementMeasLOW = []
for node in nodesFiberDisplacementMeasLOW:
distancesFiberDisplacementMeasLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the lower crack tip',True)
nodesAroundCracktipLOW = quads[firstdebondedMatrixElLOW]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeasLOW = []
for node in nodesAroundCracktipLOW:
if node in crackfacesNodeset and node!=cracktipLOWIndex:
nodesMatrixDisplacementMeasLOW.append(node)
if len(nodesMatrixDisplacementMeasLOW)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeasLOW)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from lower cracktip',True)
distancesMatrixDisplacementMeasLOW = []
for node in nodesMatrixDisplacementMeasLOW:
distancesMatrixDisplacementMeasLOW.append(np.sqrt((nodes[node][0]-nodes[cracktipLOWIndex][0])*(nodes[node][0]-nodes[cracktipLOWIndex][0])+(nodes[node][1]-nodes[cracktipLOWIndex][1])*(nodes[node][1]-nodes[cracktipLOWIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndecesLOW = np.argsort(distancesFiberDisplacementMeasLOW)
sortedMatrixDistanceIndecesLOW = np.argsort(distancesMatrixDisplacementMeasLOW)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndecesLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndecesLOW),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-1]]
firstBehindCracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-2]]
cracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-1]]
firstBehindCracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndexLOW),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndexLOW),True)
else:
cracktipFiberDispMeasIndexLOW = nodesFiberDisplacementMeasLOW[sortedFiberDistanceIndecesLOW[-1]]
cracktipMatrixDispMeasIndexLOW = nodesMatrixDisplacementMeasLOW[sortedMatrixDistanceIndecesLOW[-1]]
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the fiber elements around the crack tip',True)
nodesAroundCracktip = quads[firstdebondedFiberEl]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesFiberDisplacementMeas = []
for node in nodesAroundCracktip:
if node in crackfacesNodeset and node!=cracktipIndex:
nodesFiberDisplacementMeas.append(node)
if len(nodesFiberDisplacementMeas)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesFiberDisplacementMeas)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesFiberDisplacementMeas = []
for node in nodesFiberDisplacementMeas:
distancesFiberDisplacementMeas.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Find nodes belonging to the matrix elements around the crack tip',True)
nodesAroundCracktip = quads[firstdebondedMatrixEl]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Of these, identify the ones belonging to the crack surface',True)
nodesMatrixDisplacementMeas = []
for node in nodesAroundCracktip:
if node in crackfacesNodeset and node!=cracktipIndex:
nodesMatrixDisplacementMeas.append(node)
if len(nodesMatrixDisplacementMeas)==2:
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Found ' + str(len(nodesMatrixDisplacementMeas)) + ' nodes',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute distances of debonded nodes from cracktip',True)
distancesMatrixDisplacementMeas = []
for node in nodesMatrixDisplacementMeas:
distancesMatrixDisplacementMeas.append(np.sqrt((nodes[node][0]-nodes[cracktipIndex][0])*(nodes[node][0]-nodes[cracktipIndex][0])+(nodes[node][1]-nodes[cracktipIndex][1])*(nodes[node][1]-nodes[cracktipIndex][1])))
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Sort lists with computed distances',True)
sortedFiberDistanceIndeces = np.argsort(distancesFiberDisplacementMeas)
sortedMatrixDistanceIndeces = np.argsort(distancesMatrixDisplacementMeas)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort fiber nodes ' + str(sortedFiberDistanceIndeces),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Indices to sort matrix nodes ' + str(sortedMatrixDistanceIndeces),True)
if 'second' in parameters['mesh']['elements']['order']:
cracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-1]]
firstBehindCracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-2]]
cracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-1]]
firstBehindCracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-2]]
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the matrix crack tip is measured on node ' + str(cracktipMatrixDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the matrix crack tip is measured on node ' + str(firstBehindCracktipMatrixDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the fiber crack tip is measured on node ' + str(cracktipFiberDispMeasIndex),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacement for the first bonded node behind the fiber crack tip is measured on node ' + str(firstBehindCracktipFiberDispMeasIndex),True)
else:
cracktipFiberDispMeasIndex = nodesFiberDisplacementMeas[sortedFiberDistanceIndeces[-1]]
cracktipMatrixDispMeasIndex = nodesMatrixDisplacementMeas[sortedMatrixDistanceIndeces[-1]]
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
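# The *-DISPMEAS node sets selected above sit on the crack faces just behind
# the tip (for second-order meshes both the corner and the midside node are
# captured); their relative displacements presumably provide the crack
# opening and sliding displacements used in the mode decomposition.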
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at upper crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == cracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixCracktipUPIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind upper crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == fiberFirstBehindCracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixFirstBehindCracktipUPIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixElUP]):
if node == fiberSecondBehindCracktipUPIndex:
quads[firstboundedMatrixElUP][n] = matrixSecondBehindCracktipUPIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new upper crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixElUP]):
if node == cracktipUPIndex:
quads[firstdebondedMatrixElUP][n] = matrixCracktipUPIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at lower crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == cracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixCracktipLOWIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind lower crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == fiberFirstBehindCracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixFirstBehindCracktipLOWIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixElLOW]):
if node == fiberSecondBehindCracktipLOWIndex:
quads[firstboundedMatrixElLOW][n] = matrixSecondBehindCracktipLOWIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new lower crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixElLOW]):
if node == cracktipLOWIndex:
quads[firstdebondedMatrixElLOW][n] = matrixCracktipLOWIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Assign new crack tip nodes to matrix elements at crack tip ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == cracktipIndex:
quads[firstboundedMatrixEl][n] = matrixCracktipIndex
if 'second' in parameters['mesh']['elements']['order']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new first behind crack tip index to the bonded element on the matrix',True)
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == fiberFirstBehindCracktipIndex:
quads[firstboundedMatrixEl][n] = matrixFirstBehindCracktipIndex
if 'inverseSquareRoot' in parameters['singularity']['type']:
for n,node in enumerate(quads[firstboundedMatrixEl]):
if node == fiberSecondBehindCracktipIndex:
quads[firstboundedMatrixEl][n] = matrixSecondBehindCracktipIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Assign new crack tip index to the debonded element on the matrix',True)
for n,node in enumerate(quads[firstdebondedMatrixEl]):
if node == cracktipIndex:
quads[firstdebondedMatrixEl][n] = matrixCracktipIndex
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
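# Relabeling the tip node inside the bonded and debonded matrix elements is
# what actually splits the mesh: fiber and matrix now reference coincident
# but distinct nodes at the crack tip, which can subsequently be tied,
# released, or interrogated for crack tip forces.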
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Find set of debonded elements on fiber and on matrix ...',True)
crackfaceFiberElementset = []
crackfaceMatrixElementset = []
for element in crackfacesElementset:
if element in fiberElementset:
crackfaceFiberElementset.append(element)
else:
crackfaceMatrixElementset.append(element)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Find set of debonded nodes on fiber and on matrix ...',True)
crackfaceFiberNodeset = []
crackfaceMatrixNodeset = []
for node in crackfacesNodeset:
if node in fiberNodeset:
crackfaceFiberNodeset.append(node)
else:
crackfaceMatrixNodeset.append(node)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
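# The rewrite below splices the modified mesh into a copy of the original
# input deck: the node and quadrilateral element sections are located by
# their keyword lines and replaced with the updated tables, and the new set
# definitions are inserted just before *End Assembly.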
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Writing new input file ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify node section ...',True)
started = False
for l,line in enumerate(inpfilelines):
if started and '*' in line:
nodeSecStop = l-1
break
elif ('*Node' in line or '*NODE' in line) and len(inpfilelines[l+1].replace('\n','').split(',')) == 3:
nodeSecStart = l
started = True
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Node section begins at line ' + str(nodeSecStart) + ' and ends at line ' + str(nodeSecStop),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify quadrilateral element section ...',True)
started = False
for l,line in enumerate(inpfilelines):
if started and '*' in line:
elementSecStop = l-1
break
elif ('*Element, type=CPE8' in line or '*ELEMENT, TYPE=CPE8' in line or '*Element, type=CPE4' in line or '*ELEMENT, TYPE=CPE4' in line) and (len(inpfilelines[l+1].replace('\n','').split(','))==5 or len(inpfilelines[l+1].replace('\n','').split(','))==9):
elementSecStart = l
started = True
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Element section begins at line ' + str(elementSecStart) + ' and ends at line ' + str(elementSecStop),True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify end of assembly section ...',True)
for l,line in enumerate(inpfilelines):
if '*End Assembly' in line or '*END ASSEMBLY' in line:
endAssembly = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of thermal step section ...',True)
for l,line in enumerate(inpfilelines):
if '*Step, name=Temp-Step' in line or '*STEP, NAME=TEMP-STEP' in line:
startTempStep = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of mechanical step section ...',True)
for l,line in enumerate(inpfilelines):
if '*Step, name=Load-Step' in line or '*STEP, NAME=LOAD-STEP' in line:
startLoadStep = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of thermal contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if ('*CONTOUR INTEGRAL' in line or '*Contour Integral' in line) and l>startTempStep and l<startLoadStep:
startTempCI = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of mechanical contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if ('*CONTOUR INTEGRAL' in line or '*Contour Integral' in line) and l>startLoadStep:
startLoadCI = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of boundary conditions section ...',True)
for l,line in enumerate(inpfilelines):
if '** BOUNDARY CONDITIONS' in line or '** Boundary Conditions' in line:
startBC = l
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Identify start of contour integral section ...',True)
for l,line in enumerate(inpfilelines):
if '*CONTOUR INTEGRAL' in line or '*Contour Integral' in line:
startCI = l
endCI = l+1
break
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[:nodeSecStart]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write nodes ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NODE' + '\n')
for node in nodes.keys():
line = str(node)
for coord in nodes[node]:
line += ', ' + str(coord)
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[nodeSecStop+1:elementSecStart]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write quadrilateral elements ...',True)
with open(modinpfullpath,'a') as inp:
inp.write(inpfilelines[elementSecStart])
for quad in quads.keys():
line = str(quad)
for node in quads[quad]:
line += ', ' + str(node)
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[elementSecStop+1:endAssembly]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write crack faces node and element sets ...',True)
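# Abaqus accepts at most 16 labels per data line; the n%8 chunking below
# emits 9 labels on the first written line (indices 0-8) and 8 on each
# subsequent one, which stays within that limit.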
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKFACE-NODES, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(crackfaceFiberNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKFACE-NODES, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(crackfaceMatrixNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*ELSET, ELSET=FIBER-CRACKFACE-ELEMENTS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,element in enumerate(crackfaceFiberElementset):
if n>0 and n%8==0:
line += ' ' + str(element)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(element) + ','
if len(line)>0:
inp.write(line + '\n')
inp.write('*ELSET, ELSET=MATRIX-CRACKFACE-ELEMENTS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,element in enumerate(crackfaceMatrixElementset):
if n>0 and n%8==0:
line += ' ' + str(element)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(element) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write VCCT and J-integral node sets ...',True)
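# These single-node NSETs expose the duplicated tip nodes, the dummy nodes
# and the displacement-measurement nodes by name, presumably so that the
# contour integral definitions and history output requests later in the deck
# can refer to them directly.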
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKTIPUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPUP-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPIndex) + ', ' + str(matrixCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPUP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPUP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPLOW-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWIndex) + ', ' + str(matrixCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIPLOW-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndexLOW) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIPLOW-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndexLOW) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEASUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEASUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndexUP) + '\n')
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEASLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndexLOW) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEASLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndexLOW) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDEDUP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipUPIndex) + '\n')
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDEDLOW, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipLOWIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIPLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipLOWDummyIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIRSTBOUNDEDUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=FIRSTBOUNDEDLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipLOWDummyIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=SECONDBOUNDEDUP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipUPDummyIndex) + '\n')
inp.write('*NSET, NSET=SECONDBOUNDEDLOW-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipLOWDummyIndex) + '\n')
else:
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=FIBER-CRACKTIP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIP, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixCracktipIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIP-CONTOURINTEGRAL, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipIndex) + ', ' + str(matrixCracktipIndex) + '\n')
inp.write('*NSET, NSET=FIBER-CRACKTIP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipFiberDispMeasIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-CRACKTIP-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipMatrixDispMeasIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIBER-NODE-FIRSTBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberFirstBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-FIRSTBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixFirstBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=FIBER-FIRSTBOUNDED-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipFiberDispMeasIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-FIRSTBOUNDED-DISPMEAS, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipMatrixDispMeasIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=FIBER-NODE-SECONDBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(fiberSecondBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=MATRIX-NODE-SECONDBOUNDED, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(matrixSecondBehindCracktipIndex) + '\n')
inp.write('*NSET, NSET=CRACKTIP-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(cracktipDummyIndex) + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write('*NSET, NSET=FIRSTBOUNDED-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(firstBehindCracktipDummyIndex) + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write('*NSET, NSET=SECONDBOUNDED-DUMMY-NODE, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(secondBehindCracktipDummyIndex) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write right side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=RIGHTSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(rightSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write left side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=LEFTSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(leftSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write north side node sets ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=SOUTHWEST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(southwestIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=SOUTHEAST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(southeastIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-CORNERS, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSideWithoutCornersNodeset):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-NECORNER, INSTANCE=RVE-assembly' + '\n')
line = ' ' + str(northwestIndex) + ','
for n,node in enumerate(northSideWithoutCornersNodeset):
if (n+1)%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=UPPERSIDE-WITHOUT-NWCORNER, INSTANCE=RVE-assembly' + '\n')
line = ' ' + str(northeastIndex) + ','
for n,node in enumerate(northSideWithoutCornersNodeset):
if (n+1)%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHWEST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northwestIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHEAST-CORNER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northeastIndex) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-CENTER, INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(northSideCenter) + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-POSSIDE, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSidePosSide):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
with open(modinpfullpath,'a') as inp:
inp.write('*NSET, NSET=NORTHSIDE-NEGSIDE, INSTANCE=RVE-assembly' + '\n')
line = ''
for n,node in enumerate(northSideNegSide):
if n>0 and n%8==0:
line += ' ' + str(node)
inp.write(line + '\n')
line = ''
else:
line += ' ' + str(node) + ','
if len(line)>0:
inp.write(line + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'ulinearCoupling' in parameters['BC']['northSide']['type'] or 'vkinCouplingmeanside' in parameters['BC']['northSide']['type']:
with open(modinpfullpath,'a') as inp:
for n,node in enumerate(northSideWithoutCornersNodeset):
inp.write('*NSET, NSET=NORTHSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
if 'antisymmetry' in parameters['BC']['northSide']['type']:
with open(modinpfullpath,'a') as inp:
for n,node in enumerate(northSidePosSide):
inp.write('*NSET, NSET=NORTHSIDE-POSSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
for n,node in enumerate(northSideNegSide):
inp.write('*NSET, NSET=NORTHSIDE-NEGSIDE-N'+ str(n+1) +', INSTANCE=RVE-assembly' + '\n')
inp.write(' ' + str(node) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write equation definitions ...',True)
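# Each *EQUATION below ties a fiber node to its matrix counterpart through a
# dummy node: u_fiber - u_matrix - u_dummy = 0 for DOFs 1 and 2. With the
# dummy nodes encastred (see the *BOUNDARY cards written later), the node
# pairs behave as bonded and the reaction force at each dummy node equals the
# crack-tip tying force needed by VCCT.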
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPUP,1,1,MATRIX-CRACKTIPUP,1,-1,CRACKTIPUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPLOW,1,1,MATRIX-CRACKTIPLOW,1,-1,CRACKTIPLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPUP,2,1,MATRIX-CRACKTIPUP,2,-1,CRACKTIPUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIPLOW,2,1,MATRIX-CRACKTIPLOW,2,-1,CRACKTIPLOW-DUMMY-NODE,2,-1' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDUP,1,1,MATRIX-NODE-FIRSTBOUNDEDUP,1,-1,FIRSTBOUNDEDUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDLOW,1,1,MATRIX-NODE-FIRSTBOUNDEDLOW,1,-1,FIRSTBOUNDEDLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDUP,2,1,MATRIX-NODE-FIRSTBOUNDEDUP,2,-1,FIRSTBOUNDEDUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDEDLOW,2,1,MATRIX-NODE-FIRSTBOUNDEDLOW,2,-1,FIRSTBOUNDEDLOW-DUMMY-NODE,2,-1' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDUP,1,1,MATRIX-NODE-SECONDBOUNDEDUP,1,-1,SECONDBOUNDEDUP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDLOW,1,1,MATRIX-NODE-SECONDBOUNDEDLOW,1,-1,SECONDBOUNDEDLOW-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDUP,2,1,MATRIX-NODE-SECONDBOUNDEDUP,2,-1,SECONDBOUNDEDUP-DUMMY-NODE,2,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDEDLOW,2,1,MATRIX-NODE-SECONDBOUNDEDLOW,2,-1,SECONDBOUNDEDLOW-DUMMY-NODE,2,-1' + '\n')
else:
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIP,1,1,MATRIX-CRACKTIP,1,-1,CRACKTIP-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-CRACKTIP,2,1,MATRIX-CRACKTIP,2,-1,CRACKTIP-DUMMY-NODE,2,-1' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDED,1,1,MATRIX-NODE-FIRSTBOUNDED,1,-1,FIRSTBOUNDED-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-FIRSTBOUNDED,2,1,MATRIX-NODE-FIRSTBOUNDED,2,-1,FIRSTBOUNDED-DUMMY-NODE,2,-1' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDED,1,1,MATRIX-NODE-SECONDBOUNDED,1,-1,SECONDBOUNDED-DUMMY-NODE,1,-1' + '\n')
inp.write(' 3' + '\n')
inp.write(' FIBER-NODE-SECONDBOUNDED,2,1,MATRIX-NODE-SECONDBOUNDED,2,-1,SECONDBOUNDED-DUMMY-NODE,2,-1' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'vgeomCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: geometric coupling',True)
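# The SLIDER MPC keeps every node of UPPERSIDE-WITHOUT-CORNERS on the
# straight line through the two corner nodes, so the upper edge stays
# straight while remaining free to stretch and translate.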
with open(modinpfullpath,'a') as inp:
inp.write('*MPC' + '\n')
inp.write(' SLIDER, UPPERSIDE-WITHOUT-CORNERS, NORTHWEST-CORNER, NORTHEAST-CORNER' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinrightCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: kinematic coupling with north-east corner as reference node',True)
with open(modinpfullpath,'a') as inp:
inp.write('*KINEMATIC COUPLING, REF NODE = NORTHEAST-CORNER' + '\n')
inp.write(' UPPERSIDE-WITHOUT-NECORNER, 2' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinleftCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: kinematic coupling with north-west corner as reference node',True)
with open(modinpfullpath,'a') as inp:
inp.write('*KINEMATIC COUPLING, REF NODE = NORTHWEST-CORNER' + '\n')
inp.write(' UPPERSIDE-WITHOUT-NWCORNER, 2' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinCouplingmeancorners' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: nw and ne vertical displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' NORTHWEST-CORNER, 2, 1, NORTHEAST-CORNER, 2, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' UPPERSIDE-WITHOUT-CORNERS, 2, 1, NORTHWEST-CORNER, 2, -0.5, NORTHEAST-CORNER, 2, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
elif 'vkinCouplingmeanside' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: mean vertical displacement over all nodes is taken as reference',True)
with open(modinpfullpath,'a') as inp:
nEq = len(northSideWithoutCornersNodeset)+2
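# One equation per north-side node: coefficient -nEq*(1-1/nEq) = -(nEq-1) on
# the node itself and +1 on every other node enforces v_n = mean of the
# remaining vertical displacements, i.e. all north-side nodes are driven to
# the common mean vertical displacement.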
inp.write('*EQUATION' + '\n')
for n in range(0,nEq):
inp.write(' ' + str(int(nEq)) + '\n')
line = ''
for m in range(0,nEq):
if m==n:
coeff = -nEq*(1.0-1.0/nEq)
else:
coeff = 1.0
if m==0:
nodeName = 'NORTHWEST-CORNER'
elif m==1:
nodeName = 'NORTHEAST-CORNER'
else:
nodeName = 'NORTHSIDE-N'+ str(m+1-2)
line += ' ' + nodeName + ', 2, ' + str(coeff) + ','
if (m+1)%4==0:
line += '\n'
inp.write(line)
line = ''
if len(line)>0:
line += '\n'
inp.write(line)
elif 'antisymmetry' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: antisymmetry',True)
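# Antisymmetry about the north-side center: for each mirrored node pair,
# v_pos + v_neg - 2*v_center = 0 (vertical) and u_pos + u_neg = 0 (horizontal).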
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
for n,node in enumerate(northSidePosSide):
inp.write(' 3' + '\n')
inp.write(' NORTHSIDE-POSSIDE-N'+ str(n+1) +', 2, 1, NORTHSIDE-NEGSIDE-N'+ str(n+1) +', 2, 1, NORTHSIDE-CENTER, 2, -2' + '\n')
inp.write(' 2' + '\n')
inp.write(' NORTHSIDE-POSSIDE-N'+ str(n+1) +', 1, 1, NORTHSIDE-NEGSIDE-N'+ str(n+1) +', 1, 1' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'ulinearCoupling' in parameters['BC']['northSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on NORTH side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: applied linear horizontal displacement',True)
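# Linear horizontal displacement profile: each north-side node's u is tied to
# the north-east corner value scaled by its coordinate ratio,
# u_n - (x_n/x_NE)*u_NE = 0.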
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
for n,node in enumerate(northSideWithoutCornersNodeset):
inp.write(' 2' + '\n')
inp.write(' NORTHSIDE-N'+ str(n+1) +', 1, 1, NORTHEAST-CORNER, 1, ' + str(-nodes[node][0]/nodes[northeastIndex][0]) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'vkinCouplingmeancorners' in parameters['BC']['rightSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on RIGHT side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: ne and se horizontal displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' SOUTHEAST-CORNER, 1, 1, NORTHEAST-CORNER, 1, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' RIGHTSIDE-WITHOUT-CORNERS, 1, 1, SOUTHEAST-CORNER, 1, -0.5, NORTHEAST-CORNER, 1, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if 'vkinCouplingmeancorners' in parameters['BC']['leftSide']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions on LEFT side ...',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Chosen boundary condition: nw and sw horizontal displacements are set to be equal and all other points are set to this value',True)
with open(modinpfullpath,'a') as inp:
inp.write('*EQUATION' + '\n')
inp.write(' 2' + '\n')
inp.write(' SOUTHWEST-CORNER, 1, 1, NORTHWEST-CORNER, 1, -1' + '\n')
inp.write(' 3' + '\n')
inp.write(' LEFTSIDE-WITHOUT-CORNERS, 1, 1, SOUTHWEST-CORNER, 1, -0.5, NORTHWEST-CORNER, 1, -0.5' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write surface definitions ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*SURFACE, NAME=FiberSurface, TYPE=ELEMENT' + '\n')
inp.write(' FIBER-CRACKFACE-ELEMENTS' + '\n')
inp.write('*SURFACE, NAME=MatrixSurface, TYPE=ELEMENT' + '\n')
inp.write(' MATRIX-CRACKFACE-ELEMENTS' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write end assembly ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*End Assembly' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write contact interaction ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTACT PAIR, INTERACTION=CrackFacesContact, SMALL SLIDING' + '\n')
inp.write(' MatrixSurface, FiberSurface' + '\n')
inp.write('*SURFACE INTERACTION, NAME=CrackFacesContact' + '\n')
inp.write(' 1.0' + '\n')
if 'static' in parameters['surface']['friction']['type']:
writeLineToLogFile(logfilepath,'a',baselogindent + 4*logindent + 'Static friction (Coulomb model) is present between crack faces',True)
with open(modinpfullpath,'a') as inp:
if 'maxtau' in parameters['surface']['friction']['type']:
inp.write('*FRICTION, TAUMAX=' + str(parameters['surface']['friction']['maxtau']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 5*logindent + 'Maximum tangential stress = ' + str(parameters['surface']['friction']['maxtau']) + '[MPa]',True)
else:
inp.write('*FRICTION' + '\n')
inp.write(' ' + str(parameters['surface']['friction']['static']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 5*logindent + 'Static friction coefficient = ' + str(parameters['surface']['friction']['static']) + '[-]',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endAssembly+1:startTempStep+2]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startTempStep+2:startTempCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write J-integral over reduced contours ...',True)
crackName = inpfilelines[startTempCI].replace('\n','').split(',')[1].split('=')[1]
nContours = inpfilelines[startTempCI].replace('\n','').split(',')[2].split('=')[1]
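# Virtual crack extension direction q = (qx, qy) for the *CONTOUR INTEGRAL:
# the tangent to the circular fiber/matrix interface at the crack tip, i.e.
# the vertical axis rotated by the crack tip angle deltatheta.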
qx = -np.sin(parameters['geometry']['deltatheta']*np.pi/180.0)
qy = np.cos(parameters['geometry']['deltatheta']*np.pi/180.0)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTOUR INTEGRAL, CRACK NAME=' + crackName + ', CONTOURS=' + nContours + '\n')
inp.write(' ' + 'CRACKTIP-CONTOURINTEGRAL, ' + str(qx) + ', ' + str(qy) + ', 0.0' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startTempCI+2:startLoadStep+2]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write loads ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** LOADS' + '\n')
inp.write('**' + '\n')
for load in parameters['loads'].values():
if 'appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']:
inp.write('*DSLOAD, OP=MOD' + '\n')
inp.write(' ' + load['set'] + ', P, ' + str(load['value']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startLoadStep+2:startLoadCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write J-integral over reduced contours ...',True)
crackName = inpfilelines[startLoadCI].replace('\n','').split(',')[1].split('=')[1]
nContours = inpfilelines[startLoadCI].replace('\n','').split(',')[2].split('=')[1]
qx = -np.sin(parameters['geometry']['deltatheta']*np.pi/180.0)
qy = np.cos(parameters['geometry']['deltatheta']*np.pi/180.0)
with open(modinpfullpath,'a') as inp:
inp.write('*CONTOUR INTEGRAL, CRACK NAME=' + crackName + ', CONTOURS=' + nContours + '\n')
inp.write(' ' + 'CRACKTIP-CONTOURINTEGRAL, ' + str(qx) + ', ' + str(qy) + ', 0.0' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startLoadCI+2:]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
else:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endAssembly+1:startBC]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write loads ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** LOADS' + '\n')
inp.write('**' + '\n')
for load in parameters['loads'].values():
if 'appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']:
inp.write('*DSLOAD, OP=MOD' + '\n')
inp.write(' ' + load['set'] + ', P, ' + str(load['value']) + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write boundary conditions for VCCT ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** BOUNDARY CONDITIONS' + '\n')
inp.write('**' + '\n')
inp.write('*BOUNDARY, OP=MOD' + '\n')
if np.abs(theta)>0.0 or 'full' in parameters['geometry']['fiber']['type']:
inp.write(' CRACKTIPUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' CRACKTIPLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' FIRSTBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDEDUP-DUMMY-NODE, ENCASTRE' + '\n')
inp.write(' SECONDBOUNDEDLOW-DUMMY-NODE, ENCASTRE' + '\n')
else:
inp.write(' CRACKTIP-DUMMY-NODE, ENCASTRE' + '\n')
if 'second' in parameters['mesh']['elements']['order']:
inp.write(' FIRSTBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
if 'inverseSquareRoot' in parameters['singularity']['type']:
inp.write(' SECONDBOUNDED-DUMMY-NODE, ENCASTRE' + '\n')
inp.write('**' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[startBC+1:startCI]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write from original input file ...',True)
with open(modinpfullpath,'a') as inp:
for line in inpfilelines[endCI+1:]:
inp.write(line)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Write STIFFNESS MATRIX generation step ...',True)
with open(modinpfullpath,'a') as inp:
inp.write('** LINEAR PERTURBATION STEP: OUTPUT GLOBAL STIFFNESS MATRIX' + '\n')
inp.write('*STEP, NAME=GlobalStiffnessMatrix' + '\n')
inp.write('*MATRIX GENERATE, STIFFNESS' + '\n')
inp.write('*MATRIX OUTPUT, STIFFNESS, FORMAT=LABELS' + '\n')
inp.write('*END STEP' + '\n')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
if parameters['simulation-pipeline']['remove-INP']:
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Remove .inp file from working directory... ',True)
try:
os.remove(inpfullpath)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilepath,'a',Exception,error,True)
sys.exc_clear()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
return modinpname
def runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Creating and submitting job ...',True)
try:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Create job ' + inpfile.split('.')[0] + ' from input file ' + inpfile,True)
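# Note: numDomains is hard-coded to 12 while numCpus=ncpus is user-supplied;
# Abaqus requires numDomains to be a multiple of numCpus, so this assumes
# ncpus divides 12.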
mdb.JobFromInputFile(name=inpfile.split('.')[0],inputFileName=inpfile,type=ANALYSIS, atTime=None, waitMinutes=0, waitHours=0, queue=None, memory=99, memoryUnits=PERCENTAGE, getMemoryFromAnalysis=True, explicitPrecision=SINGLE, nodalOutputPrecision=SINGLE, userSubroutine='',scratch='', multiprocessingMode=DEFAULT, numCpus=ncpus, numDomains=12,numGPUs=0)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Submit job ...',True)
mdb.jobs[inpfile.split('.')[0]].submit(consistencyChecking=OFF)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Wait for completion ...',True)
mdb.jobs[inpfile.split('.')[0]].waitForCompletion()
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilepath,'a',Exception,error,True)
sys.exc_clear()
if 'Windows' in system():
writeLineToLogFile(logfilepath,'a',2*logindent + 'Create Windows command file',True)
cmdfile = join(wd,'executeABAanalysis.cmd')
with open(cmdfile,'w') as cmd:
cmd.write('\n')
cmd.write('CD ' + wd + '\n')
cmd.write('\n')
cmd.write('abaqus analysis job=' + inpfile.split('.')[0] + ' interactive cpus=' + str(ncpus) + '\n')
writeLineToLogFile(logfilepath,'a',2*logindent + 'Executing Windows command file...',True)
try:
subprocess.call('cmd.exe /C ' + cmdfile)
writeLineToLogFile(logfilepath,'a',2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',2*logindent + str(error),True)
sys.exc_clear()
elif 'Linux' in system():
writeLineToLogFile(logfilepath,'a',2*logindent + 'Create Linux bash file',True)
bashfile = join(wd,'executeABAanalysis.sh')
with open(bashfile,'w') as bsh:
bsh.write('#!/bin/bash\n')
bsh.write('\n')
bsh.write('cd ' + wd + '\n')
bsh.write('\n')
bsh.write('abaqus analysis job=' + inpfile.split('.')[0] + ' interactive cpus=' + str(ncpus) + '\n')
writeLineToLogFile(logfilepath,'a',2*logindent + 'Executing Linux bash file...',True)
try:
writeLineToLogFile(logfilepath,'a',3*logindent + 'Change permissions to ' + bashfile ,True)
os.chmod(bashfile, 0o755)
writeLineToLogFile(logfilepath,'a',3*logindent + 'Run bash file',True)
subprocess.call(bashfile)
writeLineToLogFile(logfilepath,'a',2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilepath,'a',2*logindent + 'ERROR',True)
writeLineToLogFile(logfilepath,'a',2*logindent + str(Exception),True)
writeLineToLogFile(logfilepath,'a',2*logindent + str(error),True)
sys.exc_clear()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Exiting function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
def analyzeRVEresults(odbname,parameters,logfilepath,baselogindent,logindent):
skipLineToLogFile(logfilepath,'a',True)
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'In function: analyzeRVEresults(odbname,parameters,logfilepath,baselogindent,logindent)',True)
wd = parameters['input']['wd']
#=======================================================================
# BEGIN - extract stiffness matrix
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Extract stiffness matrix...',True)
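# The .mtx file produced by *MATRIX OUTPUT, STIFFNESS, FORMAT=LABELS holds one
# entry per line: row node label, row DOF, column node label, column DOF,
# value (cf. the CSV header below); the first two lines are skipped as header
# when copying.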
with open(join(wd,odbname.replace('VCCTandJintegral','Perturbation').split('.')[0]+'_'+'STIF2'+'.mtx'),'r') as mtx:
lines = mtx.readlines()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - extract stiffness matrix
#=======================================================================
#=======================================================================
# BEGIN - Copy stiffness matrix to csv file
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Copy stiffness matrix to csv file...',True)
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['globalstiffnessmatrix'],'ROW INDEX, ROW DOF, COLUMN INDEX, COLUMN DOF, VALUE')
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['globalstiffnessmatrix'],lines[2:])
#=======================================================================
# END - Copy stiffness matrix to csv file
#=======================================================================
#=======================================================================
# BEGIN - extract J-integral results
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Extracting J-integral results ...',True)
if parameters['simulation-pipeline']['analysis']['report-energyreleaserates']:
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '--> THERMAL STEP <--',True)
try:
Jintegrals = getJintegrals(wd,odbname.split('.')[0],parameters['Jintegral']['numberOfContours'],1)
except Exception,e:
writeErrorToLogFile(logfilepath,'a',Exception,e,True)
sys.exc_clear()
JintegralsWithDistance = []
for v,value in enumerate(Jintegrals):
JintegralsWithDistance.append([v+1,(v+1)*parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0,value])
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['thermalJintegral'],'CONTOUR, AVERAGE DISTANCE, GTOT')
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['thermalJintegral'],JintegralsWithDistance)
del JintegralsWithDistance
thermalJintegrals = Jintegrals
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '--> MECHANICAL STEP <--',True)
try:
Jintegrals = getJintegrals(wd,odbname.split('.')[0],parameters['Jintegral']['numberOfContours'],2)
except Exception,e:
writeErrorToLogFile(logfilepath,'a',Exception,e,True)
sys.exc_clear()
JintegralsWithDistance = []
for v,value in enumerate(Jintegrals):
JintegralsWithDistance.append([v+1,(v+1)*parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0,value])
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['Jintegral'],'CONTOUR, AVERAGE DISTANCE, GTOT')
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['Jintegral'],JintegralsWithDistance)
del JintegralsWithDistance
else:
try:
Jintegrals = getJintegrals(wd,odbname.split('.')[0],parameters['Jintegral']['numberOfContours'],1)
except Exception,e:
writeErrorToLogFile(logfilepath,'a',Exception,e,True)
sys.exc_clear()
JintegralsWithDistance = []
for v,value in enumerate(Jintegrals):
JintegralsWithDistance.append([v+1,(v+1)*parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0,value])
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['Jintegral'],'CONTOUR, AVERAGE DISTANCE, GTOT')
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['Jintegral'],JintegralsWithDistance)
del JintegralsWithDistance
else:
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['Jintegral'],'CONTOUR, AVERAGE DISTANCE, GTOT')
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - extract J-integral results
#=======================================================================
#=======================================================================
# BEGIN - open ODB
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Opening ODB database ' + odbname + ' in directory ' + wd + ' ...',True)
if '.odb' not in odbname:
odbname += '.odb'
odbfullpath = join(wd,odbname)
odb = openOdb(path=odbfullpath, readOnly=True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - open ODB
#=======================================================================
#=======================================================================
# BEGIN - extract node sets
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Extracting node sets ...',True)
rve = getSingleNodeSet(odb,'RVE-ASSEMBLY','RVE')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- RVE',True)
matrixCrackfaceNodes = getSingleNodeSet(odb,None,'MATRIX-CRACKFACE-NODES')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- MATRIX-CRACKFACE-NODES',True)
fiberCracktip = getSingleNodeSet(odb,None,'FIBER-CRACKTIP')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- FIBER-CRACKTIP',True)
matrixCracktip = getSingleNodeSet(odb,None,'MATRIX-CRACKTIP')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- MATRIX-CRACKTIP',True)
cracktipDummyNode = getSingleNodeSet(odb,None,'CRACKTIP-DUMMY-NODE')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- CRACKTIP-DUMMY-NODE',True)
fiberCracktipDispMeas = getSingleNodeSet(odb,None,'FIBER-CRACKTIP-DISPMEAS')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- FIBER-CRACKTIP-DISPMEAS',True)
matrixCracktipDispMeas = getSingleNodeSet(odb,None,'MATRIX-CRACKTIP-DISPMEAS')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- MATRIX-CRACKTIP-DISPMEAS',True)
if 'second' in parameters['mesh']['elements']['order']:
firstboundedFiber = getSingleNodeSet(odb,None,'FIBER-NODE-FIRSTBOUNDED')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- FIBER-NODE-FIRSTBOUNDED',True)
firstboundedDummyNode = getSingleNodeSet(odb,None,'FIRSTBOUNDED-DUMMY-NODE')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- FIRSTBOUNDED-DUMMY-NODE',True)
fiberFirstboundedDispMeas = getSingleNodeSet(odb,None,'FIBER-FIRSTBOUNDED-DISPMEAS')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- FIBER-FIRSTBOUNDED-DISPMEAS',True)
matrixFirstboundedDispMeas = getSingleNodeSet(odb,None,'MATRIX-FIRSTBOUNDED-DISPMEAS')
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '-- MATRIX-FIRSTBOUNDED-DISPMEAS',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - extract node sets
#=======================================================================
#=======================================================================
# BEGIN - extract displacements of all nodes and copy to csv file
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Extract displacements of all nodes...',True)
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['globaldispvector'],'NODE LABEL, Ux, Uy')
rveDisps = getFieldOutput(odb,-1,-1,'U',rve)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Displacements extracted',True)
globalDisps = {}
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Dictionary initialized',True)
for valueset in rveDisps.values:
rowIndex = int(valueset.nodeLabel)
globalDisps[rowIndex] = {}
globalDisps[rowIndex][1] = valueset.data[0]
globalDisps[rowIndex][2] = valueset.data[1]
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['globaldispvector'],[[rowIndex,valueset.data[0],valueset.data[1]]])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - extract displacements of all nodes and copy to csv file
#=======================================================================
#=======================================================================
# BEGIN - compute crack tip reference frame transformation
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Compute crack tip reference frame transformation ...',True)
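# phi is the angular position of the crack tip in the undeformed
# configuration; it defines the rotation from global (x,y) components to the
# local radial (opening) / tangential (sliding) frame used by VCCT below.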
undefCracktipCoords = getFieldOutput(odb,0,0,'COORD',fiberCracktip)
phi = np.arctan2(undefCracktipCoords.values[0].data[1],undefCracktipCoords.values[0].data[0])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - compute crack tip reference frame transformation
#=======================================================================
#=======================================================================
# BEGIN - compute mesh size reference frame transformation
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Compute mesh size reference frame transformation ...',True)
delta = parameters['mesh']['size']['delta']*np.pi/180.0
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - compute mesh size reference frame transformation
#=======================================================================
#=======================================================================
# BEGIN - save indices to build matrices
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Save indices to build matrices...',True)
cracktipIndex = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',fiberCracktip).values[0].nodeLabel)
fibercracktipdispmeasIndex = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',fiberCracktipDispMeas).values[0].nodeLabel)
matrixcracktipdispmeasIndex = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',matrixCracktipDispMeas).values[0].nodeLabel)
if 'second' in parameters['mesh']['elements']['order']:
fiberfirstBounded = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',firstboundedFiber).values[0].nodeLabel)
fiberfirstboundispmeasIndex = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',fiberFirstboundedDispMeas).values[0].nodeLabel)
matrixfirstboundispmeasIndex = odb.rootAssembly.instances['RVE-ASSEMBLY'].getNodeFromLabel(getFieldOutput(odb,0,0,'COORD',matrixFirstboundedDispMeas).values[0].nodeLabel)
createCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['matrixindeces'],'cracktipIndex,fibercracktipdispmeasIndex,matrixcracktipdispmeasIndex' + (',fiberfirstBounded,fiberfirstboundispmeasIndex,matrixfirstboundispmeasIndex' if 'second' in parameters['mesh']['elements']['order'] else ''))
data = [cracktipIndex,fibercracktipdispmeasIndex,matrixcracktipdispmeasIndex]
if 'second' in parameters['mesh']['elements']['order']:
data.append(fiberfirstBounded)
data.append(fiberfirstboundispmeasIndex)
data.append(matrixfirstboundispmeasIndex)
appendCSVfile(parameters['output']['local']['directory'],parameters['output']['local']['filenames']['matrixindeces'],[data])
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - save indices to build matrices
#=======================================================================
#=======================================================================
# BEGIN - compute VCCT
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Compute VCCT ...',True)
if parameters['simulation-pipeline']['analysis']['report-energyreleaserates']:
if len(parameters['steps'])>1:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '--> THERMAL STEP <--',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Check if crack faces are pressure-loaded in this step ...',True)
isPressureLoadedCrack = False
for load in parameters['loads'].values():
if ('appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']) and 'Temp-Step' in load['stepName'] and ('FiberSurface' in load['set'] or 'MatrixSurface' in load['set']):
isPressureLoadedCrack = True
uniformP = load['value']
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Pressure loaded crack faces are present, corrected VCCT will be used.',True)
break
if not isPressureLoadedCrack:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Pressure loaded crack faces are not present.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Extract forces and displacements ...',True)
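# Reaction forces at the encastred dummy nodes equal the tying forces
# transmitted through the crack-tip *EQUATIONs, i.e. the nodal forces holding
# the crack tip closed, as required by VCCT.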
RFcracktip = getFieldOutput(odb,-2,-1,'RF',cracktipDummyNode)
if 'second' in parameters['mesh']['elements']['order']:
RFfirstbounded = getFieldOutput(odb,-2,-1,'RF',firstboundedDummyNode)
fiberCracktipDisplacement = getFieldOutput(odb,-2,-1,'U',fiberCracktipDispMeas)
matrixCracktipDisplacement = getFieldOutput(odb,-2,-1,'U',matrixCracktipDispMeas)
if 'second' in parameters['mesh']['elements']['order']:
fiberFirstboundedDisplacement = getFieldOutput(odb,-2,-1,'U',fiberFirstboundedDispMeas)
matrixFirstboundedDisplacement = getFieldOutput(odb,-2,-1,'U',matrixFirstboundedDispMeas)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Rotate forces and displacements ...',True)
xRFcracktip = RFcracktip.values[0].data[0]
yRFcracktip = RFcracktip.values[0].data[1]
rRFcracktip = np.cos(phi)*xRFcracktip + np.sin(phi)*yRFcracktip
thetaRFcracktip = -np.sin(phi)*xRFcracktip + np.cos(phi)*yRFcracktip
if 'second' in parameters['mesh']['elements']['order']:
xRFfirstbounded = RFfirstbounded.values[0].data[0]
yRFfirstbounded = RFfirstbounded.values[0].data[1]
rRFfirstbounded = np.cos(phi)*xRFfirstbounded + np.sin(phi)*yRFfirstbounded
thetaRFfirstbounded = -np.sin(phi)*xRFfirstbounded + np.cos(phi)*yRFfirstbounded
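# Corrected VCCT for pressure-loaded crack faces: subtract the consistent
# nodal forces of the uniform pressure acting over one element edge of arc
# length Rf*delta. For quadratic edges the standard distribution is p*L/6 at
# the corner (tip) node and 2*p*L/3 at the midside node; for linear edges
# (else branch below) it is p*L/2 per node.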
if isPressureLoadedCrack:
rRFcracktip -= uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/6
rRFfirstbounded -= 2*uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/3
else:
if isPressureLoadedCrack:
rRFcracktip -= uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/2
xfiberCracktipDisplacement = fiberCracktipDisplacement.values[0].data[0]
yfiberCracktipDisplacement = fiberCracktipDisplacement.values[0].data[1]
rfiberCracktipDisplacement = np.cos(phi)*xfiberCracktipDisplacement + np.sin(phi)*yfiberCracktipDisplacement
thetafiberCracktipDisplacement = -np.sin(phi)*xfiberCracktipDisplacement + np.cos(phi)*yfiberCracktipDisplacement
xmatrixCracktipDisplacement = matrixCracktipDisplacement.values[0].data[0]
ymatrixCracktipDisplacement = matrixCracktipDisplacement.values[0].data[1]
rmatrixCracktipDisplacement = np.cos(phi)*xmatrixCracktipDisplacement + np.sin(phi)*ymatrixCracktipDisplacement
thetamatrixCracktipDisplacement = -np.sin(phi)*xmatrixCracktipDisplacement + np.cos(phi)*ymatrixCracktipDisplacement
if 'second' in parameters['mesh']['elements']['order']:
xfiberFirstboundedDisplacement = fiberFirstboundedDisplacement.values[0].data[0]
yfiberFirstboundedDisplacement = fiberFirstboundedDisplacement.values[0].data[1]
rfiberFirstboundedDisplacement = np.cos(phi)*xfiberFirstboundedDisplacement + np.sin(phi)*yfiberFirstboundedDisplacement
thetafiberFirstboundedDisplacement = -np.sin(phi)*xfiberFirstboundedDisplacement + np.cos(phi)*yfiberFirstboundedDisplacement
xmatrixFirstboundedDisplacement = matrixFirstboundedDisplacement.values[0].data[0]
ymatrixFirstboundedDisplacement = matrixFirstboundedDisplacement.values[0].data[1]
rmatrixFirstboundedDisplacement = np.cos(phi)*xmatrixFirstboundedDisplacement + np.sin(phi)*ymatrixFirstboundedDisplacement
thetamatrixFirstboundedDisplacement = -np.sin(phi)*xmatrixFirstboundedDisplacement + np.cos(phi)*ymatrixFirstboundedDisplacement
xcracktipDisplacement = xmatrixCracktipDisplacement - xfiberCracktipDisplacement
ycracktipDisplacement = ymatrixCracktipDisplacement - yfiberCracktipDisplacement
rcracktipDisplacement = rmatrixCracktipDisplacement - rfiberCracktipDisplacement
thetacracktipDisplacement = thetamatrixCracktipDisplacement - thetafiberCracktipDisplacement
if 'second' in parameters['mesh']['elements']['order']:
xfirstboundedDisplacement = xmatrixFirstboundedDisplacement - xfiberFirstboundedDisplacement
yfirstboundedDisplacement = ymatrixFirstboundedDisplacement - yfiberFirstboundedDisplacement
rfirstboundedDisplacement = rmatrixFirstboundedDisplacement - rfiberFirstboundedDisplacement
thetafirstboundedDisplacement = thetamatrixFirstboundedDisplacement - thetafiberFirstboundedDisplacement
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute VCCT with GTOT=GI+GII ...',True)
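# VCCT over one element of crack advance da = Rf*delta (arc length):
# GI = |F_r*du_r|/(2*da), GII = |F_theta*du_theta|/(2*da), with tip and
# midside contributions summed for second-order meshes; GTOTequiv evaluates
# the same expression with unrotated Cartesian components.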
if 'second' in parameters['mesh']['elements']['order']:
GI = np.abs(0.5*(rRFcracktip*rcracktipDisplacement+rRFfirstbounded*rfirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GII = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement+thetaRFfirstbounded*thetafirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTequiv = np.abs(0.5*(xRFcracktip*xcracktipDisplacement+yRFcracktip*ycracktipDisplacement+xRFfirstbounded*xfirstboundedDisplacement+yRFfirstbounded*yfirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
else:
GI = np.abs(0.5*(rRFcracktip*rcracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GII = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTequiv = np.abs(0.5*(xRFcracktip*xcracktipDisplacement+yRFcracktip*ycracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOT = GI + GII
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute VCCT with GI=GTOT-GII ...',True)
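# Alternative mode decomposition: GTOT is taken from the last (outermost)
# J-integral contour and GI recovered as GTOT - GII, with GII from the VCCT
# shear terms only.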
if 'second' in parameters['mesh']['elements']['order']:
GIIv2 = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement+thetaRFfirstbounded*thetafirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
else:
GIIv2 = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTv2 = thermalJintegrals[-1]
GIv2 = GTOTv2 - GIIv2
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Save to file ...',True)
if 'second' in parameters['mesh']['elements']['order']:
appendCSVfile(parameters['output']['global']['directory'],parameters['output']['global']['filenames']['thermalenergyreleaserate'],[[parameters['geometry']['deltatheta'],parameters['geometry']['Rf'],parameters['geometry']['L'],parameters['geometry']['L']/parameters['geometry']['Rf'],phiCZthermal*180.0/np.pi,G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uRthermal),np.max(uRthermal),np.mean(uRthermal),np.min(uThetathermal),np.max(uThetathermal),np.mean(uThetathermal),phiSZthermal*180.0/np.pi,xRFcracktip,yRFcracktip,xRFfirstbounded,yRFfirstbounded,rRFcracktip,thetaRFcracktip,rRFfirstbounded,thetaRFfirstbounded,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfirstboundedDisplacement,yfirstboundedDisplacement,rfirstboundedDisplacement,thetafirstboundedDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xfiberFirstboundedDisplacement,yfiberFirstboundedDisplacement,rfiberFirstboundedDisplacement,thetafiberFirstboundedDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement,xmatrixFirstboundedDisplacement,ymatrixFirstboundedDisplacement,rmatrixFirstboundedDisplacement,thetamatrixFirstboundedDisplacement]])
else:
appendCSVfile(parameters['output']['global']['directory'],parameters['output']['global']['filenames']['thermalenergyreleaserate'],[[parameters['geometry']['deltatheta'],parameters['geometry']['Rf'],parameters['geometry']['L'],parameters['geometry']['L']/parameters['geometry']['Rf'],phiCZthermal*180.0/np.pi,G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uRthermal),np.max(uRthermal),np.mean(uRthermal),np.min(uThetathermal),np.max(uThetathermal),np.mean(uThetathermal),phiSZthermal*180.0/np.pi,xRFcracktip,yRFcracktip,rRFcracktip,thetaRFcracktip,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement]])
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '--> MECHANICAL STEP <--',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Check if crack faces are pressure-loaded in this step ...',True)
isPressureLoadedCrack = False
for load in parameters['loads'].values():
if ('appliedUniformPressure' in load['type'] or 'applieduniformpressure' in load['type'] or 'applied Uniform Pressure' in load['type'] or 'applied uniform pressure' in load['type']) and 'Load-Step' in load['stepName'] and ('FiberSurface' in load['set'] or 'MatrixSurface' in load['set']):
isPressureLoadedCrack = True
uniformP = load['value']
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Pressure-loaded crack faces are present; the corrected VCCT will be used.',True)
break
if not isPressureLoadedCrack:
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Pressure-loaded crack faces are not present.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Extract forces and displacements ...',True)
RFcracktip = getFieldOutput(odb,-1,-1,'RF',cracktipDummyNode)
if 'second' in parameters['mesh']['elements']['order']:
RFfirstbounded = getFieldOutput(odb,-1,-1,'RF',firstboundedDummyNode)
fiberCracktipDisplacement = getFieldOutput(odb,-1,-1,'U',fiberCracktipDispMeas)
matrixCracktipDisplacement = getFieldOutput(odb,-1,-1,'U',matrixCracktipDispMeas)
if 'second' in parameters['mesh']['elements']['order']:
fiberFirstboundedDisplacement = getFieldOutput(odb,-1,-1,'U',fiberFirstboundedDispMeas)
matrixFirstboundedDisplacement = getFieldOutput(odb,-1,-1,'U',matrixFirstboundedDispMeas)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Rotate forces and displacements ...',True)
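# Rotate nodal quantities from the global x-y frame to the local
# r-theta frame at the angular position phi of the crack tip:
#   v_r     =  cos(phi)*v_x + sin(phi)*v_y
#   v_theta = -sin(phi)*v_x + cos(phi)*v_y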
xRFcracktip = RFcracktip.values[0].data[0]
yRFcracktip = RFcracktip.values[0].data[1]
rRFcracktip = np.cos(phi)*xRFcracktip + np.sin(phi)*yRFcracktip
thetaRFcracktip = -np.sin(phi)*xRFcracktip + np.cos(phi)*yRFcracktip
if 'second' in parameters['mesh']['elements']['order']:
xRFfirstbounded = RFfirstbounded.values[0].data[0]
yRFfirstbounded = RFfirstbounded.values[0].data[1]
rRFfirstbounded = np.cos(phi)*xRFfirstbounded + np.sin(phi)*yRFfirstbounded
thetaRFfirstbounded = -np.sin(phi)*xRFfirstbounded + np.cos(phi)*yRFfirstbounded
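# Corrected VCCT for pressure-loaded crack faces: subtract from the
# radial reaction forces the consistent nodal forces of the uniform
# pressure p acting on one crack-face element of arc length
# deltaA = Rf*delta*pi/180. For second-order elements the consistent
# distribution is p*deltaA/6 at the corner (crack tip) node and
# 2*p*deltaA/3 at the mid-side node; for first-order elements it is
# p*deltaA/2 at each node.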
if isPressureLoadedCrack:
rRFcracktip -= uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/6
rRFfirstbounded -= 2*uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/3
else:
if isPressureLoadedCrack:
rRFcracktip -= uniformP*(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0)/2
xfiberCracktipDisplacement = fiberCracktipDisplacement.values[0].data[0]
yfiberCracktipDisplacement = fiberCracktipDisplacement.values[0].data[1]
rfiberCracktipDisplacement = np.cos(phi)*xfiberCracktipDisplacement + np.sin(phi)*yfiberCracktipDisplacement
thetafiberCracktipDisplacement = -np.sin(phi)*xfiberCracktipDisplacement + np.cos(phi)*yfiberCracktipDisplacement
xmatrixCracktipDisplacement = matrixCracktipDisplacement.values[0].data[0]
ymatrixCracktipDisplacement = matrixCracktipDisplacement.values[0].data[1]
rmatrixCracktipDisplacement = np.cos(phi)*xmatrixCracktipDisplacement + np.sin(phi)*ymatrixCracktipDisplacement
thetamatrixCracktipDisplacement = -np.sin(phi)*xmatrixCracktipDisplacement + np.cos(phi)*ymatrixCracktipDisplacement
if 'second' in parameters['mesh']['elements']['order']:
xfiberFirstboundedDisplacement = fiberFirstboundedDisplacement.values[0].data[0]
yfiberFirstboundedDisplacement = fiberFirstboundedDisplacement.values[0].data[1]
rfiberFirstboundedDisplacement = np.cos(phi)*xfiberFirstboundedDisplacement + np.sin(phi)*yfiberFirstboundedDisplacement
thetafiberFirstboundedDisplacement = -np.sin(phi)*xfiberFirstboundedDisplacement + np.cos(phi)*yfiberFirstboundedDisplacement
xmatrixFirstboundedDisplacement = matrixFirstboundedDisplacement.values[0].data[0]
ymatrixFirstboundedDisplacement = matrixFirstboundedDisplacement.values[0].data[1]
rmatrixFirstboundedDisplacement = np.cos(phi)*xmatrixFirstboundedDisplacement + np.sin(phi)*ymatrixFirstboundedDisplacement
thetamatrixFirstboundedDisplacement = -np.sin(phi)*xmatrixFirstboundedDisplacement + np.cos(phi)*ymatrixFirstboundedDisplacement
xcracktipDisplacement = xmatrixCracktipDisplacement - xfiberCracktipDisplacement
ycracktipDisplacement = ymatrixCracktipDisplacement - yfiberCracktipDisplacement
rcracktipDisplacement = rmatrixCracktipDisplacement - rfiberCracktipDisplacement
thetacracktipDisplacement = thetamatrixCracktipDisplacement - thetafiberCracktipDisplacement
if 'second' in parameters['mesh']['elements']['order']:
xfirstboundedDisplacement = xmatrixFirstboundedDisplacement - xfiberFirstboundedDisplacement
yfirstboundedDisplacement = ymatrixFirstboundedDisplacement - yfiberFirstboundedDisplacement
rfirstboundedDisplacement = rmatrixFirstboundedDisplacement - rfiberFirstboundedDisplacement
thetafirstboundedDisplacement = thetamatrixFirstboundedDisplacement - thetafiberFirstboundedDisplacement
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute VCCT with GTOT=GI+GII ...',True)
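# Irwin's crack closure integral evaluated by VCCT over one element of
# crack advance deltaA = Rf*delta*pi/180 (arc length of one element at
# the fiber/matrix interface):
#   GI  = |0.5*F_r*Du_r|/deltaA ,  GII = |0.5*F_theta*Du_theta|/deltaA
# where F are the crack-tip reaction forces and Du the relative
# crack-face displacements; for second-order elements the contribution
# of the mid-side ("first bounded") node pair is added.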
if 'second' in parameters['mesh']['elements']['order']:
GI = np.abs(0.5*(rRFcracktip*rcracktipDisplacement+rRFfirstbounded*rfirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GII = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement+thetaRFfirstbounded*thetafirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTequiv = np.abs(0.5*(xRFcracktip*xcracktipDisplacement+yRFcracktip*ycracktipDisplacement+xRFfirstbounded*xfirstboundedDisplacement+yRFfirstbounded*yfirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
else:
GI = np.abs(0.5*(rRFcracktip*rcracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GII = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTequiv = np.abs(0.5*(xRFcracktip*xcracktipDisplacement+yRFcracktip*ycracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOT = GI + GII
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Compute VCCT with GI=GTOT-GII ...',True)
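# Alternative mode decomposition: keep GII from VCCT as above, take the
# total energy release rate from the last J-integral contour, and
# recover GIv2 = GTOTv2 - GIIv2.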
if 'second' in parameters['mesh']['elements']['order']:
GIIv2 = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement+thetaRFfirstbounded*thetafirstboundedDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
else:
GIIv2 = np.abs(0.5*(thetaRFcracktip*thetacracktipDisplacement)/(parameters['geometry']['Rf']*parameters['mesh']['size']['delta']*np.pi/180.0))
GTOTv2 = Jintegrals[-1]
GIv2 = GTOTv2 - GIIv2
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + 'Save to file ...',True)
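# NOTE: the integer placeholders 1,...,24 in the rows below appear to
# stand in for the matGabq/eigG/psi columns of the title line, which
# are not computed in this branch.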
if 'second' in parameters['mesh']['elements']['order']:
appendCSVfile(parameters['output']['global']['directory'],parameters['output']['global']['filenames']['energyreleaserate'],[[parameters['geometry']['deltatheta'],parameters['geometry']['Rf'],parameters['geometry']['L'],parameters['geometry']['L']/parameters['geometry']['Rf'],phiCZ*180.0/np.pi,G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uR),np.max(uR),np.mean(uR),np.min(uTheta),np.max(uTheta),np.mean(uTheta),phiSZ*180.0/np.pi,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24, xRFcracktip,yRFcracktip,xRFfirstbounded,yRFfirstbounded,rRFcracktip,thetaRFcracktip,rRFfirstbounded,thetaRFfirstbounded,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfirstboundedDisplacement,yfirstboundedDisplacement,rfirstboundedDisplacement,thetafirstboundedDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xfiberFirstboundedDisplacement,yfiberFirstboundedDisplacement,rfiberFirstboundedDisplacement,thetafiberFirstboundedDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement,xmatrixFirstboundedDisplacement,ymatrixFirstboundedDisplacement,rmatrixFirstboundedDisplacement,thetamatrixFirstboundedDisplacement]])
else:
appendCSVfile(parameters['output']['global']['directory'],parameters['output']['global']['filenames']['energyreleaserate'],[[parameters['geometry']['deltatheta'],parameters['geometry']['Rf'],parameters['geometry']['L'],parameters['geometry']['L']/parameters['geometry']['Rf'],phiCZ*180.0/np.pi,G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uR),np.max(uR),np.mean(uR),np.min(uTheta),np.max(uTheta),np.mean(uTheta),phiSZ*180.0/np.pi,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,xRFcracktip,yRFcracktip,rRFcracktip,thetaRFcracktip,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement]])
writeLineToLogFile(logfilepath,'a',baselogindent + 3*logindent + '... done.',True)
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - compute VCCT
#=======================================================================
#=======================================================================
# BEGIN - close ODB
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + 'Closing ODB database ...',True)
odb.close()
writeLineToLogFile(logfilepath,'a',baselogindent + 2*logindent + '... done.',True)
#=======================================================================
# END - close ODB
#=======================================================================
writeLineToLogFile(logfilepath,'a',baselogindent + logindent + 'Exiting function: analyzeRVEresults(wd,odbname,parameters)',True)
def main(argv):
#=======================================================================
# BEGIN - PARSE COMMAND LINE
#=======================================================================
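# Recognized command-line options (parsed below): -help, -dir/-directory
# <input directory>, -data <data deck>, -iterables <iterables deck>,
# -plot <plot deck>, -debug. A typical invocation might look like
# (hypothetical paths and file names):
#   abaqus cae noGUI=script.py -- -dir /path/to/decks -data data.deck -iterables iterables.deck -plot plot.deck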
debug = False
for a,arg in enumerate(argv):
if '-help' in arg:
printHelp()
elif '-dir' in arg or '-directory' in arg:
inputDirectory = argv[a+1]
elif '-data' in arg:
dataFile = argv[a+1]
elif '-iterables' in arg:
iterablesFile = argv[a+1]
elif '-plot' in arg:
plotFile = argv[a+1]
elif '-debug' in arg:
debug = True
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('>>>>-----------------------<<<<')
print >> sys.__stdout__,('>>>> Running in DEBUG MODE <<<<')
print >> sys.__stdout__,('>>>>-----------------------<<<<')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
if 'inputDirectory' not in locals():
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('!!! ERROR: missing input directory !!!')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
printHelp()
if 'dataFile' not in locals():
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('!!! ERROR: missing data file !!!')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
printHelp()
if 'iterablesFile' not in locals():
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('!!! ERROR: missing iterables file !!!')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
printHelp()
if 'plotFile' not in locals():
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,('!!! ERROR: missing plot file !!!')
print >> sys.__stdout__,(' ')
print >> sys.__stdout__,(' ')
printHelp()
#=======================================================================
# END - PARSE COMMAND LINE
#=======================================================================
#=======================================================================
# BEGIN - DATA
#=======================================================================
# units are already the ones used in simulation, not SI
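# Map the strings used in the decks to the corresponding Abaqus
# symbolic constants, so that decks can reference them by name.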
ABQbuiltinDict = {'ISOTROPIC':ISOTROPIC,
'ENGINEERING_CONSTANTS':ENGINEERING_CONSTANTS,
'MIDDLE_SURFACE':MIDDLE_SURFACE,
'FROM_SECTION':FROM_SECTION}
if inputDirectory[-1]=='/' or inputDirectory[-1]=='\\':
inputDirectory = inputDirectory[:-1]
with open(join(inputDirectory,dataFile.split('.')[0]+'.deck'),'r') as deck:
decklines = deck.readlines()
keywords = []
values = []
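# Deck line grammar, as inferred from the parsing below: lines starting
# with '#' are comments; any other line reads
#   keyword1,keyword2,... @ value $ type [# trailing comment]
# where the comma-separated keyword set addresses a nested entry of the
# parameters dictionary and type is one of boolean, int, float, string,
# ABAQUS keyword, or 'list of' any of these. Hypothetical example:
#   geometry,Rf @ 1.0 $ float # fiber radius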
for line in decklines:
if line[0] == '#':
continue
removeComment = line.replace('\n','').split('#')[0]
keywordSet = removeComment.split('@')[0]
keywords.append(keywordSet.replace(' ','').split(','))
dataType = removeComment.split('$')[1]
if 'list of boolean' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(ast.literal_eval(dataString))
values.append(dataList)
elif 'list of int' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(int(dataString))
values.append(dataList)
elif 'list of float' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(float(dataString))
values.append(dataList)
elif 'list of string' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(str(dataString))
values.append(dataList)
elif 'list of ABAQUS keyword' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(ABQbuiltinDict[dataString])
values.append(dataList)
elif 'boolean' in dataType:
values.append(ast.literal_eval(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'int' in dataType:
values.append(int(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'float' in dataType:
values.append(float(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'string' in dataType:
values.append(str(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'ABAQUS keyword' in dataType:
values.append(ABQbuiltinDict[removeComment.split('@')[1].split('$')[0].replace(' ','')])
RVEparams = {}
for k,keywordSet in enumerate(keywords):
fillDataDictionary(RVEparams,keywordSet,values[k])
# parameters for iterations
# RVEparams['modelname']
# RVEparams['deltatheta']
# RVEparams['deltapsi']
# RVEparams['deltaphi']
#=======================================================================
# END - DATA
#=======================================================================
#=======================================================================
# BEGIN - ITERABLES
#=======================================================================
with open(join(inputDirectory,iterablesFile.split('.')[0]+'.deck'),'r') as deck:
decklines = deck.readlines()
for l,line in enumerate(decklines):
if line[0] == '#':
continue
elif 'basename' in line:
basename = str(line.replace('\n','').split('#')[0].split('$')[0].split('@')[1].replace(' ',''))
elif 'free parameters' in line:
freeParamsStart = l+1
keywords = []
values = []
lenOfValues = []
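# Each free-parameter line provides either an explicit list of values
# or a min/max/step triplet, which is expanded with np.arange below
# (the stop value is extended by one step so that max is included).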
for line in decklines[freeParamsStart:]:
if line[0] == '#':
continue
removeComment = line.replace('\n','').split('#')[0]
keywordSet = removeComment.split('@')[0]
keywords.append(keywordSet.replace(' ','').split(','))
dataType = removeComment.split('$')[1]
listAsString = removeComment.split('@')[1].split('$')[0].replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(float(dataString))
if 'min' in dataType and 'max' in dataType and 'step' in dataType:
values.append(np.arange(dataList[0],dataList[1]+dataList[2],dataList[2]))
else:
values.append(dataList)
lenOfValues.append(len(values[-1]))
lenSortedIndeces = np.argsort(lenOfValues)
sortedValues = []
sortedKeywords = []
for index in lenSortedIndeces:
sortedValues.append(values[index])
sortedKeywords.append(keywords[index])
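# Enumerate the Cartesian product of all free-parameter value lists as
# a mixed-radix counter; since the lists are sorted by increasing
# length, the longest list sits in last position and varies fastest.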
iterationsSets = []
indecesCollection = []
totalSets = 1
for valueSet in sortedValues:
totalSets *= len(valueSet)
indeces = []
for j in range(0,len(sortedKeywords)):
indeces.append(0)
indecesCollection.append(indeces)
iterationSet = []
for i,index in enumerate(indeces):
iterationSet.append(sortedValues[i][index])
iterationsSets.append(iterationSet)
for k in range(1,totalSets):
    # advance the mixed-radix counter: the last index varies fastest
    # and a wrap-around to 0 carries into the next index to the left
    indeces = list(indecesCollection[k-1])
    for j in range(len(sortedKeywords)-1,-1,-1):
        if indeces[j]==len(sortedValues[j])-1:
            indeces[j] = 0
        else:
            indeces[j] += 1
            break
    indecesCollection.append(indeces)
    iterationSet = []
    for i,index in enumerate(indeces):
        iterationSet.append(sortedValues[i][index])
    iterationsSets.append(iterationSet)
#=======================================================================
# END - ITERABLES
#=======================================================================
#=======================================================================
# BEGIN - PLOT SETTINGS
#=======================================================================
with open(join(inputDirectory,plotFile.split('.')[0]+'.deck'),'r') as deck:
decklines = deck.readlines()
keywords = []
values = []
for line in decklines:
if line[0] == '#':
continue
removeComment = line.replace('\n','').split('#')[0]
keywordSet = removeComment.split('@')[0]
keywords.append(keywordSet.replace(' ','').split(','))
dataType = removeComment.split('$')[1]
if 'list of boolean' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(ast.literal_eval(dataString))
values.append(dataList)
elif 'list of int' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(int(dataString))
values.append(dataList)
elif 'list of float' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(float(dataString))
values.append(dataList)
elif 'list of string' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(str(dataString))
values.append(dataList)
elif 'list of ABAQUS keyword' in dataType:
listAsString = removeComment.split('@')[1].split('$')[0].replace(' ','').replace('[','').replace(']','').split(',')
dataList = []
for dataString in listAsString:
dataList.append(ABQbuiltinDict[dataString])
values.append(dataList)
elif 'boolean' in dataType:
values.append(ast.literal_eval(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'int' in dataType:
values.append(int(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'float' in dataType:
values.append(float(removeComment.split('@')[1].split('$')[0].replace(' ','')))
elif 'string' in dataType:
values.append(str(removeComment.split('@')[1].split('$')[0]))
elif 'ABAQUS keyword' in dataType:
values.append(ABQbuiltinDict[removeComment.split('@')[1].split('$')[0].replace(' ','')])
for k,keywordSet in enumerate(keywords):
fillDataDictionary(RVEparams,keywordSet,values[k])
#=======================================================================
# END - PLOT SETTINGS
#=======================================================================
#=======================================================================
# BEGIN - ANALYSIS
#=======================================================================
workDir = RVEparams['input']['wd']
RVEparams['output']['global']['filenames']['inputdata'] = basename + '_InputData'
RVEparams['output']['global']['filenames']['performances'] = basename + '_ABQ-Performances'
RVEparams['output']['global']['filenames']['stiffness'] = basename + '_Stiffness'
RVEparams['output']['global']['filenames']['energyreleaserate'] = basename + '_ERRTS'
if len(RVEparams['steps'])>1:
RVEparams['output']['global']['filenames']['thermalenergyreleaserate'] = basename + '_thermalERRTS'
logfilename = datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '_ABQ-RVE-generation-and-analysis' + '.log'
logfilefullpath = join(workDir,logfilename)
logindent = ' '
if not os.path.exists(RVEparams['output']['global']['directory']):
os.mkdir(RVEparams['output']['global']['directory'])
with open(logfilefullpath,'w') as log:
log.write('Automatic generation and FEM analysis of RVEs with Abaqus Python' + '\n')
createCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_TIME','ITERATION PARAMETER VALUE, T(createRVE()) [s], T(modifyRVEinputfile()) [s], T(modifyRVEinputfilePerturbationStep()) [s], T(runRVEsimulation()) [s], T(runRVEsimulation(), perturbation step) [s], T(analyzeRVEresults()) [s],TOTAL TIME FOR ITERATION [s]')
createCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['inputdata'],'Rf [um],L [um],L/Rf [-],Vff [-],fiber,matrix')
appendCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['inputdata'],[[RVEparams['geometry']['Rf'],RVEparams['geometry']['L'],RVEparams['geometry']['L']/RVEparams['geometry']['Rf'],(RVEparams['geometry']['Rf']*RVEparams['geometry']['Rf']*np.pi)/(4*RVEparams['geometry']['L']*RVEparams['geometry']['L']),RVEparams['sections']['1']['material'],RVEparams['sections']['2']['material']]])
createCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist','ABSOLUTE PATH, NAME, TO PLOT, PLOT VARIABLES')
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['inputdata']+'.csv'),'MODEL-DATA',RVEparams['plot']['global']['inputdata']['toPlot'],RVEparams['plot']['global']['inputdata']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['energyreleaserate']+'.csv'),'GLOBAL-ERRTS',RVEparams['plot']['global']['errts']['toPlot'],RVEparams['plot']['global']['errts']['variables']]])
if len(RVEparams['steps'])>1:
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['thermalenergyreleaserate']+'.csv'),'GLOBAL-THERMALERRTS',RVEparams['plot']['global']['errts']['toPlot'],RVEparams['plot']['global']['errts']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_TIME'+'.csv'),'GLOBAL-TIME',RVEparams['plot']['global']['globaltime']['toPlot'],RVEparams['plot']['global']['globaltime']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['performances']+'.csv'),'GLOBAL-ABQperformances',RVEparams['plot']['global']['abqperf']['toPlot'],RVEparams['plot']['global']['abqperf']['variables']]])
createCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['performances'],'PROJECT NAME, NUMBER OF CPUS [-], USER TIME [s], SYSTEM TIME [s], USER TIME/TOTAL CPU TIME [%], SYSTEM TIME/TOTAL CPU TIME [%], TOTAL CPU TIME [s], WALLCLOCK TIME [s], WALLCLOCK TIME [m], WALLCLOCK TIME [h], WALLCLOCK TIME/TOTAL CPU TIME [%], ESTIMATED FLOATING POINT OPERATIONS PER ITERATION [-], MINIMUM REQUIRED MEMORY [MB], MEMORY TO MINIMIZE I/O [MB], TOTAL NUMBER OF ELEMENTS [-], NUMBER OF ELEMENTS DEFINED BY THE USER [-], NUMBER OF ELEMENTS DEFINED BY THE PROGRAM [-], TOTAL NUMBER OF NODES [-], NUMBER OF NODES DEFINED BY THE USER [-], NUMBER OF NODES DEFINED BY THE PROGRAM [-], TOTAL NUMBER OF VARIABLES [-]')
titleline = ''
if 'second' in RVEparams['mesh']['elements']['order']:
titleline = 'deltatheta [deg],Rf,L,L/Rf,phiCZ [deg],G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uR),np.max(uR),np.mean(uR),np.min(uTheta),np.max(uTheta),np.mean(uTheta),phiSZ [deg],matGabq[0,0],matGabq[0,1],matGabq[1,0],matGabq[1,1],eigG1abq,eigG2abq,eigvecG1abq[0],eigvecG1abq[1],psi1abq,psi2abq,psi1abq+90.0,psi2abq+90.0,xRFcracktip,yRFcracktip,xRFfirstbounded,yRFfirstbounded,rRFcracktip,thetaRFcracktip,rRFfirstbounded,thetaRFfirstbounded,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfirstboundedDisplacement,yfirstboundedDisplacement,rfirstboundedDisplacement,thetafirstboundedDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xfiberFirstboundedDisplacement,yfiberFirstboundedDisplacement,rfiberFirstboundedDisplacement,thetafiberFirstboundedDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement,xmatrixFirstboundedDisplacement,ymatrixFirstboundedDisplacement,rmatrixFirstboundedDisplacement,thetamatrixFirstboundedDisplacement'
else:
titleline = 'deltatheta [deg],Rf,L,L/Rf,phiCZ [deg],G0,GI/G0,GII/G0,GTOT/G0,GIv2/G0,GIIv2/G0,GTOTv2/G0,GTOTequiv/G0,GI,GII,GTOT,GIv2,GIIv2,GTOTv2,GTOTequiv,np.min(uR),np.max(uR),np.mean(uR),np.min(uTheta),np.max(uTheta),np.mean(uTheta),phiSZ [deg],matGabq[0,0],matGabq[0,1],matGabq[1,0],matGabq[1,1],eigG1abq,eigG2abq,eigvecG1abq[0],eigvecG1abq[1],psi1abq,psi2abq,psi1abq+90.0,psi2abq+90.0,xRFcracktip,yRFcracktip,rRFcracktip,thetaRFcracktip,xcracktipDisplacement,ycracktipDisplacement,rcracktipDisplacement,thetacracktipDisplacement,xfiberCracktipDisplacement,yfiberCracktipDisplacement,rfiberCracktipDisplacement,thetafiberCracktipDisplacement,xmatrixCracktipDisplacement,ymatrixCracktipDisplacement,rmatrixCracktipDisplacement,thetamatrixCracktipDisplacement'
createCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['energyreleaserate'],titleline)
if len(RVEparams['steps'])>1:
createCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['thermalenergyreleaserate'],titleline)
createCSVfile(RVEparams['output']['global']['directory'],RVEparams['output']['global']['filenames']['stiffness'],'deltatheta [deg], Rf [mum], L [mum], L/Rf [-], RVE area [mum2], app strain [%], avg strain [%], avg stress [MPa], E1 (avg stress/avg strain) [MPa], E1 (avg stress/avg strain) [GPa], E1 (avg stress/app strain) [MPa], E1 (avg stress/app strain) [GPa], avg COD [mum], max COD [mum], avg CSD [mum], max CSD [mum], beta22/rhoD [mum], beta33/rhoD [mum], beta23/rhoD [mum], OZ - tol=0.0% [deg], CZ - tol=0.0% [deg], OZ - tol=0.1% [deg], CZ - tol=0.1% [deg], OZ - tol=0.2% [deg], CZ - tol=0.2% [deg], OZ - tol=0.3% [deg], CZ - tol=0.3% [deg], OZ - tol=0.4% [deg], CZ - tol=0.4% [deg], OZ - tol=0.5% [deg], CZ - tol=0.5% [deg], OZ - tol=0.6% [deg], CZ - tol=0.6% [deg], OZ - tol=0.7% [deg], CZ - tol=0.7% [deg], OZ - tol=0.8% [deg], CZ - tol=0.8% [deg], OZ - tol=0.9% [deg], CZ - tol=0.9% [deg], OZ - tol=1.0% [deg], CZ - tol=1.0% [deg], OZ - tol=2.0% [deg], CZ - tol=2.0% [deg], OZ - tol=3.0% [deg], CZ - tol=3.0% [deg], OZ - tol=4.0% [deg], CZ - tol=4.0% [deg], OZ - tol=5.0% [deg], CZ - tol=5.0% [deg]')
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a','In function: main(argv)',True)
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Global timer starts',True)
globalStart = timeit.default_timer()
for iterationSet in iterationsSets:
timedataList = []
totalIterationTime = 0.0
variationString = ''
for v,value in enumerate(iterationSet):
if v>0:
variationString += '-'
variationString += str(sortedKeywords[v][-1]) + str(value).replace('.','_')
fillDataDictionary(RVEparams,sortedKeywords[v],value)
RVEparams['input']['modelname'] = basename + '_' + variationString
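# Heuristic angular mesh seeds as a function of the debond size: refine
# deltapsi proportionally to deltatheta for small debonds (<20 deg),
# keep it fixed at 10 deg in the intermediate range, and shrink both
# deltapsi and deltaphi in proportion to the remaining bonded arc for
# nearly full debonds (>=140 deg).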
if RVEparams['geometry']['deltatheta']<20:
RVEparams['mesh']['size']['deltapsi'] = 0.5*RVEparams['geometry']['deltatheta']
RVEparams['mesh']['size']['deltaphi'] = 20.0
elif RVEparams['geometry']['deltatheta']<140:
RVEparams['mesh']['size']['deltapsi'] = 10.0
RVEparams['mesh']['size']['deltaphi'] = 20.0
else:
RVEparams['mesh']['size']['deltapsi'] = 0.4*(180.0-RVEparams['geometry']['deltatheta'])
RVEparams['mesh']['size']['deltaphi'] = 0.4*(180.0-RVEparams['geometry']['deltatheta'])
RVEparams['output']['local']['directory'] = join(RVEparams['output']['global']['directory'],RVEparams['input']['modelname'])
RVEparams['output']['local']['filenames']['Jintegral'] = RVEparams['input']['modelname'] + '-Jintegral'
RVEparams['output']['local']['filenames']['stressesatboundary'] = RVEparams['input']['modelname'] + '-stressesatboundary'
RVEparams['output']['local']['filenames']['stressesatsymmetryline'] = RVEparams['input']['modelname'] + '-stressesatsymmetryline'
RVEparams['output']['local']['filenames']['stressesatbondedinterface'] = RVEparams['input']['modelname'] + '-stressesatbondedinterface'
RVEparams['output']['local']['filenames']['crackdisplacements'] = RVEparams['input']['modelname'] + '-crackdisplacements'
RVEparams['output']['local']['filenames']['contactzonetolerance'] = RVEparams['input']['modelname'] + '-contactzonetol'
RVEparams['output']['local']['filenames']['globalstiffnessmatrix'] = RVEparams['input']['modelname'] + '-globalstiffnessmatrix'
RVEparams['output']['local']['filenames']['globalloadvector'] = RVEparams['input']['modelname'] + '-globalloadvector'
RVEparams['output']['local']['filenames']['globaldispvector'] = RVEparams['input']['modelname'] + '-globaldispvector'
RVEparams['output']['local']['filenames']['matrixindeces'] = RVEparams['input']['modelname'] + '-matrixindeces'
RVEparams['output']['report']['local']['directory'].append(join(RVEparams['output']['global']['directory'],RVEparams['input']['modelname']))
RVEparams['output']['report']['local']['filenames']['Jintegral'].append(RVEparams['input']['modelname'] + '-Jintegral')
RVEparams['output']['report']['local']['filenames']['stressesatboundary'].append(RVEparams['input']['modelname'] + '-stressesatboundary')
RVEparams['output']['report']['local']['filenames']['crackdisplacements'].append(RVEparams['input']['modelname'] + '-crackdisplacements')
RVEparams['output']['report']['local']['filenames']['contactzonetolerance'].append(RVEparams['input']['modelname'] + '-contactzonetol')
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['Jintegral']+'.csv'),'Jintegral-Params='+variationString,RVEparams['plot']['local']['Jintegral']['toPlot'],RVEparams['plot']['local']['Jintegral']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['stressesatboundary']+'.csv'),'StressAtBoundary-Params='+variationString,RVEparams['plot']['local']['stressatboundary']['toPlot'],RVEparams['plot']['local']['stressatboundary']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['stressesatsymmetryline']+'.csv'),'StressAtSymmLine-Params='+variationString,RVEparams['plot']['local']['stressatsymmetryline']['toPlot'],RVEparams['plot']['local']['stressatsymmetryline']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['stressesatbondedinterface']+'.csv'),'StressAtBondInter-Params='+variationString,RVEparams['plot']['local']['stressatbondedinterface']['toPlot'],RVEparams['plot']['local']['stressatbondedinterface']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['crackdisplacements']+'.csv'),'CrackDisps-Params='+variationString,RVEparams['plot']['local']['crackdisplacements']['toPlot'],RVEparams['plot']['local']['crackdisplacements']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['contactzonetolerance']+'.csv'),'TolCZ-Params='+variationString,RVEparams['plot']['local']['contactzonetolerance']['toPlot'],RVEparams['plot']['local']['contactzonetolerance']['variables']]])
if len(RVEparams['steps'])>1:
RVEparams['output']['local']['filenames']['thermalJintegral'] = RVEparams['input']['modelname'] + '-thermalJintegral'
RVEparams['output']['local']['filenames']['thermalcrackdisplacements'] = RVEparams['input']['modelname'] + '-thermalcrackdisplacements'
RVEparams['output']['local']['filenames']['thermalcontactzonetolerance'] = RVEparams['input']['modelname'] + '-thermalcontactzonetol'
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['thermalJintegral']+'.csv'),'thermalJintegral-Params='+variationString,RVEparams['plot']['local']['Jintegral']['toPlot'],RVEparams['plot']['local']['Jintegral']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['thermalcrackdisplacements']+'.csv'),'thermalCrackDisps-Params='+variationString,RVEparams['plot']['local']['crackdisplacements']['toPlot'],RVEparams['plot']['local']['crackdisplacements']['variables']]])
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist',[[join(RVEparams['output']['local']['directory'],RVEparams['output']['local']['filenames']['thermalcontactzonetolerance']+'.csv'),'thermalTolCZ-Params='+variationString,RVEparams['plot']['local']['contactzonetolerance']['toPlot'],RVEparams['plot']['local']['contactzonetolerance']['variables']]])
timedataList.append(RVEparams['input']['modelname'])
if not os.path.exists(RVEparams['output']['local']['directory']):
os.mkdir(RVEparams['output']['local']['directory'])
#================= create ABAQUS CAE model
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: createRVE(parameters,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['create-CAE']:
modelData = createRVE(RVEparams,logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: createRVE(parameters,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
#================= modify input file
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: modifyRVEinputfile(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['modify-INP']:
inputfilename = modifyRVEinputfile(RVEparams,modelData,logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: modifyRVEinputfile(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
#================= modify input file for perturbation step
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: modifyRVEinputfilePerturbationStep(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['modify-INP']:
perturbationinputfilename = modifyRVEinputfilePerturbationStep(RVEparams,modelData,logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: modifyRVEinputfilePerturbationStep(parameters,mdbData,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
#================= run ABAQUS simulation
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['analyze-ODB']:
runRVEsimulation(RVEparams['input']['wd'],inputfilename,RVEparams['solver']['cpus'],logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
#================= run ABAQUS simulation of perturbation step
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['analyze-ODB']:
runRVEsimulation(RVEparams['input']['wd'],perturbationinputfilename,RVEparams['solver']['cpus'],logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: runRVEsimulation(wd,inpfile,ncpus,logfilepath,baselogindent,logindent)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
#inputfilename = 'Job-VCCTandJintegral-RVE100-Half-SmallDisplacement-Free-10' + '.inp'
#================= extract and analyze data from ODB
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Calling function: analyzeRVEresults(wd,odbname,logfilepath,parameters)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
try:
if RVEparams['simulation-pipeline']['analyze-ODB']:
analyzeRVEresults(inputfilename.split('.')[0] + '.odb',RVEparams,logfilefullpath,logindent,logindent)
localElapsedTime = timeit.default_timer() - localStart
timedataList.append(localElapsedTime)
totalIterationTime += localElapsedTime
writeLineToLogFile(logfilefullpath,'a',logindent + 'Successfully returned from function: analyzeRVEresults(wd,odbname,logfilepath,parameters)',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exit(2)
timedataList.append(np.sum(timedataList[1:]))
appendCSVfile(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_TIME',[timedataList])
if RVEparams['simulation-pipeline']['archive-ODB']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Moving ODB to archive... ',True)
try:
copyfile(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.odb'),join(RVEparams['output']['archive']['directory'],inputfilename.split('.')[0]+'.odb'))
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.odb'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
elif RVEparams['simulation-pipeline']['remove-ODB']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .odb file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.odb'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-DAT']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .dat file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.dat'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-PRT']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .prt file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.prt'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-STA']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .sta file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.sta'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-SIM']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .sim file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.sim'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-MSG']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .msg file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.msg'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-INP']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .inp file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.inp'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if RVEparams['simulation-pipeline']['remove-COM']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Remove .com file from working directory... ',True)
try:
os.remove(join(RVEparams['input']['wd'],inputfilename.split('.')[0]+'.com'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
if debug:
break
if RVEparams['simulation-pipeline']['archive-CAE']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Moving CAE to archive... ',True)
try:
copyfile(join(RVEparams['input']['wd'],RVEparams['input']['caefilename']+'.cae'),join(RVEparams['output']['archive']['directory'],RVEparams['input']['caefilename']+'.cae'))
os.remove(join(RVEparams['input']['wd'],RVEparams['input']['caefilename']+'.cae'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
except Exception, error:
writeErrorToLogFile(logfilefullpath,'a',Exception,error,True)
sys.exc_clear()
#=======================================================================
# END - ANALYSIS
#=======================================================================
#=======================================================================
# BEGIN - REPORTING
#=======================================================================
writeLineToLogFile(logfilefullpath,'a',logindent + '... done. ',True)
if RVEparams['simulation-pipeline']['report-EXCEL']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Begin reporting in excel',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
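# Excel reporting is delegated to an external script (reportData.py)
# through a platform-specific wrapper: a .cmd file on Windows, a bash
# script on Linux. codeFolder below is the hardcoded location of that
# script on the development machine.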
codeFolder = 'D:/01_Luca/06_WD/thinPlyMechanics/python'
if 'Windows' in system():
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Create Windows command file',True)
cmdfile = join(RVEparams['output']['global']['directory'],'dataToXlsx.cmd')
with open(cmdfile,'w') as cmd:
cmd.write('\n')
cmd.write('CD ' + RVEparams['output']['global']['directory'] + '\n')
cmd.write('\n')
cmd.write('python ' + join(codeFolder,'reportData' + '.py') + ' -w ' + RVEparams['output']['global']['directory'] + ' -i ' + logfilename.split('.')[0].split('_')[-1] + '_csvfileslist' + '.csv' + ' -o ' + RVEparams['output']['global']['directory'] + ' -f ' + RVEparams['input']['caefilename'] + '.xlsx' + ' --excel ' + '\n')
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Executing Windows command file...',True)
try:
subprocess.call('cmd.exe /C ' + cmdfile)
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done.',True)
except Exception,error:
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'ERROR',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(Exception),True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(error),True)
sys.exc_clear()
elif 'Linux' in system():
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Create Linux bash file',True)
bashfile = join(RVEparams['output']['global']['directory'],'dataToXlsx.sh')
with open(bashfile,'w') as bsh:
bsh.write('#!/bin/bash\n')
bsh.write('\n')
bsh.write('cd ' + RVEparams['output']['global']['directory'] + '\n')
bsh.write('\n')
bsh.write('python ' + join(codeFolder,'reportData' + '.py') + ' -w ' + RVEparams['output']['global']['directory'] + ' -i ' + logfilename.split('.')[0].split('_')[-1] + '_csvfileslist' + '.csv' + ' -f ' + RVEparams['input']['caefilename'] + '.xlsx' + '\n')
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Executing Linux bash file...',True)
try:
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Change permissions to ' + bashfile ,True)
os.chmod(bashfile, 0o755)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Run bash file',True)
subprocess.call(bashfile)
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done.',True)
except Exception, error:
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'ERROR',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(Exception),True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(error),True)
sys.exc_clear()
localElapsedTime = timeit.default_timer() - localStart
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
if RVEparams['simulation-pipeline']['report-LATEX']:
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Begin reporting in latex',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer starts',True)
localStart = timeit.default_timer()
writeLineToLogFile(logfilefullpath,'a',logindent + 'Setting the locale to US english ... ',True)
locale.setlocale(locale.LC_TIME,'en_US')
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Check if latex output directories exist and create them if needed ... ',True)
reportFolder = RVEparams['output']['report']['global']['directory']
reportFilename = RVEparams['output']['report']['global']['filename'].split('.')[0]
if not os.path.exists(reportFolder):
os.mkdir(reportFolder)
if not os.path.exists(join(reportFolder,'pics')):
os.mkdir(join(reportFolder,'pics'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Copy report template images to latex folder ... ',True)
copyfile(join('D:/01_Luca/06_WD/thinPlyMechanics/tex/Templates/Template_reports','Docmase_logo.jpg'),join(reportFolder,'pics','Docmase_logo.jpg'))
copyfile(join('D:/01_Luca/06_WD/thinPlyMechanics/tex/Templates/Template_reports','erasmusmundus_logo.jpg'),join(reportFolder,'pics','erasmusmundus_logo.jpg'))
copyfile(join('D:/01_Luca/06_WD/thinPlyMechanics/tex/Templates/Template_slides','logo-eeigm.jpg'),join(reportFolder,'pics','logo-eeigm.jpg'))
copyfile(join('D:/01_Luca/06_WD/thinPlyMechanics/tex/Templates/Template_reports','lulea_logo1.jpg'),join(reportFolder,'pics','lulea_logo1.jpg'))
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Reading index of generated csv files ... ',True)
with open(join(RVEparams['output']['global']['directory'],logfilename.split('.')[0].split('_')[-1] + '_csvfileslist' + '.csv'),'r') as csv:
lines = csv.readlines()
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Generating local plots ... ',True)
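# Layout of the csv files list (see the header written at creation):
# column 0 = absolute path, 1 = name, 2 = to-plot flag, 3+ = plot
# settings. Row 0 is the header; rows 1:5 are assumed to hold the
# global files and the remaining rows the per-iteration (local) files,
# matching the append order above when no thermal step is present.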
for l,line in enumerate(lines[5:]):
csvPath = line.replace('\n','').split(',')[0]
outDir = csvPath.split('\\')[0] + '/' + csvPath.split('\\')[1]
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Opening file ' + csvPath,True)
with open(csvPath,'r') as csv:
csvlines = csv.readlines()
toPlot = ast.literal_eval(line.replace('\n','').split(',')[2].strip())
plotSettings = []
if toPlot:
stringToEval = ','.join(line.replace('\n','').split(',')[3:])
plotSettings = ast.literal_eval(stringToEval[1:])
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(len(plotSettings)) + ' PLOTS REQUESTED',True)
for p,plot in enumerate(plotSettings):
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Plot name: ' + plot[-1],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'x-axis name: ' + plot[-3],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'y-axis name: ' + plot[-2],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Number of curves: ' + str(len(plot[:-3])),True)
xyData = []
legendEntries = ''
dataoptions = []
for c,curve in enumerate(plot[:-3]):
writeLineToLogFile(logfilefullpath,'a',4*logindent + '(' + str(c+1) + ') Curve name: ' + curve[2],True)
writeLineToLogFile(logfilefullpath,'a',4*logindent + ' x-values: ' + csvlines[0].replace('\n','').split(',')[int(curve[0])],True)
xData = []
for csvline in csvlines[1:]:
if len(csvline)>2:
xData.append(float(csvline.replace('\n','').split(',')[int(curve[0])]))
writeLineToLogFile(logfilefullpath,'a',4*logindent + ' y-values: ' + csvlines[0].replace('\n','').split(',')[int(curve[1])],True)
yData = []
for csvline in csvlines[1:]:
if len(csvline)>2:
yData.append(float(csvline.replace('\n','').split(',')[int(curve[1])]))
xyData.append(np.transpose([np.array(xData),np.array(yData)]))
if c>0:
legendEntries += ', '
legendEntries += '{$' + curve[2] + '$}'
dataoptions.append('red!' + str(100.0*float(c)/float(len(plot[:-3]))) + '!blue')
axisoptions = 'width=30cm,\n ' \
'title={\\bf{' + plot[-1] + '}},\n ' \
'title style={font=\\fontsize{40}{8}\\selectfont},\n ' \
'xlabel style={at={(axis description cs:0.5,-0.02)},anchor=north,font=\\fontsize{44}{40}\\selectfont},\n ' \
'ylabel style={at={(axis description cs:-0.025,.5)},anchor=south,font=\\fontsize{44}{40}\\selectfont},\n ' \
'xlabel={$' + plot[-3] + '$},ylabel={$' + plot[-2] + '$},\n ' \
'tick align=outside,\n ' \
'tick label style={font=\\huge},\n ' \
'xmajorgrids,\n ' \
'x grid style={lightgray!92.026143790849673!black},\n ' \
'ymajorgrids,\n ' \
'y grid style={lightgray!92.026143790849673!black},\n ' \
'line width=0.5mm,\n ' \
'legend style={draw=white!80.0!black,font=\\fontsize{28}{24}\\selectfont,row sep=15pt},\n ' \
'legend entries={' + legendEntries + '},\n ' \
'legend image post style={xscale=2},\n ' \
'legend cell align={left}'
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Create plot in file ' + plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.pdf' + ' in directory ' + outDir,True)
writeLatexMultiplePlots(outDir,plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.tex',xyData,axisoptions,dataoptions,logfilefullpath,3*logindent,logindent)
else:
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'NO PLOT REQUESTED',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Generating global plots ... ',True)
for l,line in enumerate(lines[1:5]):
csvPath = line.replace('\n','').split(',')[0]
outDir = csvPath.split('\\')[0]
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Opening file ' + csvPath,True)
with open(csvPath,'r') as csv:
csvlines = csv.readlines()
plotName = line.replace('\n','').split(',')[1]
toPlot = ast.literal_eval(line.replace('\n','').split(',')[2].strip())
plotSettings = []
if toPlot:
stringToEval = ','.join(line.replace('\n','').split(',')[3:])
plotSettings = ast.literal_eval(stringToEval[1:])
writeLineToLogFile(logfilefullpath,'a',2*logindent + str(len(plotSettings)) + ' PLOTS REQUESTED',True)
for p,plot in enumerate(plotSettings):
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Plot name: ' + plot[-1],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'x-axis name: ' + plot[-3],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'y-axis name: ' + plot[-2],True)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Number of curves: ' + str(len(plot[:-3])),True)
xyData = []
legendEntries = ''
dataoptions = []
for c,curve in enumerate(plot[:-3]):
writeLineToLogFile(logfilefullpath,'a',4*logindent + '(' + str(c+1) + ') Curve name: ' + curve[2],True)
writeLineToLogFile(logfilefullpath,'a',4*logindent + ' x-values: ' + csvlines[0].replace('\n','').split(',')[int(curve[0])],True)
xData = []
for csvline in csvlines[1:]:
if len(csvline)>2:
xData.append(float(csvline.replace('\n','').split(',')[int(curve[0])]))
writeLineToLogFile(logfilefullpath,'a',4*logindent + ' y-values: ' + csvlines[0].replace('\n','').split(',')[int(curve[1])],True)
yData = []
for csvline in csvlines[1:]:
if len(csvline)>2:
yData.append(float(csvline.replace('\n','').split(',')[int(curve[1])]))
xyData.append(np.transpose([np.array(xData),np.array(yData)]))
if c>0:
legendEntries += ', '
legendEntries += '{$' + curve[2] + '$}'
dataoptions.append('red!' + str(100.0*float(c)/float(len(plot[:-3]))) + '!blue')
axisoptions = 'width=30cm,\n ' \
'title={\\bf{' + plot[-1] + '}},\n ' \
'title style={font=\\fontsize{40}{8}\\selectfont},\n ' \
'xlabel style={at={(axis description cs:0.5,-0.02)},anchor=north,font=\\fontsize{44}{40}\\selectfont},\n ' \
'ylabel style={at={(axis description cs:-0.025,.5)},anchor=south,font=\\fontsize{44}{40}\\selectfont},\n ' \
'xlabel={$' + plot[-3] + '$},ylabel={$' + plot[-2] + '$},\n ' \
'tick align=outside,\n ' \
'tick label style={font=\\huge},\n ' \
'xmajorgrids,\n ' \
'x grid style={lightgray!92.026143790849673!black},\n ' \
'ymajorgrids,\n ' \
'y grid style={lightgray!92.026143790849673!black},\n ' \
'line width=0.5mm,\n ' \
'legend style={draw=white!80.0!black,font=\\fontsize{28}{24}\\selectfont,row sep=15pt},\n ' \
'legend entries={' + legendEntries + '},\n ' \
'legend image post style={xscale=2},\n ' \
'legend cell align={left}'
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Create plot in file ' + plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.pdf' + ' in directory ' + outDir,True)
writeLatexMultiplePlots(outDir,plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.tex',xyData,axisoptions,dataoptions,logfilefullpath,3*logindent,logindent)
else:
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'NO PLOT REQUESTED',True)
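# Illustrative sketch (not part of the original script): the indexing above implies
# that each plotSettings entry is a list of [x-column, y-column, legend-label] curve
# triplets followed by three strings: x-axis name, y-axis name and plot title.
# The values below are hypothetical and only document the expected layout.
_examplePlotSettings = [
    [[0, 1, '\\sigma_{xx}'], [0, 2, '\\sigma_{yy}'], 'time', 'stress', 'Stress components vs time'],
]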
writeLineToLogFile(logfilefullpath,'a',logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Creating main report ...',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Create latex file ...',True)
createLatexFile(reportFolder,reportFilename,'scrartcl',options='a4paper, twoside,12pt, abstract')
packages = ['inputenc',
            'fontenc',
            'amsfonts',
            'amsmath',
            'amssymb',
            'amstext',
            'animate',
            'babel',
            'biblatex',
            'bm',
            'booktabs',
            'caption',
            'colortbl',
            'csquotes',
            'enumerate',
            'eurosym',
            'geometry',
            'graphicx',
            'float',
            'helvet',
            'longtable',
            'makeidx',
            'multirow',
            'nameref',
            'parskip',
            'pdfpages',
            'rotating',
            'scrpage2',
            'setspace',
            'standalone',
            'subcaption',
            'tabularx',
            'tikz',
            'xcolor',
            'glossaries',
            'hyperref']
options = ['utf8',
           'fontenc',
           '',
           '',
           '',
           '',
           '',
           'english',
           'backend=bibtex, sorting=none,style=numeric',
           '',
           '',
           '',
           '',
           '',
           '',
           'right',
           'inner=3cm,outer=2cm,top=2.7cm,bottom=3.2cm',
           '',
           '',
           'scaled=.90',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           '',
           'acronym,nonumberlist,nopostdot,toc',
           '']
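# Illustrative sketch (not part of the original script): packages and options are
# parallel lists (36 entries each), so writeLatexPackages -- defined elsewhere in
# this module -- presumably pairs them index by index, along the lines of:
# for pkg, opt in zip(packages, options):
#     line = '\\usepackage[' + opt + ']{' + pkg + '}' if opt else '\\usepackage{' + pkg + '}'
# yielding e.g. \usepackage[utf8]{inputenc} and \usepackage{amsmath}.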
writeLatexPackages(reportFolder,reportFilename,packages,options)
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Write packages ...',True)
writeLatexCustomLine(reportFolder,reportFilename,'\\definecolor{Gray}{gray}{0.85}')
writeLatexCustomLine(reportFolder,reportFilename,'\\definecolor{LightCyan}{rgb}{0.88,1,1}')
writeLatexCustomLine(reportFolder,reportFilename,'\\sloppy % avoids lines that are too long on the right side')
writeLatexCustomLine(reportFolder,reportFilename,'% avoid "orphans"')
writeLatexCustomLine(reportFolder,reportFilename,'\\clubpenalty = 10000')
writeLatexCustomLine(reportFolder,reportFilename,'% avoid "widows"')
writeLatexCustomLine(reportFolder,reportFilename,'\\widowpenalty = 10000')
writeLatexCustomLine(reportFolder,reportFilename,'% this makes the table of content etc. look better')
writeLatexCustomLine(reportFolder,reportFilename,'\\renewcommand{\\dotfill}{\\leaders\\hbox to 5pt{\\hss.\\hss}\\hfill}')
writeLatexCustomLine(reportFolder,reportFilename,'% avoid indentation of line after a paragraph')
writeLatexSetLength(reportFolder,reportFilename,'parindent','0pt')
writeLatexGenericCommand(reportFolder,reportFilename,'pagestyle','','scrheadings')
writeLatexGenericCommand(reportFolder,reportFilename,'automark','section','section')
writeLatexGenericCommand(reportFolder,reportFilename,'ofoot','','\\pagemark')
writeLatexGenericCommand(reportFolder,reportFilename,'ifoot','','Research Plan')
writeLatexSetLength(reportFolder,reportFilename,'unitlength','1cm')
writeLatexSetLength(reportFolder,reportFilename,'oddsidemargin','0.3cm')
writeLatexSetLength(reportFolder,reportFilename,'evensidemargin','0.3cm')
writeLatexSetLength(reportFolder,reportFilename,'textwidth','15.5cm')
writeLatexSetLength(reportFolder,reportFilename,'topmargin','0cm')
writeLatexSetLength(reportFolder,reportFilename,'textheight','22cm')
writeLatexCustomLine(reportFolder,reportFilename,'\\columnsep 0.5cm')
writeLatexCustomLine(reportFolder,reportFilename,'\\newcommand{\\brac}[1]{\\left(#1\\right)}')
writeLatexGenericCommand(reportFolder,reportFilename,'graphicspath','','{./pics/}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addto\\captionsenglish{\\renewcommand{\\listfigurename}{Figures}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addto\\captionsenglish{\\renewcommand{\\listtablename}{Tables}}')
writeLatexGenericCommand(reportFolder,reportFilename,'makeglossaries','','')
writeLatexGenericCommand(reportFolder,reportFilename,'makeindex','','',)
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done.',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Document starts ...',True)
writeLatexDocumentStarts(reportFolder,reportFilename)
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Title page',True)
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% Front Matter %')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% Title Page')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ihead{\\href{http://www.ltu.se/}{\\includegraphics[height=1.5cm]{lulea_logo1.jpg}}\\hspace{6.1953125cm}\\href{http://www.eeigm.univ-lorraine.fr/}{\\includegraphics[height=1.5cm]{logo-eeigm.jpg}}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{\\noindent\\makebox[\\linewidth]{\\rule{\\textwidth}{0.4pt}}\\\\\\href{http://eacea.ec.europa.eu/erasmus_mundus/index_en.php}{\\includegraphics[height=1.75cm]{erasmusmundus_logo.jpg}}\\hspace{9.55cm}\\href{http://www.uni-saarland.de/einrichtung/eusmat/international-studies/phd/docmase.html}{\\includegraphics[height=1.75cm]{Docmase_logo.jpg}}}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{center}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace*{0.1cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{Large}')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{\\textsc{EUSMAT}}\\\\[0.75ex]')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{Large}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{large}')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{European School of Materials}\\\\[0.75ex]')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace*{1cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{DocMASE}\\\\[0.75ex]')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{\\textsc{Doctorate in Materials Science and Engineering}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{large}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace{1.75cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{Large}')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{\\textsc{Simulation Report}}\\\\[0.75ex]')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{Large}')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace*{0.5cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{LARGE}')
writeLatexCustomLine(reportFolder,reportFilename,'\\textbf{\\textsc{Report of ABAQUS simulations}}\\\\[0.75ex]')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{LARGE}')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace*{2.5cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{flushright}')
writeLatexCustomLine(reportFolder,reportFilename,'\\begin{tabular}{l l }')
writeLatexCustomLine(reportFolder,reportFilename,'{\\large \\textbf{Doctoral Candidate:}} & {\\large \\href{http://lucadistasioengineering.com/}{Luca DI STASIO}}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'{\\large \\textbf{Thesis Supervisors:}}& {\\large Prof. Zoubir AYADI}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&{\\large Universit\\\'e de Lorraine}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&{\\large Nancy, France}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'& {\\large Prof. Janis VARNA}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&{\\large Lule\\aa\\ University of Technology}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'&{\\large Lule\\aa, Sweden}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{tabular}')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{flushright}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\vspace*{2cm}')
writeLatexCustomLine(reportFolder,reportFilename,'')
timeNow = datetime.now()
writeLatexCustomLine(reportFolder,reportFilename,'{\\large \\textbf{Created on ' + timeNow.strftime('%B') + ' ' + timeNow.strftime('%d') + ', ' + timeNow.strftime('%Y') +'}}\\\\[10pt]')
writeLatexCustomLine(reportFolder,reportFilename,'{\\large \\textbf{Last Updated on \\today}}\\\\')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\end{center}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepage')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Table of Contents',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% Table of Contents')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagenumbering{roman}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\setcounter{page}{1}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\contentsname}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\tableofcontents')
writeLatexCustomLine(reportFolder,reportFilename,'\\label{sec:content}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'List of Figures',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% List of Figures')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\listfigurename}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%\\section*{List of Figures}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addcontentsline{toc}{section}{\\listfigurename}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\listoffigures')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'List of Tables',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% List of Tables')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\listtablename}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%\\section*{List of Tables}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addcontentsline{toc}{section}{\\listtablename}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\listoftables')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'List of Acronyms',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% List of Acronyms')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\nameref{sec:acr}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\section*{Acronyms}\\label{sec:acr}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addcontentsline{toc}{section}{\\nameref{sec:acr}}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\printglossary[type=\\acronymtype]')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%\\printglossary')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'List of Symbols',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% List of Symbols')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\nameref{sec:sym}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\section*{Symbols}\\label{sec:sym}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addcontentsline{toc}{section}{\\nameref{sec:sym}}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%\\input{symbols}')
writeLatexCustomLine(reportFolder,reportFilename,'%')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Abstract',True)
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% Abstract')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\nameref{sec:abs}}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\section*{Abstract}\\label{sec:abs}')
writeLatexCustomLine(reportFolder,reportFilename,'\\addcontentsline{toc}{section}{\\nameref{sec:abs}}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'% Main Matter %')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\pagenumbering{arabic}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLatexCustomLine(reportFolder,reportFilename,'\\setcounter{page}{1}')
writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Global results',True)
for l,line in enumerate(lines[1:5]):
    csvPath = line.replace('\n','').split(',')[0]
    outDir = csvPath.split('\\')[0]
    writeLineToLogFile(logfilefullpath,'a',4*logindent + 'Opening file ' + csvPath,True)
    with open(csvPath,'r') as csv:
        csvlines = csv.readlines()
    plotName = line.replace('\n','').split(',')[1]
    # bool() of any non-empty string is True; the flag only distinguishes empty vs non-empty.
    toPlot = bool(line.replace('\n','').split(',')[2])
    plotSettings = []
    if toPlot:
        plotSettings = ast.literal_eval(','.join(line.replace('\n','').split(',')[3:]))
        # Log after parsing, so the count reflects the parsed settings rather than the empty list.
        writeLineToLogFile(logfilefullpath,'a',4*logindent + str(len(plotSettings)) + ' PLOTS TO BE INSERTED',True)
        for p,plot in enumerate(plotSettings):
            writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
            writeLatexCustomLine(reportFolder,reportFilename,'% GLOBAL - ' + plot[-1])
            writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
            writeLatexCustomLine(reportFolder,reportFilename,'')
            writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
            writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
            writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
            writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\nameref{sec:sec1}}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
            writeLatexCustomLine(reportFolder,reportFilename,'')
            # note: the same label sec:sec1 is reused for every generated section
            writeLatexCustomLine(reportFolder,reportFilename,'\\section{Parametric study: ' + plot[-1] + '}\\label{sec:sec1}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\begin{figure}[!h]')
            # insert a path separator: outDir does not end with one
            writeLatexCustomLine(reportFolder,reportFilename,'\\includegraphics[width=\\textwidth]{' + outDir + '/' + plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.pdf}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\end{figure}')
            writeLatexCustomLine(reportFolder,reportFilename,'')
            writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
            writeLatexCustomLine(reportFolder,reportFilename,'')
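# Illustrative sketch (not part of the original script): with the separator fix
# above, a plot titled 'Stress vs time' under outDir 'results' emits roughly:
#   \section{Parametric study: Stress vs time}\label{sec:sec1}
#   \begin{figure}[!h]
#   \includegraphics[width=\textwidth]{results/Stress-vs-time.pdf}
#   \end{figure}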
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Local results',True)
for l,line in enumerate(lines[5:]):
    csvPath = line.replace('\n','').split(',')[0]
    outDir = csvPath.split('\\')[0] + '/' + csvPath.split('\\')[1]
    writeLineToLogFile(logfilefullpath,'a',4*logindent + 'Opening file ' + csvPath,True)
    with open(csvPath,'r') as csv:
        csvlines = csv.readlines()
    plotName = line.replace('\n','').split(',')[1]
    # bool() of any non-empty string is True; the flag only distinguishes empty vs non-empty.
    toPlot = bool(line.replace('\n','').split(',')[2])
    plotSettings = []
    if toPlot:
        plotSettings = ast.literal_eval(','.join(line.replace('\n','').split(',')[3:]))
        # Log after parsing, so the count reflects the parsed settings rather than the empty list.
        writeLineToLogFile(logfilefullpath,'a',4*logindent + str(len(plotSettings)) + ' PLOTS TO BE INSERTED',True)
        writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
        # use the simulation index l; the plot index p is not defined at this point
        writeLatexCustomLine(reportFolder,reportFilename,'% SIMULATION N. ' + str(l+1))
        writeLatexCustomLine(reportFolder,reportFilename,'%------------------------------------------------%')
        writeLatexCustomLine(reportFolder,reportFilename,'')
        writeLatexCustomLine(reportFolder,reportFilename,'\\clearscrheadings')
        writeLatexCustomLine(reportFolder,reportFilename,'\\pagestyle{scrheadings}')
        writeLatexCustomLine(reportFolder,reportFilename,'\\manualmark')
        writeLatexCustomLine(reportFolder,reportFilename,'\\ofoot{\\\\ \\hyperref[sec:content]{\\pagemark}}')
        writeLatexCustomLine(reportFolder,reportFilename,'\\ifoot{} % ofoo')
        writeLatexCustomLine(reportFolder,reportFilename,'\\ohead{\\nameref{sec:sec1}}')
        writeLatexCustomLine(reportFolder,reportFilename,'\\setheadtopline{2pt}')
        writeLatexCustomLine(reportFolder,reportFilename,'\\setheadsepline{0.5pt}')
        writeLatexCustomLine(reportFolder,reportFilename,'\\setfootsepline{0.5pt}')
        writeLatexCustomLine(reportFolder,reportFilename,'')
        writeLatexCustomLine(reportFolder,reportFilename,'\\section{Simulation n. ' + str(l+1) + '}\\label{sec:sec1}')
        for p,plot in enumerate(plotSettings):
            writeLatexCustomLine(reportFolder,reportFilename,'\\begin{figure}[!h]')
            # insert a path separator: outDir does not end with one
            writeLatexCustomLine(reportFolder,reportFilename,'\\includegraphics[width=\\textwidth]{' + outDir + '/' + plot[-1].replace(' ','-').replace('/','-').replace(',','') + '.pdf}')
            writeLatexCustomLine(reportFolder,reportFilename,'\\end{figure}')
            writeLatexCustomLine(reportFolder,reportFilename,'')
        writeLatexCustomLine(reportFolder,reportFilename,'\\cleardoublepageusingstyle{scrheadings}')
        writeLatexCustomLine(reportFolder,reportFilename,'')
writeLineToLogFile(logfilefullpath,'a',3*logindent + 'Documents ends',True)
writeLatexDocumentEnds(reportFolder,reportFilename)
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done. ',True)
writeLineToLogFile(logfilefullpath,'a',2*logindent + 'Compile pdf ... ',True)
cmdfile = join(reportFolder,'runlatex.cmd')
with open(cmdfile,'w') as cmd:
    cmd.write('\n')
    cmd.write('CD ' + reportFolder + '\n')
    cmd.write('\n')
    cmd.write('pdflatex ' + join(reportFolder,reportFilename.split(',')[0] + '.tex') + ' -job-name=' + reportFilename.split(',')[0] + '\n')
try:
    subprocess.call('cmd.exe /C ' + cmdfile)
except Exception:
    sys.exc_clear()  # Python 2 idiom; under Python 3 there is no sys.exc_clear(), a bare pass suffices
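# Illustrative sketch (not part of the original script): the generated runlatex.cmd
# is a Windows batch file along the lines of (paths hypothetical):
#
#   CD C:\path\to\report
#   pdflatex C:\path\to\report\report.tex -job-name=report
#
# so the subprocess.call above only works where cmd.exe is available.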
writeLineToLogFile(logfilefullpath,'a',2*logindent + '... done. ',True)
# localStart is assumed to have been set when the local timer was started earlier,
# mirroring the globalStart/globalElapsedTime pattern below
localElapsedTime = timeit.default_timer() - localStart
writeLineToLogFile(logfilefullpath,'a',logindent + 'Local timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(localElapsedTime) + ' [s]',True)
writeLineToLogFile(logfilefullpath,'a',logindent + '... done. ',True)
#=======================================================================
# END - REPORTING
#=======================================================================
globalElapsedTime = timeit.default_timer() - globalStart
writeLineToLogFile(logfilefullpath,'a',logindent + 'Global timer stopped',True)
writeLineToLogFile(logfilefullpath,'a',logindent + 'Elapsed time: ' + str(globalElapsedTime) + ' [s]',True)
skipLineToLogFile(logfilefullpath,'a',True)
writeLineToLogFile(logfilefullpath,'a','Exiting function: main(argv)',True)
writeLineToLogFile(logfilefullpath,'a','Goodbye!',True)
if __name__ == "__main__":
main(sys.argv[1:])
| 72.009309 | 1,385 | 0.626911 | 61,369 | 611,071 | 6.233636 | 0.033502 | 0.038175 | 0.091203 | 0.128814 | 0.914689 | 0.893654 | 0.872773 | 0.848358 | 0.828332 | 0.804944 | 0 | 0.0196 | 0.198542 | 611,071 | 8,485 | 1,386 | 72.017796 | 0.761518 | 0.030168 | 0 | 0.787043 | 0 | 0.009887 | 0.205332 | 0.061259 | 0.001171 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.004553 | null | null | 0.010928 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
ed8358edf6d42f8cb2d9be065ce9db8013b7542a | 10,484 | py | Python | tests/benchmark/milvus_benchmark/runners/insert.py | AropJoe/milvus | 132b3c2248c50e96a4edde56aefb43659a270837 | ["Apache-2.0"] | 2 | 2021-09-05T15:00:49.000Z | 2022-01-05T06:42:23.000Z | tests/benchmark/milvus_benchmark/runners/insert.py | AropJoe/milvus | 132b3c2248c50e96a4edde56aefb43659a270837 | ["Apache-2.0"] | 38 | 2021-11-22T11:15:27.000Z | 2022-03-30T08:14:12.000Z | tests/benchmark/milvus_benchmark/runners/insert.py | Bennu-Li/milvus | 35612881e33ce19a7407628769f6b51a7518bfe9 | ["Apache-2.0"] | 3 | 2021-11-17T09:21:42.000Z | 2021-11-22T11:54:09.000Z |
import time
import copy
import logging
from milvus_benchmark import parser
from milvus_benchmark.runners import utils
from milvus_benchmark.runners.base import BaseRunner
logger = logging.getLogger("milvus_benchmark.runners.insert")
class InsertRunner(BaseRunner):
    """run insert"""
    name = "insert_performance"

    def __init__(self, env, metric):
        super(InsertRunner, self).__init__(env, metric)

    def extract_cases(self, collection):
        collection_name = collection["collection_name"] if "collection_name" in collection else None
        (data_type, collection_size, dimension, metric_type) = parser.collection_parser(collection_name)
        ni_per = collection["ni_per"]
        build_index = collection["build_index"] if "build_index" in collection else False
        index_info = None
        vector_type = utils.get_vector_type(data_type)
        other_fields = collection["other_fields"] if "other_fields" in collection else None
        collection_info = {
            "dimension": dimension,
            "metric_type": metric_type,
            "dataset_name": collection_name,
            "collection_size": collection_size,
            "other_fields": other_fields,
            "ni_per": ni_per
        }
        index_field_name = None
        index_type = None
        index_param = None
        if build_index is True:
            index_type = collection["index_type"]
            index_param = collection["index_param"]
            index_info = {
                "index_type": index_type,
                "index_param": index_param
            }
            index_field_name = utils.get_default_field_name(vector_type)
        flush = True
        if "flush" in collection and collection["flush"] == "no":
            flush = False
        self.init_metric(self.name, collection_info, index_info, None)
        case_metric = copy.deepcopy(self.metric)
        # set metric type as case
        case_metric.set_case_metric_type()
        case_metrics = list()
        case_params = list()
        case_metrics.append(case_metric)
        case_param = {
            "collection_name": collection_name,
            "data_type": data_type,
            "dimension": dimension,
            "collection_size": collection_size,
            "ni_per": ni_per,
            "metric_type": metric_type,
            "vector_type": vector_type,
            "other_fields": other_fields,
            "build_index": build_index,
            "flush_after_insert": flush,
            "index_field_name": index_field_name,
            "index_type": index_type,
            "index_param": index_param,
        }
        case_params.append(case_param)
        return case_params, case_metrics

    def prepare(self, **case_param):
        collection_name = case_param["collection_name"]
        dimension = case_param["dimension"]
        vector_type = case_param["vector_type"]
        other_fields = case_param["other_fields"]
        index_field_name = case_param["index_field_name"]
        build_index = case_param["build_index"]
        self.milvus.set_collection(collection_name)
        if self.milvus.exists_collection():
            logger.debug("Start drop collection")
            self.milvus.drop()
            time.sleep(utils.DELETE_INTERVAL_TIME)
        self.milvus.create_collection(dimension, data_type=vector_type, other_fields=other_fields)
        # TODO: update fields in collection_info
        # fields = self.get_fields(self.milvus, collection_name)
        # collection_info = {
        #     "dimension": dimension,
        #     "metric_type": metric_type,
        #     "dataset_name": collection_name,
        #     "fields": fields
        # }
        if build_index is True:
            if case_param["index_type"]:
                self.milvus.create_index(index_field_name, case_param["index_type"], case_param["metric_type"], index_param=case_param["index_param"])
                logger.debug(self.milvus.describe_index(index_field_name))
            else:
                # build_index = False
                logger.warning("Please specify the index_type")
        # TODO: error handler

    def run_case(self, case_metric, **case_param):
        collection_name = case_param["collection_name"]
        dimension = case_param["dimension"]
        index_field_name = case_param["index_field_name"]
        build_index = case_param["build_index"]
        tmp_result = self.insert(self.milvus, collection_name, case_param["data_type"], dimension, case_param["collection_size"], case_param["ni_per"])
        flush_time = 0.0
        build_time = 0.0
        if case_param["flush_after_insert"] is True:
            start_time = time.time()
            self.milvus.flush()
            flush_time = round(time.time()-start_time, 2)
            logger.debug(self.milvus.count())
        if build_index is True:
            logger.debug("Start build index for last file")
            start_time = time.time()
            self.milvus.create_index(index_field_name, case_param["index_type"], case_param["metric_type"], index_param=case_param["index_param"])
            build_time = round(time.time()-start_time, 2)
        tmp_result.update({"flush_time": flush_time, "build_time": build_time})
        return tmp_result
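# Illustrative sketch (not part of the original module): a plausible `collection`
# definition consumed by InsertRunner.extract_cases. The collection_name layout
# (data type, size, dimension, metric) is an assumption based on how
# parser.collection_parser is used above; real suites define this in YAML.
def _example_insert_collection():
    return {
        "collection_name": "sift_1m_128_l2",
        "ni_per": 50000,
        "build_index": True,
        "index_type": "ivf_flat",
        "index_param": {"nlist": 1024},
    }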
class BPInsertRunner(BaseRunner):
    """run insert"""
    name = "bp_insert_performance"

    def __init__(self, env, metric):
        super(BPInsertRunner, self).__init__(env, metric)

    def extract_cases(self, collection):
        collection_name = collection["collection_name"] if "collection_name" in collection else None
        (data_type, collection_size, dimension, metric_type) = parser.collection_parser(collection_name)
        ni_pers = collection["ni_pers"]
        build_index = collection["build_index"] if "build_index" in collection else False
        index_info = None
        vector_type = utils.get_vector_type(data_type)
        other_fields = collection["other_fields"] if "other_fields" in collection else None
        index_field_name = None
        index_type = None
        index_param = None
        if build_index is True:
            index_type = collection["index_type"]
            index_param = collection["index_param"]
            index_info = {
                "index_type": index_type,
                "index_param": index_param
            }
            index_field_name = utils.get_default_field_name(vector_type)
        flush = True
        if "flush" in collection and collection["flush"] == "no":
            flush = False
        case_metrics = list()
        case_params = list()
        for ni_per in ni_pers:
            collection_info = {
                "dimension": dimension,
                "metric_type": metric_type,
                "dataset_name": collection_name,
                "collection_size": collection_size,
                "other_fields": other_fields,
                "ni_per": ni_per
            }
            self.init_metric(self.name, collection_info, index_info, None)
            case_metric = copy.deepcopy(self.metric)
            case_metric.set_case_metric_type()
            case_metrics.append(case_metric)
            case_param = {
                "collection_name": collection_name,
                "data_type": data_type,
                "dimension": dimension,
                "collection_size": collection_size,
                "ni_per": ni_per,
                "metric_type": metric_type,
                "vector_type": vector_type,
                "other_fields": other_fields,
                "build_index": build_index,
                "flush_after_insert": flush,
                "index_field_name": index_field_name,
                "index_type": index_type,
                "index_param": index_param,
            }
            case_params.append(case_param)
        return case_params, case_metrics

    def prepare(self, **case_param):
        collection_name = case_param["collection_name"]
        dimension = case_param["dimension"]
        vector_type = case_param["vector_type"]
        other_fields = case_param["other_fields"]
        index_field_name = case_param["index_field_name"]
        build_index = case_param["build_index"]
        self.milvus.set_collection(collection_name)
        if self.milvus.exists_collection():
            logger.debug("Start drop collection")
            self.milvus.drop()
            time.sleep(utils.DELETE_INTERVAL_TIME)
        self.milvus.create_collection(dimension, data_type=vector_type,
                                      other_fields=other_fields)
        # TODO: update fields in collection_info
        # fields = self.get_fields(self.milvus, collection_name)
        # collection_info = {
        #     "dimension": dimension,
        #     "metric_type": metric_type,
        #     "dataset_name": collection_name,
        #     "fields": fields
        # }
        if build_index is True:
            if case_param["index_type"]:
                self.milvus.create_index(index_field_name, case_param["index_type"], case_param["metric_type"], index_param=case_param["index_param"])
                logger.debug(self.milvus.describe_index(index_field_name))
            else:
                build_index = False
                logger.warning("Please specify the index_type")
        # TODO: error handler

    def run_case(self, case_metric, **case_param):
        collection_name = case_param["collection_name"]
        dimension = case_param["dimension"]
        index_field_name = case_param["index_field_name"]
        build_index = case_param["build_index"]
        # TODO:
        tmp_result = self.insert(self.milvus, collection_name, case_param["data_type"], dimension, case_param["collection_size"], case_param["ni_per"])
        flush_time = 0.0
        build_time = 0.0
        if case_param["flush_after_insert"] is True:
            start_time = time.time()
            self.milvus.flush()
            flush_time = round(time.time()-start_time, 2)
            logger.debug(self.milvus.count())
        if build_index is True:
            logger.debug("Start build index for last file")
            start_time = time.time()
            self.milvus.create_index(index_field_name, case_param["index_type"], case_param["metric_type"], index_param=case_param["index_param"])
            build_time = round(time.time()-start_time, 2)
        tmp_result.update({"flush_time": flush_time, "build_time": build_time})
        return tmp_result
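# Illustrative sketch (not part of the original module): unlike InsertRunner,
# BPInsertRunner fans one collection out into one case per batch size in
# `ni_pers`; this hypothetical config would yield three case_params/metrics.
def _example_bp_collection():
    return {
        "collection_name": "sift_1m_128_l2",
        "ni_pers": [10000, 50000, 100000],
        "build_index": False,
    }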
| 43.144033 | 151 | 0.623331 | 1,196 | 10,484 | 5.105351 | 0.080268 | 0.073698 | 0.050442 | 0.037668 | 0.937111 | 0.937111 | 0.929414 | 0.929414 | 0.905011 | 0.905011 | 0 | 0.001593 | 0.281667 | 10,484 | 242 | 152 | 43.322314 | 0.809189 | 0.055322 | 0 | 0.871287 | 0 | 0 | 0.1496 | 0.005267 | 0 | 0 | 0 | 0.004132 | 0 | 1 | 0.039604 | false | 0 | 0.029703 | 0 | 0.108911 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ed84ca197a09a78582e6db8334a9b26eb39d63e4 | 104 | py | Python | pyisic/_standards/tsic2552/__init__.py | sayari-analytics/pyisic | 42ed46f5bc446a0bbc0edf30b64bc4ab939dd033 | ["MIT"] | 3 | 2021-11-18T15:32:38.000Z | 2022-02-28T19:16:14.000Z | pyisic/_standards/tsic2552/__init__.py | sayari-analytics/pyisic | 42ed46f5bc446a0bbc0edf30b64bc4ab939dd033 | ["MIT"] | 18 | 2021-06-28T19:17:49.000Z | 2022-03-23T20:20:18.000Z | pyisic/_standards/tsic2552/__init__.py | sayari-analytics/pyisic | 42ed46f5bc446a0bbc0edf30b64bc4ab939dd033 | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-
from .tsic2552 import TSIC2552
from .tsic2552_to_isic3 import TSIC2552_to_ISIC3
| 26 | 48 | 0.769231 | 15 | 104 | 5.066667 | 0.533333 | 0.315789 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208791 | 0.125 | 104 | 3 | 49 | 34.666667 | 0.626374 | 0.201923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
ed88acb70a0f51078e448c18abb49c505213c8a5 | 99 | py | Python | autocython/__main__.py | chrisjbillington/autocython | 9cd0590291d9418725a40be8567882001e291b85 | [
"BSD-2-Clause"
] | null | null | null | autocython/__main__.py | chrisjbillington/autocython | 9cd0590291d9418725a40be8567882001e291b85 | [
"BSD-2-Clause"
] | null | null | null | autocython/__main__.py | chrisjbillington/autocython | 9cd0590291d9418725a40be8567882001e291b85 | [
"BSD-2-Clause"
] | null | null | null | import os
from autocython import ensure_extensions_compiled
ensure_extensions_compiled(os.getcwd()) | 33 | 49 | 0.888889 | 13 | 99 | 6.461538 | 0.615385 | 0.380952 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 99 | 3 | 50 | 33 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9c0267110f5ce8c14ddf75eeabf4299a177429a9 | 148 | py | Python | backend/EmployeeData/public/resources.py | jacumol/MGLobal-Python-Hands-On-Test-Flask | fb8b6d8fbc4e3a35b3046e22f856ca6bf064b58f | ["MIT"] | null | null | null | backend/EmployeeData/public/resources.py | jacumol/MGLobal-Python-Hands-On-Test-Flask | fb8b6d8fbc4e3a35b3046e22f856ca6bf064b58f | ["MIT"] | null | null | null | backend/EmployeeData/public/resources.py | jacumol/MGLobal-Python-Hands-On-Test-Flask | fb8b6d8fbc4e3a35b3046e22f856ca6bf064b58f | ["MIT"] | null | null | null |
from flask import render_template
from public import public_bp
@public_bp.route("/")
def index():
    return render_template("public/index.html")
| 18.5 | 47 | 0.763514 | 21 | 148 | 5.190476 | 0.571429 | 0.256881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128378 | 148 | 7 | 48 | 21.142857 | 0.844961 | 0 | 0 | 0 | 0 | 0 | 0.121622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
9c04d4c732a4b93400c49c1937448acf3467b8e0 | 483,956 | py | Python | sdk/search/azure-search-documents/azure/search/documents/indexes/_generated/models/_models_py3.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | ["MIT"] | 1 | 2022-03-09T08:59:13.000Z | 2022-03-09T08:59:13.000Z | sdk/search/azure-search-documents/azure/search/documents/indexes/_generated/models/_models_py3.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | ["MIT"] | null | null | null | sdk/search/azure-search-documents/azure/search/documents/indexes/_generated/models/_models_py3.py | vincenttran-msft/azure-sdk-for-python | 348b56f9f03eeb3f7b502eed51daf494ffff874d | ["MIT"] | null | null | null |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
from typing import Any, Dict, List, Optional, Union
from azure.core.exceptions import HttpResponseError
import msrest.serialization
from ._search_client_enums import *
class AnalyzedTokenInfo(msrest.serialization.Model):
    """Information about a token returned by an analyzer.

    Variables are only populated by the server, and will be ignored when sending a request.

    All required parameters must be populated in order to send to Azure.

    :ivar token: Required. The token returned by the analyzer.
    :vartype token: str
    :ivar start_offset: Required. The index of the first character of the token in the input text.
    :vartype start_offset: int
    :ivar end_offset: Required. The index of the last character of the token in the input text.
    :vartype end_offset: int
    :ivar position: Required. The position of the token in the input text relative to other tokens.
     The first token in the input text has position 0, the next has position 1, and so on. Depending
     on the analyzer used, some tokens might have the same position, for example if they are
     synonyms of each other.
    :vartype position: int
    """

    _validation = {
        'token': {'required': True, 'readonly': True},
        'start_offset': {'required': True, 'readonly': True},
        'end_offset': {'required': True, 'readonly': True},
        'position': {'required': True, 'readonly': True},
    }

    _attribute_map = {
        'token': {'key': 'token', 'type': 'str'},
        'start_offset': {'key': 'startOffset', 'type': 'int'},
        'end_offset': {'key': 'endOffset', 'type': 'int'},
        'position': {'key': 'position', 'type': 'int'},
    }

    def __init__(
        self,
        **kwargs
    ):
        """
        """
        super(AnalyzedTokenInfo, self).__init__(**kwargs)
        self.token = None
        self.start_offset = None
        self.end_offset = None
        self.position = None
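# Illustrative sketch (not part of the generated file): msrest maps the REST
# payload's camelCase keys to these attributes via _attribute_map, so a raw
# service response can be rehydrated like this. The sample values are made up.
def _example_deserialize_token():
    raw = {'token': 'quick', 'startOffset': 4, 'endOffset': 9, 'position': 1}
    info = AnalyzedTokenInfo.deserialize(raw)
    return info.start_offset  # 4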
class AnalyzeRequest(msrest.serialization.Model):
    """Specifies some text and analysis components used to break that text into tokens.

    All required parameters must be populated in order to send to Azure.

    :ivar text: Required. The text to break into tokens.
    :vartype text: str
    :ivar analyzer: The name of the analyzer to use to break the given text. Possible values
     include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene", "bg.microsoft",
     "bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft", "zh-Hans.lucene",
     "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft", "cs.lucene",
     "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft", "en.lucene",
     "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene", "gl.lucene",
     "de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft", "he.microsoft",
     "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft", "id.microsoft",
     "id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft", "ja.lucene",
     "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene", "lt.microsoft",
     "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene", "fa.lucene",
     "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
     "pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
     "sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
     "es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
     "th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
     "standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
     "whitespace".
    :vartype analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
    :ivar tokenizer: The name of the tokenizer to use to break the given text. Possible values
     include: "classic", "edgeNGram", "keyword_v2", "letter", "lowercase",
     "microsoft_language_tokenizer", "microsoft_language_stemming_tokenizer", "nGram",
     "path_hierarchy_v2", "pattern", "standard_v2", "uax_url_email", "whitespace".
    :vartype tokenizer: str or ~azure.search.documents.indexes.models.LexicalTokenizerName
    :ivar normalizer: The name of the normalizer to use to normalize the given text. Possible
     values include: "asciifolding", "elision", "lowercase", "standard", "uppercase".
    :vartype normalizer: str or ~azure.search.documents.indexes.models.LexicalNormalizerName
    :ivar token_filters: An optional list of token filters to use when breaking the given text.
    :vartype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
    :ivar char_filters: An optional list of character filters to use when breaking the given text.
    :vartype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
    """

    _validation = {
        'text': {'required': True},
    }

    _attribute_map = {
        'text': {'key': 'text', 'type': 'str'},
        'analyzer': {'key': 'analyzer', 'type': 'str'},
        'tokenizer': {'key': 'tokenizer', 'type': 'str'},
        'normalizer': {'key': 'normalizer', 'type': 'str'},
        'token_filters': {'key': 'tokenFilters', 'type': '[str]'},
        'char_filters': {'key': 'charFilters', 'type': '[str]'},
    }

    def __init__(
        self,
        *,
        text: str,
        analyzer: Optional[Union[str, "LexicalAnalyzerName"]] = None,
        tokenizer: Optional[Union[str, "LexicalTokenizerName"]] = None,
        normalizer: Optional[Union[str, "LexicalNormalizerName"]] = None,
        token_filters: Optional[List[Union[str, "TokenFilterName"]]] = None,
        char_filters: Optional[List[Union[str, "CharFilterName"]]] = None,
        **kwargs
    ):
        """
        :keyword text: Required. The text to break into tokens.
        :paramtype text: str
        :keyword analyzer: The name of the analyzer to use to break the given text. Possible values
         include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene", "bg.microsoft",
         "bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft", "zh-Hans.lucene",
         "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft", "cs.lucene",
         "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft", "en.lucene",
         "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene", "gl.lucene",
         "de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft", "he.microsoft",
         "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft", "id.microsoft",
         "id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft", "ja.lucene",
         "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene", "lt.microsoft",
         "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene", "fa.lucene",
         "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
         "pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
         "sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
         "es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
         "th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
         "standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
         "whitespace".
        :paramtype analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
        :keyword tokenizer: The name of the tokenizer to use to break the given text. Possible values
         include: "classic", "edgeNGram", "keyword_v2", "letter", "lowercase",
         "microsoft_language_tokenizer", "microsoft_language_stemming_tokenizer", "nGram",
         "path_hierarchy_v2", "pattern", "standard_v2", "uax_url_email", "whitespace".
        :paramtype tokenizer: str or ~azure.search.documents.indexes.models.LexicalTokenizerName
        :keyword normalizer: The name of the normalizer to use to normalize the given text. Possible
         values include: "asciifolding", "elision", "lowercase", "standard", "uppercase".
        :paramtype normalizer: str or ~azure.search.documents.indexes.models.LexicalNormalizerName
        :keyword token_filters: An optional list of token filters to use when breaking the given text.
        :paramtype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
        :keyword char_filters: An optional list of character filters to use when breaking the given
         text.
        :paramtype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
        """
        super(AnalyzeRequest, self).__init__(**kwargs)
        self.text = text
        self.analyzer = analyzer
        self.tokenizer = tokenizer
        self.normalizer = normalizer
        self.token_filters = token_filters
        self.char_filters = char_filters
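# Illustrative sketch (not part of the generated file): serialize() applies the
# _attribute_map, so snake_case attributes come back out as the camelCase keys
# the service expects. The values here are arbitrary.
def _example_serialize_request():
    request = AnalyzeRequest(text='The quick brown fox', analyzer='en.lucene')
    return request.serialize()  # e.g. {'text': 'The quick brown fox', 'analyzer': 'en.lucene'}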
class AnalyzeResult(msrest.serialization.Model):
    """The result of testing an analyzer on text.

    All required parameters must be populated in order to send to Azure.

    :ivar tokens: Required. The list of tokens returned by the analyzer specified in the request.
    :vartype tokens: list[~azure.search.documents.indexes.models.AnalyzedTokenInfo]
    """

    _validation = {
        'tokens': {'required': True},
    }

    _attribute_map = {
        'tokens': {'key': 'tokens', 'type': '[AnalyzedTokenInfo]'},
    }

    def __init__(
        self,
        *,
        tokens: List["AnalyzedTokenInfo"],
        **kwargs
    ):
        """
        :keyword tokens: Required. The list of tokens returned by the analyzer specified in the
         request.
        :paramtype tokens: list[~azure.search.documents.indexes.models.AnalyzedTokenInfo]
        """
        super(AnalyzeResult, self).__init__(**kwargs)
        self.tokens = tokens
class TokenFilter(msrest.serialization.Model):
    """Base type for token filters.

    You probably want to use the sub-classes and not this class directly. Known
    sub-classes are: AsciiFoldingTokenFilter, CjkBigramTokenFilter, CommonGramTokenFilter, DictionaryDecompounderTokenFilter, EdgeNGramTokenFilter, EdgeNGramTokenFilterV2, ElisionTokenFilter, KeepTokenFilter, KeywordMarkerTokenFilter, LengthTokenFilter, LimitTokenFilter, NGramTokenFilter, NGramTokenFilterV2, PatternCaptureTokenFilter, PatternReplaceTokenFilter, PhoneticTokenFilter, ShingleTokenFilter, SnowballTokenFilter, StemmerOverrideTokenFilter, StemmerTokenFilter, StopwordsTokenFilter, SynonymTokenFilter, TruncateTokenFilter, UniqueTokenFilter, WordDelimiterTokenFilter.

    All required parameters must be populated in order to send to Azure.

    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
    :vartype odata_type: str
    :ivar name: Required. The name of the token filter. It must only contain letters, digits,
     spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
     limited to 128 characters.
    :vartype name: str
    """

    _validation = {
        'odata_type': {'required': True},
        'name': {'required': True},
    }

    _attribute_map = {
        'odata_type': {'key': '@odata\\.type', 'type': 'str'},
        'name': {'key': 'name', 'type': 'str'},
    }

    _subtype_map = {
        'odata_type': {'#Microsoft.Azure.Search.AsciiFoldingTokenFilter': 'AsciiFoldingTokenFilter', '#Microsoft.Azure.Search.CjkBigramTokenFilter': 'CjkBigramTokenFilter', '#Microsoft.Azure.Search.CommonGramTokenFilter': 'CommonGramTokenFilter', '#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter': 'DictionaryDecompounderTokenFilter', '#Microsoft.Azure.Search.EdgeNGramTokenFilter': 'EdgeNGramTokenFilter', '#Microsoft.Azure.Search.EdgeNGramTokenFilterV2': 'EdgeNGramTokenFilterV2', '#Microsoft.Azure.Search.ElisionTokenFilter': 'ElisionTokenFilter', '#Microsoft.Azure.Search.KeepTokenFilter': 'KeepTokenFilter', '#Microsoft.Azure.Search.KeywordMarkerTokenFilter': 'KeywordMarkerTokenFilter', '#Microsoft.Azure.Search.LengthTokenFilter': 'LengthTokenFilter', '#Microsoft.Azure.Search.LimitTokenFilter': 'LimitTokenFilter', '#Microsoft.Azure.Search.NGramTokenFilter': 'NGramTokenFilter', '#Microsoft.Azure.Search.NGramTokenFilterV2': 'NGramTokenFilterV2', '#Microsoft.Azure.Search.PatternCaptureTokenFilter': 'PatternCaptureTokenFilter', '#Microsoft.Azure.Search.PatternReplaceTokenFilter': 'PatternReplaceTokenFilter', '#Microsoft.Azure.Search.PhoneticTokenFilter': 'PhoneticTokenFilter', '#Microsoft.Azure.Search.ShingleTokenFilter': 'ShingleTokenFilter', '#Microsoft.Azure.Search.SnowballTokenFilter': 'SnowballTokenFilter', '#Microsoft.Azure.Search.StemmerOverrideTokenFilter': 'StemmerOverrideTokenFilter', '#Microsoft.Azure.Search.StemmerTokenFilter': 'StemmerTokenFilter', '#Microsoft.Azure.Search.StopwordsTokenFilter': 'StopwordsTokenFilter', '#Microsoft.Azure.Search.SynonymTokenFilter': 'SynonymTokenFilter', '#Microsoft.Azure.Search.TruncateTokenFilter': 'TruncateTokenFilter', '#Microsoft.Azure.Search.UniqueTokenFilter': 'UniqueTokenFilter', '#Microsoft.Azure.Search.WordDelimiterTokenFilter': 'WordDelimiterTokenFilter'}
    }

    def __init__(
        self,
        *,
        name: str,
        **kwargs
    ):
        """
        :keyword name: Required. The name of the token filter. It must only contain letters, digits,
         spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
         limited to 128 characters.
        :paramtype name: str
        """
        super(TokenFilter, self).__init__(**kwargs)
        self.odata_type = None  # type: Optional[str]
        self.name = name
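# Illustrative sketch (not part of the generated file): because TokenFilter
# declares _subtype_map on '@odata.type', msrest's polymorphic deserialization
# should return the matching subclass rather than the base type. Sample only.
def _example_polymorphic_filter():
    raw = {'@odata.type': '#Microsoft.Azure.Search.AsciiFoldingTokenFilter',
           'name': 'my_ascii_folding', 'preserveOriginal': True}
    return TokenFilter.deserialize(raw)  # expected: an AsciiFoldingTokenFilter instance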
class AsciiFoldingTokenFilter(TokenFilter):
"""Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar preserve_original: A value indicating whether the original token will be kept. Default is
false.
:vartype preserve_original: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'preserve_original': {'key': 'preserveOriginal', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
preserve_original: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword preserve_original: A value indicating whether the original token will be kept. Default
is false.
:paramtype preserve_original: bool
"""
super(AsciiFoldingTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.AsciiFoldingTokenFilter' # type: str
self.preserve_original = preserve_original
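# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of constructing an AsciiFoldingTokenFilter that keeps the original
# token alongside the folded one. The filter name below is an arbitrary placeholder.
def _example_ascii_folding_token_filter():
    return AsciiFoldingTokenFilter(
        name='my-ascii-folding',  # placeholder; letters/digits/spaces/dashes/underscores only
        preserve_original=True,   # emit the unfolded token as well
    )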
class AzureActiveDirectoryApplicationCredentials(msrest.serialization.Model):
"""Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.
All required parameters must be populated in order to send to Azure.
:ivar application_id: Required. An AAD Application ID that was granted the required access
permissions to the Azure Key Vault that is to be used when encrypting your data at rest. The
Application ID should not be confused with the Object ID for your AAD Application.
:vartype application_id: str
:ivar application_secret: The authentication key of the specified AAD application.
:vartype application_secret: str
"""
_validation = {
'application_id': {'required': True},
}
_attribute_map = {
'application_id': {'key': 'applicationId', 'type': 'str'},
'application_secret': {'key': 'applicationSecret', 'type': 'str'},
}
def __init__(
self,
*,
application_id: str,
application_secret: Optional[str] = None,
**kwargs
):
"""
:keyword application_id: Required. An AAD Application ID that was granted the required access
permissions to the Azure Key Vault that is to be used when encrypting your data at rest. The
Application ID should not be confused with the Object ID for your AAD Application.
:paramtype application_id: str
:keyword application_secret: The authentication key of the specified AAD application.
:paramtype application_secret: str
"""
super(AzureActiveDirectoryApplicationCredentials, self).__init__(**kwargs)
self.application_id = application_id
self.application_secret = application_secret
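# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of supplying AAD application credentials for Key Vault access.
# The GUID and secret below are placeholders, not real values.
def _example_aad_application_credentials():
    return AzureActiveDirectoryApplicationCredentials(
        application_id='00000000-0000-0000-0000-000000000000',  # placeholder Application ID (not the Object ID)
        application_secret='<application-secret>',              # placeholder authentication key
    )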
class SearchIndexerSkill(msrest.serialization.Model):
"""Base type for skills.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: AzureMachineLearningSkill, WebApiSkill, CustomEntityLookupSkill,
EntityRecognitionSkill, KeyPhraseExtractionSkill, LanguageDetectionSkill, MergeSkill,
PIIDetectionSkill, SentimentSkill, SplitSkill, TextTranslationSkill, EntityLinkingSkill,
EntityRecognitionSkillV3, SentimentSkillV3, ConditionalSkill, DocumentExtractionSkill,
ShaperSkill, ImageAnalysisSkill, OcrSkill.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
}
_subtype_map = {
'odata_type': {
    '#Microsoft.Skills.Custom.AmlSkill': 'AzureMachineLearningSkill',
    '#Microsoft.Skills.Custom.WebApiSkill': 'WebApiSkill',
    '#Microsoft.Skills.Text.CustomEntityLookupSkill': 'CustomEntityLookupSkill',
    '#Microsoft.Skills.Text.EntityRecognitionSkill': 'EntityRecognitionSkill',
    '#Microsoft.Skills.Text.KeyPhraseExtractionSkill': 'KeyPhraseExtractionSkill',
    '#Microsoft.Skills.Text.LanguageDetectionSkill': 'LanguageDetectionSkill',
    '#Microsoft.Skills.Text.MergeSkill': 'MergeSkill',
    '#Microsoft.Skills.Text.PIIDetectionSkill': 'PIIDetectionSkill',
    '#Microsoft.Skills.Text.SentimentSkill': 'SentimentSkill',
    '#Microsoft.Skills.Text.SplitSkill': 'SplitSkill',
    '#Microsoft.Skills.Text.TranslationSkill': 'TextTranslationSkill',
    '#Microsoft.Skills.Text.V3.EntityLinkingSkill': 'EntityLinkingSkill',
    '#Microsoft.Skills.Text.V3.EntityRecognitionSkill': 'EntityRecognitionSkillV3',
    '#Microsoft.Skills.Text.V3.SentimentSkill': 'SentimentSkillV3',
    '#Microsoft.Skills.Util.ConditionalSkill': 'ConditionalSkill',
    '#Microsoft.Skills.Util.DocumentExtractionSkill': 'DocumentExtractionSkill',
    '#Microsoft.Skills.Util.ShaperSkill': 'ShaperSkill',
    '#Microsoft.Skills.Vision.ImageAnalysisSkill': 'ImageAnalysisSkill',
    '#Microsoft.Skills.Vision.OcrSkill': 'OcrSkill'
}
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
super(SearchIndexerSkill, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.name = name
self.description = description
self.context = context
self.inputs = inputs
self.outputs = outputs
class AzureMachineLearningSkill(SearchIndexerSkill):
"""The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model. Once an AML model is trained and deployed, an AML skill integrates it into AI enrichment.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar scoring_uri: (Required for no authentication or key authentication) The scoring URI of
the AML service to which the JSON payload will be sent. Only the https URI scheme is allowed.
:vartype scoring_uri: str
:ivar authentication_key: (Required for key authentication) The key for the AML service.
:vartype authentication_key: str
:ivar resource_id: (Required for token authentication) The Azure Resource Manager resource ID
of the AML service. It should be in the format
subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/services/{service_name}.
:vartype resource_id: str
:ivar timeout: (Optional) When specified, indicates the timeout for the http client making the
API call.
:vartype timeout: ~datetime.timedelta
:ivar region: (Optional for token authentication) The region the AML service is deployed in.
:vartype region: str
:ivar degree_of_parallelism: (Optional) When specified, indicates the number of calls the
indexer will make in parallel to the endpoint you have provided. You can decrease this value if
your endpoint is failing under too high a request load, or raise it if your endpoint is able
to accept more requests and you would like an increase in the performance of the indexer. If
not set, a default value of 5 is used. The degreeOfParallelism can be set to a maximum of 10
and a minimum of 1.
:vartype degree_of_parallelism: int
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'scoring_uri': {'key': 'uri', 'type': 'str'},
'authentication_key': {'key': 'key', 'type': 'str'},
'resource_id': {'key': 'resourceId', 'type': 'str'},
'timeout': {'key': 'timeout', 'type': 'duration'},
'region': {'key': 'region', 'type': 'str'},
'degree_of_parallelism': {'key': 'degreeOfParallelism', 'type': 'int'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
scoring_uri: Optional[str] = None,
authentication_key: Optional[str] = None,
resource_id: Optional[str] = None,
timeout: Optional[datetime.timedelta] = None,
region: Optional[str] = None,
degree_of_parallelism: Optional[int] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword scoring_uri: (Required for no authentication or key authentication) The scoring URI of
the AML service to which the JSON payload will be sent. Only the https URI scheme is allowed.
:paramtype scoring_uri: str
:keyword authentication_key: (Required for key authentication) The key for the AML service.
:paramtype authentication_key: str
:keyword resource_id: (Required for token authentication) The Azure Resource Manager resource
ID of the AML service. It should be in the format
subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/services/{service_name}.
:paramtype resource_id: str
:keyword timeout: (Optional) When specified, indicates the timeout for the http client making
the API call.
:paramtype timeout: ~datetime.timedelta
:keyword region: (Optional for token authentication) The region the AML service is deployed
in.
:paramtype region: str
:keyword degree_of_parallelism: (Optional) When specified, indicates the number of calls the
indexer will make in parallel to the endpoint you have provided. You can decrease this value if
your endpoint is failing under too high a request load, or raise it if your endpoint is able
to accept more requests and you would like an increase in the performance of the indexer. If
not set, a default value of 5 is used. The degreeOfParallelism can be set to a maximum of 10
and a minimum of 1.
:paramtype degree_of_parallelism: int
"""
super(AzureMachineLearningSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Custom.AmlSkill' # type: str
self.scoring_uri = scoring_uri
self.authentication_key = authentication_key
self.resource_id = resource_id
self.timeout = timeout
self.region = region
self.degree_of_parallelism = degree_of_parallelism
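# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of an AML skill using key authentication. It assumes the
# InputFieldMappingEntry/OutputFieldMappingEntry models defined elsewhere in this module
# accept 'name'/'source' and 'name'/'target_name' keywords; the URI, key, and field
# names are placeholders.
def _example_azure_machine_learning_skill():
    return AzureMachineLearningSkill(
        scoring_uri='https://example.azureml.net/score',  # placeholder; only https is allowed
        authentication_key='<aml-key>',                   # placeholder key
        degree_of_parallelism=5,                          # service default; valid range is 1..10
        inputs=[InputFieldMappingEntry(name='text', source='/document/content')],
        outputs=[OutputFieldMappingEntry(name='prediction', target_name='amlPrediction')],
    )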
class Similarity(msrest.serialization.Model):
"""Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: BM25Similarity, ClassicSimilarity.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Constant filled by server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.BM25Similarity': 'BM25Similarity', '#Microsoft.Azure.Search.ClassicSimilarity': 'ClassicSimilarity'}
}
def __init__(
self,
**kwargs
):
"""
"""
super(Similarity, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
class BM25Similarity(Similarity):
"""Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Constant filled by server.
:vartype odata_type: str
:ivar k1: This property controls the scaling function between the term frequency of each
matching term and the final relevance score of a document-query pair. By default, a value of
1.2 is used. A value of 0.0 means the score does not scale with an increase in term frequency.
:vartype k1: float
:ivar b: This property controls how the length of a document affects the relevance score. By
default, a value of 0.75 is used. A value of 0.0 means no length normalization is applied,
while a value of 1.0 means the score is fully normalized by the length of the document.
:vartype b: float
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'k1': {'key': 'k1', 'type': 'float'},
'b': {'key': 'b', 'type': 'float'},
}
def __init__(
self,
*,
k1: Optional[float] = None,
b: Optional[float] = None,
**kwargs
):
"""
:keyword k1: This property controls the scaling function between the term frequency of each
matching term and the final relevance score of a document-query pair. By default, a value of
1.2 is used. A value of 0.0 means the score does not scale with an increase in term frequency.
:paramtype k1: float
:keyword b: This property controls how the length of a document affects the relevance score. By
default, a value of 0.75 is used. A value of 0.0 means no length normalization is applied,
while a value of 1.0 means the score is fully normalized by the length of the document.
:paramtype b: float
"""
super(BM25Similarity, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.BM25Similarity' # type: str
self.k1 = k1
self.b = b
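# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of tuning BM25: a lower k1 saturates term-frequency contribution
# sooner, and a lower b reduces document-length normalization. The values shown are
# arbitrary illustrations, not recommendations.
def _example_bm25_similarity():
    return BM25Similarity(k1=1.0, b=0.5)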
class CharFilter(msrest.serialization.Model):
"""Base type for character filters.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: MappingCharFilter, PatternReplaceCharFilter.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the char filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.MappingCharFilter': 'MappingCharFilter', '#Microsoft.Azure.Search.PatternReplaceCharFilter': 'PatternReplaceCharFilter'}
}
def __init__(
self,
*,
name: str,
**kwargs
):
"""
:keyword name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
"""
super(CharFilter, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.name = name
class CjkBigramTokenFilter(TokenFilter):
"""Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar ignore_scripts: The scripts to ignore.
:vartype ignore_scripts: list[str or
~azure.search.documents.indexes.models.CjkBigramTokenFilterScripts]
:ivar output_unigrams: A value indicating whether to output both unigrams and bigrams (if
true), or just bigrams (if false). Default is false.
:vartype output_unigrams: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'ignore_scripts': {'key': 'ignoreScripts', 'type': '[str]'},
'output_unigrams': {'key': 'outputUnigrams', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
ignore_scripts: Optional[List[Union[str, "CjkBigramTokenFilterScripts"]]] = None,
output_unigrams: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword ignore_scripts: The scripts to ignore.
:paramtype ignore_scripts: list[str or
~azure.search.documents.indexes.models.CjkBigramTokenFilterScripts]
:keyword output_unigrams: A value indicating whether to output both unigrams and bigrams (if
true), or just bigrams (if false). Default is false.
:paramtype output_unigrams: bool
"""
super(CjkBigramTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.CjkBigramTokenFilter' # type: str
self.ignore_scripts = ignore_scripts
self.output_unigrams = output_unigrams
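# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of a CJK bigram filter that skips Han script and also emits
# unigrams. 'han' is assumed to be a valid CjkBigramTokenFilterScripts value; the
# filter name is a placeholder.
def _example_cjk_bigram_token_filter():
    return CjkBigramTokenFilter(
        name='my-cjk-bigram',
        ignore_scripts=['han'],  # assumed enum value; do not form bigrams over Han text
        output_unigrams=True,    # emit both unigrams and bigrams
    )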
class ClassicSimilarity(Similarity):
"""Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Constant filled by server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ClassicSimilarity, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.ClassicSimilarity' # type: str
class LexicalTokenizer(msrest.serialization.Model):
"""Base type for tokenizers.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: ClassicTokenizer, EdgeNGramTokenizer, KeywordTokenizer, KeywordTokenizerV2,
MicrosoftLanguageStemmingTokenizer, MicrosoftLanguageTokenizer, NGramTokenizer,
PathHierarchyTokenizerV2, PatternTokenizer, LuceneStandardTokenizer,
LuceneStandardTokenizerV2, UaxUrlEmailTokenizer.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
}
_subtype_map = {
'odata_type': {
    '#Microsoft.Azure.Search.ClassicTokenizer': 'ClassicTokenizer',
    '#Microsoft.Azure.Search.EdgeNGramTokenizer': 'EdgeNGramTokenizer',
    '#Microsoft.Azure.Search.KeywordTokenizer': 'KeywordTokenizer',
    '#Microsoft.Azure.Search.KeywordTokenizerV2': 'KeywordTokenizerV2',
    '#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer': 'MicrosoftLanguageStemmingTokenizer',
    '#Microsoft.Azure.Search.MicrosoftLanguageTokenizer': 'MicrosoftLanguageTokenizer',
    '#Microsoft.Azure.Search.NGramTokenizer': 'NGramTokenizer',
    '#Microsoft.Azure.Search.PathHierarchyTokenizerV2': 'PathHierarchyTokenizerV2',
    '#Microsoft.Azure.Search.PatternTokenizer': 'PatternTokenizer',
    '#Microsoft.Azure.Search.StandardTokenizer': 'LuceneStandardTokenizer',
    '#Microsoft.Azure.Search.StandardTokenizerV2': 'LuceneStandardTokenizerV2',
    '#Microsoft.Azure.Search.UaxUrlEmailTokenizer': 'UaxUrlEmailTokenizer'
}
}
def __init__(
self,
*,
name: str,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
"""
super(LexicalTokenizer, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.name = name
class ClassicTokenizer(LexicalTokenizer):
"""Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:vartype max_token_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:paramtype max_token_length: int
"""
super(ClassicTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.ClassicTokenizer' # type: str
self.max_token_length = max_token_length
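# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of a ClassicTokenizer raised to the documented maximum token
# length of 300 characters; the tokenizer name is a placeholder.
def _example_classic_tokenizer():
    return ClassicTokenizer(name='my-classic-tokenizer', max_token_length=300)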
class CognitiveServicesAccount(msrest.serialization.Model):
"""Base type for describing any cognitive service resource attached to a skillset.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: CognitiveServicesAccountKey, DefaultCognitiveServicesAccount.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the cognitive service resource
attached to a skillset. Constant filled by server.
:vartype odata_type: str
:ivar description: Description of the cognitive service resource attached to a skillset.
:vartype description: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.CognitiveServicesByKey': 'CognitiveServicesAccountKey', '#Microsoft.Azure.Search.DefaultCognitiveServices': 'DefaultCognitiveServicesAccount'}
}
def __init__(
self,
*,
description: Optional[str] = None,
**kwargs
):
"""
:keyword description: Description of the cognitive service resource attached to a skillset.
:paramtype description: str
"""
super(CognitiveServicesAccount, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.description = description
class CognitiveServicesAccountKey(CognitiveServicesAccount):
"""A cognitive service resource provisioned with a key that is attached to a skillset.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the cognitive service resource
attached to a skillset. Constant filled by server.
:vartype odata_type: str
:ivar description: Description of the cognitive service resource attached to a skillset.
:vartype description: str
:ivar key: Required. The key used to provision the cognitive service resource attached to a
skillset.
:vartype key: str
"""
_validation = {
'odata_type': {'required': True},
'key': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'key': {'key': 'key', 'type': 'str'},
}
def __init__(
self,
*,
key: str,
description: Optional[str] = None,
**kwargs
):
"""
:keyword description: Description of the cognitive service resource attached to a skillset.
:paramtype description: str
:keyword key: Required. The key used to provision the cognitive service resource attached to a
skillset.
:paramtype key: str
"""
super(CognitiveServicesAccountKey, self).__init__(description=description, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.CognitiveServicesByKey' # type: str
self.key = key
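# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of attaching a keyed Cognitive Services resource to a skillset
# for billing purposes; the key below is a placeholder.
def _example_cognitive_services_account_key():
    return CognitiveServicesAccountKey(
        key='<cognitive-services-key>',  # placeholder provisioning key
        description='Billing resource for skillset enrichment',
    )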
class CommonGramTokenFilter(TokenFilter):
"""Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar common_words: Required. The set of common words.
:vartype common_words: list[str]
:ivar ignore_case: A value indicating whether matching on common words will be case insensitive.
Default is false.
:vartype ignore_case: bool
:ivar use_query_mode: A value that indicates whether the token filter is in query mode. When in
query mode, the token filter generates bigrams and then removes common words and single terms
followed by a common word. Default is false.
:vartype use_query_mode: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'common_words': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'common_words': {'key': 'commonWords', 'type': '[str]'},
'ignore_case': {'key': 'ignoreCase', 'type': 'bool'},
'use_query_mode': {'key': 'queryMode', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
common_words: List[str],
ignore_case: Optional[bool] = False,
use_query_mode: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword common_words: Required. The set of common words.
:paramtype common_words: list[str]
:keyword ignore_case: A value indicating whether matching on common words will be case
insensitive. Default is false.
:paramtype ignore_case: bool
:keyword use_query_mode: A value that indicates whether the token filter is in query mode. When
in query mode, the token filter generates bigrams and then removes common words and single
terms followed by a common word. Default is false.
:paramtype use_query_mode: bool
"""
super(CommonGramTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.CommonGramTokenFilter' # type: str
self.common_words = common_words
self.ignore_case = ignore_case
self.use_query_mode = use_query_mode
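# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of a common-gram filter over a handful of English stop words;
# the word list and filter name are placeholders.
def _example_common_gram_token_filter():
    return CommonGramTokenFilter(
        name='my-common-grams',
        common_words=['the', 'a', 'of'],  # placeholder set of common words
        ignore_case=True,                 # match common words case-insensitively
        use_query_mode=False,             # index-time behavior (the default)
    )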
class ConditionalSkill(SearchIndexerSkill):
"""A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
super(ConditionalSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Util.ConditionalSkill' # type: str
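# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of a conditional skill that falls back to 'en' when no language
# code was detected. The input names 'condition'/'whenTrue'/'whenFalse' and the
# output name 'output' follow the service's documented contract, but treat them as
# assumptions here; the expressions and paths are placeholders.
def _example_conditional_skill():
    return ConditionalSkill(
        context='/document',
        inputs=[
            InputFieldMappingEntry(name='condition', source='= $(/document/language) == null'),
            InputFieldMappingEntry(name='whenTrue', source="= 'en'"),
            InputFieldMappingEntry(name='whenFalse', source='/document/language'),
        ],
        outputs=[OutputFieldMappingEntry(name='output', target_name='languageWithDefault')],
    )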
class CorsOptions(msrest.serialization.Model):
"""Defines options to control Cross-Origin Resource Sharing (CORS) for an index.
All required parameters must be populated in order to send to Azure.
:ivar allowed_origins: Required. The list of origins from which JavaScript code will be granted
access to your index. Can contain a list of hosts of the form
{protocol}://{fully-qualified-domain-name}[:{port#}], or a single '*' to allow all origins (not
recommended).
:vartype allowed_origins: list[str]
:ivar max_age_in_seconds: The duration for which browsers should cache CORS preflight
responses. Defaults to 5 minutes.
:vartype max_age_in_seconds: long
"""
_validation = {
'allowed_origins': {'required': True},
}
_attribute_map = {
'allowed_origins': {'key': 'allowedOrigins', 'type': '[str]'},
'max_age_in_seconds': {'key': 'maxAgeInSeconds', 'type': 'long'},
}
def __init__(
self,
*,
allowed_origins: List[str],
max_age_in_seconds: Optional[int] = None,
**kwargs
):
"""
:keyword allowed_origins: Required. The list of origins from which JavaScript code will be
granted access to your index. Can contain a list of hosts of the form
{protocol}://{fully-qualified-domain-name}[:{port#}], or a single '*' to allow all origins (not
recommended).
:paramtype allowed_origins: list[str]
:keyword max_age_in_seconds: The duration for which browsers should cache CORS preflight
responses. Defaults to 5 minutes.
:paramtype max_age_in_seconds: long
"""
super(CorsOptions, self).__init__(**kwargs)
self.allowed_origins = allowed_origins
self.max_age_in_seconds = max_age_in_seconds
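# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of CORS options allowing a single origin with a 5-minute (300 s)
# preflight cache; the origin is a placeholder.
def _example_cors_options():
    return CorsOptions(
        allowed_origins=['https://www.example.com'],  # use ['*'] to allow all (not recommended)
        max_age_in_seconds=300,
    )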
class LexicalAnalyzer(msrest.serialization.Model):
"""Base type for analyzers.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: CustomAnalyzer, PatternAnalyzer, LuceneStandardAnalyzer, StopAnalyzer.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the analyzer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the analyzer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
}
_subtype_map = {
'odata_type': {
    '#Microsoft.Azure.Search.CustomAnalyzer': 'CustomAnalyzer',
    '#Microsoft.Azure.Search.PatternAnalyzer': 'PatternAnalyzer',
    '#Microsoft.Azure.Search.StandardAnalyzer': 'LuceneStandardAnalyzer',
    '#Microsoft.Azure.Search.StopAnalyzer': 'StopAnalyzer'
}
}
def __init__(
self,
*,
name: str,
**kwargs
):
"""
:keyword name: Required. The name of the analyzer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
"""
super(LexicalAnalyzer, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.name = name
class CustomAnalyzer(LexicalAnalyzer):
"""Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the analyzer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the analyzer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar tokenizer: Required. The name of the tokenizer to use to divide continuous text into a
sequence of tokens, such as breaking a sentence into words. Possible values include: "classic",
"edgeNGram", "keyword_v2", "letter", "lowercase", "microsoft_language_tokenizer",
"microsoft_language_stemming_tokenizer", "nGram", "path_hierarchy_v2", "pattern",
"standard_v2", "uax_url_email", "whitespace".
:vartype tokenizer: str or ~azure.search.documents.indexes.models.LexicalTokenizerName
:ivar token_filters: A list of token filters used to filter out or modify the tokens generated
by a tokenizer. For example, you can specify a lowercase filter that converts all characters to
lowercase. The filters are run in the order in which they are listed.
:vartype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
:ivar char_filters: A list of character filters used to prepare input text before it is
processed by the tokenizer. For instance, they can replace certain characters or symbols. The
filters are run in the order in which they are listed.
:vartype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'tokenizer': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'tokenizer': {'key': 'tokenizer', 'type': 'str'},
'token_filters': {'key': 'tokenFilters', 'type': '[str]'},
'char_filters': {'key': 'charFilters', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
tokenizer: Union[str, "LexicalTokenizerName"],
token_filters: Optional[List[Union[str, "TokenFilterName"]]] = None,
char_filters: Optional[List[Union[str, "CharFilterName"]]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the analyzer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword tokenizer: Required. The name of the tokenizer to use to divide continuous text into a
sequence of tokens, such as breaking a sentence into words. Possible values include: "classic",
"edgeNGram", "keyword_v2", "letter", "lowercase", "microsoft_language_tokenizer",
"microsoft_language_stemming_tokenizer", "nGram", "path_hierarchy_v2", "pattern",
"standard_v2", "uax_url_email", "whitespace".
:paramtype tokenizer: str or ~azure.search.documents.indexes.models.LexicalTokenizerName
:keyword token_filters: A list of token filters used to filter out or modify the tokens
generated by a tokenizer. For example, you can specify a lowercase filter that converts all
characters to lowercase. The filters are run in the order in which they are listed.
:paramtype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
:keyword char_filters: A list of character filters used to prepare input text before it is
processed by the tokenizer. For instance, they can replace certain characters or symbols. The
filters are run in the order in which they are listed.
:paramtype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
"""
super(CustomAnalyzer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.CustomAnalyzer' # type: str
self.tokenizer = tokenizer
self.token_filters = token_filters
self.char_filters = char_filters
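# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch of a custom analyzer: standard tokenization followed by
# lowercasing and ASCII folding. 'standard_v2' comes from the possible tokenizer
# values listed above; 'lowercase' and 'asciifolding' are assumed to be valid
# TokenFilterName values; the analyzer name is a placeholder.
def _example_custom_analyzer():
    return CustomAnalyzer(
        name='my-custom-analyzer',
        tokenizer='standard_v2',
        token_filters=['lowercase', 'asciifolding'],  # run in listed order
    )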
class CustomEntity(msrest.serialization.Model):
"""An object that contains information about the matches that were found, and related metadata.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The top-level entity descriptor. Matches in the skill output will be
grouped by this name, and it should represent the "normalized" form of the text being found.
:vartype name: str
:ivar description: This field can be used as a passthrough for custom metadata about the
matched text(s). The value of this field will appear with every match of its entity in the
skill output.
:vartype description: str
:ivar type: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:vartype type: str
:ivar subtype: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:vartype subtype: str
:ivar id: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:vartype id: str
:ivar case_sensitive: Defaults to false. Boolean value denoting whether comparisons with the
entity name should be sensitive to character casing. Sample case insensitive matches of
"Microsoft" could be: microsoft, microSoft, MICROSOFT.
:vartype case_sensitive: bool
:ivar accent_sensitive: Defaults to false. Boolean value denoting whether comparisons with the
entity name should be sensitive to accent.
:vartype accent_sensitive: bool
:ivar fuzzy_edit_distance: Defaults to 0. Maximum value of 5. Denotes the acceptable number of
divergent characters that would still constitute a match with the entity name. The smallest
possible fuzziness for any given match is returned. For instance, if the edit distance is set
to 3, "Windows10" would still match "Windows", "Windows10" and "Windows 7". When case
sensitivity is set to false, case differences do NOT count towards fuzziness tolerance;
otherwise they do.
:vartype fuzzy_edit_distance: int
:ivar default_case_sensitive: Changes the default case sensitivity value for this entity. It can
be used to change the default value of all aliases' caseSensitive values.
:vartype default_case_sensitive: bool
:ivar default_accent_sensitive: Changes the default accent sensitivity value for this entity.
It can be used to change the default value of all aliases' accentSensitive values.
:vartype default_accent_sensitive: bool
:ivar default_fuzzy_edit_distance: Changes the default fuzzy edit distance value for this
entity. It can be used to change the default value of all aliases' fuzzyEditDistance values.
:vartype default_fuzzy_edit_distance: int
:ivar aliases: An array of complex objects that can be used to specify alternative spellings or
synonyms to the root entity name.
:vartype aliases: list[~azure.search.documents.indexes.models.CustomEntityAlias]
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'subtype': {'key': 'subtype', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'case_sensitive': {'key': 'caseSensitive', 'type': 'bool'},
'accent_sensitive': {'key': 'accentSensitive', 'type': 'bool'},
'fuzzy_edit_distance': {'key': 'fuzzyEditDistance', 'type': 'int'},
'default_case_sensitive': {'key': 'defaultCaseSensitive', 'type': 'bool'},
'default_accent_sensitive': {'key': 'defaultAccentSensitive', 'type': 'bool'},
'default_fuzzy_edit_distance': {'key': 'defaultFuzzyEditDistance', 'type': 'int'},
'aliases': {'key': 'aliases', 'type': '[CustomEntityAlias]'},
}
def __init__(
self,
*,
name: str,
description: Optional[str] = None,
type: Optional[str] = None,
subtype: Optional[str] = None,
id: Optional[str] = None,
case_sensitive: Optional[bool] = None,
accent_sensitive: Optional[bool] = None,
fuzzy_edit_distance: Optional[int] = None,
default_case_sensitive: Optional[bool] = None,
default_accent_sensitive: Optional[bool] = None,
default_fuzzy_edit_distance: Optional[int] = None,
aliases: Optional[List["CustomEntityAlias"]] = None,
**kwargs
):
"""
:keyword name: Required. The top-level entity descriptor. Matches in the skill output will be
grouped by this name, and it should represent the "normalized" form of the text being found.
:paramtype name: str
:keyword description: This field can be used as a passthrough for custom metadata about the
matched text(s). The value of this field will appear with every match of its entity in the
skill output.
:paramtype description: str
:keyword type: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:paramtype type: str
:keyword subtype: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:paramtype subtype: str
:keyword id: This field can be used as a passthrough for custom metadata about the matched
text(s). The value of this field will appear with every match of its entity in the skill
output.
:paramtype id: str
:keyword case_sensitive: Defaults to false. Boolean value denoting whether comparisons with the
entity name should be sensitive to character casing. Sample case insensitive matches of
"Microsoft" could be: microsoft, microSoft, MICROSOFT.
:paramtype case_sensitive: bool
:keyword accent_sensitive: Defaults to false. Boolean value denoting whether comparisons with
the entity name should be sensitive to accent.
:paramtype accent_sensitive: bool
:keyword fuzzy_edit_distance: Defaults to 0. Maximum value of 5. Denotes the acceptable number
of divergent characters that would still constitute a match with the entity name. The smallest
possible fuzziness for any given match is returned. For instance, if the edit distance is set
to 3, "Windows10" would still match "Windows", "Windows10" and "Windows 7". When case
sensitivity is set to false, case differences do NOT count towards fuzziness tolerance;
otherwise they do.
:paramtype fuzzy_edit_distance: int
:keyword default_case_sensitive: Changes the default case sensitivity value for this entity. It
can be used to change the default value of all aliases' caseSensitive values.
:paramtype default_case_sensitive: bool
:keyword default_accent_sensitive: Changes the default accent sensitivity value for this
entity. It can be used to change the default value of all aliases' accentSensitive values.
:paramtype default_accent_sensitive: bool
:keyword default_fuzzy_edit_distance: Changes the default fuzzy edit distance value for this
entity. It can be used to change the default value of all aliases' fuzzyEditDistance values.
:paramtype default_fuzzy_edit_distance: int
:keyword aliases: An array of complex objects that can be used to specify alternative spellings
or synonyms to the root entity name.
:paramtype aliases: list[~azure.search.documents.indexes.models.CustomEntityAlias]
"""
super(CustomEntity, self).__init__(**kwargs)
self.name = name
self.description = description
self.type = type
self.subtype = subtype
self.id = id
self.case_sensitive = case_sensitive
self.accent_sensitive = accent_sensitive
self.fuzzy_edit_distance = fuzzy_edit_distance
self.default_case_sensitive = default_case_sensitive
self.default_accent_sensitive = default_accent_sensitive
self.default_fuzzy_edit_distance = default_fuzzy_edit_distance
self.aliases = aliases
class CustomEntityAlias(msrest.serialization.Model):
"""A complex object that can be used to specify alternative spellings or synonyms to the root entity name.
All required parameters must be populated in order to send to Azure.
:ivar text: Required. The text of the alias.
:vartype text: str
:ivar case_sensitive: Determine if the alias is case sensitive.
:vartype case_sensitive: bool
:ivar accent_sensitive: Determine if the alias is accent sensitive.
:vartype accent_sensitive: bool
:ivar fuzzy_edit_distance: Determine the fuzzy edit distance of the alias.
:vartype fuzzy_edit_distance: int
"""
_validation = {
'text': {'required': True},
}
_attribute_map = {
'text': {'key': 'text', 'type': 'str'},
'case_sensitive': {'key': 'caseSensitive', 'type': 'bool'},
'accent_sensitive': {'key': 'accentSensitive', 'type': 'bool'},
'fuzzy_edit_distance': {'key': 'fuzzyEditDistance', 'type': 'int'},
}
def __init__(
self,
*,
text: str,
case_sensitive: Optional[bool] = None,
accent_sensitive: Optional[bool] = None,
fuzzy_edit_distance: Optional[int] = None,
**kwargs
):
"""
:keyword text: Required. The text of the alias.
:paramtype text: str
:keyword case_sensitive: Determine if the alias is case sensitive.
:paramtype case_sensitive: bool
:keyword accent_sensitive: Determine if the alias is accent sensitive.
:paramtype accent_sensitive: bool
:keyword fuzzy_edit_distance: Determine the fuzzy edit distance of the alias.
:paramtype fuzzy_edit_distance: int
"""
super(CustomEntityAlias, self).__init__(**kwargs)
self.text = text
self.case_sensitive = case_sensitive
self.accent_sensitive = accent_sensitive
self.fuzzy_edit_distance = fuzzy_edit_distance
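# --- Illustrative usage sketch (editor's addition, not part of the generated models) ---
# A minimal sketch pairing a CustomEntity with an alias so that 'MSFT' also matches
# the normalized entity name 'Microsoft'; all names and settings are placeholders.
def _example_custom_entity_with_alias():
    return CustomEntity(
        name='Microsoft',                 # normalized form used to group matches
        description='Company entity',     # passthrough metadata
        fuzzy_edit_distance=1,            # allow one divergent character
        aliases=[CustomEntityAlias(text='MSFT', case_sensitive=True)],
    )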
class CustomEntityLookupSkill(SearchIndexerSkill):
"""A skill looks for text from a custom, user-defined list of words and phrases.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "de", "en", "es", "fi", "fr", "it", "ko", "pt".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.CustomEntityLookupSkillLanguage
:ivar entities_definition_uri: Path to a JSON or CSV file containing all the target text to
match against. This entity definition is read at the beginning of an indexer run. Any updates
to this file during an indexer run will not take effect until subsequent runs. This config must
be accessible over HTTPS.
:vartype entities_definition_uri: str
:ivar inline_entities_definition: The inline CustomEntity definition.
:vartype inline_entities_definition: list[~azure.search.documents.indexes.models.CustomEntity]
:ivar global_default_case_sensitive: A global flag for CaseSensitive. If CaseSensitive is not
set in CustomEntity, this value will be the default value.
:vartype global_default_case_sensitive: bool
:ivar global_default_accent_sensitive: A global flag for AccentSensitive. If AccentSensitive is
not set in CustomEntity, this value will be the default value.
:vartype global_default_accent_sensitive: bool
:ivar global_default_fuzzy_edit_distance: A global flag for FuzzyEditDistance. If
FuzzyEditDistance is not set in CustomEntity, this value will be the default value.
:vartype global_default_fuzzy_edit_distance: int
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'entities_definition_uri': {'key': 'entitiesDefinitionUri', 'type': 'str'},
'inline_entities_definition': {'key': 'inlineEntitiesDefinition', 'type': '[CustomEntity]'},
'global_default_case_sensitive': {'key': 'globalDefaultCaseSensitive', 'type': 'bool'},
'global_default_accent_sensitive': {'key': 'globalDefaultAccentSensitive', 'type': 'bool'},
'global_default_fuzzy_edit_distance': {'key': 'globalDefaultFuzzyEditDistance', 'type': 'int'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "CustomEntityLookupSkillLanguage"]] = None,
entities_definition_uri: Optional[str] = None,
inline_entities_definition: Optional[List["CustomEntity"]] = None,
global_default_case_sensitive: Optional[bool] = None,
global_default_accent_sensitive: Optional[bool] = None,
global_default_fuzzy_edit_distance: Optional[int] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "de", "en", "es", "fi", "fr", "it", "ko", "pt".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.CustomEntityLookupSkillLanguage
:keyword entities_definition_uri: Path to a JSON or CSV file containing all the target text to
match against. This entity definition is read at the beginning of an indexer run. Any updates
to this file during an indexer run will not take effect until subsequent runs. This config must
be accessible over HTTPS.
:paramtype entities_definition_uri: str
:keyword inline_entities_definition: The inline CustomEntity definition.
:paramtype inline_entities_definition:
list[~azure.search.documents.indexes.models.CustomEntity]
:keyword global_default_case_sensitive: A global flag for CaseSensitive. If CaseSensitive is
not set in CustomEntity, this value will be the default value.
:paramtype global_default_case_sensitive: bool
:keyword global_default_accent_sensitive: A global flag for AccentSensitive. If AccentSensitive
is not set in CustomEntity, this value will be the default value.
:paramtype global_default_accent_sensitive: bool
:keyword global_default_fuzzy_edit_distance: A global flag for FuzzyEditDistance. If
FuzzyEditDistance is not set in CustomEntity, this value will be the default value.
:paramtype global_default_fuzzy_edit_distance: int
"""
super(CustomEntityLookupSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.CustomEntityLookupSkill' # type: str
self.default_language_code = default_language_code
self.entities_definition_uri = entities_definition_uri
self.inline_entities_definition = inline_entities_definition
self.global_default_case_sensitive = global_default_case_sensitive
self.global_default_accent_sensitive = global_default_accent_sensitive
self.global_default_fuzzy_edit_distance = global_default_fuzzy_edit_distance
class LexicalNormalizer(msrest.serialization.Model):
"""Base type for normalizers.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: CustomNormalizer.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the normalizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the normalizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters. It cannot end in '.microsoft' nor '.lucene', nor be named 'asciifolding',
'standard', 'lowercase', 'uppercase', or 'elision'.
:vartype name: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.CustomNormalizer': 'CustomNormalizer'}
}
def __init__(
self,
*,
name: str,
**kwargs
):
"""
:keyword name: Required. The name of the normalizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters. It cannot end in '.microsoft' nor '.lucene', nor be named
'asciifolding', 'standard', 'lowercase', 'uppercase', or 'elision'.
:paramtype name: str
"""
super(LexicalNormalizer, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
self.name = name
class CustomNormalizer(LexicalNormalizer):
"""Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of at least one or more filters, which modify the token that is stored.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the normalizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the normalizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters. It cannot end in '.microsoft' nor '.lucene', nor be named 'asciifolding',
'standard', 'lowercase', 'uppercase', or 'elision'.
:vartype name: str
:ivar token_filters: A list of token filters used to filter out or modify the input token. For
example, you can specify a lowercase filter that converts all characters to lowercase. The
filters are run in the order in which they are listed.
:vartype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
:ivar char_filters: A list of character filters used to prepare input text before it is
processed. For instance, they can replace certain characters or symbols. The filters are run in
the order in which they are listed.
:vartype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
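Example (an illustrative sketch; the normalizer name is hypothetical, and the filter
names assume the built-in "lowercase" and "asciifolding" token filters):

.. code-block:: python

    normalizer = CustomNormalizer(
        name="my_normalizer",  # hypothetical name
        token_filters=["lowercase", "asciifolding"],
    )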
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'token_filters': {'key': 'tokenFilters', 'type': '[str]'},
'char_filters': {'key': 'charFilters', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
token_filters: Optional[List[Union[str, "TokenFilterName"]]] = None,
char_filters: Optional[List[Union[str, "CharFilterName"]]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the normalizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters. It cannot end in '.microsoft' nor '.lucene', nor be named
'asciifolding', 'standard', 'lowercase', 'uppercase', or 'elision'.
:paramtype name: str
:keyword token_filters: A list of token filters used to filter out or modify the input token.
For example, you can specify a lowercase filter that converts all characters to lowercase. The
filters are run in the order in which they are listed.
:paramtype token_filters: list[str or ~azure.search.documents.indexes.models.TokenFilterName]
:keyword char_filters: A list of character filters used to prepare input text before it is
processed. For instance, they can replace certain characters or symbols. The filters are run in
the order in which they are listed.
:paramtype char_filters: list[str or ~azure.search.documents.indexes.models.CharFilterName]
"""
super(CustomNormalizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.CustomNormalizer' # type: str
self.token_filters = token_filters
self.char_filters = char_filters
class DataChangeDetectionPolicy(msrest.serialization.Model):
"""Base type for data change detection policies.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: HighWaterMarkChangeDetectionPolicy, SqlIntegratedChangeTrackingPolicy.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the data change detection
policy. Constant filled by server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy': 'HighWaterMarkChangeDetectionPolicy', '#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy': 'SqlIntegratedChangeTrackingPolicy'}
}
def __init__(
self,
**kwargs
):
"""
"""
super(DataChangeDetectionPolicy, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
class DataDeletionDetectionPolicy(msrest.serialization.Model):
"""Base type for data deletion detection policies.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: SoftDeleteColumnDeletionDetectionPolicy.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the data deletion detection
policy. Constant filled by server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy': 'SoftDeleteColumnDeletionDetectionPolicy'}
}
def __init__(
self,
**kwargs
):
"""
"""
super(DataDeletionDetectionPolicy, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
class DataSourceCredentials(msrest.serialization.Model):
"""Represents credentials that can be used to connect to a datasource.
:ivar connection_string: The connection string for the datasource. Set to ':code:`<unchanged>`'
if you do not want the connection string updated.
:vartype connection_string: str
"""
_attribute_map = {
'connection_string': {'key': 'connectionString', 'type': 'str'},
}
def __init__(
self,
*,
connection_string: Optional[str] = None,
**kwargs
):
"""
:keyword connection_string: The connection string for the datasource. Set to
':code:`<unchanged>`' if you do not want the connection string updated.
:paramtype connection_string: str
"""
super(DataSourceCredentials, self).__init__(**kwargs)
self.connection_string = connection_string
class DefaultCognitiveServicesAccount(CognitiveServicesAccount):
"""An empty object that represents the default cognitive service resource for a skillset.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the cognitive service resource
attached to a skillset. Constant filled by server.
:vartype odata_type: str
:ivar description: Description of the cognitive service resource attached to a skillset.
:vartype description: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
}
def __init__(
self,
*,
description: Optional[str] = None,
**kwargs
):
"""
:keyword description: Description of the cognitive service resource attached to a skillset.
:paramtype description: str
"""
super(DefaultCognitiveServicesAccount, self).__init__(description=description, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.DefaultCognitiveServices' # type: str
class DictionaryDecompounderTokenFilter(TokenFilter):
"""Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar word_list: Required. The list of words to match against.
:vartype word_list: list[str]
:ivar min_word_size: The minimum word size. Only words longer than this get processed. Default
is 5. Maximum is 300.
:vartype min_word_size: int
:ivar min_subword_size: The minimum subword size. Only subwords longer than this are output.
Default is 2. Maximum is 300.
:vartype min_subword_size: int
:ivar max_subword_size: The maximum subword size. Only subwords shorter than this are
output. Default is 15. Maximum is 300.
:vartype max_subword_size: int
:ivar only_longest_match: A value indicating whether to add only the longest matching subword
to the output. Default is false.
:vartype only_longest_match: bool
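Example (an illustrative sketch; the filter name and word list are hypothetical):

.. code-block:: python

    token_filter = DictionaryDecompounderTokenFilter(
        name="german_decompounder",  # hypothetical name
        word_list=["Donau", "Dampf", "schiff", "fahrt"],
        only_longest_match=True,
    )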
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'word_list': {'required': True},
'min_word_size': {'maximum': 300},
'min_subword_size': {'maximum': 300},
'max_subword_size': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'word_list': {'key': 'wordList', 'type': '[str]'},
'min_word_size': {'key': 'minWordSize', 'type': 'int'},
'min_subword_size': {'key': 'minSubwordSize', 'type': 'int'},
'max_subword_size': {'key': 'maxSubwordSize', 'type': 'int'},
'only_longest_match': {'key': 'onlyLongestMatch', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
word_list: List[str],
min_word_size: Optional[int] = 5,
min_subword_size: Optional[int] = 2,
max_subword_size: Optional[int] = 15,
only_longest_match: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword word_list: Required. The list of words to match against.
:paramtype word_list: list[str]
:keyword min_word_size: The minimum word size. Only words longer than this get processed.
Default is 5. Maximum is 300.
:paramtype min_word_size: int
:keyword min_subword_size: The minimum subword size. Only subwords longer than this are
output. Default is 2. Maximum is 300.
:paramtype min_subword_size: int
:keyword max_subword_size: The maximum subword size. Only subwords shorter than this are
output. Default is 15. Maximum is 300.
:paramtype max_subword_size: int
:keyword only_longest_match: A value indicating whether to add only the longest matching
subword to the output. Default is false.
:paramtype only_longest_match: bool
"""
super(DictionaryDecompounderTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter' # type: str
self.word_list = word_list
self.min_word_size = min_word_size
self.min_subword_size = min_subword_size
self.max_subword_size = max_subword_size
self.only_longest_match = only_longest_match
class ScoringFunction(msrest.serialization.Model):
"""Base type for functions that can modify document scores during ranking.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: DistanceScoringFunction, FreshnessScoringFunction, MagnitudeScoringFunction, TagScoringFunction.
All required parameters must be populated in order to send to Azure.
:ivar type: Required. Indicates the type of function to use. Valid values include magnitude,
freshness, distance, and tag. The function type must be lower case. Constant filled by server.
:vartype type: str
:ivar field_name: Required. The name of the field used as input to the scoring function.
:vartype field_name: str
:ivar boost: Required. A multiplier for the raw score. Must be a positive number not equal to
1.0.
:vartype boost: float
:ivar interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:vartype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
"""
_validation = {
'type': {'required': True},
'field_name': {'required': True},
'boost': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'field_name': {'key': 'fieldName', 'type': 'str'},
'boost': {'key': 'boost', 'type': 'float'},
'interpolation': {'key': 'interpolation', 'type': 'str'},
}
_subtype_map = {
'type': {'distance': 'DistanceScoringFunction', 'freshness': 'FreshnessScoringFunction', 'magnitude': 'MagnitudeScoringFunction', 'tag': 'TagScoringFunction'}
}
def __init__(
self,
*,
field_name: str,
boost: float,
interpolation: Optional[Union[str, "ScoringFunctionInterpolation"]] = None,
**kwargs
):
"""
:keyword field_name: Required. The name of the field used as input to the scoring function.
:paramtype field_name: str
:keyword boost: Required. A multiplier for the raw score. Must be a positive number not equal
to 1.0.
:paramtype boost: float
:keyword interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:paramtype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
"""
super(ScoringFunction, self).__init__(**kwargs)
self.type = None # type: Optional[str]
self.field_name = field_name
self.boost = boost
self.interpolation = interpolation
class DistanceScoringFunction(ScoringFunction):
"""Defines a function that boosts scores based on distance from a geographic location.
All required parameters must be populated in order to send to Azure.
:ivar type: Required. Indicates the type of function to use. Valid values include magnitude,
freshness, distance, and tag. The function type must be lower case. Constant filled by server.
:vartype type: str
:ivar field_name: Required. The name of the field used as input to the scoring function.
:vartype field_name: str
:ivar boost: Required. A multiplier for the raw score. Must be a positive number not equal to
1.0.
:vartype boost: float
:ivar interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:vartype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:ivar parameters: Required. Parameter values for the distance scoring function.
:vartype parameters: ~azure.search.documents.indexes.models.DistanceScoringParameters
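Example (an illustrative sketch; the field name and query parameter name are
hypothetical):

.. code-block:: python

    scoring_function = DistanceScoringFunction(
        field_name="location",  # hypothetical geo field
        boost=2.0,
        parameters=DistanceScoringParameters(
            reference_point_parameter="currentLocation",  # hypothetical parameter
            boosting_distance=10.0,  # kilometers
        ),
        interpolation="linear",
    )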
"""
_validation = {
'type': {'required': True},
'field_name': {'required': True},
'boost': {'required': True},
'parameters': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'field_name': {'key': 'fieldName', 'type': 'str'},
'boost': {'key': 'boost', 'type': 'float'},
'interpolation': {'key': 'interpolation', 'type': 'str'},
'parameters': {'key': 'distance', 'type': 'DistanceScoringParameters'},
}
def __init__(
self,
*,
field_name: str,
boost: float,
parameters: "DistanceScoringParameters",
interpolation: Optional[Union[str, "ScoringFunctionInterpolation"]] = None,
**kwargs
):
"""
:keyword field_name: Required. The name of the field used as input to the scoring function.
:paramtype field_name: str
:keyword boost: Required. A multiplier for the raw score. Must be a positive number not equal
to 1.0.
:paramtype boost: float
:keyword interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:paramtype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:keyword parameters: Required. Parameter values for the distance scoring function.
:paramtype parameters: ~azure.search.documents.indexes.models.DistanceScoringParameters
"""
super(DistanceScoringFunction, self).__init__(field_name=field_name, boost=boost, interpolation=interpolation, **kwargs)
self.type = 'distance' # type: str
self.parameters = parameters
class DistanceScoringParameters(msrest.serialization.Model):
"""Provides parameter values to a distance scoring function.
All required parameters must be populated in order to send to Azure.
:ivar reference_point_parameter: Required. The name of the parameter passed in search queries
to specify the reference location.
:vartype reference_point_parameter: str
:ivar boosting_distance: Required. The distance in kilometers from the reference location where
the boosting range ends.
:vartype boosting_distance: float
"""
_validation = {
'reference_point_parameter': {'required': True},
'boosting_distance': {'required': True},
}
_attribute_map = {
'reference_point_parameter': {'key': 'referencePointParameter', 'type': 'str'},
'boosting_distance': {'key': 'boostingDistance', 'type': 'float'},
}
def __init__(
self,
*,
reference_point_parameter: str,
boosting_distance: float,
**kwargs
):
"""
:keyword reference_point_parameter: Required. The name of the parameter passed in search
queries to specify the reference location.
:paramtype reference_point_parameter: str
:keyword boosting_distance: Required. The distance in kilometers from the reference location
where the boosting range ends.
:paramtype boosting_distance: float
"""
super(DistanceScoringParameters, self).__init__(**kwargs)
self.reference_point_parameter = reference_point_parameter
self.boosting_distance = boosting_distance
class DocumentExtractionSkill(SearchIndexerSkill):
"""A skill that extracts content from a file within the enrichment pipeline.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar parsing_mode: The parsingMode for the skill. Will be set to 'default' if not defined.
:vartype parsing_mode: str
:ivar data_to_extract: The type of data to be extracted for the skill. Will be set to
'contentAndMetadata' if not defined.
:vartype data_to_extract: str
:ivar configuration: A dictionary of configurations for the skill.
:vartype configuration: dict[str, any]
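Example (an illustrative sketch; the "/document/file_data" source path and the
configuration key shown are common usage with this skill, but treat them as
assumptions rather than a definitive recipe):

.. code-block:: python

    skill = DocumentExtractionSkill(
        inputs=[InputFieldMappingEntry(name="file_data", source="/document/file_data")],
        outputs=[OutputFieldMappingEntry(name="content")],
        parsing_mode="default",
        data_to_extract="contentAndMetadata",
        configuration={"imageAction": "generateNormalizedImages"},  # assumed config key
    )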
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'parsing_mode': {'key': 'parsingMode', 'type': 'str'},
'data_to_extract': {'key': 'dataToExtract', 'type': 'str'},
'configuration': {'key': 'configuration', 'type': '{object}'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
parsing_mode: Optional[str] = None,
data_to_extract: Optional[str] = None,
configuration: Optional[Dict[str, Any]] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword parsing_mode: The parsingMode for the skill. Will be set to 'default' if not defined.
:paramtype parsing_mode: str
:keyword data_to_extract: The type of data to be extracted for the skill. Will be set to
'contentAndMetadata' if not defined.
:paramtype data_to_extract: str
:keyword configuration: A dictionary of configurations for the skill.
:paramtype configuration: dict[str, any]
"""
super(DocumentExtractionSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Util.DocumentExtractionSkill' # type: str
self.parsing_mode = parsing_mode
self.data_to_extract = data_to_extract
self.configuration = configuration
class DocumentKeysOrIds(msrest.serialization.Model):
"""DocumentKeysOrIds.
:ivar document_keys: document keys to be reset.
:vartype document_keys: list[str]
:ivar datasource_document_ids: datasource document identifiers to be reset.
:vartype datasource_document_ids: list[str]
"""
_attribute_map = {
'document_keys': {'key': 'documentKeys', 'type': '[str]'},
'datasource_document_ids': {'key': 'datasourceDocumentIds', 'type': '[str]'},
}
def __init__(
self,
*,
document_keys: Optional[List[str]] = None,
datasource_document_ids: Optional[List[str]] = None,
**kwargs
):
"""
:keyword document_keys: Document keys to be reset.
:paramtype document_keys: list[str]
:keyword datasource_document_ids: Datasource document identifiers to be reset.
:paramtype datasource_document_ids: list[str]
"""
super(DocumentKeysOrIds, self).__init__(**kwargs)
self.document_keys = document_keys
self.datasource_document_ids = datasource_document_ids
class EdgeNGramTokenFilter(TokenFilter):
"""Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Must be less than the value of
maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2.
:vartype max_gram: int
:ivar side: Specifies which side of the input the n-gram should be generated from. Default is
"front". Possible values include: "front", "back".
:vartype side: str or ~azure.search.documents.indexes.models.EdgeNGramTokenFilterSide
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
'side': {'key': 'side', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
side: Optional[Union[str, "EdgeNGramTokenFilterSide"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Must be less than the value of
maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2.
:paramtype max_gram: int
:keyword side: Specifies which side of the input the n-gram should be generated from. Default
is "front". Possible values include: "front", "back".
:paramtype side: str or ~azure.search.documents.indexes.models.EdgeNGramTokenFilterSide
"""
super(EdgeNGramTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.EdgeNGramTokenFilter' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
self.side = side
class EdgeNGramTokenFilterV2(TokenFilter):
"""Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the
value of maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:vartype max_gram: int
:ivar side: Specifies which side of the input the n-gram should be generated from. Default is
"front". Possible values include: "front", "back".
:vartype side: str or ~azure.search.documents.indexes.models.EdgeNGramTokenFilterSide
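Example (an illustrative sketch; the filter name is hypothetical):

.. code-block:: python

    token_filter = EdgeNGramTokenFilterV2(
        name="my_edge_ngram",  # hypothetical name
        min_gram=2,
        max_gram=10,
        side="front",
    )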
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'min_gram': {'maximum': 300},
'max_gram': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
'side': {'key': 'side', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
side: Optional[Union[str, "EdgeNGramTokenFilterSide"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than
the value of maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:paramtype max_gram: int
:keyword side: Specifies which side of the input the n-gram should be generated from. Default
is "front". Possible values include: "front", "back".
:paramtype side: str or ~azure.search.documents.indexes.models.EdgeNGramTokenFilterSide
"""
super(EdgeNGramTokenFilterV2, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.EdgeNGramTokenFilterV2' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
self.side = side
class EdgeNGramTokenizer(LexicalTokenizer):
"""Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the
value of maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:vartype max_gram: int
:ivar token_chars: Character classes to keep in the tokens.
:vartype token_chars: list[str or ~azure.search.documents.indexes.models.TokenCharacterKind]
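Example (an illustrative sketch; the tokenizer name is hypothetical):

.. code-block:: python

    tokenizer = EdgeNGramTokenizer(
        name="my_edge_ngram_tokenizer",  # hypothetical name
        min_gram=2,
        max_gram=8,
        token_chars=["letter", "digit"],
    )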
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'min_gram': {'maximum': 300},
'max_gram': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
'token_chars': {'key': 'tokenChars', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
token_chars: Optional[List[Union[str, "TokenCharacterKind"]]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than
the value of maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:paramtype max_gram: int
:keyword token_chars: Character classes to keep in the tokens.
:paramtype token_chars: list[str or ~azure.search.documents.indexes.models.TokenCharacterKind]
"""
super(EdgeNGramTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.EdgeNGramTokenizer' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
self.token_chars = token_chars
class ElisionTokenFilter(TokenFilter):
"""Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar articles: The set of articles to remove.
:vartype articles: list[str]
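Example (an illustrative sketch configured for common French articles; the filter
name is hypothetical):

.. code-block:: python

    token_filter = ElisionTokenFilter(
        name="french_elision",  # hypothetical name
        articles=["l", "m", "t", "qu", "n", "s", "j"],
    )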
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'articles': {'key': 'articles', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
articles: Optional[List[str]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword articles: The set of articles to remove.
:paramtype articles: list[str]
"""
super(ElisionTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.ElisionTokenFilter' # type: str
self.articles = articles
class EntityLinkingSkill(SearchIndexerSkill):
"""Using the Text Analytics API, extracts linked entities from text.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
:vartype default_language_code: str
:ivar minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:vartype minimum_precision: float
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
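Example (an illustrative sketch; the source path and output name are hypothetical):

.. code-block:: python

    skill = EntityLinkingSkill(
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="entities")],
        minimum_precision=0.5,  # keep only entities with confidence above 0.5
    )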
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
'minimum_precision': {'maximum': 1, 'minimum': 0},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'minimum_precision': {'key': 'minimumPrecision', 'type': 'float'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[str] = None,
minimum_precision: Optional[float] = None,
model_version: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
:paramtype default_language_code: str
:keyword minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:paramtype minimum_precision: float
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
"""
super(EntityLinkingSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.V3.EntityLinkingSkill' # type: str
self.default_language_code = default_language_code
self.minimum_precision = minimum_precision
self.model_version = model_version
class EntityRecognitionSkill(SearchIndexerSkill):
"""Text analytics entity recognition.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar categories: A list of entity categories that should be extracted.
:vartype categories: list[str or ~azure.search.documents.indexes.models.EntityCategory]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "ar", "cs", "zh-Hans", "zh-Hant", "da", "nl", "en", "fi", "fr", "de",
"el", "hu", "it", "ja", "ko", "no", "pl", "pt-PT", "pt-BR", "ru", "es", "sv", "tr".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.EntityRecognitionSkillLanguage
:ivar include_typeless_entities: Determines whether or not to include entities which are well
known but don't conform to a pre-defined type. If this configuration is not set (default), set
to null or set to false, entities which don't conform to one of the pre-defined types will not
be surfaced.
:vartype include_typeless_entities: bool
:ivar minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:vartype minimum_precision: float
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'categories': {'key': 'categories', 'type': '[str]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'include_typeless_entities': {'key': 'includeTypelessEntities', 'type': 'bool'},
'minimum_precision': {'key': 'minimumPrecision', 'type': 'float'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
categories: Optional[List[Union[str, "EntityCategory"]]] = None,
default_language_code: Optional[Union[str, "EntityRecognitionSkillLanguage"]] = None,
include_typeless_entities: Optional[bool] = None,
minimum_precision: Optional[float] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword categories: A list of entity categories that should be extracted.
:paramtype categories: list[str or ~azure.search.documents.indexes.models.EntityCategory]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "ar", "cs", "zh-Hans", "zh-Hant", "da", "nl", "en", "fi", "fr", "de",
"el", "hu", "it", "ja", "ko", "no", "pl", "pt-PT", "pt-BR", "ru", "es", "sv", "tr".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.EntityRecognitionSkillLanguage
:keyword include_typeless_entities: Determines whether or not to include entities which are
well known but don't conform to a pre-defined type. If this configuration is not set (default),
set to null or set to false, entities which don't conform to one of the pre-defined types will
not be surfaced.
:paramtype include_typeless_entities: bool
:keyword minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:paramtype minimum_precision: float
"""
super(EntityRecognitionSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.EntityRecognitionSkill' # type: str
self.categories = categories
self.default_language_code = default_language_code
self.include_typeless_entities = include_typeless_entities
self.minimum_precision = minimum_precision
class EntityRecognitionSkillV3(SearchIndexerSkill):
"""Using the Text Analytics API, extracts entities of different types from text.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar categories: A list of entity categories that should be extracted.
:vartype categories: list[str]
:ivar default_language_code: A value indicating which language code to use. Default is en.
:vartype default_language_code: str
:ivar minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:vartype minimum_precision: float
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
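Example (an illustrative sketch; the source path, output name, and categories are
hypothetical):

.. code-block:: python

    skill = EntityRecognitionSkillV3(
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="namedEntities")],
        categories=["Person", "Location"],
        default_language_code="en",
    )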
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
'minimum_precision': {'maximum': 1, 'minimum': 0},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'categories': {'key': 'categories', 'type': '[str]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'minimum_precision': {'key': 'minimumPrecision', 'type': 'float'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
categories: Optional[List[str]] = None,
default_language_code: Optional[str] = None,
minimum_precision: Optional[float] = None,
model_version: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword categories: A list of entity categories that should be extracted.
:paramtype categories: list[str]
:keyword default_language_code: A value indicating which language code to use. Default is en.
:paramtype default_language_code: str
:keyword minimum_precision: A value between 0 and 1 that can be used to only include entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:paramtype minimum_precision: float
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
"""
super(EntityRecognitionSkillV3, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.V3.EntityRecognitionSkill' # type: str
self.categories = categories
self.default_language_code = default_language_code
self.minimum_precision = minimum_precision
self.model_version = model_version
class FieldMapping(msrest.serialization.Model):
"""Defines a mapping between a field in a data source and a target field in an index.
All required parameters must be populated in order to send to Azure.
:ivar source_field_name: Required. The name of the field in the data source.
:vartype source_field_name: str
:ivar target_field_name: The name of the target field in the index. Same as the source field
name by default.
:vartype target_field_name: str
:ivar mapping_function: A function to apply to each source field value before indexing.
:vartype mapping_function: ~azure.search.documents.indexes.models.FieldMappingFunction
"""
_validation = {
'source_field_name': {'required': True},
}
_attribute_map = {
'source_field_name': {'key': 'sourceFieldName', 'type': 'str'},
'target_field_name': {'key': 'targetFieldName', 'type': 'str'},
'mapping_function': {'key': 'mappingFunction', 'type': 'FieldMappingFunction'},
}
def __init__(
self,
*,
source_field_name: str,
target_field_name: Optional[str] = None,
mapping_function: Optional["FieldMappingFunction"] = None,
**kwargs
):
"""
:keyword source_field_name: Required. The name of the field in the data source.
:paramtype source_field_name: str
:keyword target_field_name: The name of the target field in the index. Same as the source field
name by default.
:paramtype target_field_name: str
:keyword mapping_function: A function to apply to each source field value before indexing.
:paramtype mapping_function: ~azure.search.documents.indexes.models.FieldMappingFunction
"""
super(FieldMapping, self).__init__(**kwargs)
self.source_field_name = source_field_name
self.target_field_name = target_field_name
self.mapping_function = mapping_function
class FieldMappingFunction(msrest.serialization.Model):
"""Represents a function that transforms a value from a data source before indexing.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the field mapping function.
:vartype name: str
:ivar parameters: A dictionary of parameter name/value pairs to pass to the function. Each
value must be of a primitive type.
:vartype parameters: dict[str, any]
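Example (an illustrative sketch; "base64Encode" is one of the service's built-in
mapping functions, and the field names are hypothetical):

.. code-block:: python

    mapping = FieldMapping(
        source_field_name="Id",  # hypothetical source field
        target_field_name="HotelId",  # hypothetical index field
        mapping_function=FieldMappingFunction(name="base64Encode"),
    )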
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'parameters': {'key': 'parameters', 'type': '{object}'},
}
def __init__(
self,
*,
name: str,
parameters: Optional[Dict[str, Any]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the field mapping function.
:paramtype name: str
:keyword parameters: A dictionary of parameter name/value pairs to pass to the function. Each
value must be of a primitive type.
:paramtype parameters: dict[str, any]
"""
super(FieldMappingFunction, self).__init__(**kwargs)
self.name = name
self.parameters = parameters
class FreshnessScoringFunction(ScoringFunction):
"""Defines a function that boosts scores based on the value of a date-time field.
All required parameters must be populated in order to send to Azure.
:ivar type: Required. Indicates the type of function to use. Valid values include magnitude,
freshness, distance, and tag. The function type must be lower case. Constant filled by server.
:vartype type: str
:ivar field_name: Required. The name of the field used as input to the scoring function.
:vartype field_name: str
:ivar boost: Required. A multiplier for the raw score. Must be a positive number not equal to
1.0.
:vartype boost: float
:ivar interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:vartype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:ivar parameters: Required. Parameter values for the freshness scoring function.
:vartype parameters: ~azure.search.documents.indexes.models.FreshnessScoringParameters
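Example (an illustrative sketch; the field name and duration are hypothetical):

.. code-block:: python

    import datetime

    scoring_function = FreshnessScoringFunction(
        field_name="lastRenovationDate",  # hypothetical date field
        boost=5.0,
        parameters=FreshnessScoringParameters(
            boosting_duration=datetime.timedelta(days=365),
        ),
        interpolation="quadratic",
    )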
"""
_validation = {
'type': {'required': True},
'field_name': {'required': True},
'boost': {'required': True},
'parameters': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'field_name': {'key': 'fieldName', 'type': 'str'},
'boost': {'key': 'boost', 'type': 'float'},
'interpolation': {'key': 'interpolation', 'type': 'str'},
'parameters': {'key': 'freshness', 'type': 'FreshnessScoringParameters'},
}
def __init__(
self,
*,
field_name: str,
boost: float,
parameters: "FreshnessScoringParameters",
interpolation: Optional[Union[str, "ScoringFunctionInterpolation"]] = None,
**kwargs
):
"""
:keyword field_name: Required. The name of the field used as input to the scoring function.
:paramtype field_name: str
:keyword boost: Required. A multiplier for the raw score. Must be a positive number not equal
to 1.0.
:paramtype boost: float
:keyword interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:paramtype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:keyword parameters: Required. Parameter values for the freshness scoring function.
:paramtype parameters: ~azure.search.documents.indexes.models.FreshnessScoringParameters
"""
super(FreshnessScoringFunction, self).__init__(field_name=field_name, boost=boost, interpolation=interpolation, **kwargs)
self.type = 'freshness' # type: str
self.parameters = parameters
class FreshnessScoringParameters(msrest.serialization.Model):
"""Provides parameter values to a freshness scoring function.
All required parameters must be populated in order to send to Azure.
:ivar boosting_duration: Required. The expiration period after which boosting will stop for a
particular document.
:vartype boosting_duration: ~datetime.timedelta
"""
_validation = {
'boosting_duration': {'required': True},
}
_attribute_map = {
'boosting_duration': {'key': 'boostingDuration', 'type': 'duration'},
}
def __init__(
self,
*,
boosting_duration: datetime.timedelta,
**kwargs
):
"""
:keyword boosting_duration: Required. The expiration period after which boosting will stop for
a particular document.
:paramtype boosting_duration: ~datetime.timedelta
"""
super(FreshnessScoringParameters, self).__init__(**kwargs)
self.boosting_duration = boosting_duration
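# Illustrative usage sketch: a freshness function that doubles the relevance
# score of documents whose 'lastUpdated' field is within the last 30 days,
# tapering quadratically. The field name is hypothetical.
def _example_freshness_scoring() -> "FreshnessScoringFunction":
    return FreshnessScoringFunction(
        field_name="lastUpdated",
        boost=2.0,
        interpolation="quadratic",
        parameters=FreshnessScoringParameters(
            boosting_duration=datetime.timedelta(days=30),
        ),
    )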
class GetIndexStatisticsResult(msrest.serialization.Model):
"""Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar document_count: Required. The number of documents in the index.
:vartype document_count: long
:ivar storage_size: Required. The amount of storage in bytes consumed by the index.
:vartype storage_size: long
"""
_validation = {
'document_count': {'required': True, 'readonly': True},
'storage_size': {'required': True, 'readonly': True},
}
_attribute_map = {
'document_count': {'key': 'documentCount', 'type': 'long'},
'storage_size': {'key': 'storageSize', 'type': 'long'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(GetIndexStatisticsResult, self).__init__(**kwargs)
self.document_count = None
self.storage_size = None
class HighWaterMarkChangeDetectionPolicy(DataChangeDetectionPolicy):
"""Defines a data change detection policy that captures changes based on the value of a high water mark column.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the data change detection
policy. Constant filled by server.
:vartype odata_type: str
:ivar high_water_mark_column_name: Required. The name of the high water mark column.
:vartype high_water_mark_column_name: str
"""
_validation = {
'odata_type': {'required': True},
'high_water_mark_column_name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'high_water_mark_column_name': {'key': 'highWaterMarkColumnName', 'type': 'str'},
}
def __init__(
self,
*,
high_water_mark_column_name: str,
**kwargs
):
"""
:keyword high_water_mark_column_name: Required. The name of the high water mark column.
:paramtype high_water_mark_column_name: str
"""
super(HighWaterMarkChangeDetectionPolicy, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy' # type: str
self.high_water_mark_column_name = high_water_mark_column_name
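# Illustrative usage sketch: track changes via a last-modified timestamp
# column; '_ts' is the conventional choice for Cosmos DB sources and is only
# an assumption here.
def _example_high_water_mark_policy() -> "HighWaterMarkChangeDetectionPolicy":
    return HighWaterMarkChangeDetectionPolicy(high_water_mark_column_name="_ts")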
class ImageAnalysisSkill(SearchIndexerSkill):
"""A skill that analyzes image files. It extracts a rich set of visual features based on the image content.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "en", "es", "ja", "pt", "zh".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.ImageAnalysisSkillLanguage
:ivar visual_features: A list of visual features.
:vartype visual_features: list[str or ~azure.search.documents.indexes.models.VisualFeature]
:ivar details: A list of strings indicating which domain-specific details to return.
:vartype details: list[str or ~azure.search.documents.indexes.models.ImageDetail]
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'visual_features': {'key': 'visualFeatures', 'type': '[str]'},
'details': {'key': 'details', 'type': '[str]'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "ImageAnalysisSkillLanguage"]] = None,
visual_features: Optional[List[Union[str, "VisualFeature"]]] = None,
details: Optional[List[Union[str, "ImageDetail"]]] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "en", "es", "ja", "pt", "zh".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.ImageAnalysisSkillLanguage
:keyword visual_features: A list of visual features.
:paramtype visual_features: list[str or ~azure.search.documents.indexes.models.VisualFeature]
:keyword details: A list of strings indicating which domain-specific details to return.
:paramtype details: list[str or ~azure.search.documents.indexes.models.ImageDetail]
"""
super(ImageAnalysisSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Vision.ImageAnalysisSkill' # type: str
self.default_language_code = default_language_code
self.visual_features = visual_features
self.details = details
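# Illustrative usage sketch: analyze each normalized image and emit tags and a
# caption into the enrichment tree. The paths and target names follow common
# skillset conventions but are assumptions here; OutputFieldMappingEntry is
# defined later in this module.
def _example_image_analysis_skill() -> "ImageAnalysisSkill":
    return ImageAnalysisSkill(
        context="/document/normalized_images/*",
        inputs=[
            InputFieldMappingEntry(name="image", source="/document/normalized_images/*"),
        ],
        outputs=[
            OutputFieldMappingEntry(name="tags", target_name="imageTags"),
            OutputFieldMappingEntry(name="description", target_name="imageCaption"),
        ],
        visual_features=["tags", "description"],
    )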
class IndexerCurrentState(msrest.serialization.Model):
"""Represents all of the state that defines and dictates the indexer's current execution.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar mode: The mode the indexer is running in. Possible values include: "indexingAllDocs",
"indexingResetDocs".
:vartype mode: str or ~azure.search.documents.indexes.models.IndexingMode
:ivar all_docs_initial_change_tracking_state: Change tracking state used when indexing starts
on all documents in the datasource.
:vartype all_docs_initial_change_tracking_state: str
:ivar all_docs_final_change_tracking_state: Change tracking state value when indexing finishes
on all documents in the datasource.
:vartype all_docs_final_change_tracking_state: str
:ivar reset_docs_initial_change_tracking_state: Change tracking state used when indexing starts
on select, reset documents in the datasource.
:vartype reset_docs_initial_change_tracking_state: str
:ivar reset_docs_final_change_tracking_state: Change tracking state value when indexing
finishes on select, reset documents in the datasource.
:vartype reset_docs_final_change_tracking_state: str
:ivar reset_document_keys: The list of document keys that have been reset. The document key is
the document's unique identifier for the data in the search index. The indexer will prioritize
selectively re-ingesting these keys.
:vartype reset_document_keys: list[str]
:ivar reset_datasource_document_ids: The list of datasource document ids that have been reset.
The datasource document id is the unique identifier for the data in the datasource. The indexer
will prioritize selectively re-ingesting these ids.
:vartype reset_datasource_document_ids: list[str]
"""
_validation = {
'mode': {'readonly': True},
'all_docs_initial_change_tracking_state': {'readonly': True},
'all_docs_final_change_tracking_state': {'readonly': True},
'reset_docs_initial_change_tracking_state': {'readonly': True},
'reset_docs_final_change_tracking_state': {'readonly': True},
'reset_document_keys': {'readonly': True},
'reset_datasource_document_ids': {'readonly': True},
}
_attribute_map = {
'mode': {'key': 'mode', 'type': 'str'},
'all_docs_initial_change_tracking_state': {'key': 'allDocsInitialChangeTrackingState', 'type': 'str'},
'all_docs_final_change_tracking_state': {'key': 'allDocsFinalChangeTrackingState', 'type': 'str'},
'reset_docs_initial_change_tracking_state': {'key': 'resetDocsInitialChangeTrackingState', 'type': 'str'},
'reset_docs_final_change_tracking_state': {'key': 'resetDocsFinalChangeTrackingState', 'type': 'str'},
'reset_document_keys': {'key': 'resetDocumentKeys', 'type': '[str]'},
'reset_datasource_document_ids': {'key': 'resetDatasourceDocumentIds', 'type': '[str]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(IndexerCurrentState, self).__init__(**kwargs)
self.mode = None
self.all_docs_initial_change_tracking_state = None
self.all_docs_final_change_tracking_state = None
self.reset_docs_initial_change_tracking_state = None
self.reset_docs_final_change_tracking_state = None
self.reset_document_keys = None
self.reset_datasource_document_ids = None
class IndexerExecutionResult(msrest.serialization.Model):
"""Represents the result of an individual indexer execution.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar status: Required. The outcome of this indexer execution. Possible values include:
"transientFailure", "success", "inProgress", "reset".
:vartype status: str or ~azure.search.documents.indexes.models.IndexerExecutionStatus
:ivar status_detail: The outcome of this indexer execution. Possible values include:
"resetDocs".
:vartype status_detail: str or
~azure.search.documents.indexes.models.IndexerExecutionStatusDetail
:ivar current_state: All of the state that defines and dictates the indexer's current
execution.
:vartype current_state: ~azure.search.documents.indexes.models.IndexerCurrentState
:ivar error_message: The error message indicating the top-level error, if any.
:vartype error_message: str
:ivar start_time: The start time of this indexer execution.
:vartype start_time: ~datetime.datetime
:ivar end_time: The end time of this indexer execution, if the execution has already completed.
:vartype end_time: ~datetime.datetime
:ivar errors: Required. The item-level indexing errors.
:vartype errors: list[~azure.search.documents.indexes.models.SearchIndexerError]
:ivar warnings: Required. The item-level indexing warnings.
:vartype warnings: list[~azure.search.documents.indexes.models.SearchIndexerWarning]
:ivar item_count: Required. The number of items that were processed during this indexer
execution. This includes both successfully processed items and items where indexing was
attempted but failed.
:vartype item_count: int
:ivar failed_item_count: Required. The number of items that failed to be indexed during this
indexer execution.
:vartype failed_item_count: int
:ivar initial_tracking_state: Change tracking state with which an indexer execution started.
:vartype initial_tracking_state: str
:ivar final_tracking_state: Change tracking state with which an indexer execution finished.
:vartype final_tracking_state: str
"""
_validation = {
'status': {'required': True, 'readonly': True},
'status_detail': {'readonly': True},
'current_state': {'readonly': True},
'error_message': {'readonly': True},
'start_time': {'readonly': True},
'end_time': {'readonly': True},
'errors': {'required': True, 'readonly': True},
'warnings': {'required': True, 'readonly': True},
'item_count': {'required': True, 'readonly': True},
'failed_item_count': {'required': True, 'readonly': True},
'initial_tracking_state': {'readonly': True},
'final_tracking_state': {'readonly': True},
}
_attribute_map = {
'status': {'key': 'status', 'type': 'str'},
'status_detail': {'key': 'statusDetail', 'type': 'str'},
'current_state': {'key': 'currentState', 'type': 'IndexerCurrentState'},
'error_message': {'key': 'errorMessage', 'type': 'str'},
'start_time': {'key': 'startTime', 'type': 'iso-8601'},
'end_time': {'key': 'endTime', 'type': 'iso-8601'},
'errors': {'key': 'errors', 'type': '[SearchIndexerError]'},
'warnings': {'key': 'warnings', 'type': '[SearchIndexerWarning]'},
'item_count': {'key': 'itemsProcessed', 'type': 'int'},
'failed_item_count': {'key': 'itemsFailed', 'type': 'int'},
'initial_tracking_state': {'key': 'initialTrackingState', 'type': 'str'},
'final_tracking_state': {'key': 'finalTrackingState', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(IndexerExecutionResult, self).__init__(**kwargs)
self.status = None
self.status_detail = None
self.current_state = None
self.error_message = None
self.start_time = None
self.end_time = None
self.errors = None
self.warnings = None
self.item_count = None
self.failed_item_count = None
self.initial_tracking_state = None
self.final_tracking_state = None
class IndexingParameters(msrest.serialization.Model):
"""Represents parameters for indexer execution.
:ivar batch_size: The number of items that are read from the data source and indexed as a
single batch in order to improve performance. The default depends on the data source type.
:vartype batch_size: int
:ivar max_failed_items: The maximum number of items that can fail indexing for indexer
execution to still be considered successful. -1 means no limit. Default is 0.
:vartype max_failed_items: int
:ivar max_failed_items_per_batch: The maximum number of items in a single batch that can fail
indexing for the batch to still be considered successful. -1 means no limit. Default is 0.
:vartype max_failed_items_per_batch: int
:ivar configuration: A dictionary of indexer-specific configuration properties. Each name is
the name of a specific property. Each value must be of a primitive type.
:vartype configuration: ~azure.search.documents.indexes.models.IndexingParametersConfiguration
"""
_attribute_map = {
'batch_size': {'key': 'batchSize', 'type': 'int'},
'max_failed_items': {'key': 'maxFailedItems', 'type': 'int'},
'max_failed_items_per_batch': {'key': 'maxFailedItemsPerBatch', 'type': 'int'},
'configuration': {'key': 'configuration', 'type': 'IndexingParametersConfiguration'},
}
def __init__(
self,
*,
batch_size: Optional[int] = None,
max_failed_items: Optional[int] = 0,
max_failed_items_per_batch: Optional[int] = 0,
configuration: Optional["IndexingParametersConfiguration"] = None,
**kwargs
):
"""
:keyword batch_size: The number of items that are read from the data source and indexed as a
single batch in order to improve performance. The default depends on the data source type.
:paramtype batch_size: int
:keyword max_failed_items: The maximum number of items that can fail indexing for indexer
execution to still be considered successful. -1 means no limit. Default is 0.
:paramtype max_failed_items: int
:keyword max_failed_items_per_batch: The maximum number of items in a single batch that can
fail indexing for the batch to still be considered successful. -1 means no limit. Default is 0.
:paramtype max_failed_items_per_batch: int
:keyword configuration: A dictionary of indexer-specific configuration properties. Each name is
the name of a specific property. Each value must be of a primitive type.
:paramtype configuration:
~azure.search.documents.indexes.models.IndexingParametersConfiguration
"""
super(IndexingParameters, self).__init__(**kwargs)
self.batch_size = batch_size
self.max_failed_items = max_failed_items
self.max_failed_items_per_batch = max_failed_items_per_batch
self.configuration = configuration
class IndexingParametersConfiguration(msrest.serialization.Model):
"""A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.
:ivar additional_properties: Unmatched properties from the message are deserialized to this
collection.
:vartype additional_properties: dict[str, any]
:ivar parsing_mode: Represents the parsing mode for indexing from an Azure blob data source.
Possible values include: "default", "text", "delimitedText", "json", "jsonArray", "jsonLines".
Default value: "default".
:vartype parsing_mode: str or ~azure.search.documents.indexes.models.BlobIndexerParsingMode
:ivar excluded_file_name_extensions: Comma-delimited list of filename extensions to ignore when
processing from Azure blob storage. For example, you could exclude ".png, .mp4" to skip over
those files during indexing.
:vartype excluded_file_name_extensions: str
:ivar indexed_file_name_extensions: Comma-delimited list of filename extensions to select when
processing from Azure blob storage. For example, you could focus indexing on specific
application files ".docx, .pptx, .msg" to specifically include those file types.
:vartype indexed_file_name_extensions: str
:ivar fail_on_unsupported_content_type: For Azure blobs, set to false if you want to continue
indexing when an unsupported content type is encountered, and you don't know all the content
types (file extensions) in advance.
:vartype fail_on_unsupported_content_type: bool
:ivar fail_on_unprocessable_document: For Azure blobs, set to false if you want to continue
indexing if a document fails indexing.
:vartype fail_on_unprocessable_document: bool
:ivar index_storage_metadata_only_for_oversized_documents: For Azure blobs, set this property
to true to still index storage metadata for blob content that is too large to process.
Oversized blobs are treated as errors by default. For limits on blob size, see
https://docs.microsoft.com/azure/search/search-limits-quotas-capacity.
:vartype index_storage_metadata_only_for_oversized_documents: bool
:ivar delimited_text_headers: For CSV blobs, specifies a comma-delimited list of column
headers, useful for mapping source fields to destination fields in an index.
:vartype delimited_text_headers: str
:ivar delimited_text_delimiter: For CSV blobs, specifies the end-of-line single-character
delimiter for CSV files where each line starts a new document (for example, "|").
:vartype delimited_text_delimiter: str
:ivar first_line_contains_headers: For CSV blobs, indicates that the first (non-blank) line of
each blob contains headers.
:vartype first_line_contains_headers: bool
:ivar document_root: For JSON arrays, given a structured or semi-structured document, you can
specify a path to the array using this property.
:vartype document_root: str
:ivar data_to_extract: Specifies the data to extract from Azure blob storage and tells the
indexer which data to extract from image content when "imageAction" is set to a value other
than "none". This applies to embedded image content in a .PDF or other application, or image
files such as .jpg and .png, in Azure blobs. Possible values include: "storageMetadata",
"allMetadata", "contentAndMetadata". Default value: "contentAndMetadata".
:vartype data_to_extract: str or
~azure.search.documents.indexes.models.BlobIndexerDataToExtract
:ivar image_action: Determines how to process embedded images and image files in Azure blob
storage. Setting the "imageAction" configuration to any value other than "none" requires that
a skillset also be attached to that indexer. Possible values include: "none",
"generateNormalizedImages", "generateNormalizedImagePerPage". Default value: "none".
:vartype image_action: str or ~azure.search.documents.indexes.models.BlobIndexerImageAction
:ivar allow_skillset_to_read_file_data: If true, will create a path /document/file_data that
is an object representing the original file data downloaded from your blob data source. This
allows you to pass the original file data to a custom skill for processing within the
enrichment pipeline, or to the Document Extraction skill.
:vartype allow_skillset_to_read_file_data: bool
:ivar pdf_text_rotation_algorithm: Determines algorithm for text extraction from PDF files in
Azure blob storage. Possible values include: "none", "detectAngles". Default value: "none".
:vartype pdf_text_rotation_algorithm: str or
~azure.search.documents.indexes.models.BlobIndexerPDFTextRotationAlgorithm
:ivar execution_environment: Specifies the environment in which the indexer should execute.
Possible values include: "standard", "private". Default value: "standard".
:vartype execution_environment: str or
~azure.search.documents.indexes.models.IndexerExecutionEnvironment
:ivar query_timeout: Increases the timeout beyond the 5-minute default for Azure SQL database
data sources, specified in the format "hh:mm:ss".
:vartype query_timeout: str
"""
_attribute_map = {
'additional_properties': {'key': '', 'type': '{object}'},
'parsing_mode': {'key': 'parsingMode', 'type': 'str'},
'excluded_file_name_extensions': {'key': 'excludedFileNameExtensions', 'type': 'str'},
'indexed_file_name_extensions': {'key': 'indexedFileNameExtensions', 'type': 'str'},
'fail_on_unsupported_content_type': {'key': 'failOnUnsupportedContentType', 'type': 'bool'},
'fail_on_unprocessable_document': {'key': 'failOnUnprocessableDocument', 'type': 'bool'},
'index_storage_metadata_only_for_oversized_documents': {'key': 'indexStorageMetadataOnlyForOversizedDocuments', 'type': 'bool'},
'delimited_text_headers': {'key': 'delimitedTextHeaders', 'type': 'str'},
'delimited_text_delimiter': {'key': 'delimitedTextDelimiter', 'type': 'str'},
'first_line_contains_headers': {'key': 'firstLineContainsHeaders', 'type': 'bool'},
'document_root': {'key': 'documentRoot', 'type': 'str'},
'data_to_extract': {'key': 'dataToExtract', 'type': 'str'},
'image_action': {'key': 'imageAction', 'type': 'str'},
'allow_skillset_to_read_file_data': {'key': 'allowSkillsetToReadFileData', 'type': 'bool'},
'pdf_text_rotation_algorithm': {'key': 'pdfTextRotationAlgorithm', 'type': 'str'},
'execution_environment': {'key': 'executionEnvironment', 'type': 'str'},
'query_timeout': {'key': 'queryTimeout', 'type': 'str'},
}
def __init__(
self,
*,
additional_properties: Optional[Dict[str, Any]] = None,
parsing_mode: Optional[Union[str, "BlobIndexerParsingMode"]] = "default",
excluded_file_name_extensions: Optional[str] = "",
indexed_file_name_extensions: Optional[str] = "",
fail_on_unsupported_content_type: Optional[bool] = False,
fail_on_unprocessable_document: Optional[bool] = False,
index_storage_metadata_only_for_oversized_documents: Optional[bool] = False,
delimited_text_headers: Optional[str] = None,
delimited_text_delimiter: Optional[str] = None,
first_line_contains_headers: Optional[bool] = True,
document_root: Optional[str] = None,
data_to_extract: Optional[Union[str, "BlobIndexerDataToExtract"]] = "contentAndMetadata",
image_action: Optional[Union[str, "BlobIndexerImageAction"]] = "none",
allow_skillset_to_read_file_data: Optional[bool] = False,
pdf_text_rotation_algorithm: Optional[Union[str, "BlobIndexerPDFTextRotationAlgorithm"]] = "none",
execution_environment: Optional[Union[str, "IndexerExecutionEnvironment"]] = "standard",
query_timeout: Optional[str] = "00:05:00",
**kwargs
):
"""
:keyword additional_properties: Unmatched properties from the message are deserialized to this
collection.
:paramtype additional_properties: dict[str, any]
:keyword parsing_mode: Represents the parsing mode for indexing from an Azure blob data source.
Possible values include: "default", "text", "delimitedText", "json", "jsonArray", "jsonLines".
Default value: "default".
:paramtype parsing_mode: str or ~azure.search.documents.indexes.models.BlobIndexerParsingMode
:keyword excluded_file_name_extensions: Comma-delimited list of filename extensions to ignore
when processing from Azure blob storage. For example, you could exclude ".png, .mp4" to skip
over those files during indexing.
:paramtype excluded_file_name_extensions: str
:keyword indexed_file_name_extensions: Comma-delimited list of filename extensions to select
when processing from Azure blob storage. For example, you could focus indexing on specific
application files ".docx, .pptx, .msg" to specifically include those file types.
:paramtype indexed_file_name_extensions: str
:keyword fail_on_unsupported_content_type: For Azure blobs, set to false if you want to
continue indexing when an unsupported content type is encountered, and you don't know all the
content types (file extensions) in advance.
:paramtype fail_on_unsupported_content_type: bool
:keyword fail_on_unprocessable_document: For Azure blobs, set to false if you want to continue
indexing if a document fails indexing.
:paramtype fail_on_unprocessable_document: bool
:keyword index_storage_metadata_only_for_oversized_documents: For Azure blobs, set this
property to true to still index storage metadata for blob content that is too large to process.
Oversized blobs are treated as errors by default. For limits on blob size, see
https://docs.microsoft.com/azure/search/search-limits-quotas-capacity.
:paramtype index_storage_metadata_only_for_oversized_documents: bool
:keyword delimited_text_headers: For CSV blobs, specifies a comma-delimited list of column
headers, useful for mapping source fields to destination fields in an index.
:paramtype delimited_text_headers: str
:keyword delimited_text_delimiter: For CSV blobs, specifies the end-of-line single-character
delimiter for CSV files where each line starts a new document (for example, "|").
:paramtype delimited_text_delimiter: str
:keyword first_line_contains_headers: For CSV blobs, indicates that the first (non-blank) line
of each blob contains headers.
:paramtype first_line_contains_headers: bool
:keyword document_root: For JSON arrays, given a structured or semi-structured document, you
can specify a path to the array using this property.
:paramtype document_root: str
:keyword data_to_extract: Specifies the data to extract from Azure blob storage and tells the
indexer which data to extract from image content when "imageAction" is set to a value other
than "none". This applies to embedded image content in a .PDF or other application, or image
files such as .jpg and .png, in Azure blobs. Possible values include: "storageMetadata",
"allMetadata", "contentAndMetadata". Default value: "contentAndMetadata".
:paramtype data_to_extract: str or
~azure.search.documents.indexes.models.BlobIndexerDataToExtract
:keyword image_action: Determines how to process embedded images and image files in Azure blob
storage. Setting the "imageAction" configuration to any value other than "none" requires that
a skillset also be attached to that indexer. Possible values include: "none",
"generateNormalizedImages", "generateNormalizedImagePerPage". Default value: "none".
:paramtype image_action: str or ~azure.search.documents.indexes.models.BlobIndexerImageAction
:keyword allow_skillset_to_read_file_data: If true, will create a path /document/file_data
that is an object representing the original file data downloaded from your blob data source.
This allows you to pass the original file data to a custom skill for processing within the
enrichment pipeline, or to the Document Extraction skill.
:paramtype allow_skillset_to_read_file_data: bool
:keyword pdf_text_rotation_algorithm: Determines algorithm for text extraction from PDF files
in Azure blob storage. Possible values include: "none", "detectAngles". Default value: "none".
:paramtype pdf_text_rotation_algorithm: str or
~azure.search.documents.indexes.models.BlobIndexerPDFTextRotationAlgorithm
:keyword execution_environment: Specifies the environment in which the indexer should execute.
Possible values include: "standard", "private". Default value: "standard".
:paramtype execution_environment: str or
~azure.search.documents.indexes.models.IndexerExecutionEnvironment
:keyword query_timeout: Increases the timeout beyond the 5-minute default for Azure SQL
database data sources, specified in the format "hh:mm:ss".
:paramtype query_timeout: str
"""
super(IndexingParametersConfiguration, self).__init__(**kwargs)
self.additional_properties = additional_properties
self.parsing_mode = parsing_mode
self.excluded_file_name_extensions = excluded_file_name_extensions
self.indexed_file_name_extensions = indexed_file_name_extensions
self.fail_on_unsupported_content_type = fail_on_unsupported_content_type
self.fail_on_unprocessable_document = fail_on_unprocessable_document
self.index_storage_metadata_only_for_oversized_documents = index_storage_metadata_only_for_oversized_documents
self.delimited_text_headers = delimited_text_headers
self.delimited_text_delimiter = delimited_text_delimiter
self.first_line_contains_headers = first_line_contains_headers
self.document_root = document_root
self.data_to_extract = data_to_extract
self.image_action = image_action
self.allow_skillset_to_read_file_data = allow_skillset_to_read_file_data
self.pdf_text_rotation_algorithm = pdf_text_rotation_algorithm
self.execution_environment = execution_environment
self.query_timeout = query_timeout
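# Illustrative usage sketch: indexer parameters for a blob data source holding
# JSON array files, tolerating up to ten failed items per execution. The
# document root and file extension are hypothetical.
def _example_indexing_parameters() -> "IndexingParameters":
    configuration = IndexingParametersConfiguration(
        parsing_mode="jsonArray",
        document_root="/items",
        indexed_file_name_extensions=".json",
        data_to_extract="contentAndMetadata",
    )
    return IndexingParameters(
        batch_size=100,
        max_failed_items=10,
        configuration=configuration,
    )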
class IndexingSchedule(msrest.serialization.Model):
"""Represents a schedule for indexer execution.
All required parameters must be populated in order to send to Azure.
:ivar interval: Required. The interval of time between indexer executions.
:vartype interval: ~datetime.timedelta
:ivar start_time: The time when an indexer should start running.
:vartype start_time: ~datetime.datetime
"""
_validation = {
'interval': {'required': True},
}
_attribute_map = {
'interval': {'key': 'interval', 'type': 'duration'},
'start_time': {'key': 'startTime', 'type': 'iso-8601'},
}
def __init__(
self,
*,
interval: datetime.timedelta,
start_time: Optional[datetime.datetime] = None,
**kwargs
):
"""
:keyword interval: Required. The interval of time between indexer executions.
:paramtype interval: ~datetime.timedelta
:keyword start_time: The time when an indexer should start running.
:paramtype start_time: ~datetime.datetime
"""
super(IndexingSchedule, self).__init__(**kwargs)
self.interval = interval
self.start_time = start_time
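# Illustrative usage sketch: run the indexer every hour, starting from an
# arbitrary fixed UTC instant.
def _example_indexing_schedule() -> "IndexingSchedule":
    return IndexingSchedule(
        interval=datetime.timedelta(hours=1),
        start_time=datetime.datetime(2023, 1, 1, tzinfo=datetime.timezone.utc),
    )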
class InputFieldMappingEntry(msrest.serialization.Model):
"""Input field mapping for a skill.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the input.
:vartype name: str
:ivar source: The source of the input.
:vartype source: str
:ivar source_context: The source context used for selecting recursive inputs.
:vartype source_context: str
:ivar inputs: The recursive inputs used when creating a complex type.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
}
def __init__(
self,
*,
name: str,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the input.
:paramtype name: str
:keyword source: The source of the input.
:paramtype source: str
:keyword source_context: The source context used for selecting recursive inputs.
:paramtype source_context: str
:keyword inputs: The recursive inputs used when creating a complex type.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
"""
super(InputFieldMappingEntry, self).__init__(**kwargs)
self.name = name
self.source = source
self.source_context = source_context
self.inputs = inputs
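# Illustrative usage sketch: assemble a complex 'address' input from two
# sub-fields via recursive inputs. The enrichment-tree paths are hypothetical.
def _example_complex_input() -> "InputFieldMappingEntry":
    return InputFieldMappingEntry(
        name="address",
        source_context="/document",
        inputs=[
            InputFieldMappingEntry(name="city", source="/document/city"),
            InputFieldMappingEntry(name="country", source="/document/country"),
        ],
    )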
class KeepTokenFilter(TokenFilter):
"""A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar keep_words: Required. The list of words to keep.
:vartype keep_words: list[str]
:ivar lower_case_keep_words: A value indicating whether to lower case all words first. Default
is false.
:vartype lower_case_keep_words: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'keep_words': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'keep_words': {'key': 'keepWords', 'type': '[str]'},
'lower_case_keep_words': {'key': 'keepWordsCase', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
keep_words: List[str],
lower_case_keep_words: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword keep_words: Required. The list of words to keep.
:paramtype keep_words: list[str]
:keyword lower_case_keep_words: A value indicating whether to lower case all words first.
Default is false.
:paramtype lower_case_keep_words: bool
"""
super(KeepTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.KeepTokenFilter' # type: str
self.keep_words = keep_words
self.lower_case_keep_words = lower_case_keep_words
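# Illustrative usage sketch: keep only an allow-list of tokens, lower-casing
# the input before comparison. The filter name and word list are hypothetical.
def _example_keep_token_filter() -> "KeepTokenFilter":
    return KeepTokenFilter(
        name="keep_brands",
        keep_words=["contoso", "fabrikam"],
        lower_case_keep_words=True,
    )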
class KeyPhraseExtractionSkill(SearchIndexerSkill):
"""A skill that uses text analytics for key phrase extraction.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "nl", "en", "fi", "fr", "de", "it", "ja", "ko", "no", "pl",
"pt-PT", "pt-BR", "ru", "es", "sv".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.KeyPhraseExtractionSkillLanguage
:ivar max_key_phrase_count: A number indicating how many key phrases to return. If absent, all
identified key phrases will be returned.
:vartype max_key_phrase_count: int
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'max_key_phrase_count': {'key': 'maxKeyPhraseCount', 'type': 'int'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "KeyPhraseExtractionSkillLanguage"]] = None,
max_key_phrase_count: Optional[int] = None,
model_version: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "nl", "en", "fi", "fr", "de", "it", "ja", "ko", "no", "pl",
"pt-PT", "pt-BR", "ru", "es", "sv".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.KeyPhraseExtractionSkillLanguage
:keyword max_key_phrase_count: A number indicating how many key phrases to return. If absent,
all identified key phrases will be returned.
:paramtype max_key_phrase_count: int
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
"""
super(KeyPhraseExtractionSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.KeyPhraseExtractionSkill' # type: str
self.default_language_code = default_language_code
self.max_key_phrase_count = max_key_phrase_count
self.model_version = model_version
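# Illustrative usage sketch: extract at most five key phrases from the merged
# document content. Paths and target names are assumptions.
def _example_key_phrase_skill() -> "KeyPhraseExtractionSkill":
    return KeyPhraseExtractionSkill(
        context="/document",
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="keyPhrases", target_name="keyPhrases")],
        default_language_code="en",
        max_key_phrase_count=5,
    )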
class KeywordMarkerTokenFilter(TokenFilter):
"""Marks terms as keywords. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar keywords: Required. A list of words to mark as keywords.
:vartype keywords: list[str]
:ivar ignore_case: A value indicating whether to ignore case. If true, all words are converted
to lower case first. Default is false.
:vartype ignore_case: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'keywords': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'keywords': {'key': 'keywords', 'type': '[str]'},
'ignore_case': {'key': 'ignoreCase', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
keywords: List[str],
ignore_case: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword keywords: Required. A list of words to mark as keywords.
:paramtype keywords: list[str]
:keyword ignore_case: A value indicating whether to ignore case. If true, all words are
converted to lower case first. Default is false.
:paramtype ignore_case: bool
"""
super(KeywordMarkerTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.KeywordMarkerTokenFilter' # type: str
self.keywords = keywords
self.ignore_case = ignore_case
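# Illustrative usage sketch: protect brand names from downstream stemming by
# marking them as keywords regardless of case. Names are hypothetical.
def _example_keyword_marker_filter() -> "KeywordMarkerTokenFilter":
    return KeywordMarkerTokenFilter(
        name="protect_brands",
        keywords=["Contoso", "Fabrikam"],
        ignore_case=True,
    )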
class KeywordTokenizer(LexicalTokenizer):
"""Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar buffer_size: The read buffer size in bytes. Default is 256.
:vartype buffer_size: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'buffer_size': {'key': 'bufferSize', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
buffer_size: Optional[int] = 256,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword buffer_size: The read buffer size in bytes. Default is 256.
:paramtype buffer_size: int
"""
super(KeywordTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.KeywordTokenizer' # type: str
self.buffer_size = buffer_size
class KeywordTokenizerV2(LexicalTokenizer):
"""Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 256. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:vartype max_token_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 256,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 256. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:paramtype max_token_length: int
"""
super(KeywordTokenizerV2, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.KeywordTokenizerV2' # type: str
self.max_token_length = max_token_length
class LanguageDetectionSkill(SearchIndexerSkill):
"""A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_country_hint: A country code to use as a hint to the language detection model if
it cannot disambiguate the language.
:vartype default_country_hint: str
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_country_hint': {'key': 'defaultCountryHint', 'type': 'str'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_country_hint: Optional[str] = None,
model_version: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_country_hint: A country code to use as a hint to the language detection model
if it cannot disambiguate the language.
:paramtype default_country_hint: str
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
"""
super(LanguageDetectionSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.LanguageDetectionSkill' # type: str
self.default_country_hint = default_country_hint
self.model_version = model_version
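# Illustrative usage sketch: detect each document's language, hinting 'us'
# when the model cannot disambiguate. Paths and target names are assumptions.
def _example_language_detection_skill() -> "LanguageDetectionSkill":
    return LanguageDetectionSkill(
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="languageCode", target_name="languageCode")],
        default_country_hint="us",
    )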
class LengthTokenFilter(TokenFilter):
"""Removes words that are too long or too short. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar min_length: The minimum length in characters. Default is 0. Maximum is 300. Must be less
than the value of max.
:vartype min_length: int
:ivar max_length: The maximum length in characters. Default and maximum is 300.
:vartype max_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'min_length': {'maximum': 300},
'max_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_length': {'key': 'min', 'type': 'int'},
'max_length': {'key': 'max', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
min_length: Optional[int] = 0,
max_length: Optional[int] = 300,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_length: The minimum length in characters. Default is 0. Maximum is 300. Must be
less than the value of max.
:paramtype min_length: int
:keyword max_length: The maximum length in characters. Default and maximum is 300.
:paramtype max_length: int
"""
super(LengthTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.LengthTokenFilter' # type: str
self.min_length = min_length
self.max_length = max_length
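# Illustrative usage sketch: drop very short and very long tokens before
# indexing; the bounds are hypothetical but satisfy min < max <= 300.
def _example_length_token_filter() -> "LengthTokenFilter":
    return LengthTokenFilter(name="length_3_to_25", min_length=3, max_length=25)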
class LimitTokenFilter(TokenFilter):
"""Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar max_token_count: The maximum number of tokens to produce. Default is 1.
:vartype max_token_count: int
:ivar consume_all_tokens: A value indicating whether all tokens from the input must be consumed
even if maxTokenCount is reached. Default is false.
:vartype consume_all_tokens: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_count': {'key': 'maxTokenCount', 'type': 'int'},
'consume_all_tokens': {'key': 'consumeAllTokens', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
max_token_count: Optional[int] = 1,
consume_all_tokens: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_count: The maximum number of tokens to produce. Default is 1.
:paramtype max_token_count: int
:keyword consume_all_tokens: A value indicating whether all tokens from the input must be
consumed even if maxTokenCount is reached. Default is false.
:paramtype consume_all_tokens: bool
"""
super(LimitTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.LimitTokenFilter' # type: str
self.max_token_count = max_token_count
self.consume_all_tokens = consume_all_tokens
class ListAliasesResult(msrest.serialization.Model):
"""Response from a List Aliases request. If successful, it includes the associated index mappings for all aliases.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar aliases: Required. The aliases in the Search service.
:vartype aliases: list[~azure.search.documents.indexes.models.SearchAlias]
"""
_validation = {
'aliases': {'required': True, 'readonly': True},
}
_attribute_map = {
'aliases': {'key': 'value', 'type': '[SearchAlias]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListAliasesResult, self).__init__(**kwargs)
self.aliases = None
class ListDataSourcesResult(msrest.serialization.Model):
"""Response from a List Datasources request. If successful, it includes the full definitions of all datasources.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar data_sources: Required. The datasources in the Search service.
:vartype data_sources: list[~azure.search.documents.indexes.models.SearchIndexerDataSource]
"""
_validation = {
'data_sources': {'required': True, 'readonly': True},
}
_attribute_map = {
'data_sources': {'key': 'value', 'type': '[SearchIndexerDataSource]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListDataSourcesResult, self).__init__(**kwargs)
self.data_sources = None
class ListIndexersResult(msrest.serialization.Model):
"""Response from a List Indexers request. If successful, it includes the full definitions of all indexers.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar indexers: Required. The indexers in the Search service.
:vartype indexers: list[~azure.search.documents.indexes.models.SearchIndexer]
"""
_validation = {
'indexers': {'required': True, 'readonly': True},
}
_attribute_map = {
'indexers': {'key': 'value', 'type': '[SearchIndexer]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListIndexersResult, self).__init__(**kwargs)
self.indexers = None
class ListIndexesResult(msrest.serialization.Model):
"""Response from a List Indexes request. If successful, it includes the full definitions of all indexes.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar indexes: Required. The indexes in the Search service.
:vartype indexes: list[~azure.search.documents.indexes.models.SearchIndex]
"""
_validation = {
'indexes': {'required': True, 'readonly': True},
}
_attribute_map = {
'indexes': {'key': 'value', 'type': '[SearchIndex]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListIndexesResult, self).__init__(**kwargs)
self.indexes = None
class ListSkillsetsResult(msrest.serialization.Model):
"""Response from a list skillset request. If successful, it includes the full definitions of all skillsets.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar skillsets: Required. The skillsets defined in the Search service.
:vartype skillsets: list[~azure.search.documents.indexes.models.SearchIndexerSkillset]
"""
_validation = {
'skillsets': {'required': True, 'readonly': True},
}
_attribute_map = {
'skillsets': {'key': 'value', 'type': '[SearchIndexerSkillset]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListSkillsetsResult, self).__init__(**kwargs)
self.skillsets = None
class ListSynonymMapsResult(msrest.serialization.Model):
"""Response from a List SynonymMaps request. If successful, it includes the full definitions of all synonym maps.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar synonym_maps: Required. The synonym maps in the Search service.
:vartype synonym_maps: list[~azure.search.documents.indexes.models.SynonymMap]
"""
_validation = {
'synonym_maps': {'required': True, 'readonly': True},
}
_attribute_map = {
'synonym_maps': {'key': 'value', 'type': '[SynonymMap]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(ListSynonymMapsResult, self).__init__(**kwargs)
self.synonym_maps = None
class LuceneStandardAnalyzer(LexicalAnalyzer):
"""Standard Apache Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the analyzer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the analyzer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:vartype max_token_length: int
:ivar stopwords: A list of stopwords.
:vartype stopwords: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
'stopwords': {'key': 'stopwords', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
stopwords: Optional[List[str]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the analyzer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:paramtype max_token_length: int
:keyword stopwords: A list of stopwords.
:paramtype stopwords: list[str]
"""
super(LuceneStandardAnalyzer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StandardAnalyzer' # type: str
self.max_token_length = max_token_length
self.stopwords = stopwords
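
# Illustrative usage sketch (editor's addition, not generated code): a standard
# analyzer capped at 100-character tokens with a small English stopword list.
# The analyzer name "demo_standard" and the stopwords are arbitrary example values.
_example_standard_analyzer = LuceneStandardAnalyzer(
    name="demo_standard",
    max_token_length=100,
    stopwords=["the", "and", "of"],
)
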
class LuceneStandardTokenizer(LexicalTokenizer):
"""Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split.
:vartype max_token_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split.
:paramtype max_token_length: int
"""
super(LuceneStandardTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StandardTokenizer' # type: str
self.max_token_length = max_token_length
class LuceneStandardTokenizerV2(LexicalTokenizer):
"""Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:vartype max_token_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:paramtype max_token_length: int
"""
super(LuceneStandardTokenizerV2, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StandardTokenizerV2' # type: str
self.max_token_length = max_token_length
class MagnitudeScoringFunction(ScoringFunction):
"""Defines a function that boosts scores based on the magnitude of a numeric field.
All required parameters must be populated in order to send to Azure.
    :ivar type: Required. Indicates the type of function to use. Valid values include magnitude,
     freshness, distance, and tag. The function type must be lower case. Constant filled by server.
:vartype type: str
:ivar field_name: Required. The name of the field used as input to the scoring function.
:vartype field_name: str
:ivar boost: Required. A multiplier for the raw score. Must be a positive number not equal to
1.0.
:vartype boost: float
:ivar interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:vartype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:ivar parameters: Required. Parameter values for the magnitude scoring function.
:vartype parameters: ~azure.search.documents.indexes.models.MagnitudeScoringParameters
"""
_validation = {
'type': {'required': True},
'field_name': {'required': True},
'boost': {'required': True},
'parameters': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'field_name': {'key': 'fieldName', 'type': 'str'},
'boost': {'key': 'boost', 'type': 'float'},
'interpolation': {'key': 'interpolation', 'type': 'str'},
'parameters': {'key': 'magnitude', 'type': 'MagnitudeScoringParameters'},
}
def __init__(
self,
*,
field_name: str,
boost: float,
parameters: "MagnitudeScoringParameters",
interpolation: Optional[Union[str, "ScoringFunctionInterpolation"]] = None,
**kwargs
):
"""
:keyword field_name: Required. The name of the field used as input to the scoring function.
:paramtype field_name: str
:keyword boost: Required. A multiplier for the raw score. Must be a positive number not equal
to 1.0.
:paramtype boost: float
:keyword interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:paramtype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:keyword parameters: Required. Parameter values for the magnitude scoring function.
:paramtype parameters: ~azure.search.documents.indexes.models.MagnitudeScoringParameters
"""
super(MagnitudeScoringFunction, self).__init__(field_name=field_name, boost=boost, interpolation=interpolation, **kwargs)
self.type = 'magnitude' # type: str
self.parameters = parameters
class MagnitudeScoringParameters(msrest.serialization.Model):
"""Provides parameter values to a magnitude scoring function.
All required parameters must be populated in order to send to Azure.
:ivar boosting_range_start: Required. The field value at which boosting starts.
:vartype boosting_range_start: float
:ivar boosting_range_end: Required. The field value at which boosting ends.
:vartype boosting_range_end: float
:ivar should_boost_beyond_range_by_constant: A value indicating whether to apply a constant
boost for field values beyond the range end value; default is false.
:vartype should_boost_beyond_range_by_constant: bool
"""
_validation = {
'boosting_range_start': {'required': True},
'boosting_range_end': {'required': True},
}
_attribute_map = {
'boosting_range_start': {'key': 'boostingRangeStart', 'type': 'float'},
'boosting_range_end': {'key': 'boostingRangeEnd', 'type': 'float'},
'should_boost_beyond_range_by_constant': {'key': 'constantBoostBeyondRange', 'type': 'bool'},
}
def __init__(
self,
*,
boosting_range_start: float,
boosting_range_end: float,
should_boost_beyond_range_by_constant: Optional[bool] = None,
**kwargs
):
"""
:keyword boosting_range_start: Required. The field value at which boosting starts.
:paramtype boosting_range_start: float
:keyword boosting_range_end: Required. The field value at which boosting ends.
:paramtype boosting_range_end: float
:keyword should_boost_beyond_range_by_constant: A value indicating whether to apply a constant
boost for field values beyond the range end value; default is false.
:paramtype should_boost_beyond_range_by_constant: bool
"""
super(MagnitudeScoringParameters, self).__init__(**kwargs)
self.boosting_range_start = boosting_range_start
self.boosting_range_end = boosting_range_end
self.should_boost_beyond_range_by_constant = should_boost_beyond_range_by_constant
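
# Illustrative usage sketch (editor's addition, not generated code): a scoring
# function that boosts documents whose "rating" field lies between 3.0 and 5.0,
# up to doubling the raw score, with the boost held constant above the range.
# The field name and numeric values are arbitrary example data.
_example_magnitude_function = MagnitudeScoringFunction(
    field_name="rating",
    boost=2.0,
    interpolation="quadratic",
    parameters=MagnitudeScoringParameters(
        boosting_range_start=3.0,
        boosting_range_end=5.0,
        should_boost_beyond_range_by_constant=True,
    ),
)
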
class MappingCharFilter(CharFilter):
"""A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the char filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar mappings: Required. A list of mappings of the following format: "a=>b" (all occurrences
of the character "a" will be replaced with character "b").
:vartype mappings: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'mappings': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'mappings': {'key': 'mappings', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
mappings: List[str],
**kwargs
):
"""
:keyword name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword mappings: Required. A list of mappings of the following format: "a=>b" (all
occurrences of the character "a" will be replaced with character "b").
:paramtype mappings: list[str]
"""
super(MappingCharFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.MappingCharFilter' # type: str
self.mappings = mappings
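
# Illustrative usage sketch (editor's addition, not generated code): a char
# filter using the "a=>b" mapping syntax described above; the empty right-hand
# side in the second mapping deletes apostrophes. The name is an example value.
_example_mapping_char_filter = MappingCharFilter(
    name="normalize_punctuation",
    mappings=["-=>_", "'=>"],
)
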
class MergeSkill(SearchIndexerSkill):
"""A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
     server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
    :ivar insert_pre_tag: The tag that indicates the start of the merged text. By default, the tag
     is an empty space.
    :vartype insert_pre_tag: str
    :ivar insert_post_tag: The tag that indicates the end of the merged text. By default, the tag
     is an empty space.
    :vartype insert_post_tag: str
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'insert_pre_tag': {'key': 'insertPreTag', 'type': 'str'},
'insert_post_tag': {'key': 'insertPostTag', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
insert_pre_tag: Optional[str] = " ",
insert_post_tag: Optional[str] = " ",
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
        :keyword insert_pre_tag: The tag that indicates the start of the merged text. By default,
         the tag is an empty space.
        :paramtype insert_pre_tag: str
        :keyword insert_post_tag: The tag that indicates the end of the merged text. By default,
         the tag is an empty space.
        :paramtype insert_post_tag: str
"""
super(MergeSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.MergeSkill' # type: str
self.insert_pre_tag = insert_pre_tag
self.insert_post_tag = insert_post_tag
class MicrosoftLanguageStemmingTokenizer(LexicalTokenizer):
"""Divides text using language-specific rules and reduces words to their base forms.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Tokens longer than the maximum length are
split. Maximum token length that can be used is 300 characters. Tokens longer than 300
characters are first split into tokens of length 300 and then each of those tokens is split
based on the max token length set. Default is 255.
:vartype max_token_length: int
:ivar is_search_tokenizer: A value indicating how the tokenizer is used. Set to true if used as
the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
:vartype is_search_tokenizer: bool
:ivar language: The language to use. The default is English. Possible values include: "arabic",
"bangla", "bulgarian", "catalan", "croatian", "czech", "danish", "dutch", "english",
"estonian", "finnish", "french", "german", "greek", "gujarati", "hebrew", "hindi", "hungarian",
"icelandic", "indonesian", "italian", "kannada", "latvian", "lithuanian", "malay", "malayalam",
"marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi",
"romanian", "russian", "serbianCyrillic", "serbianLatin", "slovak", "slovenian", "spanish",
"swedish", "tamil", "telugu", "turkish", "ukrainian", "urdu".
:vartype language: str or
~azure.search.documents.indexes.models.MicrosoftStemmingTokenizerLanguage
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
'is_search_tokenizer': {'key': 'isSearchTokenizer', 'type': 'bool'},
'language': {'key': 'language', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
is_search_tokenizer: Optional[bool] = False,
language: Optional[Union[str, "MicrosoftStemmingTokenizerLanguage"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Tokens longer than the maximum length are
split. Maximum token length that can be used is 300 characters. Tokens longer than 300
characters are first split into tokens of length 300 and then each of those tokens is split
based on the max token length set. Default is 255.
:paramtype max_token_length: int
:keyword is_search_tokenizer: A value indicating how the tokenizer is used. Set to true if used
as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
:paramtype is_search_tokenizer: bool
:keyword language: The language to use. The default is English. Possible values include:
"arabic", "bangla", "bulgarian", "catalan", "croatian", "czech", "danish", "dutch", "english",
"estonian", "finnish", "french", "german", "greek", "gujarati", "hebrew", "hindi", "hungarian",
"icelandic", "indonesian", "italian", "kannada", "latvian", "lithuanian", "malay", "malayalam",
"marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi",
"romanian", "russian", "serbianCyrillic", "serbianLatin", "slovak", "slovenian", "spanish",
"swedish", "tamil", "telugu", "turkish", "ukrainian", "urdu".
:paramtype language: str or
~azure.search.documents.indexes.models.MicrosoftStemmingTokenizerLanguage
"""
super(MicrosoftLanguageStemmingTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer' # type: str
self.max_token_length = max_token_length
self.is_search_tokenizer = is_search_tokenizer
self.language = language
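
# Illustrative usage sketch (editor's addition, not generated code): a German
# stemming tokenizer for the indexing path (is_search_tokenizer stays False,
# its default). The tokenizer name is an arbitrary example value.
_example_stemming_tokenizer = MicrosoftLanguageStemmingTokenizer(
    name="german_stemming",
    language="german",
)
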
class MicrosoftLanguageTokenizer(LexicalTokenizer):
"""Divides text using language-specific rules.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Tokens longer than the maximum length are
split. Maximum token length that can be used is 300 characters. Tokens longer than 300
characters are first split into tokens of length 300 and then each of those tokens is split
based on the max token length set. Default is 255.
:vartype max_token_length: int
:ivar is_search_tokenizer: A value indicating how the tokenizer is used. Set to true if used as
the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
:vartype is_search_tokenizer: bool
:ivar language: The language to use. The default is English. Possible values include: "bangla",
"bulgarian", "catalan", "chineseSimplified", "chineseTraditional", "croatian", "czech",
"danish", "dutch", "english", "french", "german", "greek", "gujarati", "hindi", "icelandic",
"indonesian", "italian", "japanese", "kannada", "korean", "malay", "malayalam", "marathi",
"norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian",
"russian", "serbianCyrillic", "serbianLatin", "slovenian", "spanish", "swedish", "tamil",
"telugu", "thai", "ukrainian", "urdu", "vietnamese".
:vartype language: str or ~azure.search.documents.indexes.models.MicrosoftTokenizerLanguage
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
'is_search_tokenizer': {'key': 'isSearchTokenizer', 'type': 'bool'},
'language': {'key': 'language', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
is_search_tokenizer: Optional[bool] = False,
language: Optional[Union[str, "MicrosoftTokenizerLanguage"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Tokens longer than the maximum length are
split. Maximum token length that can be used is 300 characters. Tokens longer than 300
characters are first split into tokens of length 300 and then each of those tokens is split
based on the max token length set. Default is 255.
:paramtype max_token_length: int
:keyword is_search_tokenizer: A value indicating how the tokenizer is used. Set to true if used
as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
:paramtype is_search_tokenizer: bool
:keyword language: The language to use. The default is English. Possible values include:
"bangla", "bulgarian", "catalan", "chineseSimplified", "chineseTraditional", "croatian",
"czech", "danish", "dutch", "english", "french", "german", "greek", "gujarati", "hindi",
"icelandic", "indonesian", "italian", "japanese", "kannada", "korean", "malay", "malayalam",
"marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi",
"romanian", "russian", "serbianCyrillic", "serbianLatin", "slovenian", "spanish", "swedish",
"tamil", "telugu", "thai", "ukrainian", "urdu", "vietnamese".
:paramtype language: str or ~azure.search.documents.indexes.models.MicrosoftTokenizerLanguage
"""
super(MicrosoftLanguageTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.MicrosoftLanguageTokenizer' # type: str
self.max_token_length = max_token_length
self.is_search_tokenizer = is_search_tokenizer
self.language = language
class NGramTokenFilter(TokenFilter):
"""Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Must be less than the value of
maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2.
:vartype max_gram: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Must be less than the value of
maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2.
:paramtype max_gram: int
"""
super(NGramTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.NGramTokenFilter' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
class NGramTokenFilterV2(TokenFilter):
"""Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the
value of maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:vartype max_gram: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'min_gram': {'maximum': 300},
'max_gram': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than
the value of maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:paramtype max_gram: int
"""
super(NGramTokenFilterV2, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.NGramTokenFilterV2' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
class NGramTokenizer(LexicalTokenizer):
"""Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the
value of maxGram.
:vartype min_gram: int
:ivar max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:vartype max_gram: int
:ivar token_chars: Character classes to keep in the tokens.
:vartype token_chars: list[str or ~azure.search.documents.indexes.models.TokenCharacterKind]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'min_gram': {'maximum': 300},
'max_gram': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'min_gram': {'key': 'minGram', 'type': 'int'},
'max_gram': {'key': 'maxGram', 'type': 'int'},
'token_chars': {'key': 'tokenChars', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
min_gram: Optional[int] = 1,
max_gram: Optional[int] = 2,
token_chars: Optional[List[Union[str, "TokenCharacterKind"]]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword min_gram: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than
the value of maxGram.
:paramtype min_gram: int
:keyword max_gram: The maximum n-gram length. Default is 2. Maximum is 300.
:paramtype max_gram: int
:keyword token_chars: Character classes to keep in the tokens.
:paramtype token_chars: list[str or ~azure.search.documents.indexes.models.TokenCharacterKind]
"""
super(NGramTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.NGramTokenizer' # type: str
self.min_gram = min_gram
self.max_gram = max_gram
self.token_chars = token_chars
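
# Illustrative usage sketch (editor's addition, not generated code): emit 2- to
# 3-character n-grams built only from letters and digits, a common setup for
# partial-match scenarios. The name is an example value, and "letter"/"digit"
# are assumed TokenCharacterKind values.
_example_ngram_tokenizer = NGramTokenizer(
    name="partial_match_ngrams",
    min_gram=2,
    max_gram=3,
    token_chars=["letter", "digit"],
)
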
class OcrSkill(SearchIndexerSkill):
"""A skill that extracts text from image files.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
     server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "zh-Hans", "zh-Hant", "cs", "da", "nl", "en", "fi", "fr", "de", "el",
"hu", "it", "ja", "ko", "nb", "pl", "pt", "ru", "es", "sv", "tr", "ar", "ro", "sr-Cyrl",
"sr-Latn", "sk", "unk".
:vartype default_language_code: str or ~azure.search.documents.indexes.models.OcrSkillLanguage
    :ivar should_detect_orientation: A value indicating whether to turn on orientation detection.
     Default is false.
:vartype should_detect_orientation: bool
:ivar line_ending: Defines the sequence of characters to use between the lines of text
recognized by the OCR skill. The default value is "space". Possible values include: "space",
"carriageReturn", "lineFeed", "carriageReturnLineFeed".
:vartype line_ending: str or ~azure.search.documents.indexes.models.LineEnding
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'should_detect_orientation': {'key': 'detectOrientation', 'type': 'bool'},
'line_ending': {'key': 'lineEnding', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "OcrSkillLanguage"]] = None,
should_detect_orientation: Optional[bool] = False,
line_ending: Optional[Union[str, "LineEnding"]] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "zh-Hans", "zh-Hant", "cs", "da", "nl", "en", "fi", "fr", "de", "el",
"hu", "it", "ja", "ko", "nb", "pl", "pt", "ru", "es", "sv", "tr", "ar", "ro", "sr-Cyrl",
"sr-Latn", "sk", "unk".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.OcrSkillLanguage
        :keyword should_detect_orientation: A value indicating whether to turn on orientation
         detection. Default is false.
:paramtype should_detect_orientation: bool
:keyword line_ending: Defines the sequence of characters to use between the lines of text
recognized by the OCR skill. The default value is "space". Possible values include: "space",
"carriageReturn", "lineFeed", "carriageReturnLineFeed".
:paramtype line_ending: str or ~azure.search.documents.indexes.models.LineEnding
"""
super(OcrSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Vision.OcrSkill' # type: str
self.default_language_code = default_language_code
self.should_detect_orientation = should_detect_orientation
self.line_ending = line_ending
class OutputFieldMappingEntry(msrest.serialization.Model):
"""Output field mapping for a skill.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the output defined by the skill.
:vartype name: str
    :ivar target_name: The target name of the output. It is optional and defaults to the name.
:vartype target_name: str
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'target_name': {'key': 'targetName', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
target_name: Optional[str] = None,
**kwargs
):
"""
:keyword name: Required. The name of the output defined by the skill.
:paramtype name: str
        :keyword target_name: The target name of the output. It is optional and defaults to the name.
:paramtype target_name: str
"""
super(OutputFieldMappingEntry, self).__init__(**kwargs)
self.name = name
self.target_name = target_name
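
# Illustrative usage sketch (editor's addition, not generated code): a
# MergeSkill wired up with the mapping entry models, stitching OCR output back
# into the document text. The paths and names are arbitrary example values, and
# the `source` keyword of InputFieldMappingEntry (defined earlier in this
# module) is assumed, not prescribed by this section.
_example_merge_skill = MergeSkill(
    name="merge_ocr_text",
    context="/document",
    inputs=[
        InputFieldMappingEntry(name="text", source="/document/content"),
        InputFieldMappingEntry(
            name="itemsToInsert", source="/document/normalized_images/*/text"
        ),
    ],
    outputs=[
        OutputFieldMappingEntry(name="mergedText", target_name="merged_text")
    ],
)
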
class PathHierarchyTokenizerV2(LexicalTokenizer):
"""Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar delimiter: The delimiter character to use. Default is "/".
:vartype delimiter: str
:ivar replacement: A value that, if set, replaces the delimiter character. Default is "/".
:vartype replacement: str
:ivar max_token_length: The maximum token length. Default and maximum is 300.
:vartype max_token_length: int
:ivar reverse_token_order: A value indicating whether to generate tokens in reverse order.
Default is false.
:vartype reverse_token_order: bool
:ivar number_of_tokens_to_skip: The number of initial tokens to skip. Default is 0.
:vartype number_of_tokens_to_skip: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'delimiter': {'key': 'delimiter', 'type': 'str'},
'replacement': {'key': 'replacement', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
'reverse_token_order': {'key': 'reverse', 'type': 'bool'},
'number_of_tokens_to_skip': {'key': 'skip', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
delimiter: Optional[str] = "/",
replacement: Optional[str] = "/",
max_token_length: Optional[int] = 300,
reverse_token_order: Optional[bool] = False,
number_of_tokens_to_skip: Optional[int] = 0,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword delimiter: The delimiter character to use. Default is "/".
:paramtype delimiter: str
:keyword replacement: A value that, if set, replaces the delimiter character. Default is "/".
:paramtype replacement: str
:keyword max_token_length: The maximum token length. Default and maximum is 300.
:paramtype max_token_length: int
:keyword reverse_token_order: A value indicating whether to generate tokens in reverse order.
Default is false.
:paramtype reverse_token_order: bool
:keyword number_of_tokens_to_skip: The number of initial tokens to skip. Default is 0.
:paramtype number_of_tokens_to_skip: int
"""
super(PathHierarchyTokenizerV2, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PathHierarchyTokenizerV2' # type: str
self.delimiter = delimiter
self.replacement = replacement
self.max_token_length = max_token_length
self.reverse_token_order = reverse_token_order
self.number_of_tokens_to_skip = number_of_tokens_to_skip
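
# Illustrative usage sketch (editor's addition, not generated code): tokenize
# URL-style paths so that "/a/b/c" yields "/a", "/a/b", and "/a/b/c", which
# suits prefix filtering over folder hierarchies. The name is an example value.
_example_path_tokenizer = PathHierarchyTokenizerV2(
    name="folder_paths",
    delimiter="/",
)
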
class PatternAnalyzer(LexicalAnalyzer):
"""Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the analyzer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the analyzer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar lower_case_terms: A value indicating whether terms should be lower-cased. Default is
true.
:vartype lower_case_terms: bool
:ivar pattern: A regular expression pattern to match token separators. Default is an expression
that matches one or more non-word characters.
:vartype pattern: str
:ivar flags: Regular expression flags. Possible values include: "CANON_EQ", "CASE_INSENSITIVE",
"COMMENTS", "DOTALL", "LITERAL", "MULTILINE", "UNICODE_CASE", "UNIX_LINES".
:vartype flags: str or ~azure.search.documents.indexes.models.RegexFlags
:ivar stopwords: A list of stopwords.
:vartype stopwords: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'lower_case_terms': {'key': 'lowercase', 'type': 'bool'},
'pattern': {'key': 'pattern', 'type': 'str'},
'flags': {'key': 'flags', 'type': 'str'},
'stopwords': {'key': 'stopwords', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
lower_case_terms: Optional[bool] = True,
        pattern: Optional[str] = r"\W+",
flags: Optional[Union[str, "RegexFlags"]] = None,
stopwords: Optional[List[str]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the analyzer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword lower_case_terms: A value indicating whether terms should be lower-cased. Default is
true.
:paramtype lower_case_terms: bool
:keyword pattern: A regular expression pattern to match token separators. Default is an
expression that matches one or more non-word characters.
:paramtype pattern: str
:keyword flags: Regular expression flags. Possible values include: "CANON_EQ",
"CASE_INSENSITIVE", "COMMENTS", "DOTALL", "LITERAL", "MULTILINE", "UNICODE_CASE", "UNIX_LINES".
:paramtype flags: str or ~azure.search.documents.indexes.models.RegexFlags
:keyword stopwords: A list of stopwords.
:paramtype stopwords: list[str]
"""
super(PatternAnalyzer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PatternAnalyzer' # type: str
self.lower_case_terms = lower_case_terms
self.pattern = pattern
self.flags = flags
self.stopwords = stopwords
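
# Illustrative usage sketch (editor's addition, not generated code): split on
# commas and semicolons instead of the default non-word pattern; note the raw
# string for the regular expression. The name is an example value.
_example_pattern_analyzer = PatternAnalyzer(
    name="csv_terms",
    pattern=r"[,;]+",
    lower_case_terms=True,
)
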
class PatternCaptureTokenFilter(TokenFilter):
"""Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar patterns: Required. A list of patterns to match against each token.
:vartype patterns: list[str]
:ivar preserve_original: A value indicating whether to return the original token even if one of
the patterns matches. Default is true.
:vartype preserve_original: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'patterns': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'patterns': {'key': 'patterns', 'type': '[str]'},
'preserve_original': {'key': 'preserveOriginal', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
patterns: List[str],
preserve_original: Optional[bool] = True,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword patterns: Required. A list of patterns to match against each token.
:paramtype patterns: list[str]
:keyword preserve_original: A value indicating whether to return the original token even if one
of the patterns matches. Default is true.
:paramtype preserve_original: bool
"""
super(PatternCaptureTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PatternCaptureTokenFilter' # type: str
self.patterns = patterns
self.preserve_original = preserve_original
class PatternReplaceCharFilter(CharFilter):
"""A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the char filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar pattern: Required. A regular expression pattern.
:vartype pattern: str
:ivar replacement: Required. The replacement text.
:vartype replacement: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'pattern': {'required': True},
'replacement': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'pattern': {'key': 'pattern', 'type': 'str'},
'replacement': {'key': 'replacement', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
pattern: str,
replacement: str,
**kwargs
):
"""
:keyword name: Required. The name of the char filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword pattern: Required. A regular expression pattern.
:paramtype pattern: str
:keyword replacement: Required. The replacement text.
:paramtype replacement: str
"""
super(PatternReplaceCharFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PatternReplaceCharFilter' # type: str
self.pattern = pattern
self.replacement = replacement
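
# Illustrative usage sketch (editor's addition, not generated code): the
# docstring's own example, collapsing the whitespace between "aa" and "bb"
# into "#" before tokenization. The name is an example value.
_example_pattern_replace_filter = PatternReplaceCharFilter(
    name="collapse_aa_bb",
    pattern=r"(aa)\s+(bb)",
    replacement="$1#$2",
)
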
class PatternReplaceTokenFilter(TokenFilter):
"""A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar pattern: Required. A regular expression pattern.
:vartype pattern: str
:ivar replacement: Required. The replacement text.
:vartype replacement: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'pattern': {'required': True},
'replacement': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'pattern': {'key': 'pattern', 'type': 'str'},
'replacement': {'key': 'replacement', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
pattern: str,
replacement: str,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword pattern: Required. A regular expression pattern.
:paramtype pattern: str
:keyword replacement: Required. The replacement text.
:paramtype replacement: str
"""
super(PatternReplaceTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PatternReplaceTokenFilter' # type: str
self.pattern = pattern
self.replacement = replacement
class PatternTokenizer(LexicalTokenizer):
"""Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled by
     server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar pattern: A regular expression pattern to match token separators. Default is an expression
that matches one or more non-word characters.
:vartype pattern: str
:ivar flags: Regular expression flags. Possible values include: "CANON_EQ", "CASE_INSENSITIVE",
"COMMENTS", "DOTALL", "LITERAL", "MULTILINE", "UNICODE_CASE", "UNIX_LINES".
:vartype flags: str or ~azure.search.documents.indexes.models.RegexFlags
:ivar group: The zero-based ordinal of the matching group in the regular expression pattern to
extract into tokens. Use -1 if you want to use the entire pattern to split the input into
tokens, irrespective of matching groups. Default is -1.
:vartype group: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'pattern': {'key': 'pattern', 'type': 'str'},
'flags': {'key': 'flags', 'type': 'str'},
'group': {'key': 'group', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
        pattern: Optional[str] = r"\W+",
flags: Optional[Union[str, "RegexFlags"]] = None,
group: Optional[int] = -1,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword pattern: A regular expression pattern to match token separators. Default is an
expression that matches one or more non-word characters.
:paramtype pattern: str
:keyword flags: Regular expression flags. Possible values include: "CANON_EQ",
"CASE_INSENSITIVE", "COMMENTS", "DOTALL", "LITERAL", "MULTILINE", "UNICODE_CASE", "UNIX_LINES".
:paramtype flags: str or ~azure.search.documents.indexes.models.RegexFlags
:keyword group: The zero-based ordinal of the matching group in the regular expression pattern
to extract into tokens. Use -1 if you want to use the entire pattern to split the input into
tokens, irrespective of matching groups. Default is -1.
:paramtype group: int
"""
super(PatternTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PatternTokenizer' # type: str
self.pattern = pattern
self.flags = flags
self.group = group
class PhoneticTokenFilter(TokenFilter):
"""Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
     by server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar encoder: The phonetic encoder to use. Default is "metaphone". Possible values include:
"metaphone", "doubleMetaphone", "soundex", "refinedSoundex", "caverphone1", "caverphone2",
"cologne", "nysiis", "koelnerPhonetik", "haasePhonetik", "beiderMorse".
:vartype encoder: str or ~azure.search.documents.indexes.models.PhoneticEncoder
:ivar replace_original_tokens: A value indicating whether encoded tokens should replace
original tokens. If false, encoded tokens are added as synonyms. Default is true.
:vartype replace_original_tokens: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'encoder': {'key': 'encoder', 'type': 'str'},
'replace_original_tokens': {'key': 'replace', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
encoder: Optional[Union[str, "PhoneticEncoder"]] = None,
replace_original_tokens: Optional[bool] = True,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword encoder: The phonetic encoder to use. Default is "metaphone". Possible values include:
"metaphone", "doubleMetaphone", "soundex", "refinedSoundex", "caverphone1", "caverphone2",
"cologne", "nysiis", "koelnerPhonetik", "haasePhonetik", "beiderMorse".
:paramtype encoder: str or ~azure.search.documents.indexes.models.PhoneticEncoder
:keyword replace_original_tokens: A value indicating whether encoded tokens should replace
original tokens. If false, encoded tokens are added as synonyms. Default is true.
:paramtype replace_original_tokens: bool
"""
super(PhoneticTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.PhoneticTokenFilter' # type: str
self.encoder = encoder
self.replace_original_tokens = replace_original_tokens
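# Usage sketch (hypothetical example): a phonetic filter that adds Double
# Metaphone encodings as synonyms rather than replacing the original tokens, so
# exact matches keep working alongside sounds-like matches.
_example_phonetic_filter = PhoneticTokenFilter(
    name="my-phonetic-filter",  # hypothetical filter name
    encoder="doubleMetaphone",
    replace_original_tokens=False,
)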
class PIIDetectionSkill(SearchIndexerSkill):
"""Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by the
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
:vartype default_language_code: str
:ivar minimum_precision: A value between 0 and 1 that can be used to include only entities whose
confidence score is greater than the value specified. If not set (default), or if explicitly
set to null, all entities will be included.
:vartype minimum_precision: float
:ivar masking_mode: A parameter that provides various ways to mask the personal information
detected in the input text. Default is 'none'. Possible values include: "none", "replace".
:vartype masking_mode: str or
~azure.search.documents.indexes.models.PIIDetectionSkillMaskingMode
:ivar masking_character: The character used to mask the text if the maskingMode parameter is
set to replace. Default is '*'.
:vartype masking_character: str
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
:ivar pii_categories: A list of PII entity categories that should be extracted and masked.
:vartype pii_categories: list[str]
:ivar domain: If specified, will set the PII domain to include only a subset of the entity
categories. Possible values include: 'phi', 'none'. Default is 'none'.
:vartype domain: str
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
'minimum_precision': {'maximum': 1, 'minimum': 0},
'masking_character': {'max_length': 1, 'min_length': 0},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'minimum_precision': {'key': 'minimumPrecision', 'type': 'float'},
'masking_mode': {'key': 'maskingMode', 'type': 'str'},
'masking_character': {'key': 'maskingCharacter', 'type': 'str'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
'pii_categories': {'key': 'piiCategories', 'type': '[str]'},
'domain': {'key': 'domain', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[str] = None,
minimum_precision: Optional[float] = None,
masking_mode: Optional[Union[str, "PIIDetectionSkillMaskingMode"]] = None,
masking_character: Optional[str] = None,
model_version: Optional[str] = None,
pii_categories: Optional[List[str]] = None,
domain: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
:paramtype default_language_code: str
:keyword minimum_precision: A value between 0 and 1 that can be used to include only entities
whose confidence score is greater than the value specified. If not set (default), or if
explicitly set to null, all entities will be included.
:paramtype minimum_precision: float
:keyword masking_mode: A parameter that provides various ways to mask the personal information
detected in the input text. Default is 'none'. Possible values include: "none", "replace".
:paramtype masking_mode: str or
~azure.search.documents.indexes.models.PIIDetectionSkillMaskingMode
:keyword masking_character: The character used to mask the text if the maskingMode parameter is
set to replace. Default is '*'.
:paramtype masking_character: str
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
:keyword pii_categories: A list of PII entity categories that should be extracted and masked.
:paramtype pii_categories: list[str]
:keyword domain: If specified, will set the PII domain to include only a subset of the entity
categories. Possible values include: 'phi', 'none'. Default is 'none'.
:paramtype domain: str
"""
super(PIIDetectionSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.PIIDetectionSkill' # type: str
self.default_language_code = default_language_code
self.minimum_precision = minimum_precision
self.masking_mode = masking_mode
self.masking_character = masking_character
self.model_version = model_version
self.pii_categories = pii_categories
self.domain = domain
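# Usage sketch (hypothetical example): a skill that masks detected PII in
# /document/content with '*' characters, keeping only entities whose confidence
# score exceeds 0.5. The field-mapping names below are illustrative only.
_example_pii_skill = PIIDetectionSkill(
    name="pii-detection-skill",  # hypothetical skill name
    context="/document",
    inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
    outputs=[OutputFieldMappingEntry(name="maskedText", target_name="masked_text")],
    default_language_code="en",
    minimum_precision=0.5,
    masking_mode="replace",
    masking_character="*",
)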
class PrioritizedFields(msrest.serialization.Model):
"""Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.
:ivar title_field: Defines the title field to be used for semantic ranking, captions,
highlights, and answers. If you don't have a title field in your index, leave this blank.
:vartype title_field: ~azure.search.documents.indexes.models.SemanticField
:ivar prioritized_content_fields: Defines the content fields to be used for semantic ranking,
captions, highlights, and answers. For the best result, the selected fields should contain text
in natural language form. The order of the fields in the array represents their priority.
Fields with lower priority may get truncated if the content is long.
:vartype prioritized_content_fields: list[~azure.search.documents.indexes.models.SemanticField]
:ivar prioritized_keywords_fields: Defines the keyword fields to be used for semantic ranking,
captions, highlights, and answers. For the best result, the selected fields should contain a
list of keywords. The order of the fields in the array represents their priority. Fields with
lower priority may get truncated if the content is long.
:vartype prioritized_keywords_fields:
list[~azure.search.documents.indexes.models.SemanticField]
"""
_attribute_map = {
'title_field': {'key': 'titleField', 'type': 'SemanticField'},
'prioritized_content_fields': {'key': 'prioritizedContentFields', 'type': '[SemanticField]'},
'prioritized_keywords_fields': {'key': 'prioritizedKeywordsFields', 'type': '[SemanticField]'},
}
def __init__(
self,
*,
title_field: Optional["SemanticField"] = None,
prioritized_content_fields: Optional[List["SemanticField"]] = None,
prioritized_keywords_fields: Optional[List["SemanticField"]] = None,
**kwargs
):
"""
:keyword title_field: Defines the title field to be used for semantic ranking, captions,
highlights, and answers. If you don't have a title field in your index, leave this blank.
:paramtype title_field: ~azure.search.documents.indexes.models.SemanticField
:keyword prioritized_content_fields: Defines the content fields to be used for semantic
ranking, captions, highlights, and answers. For the best result, the selected fields should
contain text in natural language form. The order of the fields in the array represents their
priority. Fields with lower priority may get truncated if the content is long.
:paramtype prioritized_content_fields:
list[~azure.search.documents.indexes.models.SemanticField]
:keyword prioritized_keywords_fields: Defines the keyword fields to be used for semantic
ranking, captions, highlights, and answers. For the best result, the selected fields should
contain a list of keywords. The order of the fields in the array represents their priority.
Fields with lower priority may get truncated if the content is long.
:paramtype prioritized_keywords_fields:
list[~azure.search.documents.indexes.models.SemanticField]
"""
super(PrioritizedFields, self).__init__(**kwargs)
self.title_field = title_field
self.prioritized_content_fields = prioritized_content_fields
self.prioritized_keywords_fields = prioritized_keywords_fields
class RequestOptions(msrest.serialization.Model):
"""Parameter group.
:ivar x_ms_client_request_id: The tracking ID sent with the request to help with debugging.
:vartype x_ms_client_request_id: str
"""
_attribute_map = {
'x_ms_client_request_id': {'key': 'x-ms-client-request-id', 'type': 'str'},
}
def __init__(
self,
*,
x_ms_client_request_id: Optional[str] = None,
**kwargs
):
"""
:keyword x_ms_client_request_id: The tracking ID sent with the request to help with debugging.
:paramtype x_ms_client_request_id: str
"""
super(RequestOptions, self).__init__(**kwargs)
self.x_ms_client_request_id = x_ms_client_request_id
class ResourceCounter(msrest.serialization.Model):
"""Represents a resource's usage and quota.
All required parameters must be populated in order to send to Azure.
:ivar usage: Required. The resource usage amount.
:vartype usage: long
:ivar quota: The resource amount quota.
:vartype quota: long
"""
_validation = {
'usage': {'required': True},
}
_attribute_map = {
'usage': {'key': 'usage', 'type': 'long'},
'quota': {'key': 'quota', 'type': 'long'},
}
def __init__(
self,
*,
usage: int,
quota: Optional[int] = None,
**kwargs
):
"""
:keyword usage: Required. The resource usage amount.
:paramtype usage: long
:keyword quota: The resource amount quota.
:paramtype quota: long
"""
super(ResourceCounter, self).__init__(**kwargs)
self.usage = usage
self.quota = quota
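# Usage sketch (hypothetical values): a counter reporting 3 resources in use
# out of a quota of 50. Instances of this model are normally populated from
# service statistics rather than constructed by hand.
_example_counter = ResourceCounter(usage=3, quota=50)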
class ScoringProfile(msrest.serialization.Model):
"""Defines parameters for a search index that influence scoring in search queries.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the scoring profile.
:vartype name: str
:ivar text_weights: Parameters that boost scoring based on text matches in certain index
fields.
:vartype text_weights: ~azure.search.documents.indexes.models.TextWeights
:ivar functions: The collection of functions that influence the scoring of documents.
:vartype functions: list[~azure.search.documents.indexes.models.ScoringFunction]
:ivar function_aggregation: A value indicating how the results of individual scoring functions
should be combined. Defaults to "sum". Ignored if there are no scoring functions. Possible
values include: "sum", "average", "minimum", "maximum", "firstMatching".
:vartype function_aggregation: str or
~azure.search.documents.indexes.models.ScoringFunctionAggregation
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'text_weights': {'key': 'text', 'type': 'TextWeights'},
'functions': {'key': 'functions', 'type': '[ScoringFunction]'},
'function_aggregation': {'key': 'functionAggregation', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
text_weights: Optional["TextWeights"] = None,
functions: Optional[List["ScoringFunction"]] = None,
function_aggregation: Optional[Union[str, "ScoringFunctionAggregation"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the scoring profile.
:paramtype name: str
:keyword text_weights: Parameters that boost scoring based on text matches in certain index
fields.
:paramtype text_weights: ~azure.search.documents.indexes.models.TextWeights
:keyword functions: The collection of functions that influence the scoring of documents.
:paramtype functions: list[~azure.search.documents.indexes.models.ScoringFunction]
:keyword function_aggregation: A value indicating how the results of individual scoring
functions should be combined. Defaults to "sum". Ignored if there are no scoring functions.
Possible values include: "sum", "average", "minimum", "maximum", "firstMatching".
:paramtype function_aggregation: str or
~azure.search.documents.indexes.models.ScoringFunctionAggregation
"""
super(ScoringProfile, self).__init__(**kwargs)
self.name = name
self.text_weights = text_weights
self.functions = functions
self.function_aggregation = function_aggregation
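# Usage sketch (hypothetical example): a named scoring profile that sums the
# results of its scoring functions. The ``functions`` and ``text_weights``
# keywords are omitted here; a real profile would usually supply at least one.
_example_scoring_profile = ScoringProfile(
    name="boost-profile",  # hypothetical profile name
    function_aggregation="sum",
)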
class SearchAlias(msrest.serialization.Model):
"""Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the alias.
:vartype name: str
:ivar indexes: Required. The name of the index this alias maps to. Only one index name may be
specified.
:vartype indexes: list[str]
:ivar e_tag: The ETag of the alias.
:vartype e_tag: str
"""
_validation = {
'name': {'required': True},
'indexes': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'indexes': {'key': 'indexes', 'type': '[str]'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
indexes: List[str],
e_tag: Optional[str] = None,
**kwargs
):
"""
:keyword name: Required. The name of the alias.
:paramtype name: str
:keyword indexes: Required. The name of the index this alias maps to. Only one index name may
be specified.
:paramtype indexes: list[str]
:keyword e_tag: The ETag of the alias.
:paramtype e_tag: str
"""
super(SearchAlias, self).__init__(**kwargs)
self.name = name
self.indexes = indexes
self.e_tag = e_tag
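# Usage sketch (hypothetical example): an alias that lets clients query
# "hotels" while the backing index is swapped between versions. Note that
# ``indexes`` is a list but may contain only one index name.
_example_alias = SearchAlias(name="hotels", indexes=["hotels-v2"])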
class SearchError(msrest.serialization.Model):
"""Describes an error condition for the Azure Cognitive Search API.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar code: One of a server-defined set of error codes.
:vartype code: str
:ivar message: Required. A human-readable representation of the error.
:vartype message: str
:ivar details: An array of details about specific errors that led to this reported error.
:vartype details: list[~azure.search.documents.indexes.models.SearchError]
"""
_validation = {
'code': {'readonly': True},
'message': {'required': True, 'readonly': True},
'details': {'readonly': True},
}
_attribute_map = {
'code': {'key': 'code', 'type': 'str'},
'message': {'key': 'message', 'type': 'str'},
'details': {'key': 'details', 'type': '[SearchError]'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchError, self).__init__(**kwargs)
self.code = None
self.message = None
self.details = None
class SearchField(msrest.serialization.Model):
"""Represents a field in an index definition, which describes the name, data type, and search behavior of a field.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the field, which must be unique within the fields collection
of the index or parent field.
:vartype name: str
:ivar type: Required. The data type of the field. Possible values include: "Edm.String",
"Edm.Int32", "Edm.Int64", "Edm.Double", "Edm.Boolean", "Edm.DateTimeOffset",
"Edm.GeographyPoint", "Edm.ComplexType".
:vartype type: str or ~azure.search.documents.indexes.models.SearchFieldDataType
:ivar key: A value indicating whether the field uniquely identifies documents in the index.
Exactly one top-level field in each index must be chosen as the key field and it must be of
type Edm.String. Key fields can be used to look up documents directly and update or delete
specific documents. Default is false for simple fields and null for complex fields.
:vartype key: bool
:ivar retrievable: A value indicating whether the field can be returned in a search result. You
can disable this option if you want to use a field (for example, margin) as a filter, sorting,
or scoring mechanism but do not want the field to be visible to the end user. This property
must be true for key fields, and it must be null for complex fields. This property can be
changed on existing fields. Enabling this property does not cause any increase in index storage
requirements. Default is true for simple fields and null for complex fields.
:vartype retrievable: bool
:ivar searchable: A value indicating whether the field is full-text searchable. This means it
will undergo analysis such as word-breaking during indexing. If you set a searchable field to a
value like "sunny day", internally it will be split into the individual tokens "sunny" and
"day". This enables full-text searches for these terms. Fields of type Edm.String or
Collection(Edm.String) are searchable by default. This property must be false for simple fields
of other non-string data types, and it must be null for complex fields. Note: searchable fields
consume extra space in your index since Azure Cognitive Search will store an additional
tokenized version of the field value for full-text searches. If you want to save space in your
index and you don't need a field to be included in searches, set searchable to false.
:vartype searchable: bool
:ivar filterable: A value indicating whether to enable the field to be referenced in $filter
queries. filterable differs from searchable in how strings are handled. Fields of type
Edm.String or Collection(Edm.String) that are filterable do not undergo word-breaking, so
comparisons are for exact matches only. For example, if you set such a field f to "sunny day",
$filter=f eq 'sunny' will find no matches, but $filter=f eq 'sunny day' will. This property
must be null for complex fields. Default is true for simple fields and null for complex fields.
:vartype filterable: bool
:ivar sortable: A value indicating whether to enable the field to be referenced in $orderby
expressions. By default Azure Cognitive Search sorts results by score, but in many experiences
users will want to sort by fields in the documents. A simple field can be sortable only if it
is single-valued (it has a single value in the scope of the parent document). Simple collection
fields cannot be sortable, since they are multi-valued. Simple sub-fields of complex
collections are also multi-valued, and therefore cannot be sortable. This is true whether it's
an immediate parent field, or an ancestor field, that's the complex collection. Complex fields
cannot be sortable and the sortable property must be null for such fields. The default for
sortable is true for single-valued simple fields, false for multi-valued simple fields, and
null for complex fields.
:vartype sortable: bool
:ivar facetable: A value indicating whether to enable the field to be referenced in facet
queries. Typically used in a presentation of search results that includes hit count by category
(for example, search for digital cameras and see hits by brand, by megapixels, by price, and so
on). This property must be null for complex fields. Fields of type Edm.GeographyPoint or
Collection(Edm.GeographyPoint) cannot be facetable. Default is true for all other simple
fields.
:vartype facetable: bool
:ivar analyzer: The name of the analyzer to use for the field. This option can be used only
with searchable fields and it can't be set together with either searchAnalyzer or
indexAnalyzer. Once the analyzer is chosen, it cannot be changed for the field. Must be null
for complex fields. Possible values include: "ar.microsoft", "ar.lucene", "hy.lucene",
"bn.microsoft", "eu.lucene", "bg.microsoft", "bg.lucene", "ca.microsoft", "ca.lucene",
"zh-Hans.microsoft", "zh-Hans.lucene", "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft",
"cs.microsoft", "cs.lucene", "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene",
"en.microsoft", "en.lucene", "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft",
"fr.lucene", "gl.lucene", "de.microsoft", "de.lucene", "el.microsoft", "el.lucene",
"gu.microsoft", "he.microsoft", "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene",
"is.microsoft", "id.microsoft", "id.lucene", "ga.lucene", "it.microsoft", "it.lucene",
"ja.microsoft", "ja.lucene", "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft",
"lv.lucene", "lt.microsoft", "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft",
"no.lucene", "fa.lucene", "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene",
"pt-PT.microsoft", "pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft",
"ru.lucene", "sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft",
"es.microsoft", "es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft",
"th.microsoft", "th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft",
"vi.microsoft", "standard.lucene", "standardasciifolding.lucene", "keyword", "pattern",
"simple", "stop", "whitespace".
:vartype analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:ivar search_analyzer: The name of the analyzer used at search time for the field. This option
can be used only with searchable fields. It must be set together with indexAnalyzer and it
cannot be set together with the analyzer option. This property cannot be set to the name of a
language analyzer; use the analyzer property instead if you need a language analyzer. This
analyzer can be updated on an existing field. Must be null for complex fields. Possible values
include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene", "bg.microsoft",
"bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft", "zh-Hans.lucene",
"zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft", "cs.lucene",
"da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft", "en.lucene",
"et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene", "gl.lucene",
"de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft", "he.microsoft",
"hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft", "id.microsoft",
"id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft", "ja.lucene",
"kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene", "lt.microsoft",
"ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene", "fa.lucene",
"pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
"pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
"sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
"es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
"th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
"standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
"whitespace".
:vartype search_analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:ivar index_analyzer: The name of the analyzer used at indexing time for the field. This option
can be used only with searchable fields. It must be set together with searchAnalyzer and it
cannot be set together with the analyzer option. This property cannot be set to the name of a
language analyzer; use the analyzer property instead if you need a language analyzer. Once the
analyzer is chosen, it cannot be changed for the field. Must be null for complex fields.
Possible values include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene",
"bg.microsoft", "bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft",
"zh-Hans.lucene", "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft",
"cs.lucene", "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft",
"en.lucene", "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene",
"gl.lucene", "de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft",
"he.microsoft", "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft",
"id.microsoft", "id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft",
"ja.lucene", "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene",
"lt.microsoft", "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene",
"fa.lucene", "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
"pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
"sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
"es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
"th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
"standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
"whitespace".
:vartype index_analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:ivar normalizer: The name of the normalizer to use for the field. This option can be used only
with fields with filterable, sortable, or facetable enabled. Once the normalizer is chosen, it
cannot be changed for the field. Must be null for complex fields. Possible values include:
"asciifolding", "elision", "lowercase", "standard", "uppercase".
:vartype normalizer: str or ~azure.search.documents.indexes.models.LexicalNormalizerName
:ivar synonym_maps: A list of the names of synonym maps to associate with this field. This
option can be used only with searchable fields. Currently only one synonym map per field is
supported. Assigning a synonym map to a field ensures that query terms targeting that field are
expanded at query-time using the rules in the synonym map. This attribute can be changed on
existing fields. Must be null or an empty collection for complex fields.
:vartype synonym_maps: list[str]
:ivar fields: A list of sub-fields if this is a field of type Edm.ComplexType or
Collection(Edm.ComplexType). Must be null or empty for simple fields.
:vartype fields: list[~azure.search.documents.indexes.models.SearchField]
"""
_validation = {
'name': {'required': True},
'type': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'key': {'key': 'key', 'type': 'bool'},
'retrievable': {'key': 'retrievable', 'type': 'bool'},
'searchable': {'key': 'searchable', 'type': 'bool'},
'filterable': {'key': 'filterable', 'type': 'bool'},
'sortable': {'key': 'sortable', 'type': 'bool'},
'facetable': {'key': 'facetable', 'type': 'bool'},
'analyzer': {'key': 'analyzer', 'type': 'str'},
'search_analyzer': {'key': 'searchAnalyzer', 'type': 'str'},
'index_analyzer': {'key': 'indexAnalyzer', 'type': 'str'},
'normalizer': {'key': 'normalizer', 'type': 'str'},
'synonym_maps': {'key': 'synonymMaps', 'type': '[str]'},
'fields': {'key': 'fields', 'type': '[SearchField]'},
}
def __init__(
self,
*,
name: str,
type: Union[str, "SearchFieldDataType"],
key: Optional[bool] = None,
retrievable: Optional[bool] = None,
searchable: Optional[bool] = None,
filterable: Optional[bool] = None,
sortable: Optional[bool] = None,
facetable: Optional[bool] = None,
analyzer: Optional[Union[str, "LexicalAnalyzerName"]] = None,
search_analyzer: Optional[Union[str, "LexicalAnalyzerName"]] = None,
index_analyzer: Optional[Union[str, "LexicalAnalyzerName"]] = None,
normalizer: Optional[Union[str, "LexicalNormalizerName"]] = None,
synonym_maps: Optional[List[str]] = None,
fields: Optional[List["SearchField"]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the field, which must be unique within the fields
collection of the index or parent field.
:paramtype name: str
:keyword type: Required. The data type of the field. Possible values include: "Edm.String",
"Edm.Int32", "Edm.Int64", "Edm.Double", "Edm.Boolean", "Edm.DateTimeOffset",
"Edm.GeographyPoint", "Edm.ComplexType".
:paramtype type: str or ~azure.search.documents.indexes.models.SearchFieldDataType
:keyword key: A value indicating whether the field uniquely identifies documents in the index.
Exactly one top-level field in each index must be chosen as the key field and it must be of
type Edm.String. Key fields can be used to look up documents directly and update or delete
specific documents. Default is false for simple fields and null for complex fields.
:paramtype key: bool
:keyword retrievable: A value indicating whether the field can be returned in a search result.
You can disable this option if you want to use a field (for example, margin) as a filter,
sorting, or scoring mechanism but do not want the field to be visible to the end user. This
property must be true for key fields, and it must be null for complex fields. This property can
be changed on existing fields. Enabling this property does not cause any increase in index
storage requirements. Default is true for simple fields and null for complex fields.
:paramtype retrievable: bool
:keyword searchable: A value indicating whether the field is full-text searchable. This means
it will undergo analysis such as word-breaking during indexing. If you set a searchable field
to a value like "sunny day", internally it will be split into the individual tokens "sunny" and
"day". This enables full-text searches for these terms. Fields of type Edm.String or
Collection(Edm.String) are searchable by default. This property must be false for simple fields
of other non-string data types, and it must be null for complex fields. Note: searchable fields
consume extra space in your index since Azure Cognitive Search will store an additional
tokenized version of the field value for full-text searches. If you want to save space in your
index and you don't need a field to be included in searches, set searchable to false.
:paramtype searchable: bool
:keyword filterable: A value indicating whether to enable the field to be referenced in $filter
queries. filterable differs from searchable in how strings are handled. Fields of type
Edm.String or Collection(Edm.String) that are filterable do not undergo word-breaking, so
comparisons are for exact matches only. For example, if you set such a field f to "sunny day",
$filter=f eq 'sunny' will find no matches, but $filter=f eq 'sunny day' will. This property
must be null for complex fields. Default is true for simple fields and null for complex fields.
:paramtype filterable: bool
:keyword sortable: A value indicating whether to enable the field to be referenced in $orderby
expressions. By default Azure Cognitive Search sorts results by score, but in many experiences
users will want to sort by fields in the documents. A simple field can be sortable only if it
is single-valued (it has a single value in the scope of the parent document). Simple collection
fields cannot be sortable, since they are multi-valued. Simple sub-fields of complex
collections are also multi-valued, and therefore cannot be sortable. This is true whether it's
an immediate parent field, or an ancestor field, that's the complex collection. Complex fields
cannot be sortable and the sortable property must be null for such fields. The default for
sortable is true for single-valued simple fields, false for multi-valued simple fields, and
null for complex fields.
:paramtype sortable: bool
:keyword facetable: A value indicating whether to enable the field to be referenced in facet
queries. Typically used in a presentation of search results that includes hit count by category
(for example, search for digital cameras and see hits by brand, by megapixels, by price, and so
on). This property must be null for complex fields. Fields of type Edm.GeographyPoint or
Collection(Edm.GeographyPoint) cannot be facetable. Default is true for all other simple
fields.
:paramtype facetable: bool
:keyword analyzer: The name of the analyzer to use for the field. This option can be used only
with searchable fields and it can't be set together with either searchAnalyzer or
indexAnalyzer. Once the analyzer is chosen, it cannot be changed for the field. Must be null
for complex fields. Possible values include: "ar.microsoft", "ar.lucene", "hy.lucene",
"bn.microsoft", "eu.lucene", "bg.microsoft", "bg.lucene", "ca.microsoft", "ca.lucene",
"zh-Hans.microsoft", "zh-Hans.lucene", "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft",
"cs.microsoft", "cs.lucene", "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene",
"en.microsoft", "en.lucene", "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft",
"fr.lucene", "gl.lucene", "de.microsoft", "de.lucene", "el.microsoft", "el.lucene",
"gu.microsoft", "he.microsoft", "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene",
"is.microsoft", "id.microsoft", "id.lucene", "ga.lucene", "it.microsoft", "it.lucene",
"ja.microsoft", "ja.lucene", "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft",
"lv.lucene", "lt.microsoft", "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft",
"no.lucene", "fa.lucene", "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene",
"pt-PT.microsoft", "pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft",
"ru.lucene", "sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft",
"es.microsoft", "es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft",
"th.microsoft", "th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft",
"vi.microsoft", "standard.lucene", "standardasciifolding.lucene", "keyword", "pattern",
"simple", "stop", "whitespace".
:paramtype analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:keyword search_analyzer: The name of the analyzer used at search time for the field. This
option can be used only with searchable fields. It must be set together with indexAnalyzer and
it cannot be set together with the analyzer option. This property cannot be set to the name of
a language analyzer; use the analyzer property instead if you need a language analyzer. This
analyzer can be updated on an existing field. Must be null for complex fields. Possible values
include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene", "bg.microsoft",
"bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft", "zh-Hans.lucene",
"zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft", "cs.lucene",
"da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft", "en.lucene",
"et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene", "gl.lucene",
"de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft", "he.microsoft",
"hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft", "id.microsoft",
"id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft", "ja.lucene",
"kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene", "lt.microsoft",
"ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene", "fa.lucene",
"pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
"pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
"sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
"es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
"th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
"standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
"whitespace".
:paramtype search_analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:keyword index_analyzer: The name of the analyzer used at indexing time for the field. This
option can be used only with searchable fields. It must be set together with searchAnalyzer and
it cannot be set together with the analyzer option. This property cannot be set to the name of
a language analyzer; use the analyzer property instead if you need a language analyzer. Once
the analyzer is chosen, it cannot be changed for the field. Must be null for complex fields.
Possible values include: "ar.microsoft", "ar.lucene", "hy.lucene", "bn.microsoft", "eu.lucene",
"bg.microsoft", "bg.lucene", "ca.microsoft", "ca.lucene", "zh-Hans.microsoft",
"zh-Hans.lucene", "zh-Hant.microsoft", "zh-Hant.lucene", "hr.microsoft", "cs.microsoft",
"cs.lucene", "da.microsoft", "da.lucene", "nl.microsoft", "nl.lucene", "en.microsoft",
"en.lucene", "et.microsoft", "fi.microsoft", "fi.lucene", "fr.microsoft", "fr.lucene",
"gl.lucene", "de.microsoft", "de.lucene", "el.microsoft", "el.lucene", "gu.microsoft",
"he.microsoft", "hi.microsoft", "hi.lucene", "hu.microsoft", "hu.lucene", "is.microsoft",
"id.microsoft", "id.lucene", "ga.lucene", "it.microsoft", "it.lucene", "ja.microsoft",
"ja.lucene", "kn.microsoft", "ko.microsoft", "ko.lucene", "lv.microsoft", "lv.lucene",
"lt.microsoft", "ml.microsoft", "ms.microsoft", "mr.microsoft", "nb.microsoft", "no.lucene",
"fa.lucene", "pl.microsoft", "pl.lucene", "pt-BR.microsoft", "pt-BR.lucene", "pt-PT.microsoft",
"pt-PT.lucene", "pa.microsoft", "ro.microsoft", "ro.lucene", "ru.microsoft", "ru.lucene",
"sr-cyrillic.microsoft", "sr-latin.microsoft", "sk.microsoft", "sl.microsoft", "es.microsoft",
"es.lucene", "sv.microsoft", "sv.lucene", "ta.microsoft", "te.microsoft", "th.microsoft",
"th.lucene", "tr.microsoft", "tr.lucene", "uk.microsoft", "ur.microsoft", "vi.microsoft",
"standard.lucene", "standardasciifolding.lucene", "keyword", "pattern", "simple", "stop",
"whitespace".
:paramtype index_analyzer: str or ~azure.search.documents.indexes.models.LexicalAnalyzerName
:keyword normalizer: The name of the normalizer to use for the field. This option can be used
only with fields with filterable, sortable, or facetable enabled. Once the normalizer is
chosen, it cannot be changed for the field. Must be null for complex fields. Possible values
include: "asciifolding", "elision", "lowercase", "standard", "uppercase".
:paramtype normalizer: str or ~azure.search.documents.indexes.models.LexicalNormalizerName
:keyword synonym_maps: A list of the names of synonym maps to associate with this field. This
option can be used only with searchable fields. Currently only one synonym map per field is
supported. Assigning a synonym map to a field ensures that query terms targeting that field are
expanded at query-time using the rules in the synonym map. This attribute can be changed on
existing fields. Must be null or an empty collection for complex fields.
:paramtype synonym_maps: list[str]
:keyword fields: A list of sub-fields if this is a field of type Edm.ComplexType or
Collection(Edm.ComplexType). Must be null or empty for simple fields.
:paramtype fields: list[~azure.search.documents.indexes.models.SearchField]
"""
super(SearchField, self).__init__(**kwargs)
self.name = name
self.type = type
self.key = key
self.retrievable = retrievable
self.searchable = searchable
self.filterable = filterable
self.sortable = sortable
self.facetable = facetable
self.analyzer = analyzer
self.search_analyzer = search_analyzer
self.index_analyzer = index_analyzer
self.normalizer = normalizer
self.synonym_maps = synonym_maps
self.fields = fields
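# Usage sketch (hypothetical example): a key field plus a searchable text field
# using the English Lucene analyzer. Analyzer and normalizer names may be
# passed as plain strings or as the corresponding enum values.
_example_key_field = SearchField(name="hotelId", type="Edm.String", key=True)
_example_text_field = SearchField(
    name="description",  # hypothetical field name
    type="Edm.String",
    searchable=True,
    filterable=False,
    analyzer="en.lucene",
)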
class SearchIndex(msrest.serialization.Model):
"""Represents a search index definition, which describes the fields and search behavior of an index.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the index.
:vartype name: str
:ivar fields: Required. The fields of the index.
:vartype fields: list[~azure.search.documents.indexes.models.SearchField]
:ivar scoring_profiles: The scoring profiles for the index.
:vartype scoring_profiles: list[~azure.search.documents.indexes.models.ScoringProfile]
:ivar default_scoring_profile: The name of the scoring profile to use if none is specified in
the query. If this property is not set and no scoring profile is specified in the query, then
default scoring (tf-idf) will be used.
:vartype default_scoring_profile: str
:ivar cors_options: Options to control Cross-Origin Resource Sharing (CORS) for the index.
:vartype cors_options: ~azure.search.documents.indexes.models.CorsOptions
:ivar suggesters: The suggesters for the index.
:vartype suggesters: list[~azure.search.documents.indexes.models.Suggester]
:ivar analyzers: The analyzers for the index.
:vartype analyzers: list[~azure.search.documents.indexes.models.LexicalAnalyzer]
:ivar tokenizers: The tokenizers for the index.
:vartype tokenizers: list[~azure.search.documents.indexes.models.LexicalTokenizer]
:ivar token_filters: The token filters for the index.
:vartype token_filters: list[~azure.search.documents.indexes.models.TokenFilter]
:ivar char_filters: The character filters for the index.
:vartype char_filters: list[~azure.search.documents.indexes.models.CharFilter]
:ivar normalizers: The normalizers for the index.
:vartype normalizers: list[~azure.search.documents.indexes.models.LexicalNormalizer]
:ivar encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your data when you
want full assurance that no one, not even Microsoft, can decrypt your data in Azure Cognitive
Search. Once you have encrypted your data, it will always remain encrypted. Azure Cognitive
Search will ignore attempts to set this property to null. You can change this property as
needed if you want to rotate your encryption key; your data will be unaffected. Encryption with
customer-managed keys is not available for free search services, and is only available for paid
services created on or after January 1, 2019.
:vartype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:ivar similarity: The type of similarity algorithm to be used when scoring and ranking the
documents matching a search query. The similarity algorithm can only be defined at index
creation time and cannot be modified on existing indexes. If null, the ClassicSimilarity
algorithm is used.
:vartype similarity: ~azure.search.documents.indexes.models.Similarity
:ivar semantic_settings: Defines parameters for a search index that influence semantic
capabilities.
:vartype semantic_settings: ~azure.search.documents.indexes.models.SemanticSettings
:ivar e_tag: The ETag of the index.
:vartype e_tag: str
"""
_validation = {
'name': {'required': True},
'fields': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'fields': {'key': 'fields', 'type': '[SearchField]'},
'scoring_profiles': {'key': 'scoringProfiles', 'type': '[ScoringProfile]'},
'default_scoring_profile': {'key': 'defaultScoringProfile', 'type': 'str'},
'cors_options': {'key': 'corsOptions', 'type': 'CorsOptions'},
'suggesters': {'key': 'suggesters', 'type': '[Suggester]'},
'analyzers': {'key': 'analyzers', 'type': '[LexicalAnalyzer]'},
'tokenizers': {'key': 'tokenizers', 'type': '[LexicalTokenizer]'},
'token_filters': {'key': 'tokenFilters', 'type': '[TokenFilter]'},
'char_filters': {'key': 'charFilters', 'type': '[CharFilter]'},
'normalizers': {'key': 'normalizers', 'type': '[LexicalNormalizer]'},
'encryption_key': {'key': 'encryptionKey', 'type': 'SearchResourceEncryptionKey'},
'similarity': {'key': 'similarity', 'type': 'Similarity'},
'semantic_settings': {'key': 'semantic', 'type': 'SemanticSettings'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
fields: List["SearchField"],
scoring_profiles: Optional[List["ScoringProfile"]] = None,
default_scoring_profile: Optional[str] = None,
cors_options: Optional["CorsOptions"] = None,
suggesters: Optional[List["Suggester"]] = None,
analyzers: Optional[List["LexicalAnalyzer"]] = None,
tokenizers: Optional[List["LexicalTokenizer"]] = None,
token_filters: Optional[List["TokenFilter"]] = None,
char_filters: Optional[List["CharFilter"]] = None,
normalizers: Optional[List["LexicalNormalizer"]] = None,
encryption_key: Optional["SearchResourceEncryptionKey"] = None,
similarity: Optional["Similarity"] = None,
semantic_settings: Optional["SemanticSettings"] = None,
e_tag: Optional[str] = None,
**kwargs
):
"""
:keyword name: Required. The name of the index.
:paramtype name: str
:keyword fields: Required. The fields of the index.
:paramtype fields: list[~azure.search.documents.indexes.models.SearchField]
:keyword scoring_profiles: The scoring profiles for the index.
:paramtype scoring_profiles: list[~azure.search.documents.indexes.models.ScoringProfile]
:keyword default_scoring_profile: The name of the scoring profile to use if none is specified
in the query. If this property is not set and no scoring profile is specified in the query,
then default scoring (tf-idf) will be used.
:paramtype default_scoring_profile: str
:keyword cors_options: Options to control Cross-Origin Resource Sharing (CORS) for the index.
:paramtype cors_options: ~azure.search.documents.indexes.models.CorsOptions
:keyword suggesters: The suggesters for the index.
:paramtype suggesters: list[~azure.search.documents.indexes.models.Suggester]
:keyword analyzers: The analyzers for the index.
:paramtype analyzers: list[~azure.search.documents.indexes.models.LexicalAnalyzer]
:keyword tokenizers: The tokenizers for the index.
:paramtype tokenizers: list[~azure.search.documents.indexes.models.LexicalTokenizer]
:keyword token_filters: The token filters for the index.
:paramtype token_filters: list[~azure.search.documents.indexes.models.TokenFilter]
:keyword char_filters: The character filters for the index.
:paramtype char_filters: list[~azure.search.documents.indexes.models.CharFilter]
:keyword normalizers: The normalizers for the index.
:paramtype normalizers: list[~azure.search.documents.indexes.models.LexicalNormalizer]
:keyword encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your data when you
want full assurance that no one, not even Microsoft, can decrypt your data in Azure Cognitive
Search. Once you have encrypted your data, it will always remain encrypted. Azure Cognitive
Search will ignore attempts to set this property to null. You can change this property as
needed if you want to rotate your encryption key; your data will be unaffected. Encryption with
customer-managed keys is not available for free search services, and is only available for paid
services created on or after January 1, 2019.
:paramtype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:keyword similarity: The type of similarity algorithm to be used when scoring and ranking the
documents matching a search query. The similarity algorithm can only be defined at index
creation time and cannot be modified on existing indexes. If null, the ClassicSimilarity
algorithm is used.
:paramtype similarity: ~azure.search.documents.indexes.models.Similarity
:keyword semantic_settings: Defines parameters for a search index that influence semantic
capabilities.
:paramtype semantic_settings: ~azure.search.documents.indexes.models.SemanticSettings
:keyword e_tag: The ETag of the index.
:paramtype e_tag: str
"""
super(SearchIndex, self).__init__(**kwargs)
self.name = name
self.fields = fields
self.scoring_profiles = scoring_profiles
self.default_scoring_profile = default_scoring_profile
self.cors_options = cors_options
self.suggesters = suggesters
self.analyzers = analyzers
self.tokenizers = tokenizers
self.token_filters = token_filters
self.char_filters = char_filters
self.normalizers = normalizers
self.encryption_key = encryption_key
self.similarity = similarity
self.semantic_settings = semantic_settings
self.e_tag = e_tag
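# Usage sketch (hypothetical example): the smallest useful index definition is
# a name plus a fields collection containing exactly one key field of type
# Edm.String.
_example_index = SearchIndex(
    name="hotels-v2",  # hypothetical index name
    fields=[
        SearchField(name="hotelId", type="Edm.String", key=True),
        SearchField(name="description", type="Edm.String", searchable=True),
    ],
)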
class SearchIndexer(msrest.serialization.Model):
"""Represents an indexer.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the indexer.
:vartype name: str
:ivar description: The description of the indexer.
:vartype description: str
:ivar data_source_name: Required. The name of the datasource from which this indexer reads
data.
:vartype data_source_name: str
:ivar skillset_name: The name of the skillset executing with this indexer.
:vartype skillset_name: str
:ivar target_index_name: Required. The name of the index to which this indexer writes data.
:vartype target_index_name: str
:ivar schedule: The schedule for this indexer.
:vartype schedule: ~azure.search.documents.indexes.models.IndexingSchedule
:ivar parameters: Parameters for indexer execution.
:vartype parameters: ~azure.search.documents.indexes.models.IndexingParameters
:ivar field_mappings: Defines mappings between fields in the data source and corresponding
target fields in the index.
:vartype field_mappings: list[~azure.search.documents.indexes.models.FieldMapping]
:ivar output_field_mappings: Output field mappings are applied after enrichment and immediately
before indexing.
:vartype output_field_mappings: list[~azure.search.documents.indexes.models.FieldMapping]
:ivar is_disabled: A value indicating whether the indexer is disabled. Default is false.
:vartype is_disabled: bool
:ivar e_tag: The ETag of the indexer.
:vartype e_tag: str
:ivar encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your indexer
definition (as well as indexer execution status) when you want full assurance that no one, not
even Microsoft, can decrypt them in Azure Cognitive Search. Once you have encrypted your
indexer definition, it will always remain encrypted. Azure Cognitive Search will ignore
attempts to set this property to null. You can change this property as needed if you want to
rotate your encryption key; your indexer definition (and indexer execution status) will be
unaffected. Encryption with customer-managed keys is not available for free search services,
and is only available for paid services created on or after January 1, 2019.
:vartype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:ivar cache: Adds caching to an enrichment pipeline to allow for incremental modification steps
without having to rebuild the index every time.
:vartype cache: ~azure.search.documents.indexes.models.SearchIndexerCache
"""
_validation = {
'name': {'required': True},
'data_source_name': {'required': True},
'target_index_name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'data_source_name': {'key': 'dataSourceName', 'type': 'str'},
'skillset_name': {'key': 'skillsetName', 'type': 'str'},
'target_index_name': {'key': 'targetIndexName', 'type': 'str'},
'schedule': {'key': 'schedule', 'type': 'IndexingSchedule'},
'parameters': {'key': 'parameters', 'type': 'IndexingParameters'},
'field_mappings': {'key': 'fieldMappings', 'type': '[FieldMapping]'},
'output_field_mappings': {'key': 'outputFieldMappings', 'type': '[FieldMapping]'},
'is_disabled': {'key': 'disabled', 'type': 'bool'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
'encryption_key': {'key': 'encryptionKey', 'type': 'SearchResourceEncryptionKey'},
'cache': {'key': 'cache', 'type': 'SearchIndexerCache'},
}
def __init__(
self,
*,
name: str,
data_source_name: str,
target_index_name: str,
description: Optional[str] = None,
skillset_name: Optional[str] = None,
schedule: Optional["IndexingSchedule"] = None,
parameters: Optional["IndexingParameters"] = None,
field_mappings: Optional[List["FieldMapping"]] = None,
output_field_mappings: Optional[List["FieldMapping"]] = None,
is_disabled: Optional[bool] = False,
e_tag: Optional[str] = None,
encryption_key: Optional["SearchResourceEncryptionKey"] = None,
cache: Optional["SearchIndexerCache"] = None,
**kwargs
):
"""
:keyword name: Required. The name of the indexer.
:paramtype name: str
:keyword description: The description of the indexer.
:paramtype description: str
:keyword data_source_name: Required. The name of the datasource from which this indexer reads
data.
:paramtype data_source_name: str
:keyword skillset_name: The name of the skillset executing with this indexer.
:paramtype skillset_name: str
:keyword target_index_name: Required. The name of the index to which this indexer writes data.
:paramtype target_index_name: str
:keyword schedule: The schedule for this indexer.
:paramtype schedule: ~azure.search.documents.indexes.models.IndexingSchedule
:keyword parameters: Parameters for indexer execution.
:paramtype parameters: ~azure.search.documents.indexes.models.IndexingParameters
:keyword field_mappings: Defines mappings between fields in the data source and corresponding
target fields in the index.
:paramtype field_mappings: list[~azure.search.documents.indexes.models.FieldMapping]
:keyword output_field_mappings: Output field mappings are applied after enrichment and
immediately before indexing.
:paramtype output_field_mappings: list[~azure.search.documents.indexes.models.FieldMapping]
:keyword is_disabled: A value indicating whether the indexer is disabled. Default is false.
:paramtype is_disabled: bool
:keyword e_tag: The ETag of the indexer.
:paramtype e_tag: str
:keyword encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your indexer
definition (as well as indexer execution status) when you want full assurance that no one, not
even Microsoft, can decrypt them in Azure Cognitive Search. Once you have encrypted your
indexer definition, it will always remain encrypted. Azure Cognitive Search will ignore
attempts to set this property to null. You can change this property as needed if you want to
rotate your encryption key; your indexer definition (and indexer execution status) will be
unaffected. Encryption with customer-managed keys is not available for free search services,
and is only available for paid services created on or after January 1, 2019.
:paramtype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:keyword cache: Adds caching to an enrichment pipeline to allow for incremental modification
steps without having to rebuild the index every time.
:paramtype cache: ~azure.search.documents.indexes.models.SearchIndexerCache
"""
super(SearchIndexer, self).__init__(**kwargs)
self.name = name
self.description = description
self.data_source_name = data_source_name
self.skillset_name = skillset_name
self.target_index_name = target_index_name
self.schedule = schedule
self.parameters = parameters
self.field_mappings = field_mappings
self.output_field_mappings = output_field_mappings
self.is_disabled = is_disabled
self.e_tag = e_tag
self.encryption_key = encryption_key
self.cache = cache
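# Illustrative sketch (not part of the generated model surface): building a
# SearchIndexer that reads from a data source on an hourly schedule. The
# IndexingSchedule and FieldMapping constructors are assumed from their
# definitions elsewhere in this module; the "hotels-*" names are hypothetical.
def _example_build_search_indexer():
    import datetime
    return SearchIndexer(
        name="hotels-indexer",             # hypothetical indexer name
        data_source_name="hotels-ds",      # hypothetical data source name
        target_index_name="hotels-index",  # hypothetical index name
        schedule=IndexingSchedule(interval=datetime.timedelta(hours=1)),
        field_mappings=[
            # Map a source column onto the index key field.
            FieldMapping(source_field_name="HotelId", target_field_name="id"),
        ],
        is_disabled=False,
    )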
class SearchIndexerCache(msrest.serialization.Model):
"""SearchIndexerCache.
:ivar storage_connection_string: The connection string to the storage account where the cache
data will be persisted.
:vartype storage_connection_string: str
:ivar enable_reprocessing: Specifies whether incremental reprocessing is enabled.
:vartype enable_reprocessing: bool
"""
_attribute_map = {
'storage_connection_string': {'key': 'storageConnectionString', 'type': 'str'},
'enable_reprocessing': {'key': 'enableReprocessing', 'type': 'bool'},
}
def __init__(
self,
*,
storage_connection_string: Optional[str] = None,
enable_reprocessing: Optional[bool] = None,
**kwargs
):
"""
:keyword storage_connection_string: The connection string to the storage account where the
cache data will be persisted.
:paramtype storage_connection_string: str
:keyword enable_reprocessing: Specifies whether incremental reprocessing is enabled.
:paramtype enable_reprocessing: bool
"""
super(SearchIndexerCache, self).__init__(**kwargs)
self.storage_connection_string = storage_connection_string
self.enable_reprocessing = enable_reprocessing
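# Illustrative sketch: attaching an enrichment cache to an existing indexer,
# using only the constructor defined above. The connection string is a
# placeholder, not a working credential.
def _example_enable_indexer_cache(indexer):
    indexer.cache = SearchIndexerCache(
        storage_connection_string="<storage-connection-string>",  # placeholder
        enable_reprocessing=True,  # opt in to incremental reprocessing
    )
    return indexer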
class SearchIndexerDataContainer(msrest.serialization.Model):
"""Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the table or view (for Azure SQL data source) or collection
(for CosmosDB data source) that will be indexed.
:vartype name: str
:ivar query: A query that is applied to this data container. The syntax and meaning of this
parameter is datasource-specific. Not supported by Azure SQL datasources.
:vartype query: str
"""
_validation = {
'name': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'query': {'key': 'query', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
query: Optional[str] = None,
**kwargs
):
"""
:keyword name: Required. The name of the table or view (for Azure SQL data source) or
collection (for CosmosDB data source) that will be indexed.
:paramtype name: str
:keyword query: A query that is applied to this data container. The syntax and meaning of this
parameter is datasource-specific. Not supported by Azure SQL datasources.
:paramtype query: str
"""
super(SearchIndexerDataContainer, self).__init__(**kwargs)
self.name = name
self.query = query
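# Illustrative sketch: describing which entity an indexer should read. The
# container name and the Cosmos DB-style query are hypothetical; per the
# docstring above, `query` is not supported for Azure SQL data sources.
def _example_build_data_container():
    return SearchIndexerDataContainer(
        name="hotels",  # table, view, or collection name
        query="SELECT * FROM c WHERE c._ts >= @HighWaterMark",  # hypothetical query
    )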
class SearchIndexerDataIdentity(msrest.serialization.Model):
"""Abstract base type for data identities.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: SearchIndexerDataNoneIdentity, SearchIndexerDataUserAssignedIdentity.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the identity. Constant filled by
server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
_subtype_map = {
'odata_type': {'#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity': 'SearchIndexerDataNoneIdentity', '#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity': 'SearchIndexerDataUserAssignedIdentity'}
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerDataIdentity, self).__init__(**kwargs)
self.odata_type = None # type: Optional[str]
class SearchIndexerDataNoneIdentity(SearchIndexerDataIdentity):
"""Clears the identity property of a datasource.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the identity. Constant filled by
server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerDataNoneIdentity, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity' # type: str
class SearchIndexerDataSource(msrest.serialization.Model):
"""Represents a datasource definition, which can be used to configure an indexer.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the datasource.
:vartype name: str
:ivar description: The description of the datasource.
:vartype description: str
:ivar type: Required. The type of the datasource. Possible values include: "azuresql",
"cosmosdb", "azureblob", "azuretable", "mysql", "adlsgen2".
:vartype type: str or ~azure.search.documents.indexes.models.SearchIndexerDataSourceType
:ivar credentials: Required. Credentials for the datasource.
:vartype credentials: ~azure.search.documents.indexes.models.DataSourceCredentials
:ivar container: Required. The data container for the datasource.
:vartype container: ~azure.search.documents.indexes.models.SearchIndexerDataContainer
:ivar identity: An explicit managed identity to use for this datasource. If not specified and
the connection string is a managed identity, the system-assigned managed identity is used. If
not specified, the value remains unchanged. If "none" is specified, the value of this property
is cleared.
:vartype identity: ~azure.search.documents.indexes.models.SearchIndexerDataIdentity
:ivar data_change_detection_policy: The data change detection policy for the datasource.
:vartype data_change_detection_policy:
~azure.search.documents.indexes.models.DataChangeDetectionPolicy
:ivar data_deletion_detection_policy: The data deletion detection policy for the datasource.
:vartype data_deletion_detection_policy:
~azure.search.documents.indexes.models.DataDeletionDetectionPolicy
:ivar e_tag: The ETag of the data source.
:vartype e_tag: str
:ivar encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your datasource
definition when you want full assurance that no one, not even Microsoft, can decrypt your data
source definition in Azure Cognitive Search. Once you have encrypted your data source
definition, it will always remain encrypted. Azure Cognitive Search will ignore attempts to set
this property to null. You can change this property as needed if you want to rotate your
encryption key; your datasource definition will be unaffected. Encryption with customer-managed
keys is not available for free search services, and is only available for paid services created
on or after January 1, 2019.
:vartype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
"""
_validation = {
'name': {'required': True},
'type': {'required': True},
'credentials': {'required': True},
'container': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'credentials': {'key': 'credentials', 'type': 'DataSourceCredentials'},
'container': {'key': 'container', 'type': 'SearchIndexerDataContainer'},
'identity': {'key': 'identity', 'type': 'SearchIndexerDataIdentity'},
'data_change_detection_policy': {'key': 'dataChangeDetectionPolicy', 'type': 'DataChangeDetectionPolicy'},
'data_deletion_detection_policy': {'key': 'dataDeletionDetectionPolicy', 'type': 'DataDeletionDetectionPolicy'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
'encryption_key': {'key': 'encryptionKey', 'type': 'SearchResourceEncryptionKey'},
}
def __init__(
self,
*,
name: str,
type: Union[str, "SearchIndexerDataSourceType"],
credentials: "DataSourceCredentials",
container: "SearchIndexerDataContainer",
description: Optional[str] = None,
identity: Optional["SearchIndexerDataIdentity"] = None,
data_change_detection_policy: Optional["DataChangeDetectionPolicy"] = None,
data_deletion_detection_policy: Optional["DataDeletionDetectionPolicy"] = None,
e_tag: Optional[str] = None,
encryption_key: Optional["SearchResourceEncryptionKey"] = None,
**kwargs
):
"""
:keyword name: Required. The name of the datasource.
:paramtype name: str
:keyword description: The description of the datasource.
:paramtype description: str
:keyword type: Required. The type of the datasource. Possible values include: "azuresql",
"cosmosdb", "azureblob", "azuretable", "mysql", "adlsgen2".
:paramtype type: str or ~azure.search.documents.indexes.models.SearchIndexerDataSourceType
:keyword credentials: Required. Credentials for the datasource.
:paramtype credentials: ~azure.search.documents.indexes.models.DataSourceCredentials
:keyword container: Required. The data container for the datasource.
:paramtype container: ~azure.search.documents.indexes.models.SearchIndexerDataContainer
:keyword identity: An explicit managed identity to use for this datasource. If not specified
and the connection string is a managed identity, the system-assigned managed identity is used.
If not specified, the value remains unchanged. If "none" is specified, the value of this
property is cleared.
:paramtype identity: ~azure.search.documents.indexes.models.SearchIndexerDataIdentity
:keyword data_change_detection_policy: The data change detection policy for the datasource.
:paramtype data_change_detection_policy:
~azure.search.documents.indexes.models.DataChangeDetectionPolicy
:keyword data_deletion_detection_policy: The data deletion detection policy for the datasource.
:paramtype data_deletion_detection_policy:
~azure.search.documents.indexes.models.DataDeletionDetectionPolicy
:keyword e_tag: The ETag of the data source.
:paramtype e_tag: str
:keyword encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your datasource
definition when you want full assurance that no one, not even Microsoft, can decrypt your data
source definition in Azure Cognitive Search. Once you have encrypted your data source
definition, it will always remain encrypted. Azure Cognitive Search will ignore attempts to set
this property to null. You can change this property as needed if you want to rotate your
encryption key; your datasource definition will be unaffected. Encryption with customer-managed
keys is not available for free search services, and is only available for paid services created
on or after January 1, 2019.
:paramtype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
"""
super(SearchIndexerDataSource, self).__init__(**kwargs)
self.name = name
self.description = description
self.type = type
self.credentials = credentials
self.container = container
self.identity = identity
self.data_change_detection_policy = data_change_detection_policy
self.data_deletion_detection_policy = data_deletion_detection_policy
self.e_tag = e_tag
self.encryption_key = encryption_key
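# Illustrative sketch: assembling a complete datasource definition. The
# DataSourceCredentials constructor is assumed from its definition elsewhere
# in this module, and the connection string is a placeholder.
def _example_build_data_source():
    return SearchIndexerDataSource(
        name="hotels-ds",  # hypothetical datasource name
        type="cosmosdb",   # one of the documented SearchIndexerDataSourceType values
        credentials=DataSourceCredentials(
            connection_string="<cosmosdb-connection-string>",  # placeholder
        ),
        container=SearchIndexerDataContainer(name="hotels"),
        description="Hotels collection in Cosmos DB",
    )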
class SearchIndexerDataUserAssignedIdentity(SearchIndexerDataIdentity):
"""Specifies the identity for a datasource to use.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the identity. Constant filled by
server.
:vartype odata_type: str
:ivar user_assigned_identity: Required. The fully qualified Azure resource Id of a user
assigned managed identity, typically in the form
"/subscriptions/12345678-1234-1234-1234-1234567890ab/resourceGroups/rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myId"
that should have been assigned to the search service.
:vartype user_assigned_identity: str
"""
_validation = {
'odata_type': {'required': True},
'user_assigned_identity': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'user_assigned_identity': {'key': 'userAssignedIdentity', 'type': 'str'},
}
def __init__(
self,
*,
user_assigned_identity: str,
**kwargs
):
"""
:keyword user_assigned_identity: Required. The fully qualified Azure resource Id of a user
assigned managed identity, typically in the form
"/subscriptions/12345678-1234-1234-1234-1234567890ab/resourceGroups/rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myId"
that should have been assigned to the search service.
:paramtype user_assigned_identity: str
"""
super(SearchIndexerDataUserAssignedIdentity, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity' # type: str
self.user_assigned_identity = user_assigned_identity
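# Illustrative sketch: selecting the identity a datasource should use.
# Passing a SearchIndexerDataUserAssignedIdentity pins a specific managed
# identity, while SearchIndexerDataNoneIdentity clears any previously set
# identity, as described in the docstrings above. The resource ID is a
# placeholder supplied by the caller.
def _example_set_data_source_identity(data_source, resource_id=None):
    if resource_id:
        data_source.identity = SearchIndexerDataUserAssignedIdentity(
            user_assigned_identity=resource_id,
        )
    else:
        data_source.identity = SearchIndexerDataNoneIdentity()
    return data_source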
class SearchIndexerError(msrest.serialization.Model):
"""Represents an item- or document-level indexing error.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar key: The key of the item for which indexing failed.
:vartype key: str
:ivar error_message: Required. The message describing the error that occurred while processing
the item.
:vartype error_message: str
:ivar status_code: Required. The status code indicating why the indexing operation failed.
Possible values include: 400 for a malformed input document, 404 for document not found, 409
for a version conflict, 422 when the index is temporarily unavailable, or 503 when the
service is too busy.
:vartype status_code: int
:ivar name: The name of the source at which the error originated. For example, this could refer
to a particular skill in the attached skillset. This may not always be available.
:vartype name: str
:ivar details: Additional, verbose details about the error to assist in debugging the indexer.
This may not always be available.
:vartype details: str
:ivar documentation_link: A link to a troubleshooting guide for these classes of errors. This
may not always be available.
:vartype documentation_link: str
"""
_validation = {
'key': {'readonly': True},
'error_message': {'required': True, 'readonly': True},
'status_code': {'required': True, 'readonly': True},
'name': {'readonly': True},
'details': {'readonly': True},
'documentation_link': {'readonly': True},
}
_attribute_map = {
'key': {'key': 'key', 'type': 'str'},
'error_message': {'key': 'errorMessage', 'type': 'str'},
'status_code': {'key': 'statusCode', 'type': 'int'},
'name': {'key': 'name', 'type': 'str'},
'details': {'key': 'details', 'type': 'str'},
'documentation_link': {'key': 'documentationLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerError, self).__init__(**kwargs)
self.key = None
self.error_message = None
self.status_code = None
self.name = None
self.details = None
self.documentation_link = None
class SearchIndexerKnowledgeStore(msrest.serialization.Model):
"""Definition of additional projections to azure blob, table, or files, of enriched data.
All required parameters must be populated in order to send to Azure.
:ivar storage_connection_string: Required. The connection string to the storage account
projections will be stored in.
:vartype storage_connection_string: str
:ivar projections: Required. A list of additional projections to perform during indexing.
:vartype projections:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreProjection]
"""
_validation = {
'storage_connection_string': {'required': True},
'projections': {'required': True},
}
_attribute_map = {
'storage_connection_string': {'key': 'storageConnectionString', 'type': 'str'},
'projections': {'key': 'projections', 'type': '[SearchIndexerKnowledgeStoreProjection]'},
}
def __init__(
self,
*,
storage_connection_string: str,
projections: List["SearchIndexerKnowledgeStoreProjection"],
**kwargs
):
"""
:keyword storage_connection_string: Required. The connection string to the storage account
projections will be stored in.
:paramtype storage_connection_string: str
:keyword projections: Required. A list of additional projections to perform during indexing.
:paramtype projections:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreProjection]
"""
super(SearchIndexerKnowledgeStore, self).__init__(**kwargs)
self.storage_connection_string = storage_connection_string
self.projections = projections
class SearchIndexerKnowledgeStoreProjectionSelector(msrest.serialization.Model):
"""Abstract class to share properties between concrete selectors.
:ivar reference_key_name: Name of reference key to different projection.
:vartype reference_key_name: str
:ivar generated_key_name: Name of generated key to store projection under.
:vartype generated_key_name: str
:ivar source: Source data to project.
:vartype source: str
:ivar source_context: Source context for complex projections.
:vartype source_context: str
:ivar inputs: Nested inputs for complex projections.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
"""
_attribute_map = {
'reference_key_name': {'key': 'referenceKeyName', 'type': 'str'},
'generated_key_name': {'key': 'generatedKeyName', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
}
def __init__(
self,
*,
reference_key_name: Optional[str] = None,
generated_key_name: Optional[str] = None,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword reference_key_name: Name of reference key to different projection.
:paramtype reference_key_name: str
:keyword generated_key_name: Name of generated key to store projection under.
:paramtype generated_key_name: str
:keyword source: Source data to project.
:paramtype source: str
:keyword source_context: Source context for complex projections.
:paramtype source_context: str
:keyword inputs: Nested inputs for complex projections.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
"""
super(SearchIndexerKnowledgeStoreProjectionSelector, self).__init__(**kwargs)
self.reference_key_name = reference_key_name
self.generated_key_name = generated_key_name
self.source = source
self.source_context = source_context
self.inputs = inputs
class SearchIndexerKnowledgeStoreBlobProjectionSelector(SearchIndexerKnowledgeStoreProjectionSelector):
"""Abstract class to share properties between concrete selectors.
All required parameters must be populated in order to send to Azure.
:ivar reference_key_name: Name of reference key to different projection.
:vartype reference_key_name: str
:ivar generated_key_name: Name of generated key to store projection under.
:vartype generated_key_name: str
:ivar source: Source data to project.
:vartype source: str
:ivar source_context: Source context for complex projections.
:vartype source_context: str
:ivar inputs: Nested inputs for complex projections.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar storage_container: Required. Blob container to store projections in.
:vartype storage_container: str
"""
_validation = {
'storage_container': {'required': True},
}
_attribute_map = {
'reference_key_name': {'key': 'referenceKeyName', 'type': 'str'},
'generated_key_name': {'key': 'generatedKeyName', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'storage_container': {'key': 'storageContainer', 'type': 'str'},
}
def __init__(
self,
*,
storage_container: str,
reference_key_name: Optional[str] = None,
generated_key_name: Optional[str] = None,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword reference_key_name: Name of reference key to different projection.
:paramtype reference_key_name: str
:keyword generated_key_name: Name of generated key to store projection under.
:paramtype generated_key_name: str
:keyword source: Source data to project.
:paramtype source: str
:keyword source_context: Source context for complex projections.
:paramtype source_context: str
:keyword inputs: Nested inputs for complex projections.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword storage_container: Required. Blob container to store projections in.
:paramtype storage_container: str
"""
super(SearchIndexerKnowledgeStoreBlobProjectionSelector, self).__init__(reference_key_name=reference_key_name, generated_key_name=generated_key_name, source=source, source_context=source_context, inputs=inputs, **kwargs)
self.storage_container = storage_container
class SearchIndexerKnowledgeStoreFileProjectionSelector(SearchIndexerKnowledgeStoreBlobProjectionSelector):
"""Projection definition for what data to store in Azure Files.
All required parameters must be populated in order to send to Azure.
:ivar reference_key_name: Name of reference key to different projection.
:vartype reference_key_name: str
:ivar generated_key_name: Name of generated key to store projection under.
:vartype generated_key_name: str
:ivar source: Source data to project.
:vartype source: str
:ivar source_context: Source context for complex projections.
:vartype source_context: str
:ivar inputs: Nested inputs for complex projections.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar storage_container: Required. Blob container to store projections in.
:vartype storage_container: str
"""
_validation = {
'storage_container': {'required': True},
}
_attribute_map = {
'reference_key_name': {'key': 'referenceKeyName', 'type': 'str'},
'generated_key_name': {'key': 'generatedKeyName', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'storage_container': {'key': 'storageContainer', 'type': 'str'},
}
def __init__(
self,
*,
storage_container: str,
reference_key_name: Optional[str] = None,
generated_key_name: Optional[str] = None,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword reference_key_name: Name of reference key to different projection.
:paramtype reference_key_name: str
:keyword generated_key_name: Name of generated key to store projection under.
:paramtype generated_key_name: str
:keyword source: Source data to project.
:paramtype source: str
:keyword source_context: Source context for complex projections.
:paramtype source_context: str
:keyword inputs: Nested inputs for complex projections.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword storage_container: Required. Blob container to store projections in.
:paramtype storage_container: str
"""
super(SearchIndexerKnowledgeStoreFileProjectionSelector, self).__init__(reference_key_name=reference_key_name, generated_key_name=generated_key_name, source=source, source_context=source_context, inputs=inputs, storage_container=storage_container, **kwargs)
class SearchIndexerKnowledgeStoreObjectProjectionSelector(SearchIndexerKnowledgeStoreBlobProjectionSelector):
"""Projection definition for what data to store in Azure Blob.
All required parameters must be populated in order to send to Azure.
:ivar reference_key_name: Name of reference key to different projection.
:vartype reference_key_name: str
:ivar generated_key_name: Name of generated key to store projection under.
:vartype generated_key_name: str
:ivar source: Source data to project.
:vartype source: str
:ivar source_context: Source context for complex projections.
:vartype source_context: str
:ivar inputs: Nested inputs for complex projections.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar storage_container: Required. Blob container to store projections in.
:vartype storage_container: str
"""
_validation = {
'storage_container': {'required': True},
}
_attribute_map = {
'reference_key_name': {'key': 'referenceKeyName', 'type': 'str'},
'generated_key_name': {'key': 'generatedKeyName', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'storage_container': {'key': 'storageContainer', 'type': 'str'},
}
def __init__(
self,
*,
storage_container: str,
reference_key_name: Optional[str] = None,
generated_key_name: Optional[str] = None,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword reference_key_name: Name of reference key to different projection.
:paramtype reference_key_name: str
:keyword generated_key_name: Name of generated key to store projection under.
:paramtype generated_key_name: str
:keyword source: Source data to project.
:paramtype source: str
:keyword source_context: Source context for complex projections.
:paramtype source_context: str
:keyword inputs: Nested inputs for complex projections.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword storage_container: Required. Blob container to store projections in.
:paramtype storage_container: str
"""
super(SearchIndexerKnowledgeStoreObjectProjectionSelector, self).__init__(reference_key_name=reference_key_name, generated_key_name=generated_key_name, source=source, source_context=source_context, inputs=inputs, storage_container=storage_container, **kwargs)
class SearchIndexerKnowledgeStoreProjection(msrest.serialization.Model):
"""Container object for various projection selectors.
:ivar tables: Projections to Azure Table storage.
:vartype tables:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreTableProjectionSelector]
:ivar objects: Projections to Azure Blob storage.
:vartype objects:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreObjectProjectionSelector]
:ivar files: Projections to Azure File storage.
:vartype files:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreFileProjectionSelector]
"""
_attribute_map = {
'tables': {'key': 'tables', 'type': '[SearchIndexerKnowledgeStoreTableProjectionSelector]'},
'objects': {'key': 'objects', 'type': '[SearchIndexerKnowledgeStoreObjectProjectionSelector]'},
'files': {'key': 'files', 'type': '[SearchIndexerKnowledgeStoreFileProjectionSelector]'},
}
def __init__(
self,
*,
tables: Optional[List["SearchIndexerKnowledgeStoreTableProjectionSelector"]] = None,
objects: Optional[List["SearchIndexerKnowledgeStoreObjectProjectionSelector"]] = None,
files: Optional[List["SearchIndexerKnowledgeStoreFileProjectionSelector"]] = None,
**kwargs
):
"""
:keyword tables: Projections to Azure Table storage.
:paramtype tables:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreTableProjectionSelector]
:keyword objects: Projections to Azure Blob storage.
:paramtype objects:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreObjectProjectionSelector]
:keyword files: Projections to Azure File storage.
:paramtype files:
list[~azure.search.documents.indexes.models.SearchIndexerKnowledgeStoreFileProjectionSelector]
"""
super(SearchIndexerKnowledgeStoreProjection, self).__init__(**kwargs)
self.tables = tables
self.objects = objects
self.files = files
class SearchIndexerKnowledgeStoreTableProjectionSelector(SearchIndexerKnowledgeStoreProjectionSelector):
"""Description for what data to store in Azure Tables.
All required parameters must be populated in order to send to Azure.
:ivar reference_key_name: Name of reference key to different projection.
:vartype reference_key_name: str
:ivar generated_key_name: Name of generated key to store projection under.
:vartype generated_key_name: str
:ivar source: Source data to project.
:vartype source: str
:ivar source_context: Source context for complex projections.
:vartype source_context: str
:ivar inputs: Nested inputs for complex projections.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar table_name: Required. Name of the Azure table to store projected data in.
:vartype table_name: str
"""
_validation = {
'table_name': {'required': True},
}
_attribute_map = {
'reference_key_name': {'key': 'referenceKeyName', 'type': 'str'},
'generated_key_name': {'key': 'generatedKeyName', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'source_context': {'key': 'sourceContext', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'table_name': {'key': 'tableName', 'type': 'str'},
}
def __init__(
self,
*,
table_name: str,
reference_key_name: Optional[str] = None,
generated_key_name: Optional[str] = None,
source: Optional[str] = None,
source_context: Optional[str] = None,
inputs: Optional[List["InputFieldMappingEntry"]] = None,
**kwargs
):
"""
:keyword reference_key_name: Name of reference key to different projection.
:paramtype reference_key_name: str
:keyword generated_key_name: Name of generated key to store projection under.
:paramtype generated_key_name: str
:keyword source: Source data to project.
:paramtype source: str
:keyword source_context: Source context for complex projections.
:paramtype source_context: str
:keyword inputs: Nested inputs for complex projections.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword table_name: Required. Name of the Azure table to store projected data in.
:paramtype table_name: str
"""
super(SearchIndexerKnowledgeStoreTableProjectionSelector, self).__init__(reference_key_name=reference_key_name, generated_key_name=generated_key_name, source=source, source_context=source_context, inputs=inputs, **kwargs)
self.table_name = table_name
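# Illustrative sketch: projecting enriched output into a knowledge store,
# combining the selector and container models defined above. All names,
# source paths, and the connection string are hypothetical placeholders.
def _example_build_knowledge_store():
    projection = SearchIndexerKnowledgeStoreProjection(
        tables=[
            SearchIndexerKnowledgeStoreTableProjectionSelector(
                table_name="hotelReviews",     # hypothetical Azure table name
                generated_key_name="reviewId",
                source="/document/reviews/*",  # hypothetical enrichment path
            ),
        ],
        objects=[
            SearchIndexerKnowledgeStoreObjectProjectionSelector(
                storage_container="enriched-docs",  # hypothetical blob container
                source="/document",
            ),
        ],
    )
    return SearchIndexerKnowledgeStore(
        storage_connection_string="<storage-connection-string>",  # placeholder
        projections=[projection],
    )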
class SearchIndexerLimits(msrest.serialization.Model):
"""SearchIndexerLimits.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar max_run_time: The maximum duration that the indexer is permitted to run for one
execution.
:vartype max_run_time: ~datetime.timedelta
:ivar max_document_extraction_size: The maximum size of a document, in bytes, which will be
considered valid for indexing.
:vartype max_document_extraction_size: long
:ivar max_document_content_characters_to_extract: The maximum number of characters that will be
extracted from a document picked up for indexing.
:vartype max_document_content_characters_to_extract: long
"""
_validation = {
'max_run_time': {'readonly': True},
'max_document_extraction_size': {'readonly': True},
'max_document_content_characters_to_extract': {'readonly': True},
}
_attribute_map = {
'max_run_time': {'key': 'maxRunTime', 'type': 'duration'},
'max_document_extraction_size': {'key': 'maxDocumentExtractionSize', 'type': 'long'},
'max_document_content_characters_to_extract': {'key': 'maxDocumentContentCharactersToExtract', 'type': 'long'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerLimits, self).__init__(**kwargs)
self.max_run_time = None
self.max_document_extraction_size = None
self.max_document_content_characters_to_extract = None
class SearchIndexerSkillset(msrest.serialization.Model):
"""A list of skills.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the skillset.
:vartype name: str
:ivar description: The description of the skillset.
:vartype description: str
:ivar skills: Required. A list of skills in the skillset.
:vartype skills: list[~azure.search.documents.indexes.models.SearchIndexerSkill]
:ivar cognitive_services_account: Details about cognitive services to be used when running
skills.
:vartype cognitive_services_account:
~azure.search.documents.indexes.models.CognitiveServicesAccount
:ivar knowledge_store: Definition of additional projections of enriched data to Azure Blob,
Table, or File storage.
:vartype knowledge_store: ~azure.search.documents.indexes.models.SearchIndexerKnowledgeStore
:ivar e_tag: The ETag of the skillset.
:vartype e_tag: str
:ivar encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your skillset
definition when you want full assurance that no one, not even Microsoft, can decrypt your
skillset definition in Azure Cognitive Search. Once you have encrypted your skillset
definition, it will always remain encrypted. Azure Cognitive Search will ignore attempts to set
this property to null. You can change this property as needed if you want to rotate your
encryption key; your skillset definition will be unaffected. Encryption with customer-managed
keys is not available for free search services, and is only available for paid services created
on or after January 1, 2019.
:vartype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
"""
_validation = {
'name': {'required': True},
'skills': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'skills': {'key': 'skills', 'type': '[SearchIndexerSkill]'},
'cognitive_services_account': {'key': 'cognitiveServices', 'type': 'CognitiveServicesAccount'},
'knowledge_store': {'key': 'knowledgeStore', 'type': 'SearchIndexerKnowledgeStore'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
'encryption_key': {'key': 'encryptionKey', 'type': 'SearchResourceEncryptionKey'},
}
def __init__(
self,
*,
name: str,
skills: List["SearchIndexerSkill"],
description: Optional[str] = None,
cognitive_services_account: Optional["CognitiveServicesAccount"] = None,
knowledge_store: Optional["SearchIndexerKnowledgeStore"] = None,
e_tag: Optional[str] = None,
encryption_key: Optional["SearchResourceEncryptionKey"] = None,
**kwargs
):
"""
:keyword name: Required. The name of the skillset.
:paramtype name: str
:keyword description: The description of the skillset.
:paramtype description: str
:keyword skills: Required. A list of skills in the skillset.
:paramtype skills: list[~azure.search.documents.indexes.models.SearchIndexerSkill]
:keyword cognitive_services_account: Details about cognitive services to be used when running
skills.
:paramtype cognitive_services_account:
~azure.search.documents.indexes.models.CognitiveServicesAccount
:keyword knowledge_store: Definition of additional projections of enriched data to Azure Blob,
Table, or File storage.
:paramtype knowledge_store: ~azure.search.documents.indexes.models.SearchIndexerKnowledgeStore
:keyword e_tag: The ETag of the skillset.
:paramtype e_tag: str
:keyword encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your skillset
definition when you want full assurance that no one, not even Microsoft, can decrypt your
skillset definition in Azure Cognitive Search. Once you have encrypted your skillset
definition, it will always remain encrypted. Azure Cognitive Search will ignore attempts to set
this property to null. You can change this property as needed if you want to rotate your
encryption key; your skillset definition will be unaffected. Encryption with customer-managed
keys is not available for free search services, and is only available for paid services created
on or after January 1, 2019.
:paramtype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
"""
super(SearchIndexerSkillset, self).__init__(**kwargs)
self.name = name
self.description = description
self.skills = skills
self.cognitive_services_account = cognitive_services_account
self.knowledge_store = knowledge_store
self.e_tag = e_tag
self.encryption_key = encryption_key
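# Illustrative sketch: a minimal skillset pairing one skill with a name and
# description. SentimentSkillV3 is defined later in this module (names are
# resolved when the function runs), and the InputFieldMappingEntry /
# OutputFieldMappingEntry constructors are assumed from their definitions
# elsewhere in this module.
def _example_build_skillset():
    sentiment = SentimentSkillV3(
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="sentiment", target_name="sentiment")],
    )
    return SearchIndexerSkillset(
        name="hotels-skillset",  # hypothetical skillset name
        skills=[sentiment],
        description="Scores document sentiment during enrichment",
    )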
class SearchIndexerStatus(msrest.serialization.Model):
"""Represents the current status and execution history of an indexer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar status: Required. Overall indexer status. Possible values include: "unknown", "error",
"running".
:vartype status: str or ~azure.search.documents.indexes.models.IndexerStatus
:ivar last_result: The result of the most recent or an in-progress indexer execution.
:vartype last_result: ~azure.search.documents.indexes.models.IndexerExecutionResult
:ivar execution_history: Required. History of the recent indexer executions, sorted in reverse
chronological order.
:vartype execution_history: list[~azure.search.documents.indexes.models.IndexerExecutionResult]
:ivar limits: Required. The execution limits for the indexer.
:vartype limits: ~azure.search.documents.indexes.models.SearchIndexerLimits
"""
_validation = {
'status': {'required': True, 'readonly': True},
'last_result': {'readonly': True},
'execution_history': {'required': True, 'readonly': True},
'limits': {'required': True, 'readonly': True},
}
_attribute_map = {
'status': {'key': 'status', 'type': 'str'},
'last_result': {'key': 'lastResult', 'type': 'IndexerExecutionResult'},
'execution_history': {'key': 'executionHistory', 'type': '[IndexerExecutionResult]'},
'limits': {'key': 'limits', 'type': 'SearchIndexerLimits'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerStatus, self).__init__(**kwargs)
self.status = None
self.last_result = None
self.execution_history = None
self.limits = None
class SearchIndexerWarning(msrest.serialization.Model):
"""Represents an item-level warning.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar key: The key of the item which generated a warning.
:vartype key: str
:ivar message: Required. The message describing the warning that occurred while processing the
item.
:vartype message: str
:ivar name: The name of the source at which the warning originated. For example, this could
refer to a particular skill in the attached skillset. This may not always be available.
:vartype name: str
:ivar details: Additional, verbose details about the warning to assist in debugging the
indexer. This may not always be available.
:vartype details: str
:ivar documentation_link: A link to a troubleshooting guide for these classes of warnings. This
may not always be available.
:vartype documentation_link: str
"""
_validation = {
'key': {'readonly': True},
'message': {'required': True, 'readonly': True},
'name': {'readonly': True},
'details': {'readonly': True},
'documentation_link': {'readonly': True},
}
_attribute_map = {
'key': {'key': 'key', 'type': 'str'},
'message': {'key': 'message', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'details': {'key': 'details', 'type': 'str'},
'documentation_link': {'key': 'documentationLink', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SearchIndexerWarning, self).__init__(**kwargs)
self.key = None
self.message = None
self.name = None
self.details = None
self.documentation_link = None
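# Illustrative sketch: summarizing a server-populated SearchIndexerStatus.
# This helper only reads fields; the `errors` and `warnings` attributes are
# assumed to exist on IndexerExecutionResult, which is defined elsewhere in
# this module.
def _example_summarize_status(status):
    print(f"Overall status: {status.status}")
    if status.last_result is not None:
        for error in status.last_result.errors or []:      # assumed attribute
            print(f"error: {error.error_message} (status {error.status_code})")
        for warning in status.last_result.warnings or []:  # assumed attribute
            print(f"warning: {warning.message}")
    print(f"Executions recorded: {len(status.execution_history)}")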
class SearchResourceEncryptionKey(msrest.serialization.Model):
"""A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure Cognitive Search, such as indexes and synonym maps.
All required parameters must be populated in order to send to Azure.
:ivar key_name: Required. The name of your Azure Key Vault key to be used to encrypt your data
at rest.
:vartype key_name: str
:ivar key_version: Required. The version of your Azure Key Vault key to be used to encrypt your
data at rest.
:vartype key_version: str
:ivar vault_uri: Required. The URI of your Azure Key Vault, also referred to as DNS name, that
contains the key to be used to encrypt your data at rest. An example URI might be
https://my-keyvault-name.vault.azure.net.
:vartype vault_uri: str
:ivar access_credentials: Optional Azure Active Directory credentials used for accessing your
Azure Key Vault. Not required if using managed identity instead.
:vartype access_credentials:
~azure.search.documents.indexes.models.AzureActiveDirectoryApplicationCredentials
:ivar identity: An explicit managed identity to use for this encryption key. If not specified
and the access credentials property is null, the system-assigned managed identity is used. On
update to the resource, if the explicit identity is unspecified, it remains unchanged. If
"none" is specified, the value of this property is cleared.
:vartype identity: ~azure.search.documents.indexes.models.SearchIndexerDataIdentity
"""
_validation = {
'key_name': {'required': True},
'key_version': {'required': True},
'vault_uri': {'required': True},
}
_attribute_map = {
'key_name': {'key': 'keyVaultKeyName', 'type': 'str'},
'key_version': {'key': 'keyVaultKeyVersion', 'type': 'str'},
'vault_uri': {'key': 'keyVaultUri', 'type': 'str'},
'access_credentials': {'key': 'accessCredentials', 'type': 'AzureActiveDirectoryApplicationCredentials'},
'identity': {'key': 'identity', 'type': 'SearchIndexerDataIdentity'},
}
def __init__(
self,
*,
key_name: str,
key_version: str,
vault_uri: str,
access_credentials: Optional["AzureActiveDirectoryApplicationCredentials"] = None,
identity: Optional["SearchIndexerDataIdentity"] = None,
**kwargs
):
"""
:keyword key_name: Required. The name of your Azure Key Vault key to be used to encrypt your
data at rest.
:paramtype key_name: str
:keyword key_version: Required. The version of your Azure Key Vault key to be used to encrypt
your data at rest.
:paramtype key_version: str
:keyword vault_uri: Required. The URI of your Azure Key Vault, also referred to as DNS name,
that contains the key to be used to encrypt your data at rest. An example URI might be
https://my-keyvault-name.vault.azure.net.
:paramtype vault_uri: str
:keyword access_credentials: Optional Azure Active Directory credentials used for accessing
your Azure Key Vault. Not required if using managed identity instead.
:paramtype access_credentials:
~azure.search.documents.indexes.models.AzureActiveDirectoryApplicationCredentials
:keyword identity: An explicit managed identity to use for this encryption key. If not
specified and the access credentials property is null, the system-assigned managed identity is
used. On update to the resource, if the explicit identity is unspecified, it remains unchanged.
If "none" is specified, the value of this property is cleared.
:paramtype identity: ~azure.search.documents.indexes.models.SearchIndexerDataIdentity
"""
super(SearchResourceEncryptionKey, self).__init__(**kwargs)
self.key_name = key_name
self.key_version = key_version
self.vault_uri = vault_uri
self.access_credentials = access_credentials
self.identity = identity
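# Illustrative sketch: describing a customer-managed key. The vault URI, key
# name, and key version are placeholders; with `access_credentials` and
# `identity` both omitted, the docstring above says the system-assigned
# managed identity is used.
def _example_build_encryption_key():
    return SearchResourceEncryptionKey(
        key_name="my-cmk",                               # hypothetical key name
        key_version="0123456789abcdef0123456789abcdef",  # hypothetical version
        vault_uri="https://my-keyvault-name.vault.azure.net",
    )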
class SemanticConfiguration(msrest.serialization.Model):
"""Defines a specific configuration to be used in the context of semantic capabilities.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the semantic configuration.
:vartype name: str
:ivar prioritized_fields: Required. Describes the title, content, and keyword fields to be used
for semantic ranking, captions, highlights, and answers. At least one of the three
sub-properties (titleField, prioritizedKeywordsFields and prioritizedContentFields) needs to be set.
:vartype prioritized_fields: ~azure.search.documents.indexes.models.PrioritizedFields
"""
_validation = {
'name': {'required': True},
'prioritized_fields': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'prioritized_fields': {'key': 'prioritizedFields', 'type': 'PrioritizedFields'},
}
def __init__(
self,
*,
name: str,
prioritized_fields: "PrioritizedFields",
**kwargs
):
"""
:keyword name: Required. The name of the semantic configuration.
:paramtype name: str
:keyword prioritized_fields: Required. Describes the title, content, and keyword fields to be
used for semantic ranking, captions, highlights, and answers. At least one of the three
sub-properties (titleField, prioritizedKeywordsFields and prioritizedContentFields) needs to be set.
:paramtype prioritized_fields: ~azure.search.documents.indexes.models.PrioritizedFields
"""
super(SemanticConfiguration, self).__init__(**kwargs)
self.name = name
self.prioritized_fields = prioritized_fields
class SemanticField(msrest.serialization.Model):
"""A field that is used as part of the semantic configuration.
:ivar field_name:
:vartype field_name: str
"""
_attribute_map = {
'field_name': {'key': 'fieldName', 'type': 'str'},
}
def __init__(
self,
*,
field_name: Optional[str] = None,
**kwargs
):
"""
:keyword field_name:
:paramtype field_name: str
"""
super(SemanticField, self).__init__(**kwargs)
self.field_name = field_name
class SemanticSettings(msrest.serialization.Model):
"""Defines parameters for a search index that influence semantic capabilities.
:ivar configurations: The semantic configurations for the index.
:vartype configurations: list[~azure.search.documents.indexes.models.SemanticConfiguration]
"""
_attribute_map = {
'configurations': {'key': 'configurations', 'type': '[SemanticConfiguration]'},
}
def __init__(
self,
*,
configurations: Optional[List["SemanticConfiguration"]] = None,
**kwargs
):
"""
:keyword configurations: The semantic configurations for the index.
:paramtype configurations: list[~azure.search.documents.indexes.models.SemanticConfiguration]
"""
super(SemanticSettings, self).__init__(**kwargs)
self.configurations = configurations
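# Illustrative sketch: wiring the semantic models together. The
# PrioritizedFields constructor and its keyword names are assumed from its
# definition elsewhere in this module; the field names are hypothetical.
def _example_build_semantic_settings():
    config = SemanticConfiguration(
        name="default-semantic-config",
        prioritized_fields=PrioritizedFields(
            title_field=SemanticField(field_name="hotelName"),
            prioritized_content_fields=[SemanticField(field_name="description")],
        ),
    )
    return SemanticSettings(configurations=[config])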
class SentimentSkill(SearchIndexerSkill):
"""Text analytics positive-negative sentiment analysis, scored as a floating point value in a range of zero to 1.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "nl", "en", "fi", "fr", "de", "el", "it", "no", "pl", "pt-PT",
"ru", "es", "sv", "tr".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.SentimentSkillLanguage
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "SentimentSkillLanguage"]] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "nl", "en", "fi", "fr", "de", "el", "it", "no", "pl", "pt-PT",
"ru", "es", "sv", "tr".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.SentimentSkillLanguage
"""
super(SentimentSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.SentimentSkill' # type: str
self.default_language_code = default_language_code
class SentimentSkillV3(SearchIndexerSkill):
"""Using the Text Analytics API, evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
:vartype default_language_code: str
:ivar include_opinion_mining: If set to true, the skill output will include information from
Text Analytics for opinion mining, namely targets (nouns or verbs) and their associated
assessment (adjective) in the text. Default is false.
:vartype include_opinion_mining: bool
:ivar model_version: The version of the model to use when calling the Text Analytics service.
It will default to the latest available when not specified. We recommend you do not specify
this value unless absolutely necessary.
:vartype model_version: str
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'include_opinion_mining': {'key': 'includeOpinionMining', 'type': 'bool'},
'model_version': {'key': 'modelVersion', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[str] = None,
include_opinion_mining: Optional[bool] = False,
model_version: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
:paramtype default_language_code: str
:keyword include_opinion_mining: If set to true, the skill output will include information from
Text Analytics for opinion mining, namely targets (nouns or verbs) and their associated
assessment (adjective) in the text. Default is false.
:paramtype include_opinion_mining: bool
:keyword model_version: The version of the model to use when calling the Text Analytics
service. It will default to the latest available when not specified. We recommend you do not
specify this value unless absolutely necessary.
:paramtype model_version: str
"""
super(SentimentSkillV3, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.V3.SentimentSkill' # type: str
self.default_language_code = default_language_code
self.include_opinion_mining = include_opinion_mining
self.model_version = model_version
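# Illustrative sketch: the V3 sentiment skill with opinion mining enabled,
# which (per the docstring above) adds target/assessment detail to the
# sentiment labels. The entry constructors are assumed from their
# definitions elsewhere in this module.
def _example_build_sentiment_skill_v3():
    return SentimentSkillV3(
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="sentiment", target_name="sentiment")],
        include_opinion_mining=True,
        default_language_code="en",
    )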
class ServiceCounters(msrest.serialization.Model):
"""Represents service-level resource counters and quotas.
All required parameters must be populated in order to send to Azure.
:ivar alias_counter: Total number of aliases.
:vartype alias_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar document_counter: Required. Total number of documents across all indexes in the service.
:vartype document_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar index_counter: Required. Total number of indexes.
:vartype index_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar indexer_counter: Required. Total number of indexers.
:vartype indexer_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar data_source_counter: Required. Total number of data sources.
:vartype data_source_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar storage_size_counter: Required. Total size of used storage in bytes.
:vartype storage_size_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar synonym_map_counter: Required. Total number of synonym maps.
:vartype synonym_map_counter: ~azure.search.documents.indexes.models.ResourceCounter
:ivar skillset_counter: Total number of skillsets.
:vartype skillset_counter: ~azure.search.documents.indexes.models.ResourceCounter
"""
_validation = {
'document_counter': {'required': True},
'index_counter': {'required': True},
'indexer_counter': {'required': True},
'data_source_counter': {'required': True},
'storage_size_counter': {'required': True},
'synonym_map_counter': {'required': True},
}
_attribute_map = {
'alias_counter': {'key': 'aliasesCount', 'type': 'ResourceCounter'},
'document_counter': {'key': 'documentCount', 'type': 'ResourceCounter'},
'index_counter': {'key': 'indexesCount', 'type': 'ResourceCounter'},
'indexer_counter': {'key': 'indexersCount', 'type': 'ResourceCounter'},
'data_source_counter': {'key': 'dataSourcesCount', 'type': 'ResourceCounter'},
'storage_size_counter': {'key': 'storageSize', 'type': 'ResourceCounter'},
'synonym_map_counter': {'key': 'synonymMaps', 'type': 'ResourceCounter'},
'skillset_counter': {'key': 'skillsetCount', 'type': 'ResourceCounter'},
}
def __init__(
self,
*,
document_counter: "ResourceCounter",
index_counter: "ResourceCounter",
indexer_counter: "ResourceCounter",
data_source_counter: "ResourceCounter",
storage_size_counter: "ResourceCounter",
synonym_map_counter: "ResourceCounter",
alias_counter: Optional["ResourceCounter"] = None,
skillset_counter: Optional["ResourceCounter"] = None,
**kwargs
):
"""
:keyword alias_counter: Total number of aliases.
:paramtype alias_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword document_counter: Required. Total number of documents across all indexes in the
service.
:paramtype document_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword index_counter: Required. Total number of indexes.
:paramtype index_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword indexer_counter: Required. Total number of indexers.
:paramtype indexer_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword data_source_counter: Required. Total number of data sources.
:paramtype data_source_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword storage_size_counter: Required. Total size of used storage in bytes.
:paramtype storage_size_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword synonym_map_counter: Required. Total number of synonym maps.
:paramtype synonym_map_counter: ~azure.search.documents.indexes.models.ResourceCounter
:keyword skillset_counter: Total number of skillsets.
:paramtype skillset_counter: ~azure.search.documents.indexes.models.ResourceCounter
"""
super(ServiceCounters, self).__init__(**kwargs)
self.alias_counter = alias_counter
self.document_counter = document_counter
self.index_counter = index_counter
self.indexer_counter = indexer_counter
self.data_source_counter = data_source_counter
self.storage_size_counter = storage_size_counter
self.synonym_map_counter = synonym_map_counter
self.skillset_counter = skillset_counter
class ServiceLimits(msrest.serialization.Model):
"""Represents various service level limits.
:ivar max_fields_per_index: The maximum allowed fields per index.
:vartype max_fields_per_index: int
:ivar max_field_nesting_depth_per_index: The maximum depth to which you can nest sub-fields in an
index, including the top-level complex field. For example, a/b/c has a nesting depth of 3.
:vartype max_field_nesting_depth_per_index: int
:ivar max_complex_collection_fields_per_index: The maximum number of fields of type
Collection(Edm.ComplexType) allowed in an index.
:vartype max_complex_collection_fields_per_index: int
:ivar max_complex_objects_in_collections_per_document: The maximum number of objects in complex
collections allowed per document.
:vartype max_complex_objects_in_collections_per_document: int
"""
_attribute_map = {
'max_fields_per_index': {'key': 'maxFieldsPerIndex', 'type': 'int'},
'max_field_nesting_depth_per_index': {'key': 'maxFieldNestingDepthPerIndex', 'type': 'int'},
'max_complex_collection_fields_per_index': {'key': 'maxComplexCollectionFieldsPerIndex', 'type': 'int'},
'max_complex_objects_in_collections_per_document': {'key': 'maxComplexObjectsInCollectionsPerDocument', 'type': 'int'},
}
def __init__(
self,
*,
max_fields_per_index: Optional[int] = None,
max_field_nesting_depth_per_index: Optional[int] = None,
max_complex_collection_fields_per_index: Optional[int] = None,
max_complex_objects_in_collections_per_document: Optional[int] = None,
**kwargs
):
"""
:keyword max_fields_per_index: The maximum allowed fields per index.
:paramtype max_fields_per_index: int
:keyword max_field_nesting_depth_per_index: The maximum depth to which you can nest sub-fields in
an index, including the top-level complex field. For example, a/b/c has a nesting depth of 3.
:paramtype max_field_nesting_depth_per_index: int
:keyword max_complex_collection_fields_per_index: The maximum number of fields of type
Collection(Edm.ComplexType) allowed in an index.
:paramtype max_complex_collection_fields_per_index: int
:keyword max_complex_objects_in_collections_per_document: The maximum number of objects in
complex collections allowed per document.
:paramtype max_complex_objects_in_collections_per_document: int
"""
super(ServiceLimits, self).__init__(**kwargs)
self.max_fields_per_index = max_fields_per_index
self.max_field_nesting_depth_per_index = max_field_nesting_depth_per_index
self.max_complex_collection_fields_per_index = max_complex_collection_fields_per_index
self.max_complex_objects_in_collections_per_document = max_complex_objects_in_collections_per_document
class ServiceStatistics(msrest.serialization.Model):
"""Response from a get service statistics request. If successful, it includes service level counters and limits.
All required parameters must be populated in order to send to Azure.
:ivar counters: Required. Service level resource counters.
:vartype counters: ~azure.search.documents.indexes.models.ServiceCounters
:ivar limits: Required. Service level general limits.
:vartype limits: ~azure.search.documents.indexes.models.ServiceLimits
"""
_validation = {
'counters': {'required': True},
'limits': {'required': True},
}
_attribute_map = {
'counters': {'key': 'counters', 'type': 'ServiceCounters'},
'limits': {'key': 'limits', 'type': 'ServiceLimits'},
}
def __init__(
self,
*,
counters: "ServiceCounters",
limits: "ServiceLimits",
**kwargs
):
"""
:keyword counters: Required. Service level resource counters.
:paramtype counters: ~azure.search.documents.indexes.models.ServiceCounters
:keyword limits: Required. Service level general limits.
:paramtype limits: ~azure.search.documents.indexes.models.ServiceLimits
"""
super(ServiceStatistics, self).__init__(**kwargs)
self.counters = counters
self.limits = limits
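# Illustrative sketch: assembling the ServiceStatistics response shape by hand. In
# practice this object is deserialized from a get-service-statistics call rather than
# built by callers; ResourceCounter(usage=..., quota=...) is assumed from earlier in
# this module, and every number below is made up.
def _example_service_statistics() -> "ServiceStatistics":
    counters = ServiceCounters(
        document_counter=ResourceCounter(usage=1200, quota=None),
        index_counter=ResourceCounter(usage=2, quota=3),
        indexer_counter=ResourceCounter(usage=1, quota=3),
        data_source_counter=ResourceCounter(usage=1, quota=3),
        storage_size_counter=ResourceCounter(usage=2048),
        synonym_map_counter=ResourceCounter(usage=0, quota=3),
    )
    limits = ServiceLimits(max_fields_per_index=1000, max_field_nesting_depth_per_index=10)
    return ServiceStatistics(counters=counters, limits=limits)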
class ShaperSkill(SearchIndexerSkill):
"""A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
"""
super(ShaperSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Util.ShaperSkill' # type: str
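# Illustrative sketch: a ShaperSkill that bundles two upstream values into a single
# complex output object. Field names are hypothetical; the input/output entry types
# are assumed from earlier in this module.
def _example_shaper_skill() -> "ShaperSkill":
    return ShaperSkill(
        context="/document",
        inputs=[
            InputFieldMappingEntry(name="title", source="/document/title"),
            InputFieldMappingEntry(name="author", source="/document/author"),
        ],
        outputs=[OutputFieldMappingEntry(name="output", target_name="bookInfo")],
    )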
class ShingleTokenFilter(TokenFilter):
"""Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar max_shingle_size: The maximum shingle size. Default and minimum value is 2.
:vartype max_shingle_size: int
:ivar min_shingle_size: The minimum shingle size. Default and minimum value is 2. Must be less
than the value of maxShingleSize.
:vartype min_shingle_size: int
:ivar output_unigrams: A value indicating whether the output stream will contain the input
tokens (unigrams) as well as shingles. Default is true.
:vartype output_unigrams: bool
:ivar output_unigrams_if_no_shingles: A value indicating whether to output unigrams for those
times when no shingles are available. This property takes precedence when outputUnigrams is set
to false. Default is false.
:vartype output_unigrams_if_no_shingles: bool
:ivar token_separator: The string to use when joining adjacent tokens to form a shingle.
Default is a single space (" ").
:vartype token_separator: str
:ivar filter_token: The string to insert for each position at which there is no token. Default
is an underscore ("_").
:vartype filter_token: str
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_shingle_size': {'minimum': 2},
'min_shingle_size': {'minimum': 2},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_shingle_size': {'key': 'maxShingleSize', 'type': 'int'},
'min_shingle_size': {'key': 'minShingleSize', 'type': 'int'},
'output_unigrams': {'key': 'outputUnigrams', 'type': 'bool'},
'output_unigrams_if_no_shingles': {'key': 'outputUnigramsIfNoShingles', 'type': 'bool'},
'token_separator': {'key': 'tokenSeparator', 'type': 'str'},
'filter_token': {'key': 'filterToken', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
max_shingle_size: Optional[int] = 2,
min_shingle_size: Optional[int] = 2,
output_unigrams: Optional[bool] = True,
output_unigrams_if_no_shingles: Optional[bool] = False,
token_separator: Optional[str] = " ",
filter_token: Optional[str] = "_",
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_shingle_size: The maximum shingle size. Default and minimum value is 2.
:paramtype max_shingle_size: int
:keyword min_shingle_size: The minimum shingle size. Default and minimum value is 2. Must be
less than the value of maxShingleSize.
:paramtype min_shingle_size: int
:keyword output_unigrams: A value indicating whether the output stream will contain the input
tokens (unigrams) as well as shingles. Default is true.
:paramtype output_unigrams: bool
:keyword output_unigrams_if_no_shingles: A value indicating whether to output unigrams for
those times when no shingles are available. This property takes precedence when outputUnigrams
is set to false. Default is false.
:paramtype output_unigrams_if_no_shingles: bool
:keyword token_separator: The string to use when joining adjacent tokens to form a shingle.
Default is a single space (" ").
:paramtype token_separator: str
:keyword filter_token: The string to insert for each position at which there is no token.
Default is an underscore ("_").
:paramtype filter_token: str
"""
super(ShingleTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.ShingleTokenFilter' # type: str
self.max_shingle_size = max_shingle_size
self.min_shingle_size = min_shingle_size
self.output_unigrams = output_unigrams
self.output_unigrams_if_no_shingles = output_unigrams_if_no_shingles
self.token_separator = token_separator
self.filter_token = filter_token
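# Illustrative sketch: a ShingleTokenFilter that emits bigrams alongside the original
# unigrams, matching the documented defaults above. The filter name is hypothetical.
def _example_shingle_token_filter() -> "ShingleTokenFilter":
    return ShingleTokenFilter(
        name="my-shingle-filter",
        min_shingle_size=2,
        max_shingle_size=2,
        output_unigrams=True,
    )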
class SkillNames(msrest.serialization.Model):
"""SkillNames.
:ivar skill_names: the names of skills to be reset.
:vartype skill_names: list[str]
"""
_attribute_map = {
'skill_names': {'key': 'skillNames', 'type': '[str]'},
}
def __init__(
self,
*,
skill_names: Optional[List[str]] = None,
**kwargs
):
"""
:keyword skill_names: The names of skills to be reset.
:paramtype skill_names: list[str]
"""
super(SkillNames, self).__init__(**kwargs)
self.skill_names = skill_names
class SnowballTokenFilter(TokenFilter):
"""A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar language: Required. The language to use. Possible values include: "armenian", "basque",
"catalan", "danish", "dutch", "english", "finnish", "french", "german", "german2", "hungarian",
"italian", "kp", "lovins", "norwegian", "porter", "portuguese", "romanian", "russian",
"spanish", "swedish", "turkish".
:vartype language: str or ~azure.search.documents.indexes.models.SnowballTokenFilterLanguage
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'language': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'language': {'key': 'language', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
language: Union[str, "SnowballTokenFilterLanguage"],
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword language: Required. The language to use. Possible values include: "armenian",
"basque", "catalan", "danish", "dutch", "english", "finnish", "french", "german", "german2",
"hungarian", "italian", "kp", "lovins", "norwegian", "porter", "portuguese", "romanian",
"russian", "spanish", "swedish", "turkish".
:paramtype language: str or ~azure.search.documents.indexes.models.SnowballTokenFilterLanguage
"""
super(SnowballTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.SnowballTokenFilter' # type: str
self.language = language
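# Illustrative sketch: a SnowballTokenFilter using the English stemmer. The language
# may be passed as a plain string or as a SnowballTokenFilterLanguage enum value (the
# enum member name is assumed from the SDK's usual conventions).
def _example_snowball_token_filter() -> "SnowballTokenFilter":
    return SnowballTokenFilter(name="my-snowball-filter", language="english")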
class SoftDeleteColumnDeletionDetectionPolicy(DataDeletionDetectionPolicy):
"""Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the data deletion detection
policy. Constant filled by server.
:vartype odata_type: str
:ivar soft_delete_column_name: The name of the column to use for soft-deletion detection.
:vartype soft_delete_column_name: str
:ivar soft_delete_marker_value: The marker value that identifies an item as deleted.
:vartype soft_delete_marker_value: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'soft_delete_column_name': {'key': 'softDeleteColumnName', 'type': 'str'},
'soft_delete_marker_value': {'key': 'softDeleteMarkerValue', 'type': 'str'},
}
def __init__(
self,
*,
soft_delete_column_name: Optional[str] = None,
soft_delete_marker_value: Optional[str] = None,
**kwargs
):
"""
:keyword soft_delete_column_name: The name of the column to use for soft-deletion detection.
:paramtype soft_delete_column_name: str
:keyword soft_delete_marker_value: The marker value that identifies an item as deleted.
:paramtype soft_delete_marker_value: str
"""
super(SoftDeleteColumnDeletionDetectionPolicy, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy' # type: str
self.soft_delete_column_name = soft_delete_column_name
self.soft_delete_marker_value = soft_delete_marker_value
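# Illustrative sketch: with this policy, rows whose IsDeleted column equals "true" are
# treated as deleted by the indexer. Column name and marker value are hypothetical.
def _example_soft_delete_policy() -> "SoftDeleteColumnDeletionDetectionPolicy":
    return SoftDeleteColumnDeletionDetectionPolicy(
        soft_delete_column_name="IsDeleted",
        soft_delete_marker_value="true",
    )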
class SplitSkill(SearchIndexerSkill):
"""A skill to split a string into chunks of text.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "de", "en", "es", "fi", "fr", "it", "ko", "pt".
:vartype default_language_code: str or
~azure.search.documents.indexes.models.SplitSkillLanguage
:ivar text_split_mode: A value indicating which split mode to perform. Possible values include:
"pages", "sentences".
:vartype text_split_mode: str or ~azure.search.documents.indexes.models.TextSplitMode
:ivar maximum_page_length: The desired maximum page length. Default is 10000.
:vartype maximum_page_length: int
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_language_code': {'key': 'defaultLanguageCode', 'type': 'str'},
'text_split_mode': {'key': 'textSplitMode', 'type': 'str'},
'maximum_page_length': {'key': 'maximumPageLength', 'type': 'int'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_language_code: Optional[Union[str, "SplitSkillLanguage"]] = None,
text_split_mode: Optional[Union[str, "TextSplitMode"]] = None,
maximum_page_length: Optional[int] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_language_code: A value indicating which language code to use. Default is en.
Possible values include: "da", "de", "en", "es", "fi", "fr", "it", "ko", "pt".
:paramtype default_language_code: str or
~azure.search.documents.indexes.models.SplitSkillLanguage
:keyword text_split_mode: A value indicating which split mode to perform. Possible values
include: "pages", "sentences".
:paramtype text_split_mode: str or ~azure.search.documents.indexes.models.TextSplitMode
:keyword maximum_page_length: The desired maximum page length. Default is 10000.
:paramtype maximum_page_length: int
"""
super(SplitSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.SplitSkill' # type: str
self.default_language_code = default_language_code
self.text_split_mode = text_split_mode
self.maximum_page_length = maximum_page_length
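# Illustrative sketch: a SplitSkill that chunks /document/content into pages of at most
# 4000 characters. The output name "textItems" is this skill's conventional output in
# the service documentation; treat it, and the other names, as assumptions here.
def _example_split_skill() -> "SplitSkill":
    return SplitSkill(
        context="/document",
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="textItems", target_name="pages")],
        text_split_mode="pages",
        maximum_page_length=4000,
    )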
class SqlIntegratedChangeTrackingPolicy(DataChangeDetectionPolicy):
"""Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the data change detection
policy. Constant filled by server.
:vartype odata_type: str
"""
_validation = {
'odata_type': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
"""
"""
super(SqlIntegratedChangeTrackingPolicy, self).__init__(**kwargs)
self.odata_type = '#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy' # type: str
class StemmerOverrideTokenFilter(TokenFilter):
"""Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar rules: Required. A list of stemming rules in the following format: "word => stem", for
example: "ran => run".
:vartype rules: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'rules': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'rules': {'key': 'rules', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
rules: List[str],
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword rules: Required. A list of stemming rules in the following format: "word => stem", for
example: "ran => run".
:paramtype rules: list[str]
"""
super(StemmerOverrideTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StemmerOverrideTokenFilter' # type: str
self.rules = rules
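# Illustrative sketch: protecting irregular forms from downstream stemmers with
# explicit "word => stem" rules, in the format documented above. The rules are examples.
def _example_stemmer_override_token_filter() -> "StemmerOverrideTokenFilter":
    return StemmerOverrideTokenFilter(
        name="my-stemmer-overrides",
        rules=["ran => run", "mice => mouse"],
    )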
class StemmerTokenFilter(TokenFilter):
"""Language specific stemming filter. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar language: Required. The language to use. Possible values include: "arabic", "armenian",
"basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "dutchKp",
"english", "lightEnglish", "minimalEnglish", "possessiveEnglish", "porter2", "lovins",
"finnish", "lightFinnish", "french", "lightFrench", "minimalFrench", "galician",
"minimalGalician", "german", "german2", "lightGerman", "minimalGerman", "greek", "hindi",
"hungarian", "lightHungarian", "indonesian", "irish", "italian", "lightItalian", "sorani",
"latvian", "norwegian", "lightNorwegian", "minimalNorwegian", "lightNynorsk", "minimalNynorsk",
"portuguese", "lightPortuguese", "minimalPortuguese", "portugueseRslp", "romanian", "russian",
"lightRussian", "spanish", "lightSpanish", "swedish", "lightSwedish", "turkish".
:vartype language: str or ~azure.search.documents.indexes.models.StemmerTokenFilterLanguage
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'language': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'language': {'key': 'language', 'type': 'str'},
}
def __init__(
self,
*,
name: str,
language: Union[str, "StemmerTokenFilterLanguage"],
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword language: Required. The language to use. Possible values include: "arabic",
"armenian", "basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch",
"dutchKp", "english", "lightEnglish", "minimalEnglish", "possessiveEnglish", "porter2",
"lovins", "finnish", "lightFinnish", "french", "lightFrench", "minimalFrench", "galician",
"minimalGalician", "german", "german2", "lightGerman", "minimalGerman", "greek", "hindi",
"hungarian", "lightHungarian", "indonesian", "irish", "italian", "lightItalian", "sorani",
"latvian", "norwegian", "lightNorwegian", "minimalNorwegian", "lightNynorsk", "minimalNynorsk",
"portuguese", "lightPortuguese", "minimalPortuguese", "portugueseRslp", "romanian", "russian",
"lightRussian", "spanish", "lightSpanish", "swedish", "lightSwedish", "turkish".
:paramtype language: str or ~azure.search.documents.indexes.models.StemmerTokenFilterLanguage
"""
super(StemmerTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StemmerTokenFilter' # type: str
self.language = language
class StopAnalyzer(LexicalAnalyzer):
"""Divides text at non-letters; Applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the analyzer. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the analyzer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar stopwords: A list of stopwords.
:vartype stopwords: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'stopwords': {'key': 'stopwords', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
stopwords: Optional[List[str]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the analyzer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword stopwords: A list of stopwords.
:paramtype stopwords: list[str]
"""
super(StopAnalyzer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StopAnalyzer' # type: str
self.stopwords = stopwords
class StopwordsTokenFilter(TokenFilter):
"""Removes stop words from a token stream. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar stopwords: The list of stopwords. This property and the stopwords list property cannot
both be set.
:vartype stopwords: list[str]
:ivar stopwords_list: A predefined list of stopwords to use. This property and the stopwords
property cannot both be set. Default is English. Possible values include: "arabic", "armenian",
"basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "english",
"finnish", "french", "galician", "german", "greek", "hindi", "hungarian", "indonesian",
"irish", "italian", "latvian", "norwegian", "persian", "portuguese", "romanian", "russian",
"sorani", "spanish", "swedish", "thai", "turkish".
:vartype stopwords_list: str or ~azure.search.documents.indexes.models.StopwordsList
:ivar ignore_case: A value indicating whether to ignore case. If true, all words are converted
to lower case first. Default is false.
:vartype ignore_case: bool
:ivar remove_trailing_stop_words: A value indicating whether to ignore the last search term if
it's a stop word. Default is true.
:vartype remove_trailing_stop_words: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'stopwords': {'key': 'stopwords', 'type': '[str]'},
'stopwords_list': {'key': 'stopwordsList', 'type': 'str'},
'ignore_case': {'key': 'ignoreCase', 'type': 'bool'},
'remove_trailing_stop_words': {'key': 'removeTrailing', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
stopwords: Optional[List[str]] = None,
stopwords_list: Optional[Union[str, "StopwordsList"]] = None,
ignore_case: Optional[bool] = False,
remove_trailing_stop_words: Optional[bool] = True,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword stopwords: The list of stopwords. This property and the stopwords list property cannot
both be set.
:paramtype stopwords: list[str]
:keyword stopwords_list: A predefined list of stopwords to use. This property and the stopwords
property cannot both be set. Default is English. Possible values include: "arabic", "armenian",
"basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "english",
"finnish", "french", "galician", "german", "greek", "hindi", "hungarian", "indonesian",
"irish", "italian", "latvian", "norwegian", "persian", "portuguese", "romanian", "russian",
"sorani", "spanish", "swedish", "thai", "turkish".
:paramtype stopwords_list: str or ~azure.search.documents.indexes.models.StopwordsList
:keyword ignore_case: A value indicating whether to ignore case. If true, all words are
converted to lower case first. Default is false.
:paramtype ignore_case: bool
:keyword remove_trailing_stop_words: A value indicating whether to ignore the last search term
if it's a stop word. Default is true.
:paramtype remove_trailing_stop_words: bool
"""
super(StopwordsTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.StopwordsTokenFilter' # type: str
self.stopwords = stopwords
self.stopwords_list = stopwords_list
self.ignore_case = ignore_case
self.remove_trailing_stop_words = remove_trailing_stop_words
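# Illustrative sketch: a StopwordsTokenFilter using the predefined English list. Note
# that stopwords and stopwords_list cannot both be set, per the docstring above.
def _example_stopwords_token_filter() -> "StopwordsTokenFilter":
    return StopwordsTokenFilter(
        name="my-stopwords-filter",
        stopwords_list="english",
        ignore_case=True,
    )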
class Suggester(msrest.serialization.Model):
"""Defines how the Suggest API should apply to a group of fields in the index.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the suggester.
:vartype name: str
:ivar search_mode: A value indicating the capabilities of the suggester. Has constant value:
"analyzingInfixMatching".
:vartype search_mode: str
:ivar source_fields: Required. The list of field names to which the suggester applies. Each
field must be searchable.
:vartype source_fields: list[str]
"""
_validation = {
'name': {'required': True},
'search_mode': {'required': True, 'constant': True},
'source_fields': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'search_mode': {'key': 'searchMode', 'type': 'str'},
'source_fields': {'key': 'sourceFields', 'type': '[str]'},
}
search_mode = "analyzingInfixMatching"
def __init__(
self,
*,
name: str,
source_fields: List[str],
**kwargs
):
"""
:keyword name: Required. The name of the suggester.
:paramtype name: str
:keyword source_fields: Required. The list of field names to which the suggester applies. Each
field must be searchable.
:paramtype source_fields: list[str]
"""
super(Suggester, self).__init__(**kwargs)
self.name = name
self.source_fields = source_fields
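# Illustrative sketch: a suggester over two searchable fields. search_mode is a class
# constant ("analyzingInfixMatching") and is not passed to the constructor. The field
# names are hypothetical.
def _example_suggester() -> "Suggester":
    return Suggester(name="sg", source_fields=["hotelName", "category"])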
class SynonymMap(msrest.serialization.Model):
"""Represents a synonym map definition.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:ivar name: Required. The name of the synonym map.
:vartype name: str
:ivar format: The format of the synonym map. Only the 'solr' format is currently supported. Has
constant value: "solr".
:vartype format: str
:ivar synonyms: Required. A series of synonym rules in the specified synonym map format. The
rules must be separated by newlines.
:vartype synonyms: str
:ivar encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your data when you
want full assurance that no one, not even Microsoft, can decrypt your data in Azure Cognitive
Search. Once you have encrypted your data, it will always remain encrypted. Azure Cognitive
Search will ignore attempts to set this property to null. You can change this property as
needed if you want to rotate your encryption key; your data will be unaffected. Encryption with
customer-managed keys is not available for free search services, and is only available for paid
services created on or after January 1, 2019.
:vartype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:ivar e_tag: The ETag of the synonym map.
:vartype e_tag: str
"""
_validation = {
'name': {'required': True},
'format': {'required': True, 'constant': True},
'synonyms': {'required': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'format': {'key': 'format', 'type': 'str'},
'synonyms': {'key': 'synonyms', 'type': 'str'},
'encryption_key': {'key': 'encryptionKey', 'type': 'SearchResourceEncryptionKey'},
'e_tag': {'key': '@odata\\.etag', 'type': 'str'},
}
format = "solr"
def __init__(
self,
*,
name: str,
synonyms: str,
encryption_key: Optional["SearchResourceEncryptionKey"] = None,
e_tag: Optional[str] = None,
**kwargs
):
"""
:keyword name: Required. The name of the synonym map.
:paramtype name: str
:keyword synonyms: Required. A series of synonym rules in the specified synonym map format. The
rules must be separated by newlines.
:paramtype synonyms: str
:keyword encryption_key: A description of an encryption key that you create in Azure Key Vault.
This key is used to provide an additional level of encryption-at-rest for your data when you
want full assurance that no one, not even Microsoft, can decrypt your data in Azure Cognitive
Search. Once you have encrypted your data, it will always remain encrypted. Azure Cognitive
Search will ignore attempts to set this property to null. You can change this property as
needed if you want to rotate your encryption key; your data will be unaffected. Encryption with
customer-managed keys is not available for free search services, and is only available for paid
services created on or after January 1, 2019.
:paramtype encryption_key: ~azure.search.documents.indexes.models.SearchResourceEncryptionKey
:keyword e_tag: The ETag of the synonym map.
:paramtype e_tag: str
"""
super(SynonymMap, self).__init__(**kwargs)
self.name = name
self.synonyms = synonyms
self.encryption_key = encryption_key
self.e_tag = e_tag
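# Illustrative sketch: a SynonymMap with two newline-separated rules in the solr
# format (an equivalence list and an explicit mapping). The name and rules are examples.
def _example_synonym_map() -> "SynonymMap":
    return SynonymMap(
        name="hotel-synonyms",
        synonyms="USA, United States, United States of America\nWashington, Wash. => WA",
    )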
class SynonymTokenFilter(TokenFilter):
"""Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar synonyms: Required. A list of synonyms, specified in one of two formats: 1. incredible,
unbelievable, fabulous => amazing - all terms on the left side of the => symbol will be replaced
with all terms on its right side; 2. incredible, unbelievable, fabulous, amazing - comma
separated list of equivalent words. Set the expand option to change how this list is
interpreted.
:vartype synonyms: list[str]
:ivar ignore_case: A value indicating whether to case-fold input for matching. Default is
false.
:vartype ignore_case: bool
:ivar expand: A value indicating whether all words in the list of synonyms (if => notation is
not used) will map to one another. If true, the list: incredible, unbelievable, fabulous,
amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible,
unbelievable, fabulous, amazing. If false, it is equivalent to: incredible, unbelievable,
fabulous, amazing => incredible. Default is true.
:vartype expand: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'synonyms': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'synonyms': {'key': 'synonyms', 'type': '[str]'},
'ignore_case': {'key': 'ignoreCase', 'type': 'bool'},
'expand': {'key': 'expand', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
synonyms: List[str],
ignore_case: Optional[bool] = False,
expand: Optional[bool] = True,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword synonyms: Required. A list of synonyms, specified in one of two formats: 1. incredible,
unbelievable, fabulous => amazing - all terms on the left side of the => symbol will be replaced
with all terms on its right side; 2. incredible, unbelievable, fabulous, amazing - comma
separated list of equivalent words. Set the expand option to change how this list is
interpreted.
:paramtype synonyms: list[str]
:keyword ignore_case: A value indicating whether to case-fold input for matching. Default is
false.
:paramtype ignore_case: bool
:keyword expand: A value indicating whether all words in the list of synonyms (if => notation
is not used) will map to one another. If true, the list: incredible, unbelievable, fabulous,
amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible,
unbelievable, fabulous, amazing. If false, it is equivalent to: incredible, unbelievable,
fabulous, amazing => incredible. Default is true.
:paramtype expand: bool
"""
super(SynonymTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.SynonymTokenFilter' # type: str
self.synonyms = synonyms
self.ignore_case = ignore_case
self.expand = expand
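# Illustrative sketch: an equivalence-list synonym filter; with expand=True every word
# in the list maps to every other, as described in the docstring above.
def _example_synonym_token_filter() -> "SynonymTokenFilter":
    return SynonymTokenFilter(
        name="my-synonym-filter",
        synonyms=["incredible, unbelievable, fabulous, amazing"],
        ignore_case=True,
        expand=True,
    )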
class TagScoringFunction(ScoringFunction):
"""Defines a function that boosts scores of documents with string values matching a given list of tags.
All required parameters must be populated in order to send to Azure.
:ivar type: Required. Indicates the type of function to use. Valid values include magnitude,
freshness, distance, and tag. The function type must be lower case. Constant filled by server.
:vartype type: str
:ivar field_name: Required. The name of the field used as input to the scoring function.
:vartype field_name: str
:ivar boost: Required. A multiplier for the raw score. Must be a positive number not equal to
1.0.
:vartype boost: float
:ivar interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:vartype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:ivar parameters: Required. Parameter values for the tag scoring function.
:vartype parameters: ~azure.search.documents.indexes.models.TagScoringParameters
"""
_validation = {
'type': {'required': True},
'field_name': {'required': True},
'boost': {'required': True},
'parameters': {'required': True},
}
_attribute_map = {
'type': {'key': 'type', 'type': 'str'},
'field_name': {'key': 'fieldName', 'type': 'str'},
'boost': {'key': 'boost', 'type': 'float'},
'interpolation': {'key': 'interpolation', 'type': 'str'},
'parameters': {'key': 'tag', 'type': 'TagScoringParameters'},
}
def __init__(
self,
*,
field_name: str,
boost: float,
parameters: "TagScoringParameters",
interpolation: Optional[Union[str, "ScoringFunctionInterpolation"]] = None,
**kwargs
):
"""
:keyword field_name: Required. The name of the field used as input to the scoring function.
:paramtype field_name: str
:keyword boost: Required. A multiplier for the raw score. Must be a positive number not equal
to 1.0.
:paramtype boost: float
:keyword interpolation: A value indicating how boosting will be interpolated across document
scores; defaults to "Linear". Possible values include: "linear", "constant", "quadratic",
"logarithmic".
:paramtype interpolation: str or
~azure.search.documents.indexes.models.ScoringFunctionInterpolation
:keyword parameters: Required. Parameter values for the tag scoring function.
:paramtype parameters: ~azure.search.documents.indexes.models.TagScoringParameters
"""
super(TagScoringFunction, self).__init__(field_name=field_name, boost=boost, interpolation=interpolation, **kwargs)
self.type = 'tag' # type: str
self.parameters = parameters
class TagScoringParameters(msrest.serialization.Model):
"""Provides parameter values to a tag scoring function.
All required parameters must be populated in order to send to Azure.
:ivar tags_parameter: Required. The name of the parameter passed in search queries to specify
the list of tags to compare against the target field.
:vartype tags_parameter: str
"""
_validation = {
'tags_parameter': {'required': True},
}
_attribute_map = {
'tags_parameter': {'key': 'tagsParameter', 'type': 'str'},
}
def __init__(
self,
*,
tags_parameter: str,
**kwargs
):
"""
:keyword tags_parameter: Required. The name of the parameter passed in search queries to
specify the list of tags to compare against the target field.
:paramtype tags_parameter: str
"""
super(TagScoringParameters, self).__init__(**kwargs)
self.tags_parameter = tags_parameter
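# Illustrative sketch: a tag scoring function that doubles the score contribution of
# documents whose "tags" field matches values supplied at query time through the
# hypothetical "mytags" parameter.
def _example_tag_scoring_function() -> "TagScoringFunction":
    return TagScoringFunction(
        field_name="tags",
        boost=2.0,
        parameters=TagScoringParameters(tags_parameter="mytags"),
    )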
class TextTranslationSkill(SearchIndexerSkill):
"""A skill to translate text from one language to another.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled by
server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skill can be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:ivar default_to_language_code: Required. The language code to translate documents into for
documents that don't specify the target ('to') language explicitly. Possible values include: "af", "ar",
"bn", "bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et", "fj",
"fil", "fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja", "sw",
"tlh", "tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl", "pt",
"pt-br", "pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv", "ty",
"ta", "te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml", "pa".
:vartype default_to_language_code: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
:ivar default_from_language_code: The language code to translate documents from for documents
that don't specify the source ('from') language explicitly. Possible values include: "af", "ar", "bn",
"bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et", "fj", "fil",
"fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja", "sw", "tlh",
"tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl", "pt", "pt-br",
"pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv", "ty", "ta",
"te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml", "pa".
:vartype default_from_language_code: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
:ivar suggested_from: The language code to translate documents from when neither the
fromLanguageCode input nor the defaultFromLanguageCode parameter are provided, and the
automatic language detection is unsuccessful. Default is en. Possible values include: "af",
"ar", "bn", "bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et",
"fj", "fil", "fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja",
"sw", "tlh", "tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl",
"pt", "pt-br", "pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv",
"ty", "ta", "te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml",
"pa".
:vartype suggested_from: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
'default_to_language_code': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'default_to_language_code': {'key': 'defaultToLanguageCode', 'type': 'str'},
'default_from_language_code': {'key': 'defaultFromLanguageCode', 'type': 'str'},
'suggested_from': {'key': 'suggestedFrom', 'type': 'str'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
default_to_language_code: Union[str, "TextTranslationSkillLanguage"],
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
default_from_language_code: Optional[Union[str, "TextTranslationSkillLanguage"]] = None,
suggested_from: Optional[Union[str, "TextTranslationSkillLanguage"]] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skill can be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
:keyword default_to_language_code: Required. The language code to translate documents into for
documents that don't specify the target ('to') language explicitly. Possible values include: "af", "ar",
"bn", "bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et", "fj",
"fil", "fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja", "sw",
"tlh", "tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl", "pt",
"pt-br", "pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv", "ty",
"ta", "te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml", "pa".
:paramtype default_to_language_code: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
:keyword default_from_language_code: The language code to translate documents from for
documents that don't specify the source ('from') language explicitly. Possible values include: "af", "ar",
"bn", "bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et", "fj",
"fil", "fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja", "sw",
"tlh", "tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl", "pt",
"pt-br", "pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv", "ty",
"ta", "te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml", "pa".
:paramtype default_from_language_code: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
:keyword suggested_from: The language code to translate documents from when neither the
fromLanguageCode input nor the defaultFromLanguageCode parameter are provided, and the
automatic language detection is unsuccessful. Default is en. Possible values include: "af",
"ar", "bn", "bs", "bg", "yue", "ca", "zh-Hans", "zh-Hant", "hr", "cs", "da", "nl", "en", "et",
"fj", "fil", "fi", "fr", "de", "el", "ht", "he", "hi", "mww", "hu", "is", "id", "it", "ja",
"sw", "tlh", "tlh-Latn", "tlh-Piqd", "ko", "lv", "lt", "mg", "ms", "mt", "nb", "fa", "pl",
"pt", "pt-br", "pt-PT", "otq", "ro", "ru", "sm", "sr-Cyrl", "sr-Latn", "sk", "sl", "es", "sv",
"ty", "ta", "te", "th", "to", "tr", "uk", "ur", "vi", "cy", "yua", "ga", "kn", "mi", "ml",
"pa".
:paramtype suggested_from: str or
~azure.search.documents.indexes.models.TextTranslationSkillLanguage
"""
super(TextTranslationSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Text.TranslationSkill' # type: str
self.default_to_language_code = default_to_language_code
self.default_from_language_code = default_from_language_code
self.suggested_from = suggested_from
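# Illustrative sketch: translating /document/content into French, falling back to
# English when automatic language detection fails. The output name "translatedText"
# is this skill's conventional output in the service documentation; treat it, and the
# other names, as assumptions here.
def _example_text_translation_skill() -> "TextTranslationSkill":
    return TextTranslationSkill(
        context="/document",
        inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
        outputs=[OutputFieldMappingEntry(name="translatedText", target_name="frenchText")],
        default_to_language_code="fr",
        suggested_from="en",
    )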
class TextWeights(msrest.serialization.Model):
"""Defines weights on index fields for which matches should boost scoring in search queries.
All required parameters must be populated in order to send to Azure.
:ivar weights: Required. The dictionary of per-field weights to boost document scoring. The
keys are field names and the values are the weights for each field.
:vartype weights: dict[str, float]
"""
_validation = {
'weights': {'required': True},
}
_attribute_map = {
'weights': {'key': 'weights', 'type': '{float}'},
}
def __init__(
self,
*,
weights: Dict[str, float],
**kwargs
):
"""
:keyword weights: Required. The dictionary of per-field weights to boost document scoring. The
keys are field names and the values are the weights for each field.
:paramtype weights: dict[str, float]
"""
super(TextWeights, self).__init__(**kwargs)
self.weights = weights
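# Illustrative sketch: boosting title matches over description matches in a scoring
# profile. The field names and weights are hypothetical.
def _example_text_weights() -> "TextWeights":
    return TextWeights(weights={"hotelName": 2.0, "description": 1.5})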
class TruncateTokenFilter(TokenFilter):
"""Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar length: The length at which terms will be truncated. Default and maximum is 300.
:vartype length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'length': {'key': 'length', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
length: Optional[int] = 300,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword length: The length at which terms will be truncated. Default and maximum is 300.
:paramtype length: int
"""
super(TruncateTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.TruncateTokenFilter' # type: str
self.length = length
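def _example_truncate_token_filter():
    # Hedged usage sketch: a filter with the hypothetical name "my_truncate"
    # that cuts every term down to at most 10 characters.
    return TruncateTokenFilter(name='my_truncate', length=10)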
class UaxUrlEmailTokenizer(LexicalTokenizer):
"""Tokenizes urls and emails as one token. This tokenizer is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the tokenizer. Constant filled
    by the server.
:vartype odata_type: str
:ivar name: Required. The name of the tokenizer. It must only contain letters, digits, spaces,
dashes or underscores, can only start and end with alphanumeric characters, and is limited to
128 characters.
:vartype name: str
:ivar max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:vartype max_token_length: int
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
'max_token_length': {'maximum': 300},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'max_token_length': {'key': 'maxTokenLength', 'type': 'int'},
}
def __init__(
self,
*,
name: str,
max_token_length: Optional[int] = 255,
**kwargs
):
"""
:keyword name: Required. The name of the tokenizer. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword max_token_length: The maximum token length. Default is 255. Tokens longer than the
maximum length are split. The maximum token length that can be used is 300 characters.
:paramtype max_token_length: int
"""
super(UaxUrlEmailTokenizer, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.UaxUrlEmailTokenizer' # type: str
self.max_token_length = max_token_length
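def _example_uax_url_email_tokenizer():
    # Hedged usage sketch: a tokenizer (hypothetical name) that keeps URLs and
    # email addresses intact as single tokens, using the default 255-character limit.
    return UaxUrlEmailTokenizer(name='my_uax_tokenizer')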
class UniqueTokenFilter(TokenFilter):
"""Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the token filter. Constant filled
    by the server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar only_on_same_position: A value indicating whether to remove duplicates only at the same
position. Default is false.
:vartype only_on_same_position: bool
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'only_on_same_position': {'key': 'onlyOnSamePosition', 'type': 'bool'},
}
def __init__(
self,
*,
name: str,
only_on_same_position: Optional[bool] = False,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword only_on_same_position: A value indicating whether to remove duplicates only at the
same position. Default is false.
:paramtype only_on_same_position: bool
"""
super(UniqueTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.UniqueTokenFilter' # type: str
self.only_on_same_position = only_on_same_position
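def _example_unique_token_filter():
    # Hedged usage sketch: drop tokens that repeat the previous token's text,
    # leaving only_on_same_position at its default of False.
    return UniqueTokenFilter(name='my_unique_filter')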
class WebApiSkill(SearchIndexerSkill):
"""A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.
All required parameters must be populated in order to send to Azure.
    :ivar odata_type: Required. Identifies the concrete type of the skill. Constant filled
    by the server.
:vartype odata_type: str
:ivar name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:vartype name: str
:ivar description: The description of the skill which describes the inputs, outputs, and usage
of the skill.
:vartype description: str
:ivar context: Represents the level at which operations take place, such as the document root
or document content (for example, /document or /document/content). The default is /document.
:vartype context: str
:ivar inputs: Required. Inputs of the skills could be a column in the source data set, or the
output of an upstream skill.
:vartype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:ivar outputs: Required. The output of a skill is either a field in a search index, or a value
that can be consumed as an input by another skill.
:vartype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
    :ivar uri: Required. The URL for the Web API.
    :vartype uri: str
    :ivar http_headers: The headers required to make the HTTP request.
    :vartype http_headers: dict[str, str]
    :ivar http_method: The method for the HTTP request.
    :vartype http_method: str
:ivar timeout: The desired timeout for the request. Default is 30 seconds.
:vartype timeout: ~datetime.timedelta
    :ivar batch_size: The desired batch size, which indicates the number of documents sent in
    each Web API request.
:vartype batch_size: int
:ivar degree_of_parallelism: If set, the number of parallel calls that can be made to the Web
API.
:vartype degree_of_parallelism: int
"""
_validation = {
'odata_type': {'required': True},
'inputs': {'required': True},
'outputs': {'required': True},
'uri': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'description': {'key': 'description', 'type': 'str'},
'context': {'key': 'context', 'type': 'str'},
'inputs': {'key': 'inputs', 'type': '[InputFieldMappingEntry]'},
'outputs': {'key': 'outputs', 'type': '[OutputFieldMappingEntry]'},
'uri': {'key': 'uri', 'type': 'str'},
'http_headers': {'key': 'httpHeaders', 'type': '{str}'},
'http_method': {'key': 'httpMethod', 'type': 'str'},
'timeout': {'key': 'timeout', 'type': 'duration'},
'batch_size': {'key': 'batchSize', 'type': 'int'},
'degree_of_parallelism': {'key': 'degreeOfParallelism', 'type': 'int'},
}
def __init__(
self,
*,
inputs: List["InputFieldMappingEntry"],
outputs: List["OutputFieldMappingEntry"],
uri: str,
name: Optional[str] = None,
description: Optional[str] = None,
context: Optional[str] = None,
http_headers: Optional[Dict[str, str]] = None,
http_method: Optional[str] = None,
timeout: Optional[datetime.timedelta] = None,
batch_size: Optional[int] = None,
degree_of_parallelism: Optional[int] = None,
**kwargs
):
"""
:keyword name: The name of the skill which uniquely identifies it within the skillset. A skill
with no name defined will be given a default name of its 1-based index in the skills array,
prefixed with the character '#'.
:paramtype name: str
:keyword description: The description of the skill which describes the inputs, outputs, and
usage of the skill.
:paramtype description: str
:keyword context: Represents the level at which operations take place, such as the document
root or document content (for example, /document or /document/content). The default is
/document.
:paramtype context: str
:keyword inputs: Required. Inputs of the skills could be a column in the source data set, or
the output of an upstream skill.
:paramtype inputs: list[~azure.search.documents.indexes.models.InputFieldMappingEntry]
:keyword outputs: Required. The output of a skill is either a field in a search index, or a
value that can be consumed as an input by another skill.
:paramtype outputs: list[~azure.search.documents.indexes.models.OutputFieldMappingEntry]
        :keyword uri: Required. The URL for the Web API.
        :paramtype uri: str
        :keyword http_headers: The headers required to make the HTTP request.
        :paramtype http_headers: dict[str, str]
        :keyword http_method: The method for the HTTP request.
        :paramtype http_method: str
:keyword timeout: The desired timeout for the request. Default is 30 seconds.
:paramtype timeout: ~datetime.timedelta
        :keyword batch_size: The desired batch size, which indicates the number of documents
        sent in each Web API request.
:paramtype batch_size: int
:keyword degree_of_parallelism: If set, the number of parallel calls that can be made to the
Web API.
:paramtype degree_of_parallelism: int
"""
super(WebApiSkill, self).__init__(name=name, description=description, context=context, inputs=inputs, outputs=outputs, **kwargs)
self.odata_type = '#Microsoft.Skills.Custom.WebApiSkill' # type: str
self.uri = uri
self.http_headers = http_headers
self.http_method = http_method
self.timeout = timeout
self.batch_size = batch_size
self.degree_of_parallelism = degree_of_parallelism
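def _example_web_api_skill():
    # Hedged usage sketch: a custom skill that posts each document's content to
    # a placeholder endpoint; the URI and field names are illustrative only.
    return WebApiSkill(
        uri='https://example.com/api/enrich',
        http_method='POST',
        inputs=[InputFieldMappingEntry(name='text', source='/document/content')],
        outputs=[OutputFieldMappingEntry(name='enriched', target_name='enriched')])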
class WordDelimiterTokenFilter(TokenFilter):
"""Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.
All required parameters must be populated in order to send to Azure.
:ivar odata_type: Required. Identifies the concrete type of the token filter.Constant filled by
server.
:vartype odata_type: str
:ivar name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:vartype name: str
:ivar generate_word_parts: A value indicating whether to generate part words. If set, causes
parts of words to be generated; for example "AzureSearch" becomes "Azure" "Search". Default is
true.
:vartype generate_word_parts: bool
:ivar generate_number_parts: A value indicating whether to generate number subwords. Default is
true.
:vartype generate_number_parts: bool
:ivar catenate_words: A value indicating whether maximum runs of word parts will be catenated.
For example, if this is set to true, "Azure-Search" becomes "AzureSearch". Default is false.
:vartype catenate_words: bool
:ivar catenate_numbers: A value indicating whether maximum runs of number parts will be
catenated. For example, if this is set to true, "1-2" becomes "12". Default is false.
:vartype catenate_numbers: bool
:ivar catenate_all: A value indicating whether all subword parts will be catenated. For
example, if this is set to true, "Azure-Search-1" becomes "AzureSearch1". Default is false.
:vartype catenate_all: bool
:ivar split_on_case_change: A value indicating whether to split words on caseChange. For
example, if this is set to true, "AzureSearch" becomes "Azure" "Search". Default is true.
:vartype split_on_case_change: bool
:ivar preserve_original: A value indicating whether original words will be preserved and added
to the subword list. Default is false.
:vartype preserve_original: bool
:ivar split_on_numerics: A value indicating whether to split on numbers. For example, if this
is set to true, "Azure1Search" becomes "Azure" "1" "Search". Default is true.
:vartype split_on_numerics: bool
:ivar stem_english_possessive: A value indicating whether to remove trailing "'s" for each
subword. Default is true.
:vartype stem_english_possessive: bool
:ivar protected_words: A list of tokens to protect from being delimited.
:vartype protected_words: list[str]
"""
_validation = {
'odata_type': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'odata_type': {'key': '@odata\\.type', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'generate_word_parts': {'key': 'generateWordParts', 'type': 'bool'},
'generate_number_parts': {'key': 'generateNumberParts', 'type': 'bool'},
'catenate_words': {'key': 'catenateWords', 'type': 'bool'},
'catenate_numbers': {'key': 'catenateNumbers', 'type': 'bool'},
'catenate_all': {'key': 'catenateAll', 'type': 'bool'},
'split_on_case_change': {'key': 'splitOnCaseChange', 'type': 'bool'},
'preserve_original': {'key': 'preserveOriginal', 'type': 'bool'},
'split_on_numerics': {'key': 'splitOnNumerics', 'type': 'bool'},
'stem_english_possessive': {'key': 'stemEnglishPossessive', 'type': 'bool'},
'protected_words': {'key': 'protectedWords', 'type': '[str]'},
}
def __init__(
self,
*,
name: str,
generate_word_parts: Optional[bool] = True,
generate_number_parts: Optional[bool] = True,
catenate_words: Optional[bool] = False,
catenate_numbers: Optional[bool] = False,
catenate_all: Optional[bool] = False,
split_on_case_change: Optional[bool] = True,
preserve_original: Optional[bool] = False,
split_on_numerics: Optional[bool] = True,
stem_english_possessive: Optional[bool] = True,
protected_words: Optional[List[str]] = None,
**kwargs
):
"""
:keyword name: Required. The name of the token filter. It must only contain letters, digits,
spaces, dashes or underscores, can only start and end with alphanumeric characters, and is
limited to 128 characters.
:paramtype name: str
:keyword generate_word_parts: A value indicating whether to generate part words. If set, causes
parts of words to be generated; for example "AzureSearch" becomes "Azure" "Search". Default is
true.
:paramtype generate_word_parts: bool
:keyword generate_number_parts: A value indicating whether to generate number subwords. Default
is true.
:paramtype generate_number_parts: bool
:keyword catenate_words: A value indicating whether maximum runs of word parts will be
catenated. For example, if this is set to true, "Azure-Search" becomes "AzureSearch". Default
is false.
:paramtype catenate_words: bool
:keyword catenate_numbers: A value indicating whether maximum runs of number parts will be
catenated. For example, if this is set to true, "1-2" becomes "12". Default is false.
:paramtype catenate_numbers: bool
:keyword catenate_all: A value indicating whether all subword parts will be catenated. For
example, if this is set to true, "Azure-Search-1" becomes "AzureSearch1". Default is false.
:paramtype catenate_all: bool
:keyword split_on_case_change: A value indicating whether to split words on caseChange. For
example, if this is set to true, "AzureSearch" becomes "Azure" "Search". Default is true.
:paramtype split_on_case_change: bool
:keyword preserve_original: A value indicating whether original words will be preserved and
added to the subword list. Default is false.
:paramtype preserve_original: bool
:keyword split_on_numerics: A value indicating whether to split on numbers. For example, if
this is set to true, "Azure1Search" becomes "Azure" "1" "Search". Default is true.
:paramtype split_on_numerics: bool
:keyword stem_english_possessive: A value indicating whether to remove trailing "'s" for each
subword. Default is true.
:paramtype stem_english_possessive: bool
:keyword protected_words: A list of tokens to protect from being delimited.
:paramtype protected_words: list[str]
"""
super(WordDelimiterTokenFilter, self).__init__(name=name, **kwargs)
self.odata_type = '#Microsoft.Azure.Search.WordDelimiterTokenFilter' # type: str
self.generate_word_parts = generate_word_parts
self.generate_number_parts = generate_number_parts
self.catenate_words = catenate_words
self.catenate_numbers = catenate_numbers
self.catenate_all = catenate_all
self.split_on_case_change = split_on_case_change
self.preserve_original = preserve_original
self.split_on_numerics = split_on_numerics
self.stem_english_possessive = stem_english_possessive
self.protected_words = protected_words
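def _example_word_delimiter_token_filter():
    # Hedged usage sketch: split on case changes, rejoin hyphenated word runs,
    # and protect one hypothetical term from being delimited.
    return WordDelimiterTokenFilter(name='my_word_delimiter', catenate_words=True, protected_words=['Wi-Fi'])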
| 49.102679 | 1,844 | 0.677657 | 58,463 | 483,956 | 5.505106 | 0.033149 | 0.013159 | 0.021439 | 0.028943 | 0.818397 | 0.794202 | 0.771959 | 0.749929 | 0.727801 | 0.709432 | 0 | 0.00296 | 0.222438 | 483,956 | 9,855 | 1,845 | 49.107661 | 0.852314 | 0.612202 | 0 | 0.594153 | 0 | 0 | 0.271104 | 0.095813 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040032 | false | 0 | 0.001317 | 0 | 0.161707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9c32c2baff25fb432da8631734f325570aa12a3b | 15,100 | py | Python | models.py | soso030/fast-neural-style-keras | 6dd2737ef8615845464e04c7a79f41e85c5b8423 | [
"MIT"
] | 24 | 2018-11-16T21:36:15.000Z | 2022-02-19T18:06:22.000Z | models.py | soso030/fast-neural-style-keras | 6dd2737ef8615845464e04c7a79f41e85c5b8423 | [
"MIT"
] | 2 | 2019-05-21T08:05:48.000Z | 2020-04-21T00:14:32.000Z | models.py | soso030/fast-neural-style-keras | 6dd2737ef8615845464e04c7a79f41e85c5b8423 | [
"MIT"
] | 9 | 2018-11-27T15:54:48.000Z | 2020-08-30T15:00:17.000Z | from keras import layers
from keras.applications import vgg16
from keras.models import Model
from utils import get_style_loss, get_content_loss, get_tv_loss, \
residual_block, OutputScale, InputReflect, AverageAddTwo
def get_training_model(width, height, bs=1, bi_style=False):
input_o = layers.Input(shape=(height, width, 3), dtype='float32', name='input_o')
c1 = layers.Conv2D(32, (9, 9), strides=1, padding='same', name='conv_1')(input_o)
c1 = layers.BatchNormalization(name='normal_1')(c1)
c1 = layers.Activation('relu', name='relu_1')(c1)
c2 = layers.Conv2D(64, (3, 3), strides=2, padding='same', name='conv_2')(c1)
c2 = layers.BatchNormalization(name='normal_2')(c2)
c2 = layers.Activation('relu', name='relu_2')(c2)
c3 = layers.Conv2D(128, (3, 3), strides=2, padding='same', name='conv_3')(c2)
c3 = layers.BatchNormalization(name='normal_3')(c3)
c3 = layers.Activation('relu', name='relu_3')(c3)
r1 = residual_block(c3, 1)
r2 = residual_block(r1, 2)
r3 = residual_block(r2, 3)
r4 = residual_block(r3, 4)
r5 = residual_block(r4, 5)
d1 = layers.Conv2DTranspose(64, (3, 3), strides=2, padding='same', name='conv_4')(r5)
d1 = layers.BatchNormalization(name='normal_4')(d1)
d1 = layers.Activation('relu', name='relu_4')(d1)
d2 = layers.Conv2DTranspose(32, (3, 3), strides=2, padding='same', name='conv_5')(d1)
d2 = layers.BatchNormalization(name='normal_5')(d2)
d2 = layers.Activation('relu', name='relu_5')(d2)
c4 = layers.Conv2D(3, (9, 9), strides=1, padding='same', name='conv_6')(d2)
c4 = layers.BatchNormalization(name='normal_6')(c4)
c4 = layers.Activation('tanh', name='tanh_1')(c4)
c4 = OutputScale(name='output')(c4)
content_activation = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation1 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
if bi_style:
style_activation1_2 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2_2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3_2 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4_2 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
total_variation_loss = layers.Lambda(get_tv_loss, output_shape=(1,), name='tv',
arguments={'width': width, 'height': height})([c4])
# Block 1
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(c4)
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
style_loss1 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1', arguments={'batch_size': bs})([x, style_activation1])
if bi_style:
style_loss1_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1_2', arguments={'batch_size': bs})([x, style_activation1_2])
style_loss1 = AverageAddTwo(name='style1_out')([style_loss1, style_loss1_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
content_loss = layers.Lambda(get_content_loss, output_shape=(1,), name='content')([x, content_activation])
style_loss2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2', arguments={'batch_size': bs})([x, style_activation2])
if bi_style:
style_loss2_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2_2', arguments={'batch_size': bs})([x, style_activation2_2])
style_loss2 = AverageAddTwo(name='style2_out')([style_loss2, style_loss2_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
style_loss3 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3', arguments={'batch_size': bs})([x, style_activation3])
if bi_style:
style_loss3_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3_2', arguments={'batch_size': bs})([x, style_activation3_2])
style_loss3 = AverageAddTwo(name='style3_out')([style_loss3, style_loss3_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
style_loss4 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4', arguments={'batch_size': bs})([x, style_activation4])
if bi_style:
style_loss4_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4_2', arguments={'batch_size': bs})([x, style_activation4_2])
style_loss4 = AverageAddTwo(name='style4_out')([style_loss4, style_loss4_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if bi_style:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3, style_activation4,
style_activation1_2, style_activation2_2, style_activation3_2, style_activation4_2],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, c4])
else:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3, style_activation4],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, c4])
model_layers = {layer.name: layer for layer in model.layers}
original_vgg = vgg16.VGG16(weights='imagenet', include_top=False)
original_vgg_layers = {layer.name: layer for layer in original_vgg.layers}
    # load the pretrained ImageNet weights into the shared VGG16 loss-network layers and freeze them
for layer in original_vgg.layers:
if layer.name in model_layers:
model_layers[layer.name].set_weights(original_vgg_layers[layer.name].get_weights())
model_layers[layer.name].trainable = False
print("training model built successfully!")
return model
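# Hedged usage sketch (kept commented out because constructing the model pulls
# down the VGG16 ImageNet weights): a 256x256 training model with batch size 4.
# training_model = get_training_model(256, 256, bs=4, bi_style=False)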
def get_evaluate_model(width, height):
input_o = layers.Input(shape=(height, width, 3), dtype='float32', name='input_o')
c1 = layers.Conv2D(32, (9, 9), strides=1, padding='same', name='conv_1')(input_o)
c1 = layers.BatchNormalization(name='normal_1')(c1)
c1 = layers.Activation('relu', name='relu_1')(c1)
c2 = layers.Conv2D(64, (3, 3), strides=2, padding='same', name='conv_2')(c1)
c2 = layers.BatchNormalization(name='normal_2')(c2)
c2 = layers.Activation('relu', name='relu_2')(c2)
c3 = layers.Conv2D(128, (3, 3), strides=2, padding='same', name='conv_3')(c2)
c3 = layers.BatchNormalization(name='normal_3')(c3)
c3 = layers.Activation('relu', name='relu_3')(c3)
r1 = residual_block(c3, 1)
r2 = residual_block(r1, 2)
r3 = residual_block(r2, 3)
r4 = residual_block(r3, 4)
r5 = residual_block(r4, 5)
d1 = layers.Conv2DTranspose(64, (3, 3), strides=2, padding='same', name='conv_4')(r5)
d1 = layers.BatchNormalization(name='normal_4')(d1)
d1 = layers.Activation('relu', name='relu_4')(d1)
d2 = layers.Conv2DTranspose(32, (3, 3), strides=2, padding='same', name='conv_5')(d1)
d2 = layers.BatchNormalization(name='normal_5')(d2)
d2 = layers.Activation('relu', name='relu_5')(d2)
c4 = layers.Conv2D(3, (9, 9), strides=1, padding='same', name='conv_6')(d2)
c4 = layers.BatchNormalization(name='normal_6')(c4)
c4 = layers.Activation('tanh', name='tanh_1')(c4)
c4 = OutputScale(name='output')(c4)
model = Model([input_o], c4)
print("evaluate model built successfully!")
return model
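# Hedged usage sketch (the weights path and image array are placeholders):
# import numpy as np
# transform_net = get_evaluate_model(256, 256)
# transform_net.load_weights('pretrained/some_style_weights.h5')
# stylized = transform_net.predict(np.expand_dims(content_image, axis=0))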
def get_temp_view_model(width, height, bs=1, bi_style=False):
input_o = layers.Input(shape=(height, width, 3), dtype='float32')
y = InputReflect(width, height, name='output')(input_o)
total_variation_loss = layers.Lambda(get_tv_loss, output_shape=(1,), name='tv',
arguments={'width': width, 'height': height})([y])
content_activation = layers.Input(shape=(height//2, width//2, 128), dtype='float32')
style_activation1 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2 = layers.Input(shape=(height//2, width//2, 128), dtype='float32')
style_activation3 = layers.Input(shape=(height//4, width//4, 256), dtype='float32')
style_activation4 = layers.Input(shape=(height//8, width//8, 512), dtype='float32')
if bi_style:
style_activation1_2 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2_2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3_2 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4_2 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
# Block 1
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(y)
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
style_loss1 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1', arguments={'batch_size': bs})([x, style_activation1])
if bi_style:
style_loss1_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1_2', arguments={'batch_size': bs})([x, style_activation1_2])
style_loss1 = AverageAddTwo(name='style1_out')([style_loss1, style_loss1_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
content_loss = layers.Lambda(get_content_loss, output_shape=(1,), name='content')([x, content_activation])
style_loss2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2', arguments={'batch_size': bs})([x, style_activation2])
if bi_style:
style_loss2_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2_2', arguments={'batch_size': bs})([x, style_activation2_2])
style_loss2 = AverageAddTwo(name='style2_out')([style_loss2, style_loss2_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
style_loss3 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3', arguments={'batch_size': bs})([x, style_activation3])
if bi_style:
style_loss3_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3_2', arguments={'batch_size': bs})([x, style_activation3_2])
style_loss3 = AverageAddTwo(name='style3_out')([style_loss3, style_loss3_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
style_loss4 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4', arguments={'batch_size': bs})([x, style_activation4])
if bi_style:
style_loss4_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4_2', arguments={'batch_size': bs})([x, style_activation4_2])
style_loss4 = AverageAddTwo(name='style4_out')([style_loss4, style_loss4_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if bi_style:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3,
style_activation4,
style_activation1_2, style_activation2_2, style_activation3_2, style_activation4_2],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, y])
else:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3,
style_activation4],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, y])
model_layers = {layer.name: layer for layer in model.layers}
original_vgg = vgg16.VGG16(weights='imagenet', include_top=False)
original_vgg_layers = {layer.name: layer for layer in original_vgg.layers}
    # load the pretrained ImageNet weights into the shared VGG16 loss-network layers and freeze them
for layer in original_vgg.layers:
if layer.name in model_layers:
model_layers[layer.name].set_weights(original_vgg_layers[layer.name].get_weights())
model_layers[layer.name].trainable = False
print("temp_view model built successfully!")
return model
| 56.343284 | 118 | 0.647815 | 2,051 | 15,100 | 4.562652 | 0.059971 | 0.044668 | 0.06091 | 0.044454 | 0.966125 | 0.962599 | 0.962599 | 0.954264 | 0.954264 | 0.954264 | 0 | 0.067739 | 0.188543 | 15,100 | 267 | 119 | 56.554307 | 0.695993 | 0.008146 | 0 | 0.895735 | 0 | 0 | 0.110666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014218 | false | 0 | 0.018957 | 0 | 0.047393 | 0.014218 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9c6230338f6af4a4aa174feffbf5380343099687 | 13,481 | gyp | Python | chrome/chrome_nibs.gyp | kjthegod/chromium | cf940f7f418436b77e15b1ea23e6fa100ca1c91a | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 1 | 2019-11-28T10:46:52.000Z | 2019-11-28T10:46:52.000Z | chrome/chrome_nibs.gyp | kjthegod/chromium | cf940f7f418436b77e15b1ea23e6fa100ca1c91a | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | chrome/chrome_nibs.gyp | kjthegod/chromium | cf940f7f418436b77e15b1ea23e6fa100ca1c91a | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2015-03-27T11:15:39.000Z | 2016-08-17T14:19:56.000Z | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# This gyp file creates a fake target that is used to generate a minimal Xcode
# project, useful for editing XIB files.
#
# The sole target is called "chrome_nibs" and its sources are the minimum
# dependency set for all of the classes referred to by XIB files. If you are
# editing or adding a new XIB file, ensure that any classes to which you refer
# in the XIB are listed (both header and implementation) here so that Xcode can
# connect them.
#
# This target DOES NOT BUILD. Attempting to do so will generate lots of errors.
# Only use this target for editing XIBs.
#
# For more information, see
# <http://dev.chromium.org/developers/design-documents/mac-xib-files>.
{
'variables': {
'chromium_code': 1,
},
'includes': [
'chrome_nibs.gypi',
],
'target_defaults': {
'include_dirs': [
'..',
],
},
'targets': [
{
'target_name': 'chrome_nibs',
'type': 'executable',
'mac_bundle': 1,
'dependencies': [
'../third_party/google_toolbox_for_mac/google_toolbox_for_mac.gyp:google_toolbox_for_mac',
],
'sources': [
'../ui/base/cocoa/base_view.h',
'../ui/base/cocoa/base_view.mm',
'../ui/base/cocoa/controls/hyperlink_button_cell.h',
'../ui/base/cocoa/controls/hyperlink_button_cell.mm',
'../ui/base/cocoa/hover_button.h',
'../ui/base/cocoa/hover_button.mm',
'../ui/base/cocoa/hover_image_button.h',
'../ui/base/cocoa/hover_image_button.mm',
'../ui/base/cocoa/menu_controller.h',
'../ui/base/cocoa/menu_controller.mm',
'../ui/base/cocoa/nsview_additions.h',
'../ui/base/cocoa/nsview_additions.mm',
'browser/app_controller_mac.h',
'browser/app_controller_mac.mm',
'browser/ui/cocoa/animatable_view.h',
'browser/ui/cocoa/animatable_view.mm',
'browser/ui/cocoa/background_gradient_view.h',
'browser/ui/cocoa/background_gradient_view.mm',
'browser/ui/cocoa/base_bubble_controller.h',
'browser/ui/cocoa/base_bubble_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_all_tabs_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_all_tabs_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_view.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_view.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_window.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_folder_window.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_toolbar_view.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_toolbar_view.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_unittest_helper.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_unittest_helper.mm',
'browser/ui/cocoa/bookmarks/bookmark_bar_view.h',
'browser/ui/cocoa/bookmarks/bookmark_bar_view.mm',
'browser/ui/cocoa/bookmarks/bookmark_bubble_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_bubble_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_button.h',
'browser/ui/cocoa/bookmarks/bookmark_button.mm',
'browser/ui/cocoa/bookmarks/bookmark_button_cell.h',
'browser/ui/cocoa/bookmarks/bookmark_button_cell.mm',
'browser/ui/cocoa/bookmarks/bookmark_editor_base_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_editor_base_controller.mm',
'browser/ui/cocoa/bookmarks/bookmark_name_folder_controller.h',
'browser/ui/cocoa/bookmarks/bookmark_name_folder_controller.mm',
'browser/ui/cocoa/browser/avatar_menu_bubble_controller.h',
'browser/ui/cocoa/browser/avatar_menu_bubble_controller.mm',
'browser/ui/cocoa/browser_window_controller.h',
'browser/ui/cocoa/browser_window_controller.mm',
'browser/ui/cocoa/browser_window_controller_private.h',
'browser/ui/cocoa/browser_window_controller_private.mm',
'browser/ui/cocoa/chrome_browser_window.h',
'browser/ui/cocoa/chrome_browser_window.mm',
'browser/ui/cocoa/chrome_event_processing_window.h',
'browser/ui/cocoa/chrome_event_processing_window.mm',
'browser/ui/cocoa/clickhold_button_cell.h',
'browser/ui/cocoa/clickhold_button_cell.mm',
'browser/ui/cocoa/content_settings/collected_cookies_mac.h',
'browser/ui/cocoa/content_settings/collected_cookies_mac.mm',
'browser/ui/cocoa/content_settings/content_setting_bubble_cocoa.h',
'browser/ui/cocoa/content_settings/content_setting_bubble_cocoa.mm',
'browser/ui/cocoa/content_settings/cookie_details_view_controller.h',
'browser/ui/cocoa/content_settings/cookie_details_view_controller.mm',
'browser/ui/cocoa/custom_frame_view.h',
'browser/ui/cocoa/custom_frame_view.mm',
'browser/ui/cocoa/download/download_item_button.h',
'browser/ui/cocoa/download/download_item_button.mm',
'browser/ui/cocoa/download/download_item_cell.h',
'browser/ui/cocoa/download/download_item_cell.mm',
'browser/ui/cocoa/download/download_item_controller.h',
'browser/ui/cocoa/download/download_item_controller.mm',
'browser/ui/cocoa/download/download_shelf_controller.h',
'browser/ui/cocoa/download/download_shelf_controller.mm',
'browser/ui/cocoa/download/download_shelf_view.h',
'browser/ui/cocoa/download/download_shelf_view.mm',
'browser/ui/cocoa/download/download_show_all_button.h',
'browser/ui/cocoa/download/download_show_all_button.mm',
'browser/ui/cocoa/download/download_show_all_cell.h',
'browser/ui/cocoa/download/download_show_all_cell.mm',
'browser/ui/cocoa/draggable_button.h',
'browser/ui/cocoa/draggable_button.mm',
'browser/ui/cocoa/browser/edit_search_engine_cocoa_controller.h',
'browser/ui/cocoa/browser/edit_search_engine_cocoa_controller.mm',
'browser/ui/cocoa/constrained_window/constrained_window_button.h',
'browser/ui/cocoa/constrained_window/constrained_window_button.mm',
'browser/ui/cocoa/constrained_window/constrained_window_custom_window.h',
'browser/ui/cocoa/constrained_window/constrained_window_custom_window.mm',
'browser/ui/cocoa/exclusive_access_bubble_window_controller.h',
'browser/ui/cocoa/exclusive_access_bubble_window_controller.mm',
'browser/ui/cocoa/exclusive_access_bubble_view.h',
'browser/ui/cocoa/exclusive_access_bubble_view.mm',
'browser/ui/cocoa/extensions/browser_actions_container_view.h',
'browser/ui/cocoa/extensions/browser_actions_container_view.mm',
'browser/ui/cocoa/extensions/device_permissions_view_controller.h',
'browser/ui/cocoa/extensions/device_permissions_view_controller.mm',
'browser/ui/cocoa/extensions/extension_install_dialog_controller.h',
'browser/ui/cocoa/extensions/extension_install_dialog_controller.mm',
'browser/ui/cocoa/extensions/extension_install_view_controller.h',
'browser/ui/cocoa/extensions/extension_install_view_controller.mm',
'browser/ui/cocoa/extensions/extension_installed_bubble_controller.h',
'browser/ui/cocoa/extensions/extension_installed_bubble_controller.mm',
'browser/ui/cocoa/fast_resize_view.h',
'browser/ui/cocoa/fast_resize_view.mm',
'browser/ui/cocoa/find_bar/find_bar_cocoa_controller.h',
'browser/ui/cocoa/find_bar/find_bar_cocoa_controller.mm',
'browser/ui/cocoa/find_bar/find_bar_text_field.h',
'browser/ui/cocoa/find_bar/find_bar_text_field.mm',
'browser/ui/cocoa/find_bar/find_bar_text_field_cell.h',
'browser/ui/cocoa/find_bar/find_bar_text_field_cell.mm',
'browser/ui/cocoa/find_bar/find_bar_view.h',
'browser/ui/cocoa/find_bar/find_bar_view.mm',
'browser/ui/cocoa/first_run_bubble_controller.h',
'browser/ui/cocoa/first_run_bubble_controller.mm',
'browser/ui/cocoa/first_run_dialog.h',
'browser/ui/cocoa/first_run_dialog.mm',
'browser/ui/cocoa/framed_browser_window.h',
'browser/ui/cocoa/framed_browser_window.mm',
'browser/ui/cocoa/global_error_bubble_controller.h',
'browser/ui/cocoa/global_error_bubble_controller.mm',
'browser/ui/cocoa/gradient_button_cell.h',
'browser/ui/cocoa/gradient_button_cell.mm',
'browser/ui/cocoa/hover_close_button.h',
'browser/ui/cocoa/hover_close_button.mm',
'browser/ui/cocoa/hung_renderer_controller.h',
'browser/ui/cocoa/hung_renderer_controller.mm',
'browser/ui/cocoa/image_button_cell.h',
'browser/ui/cocoa/image_button_cell.mm',
'browser/ui/cocoa/info_bubble_view.h',
'browser/ui/cocoa/info_bubble_view.mm',
'browser/ui/cocoa/info_bubble_window.h',
'browser/ui/cocoa/info_bubble_window.mm',
'browser/ui/cocoa/infobars/infobar_controller.h',
'browser/ui/cocoa/infobars/infobar_controller.mm',
'browser/ui/cocoa/infobars/infobar_gradient_view.h',
'browser/ui/cocoa/infobars/infobar_gradient_view.mm',
'browser/ui/cocoa/location_bar/autocomplete_text_field.h',
'browser/ui/cocoa/location_bar/autocomplete_text_field.mm',
'browser/ui/cocoa/location_bar/autocomplete_text_field_cell.h',
'browser/ui/cocoa/location_bar/autocomplete_text_field_cell.mm',
'browser/ui/cocoa/login_prompt_cocoa.h',
'browser/ui/cocoa/login_prompt_cocoa.mm',
'browser/ui/cocoa/menu_button.h',
'browser/ui/cocoa/menu_button.mm',
'browser/ui/cocoa/multi_key_equivalent_button.h',
'browser/ui/cocoa/multi_key_equivalent_button.mm',
'browser/ui/cocoa/new_tab_button.h',
'browser/ui/cocoa/new_tab_button.mm',
'browser/ui/cocoa/nsmenuitem_additions.h',
'browser/ui/cocoa/nsmenuitem_additions.mm',
'browser/ui/cocoa/one_click_signin_view_controller.h',
'browser/ui/cocoa/one_click_signin_view_controller.mm',
'browser/ui/cocoa/screen_capture_notification_ui_cocoa.h',
'browser/ui/cocoa/screen_capture_notification_ui_cocoa.mm',
'browser/ui/cocoa/status_bubble_mac.h',
'browser/ui/cocoa/status_bubble_mac.mm',
'browser/ui/cocoa/styled_text_field.h',
'browser/ui/cocoa/styled_text_field.mm',
'browser/ui/cocoa/styled_text_field_cell.h',
'browser/ui/cocoa/styled_text_field_cell.mm',
'browser/ui/cocoa/tab_contents/overlayable_contents_controller.h',
'browser/ui/cocoa/tab_contents/overlayable_contents_controller.mm',
'browser/ui/cocoa/tab_contents/sad_tab_controller.h',
'browser/ui/cocoa/tab_contents/sad_tab_controller.mm',
'browser/ui/cocoa/tab_contents/sad_tab_view.h',
'browser/ui/cocoa/tab_contents/sad_tab_view.mm',
'browser/ui/cocoa/tabs/tab_controller.h',
'browser/ui/cocoa/tabs/tab_controller.mm',
'browser/ui/cocoa/tabs/tab_strip_model_observer_bridge.h',
'browser/ui/cocoa/tabs/tab_strip_model_observer_bridge.mm',
'browser/ui/cocoa/tabs/tab_strip_view.h',
'browser/ui/cocoa/tabs/tab_strip_view.mm',
'browser/ui/cocoa/tabs/tab_view.h',
'browser/ui/cocoa/tabs/tab_view.mm',
'browser/ui/cocoa/tabs/tab_window_controller.h',
'browser/ui/cocoa/tabs/tab_window_controller.mm',
'browser/ui/cocoa/task_manager_mac.h',
'browser/ui/cocoa/task_manager_mac.mm',
'browser/ui/cocoa/themed_window.h',
'browser/ui/cocoa/themed_window.mm',
'browser/ui/cocoa/toolbar/reload_button.h',
'browser/ui/cocoa/toolbar/reload_button.mm',
'browser/ui/cocoa/toolbar/toolbar_button.h',
'browser/ui/cocoa/toolbar/toolbar_button.mm',
'browser/ui/cocoa/toolbar/toolbar_controller.h',
'browser/ui/cocoa/toolbar/toolbar_controller.mm',
'browser/ui/cocoa/toolbar/toolbar_view.h',
'browser/ui/cocoa/toolbar/toolbar_view.mm',
'browser/ui/cocoa/toolbar/wrench_toolbar_button_cell.h',
'browser/ui/cocoa/toolbar/wrench_toolbar_button_cell.mm',
'browser/ui/cocoa/ui_localizer.h',
'browser/ui/cocoa/ui_localizer.mm',
'browser/ui/cocoa/vertical_gradient_view.h',
'browser/ui/cocoa/vertical_gradient_view.mm',
'browser/ui/cocoa/view_id_util.h',
'browser/ui/cocoa/view_id_util.mm',
'browser/ui/cocoa/wrench_menu/menu_tracked_root_view.h',
'browser/ui/cocoa/wrench_menu/menu_tracked_root_view.mm',
'browser/ui/cocoa/wrench_menu/wrench_menu_controller.h',
'browser/ui/cocoa/wrench_menu/wrench_menu_controller.mm',
'browser/ui/cocoa/panels/panel_titlebar_view_cocoa.h',
'browser/ui/cocoa/panels/panel_titlebar_view_cocoa.mm',
'browser/ui/cocoa/panels/panel_window_controller_cocoa.h',
'browser/ui/cocoa/panels/panel_window_controller_cocoa.mm',
],
'mac_bundle_resources': [
'<@(mac_all_xibs)',
],
    }, # target chrome_nibs
], # targets
}
| 53.709163 | 98 | 0.71241 | 1,819 | 13,481 | 4.995602 | 0.126443 | 0.147904 | 0.292726 | 0.167272 | 0.894134 | 0.835589 | 0.652801 | 0.470672 | 0.138219 | 0.017828 | 0 | 0.00053 | 0.159484 | 13,481 | 250 | 99 | 53.924 | 0.80143 | 0.061568 | 0 | 0.025862 | 0 | 0 | 0.785211 | 0.770723 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
92d2d82b32d399de31730c7e823ea2a0014492f4 | 163 | py | Python | temboo/core/Library/Withings/Sleep/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Withings/Sleep/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Withings/Sleep/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.Withings.Sleep.GetSleepMetrics import GetSleepMetrics, GetSleepMetricsInputSet, GetSleepMetricsResultSet, GetSleepMetricsChoreographyExecution
| 81.5 | 162 | 0.91411 | 11 | 163 | 13.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042945 | 163 | 1 | 163 | 163 | 0.955128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
13c3b89e2da155a09044606d0ea770a811039610 | 40,889 | py | Python | Lokaverkefni.py | gullicoolboi69/lokaverkefni_forritun | f19431b52efd8ab442c8fcc1b3b887af336f6b8a | [
"MIT"
] | null | null | null | Lokaverkefni.py | gullicoolboi69/lokaverkefni_forritun | f19431b52efd8ab442c8fcc1b3b887af336f6b8a | [
"MIT"
] | null | null | null | Lokaverkefni.py | gullicoolboi69/lokaverkefni_forritun | f19431b52efd8ab442c8fcc1b3b887af336f6b8a | [
"MIT"
] | null | null | null | #Lokaverkefni
#Ingólfur Óskarsson
#Guðlaugur Haukur Árnason
import random #Látið inn random
#Klassi búinn til
class Nagdyr:
    def __init__(self,tegund,stadur,afl,þyngd,tennur):#the constructor
        self.tegund=tegund#the type of rodent
        self.afl=afl#how much power it has
        self.stadur=stadur#its position on the board
        self.þyngd=þyngd#its weight
        self.tennur=tennur#how sharp its teeth are
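# Hedged usage sketch (illustrative only, not used by the game): create a
# mouse on square 0 and derive its combat power the same way the game does below.
# example_mus = Nagdyr("Mús", 0, 4, 2, 6)
# example_power = example_mus.afl + example_mus.þyngd + example_mus.tennur  # 4+2+6 = 12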
#main game loop
svar=0 #the answer starts as 0
while svar!="3":#as long as the answer is not "3", the loop keeps going
    oftkast1=0#counts how often player 1 rolls
    oftkast2=0#counts how often player 2 rolls
    player1 = Nagdyr("Mús", 0, random.randrange(2, 7, 2), random.randint(1, 3), random.randrange(2, 7, 2))#stats for player 1 (type, position, power, weight, teeth)
    player2 = Nagdyr("Mús", 0, random.randrange(2, 7, 2), random.randint(1, 3), random.randrange(2, 7, 2))#stats for player 2 (type, position, power, weight, teeth)
    rotta1 = Nagdyr("Rotta", random.randrange(1, 100), random.randrange(2, 7, 2),random.randint(1, 3),random.randrange(2, 7, 2))#stats for rat 1 (type, position, power, weight, teeth)
    rotta2 = Nagdyr("Rotta", random.randrange(1, 100), random.randrange(2, 7, 2),random.randint(1, 3),random.randrange(2, 7, 2))#stats for rat 2 (type, position, power, weight, teeth)
    rotta3 = Nagdyr("Rotta", random.randrange(1, 100), random.randrange(2, 7, 2),random.randint(1, 3),random.randrange(2, 7, 2))#stats for rat 3 (type, position, power, weight, teeth)
    hamstur = Nagdyr("Hamstur", random.randrange(1, 50), random.randrange(2, 7, 2),0,0)#stats for the hamster (type, position, power)
    kaninan = Nagdyr("Kaninan", random.randrange(1, 50), random.randrange(2, 7, 2), 0, 0)#stats for the rabbit (type, position, power)
    # here we take the power, the weight and the tooth sharpness and add them together into a combat power value
powermus = player1.afl + player1.þyngd + player1.tennur
powermus2 = player2.afl + player2.þyngd + player2.tennur
powerrotta1 = rotta1.afl + rotta1.þyngd + rotta1.tennur
powerrotta2 = rotta2.afl + rotta2.þyngd + rotta2.tennur
powerrotta3 = rotta3.afl + rotta3.þyngd + rotta3.tennur
print("-----Nagdýr-----")
print("1 - Spila Einn")
print("2 - Spila Tveir")
print("3 - Hætta")
print("-----------------")
svar=input("Sláðu inn tölu frá bilinu 1-3 ")
    #checks whether player 2 is playing
tvoplayer=0
if svar == "2":
tvoplayer=1
svar = "1"
if svar=="1":
svar1=0
        while svar1!="3":#main loop for the game itself
print("\n-----Valmynd-----")
print("1 - Kasta teningi?")
print("2 - Staðsetning?")
print("3 - Hætta")
print("-------------------")
svar1=input("Sláðu inn tölu frá bilinu 1-3 ")
if svar1=="1":
print("\n------Mús 1-------")
print("Mús 1 kastar teningi ")
teningur = random.randint(1, 6)
print("Mús 1 fékk =", teningur)
                #for loop that checks whether a rat or the hamster is on the same square
for x in range(player1.stadur, teningur+player1.stadur):
x=x+1
if rotta1.stadur == x:
print("Þú ert á sama reit og rotta 1")
                        #fight between rat and mouse
if powerrotta1 > powermus:
print("\nRottan vinnur\n")
player1.stadur = player1.stadur - teningur - rotta1.afl
print("Þú ferð til baka um",rotta1.afl,"marga reiti")
elif powerrotta1 < powermus:
print("\nMús 1 vinnur\n")
else:
print("Jafntefli")#Ef það er jafntefli þá gerist ekkert
#athuga hvort músin eða rottan hefur meira afl
elif rotta2.stadur == x:
print("Þú ert á sama reit og rotta 2")
if powerrotta2 > powermus:
print("\nRottan vinnur\n")
player1.stadur = player1.stadur - teningur - rotta2.afl
print("Þú ferð til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus:
print("\nMús 1 vinnur\n")
else:
print("Jafntefli")
elif rotta3.stadur == x:
print("Þú ert á sama reit og rotta 3")
if powerrotta3 > powermus:
print("\nRottan vinnur\n")
player1.stadur = player1.stadur - teningur - rotta3.afl
print("Þú ferð til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus:
print("\nMús 1 vinnur\n")
else:
print("Jafntefli")
                    elif hamstur.stadur == x:#if mouse and hamster meet, the hamster throws the mouse forward
print("\nHAMSTURINN KASTAR ÞÉR ÁFRAM!\n")
player1.stadur = player1.stadur + hamstur.afl
print("\n Þú lentir á reit",player1.stadur)
                oftkast1=oftkast1 + 1#counts the roll
player1.stadur = player1.stadur + teningur
                if player1.stadur < 0:#if the player goes negative, reset back to square 0
player1.stadur = 0
print("Þú ert á reit", player1.stadur)
                if tvoplayer == 1:#if player 2 is playing, this branch runs
print("\n------Mús 2-------")
print("Nú kastar mús 2 ")
teningur = random.randint(1, 6)
print("Mús 2 fékk =", teningur)
                    for x in range(player2.stadur, teningur + player2.stadur):#for loop that checks whether player 2 meets a rat or the hamster
x = x + 1
if rotta1.stadur == x:
print("Þú ert á sama reit og rotta 1")
if powerrotta1 > powermus2:
print("\nRottan vinnur\n")
player2.stadur = player2.stadur - teningur - rotta1.afl
print("Þú ferð til baka um", rotta1.afl, "marga reiti")
elif powerrotta1 < powermus2:
print("\nMús 2 vinnur\n")
else:
print("Jafntefli")
                        # check whether the mouse or the rat has more power
elif rotta2.stadur == x:
print("Þú ert á sama reit og rotta 2")
if powerrotta2 > powermus2:
print("\nRottan vinnur\n")
player2.stadur = player2.stadur - teningur - rotta2.afl
print("Þú ferð til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus2:
print("\nMús 2 vinnur\n")
else:
print("Jafntefli")
elif rotta3.stadur == x:
print("Þú ert á sama reit og rotta 3")
                            if powerrotta3 > powermus2:
print("\nRottan vinnur\n")
player2.stadur = player2.stadur - teningur - rotta3.afl
print("Þú ferð til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus2:
print("\nMús 2 vinnur\n")
else:
print("Jafntefli")
elif hamstur.stadur == x:
print("\nHAMSTURINN KASTAR ÞÉR ÁFRAM!\n")
player2.stadur = player2.stadur + hamstur.afl
print("\n Þú lentir á reit", player2.stadur)
                    oftkast2=oftkast2 + 1#counts the roll
player2.stadur = player2.stadur + teningur
if player2.stadur < 0:
player2.stadur = 0
print("Þú ert á reit", player2.stadur)
teningur = random.randint(1, 6)
print("\nNúna kasta rotturnar\n")
print("------Rotta 1-------")
                att1=random.randint(1,2)#decides which direction the rat moves
if att1 == 1:
print("Rotta 1 fær", teningur,"og fer áfram")
for x in range(rotta1.stadur,rotta1.stadur+teningur):
x=x+1
if player1.stadur == x:
print("Rotta 1 hittir mús 1")
                            if powerrotta1 > powermus:#fight between rat and mouse
print("\nRotta 1 vinnur")
player1.stadur = player1.stadur - rotta1.afl
print("Mús 1 fer til baka um", rotta1.afl, "marga reiti")
elif powerrotta1 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
for x in range(rotta1.stadur, rotta1.stadur + teningur):
x = x + 1
if player2.stadur == x:
print("Rotta 1 hittir mús 2")
if powerrotta1 > powermus2:
print("\nRotta 1 vinnur")
player2.stadur = player2.stadur - rotta1.afl
print("Mús 2 fer til baka um", rotta1.afl, "marga reiti")
elif powerrotta1 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta1.stadur = rotta1.stadur + teningur
                    if rotta1.stadur > 100:#if the rat goes past 100, it is put back on 100
rotta1.stadur = 100
print("Rotta 1 er kominn á reit", rotta1.stadur,"\n")
elif att1 == 2:
print("Rotta 1 fær", teningur,"og fer til baka")
for x in range(rotta1.stadur,rotta1.stadur - teningur,-1):
x=x-1
if player1.stadur == x:
print("Rotta 1 hittir mús")
if powerrotta1 > powermus:
print("\nRotta 1 vinnur")
player1.stadur = player1.stadur - rotta1.afl
print("Þú ferð til baka um", rotta1.afl, "marga reiti")
elif powerrotta1 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")#Ef rotta rekst á mús og músin vinnur, þá fer hún áfram um 2
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
                        for x in range(rotta1.stadur, rotta1.stadur - teningur,-1):
                            x = x - 1
if player2.stadur == x:
print("Rotta 1 hittir mús 2")
if powerrotta1 > powermus2:
print("\nRotta 1 vinnur")
player2.stadur = player2.stadur - rotta1.afl
print("Mús 2 fer til baka um", rotta1.afl, "marga reiti")
elif powerrotta1 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta1.stadur = rotta1.stadur - teningur
print("Rotta 1 er kominn á reit", rotta1.stadur,"\n")
teningur = random.randint(1, 6)
print("------Rotta 2-------")
att2 = random.randint(1 , 2)
if att2 == 1:
print("Rotta 2 fær", teningur, "og fer áfram")
for x in range(rotta2.stadur,rotta2.stadur + teningur):
x=x+1
if player1.stadur == x:
print("\nRotta 2 hittir mús")
if powerrotta2 > powermus:
print("\nRotta 2 vinnur")
player1.stadur = player1.stadur - rotta2.afl
print("Þú ferð til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
for x in range(rotta2.stadur, rotta2.stadur + teningur):
x = x + 1
if player2.stadur == x:
print("\nRotta 2 hittir mús 2")
if powerrotta2 > powermus2:
print("\nRotta 2 vinnur")
player2.stadur = player2.stadur - rotta2.afl
print("Mús 2 fer til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta2.stadur = rotta2.stadur + teningur
if rotta2.stadur > 100:
rotta2.stadur = 100
print("Rotta 2 er kominn á reit", rotta2.stadur,"\n")
elif att2 == 2:
print("Rotta 2 fær", teningur, "og fer til baka")
                    for x in range(rotta2.stadur,rotta2.stadur - teningur,-1):
x=x-1
if player1.stadur == x:
print("\nRotta 2 hittir mús")
if powerrotta2 > powermus:
print("\nRotta 2 vinnur")
player1.stadur = player1.stadur - rotta2.afl
print("Þú ferð til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
                        for x in range(rotta2.stadur, rotta2.stadur - teningur,-1):
                            x = x - 1
if player2.stadur == x:
print("\nRotta 2 hittir mús 2")
if powerrotta2 > powermus2:
print("\nRotta 2 vinnur")
player2.stadur = player2.stadur - rotta2.afl
print("Mús 2 fer til baka um", rotta2.afl, "marga reiti")
elif powerrotta2 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta2.stadur = rotta2.stadur - teningur
print("Rotta 2 er kominn á reit", rotta2.stadur,"\n")
teningur = random.randint(1, 6)
print("------Rotta 3-------")
att3 = random.randint(1 , 2)
if att3 == 1:
print("Rotta 3 fær", teningur, "og fer áfram")
for x in range(rotta3.stadur,rotta3.stadur + teningur):
x=x+1
if player1.stadur == x:
print("Rotta 3 hittir mús")
if powerrotta3 > powermus:
print("\nRotta 3 vinnur")
player1.stadur = player1.stadur - rotta3.afl
print("Þú ferð til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
for x in range(rotta3.stadur, rotta3.stadur + teningur):
x = x + 1
if player2.stadur == x:
print("\nRotta 3 hittir mús 2")
if powerrotta3 > powermus2:
print("\nRotta 3 vinnur")
player2.stadur = player2.stadur - rotta3.afl
print("Mús 2 fer til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta3.stadur = rotta3.stadur + teningur
if rotta3.stadur > 100:
rotta3.stadur = 100
print("Rotta 3 er kominn á reit", rotta3.stadur,"\n")
elif att3 == 2:
print("Rotta 3 fær", teningur, "og fer til baka")
for x in range(rotta3.stadur - 1, rotta3.stadur - teningur - 1, -1):
if player1.stadur == x:
print("\nRotta 1 hittir mús 1")
if powerrotta3 > powermus:
print("\nRotta vinnur")
player1.stadur = player1.stadur - rotta3.afl
print("Þú ferð til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus:
print("\nMúsin vinnur")
player1.stadur=player1.stadur + 2
print("Mús 1 fer áfram um 2")
print("Mús 1 er kominn á reit", player1.stadur)
else:
print("Jafntefli")
if tvoplayer == 1:
for x in range(rotta3.stadur - 1, rotta3.stadur - teningur - 1, -1):
if player2.stadur == x:
print("\nRotta 3 hittir mús 2")
if powerrotta3 > powermus2:
print("\nRotta 3 vinnur")
player2.stadur = player2.stadur - rotta3.afl
print("Mús 2 fer til baka um", rotta3.afl, "marga reiti")
elif powerrotta3 < powermus2:
print("\nMúsin vinnur")
player2.stadur = player2.stadur + 2
print("Mús 2 fer áfram um 2")
print("Mús 2 er kominn á reit",player2.stadur)
else:
print("Jafntefli")
rotta3.stadur = rotta3.stadur - teningur
print("Rotta 3 er kominn á reit", rotta3.stadur,"\n")
teningur = random.randint(1, 6)
print("\n------Hamsturinn-------")
print("Núna kastar hamsturinn")
if player1.stadur < 0:
player1.stadur = 0
if player2.stadur < 0:
player2.stadur = 0
print("Hamsturinn fær",teningur)
#Whether the hamster moves forward or backward toward the mouse
if player1.stadur > hamstur.stadur:
#Whether the hamster runs into a mouse
for x in range(hamstur.stadur + 1, hamstur.stadur + teningur + 1):
if player1.stadur == x:
print("\nHAMSTURINN KASTAR ÞÉR ÁFRAM!")
player1.stadur=player1.stadur + hamstur.afl
print("Þú lentir á reit",player1.stadur,"\n")
hamstur.stadur=hamstur.stadur+teningur
#Checks whether the hamster and a rat are on the same square
if hamstur.stadur == rotta1.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att1 == 1:
rotta1.stadur=rotta1.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 1 er á reit",rotta1.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur = hamstur.stadur - 1
elif att1 == 2:
rotta1.stadur=rotta1.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 1 er á reit",rotta1.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur = hamstur.stadur - 1
elif hamstur.stadur == rotta2.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att2 == 1:
rotta2.stadur=rotta2.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 2 er á reit",rotta2.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur = hamstur.stadur - 1
elif att2 == 2:
rotta2.stadur = rotta2.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 2 er á reit",rotta2.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur = hamstur.stadur - 1
elif hamstur.stadur == rotta3.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att3 == 1:
rotta3.stadur=rotta3.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 3 er á reit",rotta3.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur = hamstur.stadur - 1
elif att3 == 2:
rotta3.stadur=rotta3.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 3 er á reit",rotta3.stadur)
print("Hamsturinn fer einn afturábak")
hamstur.stadur=hamstur.stadur - 1
elif player1.stadur < hamstur.stadur:
#Whether the hamster passes a mouse
for x in range(hamstur.stadur - 1, hamstur.stadur - teningur - 1, -1):
if player1.stadur == x:
print("\nHAMSTURINN KASTAR ÞÉR ÁFRAM!")
player1.stadur = player1.stadur + hamstur.afl #If the hamster/rabbit lands on a player, it throws the player with its power (afl)
print("Þú lentir á reit",player1.stadur,"\n")
hamstur.stadur=hamstur.stadur-teningur
#Checks whether the hamster and a rat are on the same square
if hamstur.stadur == rotta1.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att1 == 1:
rotta1.stadur=rotta1.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 1 er á reit",rotta1.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur = hamstur.stadur + 1
elif att1 == 2:
rotta1.stadur=rotta1.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 1 er á reit",rotta1.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur = hamstur.stadur + 1
elif hamstur.stadur == rotta2.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att2 == 1:
rotta2.stadur=rotta2.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 2 er á reit",rotta2.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur = hamstur.stadur + 1
elif att2 == 2:
rotta2.stadur = rotta2.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 2 er á reit",rotta2.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur = hamstur.stadur + 1
elif hamstur.stadur == rotta3.stadur:
print("Hamstur lenti á sama reit og rotta 1")
if att3 == 1:
rotta3.stadur=rotta3.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 3 er á reit",rotta3.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur = hamstur.stadur + 1
elif att3 == 2:
rotta3.stadur=rotta3.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 3 er á reit",rotta3.stadur)
print("Hamsturinn fer einn áfram")
hamstur.stadur=hamstur.stadur + 1
print("Hamsturinn er kominn á reit",hamstur.stadur)
if player1.stadur >= 100: #If you land on 100 or go past it, you win the game
print("Til hamingju Mús 1 þú vannst! ")
print(" III")
print(" IIIIIII")
print(" IIII IIII")
print("IIII 1 IIII") #bikar
print(" IIII IIII")
print(" IIIIIII")
print(" III")
print(" III")
print(" III")
print(" III")
print(" IIIII")
print(" IIIIIII")
print("\n Þú Kastaðir Teningnum",oftkast1,"Sinnum")#Sýnir hversu oft þú þurftir að kasta
svar1="3"
if tvoplayer == 1:
teningur = random.randint(1, 6)
print("\n------Kaninan-------")
print("Núna kastar Kaninan")
print("Kaninan fær", teningur)
# Whether the rabbit moves forward or backward toward the mouse
if player2.stadur > kaninan.stadur:
# Whether the rabbit runs into the mouse
for x in range(kaninan.stadur + 1, kaninan.stadur + teningur + 1):
if player2.stadur == x:
print("\nKANÍNAN KASTAR ÞÉR ÁFRAM!")
player2.stadur = player2.stadur + kaninan.afl
print("Þú lentir á reit",player2.stadur,"\n")
kaninan.stadur = kaninan.stadur + teningur
# Checks whether the rabbit and a rat are on the same square
if kaninan.stadur == rotta1.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att1 == 1:
rotta1.stadur = rotta1.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 1 er á reit", rotta1.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif att1 == 2:
rotta1.stadur = rotta1.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 1 er á reit", rotta1.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif kaninan.stadur == rotta2.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att2 == 1:
rotta2.stadur = rotta2.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 2 er á reit", rotta2.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif att2 == 2:
rotta2.stadur = rotta2.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 2 er á reit", rotta2.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif kaninan.stadur == rotta3.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att3 == 1:
rotta3.stadur = rotta3.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 3 er á reit", rotta3.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif att3 == 2:
rotta3.stadur = rotta3.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 3 er á reit", rotta3.stadur)
print("Kaninan fer einn afturábak")
kaninan.stadur = kaninan.stadur - 1
elif player2.stadur < kaninan.stadur:
# Whether the rabbit passes the mouse
for x in range(kaninan.stadur - 1, kaninan.stadur - teningur - 1, -1):
if player2.stadur == x:
print("\nKANÍNAN KASTAR ÞÉR ÁFRAM!")
player2.stadur = player2.stadur + kaninan.afl
print("Þú lentir á reit",player2.stadur,"\n")
kaninan.stadur = kaninan.stadur - teningur
# Checks whether the rabbit and a rat are on the same square
if kaninan.stadur == rotta1.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att1 == 1:
rotta1.stadur = rotta1.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 1 er á reit", rotta1.stadur)
print("Kaninan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
elif att1 == 2:
rotta1.stadur = rotta1.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 1 er á reit", rotta1.stadur)
print("Kaninan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
elif kaninan.stadur == rotta2.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att2 == 1:
rotta2.stadur = rotta2.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 2 er á reit", rotta2.stadur)
print("Kaninan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
elif att2 == 2:
rotta2.stadur = rotta2.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 2 er á reit", rotta2.stadur)
print("Kaninan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
elif kaninan.stadur == rotta3.stadur:
print("Kaninan lenti á sama reit og rotta 1")
if att3 == 1:
rotta3.stadur = rotta3.stadur - 1
print("Rottan fer einn afturábak")
print("Rotta 3 er á reit", rotta3.stadur)
print("Kaninan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
elif att3 == 2:
rotta3.stadur = rotta3.stadur + 1
print("Rottan fer einn áfram")
print("Rotta 3 er á reit", rotta3.stadur)
print("Kanínan fer einn áfram")
kaninan.stadur = kaninan.stadur + 1
print("Kanínan er kominn á reit", kaninan.stadur)
if player2.stadur >= 100:
print("Til hamingju Mús 2 þú vannst! ")
print(" III")
print(" IIIIIII")
print(" IIII IIII")
print("IIII 1 IIII") #bikar
print(" IIII IIII")
print(" IIIIIII")
print(" III")
print(" III")
print(" III")
print(" III")
print(" IIIII")
print(" IIIIIII")
print("\n Þú Kastaðir Teningnum",oftkast2,"Sinnum")
svar1 = "3"
elif svar1=="2":
#Here the stats of all the rodents are shown
print("Mús 1 er á reit",player1.stadur)
print("Mús 1 er",player1.þyngd,"kg.")
if player1.tennur == 2:
print("Mús 1 hefur ekki hvassar tennur.")
elif player1.tennur == 4:
print("Mús 1 ert með hvassar tennur.")
elif player1.tennur == 6:
print("Mús 1 ert með MJÖG hvassar tennur.")
if tvoplayer == 1: #If player 2 is playing, this if statement runs
print("Mús 2 er á reit", player2.stadur)
print("Mús 2 er",player2.þyngd,"kg.")
if player2.tennur == 2:
print("Mús 2 hefur ekki hvassar tennur.")
elif player2.tennur == 4:
print("Mús 2 ert með hvassar tennur.")
elif player2.tennur == 6:
print("Mús 2 ert með MJÖG hvassar tennur.")
print("Rotta 1 er á reit",rotta1.stadur)
print("Rotta 1 er",rotta1.þyngd,"kg")
if rotta1.tennur == 2:
print("Rotta 1 hefur ekki hvassar tennur.")
elif rotta1.tennur == 4:
print("Rotta 1 er með hvassar tennur.")
elif rotta1.tennur == 6:
print("Rotta 1 er með MJÖG hvassar tennur.")
print("Rotta 2 er á reit", rotta2.stadur)
print("Rotta 2 er", rotta2.þyngd, "kg")
if rotta2.tennur == 2:
print("Rotta 2 hefur ekki hvassar tennur.")
elif rotta2.tennur == 4:
print("Rotta 2 er með hvassar tennur.")
elif rotta2.tennur == 6:
print("Rotta 2 er með MJÖG hvassar tennur.")
print("Rotta 3 er á reit", rotta3.stadur)
print("Rotta 3 er", rotta3.þyngd, "kg")
if rotta3.tennur == 2:
print("Rotta 3 hefur ekki hvassar tennur.")
elif rotta3.tennur == 4:
print("Rotta 3 er með hvassar tennur.")
elif rotta3.tennur == 6:
print("Rotta 3 er með MJÖG hvassar tennur.")
print("Hamstur er á reit", hamstur.stadur)
if tvoplayer == 1:
print("Kanínan er á reit",kaninan.stadur)
| 56.790278 | 190 | 0.414684 | 3,779 | 40,889 | 4.485843 | 0.058746 | 0.050614 | 0.012801 | 0.025484 | 0.82946 | 0.788698 | 0.762801 | 0.737553 | 0.725755 | 0.697086 | 0 | 0.046063 | 0.505173 | 40,889 | 719 | 191 | 56.869263 | 0.791776 | 0.052532 | 0 | 0.745008 | 0 | 0 | 0.169476 | 0.001238 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001536 | false | 0 | 0.001536 | 0 | 0.004608 | 0.471582 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
13ed117d5f734c617b81331c08a84a82f168d46f | 9,266 | py | Python | augur/metrics/commit/commit.py | derekrechtien/augur | 8dcc6c5b7d6a03aca9b7edc4843a47032bb6d116 | [
"MIT"
] | null | null | null | augur/metrics/commit/commit.py | derekrechtien/augur | 8dcc6c5b7d6a03aca9b7edc4843a47032bb6d116 | [
"MIT"
] | null | null | null | augur/metrics/commit/commit.py | derekrechtien/augur | 8dcc6c5b7d6a03aca9b7edc4843a47032bb6d116 | [
"MIT"
] | 2 | 2019-12-12T04:36:22.000Z | 2019-12-14T15:53:08.000Z | """
Metrics that provide data about commits & their associated activity
"""
import datetime
import sqlalchemy as s
import pandas as pd
from augur.util import annotate, add_metrics
@annotate(tag='committers')
def committers(self, repo_group_id, repo_id=None, begin_date=None, end_date=None, period='week'):
if not begin_date:
begin_date = '1970-1-1 00:00:01'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if repo_id:
committersSQL = s.sql.text(
"""
SELECT
date_trunc(:period, commits.cmt_author_date::date) as date,
repo_name,
rg_name,
count(cmt_author_name)
FROM
commits, repo, repo_groups
WHERE
commits.repo_id = :repo_id AND commits.repo_id = repo.repo_id
AND repo.repo_group_id = repo_groups.repo_group_id
AND commits.cmt_author_date BETWEEN :begin_date and :end_date
GROUP BY date, repo_name, rg_name
ORDER BY date DESC
"""
)
else:
committersSQL = s.sql.text(
"""
SELECT
date_trunc(:period, commits.cmt_author_date::date) as date,
rg_name,
count(cmt_author_name)
FROM
commits, repo, repo_groups
WHERE
repo.repo_group_id = repo_groups.repo_group_id AND repo.repo_group_id = :repo_group_id
AND repo.repo_id = commits.repo_id
AND commits.cmt_author_date BETWEEN :begin_date and :end_date
GROUP BY date, rg_name
"""
)
results = pd.read_sql(committersSQL, self.database, params={'repo_id': repo_id, 'repo_group_id': repo_group_id,'begin_date': begin_date, 'end_date': end_date, 'period':period})
return results
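# Illustrative call (hypothetical values; `metrics` stands for an initialized
# Augur metrics object exposing the `database` connection these queries assume):
#   weekly = metrics.committers(repo_group_id=20, repo_id=25430, period='week')
#   print(weekly.head())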
@annotate(tag='annual-commit-count-ranked-by-new-repo-in-repo-group')
def annual_commit_count_ranked_by_new_repo_in_repo_group(self, repo_group_id, repo_id=None, calendar_year=None):
"""
For each repository in a collection of repositories being managed, each REPO that first appears in the parameterized
calendar year (a new repo in that year), show all commits for that year (total for year by repo).
Result ranked from highest number of commits to lowest by default.
:param repo_group_id: the group of repositories to analyze
:param repo_id: the repository's repo_id, defaults to None
:param calendar_year: the calendar year a repo is created in to be considered "new", defaults to 2019
"""
if calendar_year is None:
calendar_year = 2019
if not repo_id:
cdRgNewrepRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, sum(cast(added as INTEGER) - cast(removed as INTEGER) - cast(whitespace as INTEGER)) as net, patches, repo_name
FROM dm_repo_annual, repo, repo_groups
where repo.repo_group_id = :repo_group_id
and dm_repo_annual.repo_id = repo.repo_id
and date_part('year', repo.repo_added) = :calendar_year
and repo.repo_group_id = repo_groups.repo_group_id
group by repo.repo_id, patches, rg_name
ORDER BY net desc
LIMIT 10
""")
else:
cdRgNewrepRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, sum(cast(added as INTEGER) - cast(removed as INTEGER) - cast(whitespace as INTEGER)) as net, patches, repo_name
FROM dm_repo_annual, repo, repo_groups
where repo.repo_group_id = (select repo.repo_group_id from repo where repo.repo_id = :repo_id)
and dm_repo_annual.repo_id = repo.repo_id
and date_part('year', repo.repo_added) = :calendar_year
and repo.repo_group_id = repo_groups.repo_group_id
group by repo.repo_id, patches, rg_name
ORDER BY net desc
LIMIT 10
""")
results = pd.read_sql(cdRgNewrepRankedCommitsSQL, self.database, params={ "repo_group_id": repo_group_id,
"repo_id": repo_id, "calendar_year": calendar_year})
return results
@annotate(tag='annual-commit-count-ranked-by-repo-in-repo-group')
def annual_commit_count_ranked_by_repo_in_repo_group(self, repo_group_id, repo_id=None, timeframe=None):
"""
For each repository in a collection of repositories being managed, each REPO's total commits during the current
month, the current year, or all time. Result ranked from highest number of commits to lowest by default.
:param repo_group_id: The repository group's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:param timeframe: 'all', 'year' or 'month', defaults to 'all'
"""
if timeframe is None:
timeframe = 'all'
if repo_id:
if timeframe == 'all':
cdRgTpRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_annual, repo, repo_groups
WHERE repo.repo_group_id = (select repo.repo_group_id from repo where repo.repo_id = :repo_id)
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_annual.repo_id = repo.repo_id
group by repo.repo_id, patches
order by net desc
LIMIT 10
""")
elif timeframe == 'year':
cdRgTpRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_annual, repo, repo_groups
WHERE repo.repo_group_id = (select repo.repo_group_id from repo where repo.repo_id = :repo_id)
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_annual.repo_id = repo.repo_id
AND date_part('year', repo_added) = date_part('year', CURRENT_DATE)
group by repo.repo_id, patches
order by net desc
LIMIT 10
""")
elif timeframe == 'month':
cdRgTpRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_monthly, repo, repo_groups
WHERE repo.repo_group_id = (select repo.repo_group_id from repo where repo.repo_id = :repo_id)
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_monthly.repo_id = repo.repo_id
AND date_part('year', repo_added) = date_part('year', CURRENT_DATE)
AND date_part('month', repo_added) = date_part('month', CURRENT_DATE)
group by repo.repo_id, patches
order by net desc
LIMIT 10
""")
else:
if timeframe == 'all':
cdRgTpRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_annual, repo, repo_groups
WHERE repo.repo_group_id = :repo_group_id
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_annual.repo_id = repo.repo_id
group by repo.repo_id, patches
order by net desc
LIMIT 10
""")
elif timeframe == "year":
cdRgTpRankedCommitsSQL = s.sql.text(
"""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_annual, repo, repo_groups
WHERE repo.repo_group_id = :repo_group_id
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_annual.repo_id = repo.repo_id
AND date_part('year', repo_added) = date_part('year', CURRENT_DATE)
group by repo.repo_id, patches
order by net desc
LIMIT 10
"""
)
elif timeframe == 'month':
cdRgTpRankedCommitsSQL = s.sql.text("""
SELECT repo.repo_id, repo_name as name, SUM(added - removed - whitespace) as net, patches
FROM dm_repo_monthly, repo, repo_groups
WHERE repo.repo_group_id = :repo_group_id
AND repo.repo_group_id = repo_groups.repo_group_id
AND dm_repo_monthly.repo_id = repo.repo_id
AND date_part('year', repo_added) = date_part('year', CURRENT_DATE)
AND date_part('month', repo_added) = date_part('month', CURRENT_DATE)
group by repo.repo_id, patches
order by net desc
LIMIT 10
""")
results = pd.read_sql(cdRgTpRankedCommitsSQL, self.database, params={ "repo_group_id": repo_group_id,
"repo_id": repo_id})
return results
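# Registration hook: `add_metrics` is assumed to scan this module (via
# `__name__`) for the annotated functions above and bind them to the given
# metrics object.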
def create_commit_metrics(metrics):
add_metrics(metrics, __name__)
| 45.64532 | 180 | 0.616447 | 1,226 | 9,266 | 4.397227 | 0.108483 | 0.096457 | 0.099981 | 0.063996 | 0.800779 | 0.786125 | 0.780189 | 0.774068 | 0.774068 | 0.760527 | 0 | 0.004976 | 0.305957 | 9,266 | 202 | 181 | 45.871287 | 0.833307 | 0.099719 | 0 | 0.674419 | 0 | 0.054264 | 0.659393 | 0.065619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031008 | false | 0 | 0.054264 | 0 | 0.108527 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b91b16680f1e9a5bfd44f4a0522b25914d8ba2f1 | 74,313 | py | Python | standardhouse.py | Eramismus/CommunityModelCreator | b23d2239e24b6785a1c7ddb7186991802a1cbf0f | [
"MIT"
] | null | null | null | standardhouse.py | Eramismus/CommunityModelCreator | b23d2239e24b6785a1c7ddb7186991802a1cbf0f | [
"MIT"
] | null | null | null | standardhouse.py | Eramismus/CommunityModelCreator | b23d2239e24b6785a1c7ddb7186991802a1cbf0f | [
"MIT"
] | null | null | null | import math
from teaser.logic.buildingobjects.buildingphysics.rooftop import Rooftop
from teaser.logic.buildingobjects.buildingphysics.layer import Layer
from teaser.logic.buildingobjects.buildingphysics.material import Material
from teaser.logic.buildingobjects.buildingphysics.outerwall import OuterWall
from teaser.logic.buildingobjects.buildingphysics.innerwall import InnerWall
from teaser.logic.buildingobjects.buildingphysics.groundfloor import GroundFloor
from teaser.logic.buildingobjects.buildingphysics.floor import Floor
from teaser.logic.buildingobjects.buildingphysics.window import Window
from teaser.logic.buildingobjects.thermalzone import ThermalZone
from teaser.logic.buildingobjects.boundaryconditions.boundaryconditions \
import BoundaryConditions
from teaser.logic.buildingobjects.building import Building
# Dictionary of material properties, data from Allen and Pinney (check the values for air)
# {name: [density (kg/m3), specific heat capacity (kJ/(kg.K)), thermal conductivity (W/(m.K)), IR emissivity, solar absorptivity]}
Mat_dict = {"Plaster": [800, 0.840, 0.26, 0.91, 0.50],
"Plasterboard": [950, 0.840, 0.16, 0.91, 0.50],
"Brick_in": [1700, 0.800, 0.62, 0.93, 0.70],
"Brick_out": [1700, 0.8, 0.84, 0.90, 0.93],
"Cavity": [1.276, 1.006, 0.065/0.18, 0, 0], # R is 0.18 for airspaces acc. CIBSE Guide A
"Glass_fibre": [250, 0.840, 0.04, 0.90, 0.30],
"Insulation": [12, 0.840, 0.040, 0.90,0.30],
"Timber": [650, 1.2, 0.14, 0.91, 0.65],
"Carpet": [160, 1, 0.06, 0.90, 0.65],
"Roof_tile": [1900, 0.8, 0.84, 0.90, 0.60],
"Earth": [1900, 1.7, 1.4, 0.90, 0.85],
"Concrete": [2100, 0.840, 1.40, 0.90, 0.65],
"Softwood": [230, 2.760, 0.12, 0.90, 0.65],
"GlasWindow": [2500, 0.750, 1.05, 0.90, 0.20],
"ConcreteBlock": [1400, 1.0, 0.510, 0.90, 0.65],
"ConcreteWallPanel": [1200, 1.0, 0.380, 0.90, 0.65],
"ConcreteFloorPanel": [2000, 1.0, 1.13, 0.90, 0.65],
"Screed": [1200, 0.840, 0.410, 0.91, 0.65],
"ConcreteWaffle": [2000, 1.0, 1.13, 0.90, 0.65],
"AluminiumSheet": [2700, 0.880, 210, 0.22, 0.20],
"TimberPanel": [650, 1.2, 0.14, 0.91, 0.65],
"CeramicTiles": [1900, 0.8, 0.84, 0.90, 0.60],
"PortlandStone": [2200, 0.712, 1.83, 0.90, 0.60]
}
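# A minimal sketch (not called below) of how the repeated Layer/Material
# boilerplate in this module could be wrapped. `add_layer` is a hypothetical
# helper, not part of TEASER's API; Layer and Material are the TEASER classes
# imported above.
def add_layer(parent, layer_id, name, thickness):
    """Attach one construction layer of material `name` to a TEASER element."""
    layer = Layer(parent=parent, id=layer_id)
    layer.thickness = thickness
    material = Material(layer)
    material.name = name
    # Unpack the Mat_dict row into the five TEASER material attributes.
    (material.density, material.heat_capac, material.thermal_conduc,
     material.ir_emissivity, material.solar_absorp) = Mat_dict[name]
    return layer
# e.g. add_layer(roof, 0, "Plasterboard", 0.010) would replace the seven-line
# pattern repeated throughout create_stand_dwelling below.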
def create_stand_dwelling(prj, build_id, type, scaler):
# Data and values based on Allen and Pinney 1990, BEPAC, "A Set of Standard Dwellings"
bldg = Building(parent=prj)
bldg.name = build_id
bldg.street_name = "StandardClose"
bldg.city = "StandardTown"
if type == "detached":
print("Creating a detached house")
bldg.year_of_construction = 1950
bldg.number_of_floors = 2
bldg.height_of_floors = 2.5
# Instantiate a ThermalZone class and set the Building as a parent of it.
# Set some parameters of the thermal zone. Be careful: Dymola does not
# like whitespaces in names and filenames, thus we will delete them
# anyway in TEASER.
tz = ThermalZone(parent=bldg)
tz.name = "House"
tz.area = scaler[1]*(19.05+13.14+7.71+10.05)
tz.volume = tz.area * bldg.number_of_floors * bldg.height_of_floors
tz.infiltration_rate = 0.7
# Instantiate BoundaryConditions and load conditions for `Living`.
tz.use_conditions = BoundaryConditions(parent=tz)
tz.use_conditions.load_use_conditions("Living", prj.data)
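# "Living" is assumed to load TEASER's bundled residential use profile
# (occupancy and internal-gain schedules) from prj.data.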
# Define two building elements reflecting a pitched roof (south = 180 and
# north = 0). Setting the the ThermalZone as a parent will automatically
# assign this element to the thermal zone. We also set names, tilt and
# coefficients for heat transfer on the inner and outer side of the
# roofs. If the building has a flat roof, please use -1 as
# orientation. Please read the docs to get more information on these
# parameters.
# To define the wall constructions we need to instantiate Layer and
# Material objects and set attributes. id indicates the order of wall
# construction from inside to outside (so 0 is on the inner surface). You
# need to set this value!
# outer walls
# {'name_of_wall': [area, tilt, orientation]}
# interior walls
# {'name_of_wall': [area, tilt, orientation]}
# interior floors
# {'name_of_wall': [area]}
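# Net opaque wall area per facade = gross facade area minus the window and
# door openings listed for that orientation in win_dict/door_dict below.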
w_n = scaler[2]*(6.5*5.1-(1.58*0.83+1.63*0.58+1.8*2.1+1.58*0.83))
w_e = scaler[2]*(7.20*5.1-(0.80*2.05+0.74*0.89))
w_s = scaler[2]*(6.5*5.1-(1.02*0.87+2.16*0.81+0.8*2.05+2.12*1.07))
w_w = scaler[2]*(7.20*5.1-(0.80*2.05+0.74*0.89))
out_wall_dict = {"OuterWall_north": [w_n, 90.0, 0.0],
"OuterWall_east": [w_e, 90.0, 90.0],
"OuterWall_south": [w_s, 90.0, 180.0],
"OuterWall_west": [w_w, 90.0, 270.0]
}
# Lump all inner walls into one
in_wall_dict = {"InnerWall_south": [scaler[3]*((4.3+3.83+2.93+4.43-(4.43+3.83-4.43-2.63-0.016*2-0.105))*2.5+(2.03+1.63+2.63+2.28+3.83-2.63+3.23+2.93+3.23-1.9)*2.35), 90.0, 0.0],
}
# Only areas given
in_floor_dict = {"InnerFloor1": [scaler[4]*(19.05+13.14+7.71+10.05)],
}
roof_dict = {"Roof_South": [0, 55, 180],
"Roof_North": [0, 55, 0],
"Roof_West": [0, 55, 270],
"Roof_East": [0, 55, 90]
}
# Calculate the areas, assumed tilt 55 degrees
roof_dict["Roof_South"][0] = scaler[5]*0.5*2.85*6.50/math.cos(math.radians(roof_dict["Roof_South"][1]))
roof_dict["Roof_North"][0] = scaler[5]*0.5*2.85*6.50/math.cos(math.radians(roof_dict["Roof_North"][1]))
roof_dict["Roof_East"][0] = scaler[5]*(0.5*2.85*(7.20-0.7)/math.cos(math.radians(roof_dict["Roof_East"][1]))+0.7*2.85/math.cos(math.radians(roof_dict["Roof_East"][1]))+2.85*1.2/math.cos(math.radians(roof_dict["Roof_East"][1])))
roof_dict["Roof_West"][0] = scaler[5]*(0.5*2.85*(7.20-0.7)/math.cos(math.radians(roof_dict["Roof_West"][1]))+0.7*2.85/math.cos(math.radians(roof_dict["Roof_West"][1]))+2.85*1.2/math.cos(math.radians(roof_dict["Roof_West"][1])))
# For ground floors the orientation is always -2
ground_floor_dict = {"GroundFloor": [scaler[4]*(19.05+13.14+7.71+10.05), 0.0, -2]}
win_dict = {"Window_south1": [scaler[6]*2.12*1.07, 90.0, 180.0],
"Window_south2": [scaler[6]*2.16*0.81, 90.0, 180.0],
"Window_south3": [scaler[6]*0.87*1.02, 90.0, 180.0],
"Window_north1": [scaler[6]*1.58*0.83, 90.0, 0],
"Window_north2": [scaler[6]*1.63*0.58, 90.0, 0],
"Window_north3": [scaler[6]*1.58*0.83, 90.0, 0],
"Door_back": [1.8*2.10, 90.0, 0],
"Window_east": [scaler[6]*0.74*0.89, 90.0, 90],
"Window_east": [scaler[6]*0.74*0.89, 90.0, 270]
}
door_dict = {"Door_front": [0.8*2.05, 90.0, 180.0],
"Door_side1": [0.8*2.05, 90.0, 90.0],
"Door_side2": [0.8*2.05, 90.0, 270]
}
# Start with the roof
for key, value in roof_dict.items():
roof = Rooftop(parent=tz)
roof.name = key
roof.tilt = value[1]
roof.area = value[0]
roof.orientation = value[2]
roof.inner_convection = 4.3
roof.outer_convection = 18.1
roof.inner_radiation = 5.7
roof.outer_radiation = 5.7
# Plasterboard
layer_s1 = Layer(parent=roof, id=0)
layer_s1.thickness = 0.010
material_s1 = Material(layer_s1)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s2 = Layer(parent=roof, id=1)
layer_s2.thickness = scaler[7]*0.10
material_s1 = Material(layer_s2)
material_s1.name = "Glass_fibre"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
#Loft space
layer_s3 = Layer(parent=roof, id=2)
layer_s3.thickness = 0.5*2.15 # Average of the smallest height (conservative)
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Roof tiles
layer_s4 = Layer(parent=roof, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Roof_tile"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# External Walls
for key, value in out_wall_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
# area, tilt and orientation need to be set individually.
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
# External walls
# Plaster
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick, inner
layer_s2 = Layer(parent=out_wall, id=1)
layer_s2.thickness = 0.105
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s3 = Layer(parent=out_wall, id=2)
layer_s3.thickness = 0.065
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s4 = Layer(parent=out_wall, id=3)
layer_s4.thickness = scaler[8]*0.065
material_s1 = Material(layer_s4)
material_s1.name = "Insulation"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick, outer
layer_s5 = Layer(parent=out_wall, id=4)
layer_s5.thickness = 0.105
material_s1 = Material(layer_s5)
material_s1.name = "Brick_out"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner walls
for key, value in in_wall_dict.items():
in_wall = InnerWall(parent=tz)
in_wall.name = key
in_wall.area = value[0]
in_wall.tilt = value[1]
in_wall.orientation = value[2]
in_wall.inner_convection = 3.0
in_wall.outer_convection = 3.0
in_wall.inner_radiation = 5.7
in_wall.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick
layer_s2 = Layer(parent=in_wall, id=1)
layer_s2.thickness = 0.105 # Average of the smallest height (conservative)
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plaster
layer_s3 = Layer(parent=in_wall, id=2)
layer_s3.thickness = 0.016 # Average of the smallest height (conservative)
material_s1 = Material(layer_s3)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner floors
for key, value in in_floor_dict.items():
in_floor = Floor(parent=tz)
in_floor.name = key
in_floor.area = value[0]
in_floor.inner_convection = 3.0
in_floor.outer_convection = 3.0
in_floor.inner_radiation = 5.7
in_floor.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_floor, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# timber
layer_s2 = Layer(parent=in_floor, id=1)
layer_s2.thickness = 0.020
material_s1 = Material(layer_s2)
material_s1.name = "Timber"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s3 = Layer(parent=in_floor, id=2)
layer_s3.thickness = 0.200
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plasterboard
layer_s4 = Layer(parent=in_floor, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
for key, value in ground_floor_dict.items():
ground = GroundFloor(parent=tz)
ground.name = key
ground.area = value[0]
ground.tilt = value[1]
ground.orientation = value[2]
ground.inner_convection = 3.0
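# The very large outer coefficients below are assumed to pin the slab's outer
# surface directly to the ground temperature.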
ground.outer_convection = 100000000000000
ground.inner_radiation = 5.7
ground.outer_radiation = 100000000000000
# Carpet
layer_s1 = Layer(parent=ground, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete slab
layer_s2 = Layer(parent=ground, id=1)
layer_s2.thickness = 0.1
material_s1 = Material(layer_s2)
material_s1.name = "Concrete"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
#Earth
layer_s3 = Layer(parent=ground, id=2)
layer_s3.thickness = 0.160
material_s1 = Material(layer_s3)
material_s1.name = "Earth"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Doors
for key, value in door_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.030
material_s1 = Material(layer_s1)
material_s1.name = "Softwood"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Windows
for key, value in win_dict.items():
win = Window(parent=tz)
win.name = key
win.area = value[0]
win.tilt = value[1]
win.orientation = value[2]
# Additional to the already known attributes the window has
# additional attributes. Window.g_value describes the solar gain
# through windows, a_conv the convective heat transmission due to
# absorption of the window on the inner side. shading_g_total and
# shading_max_irr refers to the shading (solar gain reduction of the
# shading and shading_max_irr the threshold of irradiance to
# automatically apply shading).
win.inner_convection = 3.0
win.inner_radiation = 5.7
win.outer_convection = 14
win.outer_radiation = 5.7
win.g_value = 0.84
win.a_conv = 0.03
win.shading_g_total = 0.0
win.shading_max_irr = 180.0
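# Reading of the TEASER parameters assumed here: with shading_g_total = 0.0 the
# shading blocks all solar gain once irradiance exceeds shading_max_irr
# (180 W/m2); g_value = 0.84 is typical of the uncoated double glazing below.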
# Double-glazed windows:
win_layer1 = Layer(parent=win)
win_layer1.id = 0
win_layer1.thickness = 0.006
# Material for Glas
win_material = Material(win_layer1)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Gap of 12 mm
win_layer2 = Layer(parent=win)
win_layer2.id = 1
win_layer2.thickness = 0.012
win_material = Material(win_layer2)
win_material.name = "Cavity"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
#Glass
win_layer3 = Layer(parent=win)
win_layer3.id = 2
win_layer3.thickness = 0.006
# Material for Glas
win_material = Material(win_layer3)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# %%%%% ---- Semi detached House --- %%%%%%%%%%%%%
if type == "semi-detached":
print("Creating a semi-detached house")
bldg.year_of_construction = 1950
bldg.number_of_floors = 2
bldg.height_of_floors = 2.35
# Instantiate a ThermalZone class and set the Building as a parent of it.
# Set some parameters of the thermal zone. Be careful: Dymola does not
# like whitespaces in names and filenames, thus we will delete them
# anyway in TEASER.
tz = ThermalZone(parent=bldg)
tz.name = "House"
tz.area = scaler[1]*(14.69+13.52+4.73+9.60)
tz.volume = tz.area * bldg.number_of_floors * bldg.height_of_floors
tz.infiltration_rate = 0.7 # Based on SAP
# Instantiate BoundaryConditions and load conditions for `Living`.
tz.use_conditions = BoundaryConditions(parent=tz)
tz.use_conditions.load_use_conditions("Living", prj.data)
# Define two building elements reflecting a pitched roof (south = 180 and
# north = 0). Setting the the ThermalZone as a parent will automatically
# assign this element to the thermal zone. We also set names, tilt and
# coefficients for heat transfer on the inner and outer side of the
# roofs. If the building has a flat roof, please use -1 as
# orientation. Please read the docs to get more information on these
# parameters.
# To define the wall constructions we need to instantiate Layer and
# Material objects and set attributes. id indicates the order of wall
# construction from inside to outside (so 0 is on the inner surface). You
# need to set this value!
# outer walls
# {'name_of_wall': [area, tilt, orientation]}
# interior walls
# {'name_of_wall': [area, tilt, orientation]}
# interior floors
# {'name_of_wall': [area]}
w_n = scaler[2]*(6*4.9-(1.8*2.1+1.58*0.83+0.76*0.76+0.8*2.05))
w_e = scaler[2]*(7.20*4.9-(0.74*0.89))
w_s = scaler[2]*(6*4.9-(0.74*0.80+0.8*2.05+1.51*1.02+1.51*0.86))
w_w = scaler[2]*(7.20*4.9)
out_wall_dict = {"OuterWall_north": [w_n, 90.0, 0.0],
"OuterWall_east": [w_e, 90.0, 90.0],
"OuterWall_south": [w_s, 90.0, 180.0],
}
# Lump all inner walls into one
in_wall_dict = {"InnerWall_south": [scaler[3]*(2.4*(2.03+4.23+3.83)+2.30*(2*2.03+3.83+3.53)), 90.0, 0.0],
"PartyWall_west": [w_w, 90.0, 270.0]
}
# Only areas given
in_floor_dict = {"InnerFloor1": [scaler[4]*(14.69+13.52+4.73+9.60)]
}
roof_dict = {"Roof_South": [0, 55, 180],
"Roof_North": [0, 55, 0],
"Roof_West": [0, 55, 270],
"Roof_East": [0, 55, 90]
}
# Calculate the areas, assumed tilt 55 degrees
roof_dict["Roof_South"][0] = scaler[5]*0.5*2.50*6.00/math.cos(math.radians(roof_dict["Roof_South"][1]))
roof_dict["Roof_North"][0] = scaler[5]*0.5*2.50*6.00/math.cos(math.radians(roof_dict["Roof_North"][1]))
roof_dict["Roof_East"][0] = scaler[5]*(0.5*2.5*(7.20)/math.cos(math.radians(roof_dict["Roof_East"][1])))
roof_dict["Roof_West"][0] = scaler[5]*(0.5*2.5*(7.20)/math.cos(math.radians(roof_dict["Roof_East"][1])))
# For ground floors the orientation is always -2
ground_floor_dict = {"GroundFloor": [scaler[4]*(14.69+13.52+4.73+9.60), 0.0, -2]}
win_dict = {"Window_south1": [scaler[6]*0.74*0.89, 90.0, 180.0],
"Window_south2": [scaler[6]*0.89*1.51, 90.0, 180.0],
"Window_south3": [scaler[6]*1.02*1.51, 90.0, 180.0],
"Window_north1": [scaler[6]*1.58*0.83, 90.0, 0],
"Window_north2": [scaler[6]*0.96*0.76, 90.0, 0],
"Door_back": [1.8*2.10, 90.0, 0],
"Window_east": [scaler[6]*0.74*0.89, 90.0, 90]
}
door_dict = {"Door_front": [0.8*2.05, 90.0, 180.0],
"Door_back1": [0.8*2.05, 90.0, 0]
}
# Start with the roof
for key, value in roof_dict.items():
roof = Rooftop(parent=tz)
roof.name = key
roof.tilt = value[1]
roof.area = value[0]
roof.orientation = value[2]
roof.inner_convection = 4.3
roof.outer_convection = 18.1
roof.inner_radiation = 5.7
roof.outer_radiation = 5.7
# Plasterboard
layer_s1 = Layer(parent=roof, id=0)
layer_s1.thickness = 0.010
material_s1 = Material(layer_s1)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s2 = Layer(parent=roof, id=1)
layer_s2.thickness = scaler[7]*0.10
material_s1 = Material(layer_s2)
material_s1.name = "Glass_fibre"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
#Loft space
layer_s3 = Layer(parent=roof, id=2)
layer_s3.thickness = 0.5*2.15 # Average of the smallest height (conservative)
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Roof tiles
layer_s4 = Layer(parent=roof, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Roof_tile"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# External Walls
for key, value in out_wall_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
# area, tilt and orientation need to be set individually.
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
# External walls
# Plaster
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick, inner
layer_s2 = Layer(parent=out_wall, id=1)
layer_s2.thickness = 0.105
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s3 = Layer(parent=out_wall, id=2)
layer_s3.thickness = 0.065
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s4 = Layer(parent=out_wall, id=3)
layer_s4.thickness = scaler[8]*0.065
material_s1 = Material(layer_s4)
material_s1.name = "Insulation"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick, outer
layer_s5 = Layer(parent=out_wall, id=4)
layer_s5.thickness = 0.105
material_s1 = Material(layer_s5)
material_s1.name = "Brick_out"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner walls
for key, value in in_wall_dict.items():
in_wall = InnerWall(parent=tz)
in_wall.name = key
in_wall.area = value[0]
in_wall.tilt = value[1]
in_wall.orientation = value[2]
in_wall.inner_convection = 3.0
in_wall.outer_convection = 3.0
in_wall.inner_radiation = 5.7
in_wall.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick
layer_s2 = Layer(parent=in_wall, id=1)
layer_s2.thickness = 0.105 # Average of the smallest height (conservative)
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plaster
layer_s3 = Layer(parent=in_wall, id=2)
layer_s3.thickness = 0.016 # Average of the smallest height (conservative)
material_s1 = Material(layer_s3)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner floors
for key, value in in_floor_dict.items():
in_floor = Floor(parent=tz)
in_floor.name = key
in_floor.area = value[0]
in_floor.inner_convection = 3.0
in_floor.outer_convection = 3.0
in_floor.inner_radiation = 5.7
in_floor.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_floor, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# timber
layer_s2 = Layer(parent=in_floor, id=1)
layer_s2.thickness = 0.020
material_s1 = Material(layer_s2)
material_s1.name = "Timber"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s3 = Layer(parent=in_floor, id=2)
layer_s3.thickness = 0.200
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plasterboard
layer_s4 = Layer(parent=in_floor, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
for key, value in ground_floor_dict.items():
ground = GroundFloor(parent=tz)
ground.name = key
ground.area = value[0]
ground.tilt = value[1]
ground.orientation = value[2]
ground.inner_convection = 3.0
ground.outer_convection = 100000000000000
ground.inner_radiation = 5.7
ground.outer_radiation = 100000000000000
# Carpet
layer_s1 = Layer(parent=ground, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete slab
layer_s2 = Layer(parent=ground, id=1)
layer_s2.thickness = 0.1
material_s1 = Material(layer_s2)
material_s1.name = "Concrete"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
#Earth
layer_s3 = Layer(parent=ground, id=2)
layer_s3.thickness = 0.160
material_s1 = Material(layer_s3)
material_s1.name = "Earth"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Doors
for key, value in door_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.030
material_s1 = Material(layer_s1)
material_s1.name = "Softwood"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Windows
for key, value in win_dict.items():
win = Window(parent=tz)
win.name = key
win.area = value[0]
win.tilt = value[1]
win.orientation = value[2]
# Additional to the already known attributes the window has
# additional attributes. Window.g_value describes the solar gain
# through windows, a_conv the convective heat transmission due to
# absorption of the window on the inner side. shading_g_total and
# shading_max_irr refers to the shading (solar gain reduction of the
# shading and shading_max_irr the threshold of irradiance to
# automatically apply shading).
win.inner_convection = 3.0
win.inner_radiation = 5.7
win.outer_convection = 14
win.outer_radiation = 5.7
win.g_value = 0.84
win.a_conv = 0.03
win.shading_g_total = 0.0
win.shading_max_irr = 180.0
# Double-glazed windows:
win_layer1 = Layer(parent=win)
win_layer1.id = 0
win_layer1.thickness = 0.006
# Material for Glas
win_material = Material(win_layer1)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Gap of 12 mm
win_layer2 = Layer(parent=win)
win_layer2.id = 1
win_layer2.thickness = 0.012
win_material = Material(win_layer2)
win_material.name = "Cavity"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
#Glass
win_layer3 = Layer(parent=win)
win_layer3.id = 2
win_layer3.thickness = 0.006
# Material for Glas
win_material = Material(win_layer3)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
'''
%%%%%%%%%%%%%% Post-1919 Terrace %%%%%%%%%%%%%%%%%%%%%
'''
if type == "terrace":
print("Creating a post 1919 terraced house")
bldg.year_of_construction = 1950
bldg.number_of_floors = 2
bldg.height_of_floors = 2.3
# Instantiate a ThermalZone class and set the Building as a parent of it.
# Set some parameters of the thermal zone. Be careful: Dymola does not
# like whitespaces in names and filenames, thus we will delete them
# anyway in TEASER.
tz = ThermalZone(parent=bldg)
tz.name = "House"
tz.area = scaler[1]*(12.42+9.76+6.83+8.69)
tz.volume = tz.area * bldg.number_of_floors * bldg.height_of_floors
tz.infiltration_rate = 0.7
# Instantiate BoundaryConditions and load conditions for `Living`.
tz.use_conditions = BoundaryConditions(parent=tz)
tz.use_conditions.load_use_conditions("Living", prj.data)
w_n = scaler[2]*(5.8*4.8-(0.76*0.76+1.56*0.96+1.02*0.87+1.80*2.1))
w_e = scaler[2]*6.8*4.8
w_s = scaler[2]*(5.8*4.8-(1.02*0.87*2+1.56*0.96+0.8*2.05))
w_w = scaler[2]*6.8*4.8
out_wall_dict = {"OuterWall_north": [w_n, 90.0, 0.0],
"OuterWall_south": [w_s, 90.0, 180.0],
}
# Lump all inner walls into one
in_wall_dict = {"InnerWall_south": [scaler[3]*((4.3+3.83+2.93+4.43-(4.43+3.83-4.43-2.63-0.016*2-0.105))*2.5+(2.03+1.63+2.63+2.28+3.83-2.63+3.23+2.93+3.23-1.9)*2.35), 90.0, 0.0],
"PartyWall_east": [w_e, 90.0, 90.0],
"PartyWall_west": [w_w, 90.0, 270.0]
}
# Only areas given
in_floor_dict = {"InnerFloor1": [scaler[3]*(12.42+9.76+6.83+8.69)],
}
roof_dict = {"Roof_South": [0, 55, 180],
"Roof_North": [0, 55, 0],
"Roof_West": [0, 55, 270],
"Roof_East": [0, 55, 90]
}
# Calculate the areas, assumed tilt 55 degrees
roof_dict["Roof_South"][0] = scaler[2]*2.7*5.8
roof_dict["Roof_North"][0] = scaler[2]*2.7*5.8
roof_dict["Roof_East"][0] = scaler[2]*0.5*2.7*6.8*math.cos(math.radians(roof_dict["Roof_East"][1]))
roof_dict["Roof_West"][0] = scaler[2]*0.5*2.7*6.8*math.cos(math.radians(roof_dict["Roof_East"][1]))
# For ground floors the orientation is always -2
ground_floor_dict = {"GroundFloor": [scaler[1]*(12.42+9.76+6.83+8.69), 0.0, -2]}
win_dict = {"Window_south1": [scaler[3]*1.02*0.87, 90.0, 180.0],
"Window_south2": [scaler[3]*1.02*0.87, 90.0, 180.0],
"Window_south3": [scaler[3]*1.56*0.96, 90.0, 180.0],
"Window_north1": [scaler[3]*0.76*0.76, 90.0, 0],
"Window_north2": [scaler[3]*1.56*0.96, 90.0, 0],
"Window_north3": [scaler[3]*1.02*0.87, 90.0, 0],
"Door_back": [1.8*2.10, 90.0, 0]
}
door_dict = {"Door_front": [0.8*2.05, 90.0, 180.0],
}
# Start with the roof
for key, value in roof_dict.items():
roof = Rooftop(parent=tz)
roof.name = key
roof.tilt = value[1]
roof.area = value[0]
roof.orientation = value[2]
roof.inner_convection = 4.3
roof.outer_convection = 18.1
roof.inner_radiation = 5.7
roof.outer_radiation = 5.7
# Plasterboard
layer_s1 = Layer(parent=roof, id=0)
layer_s1.thickness = 0.010
material_s1 = Material(layer_s1)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s2 = Layer(parent=roof, id=1)
layer_s2.thickness = 0.10
material_s1 = Material(layer_s2)
material_s1.name = "Glass_fibre"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Loft space
layer_s3 = Layer(parent=roof, id=2)
layer_s3.thickness = 0.5*2.15 # Average of the smallest height (conservative)
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Roof tiles
layer_s4 = Layer(parent=roof, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Roof_tile"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# External Walls
for key, value in out_wall_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
# area, tilt and orientation need to be set individually.
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
# External walls
# Plaster
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick, inner
layer_s2 = Layer(parent=out_wall, id=1)
layer_s2.thickness = 0.105
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s5 = Layer(parent=out_wall, id=2)
layer_s5.thickness = 0.065
material_s1 = Material(layer_s5)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s3 = Layer(parent=out_wall, id=3)
layer_s3.thickness = scaler[4]*0.1
material_s1 = Material(layer_s3)
material_s1.name = "Insulation"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
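# The scaler-weighted thickness above (scaler[4]*0.1) appears to
# parameterize the retrofit insulation level; this is an interpretation,
# since the scaler layout is defined outside this excerpt.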
# Brick, outer
layer_s4 = Layer(parent=out_wall, id=4)
layer_s4.thickness = 0.105
material_s1 = Material(layer_s4)
material_s1.name = "Brick_out"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner walls
for key, value in in_wall_dict.items():
in_wall = InnerWall(parent=tz)
in_wall.name = key
in_wall.area = value[0]
in_wall.tilt = value[1]
in_wall.orientation = value[2]
in_wall.inner_convection = 3.0
in_wall.outer_convection = 3.0
in_wall.inner_radiation = 5.7
in_wall.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick
layer_s2 = Layer(parent=in_wall, id=1)
layer_s2.thickness = 0.105
material_s1 = Material(layer_s2)
material_s1.name = "Brick_in"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plaster
layer_s3 = Layer(parent=in_wall, id=2)
layer_s3.thickness = 0.016
material_s1 = Material(layer_s3)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner floors
for key, value in in_floor_dict.items():
in_floor = Floor(parent=tz)
in_floor.name = key
in_floor.area = value[0]
in_floor.inner_convection = 3.0
in_floor.outer_convection = 3.0
in_floor.inner_radiation = 5.7
in_floor.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_floor, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Timber
layer_s2 = Layer(parent=in_floor, id=1)
layer_s2.thickness = 0.020
material_s1 = Material(layer_s2)
material_s1.name = "Timber"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s3 = Layer(parent=in_floor, id=2)
layer_s3.thickness = 0.200
material_s1 = Material(layer_s3)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plasterboard
layer_s4 = Layer(parent=in_floor, id=3)
layer_s4.thickness = 0.010
material_s1 = Material(layer_s4)
material_s1.name = "Plasterboard"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
for key, value in ground_floor_dict.items():
ground = GroundFloor(parent=tz)
ground.name = key
ground.area = value[0]
ground.tilt = value[1]
ground.orientation = value[2]
ground.inner_convection = 3.0
ground.outer_convection = 1e14
ground.inner_radiation = 5.7
ground.outer_radiation = 1e14
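# The very large outer coefficients effectively clamp the outer surface
# of the ground floor to the adjacent ground temperature, i.e. a
# near-Dirichlet boundary condition (an interpretation, not stated in
# the source).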
# Carpet
layer_s1 = Layer(parent=ground, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Carpet"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete
layer_s2 = Layer(parent=ground, id=1)
layer_s2.thickness = 0.1
material_s1 = Material(layer_s2)
material_s1.name = "Concrete"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Earth
layer_s3 = Layer(parent=ground, id=2)
layer_s3.thickness = 0.160
material_s1 = Material(layer_s3)
material_s1.name = "Earth"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Doors
for key, value in door_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.030
material_s1 = Material(layer_s1)
material_s1.name = "Softwood"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Windows
for key, value in win_dict.items():
win = Window(parent=tz)
win.name = key
win.area = value[0]
win.tilt = value[1]
win.orientation = value[2]
# In addition to the attributes already known, the window has further
# attributes. Window.g_value describes the solar gain through the window,
# a_conv the convective heat transmission due to absorption at the inner
# side of the window. shading_g_total and shading_max_irr refer to the
# shading (the solar gain reduction of the shading and the irradiance
# threshold above which shading is applied automatically).
win.inner_convection = 3
win.inner_radiation = 14
win.outer_convection = 5.7
win.outer_radiation = 5.7
win.g_value = 0.84
win.a_conv = 0.03
win.shading_g_total = 0.0
win.shading_max_irr = 180.0
# Double-glazed windows:
win_layer1 = Layer(parent=win)
win_layer1.id = 0
win_layer1.thickness = 0.006
# Material for glass
win_material = Material(win_layer1)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Gap of 12 mm
win_layer2 = Layer(parent=win)
win_layer2.id = 1
win_layer2.thickness = 0.012
win_material = Material(win_layer2)
win_material.name = "Cavity"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Glass
win_layer3 = Layer(parent=win)
win_layer3.id = 2
win_layer3.thickness = 0.006
# Material for glass
win_material = Material(win_layer3)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
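'''
%%%%%%%%%%%%%% High-cost mid-1980s Office %%%%%%%%%%%%%%%%%%%%%
'''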
if type == "office_highcost-mid1980s":
print("Creating a high-end mid 1980 office floor")
bldg.year_of_construction = 1985
bldg.number_of_floors = 1
bldg.height_of_floors = 3.2
# Instantiate a ThermalZone class and set the Building as its parent.
# Set some parameters of the thermal zone. Be careful: Dymola does not
# accept whitespace in names and filenames, which is why TEASER strips
# it anyway.
tz = ThermalZone(parent=bldg)
tz.name = "Office"
tz.area = scaler[1]*288
tz.volume = tz.area * bldg.number_of_floors * bldg.height_of_floors
tz.infiltration_rate = 0.7
# Instantiate BoundaryConditions and load conditions for `Office`.
tz.use_conditions = BoundaryConditions(parent=tz)
tz.use_conditions.load_use_conditions("Office", prj.data)
w_e = scaler[2]*(8*3.2-4*1.65-0.6*1.65)
w_w = scaler[2]*(8*3.2-4*1.65-0.6*1.65)
w_s = scaler[2]*(36*3.2-6*4*1.65-5*0.6*1.65)
out_wall_dict = {"OuterWall_east": [w_e, 90.0, 90],
"OuterWall_south": [w_s, 90.0, 180],
"OuterWall_west": [w_w, 90.0, 270],
}
# Lump all inner walls into one
in_wall_dict = {"InnerWall_south": [scaler[3]*115.2, 90, 0],
"PartyWall_north": [scaler[2]*36*3.2, 90.0, 0]
}
# Only areas given
in_floor_dict = {"InnerFloor1": [scaler[1]*288],
"InnerCeiling": [scaler[1]*288]
}
#roof_dict = {"Roof_South": [36*3.2, 55, 180],
# }
# For ground floors the orientation is always -2
#ground_floor_dict = {"GroundFloor": [288, 0.0, -2]}
win_dict = {"Window_south": [scaler[4]*6*4*1.65+5*0.6*1.65, 90.0, 180.0],
"Window_east": [scaler[4]*4*1.65+0.6*1.65, 90.0, 180.0],
"Window_west": [scaler[4]*4*1.65+0.6*1.65, 90.0, 180.0],
}
# External Walls
for key, value in out_wall_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
# area, tilt and orientation need to be set individually.
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
# External walls
# Dry lining
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.01
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s2 = Layer(parent=out_wall, id=1)
layer_s2.thickness = scaler[5]*0.070
material_s1 = Material(layer_s2)
material_s1.name = "Insulation"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete Block
layer_s3 = Layer(parent=out_wall, id=2)
layer_s3.thickness = 0.140
material_s1 = Material(layer_s3)
material_s1.name = "ConcreteBlock"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Portland Stone
layer_s4 = Layer(parent=out_wall, id=3)
layer_s4.thickness = 0.050
material_s1 = Material(layer_s4)
material_s1.name = "PortlandStone"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner walls
for key, value in in_wall_dict.items():
in_wall = InnerWall(parent=tz)
in_wall.name = key
in_wall.area = value[0]
in_wall.tilt = value[1]
in_wall.orientation = value[2]
in_wall.inner_convection = 3.0
in_wall.outer_convection = 3.0
in_wall.inner_radiation = 5.7
in_wall.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick
layer_s2 = Layer(parent=in_wall, id=1)
layer_s2.thickness = 0.100
material_s1 = Material(layer_s2)
material_s1.name = "ConcreteWallPanel"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plaster
layer_s3 = Layer(parent=in_wall, id=2)
layer_s3.thickness = 0.016
material_s1 = Material(layer_s3)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner floors
for key, value in in_floor_dict.items():
in_floor = Floor(parent=tz)
in_floor.name = key
in_floor.area = value[0]
in_floor.inner_convection = 3.0
in_floor.outer_convection = 3.0
in_floor.inner_radiation = 5.7
in_floor.outer_radiation = 5.7
# Screed
layer_s1 = Layer(parent=in_floor, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Screed"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete
layer_s2 = Layer(parent=in_floor, id=1)
layer_s2.thickness = 0.370
material_s1 = Material(layer_s2)
material_s1.name = "ConcreteFloorPanel"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Windows
for key, value in win_dict.items():
win = Window(parent=tz)
win.name = key
win.area = value[0]
win.tilt = value[1]
win.orientation = value[2]
# In addition to the attributes already known, the window has further
# attributes. Window.g_value describes the solar gain through the window,
# a_conv the convective heat transmission due to absorption at the inner
# side of the window. shading_g_total and shading_max_irr refer to the
# shading (the solar gain reduction of the shading and the irradiance
# threshold above which shading is applied automatically).
win.inner_convection = 3
win.inner_radiation = 14
win.outer_convection = 5.7
win.outer_radiation = 5.7
win.g_value = 0.84
win.a_conv = 0.03
win.shading_g_total = 0.0
win.shading_max_irr = 180.0
# Double-glazed windows:
win_layer1 = Layer(parent=win)
win_layer1.id = 0
win_layer1.thickness = 0.006
# Material for glass
win_material = Material(win_layer1)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Gap of 12 mm
win_layer2 = Layer(parent=win)
win_layer2.id = 1
win_layer2.thickness = 0.012
win_material = Material(win_layer2)
win_material.name = "Cavity"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Glass
win_layer3 = Layer(parent=win)
win_layer3.id = 2
win_layer3.thickness = 0.006
# Material for glass
win_material = Material(win_layer3)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
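'''
%%%%%%%%%%%%%% Low-energy early-1980s Office %%%%%%%%%%%%%%%%%%%%%
'''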
if type == "office_lowenergy-early1980s":
print("Creating low-energy 1980s office")
bldg.year_of_construction = 1980
bldg.number_of_floors = 1
bldg.height_of_floors = 2.5
# Instantiate a ThermalZone class and set the Building as its parent.
# Set some parameters of the thermal zone. Be careful: Dymola does not
# accept whitespace in names and filenames, which is why TEASER strips
# it anyway.
tz = ThermalZone(parent=bldg)
tz.name = "Office"
tz.area = scaler[1]*4.7*3.65
tz.volume = tz.area * bldg.number_of_floors * bldg.height_of_floors
tz.infiltration_rate = 1
# Instantiate BoundaryConditions and load conditions for `Office`.
tz.use_conditions = BoundaryConditions(parent=tz)
tz.use_conditions.load_use_conditions("Office", prj.data)
w_s = scaler[2]*(3.65*2.5-2.95*1.3)
out_wall_dict = {"OuterWall_south": [w_s, 90.0, 180],
}
# Lump all inner walls into one
in_wall_dict = {"InnerWall_south": [scaler[3]*2*4.7*2,5,90,0],
"PartyWall_north": [scaler[2]*3.65*2.5, 90.0, 0]
}
# Only areas given
in_floor_dict = {"InnerFloor1": [scaler[4]*3.65*4.7],
"InnerCeiling": [scaler[4]*3.65*4.7]
}
#ground_floor_dict = {"GroundFloor": [288, 0.0, -2]}
win_dict = {"Window_south": [scaler[5]*2.95*1.3, 90.0, 180.0],
}
# External Walls
for key, value in out_wall_dict.items():
# Instantiate class, key is the name
out_wall = OuterWall(parent=tz)
out_wall.name = key
out_wall.inner_convection = 3.0
out_wall.outer_convection = 14
out_wall.inner_radiation = 5.7
out_wall.outer_radiation = 5.7
# area, tilt and orientation need to be set individually.
out_wall.area = value[0]
out_wall.tilt = value[1]
out_wall.orientation = value[2]
# External walls
# Dry lining
layer_s1 = Layer(parent=out_wall, id=0)
layer_s1.thickness = 0.01
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Timber
layer_s2 = Layer(parent=out_wall, id=1)
layer_s2.thickness = 0.100
material_s1 = Material(layer_s2)
material_s1.name = "Timber"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Insulation
layer_s3 = Layer(parent=out_wall, id=2)
layer_s3.thickness = scaler[6]*0.150
material_s1 = Material(layer_s3)
material_s1.name = "Insulation"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Cavity
layer_s4 = Layer(parent=out_wall, id=3)
layer_s4.thickness = 0.150
material_s1 = Material(layer_s4)
material_s1.name = "Cavity"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete
layer_s5 = Layer(parent=out_wall, id=4)
layer_s5.thickness = 0.10
material_s1 = Material(layer_s5)
material_s1.name = "ConcreteBlock"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Ceramic tiles
layer_s6 = Layer(parent=out_wall, id=5)
layer_s6.thickness = 0.0010
material_s1 = Material(layer_s6)
material_s1.name = "CeramicTiles"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner walls
for key, value in in_wall_dict.items():
in_wall = InnerWall(parent=tz)
in_wall.name = key
in_wall.area = value[0]
in_wall.tilt = value[1]
in_wall.orientation = value[2]
in_wall.inner_convection = 3.0
in_wall.outer_convection = 3.0
in_wall.inner_radiation = 5.7
in_wall.outer_radiation = 5.7
# Plaster
layer_s1 = Layer(parent=in_wall, id=0)
layer_s1.thickness = 0.016
material_s1 = Material(layer_s1)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Brick
layer_s2 = Layer(parent=in_wall, id=1)
layer_s2.thickness = 0.100
material_s1 = Material(layer_s2)
material_s1.name = "ConcreteWallPanel"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Plaster
layer_s3 = Layer(parent=in_wall, id=2)
layer_s3.thickness = 0.016
material_s1 = Material(layer_s3)
material_s1.name = "Plaster"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Inner floors
for key, value in in_floor_dict.items():
in_floor = Floor(parent=tz)
in_floor.name = key
in_floor.area = value[0]
in_floor.inner_convection = 3.0
in_floor.outer_convection = 3.0
in_floor.inner_radiation = 5.7
in_floor.outer_radiation = 5.7
# Screed
layer_s1 = Layer(parent=in_floor, id=0)
layer_s1.thickness = 0.005
material_s1 = Material(layer_s1)
material_s1.name = "Screed"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Concrete
layer_s2 = Layer(parent=in_floor, id=1)
layer_s2.thickness = 0.205
material_s1 = Material(layer_s2)
material_s1.name = "ConcreteFloorPanel"
material_s1.density = Mat_dict[material_s1.name][0]
material_s1.heat_capac = Mat_dict[material_s1.name][1]
material_s1.thermal_conduc = Mat_dict[material_s1.name][2]
material_s1.ir_emissivity = Mat_dict[material_s1.name][3]
material_s1.solar_absorp = Mat_dict[material_s1.name][4]
# Windows
for key, value in win_dict.items():
win = Window(parent=tz)
win.name = key
win.area = value[0]
win.tilt = value[1]
win.orientation = value[2]
# In addition to the attributes already known, the window has further
# attributes. Window.g_value describes the solar gain through the window,
# a_conv the convective heat transmission due to absorption at the inner
# side of the window. shading_g_total and shading_max_irr refer to the
# shading (the solar gain reduction of the shading and the irradiance
# threshold above which shading is applied automatically).
win.inner_convection = 3
win.inner_radiation = 14
win.outer_convection = 5.7
win.outer_radiation = 5.7
win.g_value = 0.84
win.a_conv = 0.03
win.shading_g_total = 0.0
win.shading_max_irr = 180.0
# Double-glazed windows:
win_layer1 = Layer(parent=win)
win_layer1.id = 0
win_layer1.thickness = 0.006
# Material for glass
win_material = Material(win_layer1)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Gap of 12 mm
win_layer2 = Layer(parent=win)
win_layer2.id = 1
win_layer2.thickness = 0.012
win_material = Material(win_layer2)
win_material.name = "Cavity"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
# Glass
win_layer3 = Layer(parent=win)
win_layer3.id = 2
win_layer3.thickness = 0.006
# Material for glass
win_material = Material(win_layer3)
win_material.name = "GlasWindow"
win_material.density = Mat_dict[win_material.name][0]
win_material.heat_capac = Mat_dict[win_material.name][1]
win_material.thermal_conduc = Mat_dict[win_material.name][2]
win_material.ir_emissivity = Mat_dict[win_material.name][3]
win_material.solar_absorp = Mat_dict[win_material.name][4]
win_material.transmittance = 0.8
| 35.404002 | 233 | 0.712123 | 11,978 | 74,313 | 4.161546 | 0.034814 | 0.207635 | 0.155877 | 0.161996 | 0.959877 | 0.950609 | 0.945573 | 0.94104 | 0.932393 | 0.926335 | 0 | 0.074784 | 0.163646 | 74,313 | 2,098 | 234 | 35.420877 | 0.727233 | 0.120127 | 0 | 0.871036 | 0 | 0 | 0.039346 | 0.000784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000705 | false | 0 | 0.008457 | 0 | 0.009161 | 0.003524 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b9287693a699859f99aec4f94e3ba35de4e6b20b | 14,383 | py | Python | tests/output_test.py | michaelgruenstaeudl/EMBL2checklists | 5215ff97871293635a8de06c2c106bae23457324 | [
"BSD-3-Clause"
] | null | null | null | tests/output_test.py | michaelgruenstaeudl/EMBL2checklists | 5215ff97871293635a8de06c2c106bae23457324 | [
"BSD-3-Clause"
] | null | null | null | tests/output_test.py | michaelgruenstaeudl/EMBL2checklists | 5215ff97871293635a8de06c2c106bae23457324 | [
"BSD-3-Clause"
] | 1 | 2018-06-02T17:59:39.000Z | 2018-06-02T17:59:39.000Z | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
'''
Unit tests to compare actual and expected output
'''
#####################
# IMPORT OPERATIONS #
#####################
import sys, os
import unittest
import subprocess
import inspect
# Package identifier; needed before the import fallback below.
__info__ = 'EMBL2checklists'
# Import package modules of EMBL2checklists irrespective of install status
try:
import EMBL2checklists
except ImportError:
try:
package_topLevel = __file__.split(__info__)[0] + __info__
if os.path.isdir(package_topLevel):
sys.path.append(package_topLevel)
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
except ValueError:
try:
package_topLevel = os.path.dirname(os.path.split(inspect.getfile(EMBL2checklists))[0]) # Homes in on `__init__.py`
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
except (ValueError, NameError):
package_topLevel = os.path.dirname(os.path.dirname(__file__))
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
###############
# AUTHOR INFO #
###############
__author__ = 'Michael Gruenstaeudl <m.gruenstaeudl@fu-berlin.de>'
__copyright__ = 'Copyright (C) 2016-2018 Michael Gruenstaeudl'
__info__ = 'EMBL2checklists'
__version__ = '2018.09.25.1600'
#############
# DEBUGGING #
#############
import pdb
# pdb.set_trace()
####################
# GLOBAL VARIABLES #
####################
try:
package_topLevel = __file__.split(__info__)[0] + __info__
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
except ValueError:
try:
package_topLevel = os.path.dirname(os.path.split(inspect.getfile(EMBL2checklists))[0]) # Homes in on `__init__.py`
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
except (ValueError, NameError):
package_topLevel = os.path.dirname(os.path.dirname(__file__))
if not os.path.isdir(package_topLevel):
raise ValueError('Top level of package not set.')
script_rel_path = 'scripts/EMBL2checklists_launcher_CLI.py'
script_abs_path = os.path.join(package_topLevel, script_rel_path)
###########
# CLASSES #
###########
class OutputTestCases(unittest.TestCase):
''' Tests comparing actual and expected output '''
def test_actual_vs_expected_output_ETS(self):
''' Assert that the actual and the expected output for checklist
`ETS` are identical. If they are not, show their difference. '''
actual_inp = 'example_ETS.embl'
expect_otp = 'example_ETS.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c ETS', '-e no'
]
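# With the default example paths, the assembled command is equivalent to
# the following sketch (absolute paths elided):
#   python scripts/EMBL2checklists_launcher_CLI.py \
#       -i example/input/example_ETS.embl -o example/temp/<test name> \
#       -c ETS -e no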
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
def test_actual_vs_expected_output_gene_intron(self):
''' Assert that the actual and the expected output for checklist
`gene_intron` are identical. If they are not, show their difference. '''
actual_inp = 'example_gene_intron.embl'
expect_otp = 'example_gene_intron.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c gene_intron', '-e no'
]
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
def test_actual_vs_expected_output_IGS(self):
''' Assert that the actual and the expected output for checklist
`IGS` are identical. If they are not, show their difference. '''
actual_inp = 'example_IGS.embl'
expect_otp = 'example_IGS.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c IGS', '-e no'
]
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
def test_actual_vs_expected_output_ITS(self):
''' Assert that the actual and the expected output for checklist
`ITS` are identical. If they are not, show their difference. '''
actual_inp = 'example_ITS.embl'
expect_otp = 'example_ITS.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c ITS', '-e no'
]
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
def test_actual_vs_expected_output_rRNA(self):
''' Assert that the actual and the expected output for checklist
`rRNA` are identical. If they are not, show their difference. '''
actual_inp = 'example_rRNA.embl'
expect_otp = 'example_rRNA.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c rRNA', '-e no'
]
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
def test_actual_vs_expected_output_trnK_matK(self):
''' Assert that the actual and the expected output for checklist
`trnK_matK` are identical. If they are not, show their difference. '''
actual_inp = 'example_trnK_matK.embl'
expect_otp = 'example_trnK_matK.tsv'
actual_otp = sys._getframe().f_code.co_name # Name of this function
actual_inp_rel_path = os.path.join('example/input/', actual_inp)
actual_inp_abs_path = os.path.join(package_topLevel, actual_inp_rel_path)
actual_otp_rel_path = os.path.join('example/temp/', actual_otp)
actual_otp_abs_path = os.path.join(package_topLevel, actual_otp_rel_path)
expect_otp_rel_path = os.path.join('example/output/', expect_otp)
expect_otp_abs_path = os.path.join(package_topLevel, expect_otp_rel_path)
cmd_list = [sys.executable, script_abs_path,
'-i', actual_inp_abs_path,
'-o', actual_otp_abs_path,
'-c trnK_matK', '-e no'
]
try:
subprocess.check_output(' '.join(cmd_list), shell=True)
except subprocess.CalledProcessError as e:
print e.output
expected_str = open(expect_otp_abs_path).read()
if os.path.isfile(actual_otp_abs_path): # Check if actual output exists
actual_str = open(actual_otp_abs_path).read()
# Important: Remove actual output so that lines from
# subsequent tests are not appended, rendering actual and
# expected different.
os.remove(actual_otp_abs_path)
else:
print 'EMBL2checklists TESTING ERROR: actual_str not found.'
actual_str = None # let the assertion below fail with a clear message
self.assertTrue(isinstance(expected_str, str),
'Not a string: ' + expect_otp_abs_path)
self.assertTrue(isinstance(actual_str, str),
'Not a string: ' + actual_otp_abs_path)
self.assertMultiLineEqual(expected_str, actual_str)
#############
# FUNCTIONS #
#############
########
# MAIN #
########
if __name__ == '__main__':
unittest.main()
| 46.099359 | 127 | 0.643816 | 1,821 | 14,383 | 4.76112 | 0.089511 | 0.058939 | 0.062284 | 0.059746 | 0.894579 | 0.89158 | 0.888235 | 0.884544 | 0.884544 | 0.876009 | 0 | 0.00354 | 0.253702 | 14,383 | 311 | 128 | 46.247588 | 0.804174 | 0.094278 | 0 | 0.786667 | 0 | 0 | 0.118721 | 0.01336 | 0 | 0 | 0 | 0 | 0.08 | 0 | null | null | 0 | 0.031111 | null | null | 0.053333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b957c5a5467ee782eadf891c6d1dcb03ca5a7e6c | 100,987 | py | Python | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | null | null | null | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | null | null | null | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | 1 | 2021-01-28T01:57:41.000Z | 2021-01-28T01:57:41.000Z | """Python wrappers around TensorFlow ops.
This file is MACHINE GENERATED! Do not edit.
Original C++ source file: gen_xla_ops.cc
"""
import collections
from tensorflow.python import pywrap_tfe as pywrap_tfe
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.eager import execute as _execute
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
from tensorflow.python.util.deprecation import deprecated_endpoints
from tensorflow.python.util import dispatch as _dispatch
from tensorflow.python.util.tf_export import tf_export
_XlaBroadcastHelperOutput = collections.namedtuple(
"XlaBroadcastHelper",
["lhs_output", "rhs_output"])
@_dispatch.add_dispatch_list
@tf_export('xla_broadcast_helper')
def xla_broadcast_helper(lhs, rhs, broadcast_dims, name=None):
r"""Helper operator for performing XLA-style broadcasts
Broadcasts `lhs` and `rhs` to the same rank, by adding size 1 dimensions to
whichever of `lhs` and `rhs` has the lower rank, using XLA's broadcasting rules
for binary operators.
Args:
lhs: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the LHS input tensor
rhs: A `Tensor`. Must have the same type as `lhs`. the RHS input tensor
broadcast_dims: A `Tensor`. Must be one of the following types: `int32`, `int64`.
an XLA-style broadcast dimension specification
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (lhs_output, rhs_output).
lhs_output: A `Tensor`. Has the same type as `lhs`. the broadcasted LHS tensor
rhs_output: A `Tensor`. Has the same type as `lhs`. the broadcasted RHS tensor
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaBroadcastHelper", name,
tld.op_callbacks, lhs, rhs, broadcast_dims)
_result = _XlaBroadcastHelperOutput._make(_result)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_broadcast_helper_eager_fallback(
lhs, rhs, broadcast_dims, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_broadcast_helper, (), dict(lhs=lhs, rhs=rhs,
broadcast_dims=broadcast_dims,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaBroadcastHelper", lhs=lhs, rhs=rhs, broadcast_dims=broadcast_dims,
name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_broadcast_helper, (), dict(lhs=lhs, rhs=rhs,
broadcast_dims=broadcast_dims,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaBroadcastHelper", _inputs_flat, _attrs, _result)
_result = _XlaBroadcastHelperOutput._make(_result)
return _result
XlaBroadcastHelper = tf_export("raw_ops.XlaBroadcastHelper")(_ops.to_raw_op(xla_broadcast_helper))
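# Illustrative usage sketch (an assumption added for clarity, not part of
# the generated wrapper): broadcasting a rank-1 lhs against a rank-2 rhs
# along output dimension 0, so that lhs of shape [3] behaves like [3, 1]:
#
#   lhs_out, rhs_out = xla_broadcast_helper(
#       lhs=tf.constant([1.0, 2.0, 3.0]),
#       rhs=tf.ones([3, 4]),
#       broadcast_dims=tf.constant([0], dtype=tf.int32))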
def xla_broadcast_helper_eager_fallback(lhs, rhs, broadcast_dims, name, ctx):
_attr_T, _inputs_T = _execute.args_to_matching_eager([lhs, rhs], ctx)
(lhs, rhs) = _inputs_T
_attr_Tindices, (broadcast_dims,) = _execute.args_to_matching_eager([broadcast_dims], ctx)
_inputs_flat = [lhs, rhs, broadcast_dims]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaBroadcastHelper", 2, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaBroadcastHelper", _inputs_flat, _attrs, _result)
_result = _XlaBroadcastHelperOutput._make(_result)
return _result
@_dispatch.add_dispatch_list
@tf_export('xla_conv')
def xla_conv(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, feature_group_count, dimension_numbers, precision_config, name=None):
r"""Wraps the XLA ConvGeneralDilated operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution
.
Args:
lhs: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor
rhs: A `Tensor`. Must have the same type as `lhs`. the kernel tensor
window_strides: A `Tensor`. Must be one of the following types: `int32`, `int64`.
the inter-window strides
padding: A `Tensor`. Must have the same type as `window_strides`.
the padding to apply at the start and end of each input dimensions
lhs_dilation: A `Tensor`. Must have the same type as `window_strides`.
dilation to apply between input elements
rhs_dilation: A `Tensor`. Must have the same type as `window_strides`.
dilation to apply between kernel elements
feature_group_count: A `Tensor`. Must have the same type as `window_strides`.
number of feature groups for grouped convolution.
dimension_numbers: A `string`.
a serialized xla::ConvolutionDimensionNumbers proto.
precision_config: A `string`. a serialized xla::PrecisionConfig proto.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `lhs`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaConv", name,
tld.op_callbacks, lhs, rhs, window_strides, padding, lhs_dilation,
rhs_dilation, feature_group_count, "dimension_numbers",
dimension_numbers, "precision_config", precision_config)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_conv_eager_fallback(
lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation,
feature_group_count, dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_conv, (), dict(lhs=lhs, rhs=rhs,
window_strides=window_strides, padding=padding,
lhs_dilation=lhs_dilation,
rhs_dilation=rhs_dilation,
feature_group_count=feature_group_count,
dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
precision_config = _execute.make_str(precision_config, "precision_config")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaConv", lhs=lhs, rhs=rhs, window_strides=window_strides,
padding=padding, lhs_dilation=lhs_dilation,
rhs_dilation=rhs_dilation,
feature_group_count=feature_group_count,
dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_conv, (), dict(lhs=lhs, rhs=rhs, window_strides=window_strides,
padding=padding, lhs_dilation=lhs_dilation,
rhs_dilation=rhs_dilation,
feature_group_count=feature_group_count,
dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"), "dimension_numbers",
_op.get_attr("dimension_numbers"), "precision_config",
_op.get_attr("precision_config"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaConv", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaConv = tf_export("raw_ops.XlaConv")(_ops.to_raw_op(xla_conv))
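# Sketch of how the serialized proto arguments might be produced (an
# assumption for illustration; the proto module location can vary between
# TensorFlow versions):
#
#   from tensorflow.compiler.xla import xla_data_pb2
#   dnums = xla_data_pb2.ConvolutionDimensionNumbers()
#   # ... populate input/kernel/output batch, feature and spatial dims ...
#   out = xla_conv(lhs, rhs, window_strides, padding, lhs_dilation,
#                  rhs_dilation, feature_group_count,
#                  dimension_numbers=dnums.SerializeToString(),
#                  precision_config="")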
def xla_conv_eager_fallback(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, feature_group_count, dimension_numbers, precision_config, name, ctx):
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
precision_config = _execute.make_str(precision_config, "precision_config")
_attr_T, _inputs_T = _execute.args_to_matching_eager([lhs, rhs], ctx)
(lhs, rhs) = _inputs_T
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([window_strides, padding, lhs_dilation, rhs_dilation, feature_group_count], ctx)
(window_strides, padding, lhs_dilation, rhs_dilation, feature_group_count) = _inputs_Tindices
_inputs_flat = [lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, feature_group_count]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices, "dimension_numbers",
dimension_numbers, "precision_config", precision_config)
_result = _execute.execute(b"XlaConv", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaConv", _inputs_flat, _attrs, _result)
_result, = _result
return _result
@_dispatch.add_dispatch_list
@tf_export('xla_dequantize')
def xla_dequantize(input, min_range, max_range, mode, transpose_output, name=None):
r"""Takes the packed uint32 input and unpacks the input to uint8 to do
Dequantization on device.
Args:
input: A `Tensor` of type `uint32`.
Input tensor whose type is uint32 and whose shape is [d0, ..., dn].
min_range: A `float`.
The minimum scalar value possibly produced for the input.
max_range: A `float`.
The maximum scalar value possibly produced for the input.
mode: A `string`.
String to determine the dequantize mode in {"MIN_COMBINED", "MIN_FIRST", "SCALED"}.
transpose_output: A `bool`.
Boolean determining whether the output is transposed. Transposing the
output is faster when the input is large and its rank is higher than 1.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bfloat16`.
Output tensor whose type is bfloat16. If transpose_output is true,
the output shape is [dn * 4, dn-1, ..., d1, d0]. If transpose_output
is false, the output shape is [d0, ..., dn * 4].
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaDequantize", name,
tld.op_callbacks, input, "min_range", min_range, "max_range",
max_range, "mode", mode, "transpose_output", transpose_output)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_dequantize_eager_fallback(
input, min_range=min_range, max_range=max_range, mode=mode,
transpose_output=transpose_output, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dequantize, (), dict(input=input, min_range=min_range,
max_range=max_range, mode=mode,
transpose_output=transpose_output,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
min_range = _execute.make_float(min_range, "min_range")
max_range = _execute.make_float(max_range, "max_range")
mode = _execute.make_str(mode, "mode")
transpose_output = _execute.make_bool(transpose_output, "transpose_output")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaDequantize", input=input, min_range=min_range,
max_range=max_range, mode=mode,
transpose_output=transpose_output, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dequantize, (), dict(input=input, min_range=min_range,
max_range=max_range, mode=mode,
transpose_output=transpose_output,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("min_range", _op.get_attr("min_range"), "max_range",
_op.get_attr("max_range"), "mode", _op.get_attr("mode"),
"transpose_output", _op._get_attr_bool("transpose_output"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaDequantize", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaDequantize = tf_export("raw_ops.XlaDequantize")(_ops.to_raw_op(xla_dequantize))
def xla_dequantize_eager_fallback(input, min_range, max_range, mode, transpose_output, name, ctx):
min_range = _execute.make_float(min_range, "min_range")
max_range = _execute.make_float(max_range, "max_range")
mode = _execute.make_str(mode, "mode")
transpose_output = _execute.make_bool(transpose_output, "transpose_output")
input = _ops.convert_to_tensor(input, _dtypes.uint32)
_inputs_flat = [input]
_attrs = ("min_range", min_range, "max_range", max_range, "mode", mode,
"transpose_output", transpose_output)
_result = _execute.execute(b"XlaDequantize", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaDequantize", _inputs_flat, _attrs, _result)
_result, = _result
return _result
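
# Illustrative usage of xla_dequantize (added documentation; not emitted by
# the op generator). A minimal sketch: the op is typically only meaningful
# inside an XLA/TPU computation, and the helper name
# `_example_xla_dequantize` is hypothetical and never called here.
def _example_xla_dequantize():
  import tensorflow as tf
  # Each uint32 element packs four uint8 values to be dequantized to bfloat16.
  packed = tf.constant([0x01020304, 0x05060708], dtype=tf.uint32)
  return xla_dequantize(packed, min_range=0.0, max_range=1.0,
                        mode="MIN_COMBINED", transpose_output=False)
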
@_dispatch.add_dispatch_list
@tf_export('xla_dot')
def xla_dot(lhs, rhs, dimension_numbers, precision_config, name=None):
r"""Wraps the XLA DotGeneral operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#dotgeneral
.
Args:
lhs: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the LHS tensor
rhs: A `Tensor`. Must have the same type as `lhs`. the RHS tensor
dimension_numbers: A `string`.
a serialized xla::DotDimensionNumbers proto.
precision_config: A `string`. a serialized xla::PrecisionConfig proto.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `lhs`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaDot", name,
tld.op_callbacks, lhs, rhs, "dimension_numbers", dimension_numbers,
"precision_config", precision_config)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_dot_eager_fallback(
lhs, rhs, dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dot, (), dict(lhs=lhs, rhs=rhs,
dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
precision_config = _execute.make_str(precision_config, "precision_config")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaDot", lhs=lhs, rhs=rhs, dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dot, (), dict(lhs=lhs, rhs=rhs,
dimension_numbers=dimension_numbers,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "dimension_numbers",
_op.get_attr("dimension_numbers"), "precision_config",
_op.get_attr("precision_config"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaDot", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaDot = tf_export("raw_ops.XlaDot")(_ops.to_raw_op(xla_dot))
def xla_dot_eager_fallback(lhs, rhs, dimension_numbers, precision_config, name, ctx):
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
precision_config = _execute.make_str(precision_config, "precision_config")
_attr_T, _inputs_T = _execute.args_to_matching_eager([lhs, rhs], ctx)
(lhs, rhs) = _inputs_T
_inputs_flat = [lhs, rhs]
_attrs = ("T", _attr_T, "dimension_numbers", dimension_numbers,
"precision_config", precision_config)
_result = _execute.execute(b"XlaDot", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaDot", _inputs_flat, _attrs, _result)
_result, = _result
return _result
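
# Illustrative usage of xla_dot (added documentation; not emitted by the op
# generator). A minimal sketch of a plain matrix product expressed through
# xla::DotDimensionNumbers, assuming the XLA protos are importable; the
# helper name `_example_xla_dot` is hypothetical and never called here.
def _example_xla_dot():
  import tensorflow as tf
  from tensorflow.compiler.xla import xla_data_pb2  # assumption: proto is importable
  dnums = xla_data_pb2.DotDimensionNumbers()
  dnums.lhs_contracting_dimensions.append(1)  # contract lhs columns ...
  dnums.rhs_contracting_dimensions.append(0)  # ... against rhs rows
  lhs = tf.constant([[1., 2.], [3., 4.]])
  rhs = tf.constant([[5., 6.], [7., 8.]])
  # An empty precision_config string means "use the default precision".
  return xla_dot(lhs, rhs,
                 dimension_numbers=dnums.SerializeToString(),
                 precision_config="")
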
@_dispatch.add_dispatch_list
@tf_export('xla_dynamic_slice')
def xla_dynamic_slice(input, start_indices, size_indices, name=None):
r"""Wraps the XLA DynamicSlice operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#dynamicslice
.
DynamicSlice extracts a sub-array from the input array at dynamic
  start_indices. The size of the slice in each dimension is passed in
  size_indices, which specifies the end points of the exclusive slice
  intervals in each dimension -- [start, start + size). start_indices must
  have rank 1, with dimension size equal to the rank of the operand.
Args:
input: A `Tensor`. A `Tensor` of type T.
    start_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
      Vector of N integers containing the starting indices of the slice in
      each dimension.
    size_indices: A `Tensor`. Must have the same type as `start_indices`.
      Vector of N integers containing the slice size for each dimension. Each
      value must be strictly greater than zero, and start + size must be less
      than or equal to the size of the dimension to avoid
      implementation-defined behavior.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaDynamicSlice", name,
tld.op_callbacks, input, start_indices, size_indices)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_dynamic_slice_eager_fallback(
input, start_indices, size_indices, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dynamic_slice, (), dict(input=input,
start_indices=start_indices,
size_indices=size_indices, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaDynamicSlice", input=input, start_indices=start_indices,
size_indices=size_indices, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dynamic_slice, (), dict(input=input,
start_indices=start_indices,
size_indices=size_indices, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaDynamicSlice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaDynamicSlice = tf_export("raw_ops.XlaDynamicSlice")(_ops.to_raw_op(xla_dynamic_slice))
def xla_dynamic_slice_eager_fallback(input, start_indices, size_indices, name, ctx):
_attr_T, (input,) = _execute.args_to_matching_eager([input], ctx)
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([start_indices, size_indices], ctx)
(start_indices, size_indices) = _inputs_Tindices
_inputs_flat = [input, start_indices, size_indices]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaDynamicSlice", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaDynamicSlice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
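
# Illustrative usage of xla_dynamic_slice (added documentation; not emitted
# by the op generator); the helper name `_example_xla_dynamic_slice` is
# hypothetical and never called here.
def _example_xla_dynamic_slice():
  import tensorflow as tf
  x = tf.reshape(tf.range(9), [3, 3])
  start = tf.constant([1, 1])  # start at row 1, column 1
  sizes = tf.constant([2, 2])  # take a 2x2 block
  return xla_dynamic_slice(x, start, sizes)  # [[4, 5], [7, 8]]
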
@_dispatch.add_dispatch_list
@tf_export('xla_dynamic_update_slice')
def xla_dynamic_update_slice(input, update, indices, name=None):
r"""Wraps the XLA DynamicUpdateSlice operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#dynamicupdateslice
.
XlaDynamicUpdateSlice generates a result which is the value of the `input`
operand, with a slice update overwritten at `indices`. The shape of `update`
  determines the shape of the sub-array of the result that is updated. indices
  must have rank 1, with dimension size equal to the rank of `input`.
Handling of out-of-bounds slice indices is implementation-defined.
Args:
input: A `Tensor`. A `Tensor` of type T.
update: A `Tensor`. Must have the same type as `input`.
A `Tensor` of type T. Same rank as `input`.
indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
A vector of indices into `input`. Must have length equal to the rank of
`input`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. A `Tensor` of type T.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaDynamicUpdateSlice", name,
tld.op_callbacks, input, update, indices)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_dynamic_update_slice_eager_fallback(
input, update, indices, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dynamic_update_slice, (), dict(input=input, update=update,
indices=indices, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaDynamicUpdateSlice", input=input, update=update, indices=indices,
name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_dynamic_update_slice, (), dict(input=input, update=update,
indices=indices, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaDynamicUpdateSlice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaDynamicUpdateSlice = tf_export("raw_ops.XlaDynamicUpdateSlice")(_ops.to_raw_op(xla_dynamic_update_slice))
def xla_dynamic_update_slice_eager_fallback(input, update, indices, name, ctx):
_attr_T, _inputs_T = _execute.args_to_matching_eager([input, update], ctx)
(input, update) = _inputs_T
_attr_Tindices, (indices,) = _execute.args_to_matching_eager([indices], ctx)
_inputs_flat = [input, update, indices]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaDynamicUpdateSlice", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaDynamicUpdateSlice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
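
# Illustrative usage of xla_dynamic_update_slice (added documentation; not
# emitted by the op generator); the helper name
# `_example_xla_dynamic_update_slice` is hypothetical and never called here.
def _example_xla_dynamic_update_slice():
  import tensorflow as tf
  x = tf.zeros([4, 4], dtype=tf.int32)
  update = tf.ones([2, 2], dtype=tf.int32)
  indices = tf.constant([1, 1])  # overwrite the 2x2 block whose corner is (1, 1)
  return xla_dynamic_update_slice(x, update, indices)
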
@_dispatch.add_dispatch_list
@tf_export('xla_einsum')
def xla_einsum(a, b, equation, name=None):
r"""An op which supports basic einsum op with 2 inputs and 1 output.
This op has better TPU performance since it doesn't have explicitly reshape and
transpose operations as tf.einsum does.
Args:
a: A `Tensor`. Must be one of the following types: `complex64`, `bfloat16`, `float32`.
b: A `Tensor`. Must have the same type as `a`.
equation: A `string`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `a`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaEinsum", name,
tld.op_callbacks, a, b, "equation", equation)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_einsum_eager_fallback(
a, b, equation=equation, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_einsum, (), dict(a=a, b=b, equation=equation, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
equation = _execute.make_str(equation, "equation")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaEinsum", a=a, b=b, equation=equation, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_einsum, (), dict(a=a, b=b, equation=equation, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("equation", _op.get_attr("equation"), "T",
_op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaEinsum", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaEinsum = tf_export("raw_ops.XlaEinsum")(_ops.to_raw_op(xla_einsum))
def xla_einsum_eager_fallback(a, b, equation, name, ctx):
equation = _execute.make_str(equation, "equation")
_attr_T, _inputs_T = _execute.args_to_matching_eager([a, b], ctx)
(a, b) = _inputs_T
_inputs_flat = [a, b]
_attrs = ("equation", equation, "T", _attr_T)
_result = _execute.execute(b"XlaEinsum", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaEinsum", _inputs_flat, _attrs, _result)
_result, = _result
return _result
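
# Illustrative usage of xla_einsum (added documentation; not emitted by the
# op generator); the helper name `_example_xla_einsum` is hypothetical and
# never called here.
def _example_xla_einsum():
  import tensorflow as tf
  a = tf.random.normal([2, 3])
  b = tf.random.normal([3, 4])
  return xla_einsum(a, b, equation="ab,bc->ac")  # an ordinary matmul
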
@_dispatch.add_dispatch_list
@tf_export('xla_gather')
def xla_gather(operand, start_indices, slice_sizes, dimension_numbers, indices_are_sorted, name=None):
r"""Wraps the XLA Gather operator documented at
https://www.tensorflow.org/xla/operation_semantics#gather
Args:
operand: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
The array we're gathering from.
start_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
Array containing the starting indices of the slices we gather.
slice_sizes: A `Tensor`. Must have the same type as `start_indices`.
      slice_sizes[i] is the bound for the slice on dimension i.
dimension_numbers: A `string`.
A serialized xla::GatherDimensionNumbers proto.
indices_are_sorted: A `bool`.
Boolean indicating if the indices are sorted.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `operand`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaGather", name,
tld.op_callbacks, operand, start_indices, slice_sizes,
"dimension_numbers", dimension_numbers, "indices_are_sorted",
indices_are_sorted)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_gather_eager_fallback(
operand, start_indices, slice_sizes,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_gather, (), dict(operand=operand, start_indices=start_indices,
slice_sizes=slice_sizes,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
indices_are_sorted = _execute.make_bool(indices_are_sorted, "indices_are_sorted")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaGather", operand=operand, start_indices=start_indices,
slice_sizes=slice_sizes,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_gather, (), dict(operand=operand, start_indices=start_indices,
slice_sizes=slice_sizes,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("dimension_numbers", _op.get_attr("dimension_numbers"),
"indices_are_sorted", _op._get_attr_bool("indices_are_sorted"),
"T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaGather", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaGather = tf_export("raw_ops.XlaGather")(_ops.to_raw_op(xla_gather))
def xla_gather_eager_fallback(operand, start_indices, slice_sizes, dimension_numbers, indices_are_sorted, name, ctx):
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
indices_are_sorted = _execute.make_bool(indices_are_sorted, "indices_are_sorted")
_attr_T, (operand,) = _execute.args_to_matching_eager([operand], ctx)
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([start_indices, slice_sizes], ctx)
(start_indices, slice_sizes) = _inputs_Tindices
_inputs_flat = [operand, start_indices, slice_sizes]
_attrs = ("dimension_numbers", dimension_numbers, "indices_are_sorted",
indices_are_sorted, "T", _attr_T, "Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaGather", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaGather", _inputs_flat, _attrs, _result)
_result, = _result
return _result
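
# Illustrative usage of xla_gather (added documentation; not emitted by the
# op generator). A minimal sketch that gathers whole rows, assuming the XLA
# protos are importable; the helper name `_example_xla_gather` is
# hypothetical and never called here.
def _example_xla_gather():
  import tensorflow as tf
  from tensorflow.compiler.xla import xla_data_pb2  # assumption: proto is importable
  dnums = xla_data_pb2.GatherDimensionNumbers()
  dnums.offset_dims.append(1)           # output dim 1 holds the row contents
  dnums.collapsed_slice_dims.append(0)  # the length-1 row dimension is dropped
  dnums.start_index_map.append(0)       # indices address operand dimension 0
  dnums.index_vector_dim = 1
  operand = tf.reshape(tf.range(12), [3, 4])
  start_indices = tf.constant([[0], [2]])  # gather rows 0 and 2
  slice_sizes = tf.constant([1, 4])
  return xla_gather(operand, start_indices, slice_sizes,
                    dimension_numbers=dnums.SerializeToString(),
                    indices_are_sorted=True)
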
@_dispatch.add_dispatch_list
@tf_export('xla_if')
def xla_if(cond, inputs, then_branch, else_branch, Tout, name=None):
r"""output = cond ? then_branch(inputs) : else_branch(inputs).
Args:
cond: A `Tensor`. A boolean scalar.
inputs: A list of `Tensor` objects. A list of input tensors.
then_branch: A function decorated with @Defun.
      A function that takes 'inputs' and returns a list of tensors
      whose types are the same as what else_branch returns.
    else_branch: A function decorated with @Defun.
      A function that takes 'inputs' and returns a list of tensors
      whose types are the same as what then_branch returns.
Tout: A list of `tf.DTypes`.
name: A name for the operation (optional).
Returns:
A list of `Tensor` objects of type `Tout`.
A list of tensors returned by either then_branch(inputs) or
else_branch(inputs). The input shapes of the then_branch and
else_branch must match.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaIf", name,
tld.op_callbacks, cond, inputs, "then_branch", then_branch,
"else_branch", else_branch, "Tout", Tout)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_if_eager_fallback(
cond, inputs, then_branch=then_branch, else_branch=else_branch,
Tout=Tout, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_if, (), dict(cond=cond, inputs=inputs,
then_branch=then_branch, else_branch=else_branch,
Tout=Tout, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
if not isinstance(Tout, (list, tuple)):
raise TypeError(
"Expected list for 'Tout' argument to "
"'xla_if' Op, not %r." % Tout)
Tout = [_execute.make_type(_t, "Tout") for _t in Tout]
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaIf", cond=cond, inputs=inputs, then_branch=then_branch,
else_branch=else_branch, Tout=Tout, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_if, (), dict(cond=cond, inputs=inputs, then_branch=then_branch,
else_branch=else_branch, Tout=Tout, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if not _result:
return _op
if _execute.must_record_gradient():
_attrs = ("Tcond", _op._get_attr_type("Tcond"), "then_branch",
_op.get_attr("then_branch"), "else_branch",
_op.get_attr("else_branch"), "Tin", _op.get_attr("Tin"), "Tout",
_op.get_attr("Tout"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaIf", _inputs_flat, _attrs, _result)
return _result
XlaIf = tf_export("raw_ops.XlaIf")(_ops.to_raw_op(xla_if))
def xla_if_eager_fallback(cond, inputs, then_branch, else_branch, Tout, name, ctx):
if not isinstance(Tout, (list, tuple)):
raise TypeError(
"Expected list for 'Tout' argument to "
"'xla_if' Op, not %r." % Tout)
Tout = [_execute.make_type(_t, "Tout") for _t in Tout]
_attr_Tcond, (cond,) = _execute.args_to_matching_eager([cond], ctx)
_attr_Tin, inputs = _execute.convert_to_mixed_eager_tensors(inputs, ctx)
_inputs_flat = [cond] + list(inputs)
_attrs = ("Tcond", _attr_Tcond, "then_branch", then_branch, "else_branch",
else_branch, "Tin", _attr_Tin, "Tout", Tout)
_result = _execute.execute(b"XlaIf", len(Tout), inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaIf", _inputs_flat, _attrs, _result)
return _result
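
# Illustrative usage of xla_if (added documentation; not emitted by the op
# generator). A minimal sketch using the legacy @Defun decorator that the
# docstring refers to; the helper name `_example_xla_if` is hypothetical and
# never called here.
def _example_xla_if():
  import tensorflow as tf
  from tensorflow.python.framework import function

  @function.Defun(tf.float32)
  def then_fn(x):
    return x * 2.0

  @function.Defun(tf.float32)
  def else_fn(x):
    return x / 2.0

  return xla_if(tf.constant(True), [tf.constant(3.0)],
                then_branch=then_fn, else_branch=else_fn, Tout=[tf.float32])
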
_XlaKeyValueSortOutput = collections.namedtuple(
"XlaKeyValueSort",
["sorted_keys", "sorted_values"])
@_dispatch.add_dispatch_list
@tf_export('xla_key_value_sort')
def xla_key_value_sort(keys, values, name=None):
r"""Wraps the XLA Sort operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#sort
.
  Sorts a tensor. Currently only ascending-order sorts are supported.
Args:
keys: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
A `Tensor` of type K.
values: A `Tensor`. A `Tensor` of type V.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (sorted_keys, sorted_values).
sorted_keys: A `Tensor`. Has the same type as `keys`. A `Tensor` of type K.
sorted_values: A `Tensor`. Has the same type as `values`. A `Tensor` of type V.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaKeyValueSort", name,
tld.op_callbacks, keys, values)
_result = _XlaKeyValueSortOutput._make(_result)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_key_value_sort_eager_fallback(
keys, values, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_key_value_sort, (), dict(keys=keys, values=values, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaKeyValueSort", keys=keys, values=values, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_key_value_sort, (), dict(keys=keys, values=values, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("K", _op._get_attr_type("K"), "V", _op._get_attr_type("V"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaKeyValueSort", _inputs_flat, _attrs, _result)
_result = _XlaKeyValueSortOutput._make(_result)
return _result
XlaKeyValueSort = tf_export("raw_ops.XlaKeyValueSort")(_ops.to_raw_op(xla_key_value_sort))
def xla_key_value_sort_eager_fallback(keys, values, name, ctx):
_attr_K, (keys,) = _execute.args_to_matching_eager([keys], ctx)
_attr_V, (values,) = _execute.args_to_matching_eager([values], ctx)
_inputs_flat = [keys, values]
_attrs = ("K", _attr_K, "V", _attr_V)
_result = _execute.execute(b"XlaKeyValueSort", 2, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaKeyValueSort", _inputs_flat, _attrs, _result)
_result = _XlaKeyValueSortOutput._make(_result)
return _result
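
# Illustrative usage of xla_key_value_sort (added documentation; not emitted
# by the op generator); the helper name `_example_xla_key_value_sort` is
# hypothetical and never called here.
def _example_xla_key_value_sort():
  import tensorflow as tf
  keys = tf.constant([3.0, 1.0, 2.0])
  values = tf.constant([30, 10, 20])
  # Returns ([1.0, 2.0, 3.0], [10, 20, 30]): values follow their keys.
  return xla_key_value_sort(keys, values)
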
@_dispatch.add_dispatch_list
@tf_export('xla_pad')
def xla_pad(input, padding_value, padding_low, padding_high, padding_interior, name=None):
r"""Wraps the XLA Pad operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#pad
.
Args:
input: A `Tensor`. A `Tensor` of type T.
padding_value: A `Tensor`. Must have the same type as `input`.
A scalar `Tensor` of type T.
padding_low: A `Tensor`. Must be one of the following types: `int32`, `int64`.
      the padding to apply at the start of each input dimension.
padding_high: A `Tensor`. Must have the same type as `padding_low`.
the padding to apply at the end of each input dimension.
padding_interior: A `Tensor`. Must have the same type as `padding_low`.
the padding to apply between each input element.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. A `Tensor` of type T.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaPad", name,
tld.op_callbacks, input, padding_value, padding_low, padding_high,
padding_interior)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_pad_eager_fallback(
input, padding_value, padding_low, padding_high, padding_interior,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_pad, (), dict(input=input, padding_value=padding_value,
padding_low=padding_low,
padding_high=padding_high,
padding_interior=padding_interior, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaPad", input=input, padding_value=padding_value,
padding_low=padding_low, padding_high=padding_high,
padding_interior=padding_interior, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_pad, (), dict(input=input, padding_value=padding_value,
padding_low=padding_low,
padding_high=padding_high,
padding_interior=padding_interior, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaPad", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaPad = tf_export("raw_ops.XlaPad")(_ops.to_raw_op(xla_pad))
def xla_pad_eager_fallback(input, padding_value, padding_low, padding_high, padding_interior, name, ctx):
_attr_T, _inputs_T = _execute.args_to_matching_eager([input, padding_value], ctx)
(input, padding_value) = _inputs_T
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([padding_low, padding_high, padding_interior], ctx)
(padding_low, padding_high, padding_interior) = _inputs_Tindices
_inputs_flat = [input, padding_value, padding_low, padding_high, padding_interior]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaPad", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaPad", _inputs_flat, _attrs, _result)
_result, = _result
return _result
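
# Illustrative usage of xla_pad (added documentation; not emitted by the op
# generator); the helper name `_example_xla_pad` is hypothetical and never
# called here.
def _example_xla_pad():
  import tensorflow as tf
  x = tf.constant([[1, 2], [3, 4]])
  return xla_pad(x,
                 padding_value=tf.constant(0),
                 padding_low=tf.constant([1, 0]),       # one row of zeros on top
                 padding_high=tf.constant([0, 1]),      # one column of zeros on the right
                 padding_interior=tf.constant([0, 0]))  # no padding between elements
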
@_dispatch.add_dispatch_list
@tf_export('xla_recv')
def xla_recv(dtype, tensor_name, shape, name=None):
r"""Receives the named tensor from another XLA computation. Wraps the XLA Recv
operator documented at
https://www.tensorflow.org/performance/xla/operation_semantics#recv .
Args:
dtype: A `tf.DType`. The type of the tensor.
tensor_name: A `string`. A string key that identifies the channel.
shape: A `tf.TensorShape` or list of `ints`. The shape of the tensor.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `dtype`. The tensor to receive.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaRecv", name,
tld.op_callbacks, "dtype", dtype, "tensor_name", tensor_name, "shape",
shape)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_recv_eager_fallback(
dtype=dtype, tensor_name=tensor_name, shape=shape, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_recv, (), dict(dtype=dtype, tensor_name=tensor_name,
shape=shape, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
dtype = _execute.make_type(dtype, "dtype")
tensor_name = _execute.make_str(tensor_name, "tensor_name")
shape = _execute.make_shape(shape, "shape")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaRecv", dtype=dtype, tensor_name=tensor_name, shape=shape,
name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_recv, (), dict(dtype=dtype, tensor_name=tensor_name,
shape=shape, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("dtype", _op._get_attr_type("dtype"), "tensor_name",
_op.get_attr("tensor_name"), "shape", _op.get_attr("shape"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaRecv", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaRecv = tf_export("raw_ops.XlaRecv")(_ops.to_raw_op(xla_recv))
def xla_recv_eager_fallback(dtype, tensor_name, shape, name, ctx):
dtype = _execute.make_type(dtype, "dtype")
tensor_name = _execute.make_str(tensor_name, "tensor_name")
shape = _execute.make_shape(shape, "shape")
_inputs_flat = []
_attrs = ("dtype", dtype, "tensor_name", tensor_name, "shape", shape)
_result = _execute.execute(b"XlaRecv", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaRecv", _inputs_flat, _attrs, _result)
_result, = _result
return _result
@_dispatch.add_dispatch_list
@tf_export('xla_reduce')
def xla_reduce(input, init_value, dimensions_to_reduce, reducer, name=None):
r"""Wraps the XLA Reduce operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#reduce .
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor
init_value: A `Tensor`. Must have the same type as `input`.
a scalar representing the initial value for the reduction
dimensions_to_reduce: A list of `ints`.
dimension numbers over which to reduce
reducer: A function decorated with @Defun. a reducer function to apply
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaReduce", name,
tld.op_callbacks, input, init_value, "dimensions_to_reduce",
dimensions_to_reduce, "reducer", reducer)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_reduce_eager_fallback(
input, init_value, dimensions_to_reduce=dimensions_to_reduce,
reducer=reducer, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_reduce, (), dict(input=input, init_value=init_value,
dimensions_to_reduce=dimensions_to_reduce,
reducer=reducer, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
if not isinstance(dimensions_to_reduce, (list, tuple)):
raise TypeError(
"Expected list for 'dimensions_to_reduce' argument to "
"'xla_reduce' Op, not %r." % dimensions_to_reduce)
dimensions_to_reduce = [_execute.make_int(_i, "dimensions_to_reduce") for _i in dimensions_to_reduce]
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaReduce", input=input, init_value=init_value,
dimensions_to_reduce=dimensions_to_reduce,
reducer=reducer, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_reduce, (), dict(input=input, init_value=init_value,
dimensions_to_reduce=dimensions_to_reduce,
reducer=reducer, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "dimensions_to_reduce",
_op.get_attr("dimensions_to_reduce"), "reducer",
_op.get_attr("reducer"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaReduce", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaReduce = tf_export("raw_ops.XlaReduce")(_ops.to_raw_op(xla_reduce))
def xla_reduce_eager_fallback(input, init_value, dimensions_to_reduce, reducer, name, ctx):
if not isinstance(dimensions_to_reduce, (list, tuple)):
raise TypeError(
"Expected list for 'dimensions_to_reduce' argument to "
"'xla_reduce' Op, not %r." % dimensions_to_reduce)
dimensions_to_reduce = [_execute.make_int(_i, "dimensions_to_reduce") for _i in dimensions_to_reduce]
_attr_T, _inputs_T = _execute.args_to_matching_eager([input, init_value], ctx)
(input, init_value) = _inputs_T
_inputs_flat = [input, init_value]
_attrs = ("T", _attr_T, "dimensions_to_reduce", dimensions_to_reduce,
"reducer", reducer)
_result = _execute.execute(b"XlaReduce", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaReduce", _inputs_flat, _attrs, _result)
_result, = _result
return _result
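
# Illustrative usage of xla_reduce (added documentation; not emitted by the
# op generator). A minimal sketch of a row-sum using the legacy @Defun
# decorator that the docstring refers to; the helper name
# `_example_xla_reduce` is hypothetical and never called here.
def _example_xla_reduce():
  import tensorflow as tf
  from tensorflow.python.framework import function

  @function.Defun(tf.float32, tf.float32)
  def add_reducer(x, y):
    return x + y

  x = tf.constant([[1., 2.], [3., 4.]])
  return xla_reduce(x, init_value=tf.constant(0.),
                    dimensions_to_reduce=[1], reducer=add_reducer)  # [3., 7.]
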
@_dispatch.add_dispatch_list
@tf_export('xla_reduce_window')
def xla_reduce_window(input, init_value, window_dimensions, window_strides, base_dilations, window_dilations, padding, computation, name=None):
r"""Wraps the XLA ReduceWindow operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#reducewindow .
Args:
input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor
init_value: A `Tensor`. Must have the same type as `input`.
a scalar representing the initial value for the reduction
window_dimensions: A `Tensor`. Must be one of the following types: `int32`, `int64`.
the shape of the window
window_strides: A `Tensor`. Must have the same type as `window_dimensions`.
the inter-window strides
base_dilations: A `Tensor`. Must have the same type as `window_dimensions`.
window_dilations: A `Tensor`. Must have the same type as `window_dimensions`.
padding: A `Tensor`. Must have the same type as `window_dimensions`.
      the padding to apply at the start and end of each input dimension.
computation: A function decorated with @Defun. a reducer function to apply
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaReduceWindow", name,
tld.op_callbacks, input, init_value, window_dimensions,
window_strides, base_dilations, window_dilations, padding,
"computation", computation)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_reduce_window_eager_fallback(
input, init_value, window_dimensions, window_strides,
base_dilations, window_dilations, padding, computation=computation,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_reduce_window, (), dict(input=input, init_value=init_value,
window_dimensions=window_dimensions,
window_strides=window_strides,
base_dilations=base_dilations,
window_dilations=window_dilations,
padding=padding,
computation=computation, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaReduceWindow", input=input, init_value=init_value,
window_dimensions=window_dimensions,
window_strides=window_strides,
base_dilations=base_dilations,
window_dilations=window_dilations, padding=padding,
computation=computation, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_reduce_window, (), dict(input=input, init_value=init_value,
window_dimensions=window_dimensions,
window_strides=window_strides,
base_dilations=base_dilations,
window_dilations=window_dilations,
padding=padding,
computation=computation, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"), "computation",
_op.get_attr("computation"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaReduceWindow", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaReduceWindow = tf_export("raw_ops.XlaReduceWindow")(_ops.to_raw_op(xla_reduce_window))
def xla_reduce_window_eager_fallback(input, init_value, window_dimensions, window_strides, base_dilations, window_dilations, padding, computation, name, ctx):
_attr_T, _inputs_T = _execute.args_to_matching_eager([input, init_value], ctx)
(input, init_value) = _inputs_T
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([window_dimensions, window_strides, base_dilations, window_dilations, padding], ctx)
(window_dimensions, window_strides, base_dilations, window_dilations, padding) = _inputs_Tindices
_inputs_flat = [input, init_value, window_dimensions, window_strides, base_dilations, window_dilations, padding]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices, "computation",
computation)
_result = _execute.execute(b"XlaReduceWindow", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaReduceWindow", _inputs_flat, _attrs, _result)
_result, = _result
return _result
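
# Illustrative usage of xla_reduce_window (added documentation; not emitted
# by the op generator). A minimal sketch of a width-3 sliding-window maximum
# over a rank-1 tensor, using the legacy @Defun decorator; the helper name
# `_example_xla_reduce_window` is hypothetical and never called here.
def _example_xla_reduce_window():
  import tensorflow as tf
  from tensorflow.python.framework import function

  @function.Defun(tf.float32, tf.float32)
  def max_fn(x, y):
    return tf.maximum(x, y)

  x = tf.constant([1., 5., 2., 4., 3., 0.])
  return xla_reduce_window(
      x,
      init_value=tf.constant(float("-inf")),
      window_dimensions=tf.constant([3]),
      window_strides=tf.constant([1]),
      base_dilations=tf.constant([1]),
      window_dilations=tf.constant([1]),
      padding=tf.constant([[0, 0]]),  # one [low, high] pair per input dimension
      computation=max_fn)  # [5., 5., 4., 4.]
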
@_dispatch.add_dispatch_list
@tf_export('xla_replica_id')
def xla_replica_id(name=None):
r"""Replica ID.
Args:
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int32`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaReplicaId", name,
tld.op_callbacks)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_replica_id_eager_fallback(
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_replica_id, (), dict(name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaReplicaId", name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_replica_id, (), dict(name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaReplicaId", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaReplicaId = tf_export("raw_ops.XlaReplicaId")(_ops.to_raw_op(xla_replica_id))
def xla_replica_id_eager_fallback(name, ctx):
_inputs_flat = []
_attrs = None
_result = _execute.execute(b"XlaReplicaId", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaReplicaId", _inputs_flat, _attrs, _result)
_result, = _result
return _result
@_dispatch.add_dispatch_list
@tf_export('xla_scatter')
def xla_scatter(operand, scatter_indices, updates, update_computation, dimension_numbers, indices_are_sorted, name=None):
r"""Wraps the XLA Scatter operator documented at
https://www.tensorflow.org/xla/operation_semantics#scatter.
Args:
operand: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
Array to be scattered into.
scatter_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.
Array containing the starting indices of the slices that must
be scattered to.
updates: A `Tensor`. Must have the same type as `operand`.
Array containing the values that must be used for scattering.
update_computation: A function decorated with @Defun.
Computation to be used for combining the existing values in
the input array and the updates during scatter.
dimension_numbers: A `string`.
A serialized xla::ScatterDimensionNumbers proto.
indices_are_sorted: A `bool`.
Boolean indicating if the indices are sorted.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `operand`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaScatter", name,
tld.op_callbacks, operand, scatter_indices, updates,
"update_computation", update_computation, "dimension_numbers",
dimension_numbers, "indices_are_sorted", indices_are_sorted)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_scatter_eager_fallback(
operand, scatter_indices, updates,
update_computation=update_computation,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_scatter, (), dict(operand=operand,
scatter_indices=scatter_indices,
updates=updates,
update_computation=update_computation,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
indices_are_sorted = _execute.make_bool(indices_are_sorted, "indices_are_sorted")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaScatter", operand=operand, scatter_indices=scatter_indices,
updates=updates, update_computation=update_computation,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_scatter, (), dict(operand=operand,
scatter_indices=scatter_indices,
updates=updates,
update_computation=update_computation,
dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("update_computation", _op.get_attr("update_computation"),
"dimension_numbers", _op.get_attr("dimension_numbers"),
"indices_are_sorted", _op._get_attr_bool("indices_are_sorted"),
"T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaScatter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaScatter = tf_export("raw_ops.XlaScatter")(_ops.to_raw_op(xla_scatter))
def xla_scatter_eager_fallback(operand, scatter_indices, updates, update_computation, dimension_numbers, indices_are_sorted, name, ctx):
dimension_numbers = _execute.make_str(dimension_numbers, "dimension_numbers")
indices_are_sorted = _execute.make_bool(indices_are_sorted, "indices_are_sorted")
_attr_T, _inputs_T = _execute.args_to_matching_eager([operand, updates], ctx)
(operand, updates) = _inputs_T
_attr_Tindices, (scatter_indices,) = _execute.args_to_matching_eager([scatter_indices], ctx)
_inputs_flat = [operand, scatter_indices, updates]
_attrs = ("update_computation", update_computation, "dimension_numbers",
dimension_numbers, "indices_are_sorted", indices_are_sorted, "T", _attr_T,
"Tindices", _attr_Tindices)
_result = _execute.execute(b"XlaScatter", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaScatter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
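
# Illustrative usage of xla_scatter (added documentation; not emitted by the
# op generator). A minimal sketch that adds whole rows into an operand,
# assuming the XLA protos are importable and using the legacy @Defun
# decorator; the helper name `_example_xla_scatter` is hypothetical and
# never called here.
def _example_xla_scatter():
  import tensorflow as tf
  from tensorflow.python.framework import function
  from tensorflow.compiler.xla import xla_data_pb2  # assumption: proto is importable

  @function.Defun(tf.float32, tf.float32)
  def add_combiner(x, y):
    return x + y

  dnums = xla_data_pb2.ScatterDimensionNumbers()
  dnums.update_window_dims.append(1)            # updates dim 1 spans a row
  dnums.inserted_window_dims.append(0)          # the row index dim is implicit
  dnums.scatter_dims_to_operand_dims.append(0)  # indices address operand dim 0
  dnums.index_vector_dim = 1
  operand = tf.zeros([3, 4])
  scatter_indices = tf.constant([[0], [2]])  # scatter into rows 0 and 2
  updates = tf.ones([2, 4])
  return xla_scatter(operand, scatter_indices, updates,
                     update_computation=add_combiner,
                     dimension_numbers=dnums.SerializeToString(),
                     indices_are_sorted=True)
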
@_dispatch.add_dispatch_list
@tf_export('xla_select_and_scatter')
def xla_select_and_scatter(operand, window_dimensions, window_strides, padding, source, init_value, select, scatter, name=None):
r"""Wraps the XLA SelectAndScatter operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#selectandscatter
.
Args:
operand: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor
window_dimensions: A `Tensor`. Must be one of the following types: `int32`, `int64`.
the shape of the window
window_strides: A `Tensor`. Must have the same type as `window_dimensions`.
the inter-window strides
padding: A `Tensor`. Must have the same type as `window_dimensions`.
      the padding to apply at the start and end of each input dimension.
source: A `Tensor`. Must have the same type as `operand`.
a tensor of values to scatter
init_value: A `Tensor`. Must have the same type as `operand`.
a scalar representing the initial value for the output tensor
select: A function decorated with @Defun. a selection function to apply
scatter: A function decorated with @Defun. a scatter function to apply
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `operand`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSelectAndScatter", name,
tld.op_callbacks, operand, window_dimensions, window_strides, padding,
source, init_value, "select", select, "scatter", scatter)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_select_and_scatter_eager_fallback(
operand, window_dimensions, window_strides, padding, source,
init_value, select=select, scatter=scatter, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_select_and_scatter, (), dict(operand=operand,
window_dimensions=window_dimensions,
window_strides=window_strides,
padding=padding, source=source,
init_value=init_value,
select=select, scatter=scatter,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSelectAndScatter", operand=operand,
window_dimensions=window_dimensions,
window_strides=window_strides, padding=padding,
source=source, init_value=init_value,
select=select, scatter=scatter, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_select_and_scatter, (), dict(operand=operand,
window_dimensions=window_dimensions,
window_strides=window_strides,
padding=padding, source=source,
init_value=init_value,
select=select, scatter=scatter,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "Tindices",
_op._get_attr_type("Tindices"), "select",
_op.get_attr("select"), "scatter", _op.get_attr("scatter"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSelectAndScatter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaSelectAndScatter = tf_export("raw_ops.XlaSelectAndScatter")(_ops.to_raw_op(xla_select_and_scatter))
def xla_select_and_scatter_eager_fallback(operand, window_dimensions, window_strides, padding, source, init_value, select, scatter, name, ctx):
_attr_T, _inputs_T = _execute.args_to_matching_eager([operand, source, init_value], ctx)
(operand, source, init_value) = _inputs_T
_attr_Tindices, _inputs_Tindices = _execute.args_to_matching_eager([window_dimensions, window_strides, padding], ctx)
(window_dimensions, window_strides, padding) = _inputs_Tindices
_inputs_flat = [operand, window_dimensions, window_strides, padding, source, init_value]
_attrs = ("T", _attr_T, "Tindices", _attr_Tindices, "select", select,
"scatter", scatter)
_result = _execute.execute(b"XlaSelectAndScatter", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSelectAndScatter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
_XlaSelfAdjointEigOutput = collections.namedtuple(
"XlaSelfAdjointEig",
["w", "v"])
@_dispatch.add_dispatch_list
@tf_export('xla_self_adjoint_eig')
def xla_self_adjoint_eig(a, lower, max_iter, epsilon, name=None):
r"""Computes the eigen decomposition of a batch of self-adjoint matrices
(Note: Only real inputs are supported).
Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in
tensor such that tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i], for
i=0...N-1.
Args:
a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor.
lower: A `bool`.
      a boolean specifying whether the calculation is done with the lower
      triangular part or the upper triangular part.
    max_iter: An `int`.
      maximum number of sweep updates, i.e., passes over the whole lower or
      upper triangular part as selected by `lower`. Heuristically, it has been
      argued that approximately log(N) sweeps are needed in practice (Ref:
      Golub & Van Loan, "Matrix Computations").
epsilon: A `float`. the tolerance ratio.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (w, v).
w: A `Tensor`. Has the same type as `a`. The eigenvalues in ascending order, each repeated according to its
multiplicity.
v: A `Tensor`. Has the same type as `a`. The column v[..., :, i] is the normalized eigenvector corresponding to the
eigenvalue w[..., i].
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSelfAdjointEig", name,
tld.op_callbacks, a, "lower", lower, "max_iter", max_iter, "epsilon",
epsilon)
_result = _XlaSelfAdjointEigOutput._make(_result)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_self_adjoint_eig_eager_fallback(
a, lower=lower, max_iter=max_iter, epsilon=epsilon, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_self_adjoint_eig, (), dict(a=a, lower=lower,
max_iter=max_iter, epsilon=epsilon,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
lower = _execute.make_bool(lower, "lower")
max_iter = _execute.make_int(max_iter, "max_iter")
epsilon = _execute.make_float(epsilon, "epsilon")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSelfAdjointEig", a=a, lower=lower, max_iter=max_iter,
epsilon=epsilon, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_self_adjoint_eig, (), dict(a=a, lower=lower, max_iter=max_iter,
epsilon=epsilon, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("lower", _op._get_attr_bool("lower"), "max_iter",
_op._get_attr_int("max_iter"), "epsilon",
_op.get_attr("epsilon"), "T", _op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSelfAdjointEig", _inputs_flat, _attrs, _result)
_result = _XlaSelfAdjointEigOutput._make(_result)
return _result
XlaSelfAdjointEig = tf_export("raw_ops.XlaSelfAdjointEig")(_ops.to_raw_op(xla_self_adjoint_eig))
def xla_self_adjoint_eig_eager_fallback(a, lower, max_iter, epsilon, name, ctx):
lower = _execute.make_bool(lower, "lower")
max_iter = _execute.make_int(max_iter, "max_iter")
epsilon = _execute.make_float(epsilon, "epsilon")
_attr_T, (a,) = _execute.args_to_matching_eager([a], ctx)
_inputs_flat = [a]
_attrs = ("lower", lower, "max_iter", max_iter, "epsilon", epsilon, "T",
_attr_T)
_result = _execute.execute(b"XlaSelfAdjointEig", 2, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSelfAdjointEig", _inputs_flat, _attrs, _result)
_result = _XlaSelfAdjointEigOutput._make(_result)
return _result
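
# Illustrative usage of xla_self_adjoint_eig (added documentation; not
# emitted by the op generator); the helper name
# `_example_xla_self_adjoint_eig` is hypothetical and never called here.
def _example_xla_self_adjoint_eig():
  import tensorflow as tf
  a = tf.constant([[2.0, 1.0], [1.0, 2.0]])  # symmetric; eigenvalues 1 and 3
  w, v = xla_self_adjoint_eig(a, lower=True, max_iter=16, epsilon=1e-6)
  return w, v
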
@_dispatch.add_dispatch_list
@tf_export('xla_send')
def xla_send(tensor, tensor_name, name=None):
r"""Sends the named tensor to another XLA computation. Wraps the XLA Send operator
documented at
https://www.tensorflow.org/performance/xla/operation_semantics#send .
Args:
tensor: A `Tensor`. The tensor to send.
tensor_name: A `string`. A string key that identifies the channel.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSend", name,
tld.op_callbacks, tensor, "tensor_name", tensor_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_send_eager_fallback(
tensor, tensor_name=tensor_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_send, (), dict(tensor=tensor, tensor_name=tensor_name,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
tensor_name = _execute.make_str(tensor_name, "tensor_name")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSend", tensor=tensor, tensor_name=tensor_name, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_send, (), dict(tensor=tensor, tensor_name=tensor_name,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
return _op
XlaSend = tf_export("raw_ops.XlaSend")(_ops.to_raw_op(xla_send))
def xla_send_eager_fallback(tensor, tensor_name, name, ctx):
tensor_name = _execute.make_str(tensor_name, "tensor_name")
_attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx)
_inputs_flat = [tensor]
_attrs = ("T", _attr_T, "tensor_name", tensor_name)
_result = _execute.execute(b"XlaSend", 0, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
_result = None
return _result
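# Usage sketch (an illustration, not from the generated file): `XlaSend` is the
# sending half of a channel; a receive op keyed by the same `tensor_name` must
# run in another XLA computation for the send to complete.
#
#   import tensorflow as tf
#   tf.raw_ops.XlaSend(tensor=tf.constant([1.0, 2.0]), tensor_name="chan0")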
@_dispatch.add_dispatch_list
@tf_export('xla_sharding')
def xla_sharding(input, name=None):
r"""An op which shards the input based on the given sharding attribute.
Args:
input: A `Tensor`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSharding", name,
tld.op_callbacks, input)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_sharding_eager_fallback(
input, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_sharding, (), dict(input=input, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSharding", input=input, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_sharding, (), dict(input=input, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSharding", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaSharding = tf_export("raw_ops.XlaSharding")(_ops.to_raw_op(xla_sharding))
def xla_sharding_eager_fallback(input, name, ctx):
_attr_T, (input,) = _execute.args_to_matching_eager([input], ctx)
_inputs_flat = [input]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"XlaSharding", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSharding", _inputs_flat, _attrs, _result)
_result, = _result
return _result
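# Usage sketch (illustrative): `XlaSharding` returns a tensor of the same type
# as its input and serves to attach a sharding annotation for the XLA compiler.
#
#   import tensorflow as tf
#   y = tf.raw_ops.XlaSharding(input=tf.constant([1.0, 2.0]))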
@_dispatch.add_dispatch_list
@tf_export('xla_sort')
def xla_sort(input, name=None):
r"""Wraps the XLA Sort operator, documented at
https://www.tensorflow.org/performance/xla/operation_semantics#sort
.
Sorts a tensor. Currently only sorting in ascending order is supported.
Args:
input: A `Tensor`. A `Tensor` of type T.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`. A `Tensor` of type T.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSort", name,
tld.op_callbacks, input)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_sort_eager_fallback(
input, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_sort, (), dict(input=input, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSort", input=input, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_sort, (), dict(input=input, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSort", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaSort = tf_export("raw_ops.XlaSort")(_ops.to_raw_op(xla_sort))
def xla_sort_eager_fallback(input, name, ctx):
_attr_T, (input,) = _execute.args_to_matching_eager([input], ctx)
_inputs_flat = [input]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"XlaSort", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSort", _inputs_flat, _attrs, _result)
_result, = _result
return _result
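# Usage sketch (illustrative): per the docstring, `XlaSort` sorts in ascending
# order.
#
#   import tensorflow as tf
#   s = tf.raw_ops.XlaSort(input=tf.constant([3, 1, 2]))  # -> [1, 2, 3]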
@_dispatch.add_dispatch_list
@tf_export('xla_spmd_full_to_shard_shape')
def xla_spmd_full_to_shard_shape(input, manual_sharding, name=None):
r"""An op used by XLA SPMD partitioner to switch from automatic partitioning to
manual partitioning. It annotates the input (full-shape, to be automatically
partitioned) with the same sharding used by manual partitioning, and outputs a
shard-shaped tensor to be consumed by later manually-partitioned ops. If the
shape is not evenly partitionable, the padding region will be masked with 0s.
Args:
input: A `Tensor`.
manual_sharding: A `string`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSpmdFullToShardShape",
name, tld.op_callbacks, input, "manual_sharding", manual_sharding)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_spmd_full_to_shard_shape_eager_fallback(
input, manual_sharding=manual_sharding, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_spmd_full_to_shard_shape, (), dict(input=input,
manual_sharding=manual_sharding,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
manual_sharding = _execute.make_str(manual_sharding, "manual_sharding")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSpmdFullToShardShape", input=input,
manual_sharding=manual_sharding, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_spmd_full_to_shard_shape, (), dict(input=input,
manual_sharding=manual_sharding,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "manual_sharding",
_op.get_attr("manual_sharding"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSpmdFullToShardShape", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaSpmdFullToShardShape = tf_export("raw_ops.XlaSpmdFullToShardShape")(_ops.to_raw_op(xla_spmd_full_to_shard_shape))
def xla_spmd_full_to_shard_shape_eager_fallback(input, manual_sharding, name, ctx):
manual_sharding = _execute.make_str(manual_sharding, "manual_sharding")
_attr_T, (input,) = _execute.args_to_matching_eager([input], ctx)
_inputs_flat = [input]
_attrs = ("T", _attr_T, "manual_sharding", manual_sharding)
_result = _execute.execute(b"XlaSpmdFullToShardShape", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSpmdFullToShardShape", _inputs_flat, _attrs, _result)
_result, = _result
return _result
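# Usage sketch (assumption: `manual_sharding` carries a serialized XLA sharding
# proto, as suggested by its use as an opaque string attribute; the variable
# names below are hypothetical):
#
#   shard = tf.raw_ops.XlaSpmdFullToShardShape(
#       input=full_tensor, manual_sharding=serialized_sharding)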
@_dispatch.add_dispatch_list
@tf_export('xla_spmd_shard_to_full_shape')
def xla_spmd_shard_to_full_shape(input, manual_sharding, full_shape, name=None):
r"""An op used by XLA SPMD partitioner to switch from manual partitioning to
automatic partitioning. It converts the shard-shaped, manually partitioned input
into a full-shaped tensor to be partitioned automatically with the same sharding
used by manual partitioning.
Args:
input: A `Tensor`.
manual_sharding: A `string`.
full_shape: A `tf.TensorShape` or list of `ints`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSpmdShardToFullShape",
name, tld.op_callbacks, input, "manual_sharding", manual_sharding,
"full_shape", full_shape)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_spmd_shard_to_full_shape_eager_fallback(
input, manual_sharding=manual_sharding, full_shape=full_shape,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_spmd_shard_to_full_shape, (), dict(input=input,
manual_sharding=manual_sharding,
full_shape=full_shape,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
manual_sharding = _execute.make_str(manual_sharding, "manual_sharding")
full_shape = _execute.make_shape(full_shape, "full_shape")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSpmdShardToFullShape", input=input,
manual_sharding=manual_sharding,
full_shape=full_shape, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_spmd_shard_to_full_shape, (), dict(input=input,
manual_sharding=manual_sharding,
full_shape=full_shape,
name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("T", _op._get_attr_type("T"), "manual_sharding",
_op.get_attr("manual_sharding"), "full_shape",
_op.get_attr("full_shape"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSpmdShardToFullShape", _inputs_flat, _attrs, _result)
_result, = _result
return _result
XlaSpmdShardToFullShape = tf_export("raw_ops.XlaSpmdShardToFullShape")(_ops.to_raw_op(xla_spmd_shard_to_full_shape))
def xla_spmd_shard_to_full_shape_eager_fallback(input, manual_sharding, full_shape, name, ctx):
manual_sharding = _execute.make_str(manual_sharding, "manual_sharding")
full_shape = _execute.make_shape(full_shape, "full_shape")
_attr_T, (input,) = _execute.args_to_matching_eager([input], ctx)
_inputs_flat = [input]
_attrs = ("T", _attr_T, "manual_sharding", manual_sharding, "full_shape",
full_shape)
_result = _execute.execute(b"XlaSpmdShardToFullShape", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSpmdShardToFullShape", _inputs_flat, _attrs, _result)
_result, = _result
return _result
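# Usage sketch (hypothetical names): the inverse direction of the op above;
# `full_shape` gives the shape of the reassembled tensor.
#
#   full = tf.raw_ops.XlaSpmdShardToFullShape(
#       input=shard, manual_sharding=serialized_sharding, full_shape=[8, 128])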
_XlaSvdOutput = collections.namedtuple(
"XlaSvd",
["s", "u", "v"])
@_dispatch.add_dispatch_list
@tf_export('xla_svd')
def xla_svd(a, max_iter, epsilon, precision_config, name=None):
r"""Computes the eigen decomposition of a batch of self-adjoint matrices
(Note: Only real inputs are supported).
Computes the eigenvalues and eigenvectors of the innermost M-by-N matrices in
tensor such that tensor[...,:,:] = u[..., :, :] * Diag(s[..., :]) * Transpose(v[...,:,:]).
Args:
a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.
the input tensor.
max_iter: An `int`.
maximum number of sweep updates, where each sweep processes the whole
matrix. Heuristically, it has been argued that approximately
log(min(M, N)) sweeps are needed in practice
(Ref: Golub & Van Loan, "Matrix Computations").
epsilon: A `float`. the tolerance ratio.
precision_config: A `string`. a serialized xla::PrecisionConfig proto.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (s, u, v).
s: A `Tensor`. Has the same type as `a`. Singular values. The values are sorted in reverse order of magnitude, so
s[..., 0] is the largest value, s[..., 1] is the second largest, etc.
u: A `Tensor`. Has the same type as `a`. Left singular vectors.
v: A `Tensor`. Has the same type as `a`. Right singular vectors.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaSvd", name,
tld.op_callbacks, a, "max_iter", max_iter, "epsilon", epsilon,
"precision_config", precision_config)
_result = _XlaSvdOutput._make(_result)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_svd_eager_fallback(
a, max_iter=max_iter, epsilon=epsilon,
precision_config=precision_config, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_svd, (), dict(a=a, max_iter=max_iter, epsilon=epsilon,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
max_iter = _execute.make_int(max_iter, "max_iter")
epsilon = _execute.make_float(epsilon, "epsilon")
precision_config = _execute.make_str(precision_config, "precision_config")
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaSvd", a=a, max_iter=max_iter, epsilon=epsilon,
precision_config=precision_config, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_svd, (), dict(a=a, max_iter=max_iter, epsilon=epsilon,
precision_config=precision_config, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("max_iter", _op._get_attr_int("max_iter"), "epsilon",
_op.get_attr("epsilon"), "precision_config",
_op.get_attr("precision_config"), "T", _op._get_attr_type("T"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaSvd", _inputs_flat, _attrs, _result)
_result = _XlaSvdOutput._make(_result)
return _result
XlaSvd = tf_export("raw_ops.XlaSvd")(_ops.to_raw_op(xla_svd))
def xla_svd_eager_fallback(a, max_iter, epsilon, precision_config, name, ctx):
max_iter = _execute.make_int(max_iter, "max_iter")
epsilon = _execute.make_float(epsilon, "epsilon")
precision_config = _execute.make_str(precision_config, "precision_config")
_attr_T, (a,) = _execute.args_to_matching_eager([a], ctx)
_inputs_flat = [a]
_attrs = ("max_iter", max_iter, "epsilon", epsilon, "precision_config",
precision_config, "T", _attr_T)
_result = _execute.execute(b"XlaSvd", 3, inputs=_inputs_flat, attrs=_attrs,
ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaSvd", _inputs_flat, _attrs, _result)
_result = _XlaSvdOutput._make(_result)
return _result
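# Usage sketch (illustrative values): calling the raw op with the attribute
# names defined above; per the docstring, `a` is recovered (up to tolerance)
# as u @ diag(s) @ transpose(v).
#
#   s, u, v = tf.raw_ops.XlaSvd(a=a, max_iter=100, epsilon=1e-6,
#                               precision_config="")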
@_dispatch.add_dispatch_list
@tf_export('xla_while')
def xla_while(input, cond, body, name=None):
r"""output = input; While (Cond(output)) { output = Body(output) }
Args:
input: A list of `Tensor` objects.
A list of input tensors whose types are T.
cond: A function decorated with @Defun.
A function takes 'input' and returns a tensor. If the tensor is
a scalar of non-boolean, the scalar is converted to a boolean
according to the following rule: if the scalar is a numerical
value, non-zero means True and zero means False; if the scalar is
a string, non-empty means True and empty means False. If the
tensor is not a scalar, it is converted to True if it is non-empty
and to False otherwise.
body: A function decorated with @Defun.
A function that takes a list of tensors and returns another
list of tensors. Both lists have the same types as specified by T.
name: A name for the operation (optional).
Returns:
A list of `Tensor` objects. Has the same type as `input`.
A list of output tensors whose types are T.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "XlaWhile", name,
tld.op_callbacks, input, "cond", cond, "body", body)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return xla_while_eager_fallback(
input, cond=cond, body=body, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_while, (), dict(input=input, cond=cond, body=body, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
# Add nodes to the TensorFlow graph.
try:
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"XlaWhile", input=input, cond=cond, body=body, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
xla_while, (), dict(input=input, cond=cond, body=body, name=name)
)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _outputs[:]
if not _result:
return _op
if _execute.must_record_gradient():
_attrs = ("T", _op.get_attr("T"), "cond", _op.get_attr("cond"), "body",
_op.get_attr("body"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"XlaWhile", _inputs_flat, _attrs, _result)
return _result
XlaWhile = tf_export("raw_ops.XlaWhile")(_ops.to_raw_op(xla_while))
def xla_while_eager_fallback(input, cond, body, name, ctx):
_attr_T, input = _execute.convert_to_mixed_eager_tensors(input, ctx)
_inputs_flat = list(input)
_attrs = ("T", _attr_T, "cond", cond, "body", body)
_result = _execute.execute(b"XlaWhile", len(input), inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"XlaWhile", _inputs_flat, _attrs, _result)
return _result
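# Usage sketch (illustrative; `Defun` is the legacy graph-function decorator
# the docstring refers to): a loop that doubles its argument until reaching
# 100 could be wired up as
#
#   from tensorflow.python.framework import function
#
#   @function.Defun(tf.int32)
#   def cond(x):
#     return x < 100
#
#   @function.Defun(tf.int32)
#   def body(x):
#     return x * 2
#
#   out = tf.raw_ops.XlaWhile(input=[tf.constant(1)], cond=cond, body=body)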
| 41.712929 | 232 | 0.677374 | 12,404 | 100,987 | 5.156482 | 0.042164 | 0.016385 | 0.017589 | 0.010162 | 0.857663 | 0.831382 | 0.812949 | 0.789997 | 0.770923 | 0.746424 | 0 | 0.005235 | 0.228247 | 100,987 | 2,420 | 233 | 41.730165 | 0.815441 | 0.21784 | 0 | 0.722469 | 1 | 0 | 0.06744 | 0.009484 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027809 | false | 0.027809 | 0.006674 | 0 | 0.119021 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b96a7c8c9d117343f0c9b433eadc672b0b4dab5c | 48 | py | Python | src/Constants.py | eons-dev/lib_eons | 5ad768f2414b1c170426fa82e8db22ac092ea5bb | ["MIT"] | null | null | null | src/Constants.py | eons-dev/lib_eons | 5ad768f2414b1c170426fa82e8db22ac092ea5bb | ["MIT"] | null | null | null | src/Constants.py | eons-dev/lib_eons | 5ad768f2414b1c170426fa82e8db22ac092ea5bb | ["MIT"] | null | null | null |
def INVALID_NAME():
    return "INVALID_NAME"
| 16 | 26 | 0.6875 | 6 | 48 | 5.166667 | 0.666667 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 48 | 2 | 27 | 24 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
b9cbce085b12ded1430f818a323af1af14d0ab56 | 89,925 | py | Python | moment_polytopes/third_party.py | amsqi/moment_polytopes | 641f3c0ebeb0daaea6e9664acb01f95c3686382e | ["MIT"] | 2 | 2017-02-14T21:37:33.000Z | 2017-02-15T10:24:37.000Z | moment_polytopes/third_party.py | catch22/moment_polytopes | 641f3c0ebeb0daaea6e9664acb01f95c3686382e | ["MIT"] | null | null | null | moment_polytopes/third_party.py | catch22/moment_polytopes | 641f3c0ebeb0daaea6e9664acb01f95c3686382e | ["MIT"] | 1 | 2021-02-23T15:35:22.000Z | 2021-02-23T15:35:22.000Z |
# coding: utf-8
from __future__ import absolute_import, print_function
from sage.all import vector, Permutations, prod
from . import (
HRepr,
weyl_module,
StabilizerGroup,
perm_action,
external_tensor_product,
qmp,
)
__all__ = [
"KLYACHKO_FERMI_SCENARIOS",
"klyachko_fermi_hrepr",
"KLYACHKO_QMP_SCENARIOS",
"KLYACHKO_GOOD_QMP_SCENARIOS",
"klyachko_qmp_hrepr",
"_klyachko_qmp_bare_ieqs", # for unit testing
]
# these inequalities are from Altunbulak and Klyachko (2008)
KLYACHKO_FERMI_DATA = {
(
3,
6,
): """λ 1 + λ 6 = 1
λ 2 + λ 5 = 1
λ 3 + λ 4 = 1
λ 4 - λ 5 - λ 6 ≤ 0""",
(
3,
7,
): """λ 2 + λ 3 + λ 4 + λ 5 ≤ 2
λ 1 + λ 3 + λ 4 + λ 6 ≤ 2
λ 1 + λ 2 + λ 4 + λ 7 ≤ 2
λ 1 + λ 2 + λ 5 + λ 6 ≤ 2""",
(
3,
8,
): """λ 2 + λ 3 + λ 4 + λ 5 ≤ 2
λ 1 + λ 2 + λ 4 + λ 7 ≤ 2
λ 1 + λ 3 + λ 4 + λ 6 ≤ 2
λ 1 + λ 2 + λ 5 + λ 6 ≤ 2
λ 1 + λ 2 − λ 3 ≤ 1
λ 2 + λ 5 − λ 7 ≤ 1
λ 1 + λ 6 − λ 7 ≤ 1
λ 2 + λ 4 − λ 6 ≤ 1
λ 1 + λ 4 − λ 5 ≤ 1
λ 3 + λ 4 − λ 7 ≤ 1
λ 1 + λ 8 ≤ 1
λ 2 − λ 3 − λ 6 − λ 7 ≤ 0
λ 4 − λ 5 − λ 6 − λ 7 ≤ 0
λ 1 − λ 3 − λ 5 − λ 7 ≤ 0
λ 2 + λ 3 + 2λ 4 − λ 5 − λ 7 + λ 8 ≤ 2
λ 1 + λ 3 + 2λ 4 − λ 5 − λ 6 + λ 8 ≤ 2
λ 1 + 2λ 2 − λ 3 + λ 4 − λ 5 + λ 8 ≤ 2
λ 1 + 2λ 2 − λ 3 + λ 5 − λ 6 + λ 8 ≤ 2
λ 1 + λ 2 − 2λ 3 − λ 4 − λ 5 ≤ 0
λ 1 − λ 2 − λ 3 + λ 6 − 2λ 7 ≤ 0
λ 1 − λ 3 − λ 4 − λ 5 + λ 8 ≤ 0
λ 1 − λ 2 − λ 3 − λ 7 + λ 8 ≤ 0
2λ 1 − λ 2 + λ 4 − 2λ 5 − λ 6 + λ 8 ≤ 1
λ 3 + 2λ 4 − 2λ 5 − λ 6 − λ 7 + λ 8 ≤ 1
2λ 1 − λ 2 − λ 4 + λ 6 − 2λ 7 + λ 8 ≤ 1
2λ 1 + λ 2 − 2λ 3 − λ 4 − λ 6 + λ 8 ≤ 1
λ 1 + 2λ 2 − 2λ 3 − λ 5 − λ 6 + λ 8 ≤ 1
2λ 1 − 2λ 2 − λ 3 − λ 4 + λ 6 − 3λ 7 + λ 8 ≤ 0
−λ 1 + λ 3 + 2λ 4 − 3λ 5 − 2λ 6 − λ 7 + λ 8 ≤ 0
2λ 1 + λ 2 − 3λ 3 − 2λ 4 − λ 5 − λ 6 + λ 8 ≤ 0
λ 1 + 2λ 2 − 3λ 3 − λ 4 − 2λ 5 − λ 6 + λ 8 ≤ 0""",
(
4,
8,
): """λ 1 ≤ 1
λ 5 − λ 6 − λ 7 − λ 8 ≤ 0
λ 1 − λ 2 − λ 7 − λ 8 ≤ 0
λ 1 − λ 3 − λ 6 − λ 8 ≤ 0
λ 1 − λ 4 − λ 6 − λ 7 ≤ 0
λ 1 − λ 4 − λ 5 − λ 8 ≤ 0
λ 3 − λ 4 − λ 7 − λ 8 ≤ 0
λ 2 − λ 4 − λ 6 − λ 8 ≤ 0
λ 2 + λ 3 + λ 5 − λ 8 ≤ 2
λ 1 + λ 3 + λ 6 − λ 8 ≤ 2
λ 1 + λ 2 + λ 7 − λ 8 ≤ 2
λ 1 + λ 2 + λ 3 − λ 4 ≤ 2
λ 1 + λ 4 + λ 5 − λ 8 ≤ 2
λ 1 + λ 2 + λ 5 − λ 6 ≤ 2
λ 1 + λ 3 + λ 5 − λ 7 ≤ 2""",
}
#: Scenarios :math:`(n,d)`, corresponding to the representation of :math:`GL(d)` on the anti-symmetric power :math:`\bigwedge^n \mathbb C^d`.
KLYACHKO_FERMI_SCENARIOS = sorted(KLYACHKO_FERMI_DATA.keys())
def _parse_fermi_ieq(d, s, split_at):
"""Parse a fermionic inequality in Klyachko's format."""
H = [0] * d
s = s.rstrip(" ,.").replace("−", "-").replace("λ", " A").replace("-A", "- A")
lhs, rhs = map(lambda s: str(s).strip(), s.split(split_at))
if lhs[0] not in ["+", "-"]:
lhs = "+ " + lhs
todo = lhs.split()
while todo:
assert todo[0] in ["+", "-"]
sign = 1 if todo[0] == "+" else -1
if todo[1] == "A":
todo = [todo[0]] + ["1"] + todo[1:]
coeff = int(todo[1])
assert todo[2] == "A"
idx = int(todo[3])
H[idx - 1] = -sign * coeff
todo = todo[4:]
c = -int(rhs)
return (vector(H), c)
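# Illustration (hypothetical input, traced from the code above): with d = 3,
# the line "λ 1 + λ 2 ≤ 1" parses to H = (-1, -1, 0) and c = -1; both sides
# of the "≤" are negated, which suggests HRepr stores each inequality in the
# <H, x> >= c orientation.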
def klyachko_fermi_hrepr(n, d, bare=False):
r"""Return the moment polytope for the :math:`GL(d)`-representation on :math:`\bigwedge^n \mathbb C^d` as computed in `Altunbulak and Klyachko (2008) <https://arxiv.org/abs/0802.0918>`_.
See :data:`KLYACHKO_FERMI_SCENARIOS` for available scenarios.
:param n: the antisymmetric power.
:param d: the rank of the group.
:param bare: if ``True`` then the reduced Weyl chamber inequalities are omitted.
:rtype: :class:`moment_polytopes.HRepr`
"""
# retrieve inequalities from data file
ieqs = []
eqns = []
for line in KLYACHKO_FERMI_DATA[n, d].splitlines():
line = line.strip()
if not line:
continue
is_equation = "=" in line
if is_equation:
eqns.append(_parse_fermi_ieq(d, line, "="))
else:
ieqs.append(_parse_fermi_ieq(d, line, "≤"))
hrepr = HRepr(ieqs=ieqs, eqns=eqns)
# intersect with reduced Weyl chamber
if not bare:
R = weyl_module(d, [1] * n)
hrepr = hrepr & R.reduced_positive_weyl_chamber_hrepr
return hrepr
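# Usage sketch (assumes a Sage session in which this package is importable):
#
#   hrepr = klyachko_fermi_hrepr(3, 6)
#
# The (3, 6) scenario is the Borland-Dennis setting: its data above encodes
# the three equations λ_k + λ_{7-k} = 1 together with the single inequality
# λ_4 ≤ λ_5 + λ_6.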
# these inequalities are from Klyachko (2004)
KLYACHKO_QMP_DATA = {
(
3,
2,
6,
): """µ 1 − µ 2 ≤ ν 1 + ν 2 + ν 3 − ν 4 − ν 5 − ν 6 .
λ 1 + λ 2 − 2λ 3 ≤ ν 1 + ν 2 + ν 3 + ν 4 − 2ν 5 − 2ν 6 ,
λ 2 + λ 3 − 2λ 1 ≤ ν 1 + ν 2 + ν 3 + ν 6 − 2ν 4 − 2ν 5 .
2λ 1 − λ 2 − λ 3 ≤ 2ν 1 + 2ν 2 − ν 3 − ν 4 − ν 5 − ν 6 ,
2λ 3 − λ 1 − λ 2 ≤ 2ν 2 + 2ν 3 − ν 1 − ν 4 − ν 5 − ν 6 .
2λ 1 − 2λ 3 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 3 − ν 4 − ν 5 − 3ν 6 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 − ν 4 − ν 5 − 3ν 6 ,
2λ 2 − 2λ 3 − µ 2 + µ 1 ≤ 3ν 2 + ν 1 + ν 3 − ν 4 − ν 5 − 3ν 6 ,
2λ 1 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 4 − ν 3 − ν 5 − 3ν 6 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 4 − ν 3 − ν 5 − 3ν 6 ,
2λ 1 − 2λ 2 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 4 − ν 3 − ν 5 − 3ν 6 ,
2λ 2 − 2λ 3 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 4 − ν 3 − ν 5 − 3ν 6 ,
2λ 3 − 2λ 1 − µ 2 + µ 1 ≤ 3ν 3 + ν 1 + ν 4 − ν 2 − ν 5 − 3ν 6 ,
2λ 3 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 3 + ν 2 + ν 4 − ν 1 − ν 5 − 3ν 6 ,
2λ 3 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 3 + ν 4 − ν 1 − ν 5 − 3ν 6 ,
2λ 3 − 2λ 1 − µ 2 + µ 1 ≤ 3ν 2 + ν 3 + ν 4 − ν 1 − ν 5 − 3ν 6 ,
2λ 2 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 6 − ν 4 − ν 3 − 3ν 5 ,
2λ 3 − 2λ 1 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 6 − ν 4 − ν 3 − 3ν 5 ,
2λ 2 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 4 − ν 3 − ν 6 − 3ν 5 ,
2λ 1 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 − ν 4 − ν 6 − 3ν 5 ,
2λ 2 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 − ν 4 − ν 6 − 3ν 5 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 − ν 6 − ν 4 − 3ν 5 ,
2λ 1 − 2λ 2 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 3 − ν 6 − ν 4 − 3ν 5 ,
2λ 3 − 2λ 1 − µ 2 + µ 1 ≤ 3ν 1 + ν 2 + ν 5 − ν 3 − ν 6 − 3ν 4 ,
2λ 3 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 6 − ν 3 − ν 5 − 3ν 4 .
2λ 1 + 2λ 2 − 4λ 3 + 3µ 1 − 3µ 2 ≤ 5ν 1 + 5ν 2 − ν 3 − ν 4 − ν 5 − 7ν 6 ,
2λ 3 + 2λ 1 − 4λ 2 + 3µ 1 − 3µ 2 ≤ 5ν 3 + 5ν 1 − ν 2 − ν 4 − ν 5 − 7ν 6 ,
2λ 1 + 2λ 3 − 4λ 2 + 3µ 2 − 3µ 1 ≤ 5ν 2 + 5ν 3 − ν 1 − ν 4 − ν 5 − 7ν 6 ,
2λ 2 + 2λ 3 − 4λ 1 + 3µ 1 − 3µ 2 ≤ 5ν 2 + 5ν 3 − ν 1 − ν 4 − ν 5 − 7ν 6 ,
2λ 2 + 2λ 1 − 4λ 3 + 3µ 2 − 3µ 1 ≤ 5ν 1 + 5ν 2 − ν 3 − ν 6 − ν 4 − 7ν 5 ,
2λ 3 + 2λ 1 − 4λ 2 + 3µ 1 − 3µ 2 ≤ 5ν 1 + 5ν 2 − ν 3 − ν 6 − ν 4 − 7ν 5 ,
2λ 2 + 2λ 3 − 4λ 1 + 3µ 1 − 3µ 2 ≤ 5ν 1 + 5ν 3 − ν 2 − ν 6 − ν 4 − 7ν 5 ,
2λ 2 + 2λ 3 − 4λ 1 + 3µ 1 − 3µ 2 ≤ 5ν 1 + 5ν 2 − ν 3 − ν 5 − ν 6 − 7ν 4 .
4λ 1 − 2λ 2 − 2λ 3 + 3µ 1 − 3µ 2 ≤ 7ν 1 + ν 2 + ν 3 + ν 4 − 5ν 5 − 5ν 6 ,
4λ 1 − 2λ 2 − 2λ 3 + 3µ 2 − 3µ 1 ≤ 7ν 2 + ν 1 + ν 3 + ν 4 − 5ν 5 − 5ν 6 ,
4λ 2 − 2λ 1 − 2λ 3 + 3µ 1 − 3µ 2 ≤ 7ν 2 + ν 1 + ν 3 + ν 4 − 5ν 5 − 5ν 6 ,
4λ 2 − 2λ 1 − 2λ 3 + 3µ 1 − 3µ 2 ≤ 7ν 1 + ν 2 + ν 3 + ν 5 − 5ν 4 − 5ν 6 ,
4λ 3 − 2λ 1 − 2λ 2 + 3µ 1 − 3µ 2 ≤ 7ν 3 + ν 1 + ν 2 + ν 4 − 5ν 5 − 5ν 6 ,
4λ 3 − 2λ 1 − 2λ 2 + 3µ 1 − 3µ 2 ≤ 7ν 2 + ν 1 + ν 3 + ν 5 − 5ν 4 − 5ν 6 ,
4λ 2 − 2λ 1 − 2λ 3 + 3µ 2 − 3µ 1 ≤ 7ν 1 + ν 2 + ν 3 + ν 6 − 5ν 4 − 5ν 5 ,
4λ 3 − 2λ 1 − 2λ 2 + 3µ 1 − 3µ 2 ≤ 7ν 1 + ν 2 + ν 3 + ν 6 − 5ν 4 − 5ν 5 .""",
(
4,
2,
8,
): """µ 1 − µ 2 ≤ ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 6 − ν 7 − ν 8 .
λ 1 + λ 2 − λ 3 − λ 4 ≤ ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 6 − ν 7 − ν 8 ,
λ 1 + λ 4 − λ 2 − λ 3 ≤ ν 1 + ν 2 + ν 4 + ν 5 − ν 3 − ν 6 − ν 7 − ν 8 ,
λ 2 + λ 3 − λ 1 − λ 4 ≤ ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 5 − ν 7 − ν 8 ,
λ 3 + λ 4 − λ 1 − λ 2 ≤ ν 2 + ν 3 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − ν 8 ,
λ 3 + λ 4 − λ 1 − λ 2 ≤ ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 6 − ν 7 .
λ 1 + λ 2 + λ 3 − 3λ 4 ≤ ν 1 + ν 2 + ν 3 + ν 4 + ν 5 + ν 6 − 3ν 7 − 3ν 8 ,
λ 1 + λ 3 + λ 4 − 3λ 2 ≤ ν 1 + ν 2 + ν 3 + ν 4 + ν 5 + ν 8 − 3ν 6 − 3ν 7 .
3λ 1 − λ 2 − λ 3 − λ 4 ≤ 3ν 1 + 3ν 2 − ν 3 − ν 4 − ν 5 − ν 6 − ν 7 − ν 8 ,
3λ 3 − λ 1 − λ 2 − λ 4 ≤ 3ν 2 + 3ν 3 − ν 1 − ν 4 − ν 5 − ν 6 − ν 7 − ν 8 .
λ 1 + λ 2 + λ 3 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 6 − ν 7 − 5ν 8 ,
λ 1 + λ 2 + λ 4 − 3λ 3 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 6 − ν 7 − 5ν 8 ,
λ 1 + λ 2 + λ 3 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 3ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 6 − ν 8 − 5ν 7 ,
λ 1 + λ 2 + λ 4 − 3λ 3 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 6 − ν 8 − 5ν 7 ,
λ 1 + λ 3 + λ 4 − 3λ 2 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 3 + 3ν 4 − ν 2 − ν 5 − ν 6 − ν 7 − 5ν 8 ,
λ 1 + λ 3 + λ 4 − 3λ 2 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 6 − ν 8 − 5ν 7 ,
λ 1 + λ 3 + λ 4 − 3λ 2 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 7 − ν 8 − 5ν 6 ,
λ 1 + λ 3 + λ 4 − 3λ 2 + 2µ 2 − 2µ 1 ≤ 3ν 2 + 3ν 3 + 3ν 4 − ν 1 − ν 5 − ν 6 − ν 7 − 5ν 8 ,
λ 2 + λ 3 + λ 4 − 3λ 1 + 2µ 1 − 2µ 2 ≤ 3ν 2 + 3ν 3 + 3ν 4 − ν 1 − ν 5 − ν 6 − ν 7 − 5ν 8 ,
λ 2 + λ 3 + λ 4 − 3λ 1 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 3 + 3ν 4 − ν 2 − ν 5 − ν 6 − ν 8 − 5ν 7 ,
λ 2 + λ 3 + λ 4 − 3λ 1 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 7 − ν 8 − 5ν 6 ,
λ 2 + λ 3 + λ 4 − 3λ 1 + 2µ 1 − 2µ 2 ≤ 3ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 6 − ν 7 − ν 8 − 5ν 5 .
3λ 1 − λ 2 − λ 3 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + ν 2 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 3ν 8 ,
3λ 1 − λ 2 − λ 3 − λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 2 + ν 1 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 3ν 8 ,
3λ 2 − λ 1 − λ 3 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 2 + ν 1 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 3ν 8 ,
3λ 2 − λ 1 − λ 3 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + ν 2 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 3ν 8 ,
3λ 3 − λ 1 − λ 2 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 3 + ν 1 + ν 2 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 3ν 8 ,
3λ 3 − λ 1 − λ 2 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 2 + ν 1 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 3ν 8 ,
3λ 3 − λ 1 − λ 2 − λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + ν 2 + ν 3 + ν 4 + ν 7 − 3ν 5 − 3ν 6 − 3ν 8 ,
3λ 4 − λ 1 − λ 2 − λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 4 + ν 1 + ν 2 + ν 3 + ν 5 − 3ν 6 − 3ν 7 − 3ν 8 ,
3λ 4 − λ 1 − λ 2 − λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 3 + ν 1 + ν 2 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 3ν 8 ,
3λ 4 − λ 1 − λ 2 − λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 2 + ν 1 + ν 3 + ν 4 + ν 7 − 3ν 5 − 3ν 6 − 3ν 8 ,
3λ 3 − λ 1 − λ 2 − λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + ν 2 + ν 3 + ν 4 + ν 8 − 3ν 5 − 3ν 6 − 3ν 7 ,
3λ 4 − λ 1 − λ 2 − λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 1 + ν 2 + ν 3 + ν 4 + ν 8 − 3ν 5 − 3ν 6 − 3ν 7
λ 1 + λ 2 − λ 3 − λ 4 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 7 − 2ν 8 ,
λ 1 + λ 3 − λ 2 − λ 4 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 7 − 2ν 8 ,
λ 1 + λ 3 − λ 2 − λ 4 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 6 − 2ν 8 ,
λ 1 + λ 3 − λ 2 − λ 4 + µ 2 − µ 1 ≤ 2ν 2 + 2ν 3 − 2ν 7 − 2ν 8 ,
λ 2 + λ 3 − λ 1 − λ 4 + µ 1 − µ 2 ≤ 2ν 2 + 2ν 3 − 2ν 7 − 2ν 8 ,
λ 1 + λ 4 − λ 2 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 4 − 2ν 7 − 2ν 8 ,
λ 1 + λ 4 − λ 2 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 6 − 2ν 8 ,
λ 2 + λ 3 − λ 1 − λ 4 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 6 − 2ν 8 ,
λ 2 + λ 3 − λ 1 − λ 4 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 5 − 2ν 8 ,
λ 1 + λ 3 − λ 2 − λ 4 + µ 2 − µ 1 ≤ 2ν 1 + 2ν 2 − 2ν 6 − 2ν 7 ,
λ 1 + λ 4 − λ 2 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 6 − 2ν 7 ,
λ 1 + λ 4 − λ 2 − λ 3 + µ 2 − µ 1 ≤ 2ν 2 + 2ν 4 − 2ν 7 − 2ν 8 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 2 + 2ν 4 − 2ν 7 − 2ν 8 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 4 − 2ν 6 − 2ν 8 ,
λ 1 + λ 4 − λ 2 − λ 3 + µ 2 − µ 1 ≤ 2ν 2 + 2ν 3 − 2ν 6 − 2ν 8 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 2 + 2ν 3 − 2ν 6 − 2ν 8 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 5 − 2ν 8 ,
λ 2 + λ 3 − λ 1 − λ 4 + µ 2 − µ 1 ≤ 2ν 1 + 2ν 2 − 2ν 5 − 2ν 7 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 5 − 2ν 7 ,
λ 2 + λ 3 − λ 1 − λ 4 + µ 2 − µ 1 ≤ 2ν 1 + 2ν 3 − 2ν 6 − 2ν 7 ,
λ 2 + λ 4 − λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 6 − 2ν 7 ,
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 3 + 2ν 4 − 2ν 7 − 2ν 8 ,
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 2 + 2ν 4 − 2ν 6 − 2ν 8
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 2 + 2ν 3 − 2ν 5 − 2ν 8 ,
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 3 − 2ν 5 − 2ν 7 ,
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 4 − 2ν 6 − 2ν 7 ,
λ 3 + λ 4 − λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 1 + 2ν 2 − 2ν 5 − 2ν 6 .
5λ 1 + λ 2 − 3λ 3 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 2 − 3λ 3 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 3 − ν 4 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 2 + λ 1 − 3λ 3 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 2 + 3ν 1 + 3ν 3 − ν 4 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 2 − 3λ 3 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 3 − 3λ 2 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 2 + λ 1 − 3λ 3 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 3 − 3λ 2 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 7 − 5ν 6 − 5ν 8 ,
5λ 1 + λ 3 − 3λ 2 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 4 − ν 3 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 5 − ν 3 − ν 4 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 4 − ν 3 − ν 5 − ν 7 − 5ν 6 − 5ν 8 ,
5λ 1 + λ 3 − 3λ 2 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 3 − ν 4 − ν 5 − ν 7 − 5ν 6 − 5ν 8 ,
5λ 1 + λ 3 − 3λ 2 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 8 − 5ν 6 − 5ν 7 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 3 − ν 4 − ν 5 − ν 8 − 5ν 6 − 5ν 7 ,
5λ 3 + λ 2 − 3λ 1 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 3 + 3ν 1 + 3ν 4 − ν 2 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 3 + λ 1 − 3λ 2 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 3 + 3ν 4 − ν 1 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 3 + λ 2 − 3λ 1 − 3λ 4 + 2µ 1 − 2µ 2 ≤ 7ν 2 + 3ν 3 + 3ν 4 − ν 1 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 5 − ν 3 − ν 4 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 4 − ν 3 − ν 5 − ν 7 − 5ν 6 − 5ν 8 ,
5λ 1 + λ 4 − 3λ 2 − 3λ 3 + 2µ 2 − 2µ 1 ≤ 7ν 2 + 3ν 1 + 3ν 3 − ν 4 − ν 5 − ν 8 − 5ν 6 − 5ν 7 ,
5λ 3 + λ 2 − 3λ 1 − 3λ 4 + 2µ 2 − 2µ 1 ≤ 7ν 3 + 3ν 2 + 3ν 4 − ν 1 − ν 5 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 4 + λ 3 − 3λ 1 − 3λ 2 + 2µ 1 − 2µ 2 ≤ 7ν 4 + 3ν 1 + 3ν 5 − ν 2 − ν 3 − ν 6 − 5ν 7 − 5ν 8 ,
5λ 3 + λ 4 − 3λ 1 − 3λ 2 + 2µ 2 − 2µ 1 ≤ 7ν 1 + 3ν 2 + 3ν 8 − ν 3 − ν 4 − ν 5 − 5ν 6 − 5ν 7 ,
5λ 4 + λ 3 − 3λ 1 − 3λ 2 + 2µ 1 − 2µ 2 ≤ 7ν 1 + 3ν 2 + 3ν 8 − ν 3 − ν 4 − ν 5 − 5ν 6 − 5ν 7
3λ 1 + 3λ 2 − λ 3 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 3 − λ 2 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 3 + ν 2 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 2 − λ 3 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 2 − λ 4 − 5λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 3 − λ 2 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 2 − λ 3 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 8 − 7ν 7 ,
3λ 1 + 3λ 2 − λ 4 − 5λ 3 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 5 − 3ν 6 − 3ν 8 − 7ν 7 ,
3λ 1 + 3λ 3 − λ 2 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 2 + 5ν 3 + ν 1 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 7ν 8 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 2 + 5ν 3 + ν 1 + ν 4 + ν 5 − 3ν 6 − 3ν 7 − 7ν 8 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 3 + ν 2 + ν 4 + ν 6 − 3ν 5 − 3ν 7 − 7ν 8 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 5 + ν 6 − 3ν 4 − 3ν 7 − 7ν 8 ,
3λ 1 + 3λ 3 − λ 2 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 6 − 3ν 5 − 3ν 8 − 7ν 7 ,
3λ 1 + 3λ 3 − λ 2 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 3 + ν 2 + ν 4 + ν 5 − 3ν 6 − 3ν 8 − 7ν 7 ,
3λ 1 + 3λ 3 − λ 4 − 5λ 2 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 8 − 3ν 5 − 3ν 6 − 7ν 7 ,
3λ 1 + 3λ 4 − λ 3 − 5λ 2 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 8 − 3ν 5 − 3ν 6 − 7ν 7 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 5 + ν 6 − 3ν 4 − 3ν 8 − 7ν 7 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 3 + ν 2 + ν 4 + ν 6 − 3ν 5 − 3ν 8 − 7ν 7 ,
3λ 2 + 3λ 3 − λ 1 − 5λ 4 + 2µ 2 − 2µ 1 ≤ 5ν 2 + 5ν 3 + ν 1 + ν 4 + ν 5 − 3ν 6 − 3ν 8 − 7ν 7 ,
3λ 1 + 3λ 4 − λ 3 − 5λ 2 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 7 − 3ν 5 − 3ν 8 − 7ν 6 ,
3λ 1 + 3λ 4 − λ 3 − 5λ 2 + 2µ 2 − 2µ 1 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 4 + ν 8 − 3ν 5 − 3ν 7 − 7ν 6 ,
3λ 3 + 3λ 4 − λ 1 − 5λ 2 + 2µ 2 − 2µ 1 ≤ 5ν 2 + 5ν 3 + ν 4 + ν 5 + ν 6 − 3ν 1 − 3ν 7 − 7ν 8 ,
3λ 3 + 3λ 4 − λ 2 − 5λ 1 + 2µ 1 − 2µ 2 ≤ 5ν 2 + 5ν 3 + ν 4 + ν 5 + ν 6 − 3ν 1 − 3ν 7 − 7ν 8 ,
3λ 3 + 3λ 4 − λ 2 − 5λ 1 + 2µ 1 − 2µ 2 ≤ 5ν 1 + 5ν 2 + ν 3 + ν 6 + ν 7 − 3ν 4 − 3ν 8 − 7ν 5 .
2λ 1 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 6 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 5 − ν 4 − ν 6 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 5 − ν 4 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 1 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 3 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 3 + ν 1 + ν 2 + ν 4 − ν 5 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 5 − ν 4 − ν 6 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 5 − ν 4 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 5 − ν 4 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 4 + ν 5 − ν 3 − ν 6 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 4 + ν 5 − ν 3 − ν 6 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 4 + ν 5 − ν 3 − ν 6 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 5 − ν 7 − 3ν 8 ,
2λ 2 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 5 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 4 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 5 − ν 7 − 3ν 8 ,
2λ 1 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 5 − ν 4 − ν 6 − ν 8 − 3ν 7 ,
2λ 2 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 5 − ν 4 − ν 6 − ν 8 − 3ν 7 ,
2λ 2 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 5 − ν 4 − ν 6 − ν 8 − 3ν 7 ,
2λ 1 − 2λ 3 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 2 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 2 − 2λ 3 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 1 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 4 − ν 5 − ν 7 − ν 8 − 3ν 6 ,
2λ 1 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 4 + ν 5 − ν 3 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 5 − ν 8 − 3ν 7 ,
2λ 1 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 5 − ν 4 − ν 6 − ν 8 − 3ν 7 ,
2λ 3 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 5 − ν 4 − ν 6 − ν 8 − 3ν 7 ,
2λ 3 − 2λ 4 + µ 2 − µ 1 ≤ 3ν 3 + ν 1 + ν 2 + ν 4 − ν 5 − ν 6 − ν 8 − 3ν 7 ,
2λ 1 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 1 + ν 3 + ν 4 − ν 5 − ν 7 − ν 8 − 3ν 6 ,
2λ 4 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 4 + ν 1 + ν 2 + ν 5 − ν 3 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 3 + ν 1 + ν 4 + ν 5 − ν 2 − ν 6 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 3 + ν 1 + ν 4 + ν 5 − ν 2 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 3 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 2 + ν 3 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 6 − 3ν 7 ,
2λ 4 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 6 − 3ν 7 ,
2λ 3 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 7 − ν 4 − ν 5 − ν 8 − 3ν 6 ,
2λ 4 − 2λ 2 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 7 − ν 4 − ν 5 − ν 8 − 3ν 6 ,
2λ 3 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 6 − ν 4 − ν 7 − ν 8 − 3ν 5 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 4 + ν 1 + ν 3 + ν 5 − ν 2 − ν 6 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 3 + ν 2 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 3 + ν 2 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 3 + ν 2 + ν 4 + ν 5 − ν 1 − ν 6 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 2 + ν 3 + ν 4 + ν 6 − ν 1 − ν 5 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 2 + ν 3 + ν 4 + ν 6 − ν 1 − ν 5 − ν 7 − 3ν 8 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 3 + ν 1 + ν 4 + ν 6 − ν 2 − ν 5 − ν 7 − 3ν 8 ,
2λ 3 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 4 + ν 8 − ν 3 − ν 5 − ν 6 − 3ν 7 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 4 + ν 8 − ν 3 − ν 5 − ν 6 − 3ν 7 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 3 + ν 1 + ν 4 + ν 5 − ν 2 − ν 6 − ν 8 − 3ν 7 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 4 + ν 1 + ν 2 + ν 5 − ν 3 − ν 6 − ν 8 − 3ν 7 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 4 + ν 7 − ν 3 − ν 5 − ν 8 − 3ν 6 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 7 − ν 4 − ν 5 − ν 8 − 3ν 6 ,
2λ 3 − 2λ 1 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 7 − 3ν 6 ,
2λ 4 − 2λ 2 + µ 2 − µ 1 ≤ 3ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 7 − 3ν 6 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 8 − ν 4 − ν 5 − ν 7 − 3ν 6 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 1 + ν 2 + ν 3 + ν 7 − ν 4 − ν 6 − ν 8 − 3ν 5 ,
2λ 4 − 2λ 1 + µ 1 − µ 2 ≤ 3ν 2 + ν 1 + ν 3 + ν 6 − ν 4 − ν 7 − ν 8 − 3ν 5
3λ 1 + λ 2 − λ 3 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 1 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 3 − λ 2 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 2 + λ 1 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 2 − λ 4 − 3λ 3 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 2 + λ 1 − λ 3 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 2 + λ 1 − λ 4 − 3λ 3 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 3 + 2ν 1 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 1 − λ 2 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 3 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 3 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 2 + λ 3 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 6 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 6 − 2ν 5 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 5 − 2ν 4 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 5 − 2ν 4 − 2ν 7 − 4ν 8
3λ 1 + λ 4 − λ 2 − 3λ 3 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 3 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 4 − 2ν 3 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 3 − λ 4 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 6 − 4ν 7 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 6 − 4ν 7 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 8 − 4ν 6 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 3 + 2ν 2 + 2ν 4 − 2ν 6 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 5 − 2ν 4 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 6 − 2ν 4 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 5 − 2ν 3 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 4 − 2ν 3 − 2ν 7 − 4ν 8 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 6 − 4ν 7 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 5 − 2ν 4 − 2ν 8 − 4ν 7 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 6 − 2ν 5 − 2ν 8 − 4ν 7 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 3 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 3 + λ 2 − λ 1 − 3λ 4 + µ 2 − µ 1 ≤ 4ν 3 + 2ν 1 + 2ν 4 − 2ν 6 − 2ν 8 − 4ν 7 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 1 + 2ν 3 − 2ν 5 − 2ν 8 − 4ν 6 ,
3λ 1 + λ 4 − λ 3 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 3 − 2ν 5 − 2ν 7 − 4ν 6 ,
3λ 3 + λ 4 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 3 + 2ν 4 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 4 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 3 + 2ν 4 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 4 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 5 − 2ν 6 − 4ν 7 ,
3λ 4 + λ 3 − λ 1 − 3λ 2 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 5 − 2ν 6 − 4ν 7 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 4 + 2ν 1 + 2ν 5 − 2ν 3 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 4 − λ 2 − 3λ 1 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 3 + 2ν 5 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 4 + λ 3 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 2 + 2ν 3 + 2ν 5 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 2 + 2ν 3 + 2ν 5 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 4 − λ 2 − 3λ 1 + µ 2 − µ 1 ≤ 4ν 3 + 2ν 2 + 2ν 4 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 4 + λ 3 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 3 + 2ν 2 + 2ν 4 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 3 + 2ν 2 + 2ν 4 − 2ν 1 − 2ν 7 − 4ν 8 ,
3λ 3 + λ 4 − λ 2 − 3λ 1 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 4 − 2ν 6 − 4ν 7 ,
3λ 4 + λ 3 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 4 − 2ν 6 − 4ν 7 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 4 − 2ν 6 − 4ν 7 ,
3λ 3 + λ 4 − λ 2 − 3λ 1 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 5 − 2ν 7 − 4ν 6 ,
3λ 4 + λ 3 − λ 1 − 3λ 2 + µ 2 − µ 1 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 5 − 2ν 7 − 4ν 6 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 8 − 2ν 5 − 2ν 7 − 4ν 6 ,
3λ 4 + λ 3 − λ 2 − 3λ 1 + µ 1 − µ 2 ≤ 4ν 1 + 2ν 2 + 2ν 6 − 2ν 4 − 2ν 8 − 4ν 5 .""",
(
2,
2,
3,
12,
): """λ 1 − λ 2 ≤ ρ 1 + ρ 2 + ρ 3 + ρ 4 + ρ 5 + ρ 6 − ρ 7 − ρ 8 − ρ 9 − ρ 10 − ρ 11 − ρ 12 .
2ν 1 − ν 2 − ν 3 ≤ 2ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − ρ 5 − ρ 6 − ρ 7 − ρ 8 − ρ 9 − ρ 10 − ρ 11 − ρ 12 .
ν 1 + ν 2 − 2ν 3 ≤ ρ 1 + ρ 2 + ρ 3 + ρ 4 + ρ 5 + ρ 6 + ρ 7 + ρ 8 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 2ρ 12 .
λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ρ 1 + 2ρ 2 + 2ρ 3 − 2ρ 10 − 2ρ 11 − 2ρ 12
λ 1 − λ 2 + ν 1 − ν 3 ≤ 2ρ 1 + 2ρ 2 + ρ 3 + ρ 4 − ρ 9 − ρ 10 − 2ρ 11 − 2ρ 12 .
3λ 1 − 3λ 2 + 2ν 1 − ν 2 − ν 3 ≤ 5ρ 1 + 5ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 4ρ 12 ,
3λ 1 − 3λ 2 + 2ν 3 − ν 1 − ν 2 ≤ 5ρ 2 + 5ρ 3 + 2ρ 1 + 2ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 4ρ 12 ,
3λ 1 − 3λ 2 + 2ν 3 − ν 1 − ν 2 ≤ 5ρ 1 + 5ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 − ρ 8 − ρ 9 − 4ρ 7 − 4ρ 10 − 4ρ 11 − 4ρ 12 .
3λ 1 − 3λ 2 + ν 1 + ν 2 − 2ν 3 ≤ 4ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 5ρ 11 − 5ρ 12 ,
3λ 1 − 3λ 2 + ν 2 + ν 3 − 2ν 1 ≤ 4ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 6 + ρ 4 + ρ 5 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 5ρ 11 − 5ρ 12 ,
3λ 1 − 3λ 2 + ν 2 + ν 3 − 2ν 1 ≤ 4ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 12 − 5ρ 10 − 5ρ 11
λ 1 − λ 2 + µ 1 − µ 2 + 2ν 1 − 2ν 3 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 1 − µ 2 + 2ν 2 − 2ν 3 ≤ 4ρ 2 + 2ρ 1 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 1 − 2ν 3 ≤ 4ρ 2 + 2ρ 1 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 1 − µ 2 + 2ν 1 − 2ν 2 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 1 − µ 2 + 2ν 2 − 2ν 3 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 1 − µ 2 + 2ν 1 − 2ν 2 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 1 − 2ν 3 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 2 − 2ν 3 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 10 − 2ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 1 − 2ν 2 ≤ 4ρ 2 + 2ρ 1 + 2ρ 3 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 1 − 2ν 2 ≤ 4ρ 2 + 2ρ 1 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 2 − 2ν 3 ≤ 4ρ 2 + 2ρ 1 + 2ρ 3 + 2ρ 4 − 2ρ 9 − 2ρ 10 − 2ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 1 − 2ν 2 ≤ 4ρ 1 + 2ρ 2 + 2ρ 4 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 2 − 2ν 3 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 9 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 3 − 2ν 1 ≤ 4ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 2ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + µ 2 − µ 1 + 2ν 3 − 2ν 1 ≤ 4ρ 1 + 2ρ 2 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 4ρ 11
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 1 − 2ν 2 − 2ν 3 ≤ 10ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 8ρ 11 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 2 − 2ν 1 − 2ν 3 ≤ 10ρ 2 + 4ρ 1 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 8ρ 11 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 2 − 3µ 1 + 4ν 1 − 2ν 2 − 2ν 3 ≤ 10ρ 2 + 4ρ 1 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 8ρ 11 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 2 − 2ν 1 − 2ν 3 ≤ 10ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 11 − 8ρ 10 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 3 − 2ν 1 − 2ν 2 ≤ 10ρ 3 + 4ρ 1 + 4ρ 2 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 8ρ 11 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 3 − 2ν 1 − 2ν 2 ≤ 10ρ 2 + 4ρ 1 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 11 − 8ρ 10 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 3 − 2ν 1 − 2ν 2 ≤ 10ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 5 + 4ρ 6 − 2ρ 4 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 10 − 8ρ 11 − 8ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 4ν 3 − 2ν 1 − 2ν 2 ≤ 10ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 12 − 8ρ 10 − 8ρ 11 ,
3λ 1 − 3λ 2 + 3µ 2 − 3µ 1 + 4ν 2 − 2ν 1 − 2ν 3 ≤ 10ρ 1 + 4ρ 2 + 4ρ 3 + 4ρ 4 + 4ρ 5 − 2ρ 6 − 2ρ 7 − 2ρ 8 − 2ρ 9 − 2ρ 12 − 8ρ 10 − 8ρ 11 .
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 1 + 2ν 2 − 4ν 3 ≤ 8ρ 1 + 8ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 10ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 1 + 2ν 3 − 4ν 2 ≤ 8ρ 1 + 8ρ 3 + 2ρ 2 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 10ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 1 + 2ν 3 − 4ν 2 ≤ 8ρ 1 + 8ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 12 − 10ρ 11 ,
3λ 1 − 3λ 2 + 3µ 2 − 3µ 1 + 2ν 1 + 2ν 2 − 4ν 3 ≤ 8ρ 1 + 8ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 12 − 10ρ 11 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 2 + 2ν 3 − 4ν 1 ≤ 8ρ 1 + 8ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 9 − 4ρ 7 − 4ρ 8 − 4ρ 10 − 4ρ 11 − 10ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 2 + 2ν 3 − 4ν 1 ≤ 8ρ 2 + 8ρ 3 + 2ρ 1 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 10ρ 12 ,
3λ 1 − 3λ 2 + 3µ 2 − 3µ 1 + 2ν 1 + 2ν 3 − 4ν 2 ≤ 8ρ 2 + 8ρ 3 + 2ρ 1 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 11 − 10ρ 12 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 2 + 2ν 3 − 4ν 1 ≤ 8ρ 1 + 8ρ 3 + 2ρ 2 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 10 − 4ρ 12 − 10ρ 11 ,
3λ 1 − 3λ 2 + 3µ 1 − 3µ 2 + 2ν 2 + 2ν 3 − 4ν 1 ≤ 8ρ 1 + 8ρ 2 + 2ρ 3 + 2ρ 4 + 2ρ 5 + 2ρ 6 + 2ρ 7 − 4ρ 8 − 4ρ 9 − 4ρ 11 − 4ρ 12 − 10ρ 10
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − ν 2 − ν 3 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − ν 1 − ν 3 ≤ 11ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − ν 1 − ν 3 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 5 + 2ρ 4 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − ν 1 − ν 3 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 8 − 4ρ 7 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − ν 1 − ν 3 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 11 − 10ρ 10 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 9 − 4ρ 7 − 4ρ 8 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 3 + 8ρ 1 + 8ρ 2 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 5 + 2ρ 4 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 8 − 4ρ 7 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 11 − 10ρ 10 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 6 + 2ρ 4 + 2ρ 5 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 5 + 2ρ 4 + 2ρ 6 − ρ 8 − 4ρ 7 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 5 + 2ρ 4 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 11 − 10ρ 10 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 8 − 4ρ 7 − 4ρ 9 − 7ρ 11 − 10ρ 10 − 10ρ 12 ,
6λ 1 − 6λ 2 − 3µ 2 + 3µ 1 + 2ν 3 − ν 1 − ν 2 ≤ 11ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 12 − 10ρ 10 − 10ρ 11 .
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 1 + ν 2 − 2ν 3 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 1 + ν 3 − 2ν 2 ≤ 10ρ 1 + 10ρ 3 + 7ρ 2 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 1 + ν 3 − 2ν 2 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 6 + ρ 5 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 1 + ν 3 − 2ν 2 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 9 − 5ρ 8 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 1 + ν 3 − 2ν 2 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 11ρ 11 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 9 − 5ρ 8 − 8ρ 10 − 8ρ 12 − 11ρ 11 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 2 + 10ρ 3 + 7ρ 1 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 3 + 7ρ 2 + 4ρ 4 + 4ρ 6 + ρ 5 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 3 + 7ρ 2 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 9 − 5ρ 8 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 3 + 7ρ 2 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 11ρ 11 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 5 + 4ρ 6 + ρ 4 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 6 + ρ 5 − 2ρ 7 − 2ρ 9 − 5ρ 8 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 6 + ρ 5 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 11ρ 11 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 8 − 2ρ 9 − 5ρ 7 − 8ρ 10 − 8ρ 11 − 11ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + ν 2 + ν 3 − 2ν 1 ≤ 10ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 11 − 8ρ 12 − 11ρ 10 .
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 1 + ν 2 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 4 + 4ρ 3 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 7 − 2ρ 6 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 10 − 8ρ 9 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 1 + ν 3 − 5ν 2 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 1 + ν 2 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 4 + 4ρ 3 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 6 + ρ 7 − 2ρ 5 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 7 − 2ρ 6 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 2 + ν 1 − 5ν 3 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 10 − 8ρ 9 − 8ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 3 + ν 2 − 5ν 1 ≤ 13ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 11 − 8ρ 9 − 8ρ 12 − 14ρ 10 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 4ν 3 + ν 2 − 5ν 1 ≤ 13ρ 2 + 10ρ 3 + 7ρ 4 + 4ρ 1 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 4ν 3 + ν 1 − 5ν 2 ≤ 13ρ 2 + 10ρ 3 + 7ρ 4 + 4ρ 1 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 8ρ 11 − 14ρ 12
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 − ν 2 − 4ν 3 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 2 − ν 1 − 4ν 3 ≤ 14ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 2 − 4ν 3 ≤ 14ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 1 + 8ρ 2 + 8ρ 4 + 5ρ 3 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 7 − ρ 6 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 10 − 7ρ 9 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 13ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 2 + 8ρ 1 + 8ρ 4 + 5ρ 3 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 7 − ρ 6 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 10 − 7ρ 9 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 2 + 8ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 13ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 − ν 3 − 4ν 2 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 8 − ρ 6 − 4ρ 7 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 3 − ν 2 − 4ν 1 ≤ 14ρ 3 + 8ρ 1 + 8ρ 4 + 5ρ 2 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 13ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 3 − ν 2 − 4ν 1 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 12 − 7ρ 9 − 10ρ 10 − 13ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 − ν 3 − 4ν 1 ≤ 14ρ 1 + 8ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 12 − 7ρ 9 − 10ρ 10 − 13ρ 11 .
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 7ν 1 − 2ν 2 − 5ν 3 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 2 − 5ν 3 ≤ 16ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 1 + 10ρ 2 + 7ρ 4 + 4ρ 3 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 2 − 5ν 3 ≤ 16ρ 1 + 10ρ 2 + 7ρ 4 + 4ρ 3 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 7 − 2ρ 6 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 10 − 8ρ 9 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 2 − 5ν 3 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 10 − 8ρ 9 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 2 − 5ν 3 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 2 + 10ρ 1 + 7ρ 4 + 4ρ 3 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 7 − 2ρ 6 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 10 − 8ρ 9 − 11ρ 11 − 14ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 2 + 10ρ 1 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 6 − 2ρ 7 − 2ρ 8 − 5ρ 9 − 8ρ 10 − 11ρ 12 − 14ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 7ν 1 − 2ν 3 − 5ν 2 ≤ 16ρ 1 + 10ρ 2 + 7ρ 3 + 4ρ 4 + 4ρ 5 + ρ 8 − 2ρ 6 − 2ρ 7 − 5ρ 9 − 8ρ 10 − 11ρ 11 − 14ρ 12 .
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 1 + 2ν 2 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 2 + 11ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 + 2ν 2 − 7ν 3 ≤ 14ρ 2 + 11ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 4 + 5ρ 3 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 + 2ν 2 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 4 + 5ρ 3 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 7 − ρ 6 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 1 − 3µ 2 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 10 − 7ρ 9 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 + 2ν 2 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 10 − 7ρ 9 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 1 + 2ν 2 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 16ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 2 + 11ρ 1 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 16ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 4 + 5ρ 3 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 16ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 6 + 2ρ 7 − ρ 5 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 11 − 16ρ 12 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 7 − ρ 6 − 4ρ 8 − 4ρ 9 − 7ρ 10 − 10ρ 12 − 16ρ 11 ,
6λ 1 − 6λ 2 + 3µ 2 − 3µ 1 + 5ν 2 + 2ν 1 − 7ν 3 ≤ 14ρ 1 + 11ρ 2 + 8ρ 3 + 5ρ 4 + 2ρ 5 + 2ρ 6 − ρ 7 − 4ρ 8 − 4ρ 10 − 7ρ 9 − 10ρ 12 − 16ρ 11
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 1 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 2 + 3ρ 1 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 1 + 3ρ 3 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 10 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 12 − 4ρ 11 ,
λ 1 − λ 2 − 2µ 2 + 2µ 1 + ν 3 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 − 2µ 2 + 2µ 1 + ν 3 − ν 2 ≤ 4ρ 3 + 3ρ 1 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 1 ≤ 4ρ 2 + 3ρ 3 + 2ρ 1 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 2 − λ 1 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 2 + 3ρ 3 + 2ρ 1 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 − 2µ 2 + 2µ 1 + ν 3 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 6 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 2 − λ 1 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 6 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 7 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 2 − λ 1 + 2µ 1 − 2µ 2 + ν 1 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 7 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 − 2µ 2 + 2µ 1 + ν 3 − ν 2 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 12 − 3ρ 10 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 12 − 3ρ 11 − 4ρ 10 ,
λ 2 − λ 1 + 2µ 1 − 2µ 2 + ν 2 − ν 3 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 12 − 3ρ 10 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 2 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 12 − 4ρ 10 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 9 − 2ρ 11 − 3ρ 12 − 4ρ 10 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 10 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 6 − ρ 7 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 6 − ρ 7 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 7 − 2ρ 11 − 3ρ 10 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 8 − 2ρ 12 − 3ρ 10 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 6 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 6 + ρ 4 − ρ 9 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 12 − 3ρ 10 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 3 + 2ρ 1 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 10 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 6 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 7 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 4 − ρ 9 − 2ρ 7 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 3 + 3ρ 2 + 2ρ 1 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 3 + 3ρ 1 + 2ρ 2 + 2ρ 4 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 3 + 3ρ 1 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 3 + 3ρ 1 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 1 + 2ρ 3 + 2ρ 5 + ρ 6 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 1 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 9 − 2ρ 7 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 1 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 12 − 4ρ 10 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 3 + 2ρ 1 + 2ρ 5 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 2 + 3ρ 3 + 2ρ 1 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 3 + 2ρ 2 + 2ρ 6 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 5 + ρ 4 − ρ 8 − 2ρ 9 − 2ρ 11 − 3ρ 12 − 4ρ 10 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 3 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 8 − 2ρ 10 − 3ρ 11 − 4ρ 12 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 2 + 2ρ 3 + 2ρ 4 + ρ 5 − ρ 7 − 2ρ 8 − 2ρ 10 − 3ρ 12 − 4ρ 11 ,
λ 1 − λ 2 + 2µ 1 − 2µ 2 + ν 3 − ν 1 ≤ 4ρ 1 + 3ρ 3 + 2ρ 2 + 2ρ 4 + ρ 5 − ρ 8 − 2ρ 9 − 2ρ 12 − 3ρ 10 − 4ρ 11 .
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 2 + 4ρ 1 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 4 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 10 − 4ρ 9 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 1 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 + 3µ 1 − 3µ 2 + 2ν 2 − 2ν 3 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 9 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 12 − 4ρ 9 − 4ρ 11 − 6ρ 10 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 7 − 2ρ 10 − 4ρ 8 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 6 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 2 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 6 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 5 + 2ρ 3 + 2ρ 6 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 3 + 4ρ 1 + 4ρ 4 + 2ρ 2 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 4ρ 7 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 2 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 4ρ 7 − 4ρ 11 − 6ρ 12 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 12 − 4ρ 9 − 4ρ 10 − 6ρ 11 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 2 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 12 − 4ρ 9 − 4ρ 10 − 6ρ 11 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 11 − 4ρ 9 − 4ρ 12 − 6ρ 10 ,
λ 1 − λ 2 + 3µ 1 − 3µ 2 + 2ν 3 − 2ν 1 ≤ 6ρ 2 + 4ρ 3 + 4ρ 4 + 2ρ 1 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 2 ≤ 6ρ 2 + 4ρ 3 + 4ρ 4 + 2ρ 1 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 6 − 2ρ 8 − 2ρ 12 − 4ρ 9 − 4ρ 10 − 6ρ 11 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 2 + 4ρ 1 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 9 − 2ρ 10 − 4ρ 7 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 6 + 2ρ 3 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 6 + 2ρ 3 + 2ρ 4 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 12 − 6ρ 11 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 2 + 4ρ 3 + 4ρ 4 + 2ρ 1 + 2ρ 5 − 2ρ 7 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 1 + 4ρ 2 + 4ρ 3 + 2ρ 4 + 2ρ 5 − 2ρ 8 − 2ρ 10 − 4ρ 7 − 4ρ 11 − 6ρ 12 ,
λ 2 − λ 1 − 3µ 2 + 3µ 1 + 2ν 3 − 2ν 1 ≤ 6ρ 3 + 4ρ 2 + 4ρ 4 + 2ρ 1 + 2ρ 5 − 2ρ 8 − 2ρ 9 − 4ρ 10 − 4ρ 11 − 6ρ 12 .""",
    (3, 3, 9): """2λ 1 − λ 2 − λ 3 ≤ 2ν 1 + 2ν 2 + 2ν 3 − ν 4 − ν 5 − ν 6 − ν 7 − ν 8 − ν 9 ,
λ 1 + λ 2 − 2λ 3 ≤ ν 1 + ν 2 + ν 3 + ν 4 + ν 5 + ν 6 − 2ν 7 − 2ν 8 − 2ν 9 ,
λ 1 + λ 2 − 2λ 3 + µ 1 + µ 2 − 2µ 3 ≤ 2ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 7 − ν 8 − 4ν 9 ,
λ 1 + λ 2 − 2λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 2ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 7 − ν 9 − 4ν 8 ,
λ 1 + λ 2 − 2λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 2ν 1 + 2ν 2 + 2ν 3 + 2ν 6 − ν 4 − ν 5 − ν 7 − ν 8 − 4ν 9 ,
λ 1 + λ 2 − 2λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 2ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 8 − ν 9 − 4ν 7 ,
2λ 1 − λ 2 − λ 3 + 2µ 1 − µ 2 − µ 3 ≤ 4ν 1 + ν 2 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 2ν 9 ,
2λ 1 − λ 2 − λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 4ν 2 + ν 1 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 2ν 9 ,
2λ 1 − λ 2 − λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 4ν 3 + ν 1 + ν 2 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 2ν 9 ,
2λ 1 − λ 2 − λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 4ν 1 + ν 2 + ν 3 + ν 5 + ν 6 − 2ν 4 − 2ν 7 − 2ν 8 − 2ν 9
λ 1 + λ 2 − 2λ 3 + 2µ 1 − µ 2 − µ 3 ≤ 3ν 1 + 3ν 2 − 3ν 8 − 3ν 9 ,
λ 1 + λ 3 − 2λ 2 + 2µ 1 − µ 2 − µ 3 ≤ 3ν 1 + 3ν 3 − 3ν 8 − 3ν 9 ,
λ 1 + λ 2 − 2λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 3ν 1 + 3ν 2 − 3ν 7 − 3ν 9 ,
λ 2 + λ 3 − 2λ 1 + 2µ 1 − µ 2 − µ 3 ≤ 3ν 2 + 3ν 3 − 3ν 8 − 3ν 9 ,
λ 1 + λ 3 − 2λ 2 + 2µ 2 − µ 1 − µ 3 ≤ 3ν 2 + 3ν 3 − 3ν 8 − 3ν 9 ,
λ 1 + λ 2 − 2λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 3ν 2 + 3ν 3 − 3ν 8 − 3ν 9 ,
λ 2 + λ 3 − 2λ 1 + 2µ 1 − µ 2 − µ 3 ≤ 3ν 1 + 3ν 2 − 3ν 7 − 3ν 8 ,
λ 1 + λ 3 − 2λ 2 + 2µ 2 − µ 1 − µ 3 ≤ 3ν 1 + 3ν 2 − 3ν 7 − 3ν 8 ,
λ 1 + λ 2 − 2λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 3ν 1 + 3ν 2 − 3ν 7 − 3ν 8
λ 1 − λ 3 + µ 1 − µ 3 ≤ 2ν 1 + ν 2 + ν 3 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 1 + ν 3 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 1 − µ 2 ≤ 2ν 1 + ν 2 + ν 5 − ν 7 − ν 8 − 2ν 9 ,
λ 2 − λ 3 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 3 − ν 5 − ν 8 − 2ν 9 ,
λ 2 − λ 3 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 7 − 2ν 9 ,
λ 1 − λ 3 + µ 2 − µ 3 ≤ 2ν 2 + ν 1 + ν 3 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + ν 2 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 1 − µ 2 ≤ 2ν 1 + ν 2 + ν 3 − ν 7 − ν 9 − 2ν 8 ,
λ 1 − λ 2 + µ 2 − µ 3 ≤ 2ν 2 + ν 1 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 2 − µ 3 ≤ 2ν 2 + ν 1 + ν 3 − ν 6 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 9 − 2ν 8 ,
λ 1 − λ 2 + µ 2 − µ 3 ≤ 2ν 1 + ν 2 + ν 4 − ν 7 − ν 9 − 2ν 8 ,
λ 1 − λ 2 + µ 2 − µ 3 ≤ 2ν 2 + ν 1 + ν 3 − ν 7 − ν 9 − 2ν 8 ,
λ 1 − λ 2 + µ 2 − µ 1 ≤ 2ν 2 + ν 3 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 2 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 7 − 2ν 8 ,
λ 2 − λ 3 + µ 3 − µ 2 ≤ 2ν 2 + ν 3 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 2 − λ 3 + µ 3 − µ 2 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 7 − 2ν 8 ,
λ 1 − λ 3 + µ 3 − µ 1 ≤ 2ν 3 + ν 1 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 3 − µ 1 ≤ 2ν 2 + ν 3 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 3 + µ 3 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 7 − 2ν 8 ,
λ 1 − λ 3 + µ 3 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 9 − 2ν 7 ,
λ 1 − λ 2 + µ 3 − µ 2 ≤ 2ν 3 + ν 1 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 2 − λ 3 + µ 2 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 9 − 2ν 7 ,
λ 1 − λ 2 + µ 3 − µ 1 ≤ 2ν 3 + ν 2 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 3 − µ 1 ≤ 2ν 3 + ν 1 + ν 5 − ν 7 − ν 8 − 2ν 9 ,
λ 1 − λ 2 + µ 3 − µ 1 ≤ 2ν 2 + ν 1 + ν 3 − ν 6 − ν 7 − 2ν 8 ,
λ 1 − λ 2 + µ 3 − µ 1 ≤ 2ν 2 + ν 3 + ν 4 − ν 7 − ν 9 − 2ν 8 ,
λ 1 − λ 2 + µ 3 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 8 − 2ν 7 ,
λ 2 − λ 3 + µ 3 − µ 1 ≤ 2ν 3 + ν 2 + ν 4 − ν 7 − ν 8 − 2ν 9 ,
λ 2 − λ 3 + µ 3 − µ 1 ≤ 2ν 2 + ν 1 + ν 3 − ν 6 − ν 7 − 2ν 8 ,
λ 2 − λ 3 + µ 3 − µ 1 ≤ 2ν 2 + ν 3 + ν 4 − ν 7 − ν 9 − 2ν 8 ,
λ 2 − λ 3 + µ 3 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 5 − ν 9 − 2ν 7 ,
λ 2 − λ 3 + µ 3 − µ 1 ≤ 2ν 1 + ν 2 + ν 3 − ν 6 − ν 8 − 2ν 7 .
3λ 1 − 3λ 3 + µ 1 + µ 2 − 2µ 3 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 3 + ν 2 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 2 − 3λ 3 + µ 1 + µ 2 − 2µ 3 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 1 − 3λ 2 + µ 1 + µ 2 − 2µ 3 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 2 + 4ν 3 + ν 1 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 2 − 3λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 2 + 4ν 3 + ν 1 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 3 + ν 2 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 5 + ν 6 − 2ν 4 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 2 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 5 + ν 6 − 2ν 4 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 2 − 3λ 3 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 3 + ν 2 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 1 − 3λ 2 + µ 1 + µ 3 − 2µ 2 ≤ 4ν 1 + 4ν 3 + ν 2 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 1 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 2ν 9 − 5ν 7 ,
3λ 1 − 3λ 2 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 3 + ν 2 + ν 5 + ν 6 − 2ν 4 − 2ν 7 − 2ν 8 − 5ν 9 ,
3λ 1 − 3λ 2 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 2 + 4ν 3 + ν 1 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 2 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 2 + 4ν 3 + ν 1 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 2ν 9 − 5ν 8 ,
3λ 2 − 3λ 3 + µ 2 + µ 3 − 2µ 1 ≤ 4ν 1 + 4ν 2 + ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 8 − 2ν 9 − 5ν 7 .
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 9 − 4ν 7 − 4ν 8 ,
3λ 1 − 3λ 2 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 9 − 4ν 7 − 4ν 8 ,
3λ 2 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 6 − ν 4 − ν 5 − ν 8 − 4ν 7 − 4ν 9 ,
3λ 1 − 3λ 2 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 9 − 4ν 7 − 4ν 8 ,
3λ 2 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 9 − 4ν 7 − 4ν 8 ,
3λ 1 − 3λ 3 + 2µ 1 − µ 2 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 2 − 3λ 3 + 2µ 1 − µ 2 − µ 3 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 2 + 2µ 1 − µ 2 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 8 − 4ν 7 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 3 + 2ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 2 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 6 − ν 4 − ν 5 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 2 − 3λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 6 − ν 4 − ν 5 − ν 7 − 4ν 8 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 1 + 2ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − ν 8 − 4ν 7 − 4ν 9 ,
3λ 1 − 3λ 3 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 8 − 4ν 7 − 4ν 9 ,
3λ 2 − 3λ 3 + 2µ 2 − µ 1 − µ 3 ≤ 5ν 2 + 2ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − ν 8 − 4ν 7 − 4ν 9 ,
3λ 1 − 3λ 2 + 2µ 3 − µ 1 − µ 2 ≤ 5ν 3 + 2ν 1 + 2ν 2 + 2ν 5 − ν 4 − ν 6 − ν 7 − 4ν 8 − 4ν 9
3λ 1 − 3λ 3 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 2 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 2 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 5ν 7 − 8ν 9 ,
3λ 1 − 3λ 2 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 5ν 9 − 8ν 8 ,
3λ 1 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 6 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 1 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 8 − 5ν 7 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 8 − 5ν 7 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 5ν 7 − 8ν 9 ,
3λ 2 − 3λ 3 + 4µ 1 + µ 3 − 5µ 2 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 2 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 1 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 7 − 5ν 9 − 8ν 8 ,
3λ 1 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 7 − 5ν 9 − 8ν 8 ,
3λ 1 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 7 − 5ν 9 − 8ν 8 ,
3λ 2 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 3 + 4ν 1 + 4ν 4 + ν 2 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 3 + 4ν 4 + ν 1 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 2 + 4ν 3 + 4ν 4 + ν 1 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 2 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 6 + ν 3 + ν 4 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 6 + ν 3 + ν 4 − 2ν 5 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 5 + ν 3 + ν 6 − 2ν 4 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 2 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 2 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 2 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 2 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 1 + µ 2 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 9 − 5ν 8 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 3 + 4ν 2 + 4ν 4 + ν 1 + ν 5 − 2ν 6 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 6 + ν 3 + ν 5 − 2ν 4 − 2ν 7 − 5ν 8 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 6 + ν 3 + ν 4 − 2ν 5 − 2ν 8 − 5ν 7 − 8ν 9 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 9 − 5ν 7 − 8ν 8 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 3 + 4ν 4 + ν 1 + ν 5 − 2ν 6 − 2ν 7 − 5ν 9 − 8ν 8 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 6 − 2ν 5 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 8 − 5ν 9 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 3 + ν 4 + ν 6 − 2ν 5 − 2ν 9 − 5ν 8 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 1 + 4ν 2 + 4ν 4 + ν 3 + ν 5 − 2ν 6 − 2ν 9 − 5ν 8 − 8ν 7 ,
3λ 3 − 3λ 1 + 4µ 2 + µ 1 − 5µ 3 ≤ 7ν 2 + 4ν 1 + 4ν 3 + ν 4 + ν 5 − 2ν 6 − 2ν 9 − 5ν 8 − 8ν 7
3λ 1 − 3λ 3 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 2 + 5ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 3 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 2 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 2 − µ 1 − 4µ 3 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 3 + 2ν 2 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 1 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 3 + 2ν 2 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 9 − 7ν 8 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 1 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 3 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 2 − 3λ 3 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 3 + 5ν 2 + 2ν 1 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9
3λ 2 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 6 − ν 4 − ν 7 − 4ν 5 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 6 − ν 7 − 4ν 4 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 5 − ν 6 − ν 7 − 4ν 4 − 4ν 8 − 7ν 9 ,
3λ 2 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 9 − 4ν 6 − 4ν 7 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 9 − 4ν 6 − 4ν 7 − 7ν 8 ,
3λ 2 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 2 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 2 − 4µ 3 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 8 − 4ν 6 − 4ν 9 − 7ν 7 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 2 + 2ν 1 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 5 − ν 4 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 2 + 2ν 1 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 6 − ν 5 − ν 7 − 4ν 4 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 3 + 2ν 2 + 2ν 5 − ν 6 − ν 7 − 4ν 4 − 4ν 8 − 7ν 9 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 1 + 2ν 3 + 2ν 4 − ν 5 − ν 9 − 4ν 6 − 4ν 7 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 4 − ν 5 − ν 7 − 4ν 6 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 1 + 2ν 2 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 2 + 5ν 3 + 2ν 1 + 2ν 5 − ν 4 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 3 + 5ν 2 + 2ν 1 + 2ν 4 − ν 5 − ν 6 − 4ν 7 − 4ν 9 − 7ν 8 ,
3λ 3 − 3λ 1 + 5µ 1 − µ 3 − 4µ 2 ≤ 8ν 1 + 5ν 2 + 2ν 3 + 2ν 4 − ν 5 − ν 9 − 4ν 6 − 4ν 8 − 7ν 7""",
    (2, 2, 2, 2, 16): """# QUBIT_COORDS 0 1 2 3
2ρ ≤ τ 1 + τ 2 + τ 3 + τ 4 + τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − τ 12 − τ 13 − τ 14 − τ 15 − τ 16 ,
2ν + 2ρ ≤ 2τ 1 + 2τ 2 + 2τ 3 + 2τ 4 − 2τ 13 − 2τ 14 − 2τ 15 − 2τ 16 ,
2µ + 2ν + 2ρ ≤ 3τ 1 + 3τ 2 + τ 3 + τ 4 + τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − τ 12 − τ 13 − τ 14 − 3τ 15 − 3τ 16 ,
2µ + 2ν + 4ρ ≤ 4τ 1 + 4τ 2 + 2τ 3 + 2τ 4 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 13 − 2τ 14 − 4τ 15 − 4τ 16 ,
2λ + 2µ + 2ν + 2ρ ≤ 4τ 1 + 2τ 2 + 2τ 3 + 2τ 4 + 2τ 5 − 2τ 12 − 2τ 13 − 2τ 14 − 2τ 15 − 4τ 16 ,
2λ + 2µ + 2ν − 2ρ ≤ 4τ 2 + 2τ 1 + 2τ 3 + 2τ 4 + 2τ 5 − 2τ 12 − 2τ 13 − 2τ 14 − 2τ 15 − 4τ 16 ,
2λ + 2µ + 2ν − 2ρ ≤ 4τ 1 + 2τ 2 + 2τ 3 + 2τ 4 + 2τ 5 − 2τ 12 − 2τ 13 − 2τ 14 − 2τ 16 − 4τ 15 ,
2λ + 2µ + 2ν + 4ρ ≤ 5τ 1 + 3τ 2 + 3τ 3 + 3τ 4 + τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − τ 12 − 3τ 13 − 3τ 14 − 3τ 15 − 5τ 16 ,
2λ + 2µ − 2ν + 4ρ ≤ 5τ 2 + 3τ 1 + 3τ 3 + 3τ 4 + τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − τ 12 − 3τ 13 − 3τ 14 − 3τ 15 − 5τ 16 ,
2λ + 2µ − 2ν + 4ρ ≤ 5τ 1 + 3τ 2 + 3τ 3 + 3τ 4 + τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − τ 12 − 3τ 13 − 3τ 14 − 3τ 16 − 5τ 15 ,
2λ + 2µ + 2ν + 6ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ + 2µ − 2ν + 6ρ ≤ 6τ 2 + 4τ 1 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ + 2µ − 2ν + 6ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 8 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ + 2µ − 2ν + 6ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 9 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ + 2µ − 2ν + 6ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 4τ 16 − 6τ 15 ,
2λ + 2µ + 4ν + 4ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 2τ 4 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ − 2µ + 4ν + 4ρ ≤ 6τ 2 + 4τ 1 + 4τ 3 + 2τ 4 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ − 2µ + 4ν + 4ρ ≤ 6τ 1 + 4τ 2 + 4τ 4 + 2τ 3 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 13 − 4τ 14 − 4τ 15 − 6τ 16 ,
2λ − 2µ + 4ν + 4ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 2τ 4 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 14 − 4τ 13 − 4τ 15 − 6τ 16 ,
2λ − 2µ + 4ν + 4ρ ≤ 6τ 1 + 4τ 2 + 4τ 3 + 2τ 4 + 2τ 5 + 2τ 6 − 2τ 11 − 2τ 12 − 2τ 13 − 4τ 14 − 4τ 16 − 6τ 15
2λ + 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 3 + 3τ 4 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 13 − 5τ 14 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 2 + 5τ 1 + 5τ 3 + 3τ 4 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 13 − 5τ 14 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 4 + 3τ 3 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 13 − 5τ 14 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 3 + 3τ 4 + 3τ 6 + τ 5 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 13 − 5τ 14 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 3 + 3τ 4 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 12 − 3τ 11 − 3τ 13 − 5τ 14 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 3 + 3τ 4 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 14 − 5τ 13 − 5τ 15 − 7τ 16 ,
2λ − 2µ + 4ν + 6ρ ≤ 7τ 1 + 5τ 2 + 5τ 3 + 3τ 4 + 3τ 5 + τ 6 + τ 7 + τ 8 − τ 9 − τ 10 − τ 11 − 3τ 12 − 3τ 13 − 5τ 14 − 5τ 16 − 7τ 15 ,
2λ + 4µ + 4ν + 6ρ ≤ 8τ 1 + 6τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 6τ 15 − 8τ 16 ,
−2λ + 4µ + 4ν + 6ρ ≤ 8τ 2 + 6τ 1 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 6τ 15 − 8τ 16 ,
−2λ + 4µ + 4ν + 6ρ ≤ 8τ 1 + 6τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 8 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 6τ 15 − 8τ 16 ,
−2λ + 4µ + 4ν + 6ρ ≤ 8τ 1 + 6τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 9 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 6τ 15 − 8τ 16 ,
−2λ + 4µ + 4ν + 6ρ ≤ 8τ 1 + 6τ 2 + 4τ 3 + 4τ 4 + 2τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 2τ 12 − 4τ 13 − 4τ 14 − 6τ 16 − 8τ 15 ,
2λ + 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 2 + 6τ 1 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 4 + 4τ 3 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 6 + 2τ 5 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 8 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 9 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 12 − 4τ 11 − 4τ 13 − 6τ 14 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 14 − 6τ 13 − 6τ 15 − 8τ 16 ,
2λ − 2µ + 4ν + 8ρ ≤ 8τ 1 + 6τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 6τ 16 − 8τ 15 ,
2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 2 + 8τ 1 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 4 + 4τ 3 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 6 + 2τ 5 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 8 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 9 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 12 − 4τ 11 − 4τ 13 − 6τ 14 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 14 − 6τ 13 − 8τ 15 − 10τ 16 ,
−2λ + 4µ + 6ν + 8ρ ≤ 10τ 1 + 8τ 2 + 6τ 3 + 4τ 4 + 4τ 5 + 2τ 6 + 2τ 7 − 2τ 10 − 2τ 11 − 4τ 12 − 4τ 13 − 6τ 14 − 8τ 16 − 10τ 15""",
}

#: Scenarios :math:`(d_1,\dots,d_n)`, corresponding to :math:`\times_i GL(d_i)`-representation on :math:`\bigotimes_i \mathbb C^{d_i}`.
KLYACHKO_QMP_SCENARIOS = sorted(KLYACHKO_QMP_DATA.keys())

#: Scenarios :math:`(d_1,\dots,d_n)`, corresponding to :math:`\times_i GL(d_i)`-representation on :math:`\bigotimes_i \mathbb C^{d_i}` for which Klyachko's inequalities do not contain any mistake.
KLYACHKO_GOOD_QMP_SCENARIOS = [
    dims for dims in KLYACHKO_QMP_SCENARIOS if dims != (2, 2, 3, 12)
]
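# With the data above, these lists include scenarios such as (3, 3, 9) and
# (2, 2, 2, 2, 16); (2, 2, 3, 12) remains in the raw list but is excluded from
# the "good" list, since Klyachko's published inequalities for that scenario
# contain a mistake (see the docstrings above).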


def _parse_mixed_ieq(dims, s, qubit_coords=()):
    """Parse a mixed state inequality for distinguishable particles in Klyachko's format."""
    pmap = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}
    v = [0] * sum(dims)

    def p(side, overall_sign):
        overall_factor = 2 if qubit_coords else 1
        todo = side.split()
        while todo:
            assert todo[0] in ["+", "-"]
            sign = 1 if todo[0] == "+" else -1
            if todo[1] in pmap:
                # no explicit coefficient, so insert an implicit "1"
                todo = [todo[0]] + ["1"] + todo[1:]
            coeff = int(todo[1])
            party = pmap[todo[2]]
            if party in qubit_coords:
                # single traceless qubit coordinate: expand into the pair (+c, -c)
                v[sum(dims[:party])] = overall_sign * sign * coeff
                v[sum(dims[:party]) + 1] = -overall_sign * sign * coeff
                todo = todo[3:]
            else:
                idx = int(todo[3])
                v[sum(dims[:party]) + idx - 1] = (
                    overall_factor * overall_sign * sign * coeff
                )
                todo = todo[4:]
        return v

    s = (
        s.rstrip(" ,.")
        .replace("−", "-")
        .replace("λ", " A")
        .replace("µ", " B")
        .replace("ν", " C")
        .replace("ρ", " D")
        .replace("τ", " E")
    )
    lhs, rhs = map(lambda s: str(s).strip(), s.split("≤"))
    p("+ " + lhs, -1)
    p("+ " + rhs, 1)
    return (vector(v), 0)
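# A worked example of the parsing convention above (an illustrative sketch,
# not part of the original data; the output shown assumes a Sage environment
# providing ``vector``):
#
#   >>> H, z = _parse_mixed_ieq(
#   ...     (3, 3, 9),
#   ...     "2λ 1 − λ 2 − λ 3 ≤ 2ν 1 + 2ν 2 + 2ν 3 − ν 4 − ν 5 − ν 6 − ν 7 − ν 8 − ν 9 ,",
#   ... )
#   >>> list(H), z
#   ([-2, 1, 1, 0, 0, 0, 2, 2, 2, -1, -1, -1, -1, -1, -1], 0)
#
# That is, ``lhs ≤ rhs`` is stored as ``H·x ≥ z`` with the left-hand
# coefficients negated, and each party occupying a contiguous coordinate
# block (λ → 0..2, µ → 3..5, ν → 6..14 in this scenario).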


def _klyachko_qmp_bare_ieqs(dims):
    """Return bare inequalities from Klyachko's paper."""
    # the last dimension must equal the product of the others
    assert dims[-1] == prod(dims[:-1])

    # retrieve inequalities from data file
    qubit_coords = []
    ieqs = []
    for line in KLYACHKO_QMP_DATA[dims].splitlines():
        line = line.strip()
        if not line:
            continue

        # single qubit coordinate metadata?
        _, sep, after = line.partition("# QUBIT_COORDS")
        if sep:
            qubit_coords = list(map(int, after.split()))
            continue

        # parse ieq
        ieq = _parse_mixed_ieq(dims, line, qubit_coords)
        ieqs.append(ieq)
    return ieqs
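# Note: a ``# QUBIT_COORDS i j ...`` line inside a data block (as in the
# (2, 2, 2, 2, 16) scenario above) marks the parties whose coefficients are
# written in terms of a single traceless qubit coordinate; ``_parse_mixed_ieq``
# expands each such coefficient ``c`` into the pair ``(+c, -c)``, while the
# coefficients of all other parties are doubled (``overall_factor``).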


def _find_qmp_scenario(dims):
    """Find ``dims_klyachko`` and a permutation ``pi`` such that ``dims_permuted = (pi * dims) <= dims_klyachko``.

    Return the triple ``(pi, dims_permuted, dims_klyachko)``."""
    for dims_klyachko in KLYACHKO_GOOD_QMP_SCENARIOS:
        if len(dims) != len(dims_klyachko):
            continue
        for pi in Permutations(len(dims)):
            dims_permuted = perm_action(pi, dims)
            if all(x <= y for (x, y) in zip(dims_permuted, dims_klyachko)):
                return pi, dims_permuted, dims_klyachko
    raise Exception("Cannot obtain %s from Klyachko's scenarios." % (dims,))


def klyachko_qmp_hrepr(dims, bare=False, irred=True):
    r"""Return the moment polytope for the :math:`\times_i GL(d_i)`-representation on :math:`\bigotimes_i \mathbb C^{d_i}` as computed in `Klyachko (2004) <https://arxiv.org/abs/quant-ph/0409113>`_.

    See :data:`KLYACHKO_QMP_SCENARIOS` and :data:`KLYACHKO_GOOD_QMP_SCENARIOS` for available scenarios.
    Sub-scenarios of these scenarios are also supported (i.e., :math:`d'_{\pi(i)} \leq d_i` for some permutation :math:`\pi`).

    :param dims: the dimensions :math:`(d_1,\dots,d_n)`.
    :param bare: if ``True`` then permutations, positivity, and Weyl chamber inequalities are omitted.
    :param irred: if ``True`` then an irredundant H-representation is returned.
    :rtype: :class:`moment_polytopes.HRepr`
    """
    # look up klyachko scenario
    dims = tuple(dims)
    if bare:
        dims_klyachko = tuple(dims)
        assert dims_klyachko in KLYACHKO_QMP_SCENARIOS
    else:
        pi, dims_permuted, dims_klyachko = _find_qmp_scenario(dims)
        pi_inverse = pi.inverse()

    # fetch bare inequalities
    bare_ieqs = _klyachko_qmp_bare_ieqs(dims_klyachko)
    if bare:
        hrepr = HRepr(ieqs=bare_ieqs)
        return hrepr.irred() if irred else hrepr

    # permute and truncate bare inequalities
    stab = StabilizerGroup(dims_klyachko)
    ieqs = set()
    for H, z in bare_ieqs:
        # extract
        hs = [
            tuple(H[sum(dims_klyachko[:i]) : sum(dims_klyachko[: i + 1])])
            for i in range(len(dims_klyachko))
        ]

        # permute subsystems
        for hs_permuted in stab.orbit([hs]):
            # truncate
            hs_permuted = [h[:d] for (h, d) in zip(hs_permuted, dims_permuted)]

            # permute back
            hs_permuted = perm_action(pi_inverse, hs_permuted)
            H_permuted = sum(hs_permuted, ())

            # add ieq
            ieq = (H_permuted, z)
            ieqs.add(ieq)

    # add positivity for all parties [[ more conceptually, we should ONLY add lambda_{AB,ab} >= 0 and get the other ones either implicitly (if dims_permuted == dims_klyachko) or from tracing out the Weyl chamber inequalities ]]
    for k in range(len(dims)):
        H, z = (
            (0,) * sum(dims[:k])
            + (0,) * (dims[k] - 1)
            + (1,)
            + (0,) * sum(dims[k + 1 :]),
            0,
        )
        ieqs.add((H, z))

    # intersect with reduced Weyl chamber
    R = external_tensor_product(dims)
    hrepr = HRepr(ieqs=ieqs) & R.reduced_positive_weyl_chamber_hrepr

    # make irredundant?
    return hrepr.irred() if irred else hrepr
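# Minimal usage sketch (assuming a Sage session with this module imported;
# (3, 3, 9) is tabulated above, and (3, 3, 8) is a sub-scenario of it):
#
#   >>> hrepr = klyachko_qmp_hrepr((3, 3, 9))
#   >>> hrepr_sub = klyachko_qmp_hrepr((3, 3, 8))
#   >>> hrepr_bare = klyachko_qmp_hrepr((3, 3, 9), bare=True)  # tables only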


def higuchi_hrepr(num_qubits=3):
    """Return moment polytope for multi-qubit pure states computed by `Higuchi, Sudbery, and Szulc (2003) <https://arxiv.org/abs/quant-ph/0209085>`_.

    :param num_qubits: the number of qubits.
    :rtype: :class:`moment_polytopes.HRepr`
    """
    # polygonal inequalities
    ieqs = []
    for k in range(num_qubits):
        H, z = (-1, 0) * k + (1, 0) + (-1, 0) * (num_qubits - k - 1), 2 - num_qubits
        ieqs.append((H, z))

    # positivity
    for k in range(num_qubits):
        H, z = (0, 0) * k + (0, 1) + (0, 0) * (num_qubits - k - 1), 0
        ieqs.append((H, z))

    # bring inequalities into canonical form (traceless, integral components)
    dims = [2] * num_qubits
    ieqs = [qmp.facet_normal_form(dims, ieq) for ieq in ieqs]
    R = external_tensor_product(dims)
    return HRepr(ieqs=ieqs) & R.reduced_positive_weyl_chamber_hrepr


def bravyi_hrepr():
    r"""Return moment polytope for :math:`\mathbb C^2 \otimes \mathbb C^2 \otimes \mathbb C^4` computed by `Bravyi (2004) <https://arxiv.org/abs/quant-ph/0301014>`_.

    :rtype: :class:`moment_polytopes.HRepr`
    """
    ieqs = [
        ((-1, 0, 0, 0, 1, 1, 0, 0), 0),
        ((0, 0, -1, 0, 1, 1, 0, 0), 0),
        ((-1, 0, -1, 0, 1, 0, 0, -1), -1),
        ((-1, 0, 1, 0, 1, 0, -1, 0), 0),
        ((-1, 0, 1, 0, 0, 1, 0, -1), 0),
        ((1, 0, -1, 0, 1, 0, -1, 0), 0),
        ((1, 0, -1, 0, 0, 1, 0, -1), 0),
        ((0, 1, 0, 0, 0, 0, 0, 0), 0),
        ((0, 0, 0, 1, 0, 0, 0, 0), 0),
        ((0, 0, 0, 0, 0, 0, 0, 1), 0),
    ]

    # bring inequalities into canonical form (traceless, integral components)
    dims = [2, 2, 4]
    ieqs = [qmp.facet_normal_form(dims, ieq) for ieq in ieqs]
    R = external_tensor_product(dims)
    return HRepr(ieqs=ieqs) & R.reduced_positive_weyl_chamber_hrepr


def franz_hrepr():
    r"""Return moment polytope for :math:`\mathbb C^3 \otimes \mathbb C^3 \otimes \mathbb C^3` computed by `Franz (2002) <http://www.emis.de/journals/JLT/vol.12_no.2/16.html>`_.

    :rtype: :class:`moment_polytopes.HRepr`
    """
    dims = [3, 3, 3]
    stab = StabilizerGroup(dims)

    # Franz' raw inequalities <H,lambda> <= 0 with permutations removed (observe the sign!)
    franz_data = """
    1 0 0 ; 1 0 0 ; -2 -1 -1
    1 1 0 ; 1 0 0 ; -2 -2 -1
    0 1 0 ; 1 0 0 ; -1 -2 -1
    2 0 1 ; 2 0 1 ; -4 -2 -3
    2 0 1 ; 2 1 0 ; -4 -3 -2
    1 2 0 ; 2 0 1 ; -3 -4 -2
    1 2 0 ; 2 1 0 ; -4 -3 -2
    0 0 0 ; 0 0 0 ; 0 0 -1
    """

    # convert normal vectors to our format and add in permutations
    hss_wo_perms = [
        [tuple([-int(x) for x in h.split()]) for h in line.split(";")]
        for line in franz_data.splitlines()
        if line.strip()
    ]
    hss = stab.orbit(hss_wo_perms)
    Hs = [sum(hs, ()) for hs in hss]
    ieqs = [(vector(H), 0) for H in Hs]

    # bring inequalities into canonical form (traceless components)
    ieqs = [qmp.facet_normal_form(dims, ieq) for ieq in ieqs]
    R = external_tensor_product(dims)
    return HRepr(ieqs=ieqs) & R.reduced_positive_weyl_chamber_hrepr
| 76.597104 | 227 | 0.417592 | 29,780 | 89,925 | 1.462257 | 0.011249 | 0.044642 | 0.016052 | 0.014422 | 0.900404 | 0.882791 | 0.871676 | 0.862697 | 0.852042 | 0.842328 | 0 | 0.399356 | 0.440245 | 89,925 | 1,173 | 228 | 76.662404 | 0.341962 | 0.041068 | 0 | 0.06968 | 0 | 0.713748 | 0.91129 | 0.001115 | 0 | 0 | 0 | 0 | 0.004708 | 1 | 0.009416 | false | 0 | 0.002825 | 0 | 0.022599 | 0.000942 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
6a3b46123cf6ef6b4e5683cd1e1089791cc7f154 | 15,345 | py | Python | sdk/python/pulumi_aws/acm/certificate_validation.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 260 | 2018-06-18T14:57:00.000Z | 2022-03-29T11:41:03.000Z | sdk/python/pulumi_aws/acm/certificate_validation.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 1,154 | 2018-06-19T20:38:20.000Z | 2022-03-31T19:48:16.000Z | sdk/python/pulumi_aws/acm/certificate_validation.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 115 | 2018-06-28T03:20:27.000Z | 2022-03-29T11:41:06.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['CertificateValidationArgs', 'CertificateValidation']
@pulumi.input_type
class CertificateValidationArgs:
def __init__(__self__, *,
certificate_arn: pulumi.Input[str],
validation_record_fqdns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a CertificateValidation resource.
:param pulumi.Input[str] certificate_arn: The ARN of the certificate that is being validated.
:param pulumi.Input[Sequence[pulumi.Input[str]]] validation_record_fqdns: List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit dependency on the resource that is implementing the validation
"""
pulumi.set(__self__, "certificate_arn", certificate_arn)
if validation_record_fqdns is not None:
pulumi.set(__self__, "validation_record_fqdns", validation_record_fqdns)
@property
@pulumi.getter(name="certificateArn")
def certificate_arn(self) -> pulumi.Input[str]:
"""
The ARN of the certificate that is being validated.
"""
return pulumi.get(self, "certificate_arn")
@certificate_arn.setter
def certificate_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "certificate_arn", value)
@property
@pulumi.getter(name="validationRecordFqdns")
def validation_record_fqdns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit dependency on the resource that is implementing the validation
"""
return pulumi.get(self, "validation_record_fqdns")
@validation_record_fqdns.setter
def validation_record_fqdns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "validation_record_fqdns", value)
@pulumi.input_type
class _CertificateValidationState:
def __init__(__self__, *,
certificate_arn: Optional[pulumi.Input[str]] = None,
validation_record_fqdns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering CertificateValidation resources.
:param pulumi.Input[str] certificate_arn: The ARN of the certificate that is being validated.
:param pulumi.Input[Sequence[pulumi.Input[str]]] validation_record_fqdns: List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit dependency on the resource that is implementing the validation
"""
if certificate_arn is not None:
pulumi.set(__self__, "certificate_arn", certificate_arn)
if validation_record_fqdns is not None:
pulumi.set(__self__, "validation_record_fqdns", validation_record_fqdns)
@property
@pulumi.getter(name="certificateArn")
def certificate_arn(self) -> Optional[pulumi.Input[str]]:
"""
The ARN of the certificate that is being validated.
"""
return pulumi.get(self, "certificate_arn")
@certificate_arn.setter
def certificate_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate_arn", value)
@property
@pulumi.getter(name="validationRecordFqdns")
def validation_record_fqdns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit dependency on the resource that is implementing the validation
"""
return pulumi.get(self, "validation_record_fqdns")
@validation_record_fqdns.setter
def validation_record_fqdns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "validation_record_fqdns", value)
class CertificateValidation(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
certificate_arn: Optional[pulumi.Input[str]] = None,
validation_record_fqdns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
This resource represents a successful validation of an ACM certificate in concert
with other resources.
Most commonly, this resource is used together with `route53.Record` and
`acm.Certificate` to request a DNS validated certificate,
deploy the required validation records and wait for validation to complete.
> **WARNING:** This resource implements a part of the validation workflow. It does not represent a real-world entity in AWS, therefore changing or deleting this resource on its own has no immediate effect.
## Example Usage
### DNS Validation with Route 53
```python
import pulumi
import pulumi_aws as aws
example_certificate = aws.acm.Certificate("exampleCertificate",
domain_name="example.com",
validation_method="DNS")
example_zone = aws.route53.get_zone(name="example.com",
private_zone=False)
example_record = []
for range in [{"key": k, "value": v} for [k, v] in enumerate({dvo.domainName: {
name: dvo.resourceRecordName,
record: dvo.resourceRecordValue,
type: dvo.resourceRecordType,
} for dvo in example_certificate.domainValidationOptions})]:
example_record.append(aws.route53.Record(f"exampleRecord-{range['key']}",
allow_overwrite=True,
name=range["value"]["name"],
records=[range["value"]["record"]],
ttl=60,
type=range["value"]["type"],
zone_id=example_zone.zone_id))
example_certificate_validation = aws.acm.CertificateValidation("exampleCertificateValidation",
certificate_arn=example_certificate.arn,
validation_record_fqdns=example_record.apply(lambda example_record: [record.fqdn for record in example_record]))
# ... other configuration ...
example_listener = aws.lb.Listener("exampleListener", certificate_arn=example_certificate_validation.certificate_arn)
```
### Email Validation
In this situation, the resource is simply a waiter for manual email approval of ACM certificates.
```python
import pulumi
import pulumi_aws as aws
example_certificate = aws.acm.Certificate("exampleCertificate",
domain_name="example.com",
validation_method="EMAIL")
example_certificate_validation = aws.acm.CertificateValidation("exampleCertificateValidation", certificate_arn=example_certificate.arn)
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] certificate_arn: The ARN of the certificate that is being validated.
:param pulumi.Input[Sequence[pulumi.Input[str]]] validation_record_fqdns: List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit dependency on the resource that is implementing the validation
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: CertificateValidationArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource represents a successful validation of an ACM certificate in concert
with other resources.
Most commonly, this resource is used together with `route53.Record` and
`acm.Certificate` to request a DNS-validated certificate,
deploy the required validation records, and wait for validation to complete.
> **WARNING:** This resource implements a part of the validation workflow. It does not represent a real-world entity in AWS; therefore, changing or deleting this resource on its own has no immediate effect.
## Example Usage
### DNS Validation with Route 53
```python
import pulumi
import pulumi_aws as aws
example_certificate = aws.acm.Certificate("exampleCertificate",
domain_name="example.com",
validation_method="DNS")
example_zone = aws.route53.get_zone(name="example.com",
private_zone=False)
def create_validation_records(options):
    # One Route 53 record per domain validation option; returns their FQDNs.
    records = []
    for index, dvo in enumerate(options):
        records.append(aws.route53.Record(f"exampleRecord-{index}",
            allow_overwrite=True,
            name=dvo.resource_record_name,
            records=[dvo.resource_record_value],
            ttl=60,
            type=dvo.resource_record_type,
            zone_id=example_zone.zone_id))
    return [record.fqdn for record in records]

example_certificate_validation = aws.acm.CertificateValidation("exampleCertificateValidation",
    certificate_arn=example_certificate.arn,
    validation_record_fqdns=example_certificate.domain_validation_options.apply(create_validation_records))
# ... other configuration ...
example_listener = aws.lb.Listener("exampleListener", certificate_arn=example_certificate_validation.certificate_arn)
```
### Email Validation
In this situation, the resource is simply a waiter for manual email approval of ACM certificates.
```python
import pulumi
import pulumi_aws as aws
example_certificate = aws.acm.Certificate("exampleCertificate",
domain_name="example.com",
validation_method="EMAIL")
example_certificate_validation = aws.acm.CertificateValidation("exampleCertificateValidation", certificate_arn=example_certificate.arn)
```
:param str resource_name: The name of the resource.
:param CertificateValidationArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
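    # The plain ``__init__`` below dispatches between the two typed overloads
    # above: ``get_resource_args_opts`` returns a populated args object when
    # the caller passed a ``CertificateValidationArgs`` instance, or ``None``
    # when plain keyword arguments were used.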
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(CertificateValidationArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
certificate_arn: Optional[pulumi.Input[str]] = None,
validation_record_fqdns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = CertificateValidationArgs.__new__(CertificateValidationArgs)
if certificate_arn is None and not opts.urn:
raise TypeError("Missing required property 'certificate_arn'")
__props__.__dict__["certificate_arn"] = certificate_arn
__props__.__dict__["validation_record_fqdns"] = validation_record_fqdns
super(CertificateValidation, __self__).__init__(
'aws:acm/certificateValidation:CertificateValidation',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
certificate_arn: Optional[pulumi.Input[str]] = None,
validation_record_fqdns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'CertificateValidation':
"""
Get an existing CertificateValidation resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] certificate_arn: The ARN of the certificate that is being validated.
:param pulumi.Input[Sequence[pulumi.Input[str]]] validation_record_fqdns: List of FQDNs that implement the validation. Only valid for DNS-validated ACM certificates. If this is set, the resource can implement additional sanity checks, and it has an explicit dependency on the resource that implements the validation.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _CertificateValidationState.__new__(_CertificateValidationState)
__props__.__dict__["certificate_arn"] = certificate_arn
__props__.__dict__["validation_record_fqdns"] = validation_record_fqdns
return CertificateValidation(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="certificateArn")
def certificate_arn(self) -> pulumi.Output[str]:
"""
The ARN of the certificate that is being validated.
"""
return pulumi.get(self, "certificate_arn")
@property
@pulumi.getter(name="validationRecordFqdns")
def validation_record_fqdns(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
List of FQDNs that implement the validation. Only valid for DNS-validated ACM certificates. If this is set, the resource can implement additional sanity checks, and it has an explicit dependency on the resource that implements the validation.
"""
return pulumi.get(self, "validation_record_fqdns")
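# A minimal usage sketch for the static ``get`` method (the resource name and
# ARN below are placeholders; for this resource the provider ID is the ARN of
# the validated certificate):
#
#     import pulumi_aws as aws
#
#     existing = aws.acm.CertificateValidation.get(
#         "existingCertificateValidation",
#         id="arn:aws:acm:us-east-1:123456789012:certificate/<certificate-id>")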
| 51.15 | 332 | 0.686347 | 1,738 | 15,345 | 5.85328 | 0.128884 | 0.060552 | 0.068122 | 0.031947 | 0.815394 | 0.797405 | 0.795144 | 0.787378 | 0.782365 | 0.782365 | 0 | 0.00178 | 0.230955 | 15,345 | 299 | 333 | 51.32107 | 0.860266 | 0.523428 | 0 | 0.568966 | 1 | 0 | 0.12307 | 0.06242 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146552 | false | 0.008621 | 0.043103 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e0289010049f4f63135040f18ecdf0f5083f4cb3 | 3,220 | py | Python | Protheus_WebApp/Modules/SIGAPCP/MATA660TESTCASE.py | 98llm/tir-script-samples | 0bff8393b79356aa562e9e6512c11ee6e039b177 | [
"MIT"
] | 17 | 2018-09-24T17:27:08.000Z | 2021-09-16T19:09:46.000Z | Protheus_WebApp/Modules/SIGAPCP/MATA660TESTCASE.py | 98llm/tir-script-samples | 0bff8393b79356aa562e9e6512c11ee6e039b177 | [
"MIT"
] | 4 | 2018-09-24T17:30:32.000Z | 2022-01-03T11:39:30.000Z | Protheus_WebApp/Modules/SIGAPCP/MATA660TESTCASE.py | 98llm/tir-script-samples | 0bff8393b79356aa562e9e6512c11ee6e039b177 | [
"MIT"
] | 18 | 2019-06-07T17:41:34.000Z | 2022-01-31T18:17:31.000Z | from tir import Webapp
import unittest
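# TIR UI test suite for the Protheus routine MATA660 (module SIGAPCP).
# The four cases below exercise the routine's main flows: include a record
# ("Incluir"), change an existing one ("Alterar"), delete one ("Excluir"),
# and create one via the wizard ("Assistente"), checking the persisted
# field values after each save.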
class MATA660(unittest.TestCase):
@classmethod
def setUpClass(inst):
inst.oHelper = Webapp()
inst.oHelper.Setup('SIGAPCP','26/04/2019','T1','D MG 01 ','10')
inst.oHelper.Program('MATA660')
def test_MATA660_001(self):
self.oHelper.SetButton('Outras Ações', 'Incluir')
self.oHelper.SetBranch('D MG 01')
self.oHelper.SetValue('H9_RECURSO','MT6601')
self.oHelper.SetValue('H9_MOTIVO','QUEBRA DE EQUIPAMENTO')
self.oHelper.SetValue('H9_DTINI','25/04/2019')
self.oHelper.SetValue('H9_DTFIM','27/04/2019')
self.oHelper.SetValue('H9_HRINI','10:00')
self.oHelper.SetValue('H9_HRFIM','15:00')
self.oHelper.SetButton('Salvar')
self.oHelper.SetButton('Cancelar')
self.oHelper.SetButton('Visualizar')
self.oHelper.CheckResult('H9_RECURSO','MT6601')
self.oHelper.CheckResult('H9_CCUSTO','PCP000001')
self.oHelper.CheckResult('H9_MOTIVO','QUEBRA DE EQUIPAMENTO')
self.oHelper.CheckResult('H9_DTINI','25/04/2019')
self.oHelper.CheckResult('H9_DTFIM','27/04/2019')
self.oHelper.CheckResult('H9_HRINI','10:00')
self.oHelper.CheckResult('H9_HRFIM','15:00')
self.oHelper.SetButton('Cancelar')
self.oHelper.AssertTrue()
def test_MATA660_002(self):
self.oHelper.SearchBrowse('D MG 01 BPCP000001MT6602')
self.oHelper.SetButton('Alterar')
self.oHelper.SetValue('H9_DTINI','26/04/2019')
self.oHelper.SetValue('H9_HRFIM','18:00')
self.oHelper.SetButton('Salvar')
self.oHelper.SetButton('Visualizar')
self.oHelper.CheckResult('H9_RECURSO','MT6602')
self.oHelper.CheckResult('H9_CCUSTO','PCP000001')
self.oHelper.CheckResult('H9_MOTIVO','QUEBRA DE EQUIPAMENTO')
self.oHelper.CheckResult('H9_DTINI','26/04/2019')
self.oHelper.CheckResult('H9_DTFIM','27/04/2019')
self.oHelper.CheckResult('H9_HRINI','10:00')
self.oHelper.CheckResult('H9_HRFIM','18:00')
self.oHelper.SetButton('Cancelar')
self.oHelper.AssertTrue()
def test_MATA660_003(self):
self.oHelper.SearchBrowse('D MG 01 BPCP000001MT6603')
self.oHelper.SetButton('Outras Ações', 'Excluir')
self.oHelper.SetButton('Confirmar')
self.oHelper.AssertTrue()
def test_MATA660_004(self):
self.oHelper.SetButton('Assistente')
self.oHelper.SetBranch('D MG 01')
self.oHelper.SetValue('H9_RECURSO','MT6604')
self.oHelper.SetValue('H9_MOTIVO','QUEBRA DE EQUIPAMENTO')
self.oHelper.SetValue('H9_DTINI','26/04/2019')
self.oHelper.SetValue('H9_DTFIM','26/04/2019')
self.oHelper.SetValue('H9_HRINI','10:00')
self.oHelper.SetValue('H9_HRFIM','15:00')
self.oHelper.SetButton('Salvar')
self.oHelper.SetButton('Cancelar')
self.oHelper.SetButton('Visualizar')
self.oHelper.CheckResult('H9_RECURSO','MT6604')
self.oHelper.CheckResult('H9_CCUSTO','PCP000001')
self.oHelper.CheckResult('H9_MOTIVO','QUEBRA DE EQUIPAMENTO')
self.oHelper.CheckResult('H9_DTINI','26/04/2019')
self.oHelper.CheckResult('H9_DTFIM','26/04/2019')
self.oHelper.CheckResult('H9_HRINI','10:00')
self.oHelper.CheckResult('H9_HRFIM','15:00')
self.oHelper.SetButton('Cancelar')
self.oHelper.AssertTrue()
@classmethod
def tearDownClass(inst):
inst.oHelper.TearDown()
if __name__ == '__main__':
unittest.main() | 29.009009 | 65 | 0.728261 | 424 | 3,220 | 5.410377 | 0.17217 | 0.282912 | 0.201395 | 0.219704 | 0.819093 | 0.781604 | 0.766347 | 0.714037 | 0.701831 | 0.67524 | 0 | 0.09514 | 0.099068 | 3,220 | 111 | 66 | 29.009009 | 0.695622 | 0 | 0 | 0.552632 | 0 | 0 | 0.274138 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.078947 | false | 0 | 0.026316 | 0 | 0.118421 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
bc5dc478362d3eb9041d2319ba896820db53cae8 | 125 | py | Python | cardio_audio_sleep/io/__init__.py | mscheltienne/cardio-audio-sleep | 42a41eb46dc7b285e0fbcdd909352153f69d68b7 | [
"MIT"
] | null | null | null | cardio_audio_sleep/io/__init__.py | mscheltienne/cardio-audio-sleep | 42a41eb46dc7b285e0fbcdd909352153f69d68b7 | [
"MIT"
] | null | null | null | cardio_audio_sleep/io/__init__.py | mscheltienne/cardio-audio-sleep | 42a41eb46dc7b285e0fbcdd909352153f69d68b7 | [
"MIT"
] | null | null | null | """I/O module."""
from .read_raw_fif import read_raw_fif # noqa: F401
from .read_raw_xdf import read_raw_xdf # noqa: F401
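# The "noqa: F401" markers tell flake8 that these imports are intentional
# re-exports defining the package's public API, not unused imports.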
| 25 | 52 | 0.736 | 23 | 125 | 3.652174 | 0.478261 | 0.333333 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 0.152 | 125 | 4 | 53 | 31.25 | 0.735849 | 0.272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
bca541a476b2330f9dd444fc57ffbf621a83ea70 | 5,829 | py | Python | experiments/plots/style2d_grids.py | Butters-cloud/denoising-normalizing-flow | 12d56a0d069e10a744acabf5e78fdbfba8df54ee | [
"MIT"
] | 12 | 2021-11-18T15:01:17.000Z | 2022-02-22T16:17:42.000Z | experiments/plots/style2d_grids.py | Butters-cloud/denoising-normalizing-flow | 12d56a0d069e10a744acabf5e78fdbfba8df54ee | [
"MIT"
] | 2 | 2022-01-22T00:41:13.000Z | 2022-02-01T15:41:42.000Z | experiments/plots/style2d_grids.py | Butters-cloud/denoising-normalizing-flow | 12d56a0d069e10a744acabf5e78fdbfba8df54ee | [
"MIT"
] | 1 | 2022-01-26T22:44:07.000Z | 2022-01-26T22:44:07.000Z | """ Create density grid of Figure 3
Requires: updating the data and output path (see below)
"""
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import torch
import os
from mpl_toolkits.axes_grid1 import ImageGrid
data_path = r'...\data\style2d'  # <-- adapt here
output_dir = r'...\images'  # <-- adapt here
# hyperparameters
latent_dim = 2
x = torch.linspace(-2, 2, 7)
xx, yy = torch.meshgrid((x, x))
grid = torch.stack((xx.flatten(), yy.flatten()), dim=1).double()
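# Note: latent_dim, x, xx, yy, and grid above are not referenced by the
# plotting code below; each section builds its own grid of latent positions
# (gan_zs) with numpy's meshgrid instead.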
####################################
###########DNF######################
gan_images = np.load(os.path.join(data_path, 'dnf_2_gan2d_paper_grid.npy'))
gan_images = gan_images.reshape([7,7,3,64,64])
gan_images = np.transpose(gan_images,axes = [1,0,2,3,4])
gan_images = gan_images.reshape([49,3,64,64])
boundary = 1.5
resolution = 7
each = np.linspace(-boundary, boundary, resolution)
each_grid = np.meshgrid(*[each for _ in range(2)], indexing="ij")
each_grid = [x.flatten() for x in each_grid]
gan_zs = np.vstack(each_grid).T
gan_images = np.clip(gan_images / 256.0, 0.0, 1.0)
gan_images.shape
size = 0.45
fig, ax = plt.subplots(figsize=(10., 10.))
for z, image in zip(gan_zs, gan_images):
    image_ = np.transpose(image, [1, 2, 0])  # CHW -> HWC for imshow
    plt.imshow(image_, extent=(z[0] - size/2, z[0] + size/2, z[1] - size/2, z[1] + size/2))
plt.xlabel(r"DNF latent variable $\tilde{u}_0$", labelpad=4, fontsize=25)
plt.ylabel(r"DNF latent variable $\tilde{u}_1$", labelpad=1, fontsize=25)
#plt.xlabel("StyleGAN latent variable $z_0$", labelpad=4)
#plt.ylabel("StyleGAN latent variable $z_1$", labelpad=1)
plt.xlim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.ylim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.xticks([-1., 0., 1.])
plt.yticks([-1., 0., 1.])
ax.tick_params(axis='y', which='major', pad=1)
plt.tight_layout()
fig.savefig(os.path.join(output_dir, 'style2d_dnf_grid.pdf'), bbox_inches = 'tight')
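# The three sections below repeat the plotting block above almost verbatim.
# A helper like the following sketch could replace them (the name and
# signature are illustrative, not part of the original script; the PAE
# section would skip the transpose, as its images are already channel-last):
def plot_latent_grid(zs, images, xlabel, ylabel, filename, size=0.45):
    """Tile images at their 2-D latent coordinates and save the grid as a PDF."""
    fig, ax = plt.subplots(figsize=(10., 10.))
    for z, image in zip(zs, images):
        image_ = np.transpose(image, [1, 2, 0])  # CHW -> HWC for imshow
        plt.imshow(image_, extent=(z[0] - size/2, z[0] + size/2,
                                   z[1] - size/2, z[1] + size/2))
    plt.xlabel(xlabel, labelpad=4, fontsize=25)
    plt.ylabel(ylabel, labelpad=1, fontsize=25)
    plt.xlim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
    plt.ylim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
    plt.xticks([-1., 0., 1.])
    plt.yticks([-1., 0., 1.])
    ax.tick_params(axis='y', which='major', pad=1)
    plt.tight_layout()
    fig.savefig(os.path.join(output_dir, filename), bbox_inches='tight')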
####################################
###########Style Gan################
gan_images = np.load(os.path.join(data_path, 'grid.npy'))
boundary = 1.5
resolution = 7
each = np.linspace(-boundary, boundary, resolution)
each_grid = np.meshgrid(*[each for _ in range(2)], indexing="ij")
each_grid = [x.flatten() for x in each_grid]
gan_zs = np.vstack(each_grid).T
gan_images = gan_images.reshape((9, 9, 3, 64, 64))
gan_images = gan_images[1:-1, 1:-1, :, :, :]
gan_images = gan_images.reshape((49, 3, 64, 64))
gan_images = 0.5 + 255.0 * gan_images
gan_images = np.clip(gan_images / 256.0, 0.0, 1.0)
gan_images.shape
size = 0.45
fig, ax = plt.subplots(figsize=(10., 10.))
for z, image in zip(gan_zs, gan_images):
    image_ = np.transpose(image, [1, 2, 0])  # CHW -> HWC for imshow
    plt.imshow(image_, extent=(z[0] - size/2, z[0] + size/2, z[1] - size/2, z[1] + size/2))
plt.xlabel("StyleGAN latent variable $z_0$", labelpad=4, fontsize=25)
plt.ylabel("StyleGAN latent variable $z_1$", labelpad=1, fontsize=25)
plt.xlim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.ylim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.xticks([-1., 0., 1.])
plt.yticks([-1., 0., 1.])
ax.tick_params(axis='y', which='major', pad=1)
plt.tight_layout()
fig.savefig(os.path.join(output_dir, 'style2d_grid.pdf'), bbox_inches='tight')
####################################
###########VAE######################
gan_images = 0.5 + 255.0 * np.load(os.path.join(data_path, 'grid_VAE.npy'))
gan_images = gan_images.reshape([7,7,3,64,64])
gan_images = np.transpose(gan_images,axes = [1,0,2,3,4])
gan_images = gan_images.reshape([49,3,64,64])
boundary = 1.5
resolution = 7
each = np.linspace(-boundary, boundary, resolution)
each_grid = np.meshgrid(*[each for _ in range(2)], indexing="ij")
each_grid = [x.flatten() for x in each_grid]
gan_zs = np.vstack(each_grid).T
gan_images = np.clip(gan_images / 256.0, 0.0, 1.0)
gan_images.shape
size = 0.45
fig, ax = plt.subplots(figsize=(10., 10.))
for z, image in zip(gan_zs, gan_images):
    image_ = np.transpose(image, [1, 2, 0])  # CHW -> HWC for imshow
    plt.imshow(image_, extent=(z[0] - size/2, z[0] + size/2, z[1] - size/2, z[1] + size/2))
plt.xlabel(r"InfoMax-VAE variable $\tilde{u}_0$", labelpad=4, fontsize=25)
plt.ylabel(r"InfoMax-VAE variable $\tilde{u}_1$", labelpad=1, fontsize=25)
plt.xlim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.ylim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.xticks([-1., 0., 1.])
plt.yticks([-1., 0., 1.])
ax.tick_params(axis='y', which='major', pad=1)
plt.tight_layout()
fig.savefig(os.path.join(output_dir, 'style2d_vae_grid.pdf'), bbox_inches = 'tight')
####################################
###########PAE######################
gan_images = np.load(os.path.join(data_path, 'grid_gan2d_pae.npy')) + 0.5
boundary = 1.5
resolution = 7
each = np.linspace(-boundary, boundary, resolution)
each_grid = np.meshgrid(*[each for _ in range(2)], indexing="ij")
each_grid = [x.flatten() for x in each_grid]
gan_zs = np.vstack(each_grid).T
gan_images.shape
size = 0.45
fig, ax = plt.subplots(figsize=(10., 10.))
for z, image in zip(gan_zs, gan_images):
    image_ = image  # PAE grid images are already channel-last; no transpose needed
    plt.imshow(image_, extent=(z[0] - size/2, z[0] + size/2, z[1] - size/2, z[1] + size/2))
plt.xlabel(r"PAE latent variable $\tilde{u}_0$", labelpad=4, fontsize=25)
plt.ylabel(r"PAE latent variable $\tilde{u}_1$", labelpad=1, fontsize=25)
plt.xlim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.ylim(-1.5 - 1.3*size/2, 1.5 + 1.3*size/2)
plt.xticks([-1., 0., 1.])
plt.yticks([-1., 0., 1.])
ax.tick_params(axis='y', which='major', pad=1)
plt.tight_layout()
fig.savefig(os.path.join(output_dir, 'style2d_pae_grid.pdf'), bbox_inches = 'tight') | 34.904192 | 107 | 0.629439 | 1,033 | 5,829 | 3.430784 | 0.125847 | 0.096501 | 0.013544 | 0.018059 | 0.87105 | 0.837754 | 0.82026 | 0.817438 | 0.809537 | 0.75 | 0 | 0.069532 | 0.133985 | 5,829 | 167 | 108 | 34.904192 | 0.632528 | 0 | 0 | 0.717949 | 0 | 0 | 0.093305 | 0.005075 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.051282 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
bce8bae656b6078b9c6fbd047d97646fd2e57e67 | 185,201 | py | Python | cfgov/v1/migrations/0102_recreated.py | atuggle/cfgov-refresh | 5a9cfd92b460b9be7befb39f5845abf56857aeac | [
"CC0-1.0"
] | null | null | null | cfgov/v1/migrations/0102_recreated.py | atuggle/cfgov-refresh | 5a9cfd92b460b9be7befb39f5845abf56857aeac | [
"CC0-1.0"
] | 1 | 2016-09-14T21:11:19.000Z | 2016-09-14T21:11:19.000Z | cfgov/v1/migrations/0102_recreated.py | atuggle/cfgov-refresh | 5a9cfd92b460b9be7befb39f5845abf56857aeac | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import datetime
import wagtail.wagtaildocs.blocks
import wagtail.wagtailcore.models
import localflavor.us.models
import modelcluster.fields
import wagtail.wagtailimages.blocks
import v1.util.ref
from django.conf import settings
import v1.blocks
import django.core.validators
import wagtail.wagtailsearch.index
import v1.atomic_elements.atoms
import v1.models.snippets
import modelcluster.contrib.taggit
import wagtail.wagtailimages.models
import wagtail.wagtailcore.fields
import wagtail.wagtailcore.blocks
import v1.util.filterable_list
import wagtail.wagtailsnippets.blocks
import v1.feeds
import taggit.managers
import django.db.models.deletion
import v1.atomic_elements.organisms
class Migration(migrations.Migration):
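    # This is a squashed migration: the ``replaces`` list below enumerates the
    # 101 historical migrations (0001 through 0101) that this single file
    # supersedes when applied to a fresh database.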
replaces = [
('v1', '0001_initial'),
('v1', '0002_share_perms'),
('v1', '0003_cfgovimage_collection'),
('v1', '0004_auto_20160712_1531'),
('v1', '0005_auto_20160815_1537'),
('v1', '0006_auto_20160823_1608'),
('v1', '0007_imagetext5050_sharing'),
('v1', '0008_rename_related_links'),
('v1', '0009_site_root_data'),
('v1', '0010_hero_refactor'),
('v1', '0011_hero_refactor_data'),
('v1', '0012_create_tableblock'),
('v1', '0013_update_tables_to_tableblocks'),
('v1', '0014_modify_half_blob_labels'),
('v1', '0015_feedback'),
('v1', '0016_registration_form_block'),
('v1', '0017_add_capacity_fields'),
('v1', '0018_migration_merge_cleanup'),
('v1', '0019_modify_fcm_help'),
('v1', '0020_full-width-text-anchor'),
('v1', '0021_replace_dup_category_field'),
('v1', '0022_replace_dup_category_field_data'),
('v1', '0023_conf_reg_form_updates'),
('v1', '0024_extend_feedback_model'),
('v1', '0025_adjust_pages_using_feedback'),
('v1', '0026_adjust_block_field_labeling'),
('v1', '0027_conf_reg_form_updates'),
('v1', '0028_add_richtext_for_feedback_advisory'),
('v1', '0029_remove_contact_advisory_default_text'),
('v1', '0030_adjust_feedback_default_text'),
('v1', '0031_add_social_media_customization'),
('v1', '0032_add_video_player_module'),
('v1', '0033_making_25_75_images_clickable'),
('v1', '0034_add_story_categories'),
('v1', '0035_add_5050_output_to_flc'),
('v1', '0036_cfgovrendition_uniqueness'),
('v1', '0037_fix_youtube_url_validation'),
('v1', '0038_convert_bureau_structure_to_wagtail'),
('v1', '0039_add_filter_spec_to_cfgovrendition'),
('v1', '0040_fill_filter_spec'),
('v1', '0041_create_html_block'),
('v1', '0042_remove_demo_page'),
('v1', '0043_create_chart_block'),
('v1', '0044_changing_case_on_enforcement_action_category'),
('v1', '0045_update_story_categories'),
('v1', '0046_adding_no_table_results_message'),
('v1', '0047_resource_snippet_lists'),
('v1', '0048_remove_body_header_fields_from_main_contact_info'),
('v1', '0049_remove_main_contact_info_from_sidefoot'),
('v1', '0050_refactor_chart_block'),
('v1', '0051_wagtail_1_8_1'),
('v1', '0052_add_image_inset'),
('v1', '0053_more_email_signups'),
('v1', '0054_new_categories'),
('v1', '0055_orderable_resource_snippets'),
('v1', '0056_make_subfilterable_preview_images_clickable'),
('v1', '0057_add_reusable_text'),
('v1', '0058_adding_clickable_image_to_50_50'),
('v1', '0059_alj_filterable_list'),
('v1', '0060_feedback_language'),
('v1', '0061_make_info_unit_headings_linkable'),
('v1', '0062_modifying_video_player'),
('v1', '0063_remove_validation_from_video_player'),
('v1', '0064_adding_button_atom'),
('v1', '0065_add_related_posts_and_filtering'),
('v1', '0066_fix_for_linking_video_stills'),
('v1', '0067_add_expandables_to_blog_pages'),
('v1', '0068_remove_cfgovrendition_filter'),
('v1', '0069_add_social_sharing_image'),
('v1', '0070_pull_quote_is_large_option'),
('v1', '0071_create_data_snapshot'),
('v1', '0072_add_image_and_help_text_to_data_snapshot'),
('v1', '0073_update_social_image_help_text'),
('v1', '0074_akamaihistory'),
('v1', '0075_reusabletext_sidefoot_heading'),
('v1', '0076_add_snippet_list_to_sublanding'),
('v1', '0077_add_last_updated_projected_data_fields'),
('v1', '0078_make_data_snapshot_image_optional'),
('v1', '0079_simplify_help_text'),
('v1', '0080_add_date_published_to_chart_block'),
('v1', '0081_related_metadata_date_required'),
('v1', '0082_update_reusabletext_help_text'),
('v1', '0083_add_raw_html_block_to_browse_page'),
('v1', '0084_add_info_unit_groups'),
('v1', '0085_change_related_metadata_boolean_name'),
('v1', '0086_convert_helper_text_from_array_to_text'),
('v1', '0087_add_mortgage_chart_block_to_browsepage'),
('v1', '0088_add_mortgage_map_block_to_browsepage'),
('v1', '0089_output_snippet_list_thumbnails'),
('v1', '0090_add_note_field_to_mortgage_blocks'),
('v1', '0091_add_mortgage_download_block_to_browsepage'),
('v1', '0092_restrict_info_unit_groups'),
('v1', '0093_add_intro_fields_to_mortgage_charts'),
('v1', '0094_add_snippet_list_col_width'),
('v1', '0095_adding_y_axis_label'),
('v1', '0096_add_phone_extensions'),
('v1', '0097_move_reusable_text_chooser_block'),
('v1', '0098_dynamic_snippet_list_choices'),
('v1', '0099_add_rule_options_to_modules'),
('v1', '0100_update_display_names_for_categories'),
('v1', '0101_2018_research_conference'),
]
dependencies = [
('wagtailimages', '0019_delete_filter'),
('wagtaildocs', '0007_merge'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('wagtailcore', '0033_remove_golive_expiry_help_text'),
('taggit', '0002_auto_20150616_2121'),
]
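    # The dependencies above pin the Wagtail and taggit schema states this
    # squash assumes; ``swappable_dependency`` resolves the project's
    # AUTH_USER_MODEL migration at migrate time.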
operations = [
migrations.CreateModel(
name='AkamaiHistory',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('subject', models.CharField(max_length=2083)),
('message', models.CharField(max_length=255)),
('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='CFGOVAuthoredPages',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
],
options={
'verbose_name': 'Author',
'verbose_name_plural': 'Authors',
},
),
migrations.CreateModel(
name='CFGOVImage',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=255, verbose_name='title')),
('file', models.ImageField(height_field='height', upload_to=wagtail.wagtailimages.models.get_upload_to, width_field='width', verbose_name='file')),
('width', models.IntegerField(verbose_name='width', editable=False)),
('height', models.IntegerField(verbose_name='height', editable=False)),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='created at', db_index=True)),
('focal_point_x', models.PositiveIntegerField(null=True, blank=True)),
('focal_point_y', models.PositiveIntegerField(null=True, blank=True)),
('focal_point_width', models.PositiveIntegerField(null=True, blank=True)),
('focal_point_height', models.PositiveIntegerField(null=True, blank=True)),
('file_size', models.PositiveIntegerField(null=True, editable=False)),
('alt', models.CharField(max_length=100, blank=True)),
('collection', models.ForeignKey(related_name='+', default=wagtail.wagtailcore.models.get_root_collection_id, verbose_name='collection', to='wagtailcore.Collection')),
('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text=None, verbose_name='tags')),
('uploaded_by_user', models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, blank=True, editable=False, to=settings.AUTH_USER_MODEL, null=True, verbose_name='uploaded by user')),
],
options={
'abstract': False,
},
bases=(wagtail.wagtailsearch.index.Indexed, models.Model),
),
migrations.CreateModel(
name='CFGOVPage',
fields=[
('page_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('shared', models.BooleanField(default=False)),
('has_unshared_changes', models.BooleanField(default=False)),
('language', models.CharField(default=b'en', max_length=2, choices=[(b'en', b'English'), (b'es', b'Spanish'), (b'zh', b'Chinese'), (b'vi', b'Vietnamese'), (b'ko', b'Korean'), (b'tl', b'Tagalog'), (b'ru', b'Russian'), (b'ar', b'Arabic'), (b'ht', b'Haitian Creole')])),
('sidefoot', wagtail.wagtailcore.fields.StreamField([(b'call_to_action', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'related_posts', wagtail.wagtailcore.blocks.StructBlock([(b'limit', wagtail.wagtailcore.blocks.CharBlock(help_text=b'This limit applies to EACH TYPE of post this module retrieves, not the total number of retrieved posts.', default=b'3')), (b'show_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'This toggles the heading and icon for the related types.', default=True, required=False, label=b'Show Heading and Icon?')), (b'header_title', wagtail.wagtailcore.blocks.CharBlock(default=b'Further reading', label=b'Slug Title')), (b'relate_posts', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, editable=False, label=b'Blog Posts')), (b'relate_newsroom', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, editable=False, label=b'Newsroom')), (b'relate_events', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Events')), (b'specific_categories', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'Blog', ((b'At the CFPB', b'At the CFPB'), (b'Policy & Compliance', b'Policy and compliance'), (b'Data, Research & Reports', b'Data, research, and reports'), (b'Info for Consumers', b'Info for consumers'))), (b'Newsroom', ((b'Op-Ed', b'Op-ed'), (b'Press Release', b'Press release'), (b'Speech', b'Speech'), (b'Testimony', b'Testimony')))]), required=False)), (b'and_filtering', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, related posts will only be pulled in if they match ALL topic tags set on this page. 
Otherwise, related posts can match any one topic tag.', default=False, required=False, label=b'Match all topic tags'))])), (b'related_metadata', wagtail.wagtailcore.blocks.StructBlock([(b'slug', wagtail.wagtailcore.blocks.CharBlock(max_length=100)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'text', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(max_length=100)), (b'blob', wagtail.wagtailcore.blocks.RichTextBlock())], icon=b'pilcrow')), (b'list', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(max_length=100)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))], icon=b'list-ul')), (b'date', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(max_length=100)), (b'date', wagtail.wagtailcore.blocks.DateBlock())], icon=b'date')), (b'topics', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(default=b'Topics', max_length=100)), (b'show_topics', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False))], icon=b'tag'))])), (b'is_half_width', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False))])), (b'email_signup', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'gd_code', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'form_field', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'btn_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'required', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'info', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Disclaimer')), (b'label', wagtail.wagtailcore.blocks.CharBlock(required=True)), (b'type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'text', b'Text'), (b'checkbox', b'Checkbox'), (b'email', b'Email'), (b'number', b'Number'), (b'url', b'URL'), (b'radio', b'Radio')])), (b'placeholder', wagtail.wagtailcore.blocks.CharBlock(required=False))]), required=False, icon=b'mail'))])), (b'sidebar_contact', wagtail.wagtailcore.blocks.StructBlock([(b'contact', wagtail.wagtailsnippets.blocks.SnippetChooserBlock(b'v1.Contact')), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Add a horizontal rule line to top of contact block.', default=False, required=False))])), (b'rss_feed', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'blog_feed', b'Blog Feed'), (b'newsroom_feed', b'Newsroom Feed')])), (b'social_media', wagtail.wagtailcore.blocks.StructBlock([(b'is_share_view', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If unchecked, social media icons will link users to official CFPB accounts. Do not fill in any further fields.', default=True, required=False, label=b'Desired action: share this page')), (b'blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', default=b"Look what I found on the CFPB's site!", required=False)), (b'twitter_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Custom text for Twitter shares. 
If blank, will default to value of blurb field above.', max_length=100, required=False)), (b'twitter_related', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) A comma-separated list of accounts related to the content of the shared URL. Do not enter the @ symbol. If blank, it will default to just "cfpb".', required=False)), (b'twitter_hashtags', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) A comma-separated list of hashtags to be appended to default tweet text.', required=False)), (b'twitter_lang', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Loads text components in the specified language, if other than English. E.g., use "es" for Spanish. See https://dev.twitter.com/web/overview/languages for a list of supported language codes.', required=False)), (b'email_title', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Custom subject for email shares. If blank, will default to value of blurb field above.', required=False)), (b'email_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Custom text for email shares. If blank, will default to "Check out this page from the CFPB".', required=False)), (b'email_signature', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Adds a custom signature line to email shares. ', required=False)), (b'linkedin_title', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Custom title for LinkedIn shares. If blank, will default to value of blurb field above.', required=False)), (b'linkedin_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'(Optional) Custom text for LinkedIn shares.', required=False))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(v1.models.snippets.ReusableText))], blank=True)),
],
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='CFGOVPageCategory',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('sort_order', models.IntegerField(null=True, editable=False, blank=True)),
('name', models.CharField(max_length=255, choices=[(b'Administrative adjudication docket', ((b'administrative-adjudication', b'Administrative adjudication'), (b'stipulation-and-constent-order', b'Stipulation and consent order'))), (b'Amicus Brief', ((b'us-supreme-court', b'U.S. Supreme Court'), (b'fed-circuit-court', b'Federal Circuit Court'), (b'fed-district-court', b'Federal District Court'), (b'state-court', b'State Court'))), (b'Blog', ((b'at-the-cfpb', b'At the CFPB'), (b'policy_compliance', b'Policy and compliance'), (b'data-research-reports', b'Data, research, and reports'), (b'info-for-consumers', b'Info for consumers'))), (b'Enforcement Action', ((b'fed-district-case', b'Federal district court case'), (b'administrative-adjudication-2', b'Administrative adjudication'), (b'stipulation-and-consent-order-2', b'Stipulation and consent order'))), (b'Final rule', ((b'interim-final-rule', b'Interim final rule'), (b'final-rule', b'Final rule'))), (b'FOIA Frequently Requested Record', ((b'report', b'Report'), (b'log', b'Log'), (b'record', b'Record'))), (b'Implementation Resource', ((b'compliance-aid', b'Compliance aid'), (b'official-guidance', b'Official guidance'))), (b'Newsroom', ((b'op-ed', b'Op-ed'), (b'press-release', b'Press release'), (b'speech', b'Speech'), (b'testimony', b'Testimony'))), (b'Notice and Opportunity for Comment', ((b'notice-proposed-rule', b'Advance notice of proposed rulemaking'), (b'proposed-rule', b'Proposed rule'), (b'interim-final-rule-2', b'Interim final rule'), (b'request-comment-info', b'Request for comment or information'), (b'proposed-policy', b'Proposed policy'), (b'intent-preempt-determ', b'Intent to make preemption determination'), (b'info-collect-activity', b'Information collection activities'), (b'notice-privacy-act', b'Notice related to Privacy Act'))), (b'Research Report', ((b'consumer-complaint', b'Consumer complaint'), (b'super-highlight', b'Supervisory Highlights'), (b'data-point', b'Data point'), (b'industry-markets', b'Industry and markets'), (b'consumer-edu-empower', b'Consumer education and empowerment'), (b'to-congress', b'To Congress'))), (b'Rule under development', ((b'notice-proposed-rule-2', b'Advance notice of proposed rulemaking'), (b'proposed-rule-2', b'Proposed rule'))), (b'Story', ((b'auto-loans', b'Auto loans'), (b'bank-accts-services', b'Bank accounts and services'), (b'credit-cards', b'Credit cards'), (b'credit-reports-scores', b'Credit reports and scores'), (b'debt-collection', b'Debt collection'), (b'money-transfers', b'Money transfers'), (b'mortgages', b'Mortgages'), (b'payday-loans', b'Payday loans'), (b'prepaid-cards', b'Prepaid cards'), (b'student-loans', b'Student loans')))])),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='CFGOVRendition',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('filter_spec', models.CharField(max_length=255, db_index=True)),
('file', models.ImageField(height_field='height', width_field='width', upload_to=wagtail.wagtailimages.models.get_rendition_upload_to)),
('width', models.IntegerField(editable=False)),
('height', models.IntegerField(editable=False)),
('focal_point_key', models.CharField(default='', max_length=16, editable=False, blank=True)),
('image', models.ForeignKey(related_name='renditions', to='v1.CFGOVImage')),
],
),
migrations.CreateModel(
name='CFGOVTaggedPages',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
],
options={
'verbose_name': 'Tag',
'verbose_name_plural': 'Tags',
},
),
migrations.CreateModel(
name='Contact',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('heading', models.CharField(help_text=b'The snippet heading', max_length=255, verbose_name=b'Heading')),
('body', wagtail.wagtailcore.fields.RichTextField(blank=True)),
('contact_info', wagtail.wagtailcore.fields.StreamField([(b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True)),
],
),
migrations.CreateModel(
name='FailedLoginAttempt',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('failed_attempts', models.CharField(max_length=1000)),
('user', models.OneToOneField(to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Feedback',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('submitted_on', models.DateTimeField(auto_now_add=True)),
('comment', models.TextField(null=True, blank=True)),
('language', models.CharField(max_length=8, null=True, blank=True)),
('referrer', models.CharField(max_length=255, null=True, blank=True)),
('is_helpful', models.NullBooleanField()),
('expect_to_buy', models.CharField(max_length=255, null=True, blank=True)),
('currently_own', models.CharField(max_length=255, null=True, blank=True)),
('email', models.EmailField(max_length=250, null=True, blank=True)),
('page', models.ForeignKey(related_name='feedback', on_delete=django.db.models.deletion.SET_NULL, to='wagtailcore.Page', null=True)),
],
),
migrations.CreateModel(
name='PasswordHistoryItem',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('expires_at', models.DateTimeField()),
('locked_until', models.DateTimeField()),
('encrypted_password', models.CharField(max_length=128, verbose_name='password')),
('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
options={
'get_latest_by': 'created',
},
),
migrations.CreateModel(
name='Resource',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=255)),
('desc', wagtail.wagtailcore.fields.RichTextField(verbose_name=b'Description', blank=True)),
('link', models.URLField(blank=True, help_text=b'Example: URL to order a few copies of a printed piece.', validators=[django.core.validators.URLValidator])),
('alternate_link', models.URLField(blank=True, help_text=b'Example: a URL to for ordering bulk copies.', validators=[django.core.validators.URLValidator])),
('order', models.PositiveSmallIntegerField(help_text=b'Snippets will be listed alphabetically by title in a Snippet List module, unless any in the list have a number in this field; those with an order value will appear at the bottom of the list, in ascending order.', null=True, blank=True)),
('alternate_file', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='wagtaildocs.Document', null=True)),
('related_file', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='wagtaildocs.Document', null=True)),
],
options={
'ordering': ('order', 'title'),
},
),
migrations.CreateModel(
name='ResourceTag',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('content_object', modelcluster.fields.ParentalKey(related_name='tagged_items', to='v1.Resource')),
('tag', models.ForeignKey(related_name='v1_resourcetag_items', to='taggit.Tag')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ReusableText',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=255, verbose_name=b'Snippet title (internal only)')),
('sidefoot_heading', models.CharField(help_text=b'Applies "slug" style heading. Only for use in sidebars and prefooters (the "sidefoot"). See [GHE]/flapjack/Modules-V1/wiki/Atoms#slugs', max_length=255, blank=True)),
('text', wagtail.wagtailcore.fields.RichTextField()),
],
bases=(wagtail.wagtailsearch.index.Indexed, models.Model),
),
migrations.CreateModel(
name='TemporaryLockout',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('expires_at', models.DateTimeField()),
('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='AbstractFilterPage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'article_subheader', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'form')), (b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))])), (b'item_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'show_category', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Whether to show the category or not (category must be set in 'Configuration').", default=True, required=False)), (b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'date', wagtail.wagtailcore.blocks.DateBlock(required=False)), (b'has_social', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Whether to show the share icons or not.', required=False))]))], blank=True)),
('preview_title', models.CharField(max_length=255, null=True, blank=True)),
('preview_subheading', models.CharField(max_length=255, null=True, blank=True)),
('preview_description', wagtail.wagtailcore.fields.RichTextField(null=True, blank=True)),
('secondary_link_url', models.CharField(max_length=500, null=True, blank=True)),
('secondary_link_text', models.CharField(max_length=255, null=True, blank=True)),
('date_published', models.DateField(default=datetime.date.today)),
('date_filed', models.DateField(null=True, blank=True)),
('comments_close_by', models.DateField(null=True, blank=True)),
],
options={
'abstract': False,
},
bases=('v1.cfgovpage',),
),
migrations.CreateModel(
name='BrowseFilterablePage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))])), (b'featured_content', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'category', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'featured-event', b'Featured event'), (b'featured-blog', b'Featured blog'), (b'featured-video', b'Featured video'), (b'featured-tool', b'Featured tool'), (b'featured-news', b'Featured news'), (b'featured', b'Featured')])), (b'post', wagtail.wagtailcore.blocks.PageChooserBlock(required=False)), (b'show_post_link', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Render post link?')), (b'post_link_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), label=b'Additional Links')), (b'video', wagtail.wagtailcore.blocks.StructBlock([(b'id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., in "https://www.youtube.com/watch?v=en0Iq8II4fA", the ID is everything after the "?v=".', required=False, label=b'ID')), (b'url', wagtail.wagtailcore.blocks.CharBlock(help_text=b'You must use the embed URL, e.g., https://www.youtube.com/embed/JPTg8ZB3j5c?autoplay=1&enablejsapi=1', required=False, label=b'URL')), (b'height', wagtail.wagtailcore.blocks.CharBlock(default=b'320', required=False)), (b'width', wagtail.wagtailcore.blocks.CharBlock(default=b'568', required=False))]))]))])),
('content', wagtail.wagtailcore.fields.StreamField([(b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', 
wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'filter_controls', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'form_type', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'filterable-list', b'Filterable List'), (b'pdf-generator', b'PDF Generator')])), (b'title', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Title')), (b'post_date_description', wagtail.wagtailcore.blocks.CharBlock(default=b'Published')), (b'categories', wagtail.wagtailcore.blocks.StructBlock([(b'filter_category', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False)), (b'show_preview_categories', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False)), (b'page_type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=v1.util.ref.filterable_list_page_types))])), (b'topics', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Topics')), (b'authors', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Authors')), (b'date_range', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Date Range')), (b'output_5050', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Render preview items as 50-50s')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Add links to post preview images and headings in filterable list results', default=False, required=False))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))]))])),
('secondary_nav_exclude_sibling_pages', models.BooleanField(default=False)),
],
options={
'abstract': False,
},
bases=(v1.feeds.FilterableFeedPageMixin, v1.util.filterable_list.FilterableListMixin, 'v1.cfgovpage'),
),
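# NOTE on the bases tuple above: it mixes directly imported mixin classes
# (v1.feeds.FilterableFeedPageMixin, v1.util.filterable_list.FilterableListMixin)
# with the string reference 'v1.cfgovpage'. Django resolves the string against
# the historical model state in this migration, while the mixins must be real
# imports, because migration state cannot reconstruct arbitrary Python classes.
# A minimal sketch of the model declaration this serializes (class name
# hypothetical; field bodies abbreviated, not copied from the app's models.py):
#
#     from v1.feeds import FilterableFeedPageMixin
#     from v1.util.filterable_list import FilterableListMixin
#
#     class SomeFilterablePage(FilterableFeedPageMixin,
#                              FilterableListMixin, CFGOVPage):
#         content = StreamField([...])  # the block list serialized above
#         secondary_nav_exclude_sibling_pages = models.BooleanField(default=False)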
migrations.CreateModel(
name='BrowsePage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))])), (b'featured_content', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'category', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'featured-event', b'Featured event'), (b'featured-blog', b'Featured blog'), (b'featured-video', b'Featured video'), (b'featured-tool', b'Featured tool'), (b'featured-news', b'Featured news'), (b'featured', b'Featured')])), (b'post', wagtail.wagtailcore.blocks.PageChooserBlock(required=False)), (b'show_post_link', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Render post link?')), (b'post_link_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), label=b'Additional Links')), (b'video', wagtail.wagtailcore.blocks.StructBlock([(b'id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., in "https://www.youtube.com/watch?v=en0Iq8II4fA", the ID is everything after the "?v=".', required=False, label=b'ID')), (b'url', wagtail.wagtailcore.blocks.CharBlock(help_text=b'You must use the embed URL, e.g., https://www.youtube.com/embed/JPTg8ZB3j5c?autoplay=1&enablejsapi=1', required=False, label=b'URL')), (b'height', wagtail.wagtailcore.blocks.CharBlock(default=b'320', required=False)), (b'width', wagtail.wagtailcore.blocks.CharBlock(default=b'568', required=False))]))]))], blank=True)),
('content', wagtail.wagtailcore.fields.StreamField([(b'bureau_structure', wagtail.wagtailcore.blocks.StructBlock([(b'last_updated_date', wagtail.wagtailcore.blocks.DateBlock(required=False)), (b'download_image', wagtail.wagtaildocs.blocks.DocumentChooserBlock(icon=b'image')), (b'director', wagtail.wagtailcore.blocks.CharBlock()), (b'divisions', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'division', v1.blocks.PlaceholderCharBlock(label=b'Division')), (b'division_lead', v1.blocks.PlaceholderCharBlock(placeholder=b'Name')), (b'title', wagtail.wagtailcore.blocks.StructBlock([(b'line_1', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 1')), (b'line_2', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 2'))])), (b'link_to_division_page', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'offices', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'office_name', wagtail.wagtailcore.blocks.CharBlock()), (b'lead', v1.blocks.PlaceholderCharBlock(placeholder=b'Name')), (b'title', wagtail.wagtailcore.blocks.StructBlock([(b'line_1', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 1')), (b'line_2', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 2'))]))], required=False)))]))), (b'office_of_the_director', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'office_name', wagtail.wagtailcore.blocks.CharBlock()), (b'lead', v1.blocks.PlaceholderCharBlock(placeholder=b'Name')), (b'title', wagtail.wagtailcore.blocks.StructBlock([(b'line_1', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 1')), (b'line_2', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 2'))])), (b'offices', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'office_name', wagtail.wagtailcore.blocks.CharBlock()), (b'lead', v1.blocks.PlaceholderCharBlock(placeholder=b'Name')), (b'title', wagtail.wagtailcore.blocks.StructBlock([(b'line_1', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 1')), (b'line_2', v1.blocks.PlaceholderCharBlock(required=False, placeholder=b'Title 2'))]))], required=False)))]), label=b'Office of the Director'))])), (b'info_unit_group', wagtail.wagtailcore.blocks.StructBlock([(b'format', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Choose the number and width of info unit columns.', choices=[(b'50-50', b'50/50'), (b'33-33-33', b'33/33/33'), (b'25-75', b'25/75')], label=b'Format')), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'If this field is not empty, the Heading field must also be set.', required=False)), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=True, required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of info unit group.', default=False, required=False)), (b'lines_between_items', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to show horizontal rule lines between info units.', default=False, required=False, label=b'Show rule lines between items')), (b'info_units', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], default={b'level': b'h3'}, required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))]))), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))]))])), (b'image_text_25_75_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])),
(b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])))])), (b'image_text_50_50_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))])), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'is_widescreen', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Use 16:9 image')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Show links as button')), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'half_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'third_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)),
(b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'expandable', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, 
label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])), (b'expandable_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_accordion', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of expandable group.', default=False, required=False)), (b'expandables', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])))])), (b'table', 
wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'job_listing_table', wagtail.wagtailcore.blocks.StructBlock([(b'first_row_is_table_header', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Display the first row as a header.', default=True, required=False)), (b'first_col_is_header', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Display the first column as a header.', default=False, required=False)), (b'is_full_width', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Display the table at full width.', default=False, required=False)), (b'is_striped', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Display the table with striped rows.', default=False, required=False)), (b'is_stacked', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Stack the table columns on mobile.', default=True, required=False)), (b'empty_table_msg', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Message to display if there is no table data.', required=False, label=b'No Table Data Message')), (b'hide_closed', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Whether to hide jobs that are not currently open (jobs will automatically update)', default=True, required=False))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))])), (b'conference_registration_form', wagtail.wagtailcore.blocks.StructBlock([(b'govdelivery_code', wagtail.wagtailcore.blocks.CharBlock(help_text='Conference registrants will be subscribed to this GovDelivery list.')), (b'capacity', wagtail.wagtailcore.blocks.IntegerBlock(help_text='Enter an integer that will be the conference attendance limit.')), (b'success_message', wagtail.wagtailcore.blocks.RichTextBlock(help_text='Enter a message that will be shown on successful registration.')), (b'at_capacity_message', wagtail.wagtailcore.blocks.RichTextBlock(help_text='Enter a message that will be shown when the event is at capacity.')), (b'failure_message', wagtail.wagtailcore.blocks.RichTextBlock(help_text='Enter a message that will be shown if the GovDelivery subscription fails.'))])), (b'raw_html_block', wagtail.wagtailcore.blocks.RawHTMLBlock(label=b'Raw HTML block')), (b'html_block', wagtail.wagtailcore.blocks.StructBlock([(b'html_url', wagtail.wagtailcore.blocks.RegexBlock(regex=b'^https://(s3.amazonaws.com/)?files.consumerfinance.gov/.+$', default=b'', required=True, error_messages={b'required': b'The HTML URL field is required for rendering raw HTML from a remote source.', b'invalid': b'The URL is invalid or not allowed. '}, label=b'Source URL'))])), (b'chart_block', wagtail.wagtailcore.blocks.StructBlock([(b'title', wagtail.wagtailcore.blocks.CharBlock(required=True)), (b'chart_type', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'bar', b'Bar'), (b'line', b'Line'), (b'tile_map', b'Tile Map')])), (b'color_scheme', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b"Chart's color scheme. See https://github.com/cfpb/cfpb-chart-builder#configuration.", required=False, choices=[(b'green', b'Green'), (b'blue', b'Blue'), (b'teal', b'Teal'), (b'navy', b'Navy')])), (b'data_source', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Location of the chart\'s data source relative to "http://files.consumerfinance.gov/data/". For example,"consumer-credit-trends/volume_data_Score_Level_AUT.csv".', required=True)), (b'date_published', wagtail.wagtailcore.blocks.DateBlock(help_text=b'Automatically generated when CCT cron job runs')), (b'description', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Briefly summarize the chart for visually impaired users.', required=True)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of chart block.', default=False, required=False)), (b'last_updated_projected_data', wagtail.wagtailcore.blocks.DateBlock(help_text=b'Month of latest entry in dataset')), (b'metadata', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional metadata for the chart to use. For example, with CCT this would be the chart\'s "group".', required=False)), (b'note', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Text to display as a footnote. For example, "Data from the last six months are not final."', required=False)), (b'y_axis_label', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Custom y-axis label', required=False))])), (b'mortgage_chart_block', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'title', wagtail.wagtailcore.blocks.CharBlock(classname=b'title', required=True)), (b'description', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Chart summary for visually impaired users.', required=False)), (b'note', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Text for "Note" section of footnotes.', required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of chart block.', default=False, required=False))])), (b'mortgage_map_block', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'title', wagtail.wagtailcore.blocks.CharBlock(classname=b'title', required=True)), (b'description', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Chart summary for visually impaired users.', required=False)), (b'note', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Text for "Note" section of footnotes.', required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of chart block.', default=False, required=False))])), (b'mortgage_downloads_block', wagtail.wagtailcore.blocks.StructBlock([(b'show_archives', wagtail.wagtailcore.blocks.BooleanBlock(help_text='Check this box to allow the archival section to display. No section will appear if there are no archival downloads.', default=False, required=False))])), (b'snippet_list', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of snippet list.', default=False, required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'actions_column_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Choose the width in % that you wish to set the Actions column in a snippet list.', required=False, choices=[(b'70', b'70%'), (b'66', b'66%'), (b'60', b'60%'), (b'50', b'50%'), (b'40', b'40%'), (b'33', b'33%'), (b'30', b'30%')], label=b'Width of "Actions" column')), (b'snippet_type', wagtail.wagtailcore.blocks.ChoiceBlock(choices=v1.atomic_elements.organisms.get_snippet_type_choices)), (b'show_thumbnails', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"If selected, each snippet in the list will include a 150px-wide image from the snippet's thumbnail field.", required=False)), (b'actions', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'link_label', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., "Download" or "Order free prints"')), (b'snippet_field', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Corresponds to the available fields for the selected snippet type.', choices=v1.atomic_elements.organisms.get_snippet_field_choices))]))), (b'tags', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock(label=b'Tag'), help_text=b'Enter tag names to filter the snippets. For a snippet to match and be output in the list, it must have been tagged with all of the tag names listed here. The tag names are case-insensitive.'))])), (b'data_snapshot', wagtail.wagtailcore.blocks.StructBlock([(b'market_key', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Market identifier, e.g. AUT', max_length=20, required=True)), (b'num_originations', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Number of originations, e.g. 1.2 million', max_length=20)), (b'value_originations', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Total dollar value of originations, e.g. $3.4 billion', max_length=20)), (b'year_over_year_change', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Percentage change, e.g. 5.6% increase', max_length=20)), (b'last_updated_projected_data', wagtail.wagtailcore.blocks.DateBlock(help_text=b'Month of latest entry in dataset')), (b'num_originations_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Descriptive sentence, e.g. Auto loans originated', max_length=100)), (b'value_originations_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Descriptive sentence, e.g. Dollar volume of new loans', max_length=100)), (b'year_over_year_change_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Descriptive sentence, e.g. In year-over-year originations', max_length=100)), (b'image', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False, icon=b'image'))]))], blank=True)),
('secondary_nav_exclude_sibling_pages', models.BooleanField(default=False)),
],
options={
'abstract': False,
},
bases=('v1.cfgovpage',),
),
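# The explicit ('cfgovpage_ptr', models.OneToOneField(parent_link=True, ...))
# field above is how a migration spells out Django multi-table inheritance;
# in application code it is implied by subclassing the parent page model.
# A hedged sketch of the equivalent declaration (field bodies abbreviated,
# not copied from the app's models.py):
#
#     class BrowsePage(CFGOVPage):
#         header = StreamField([...], blank=True)   # blocks serialized above
#         content = StreamField([...], blank=True)
#         secondary_nav_exclude_sibling_pages = models.BooleanField(default=False)
#
# Django then creates the parent-link primary key ('cfgovpage_ptr') itself.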
migrations.CreateModel(
name='HomePage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'info_unit', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], default={b'level': b'h3'}, required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])), (b'half_width_link_blob', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))]))], blank=True)),
('latest_updates', wagtail.wagtailcore.fields.StreamField([(b'posts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'categories', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'speech-bubble', b'Blog'), (b'newspaper', b'Newsroom'), (b'document', b'Report'), (b'date', b'Events'), (b'microphone', b'Speech'), (b'bullhorn', b'Press release'), (b'contract', b'Op-ed'), (b'double-quote', b'Testimony')])), (b'link', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'date', wagtail.wagtailcore.blocks.DateTimeBlock(required=False))])))], blank=True)),
],
options={
'abstract': False,
},
bases=('v1.cfgovpage',),
),
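# Usage sketch for the 'latest_updates' stream defined above (assumes a saved
# HomePage instance exists; Wagtail 1.x iteration API). Each top-level block
# is a 'posts' ListBlock whose items are StructValues keyed by the names above:
#
#     home = HomePage.objects.live().first()
#     for bound_block in home.latest_updates:   # bound_block.block_type == 'posts'
#         for post in bound_block.value:        # ListBlock value -> list of StructValues
#             print(post['categories'], post['link']['url'], post['date'])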
migrations.CreateModel(
name='LandingPage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'hero', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Maximum character count: 25 (including spaces)', required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Maximum character count: 185 (including spaces)', required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), help_text=b'If your hero needs a call-to-action link, enter it here, rather than inside the body field.')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select to render any links given above as buttons.', required=False)), (b'image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Should be exactly 390px tall, and up to 940px wide, unless this is an overlay or bleeding style hero.', required=False)), (b'is_overlay', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to be a background image under the entire hero.', required=False)), (b'background_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False)), (b'is_white_text', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Turns the hero text white. Useful if using a dark background color or background image.', required=False)), (b'cta_link_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'If using a dark background color or background image, you may need to specify an alternate color for the call-to-action link. Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False, label=b'CTA link color')), (b'is_bleeding', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to bleed vertically off the top and bottom of the hero.', required=False)), (b'small_image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Provide an alternate image for small displays when using a bleeding or overlay hero.', required=False))])), (b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))]))], blank=True)),
('content', wagtail.wagtailcore.fields.StreamField([(b'info_unit_group', wagtail.wagtailcore.blocks.StructBlock([(b'format', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Choose the number and width of info unit columns.', choices=[(b'50-50', b'50/50'), (b'33-33-33', b'33/33/33'), (b'25-75', b'25/75')], label=b'Format')), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'If this field is not empty, the Heading field must also be set.', required=False)), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=True, required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of info unit group.', default=False, required=False)), (b'lines_between_items', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to show horizontal rule lines between info units.', default=False, required=False, label=b'Show rule lines between items')), (b'info_units', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], default={b'level': b'h3'}, required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))]))), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))]))])), (b'image_text_25_75_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])))])), (b'image_text_50_50_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))])), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'is_widescreen',
wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Use 16:9 image')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Show links as button')), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'half_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'third_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))]))], blank=True)),
],
options={
'abstract': False,
},
bases=('v1.cfgovpage',),
),
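# The (text, url) StructBlock recurs throughout every StreamField in this
# migration because the serializer inlines shared block definitions at each
# point of use. In application code it would be a single reusable block,
# roughly (class name assumed for illustration):
#
#     from wagtail.wagtailcore import blocks
#
#     class Hyperlink(blocks.StructBlock):
#         text = blocks.CharBlock(required=False)
#         url = blocks.CharBlock(default='/', required=False)
#
# so that (b'links', blocks.ListBlock(Hyperlink())) serializes to the literal
# StructBlock([...]) repeated in the fields above.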
migrations.CreateModel(
name='SublandingFilterablePage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'hero', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Maximum character count: 25 (including spaces)', required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Maximum character count: 185 (including spaces)', required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), help_text=b'If your hero needs a call-to-action link, enter it here, rather than inside the body field.')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select to render any links given above as buttons.', required=False)), (b'image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Should be exactly 390px tall, and up to 940px wide, unless this is an overlay or bleeding style hero.', required=False)), (b'is_overlay', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to be a background image under the entire hero.', required=False)), (b'background_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False)), (b'is_white_text', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Turns the hero text white. Useful if using a dark background color or background image.', required=False)), (b'cta_link_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'If using a dark background color or background image, you may need to specify an alternate color for the call-to-action link. Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False, label=b'CTA link color')), (b'is_bleeding', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to bleed vertically off the top and bottom of the hero.', required=False)), (b'small_image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Provide an alternate image for small displays when using a bleeding or overlay hero.', required=False))]))], blank=True)),
('content', wagtail.wagtailcore.fields.StreamField([(b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))])), (b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', 
wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'filter_controls', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'form_type', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'filterable-list', b'Filterable List'), (b'pdf-generator', b'PDF Generator')])), (b'title', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Title')), (b'post_date_description', wagtail.wagtailcore.blocks.CharBlock(default=b'Published')), (b'categories', wagtail.wagtailcore.blocks.StructBlock([(b'filter_category', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False)), (b'show_preview_categories', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False)), (b'page_type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=v1.util.ref.filterable_list_page_types))])), (b'topics', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Topics')), (b'authors', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Authors')), (b'date_range', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Filter Date Range')), (b'output_5050', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Render preview items as 50-50s')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Add links to post preview images and headings in filterable list results', default=False, required=False))])), (b'featured_content', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'category', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'featured-event', b'Featured event'), (b'featured-blog', b'Featured blog'), (b'featured-video', b'Featured video'), (b'featured-tool', b'Featured tool'), (b'featured-news', b'Featured news'), (b'featured', b'Featured')])), (b'post', wagtail.wagtailcore.blocks.PageChooserBlock(required=False)), (b'show_post_link', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Render post link?')), (b'post_link_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a 
screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), label=b'Additional Links')), (b'video', wagtail.wagtailcore.blocks.StructBlock([(b'id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., in "https://www.youtube.com/watch?v=en0Iq8II4fA", the ID is everything after the "?v=".', required=False, label=b'ID')), (b'url', wagtail.wagtailcore.blocks.CharBlock(help_text=b'You must use the embed URL, e.g., https://www.youtube.com/embed/JPTg8ZB3j5c?autoplay=1&enablejsapi=1', required=False, label=b'URL')), (b'height', wagtail.wagtailcore.blocks.CharBlock(default=b'320', required=False)), (b'width', wagtail.wagtailcore.blocks.CharBlock(default=b'568', required=False))]))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))]))])),
],
options={
'abstract': False,
},
bases=(v1.feeds.FilterableFeedPageMixin, v1.util.filterable_list.FilterableListMixin, 'v1.cfgovpage'),
),
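# SublandingPage below follows the same pattern as the model above: page types subclass
# CFGOVPage via multi-table inheritance (the *_ptr parent link), and makemigrations
# freezes every StreamField block definition into the migration, which is why these
# CreateModel calls are so large.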
migrations.CreateModel(
name='SublandingPage',
fields=[
('cfgovpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.CFGOVPage')),
('header', wagtail.wagtailcore.fields.StreamField([(b'hero', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Maximum character count: 25 (including spaces)', required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Maximum character count: 185 (including spaces)', required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), help_text=b'If your hero needs a call-to-action link, enter it here, rather than inside the body field.')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select to render any links given above as buttons.', required=False)), (b'image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Should be exactly 390px tall, and up to 940px wide, unless this is an overlay or bleeding style hero.', required=False)), (b'is_overlay', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to be a background image under the entire hero.', required=False)), (b'background_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False)), (b'is_white_text', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Turns the hero text white. Useful if using a dark background color or background image.', required=False)), (b'cta_link_color', wagtail.wagtailcore.blocks.CharBlock(help_text=b'If using a dark background color or background image, you may need to specify an alternate color for the call-to-action link. Specify a hex value (with the # sign) from our official palette: https://github.com/cfpb/cf-theme-cfpb/blob/master/src/color-palette.less', required=False, label=b'CTA link color')), (b'is_bleeding', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Select if you want the provided image to bleed vertically off the top and bottom of the hero.', required=False)), (b'small_image', wagtail.wagtailimages.blocks.ImageChooserBlock(help_text=b'Provide an alternate image for small displays when using a bleeding or overlay hero.', required=False))]))], blank=True)),
('content', wagtail.wagtailcore.fields.StreamField([(b'text_introduction', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to bottom of text introduction.', required=False, label=b'Has bottom rule'))])), (b'featured_content', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'category', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'featured-event', b'Featured event'), (b'featured-blog', b'Featured blog'), (b'featured-video', b'Featured video'), (b'featured-tool', b'Featured tool'), (b'featured-news', b'Featured news'), (b'featured', b'Featured')])), (b'post', wagtail.wagtailcore.blocks.PageChooserBlock(required=False)), (b'show_post_link', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Render post link?')), (b'post_link_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), label=b'Additional Links')), (b'video', wagtail.wagtailcore.blocks.StructBlock([(b'id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., in "https://www.youtube.com/watch?v=en0Iq8II4fA", the ID is everything after the "?v=".', required=False, label=b'ID')), (b'url', wagtail.wagtailcore.blocks.CharBlock(help_text=b'You must use the embed URL, e.g., https://www.youtube.com/embed/JPTg8ZB3j5c?autoplay=1&enablejsapi=1', required=False, label=b'URL')), (b'height', wagtail.wagtailcore.blocks.CharBlock(default=b'320', required=False)), (b'width', wagtail.wagtailcore.blocks.CharBlock(default=b'568', required=False))]))])), (b'info_unit_group', wagtail.wagtailcore.blocks.StructBlock([(b'format', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Choose the number and width of info unit columns.', choices=[(b'50-50', b'50/50'), (b'33-33-33', b'33/33/33'), (b'25-75', b'25/75')], label=b'Format')), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'If this field is not empty, the Heading field must also be set.', required=False)), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=True, required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of info unit group.', default=False, required=False)), (b'lines_between_items', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to show horizontal rule lines between info units.', default=False, required=False, label=b'Show rule lines between items')), (b'info_units', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], default={b'level': b'h3'}, required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))]))), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))]))])), (b'image_text_25_75_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])),
(b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])))])), (b'image_text_50_50_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))])), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'is_widescreen', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Use 16:9 image')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Show links as button')), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'half_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'third_width_link_blob_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'has_top_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_bottom_border', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'link_blobs', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H3 heading')), (b'sub_heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'H4 heading')), (b'sub_heading_icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A list of icon names can be obtained at: https://cfpb.github.io/capital-framework/components/cf-icons/. Examples: linkedin-square, facebook-square, etc.', required=False, label=b'H4 heading icon')), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'post_preview_snapshot', wagtail.wagtailcore.blocks.StructBlock([(b'limit', wagtail.wagtailcore.blocks.CharBlock(help_text=b'How many posts do you want to show?', default=b'3', label=b'Limit')), (b'post_date_description', wagtail.wagtailcore.blocks.CharBlock(default=b'Published'))])), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'contact', wagtail.wagtailcore.blocks.StructBlock([(b'contact', wagtail.wagtailsnippets.blocks.SnippetChooserBlock(b'v1.Contact')), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Add a horizontal rule line to top of contact block.', default=False, required=False))])), (b'formfield_with_button', wagtail.wagtailcore.blocks.StructBlock([(b'btn_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'required', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'info', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Disclaimer')), (b'label', wagtail.wagtailcore.blocks.CharBlock(required=True)), (b'type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'text', b'Text'),
(b'checkbox', b'Checkbox'), (b'email', b'Email'), (b'number', b'Number'), (b'url', b'URL'), (b'radio', b'Radio')])), (b'placeholder', wagtail.wagtailcore.blocks.CharBlock(required=False))])), (b'reg_comment', wagtail.wagtailcore.blocks.StructBlock([(b'document_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Federal Register document ID number to which the comment should be submitted. Should follow this format: CFPB-YYYY-####-####', required=True, label=b'Document ID')), (b'generic_regs_link', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If unchecked, the link to comment at Regulations.gov if you want to add attachments will link directly to the document given above. Leave this checked if this comment form is being published before the full document is live at Regulations.gov, then uncheck it when the full document has been published.', default=True, required=False, label=b'Use generic Regs.gov link?')), (b'id', wagtail.wagtailcore.blocks.CharBlock(help_text=b"Sets the `id` attribute in the form's markup. If not set, the form will be assigned a base id of `o-reg-comment_` with a random number appended.", required=False, label=b'Form ID'))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))])), (b'snippet_list', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of snippet list.', default=False, required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'actions_column_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Choose the width in % that you wish to set the Actions column in a snippet list.', required=False, choices=[(b'70', b'70%'), (b'66', b'66%'), (b'60', b'60%'), (b'50', b'50%'), (b'40', b'40%'), (b'33', b'33%'), (b'30', b'30%')], label=b'Width of "Actions" column')), 
(b'snippet_type', wagtail.wagtailcore.blocks.ChoiceBlock(choices=v1.atomic_elements.organisms.get_snippet_type_choices)), (b'show_thumbnails', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"If selected, each snippet in the list will include a 150px-wide image from the snippet's thumbnail field.", required=False)), (b'actions', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'link_label', wagtail.wagtailcore.blocks.CharBlock(help_text=b'E.g., "Download" or "Order free prints"')), (b'snippet_field', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Corresponds to the available fields for the selected snippet type.', choices=v1.atomic_elements.organisms.get_snippet_field_choices))]))), (b'tags', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock(label=b'Tag'), help_text=b'Enter tag names to filter the snippets. For a snippet to match and be output in the list, it must have been tagged with all of the tag names listed here. The tag names are case-insensitive.'))]))], blank=True)),
('sidebar_breakout', wagtail.wagtailcore.fields.StreamField([(b'slug', wagtail.wagtailcore.blocks.CharBlock(icon=b'title')), (b'heading', wagtail.wagtailcore.blocks.CharBlock(icon=b'title')), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'breakout_image', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailimages.blocks.ImageChooserBlock()), (b'is_round', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Round?')), (b'icon', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Enter icon class name.')), (b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, label=b'Introduction Heading')), (b'body', wagtail.wagtailcore.blocks.TextBlock(required=False, label=b'Introduction Body'))], heading=b'Breakout Image', icon=b'image')), (b'related_posts', wagtail.wagtailcore.blocks.StructBlock([(b'limit', wagtail.wagtailcore.blocks.CharBlock(help_text=b'This limit applies to EACH TYPE of post this module retrieves, not the total number of retrieved posts.', default=b'3')), (b'show_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'This toggles the heading and icon for the related types.', default=True, required=False, label=b'Show Heading and Icon?')), (b'header_title', wagtail.wagtailcore.blocks.CharBlock(default=b'Further reading', label=b'Slug Title')), (b'relate_posts', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, editable=False, label=b'Blog Posts')), (b'relate_newsroom', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, editable=False, label=b'Newsroom')), (b'relate_events', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Events')), (b'specific_categories', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'Blog', ((b'At the CFPB', b'At the CFPB'), (b'Policy & Compliance', b'Policy and compliance'), (b'Data, Research & Reports', b'Data, research, and reports'), (b'Info for Consumers', b'Info for consumers'))), (b'Newsroom', ((b'Op-Ed', b'Op-ed'), (b'Press Release', b'Press release'), (b'Speech', b'Speech'), (b'Testimony', b'Testimony')))]), required=False)), (b'and_filtering', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, related posts will only be pulled in if they match ALL topic tags set on this page. Otherwise, related posts can match any one topic tag.', default=False, required=False, label=b'Match all topic tags'))])), (b'job_listing_list', wagtail.wagtailcore.blocks.StructBlock([(b'limit', v1.atomic_elements.atoms.IntegerBlock(help_text=b'Limit list to this number of items', default=5, min_value=0, label=b'Maximum items')), (b'heading', wagtail.wagtailcore.blocks.CharBlock(help_text=b'List heading', required=False)), (b'more_jobs_page', wagtail.wagtailcore.blocks.PageChooserBlock(help_text=b'Link to full list of jobs')), (b'more_jobs_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Text to show on link to full list of jobs', required=False)), (b'hide_closed', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Whether to hide jobs that are not currently open (jobs will automatically update)', default=True, required=False))]))], blank=True)),
],
options={
'abstract': False,
},
bases=('v1.cfgovpage',),
),
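# The AddField operations that follow wire up relations deferred until all of the
# referenced models exist: tag managers, foreign keys, and ParentalKeys.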
migrations.AddField(
model_name='resource',
name='tags',
field=taggit.managers.TaggableManager(to='taggit.Tag', through='v1.ResourceTag', blank=True, help_text=b'Tags can be used to filter snippets in a Snippet List.', verbose_name='Tags'),
),
migrations.AddField(
model_name='resource',
name='thumbnail',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='v1.CFGOVImage', null=True),
),
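# CFGOVTaggedPages is the through model for page tags: each row links one CFGOVPage
# (via ParentalKey) to one taggit Tag.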
migrations.AddField(
model_name='cfgovtaggedpages',
name='content_object',
field=modelcluster.fields.ParentalKey(to='v1.CFGOVPage'),
),
migrations.AddField(
model_name='cfgovtaggedpages',
name='tag',
field=models.ForeignKey(related_name='v1_cfgovtaggedpages_items', to='taggit.Tag'),
),
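# CFGOVPageCategory attaches to its page through a ParentalKey with
# related_name='categories', so categories are edited inline with the page.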
migrations.AddField(
model_name='cfgovpagecategory',
name='page',
field=modelcluster.fields.ParentalKey(related_name='categories', to='v1.CFGOVPage'),
),
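# CFGOVPage itself gains two ClusterTaggableManagers (authors and tags) and an
# optional social_sharing_image foreign key to CFGOVImage.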
migrations.AddField(
model_name='cfgovpage',
name='authors',
field=modelcluster.contrib.taggit.ClusterTaggableManager(to='taggit.Tag', through='v1.CFGOVAuthoredPages', blank=True, help_text=b'A comma separated list of authors.', verbose_name=b'Authors'),
),
migrations.AddField(
model_name='cfgovpage',
name='social_sharing_image',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='v1.CFGOVImage', help_text=b'Optionally select a custom image to appear when users share this page on social media websites. Minimum size: 1200w x 630h.', null=True),
),
migrations.AddField(
model_name='cfgovpage',
name='tags',
field=modelcluster.contrib.taggit.ClusterTaggableManager(to='taggit.Tag', through='v1.CFGOVTaggedPages', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'),
),
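# CFGOVAuthoredPages is the through model backing the authors manager added above.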
migrations.AddField(
model_name='cfgovauthoredpages',
name='content_object',
field=modelcluster.fields.ParentalKey(to='v1.CFGOVPage'),
),
migrations.AddField(
model_name='cfgovauthoredpages',
name='tag',
field=models.ForeignKey(related_name='v1_cfgovauthoredpages_items', to='taggit.Tag'),
),
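# ActivityLogPage adds no fields of its own; the empty subclass presumably exists to
# give SublandingFilterablePage content a distinct page type.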
migrations.CreateModel(
name='ActivityLogPage',
fields=[
('sublandingfilterablepage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.SublandingFilterablePage')),
],
options={
'abstract': False,
},
bases=('v1.sublandingfilterablepage',),
),
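# BlogPage: an AbstractFilterPage subclass whose content StreamField allows full-width
# text, 50/50 image-and-text groups, feedback forms, email signup, and expandables.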
migrations.CreateModel(
name='BlogPage',
fields=[
('abstractfilterpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.AbstractFilterPage')),
('content', wagtail.wagtailcore.fields.StreamField([(b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', 
wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'image_text_50_50_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))])), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'is_widescreen', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Use 16:9 image')), (b'is_button', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Show links as button')), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))])))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))])), (b'email_signup', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'gd_code', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'form_field', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'btn_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'required', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'info', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Disclaimer')), (b'label', wagtail.wagtailcore.blocks.CharBlock(required=True)), (b'type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'text', b'Text'), (b'checkbox', b'Checkbox'), (b'email', b'Email'), (b'number', b'Number'), (b'url', b'URL'), (b'radio', b'Radio')])), (b'placeholder', wagtail.wagtailcore.blocks.CharBlock(required=False))]), required=False, icon=b'mail'))])), (b'expandable', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension',
wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))]))])),
],
options={
'abstract': False,
},
bases=('v1.abstractfilterpage',),
),
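# DocumentDetailPage: an AbstractFilterPage subclass whose content StreamField adds
# expandables, expandable groups, and tables alongside full-width text and feedback.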
migrations.CreateModel(
name='DocumentDetailPage',
fields=[
('abstractfilterpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.AbstractFilterPage')),
('content', wagtail.wagtailcore.fields.StreamField([(b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', 
wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'expandable', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])), (b'expandable_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_accordion', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of expandable group.', default=False, required=False)), (b'expandables', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', 
wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))]))], blank=True)),
],
options={
'abstract': False,
},
bases=('v1.abstractfilterpage',),
),
migrations.CreateModel(
name='EventArchivePage',
fields=[
('browsefilterablepage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.BrowseFilterablePage')),
],
options={
'abstract': False,
},
bases=('v1.browsefilterablepage',),
),
migrations.CreateModel(
name='EventPage',
fields=[
('abstractfilterpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.AbstractFilterPage')),
('body', wagtail.wagtailcore.fields.RichTextField(verbose_name=b'Subheading', blank=True)),
('archive_body', wagtail.wagtailcore.fields.RichTextField(blank=True)),
('live_body', wagtail.wagtailcore.fields.RichTextField(blank=True)),
('start_dt', models.DateTimeField(null=True, verbose_name=b'Start', blank=True)),
('end_dt', models.DateTimeField(null=True, verbose_name=b'End', blank=True)),
('future_body', wagtail.wagtailcore.fields.RichTextField(blank=True)),
('flickr_url', models.URLField(verbose_name=b'Flickr URL', blank=True)),
('youtube_url', models.URLField(blank=True, help_text=b'Format: https://www.youtube.com/embed/video_id. It can be obtained by clicking on Share > Embed on Youtube.', verbose_name=b'Youtube URL', validators=[django.core.validators.RegexValidator(regex=b'^https?:\\/\\/www\\.youtube\\.com\\/embed\\/.*$')])),
('live_stream_availability', models.BooleanField(default=False, verbose_name=b'Streaming?')),
('live_stream_url', models.URLField(help_text=b'Format: https://www.ustream.tv/embed/video_id or https://www.youtube.com/embed/video_id.', verbose_name=b'URL', blank=True)),
('live_stream_date', models.DateTimeField(null=True, verbose_name=b'Go Live Date', blank=True)),
('venue_name', models.CharField(max_length=100, blank=True)),
('venue_street', models.CharField(max_length=100, blank=True)),
('venue_suite', models.CharField(max_length=100, blank=True)),
('venue_city', models.CharField(max_length=100, blank=True)),
('venue_state', localflavor.us.models.USStateField(blank=True, max_length=2, choices=[(b'AL', b'Alabama'), (b'AK', b'Alaska'), (b'AS', b'American Samoa'), (b'AZ', b'Arizona'), (b'AR', b'Arkansas'), (b'AA', b'Armed Forces Americas'), (b'AE', b'Armed Forces Europe'), (b'AP', b'Armed Forces Pacific'), (b'CA', b'California'), (b'CO', b'Colorado'), (b'CT', b'Connecticut'), (b'DE', b'Delaware'), (b'DC', b'District of Columbia'), (b'FL', b'Florida'), (b'GA', b'Georgia'), (b'GU', b'Guam'), (b'HI', b'Hawaii'), (b'ID', b'Idaho'), (b'IL', b'Illinois'), (b'IN', b'Indiana'), (b'IA', b'Iowa'), (b'KS', b'Kansas'), (b'KY', b'Kentucky'), (b'LA', b'Louisiana'), (b'ME', b'Maine'), (b'MD', b'Maryland'), (b'MA', b'Massachusetts'), (b'MI', b'Michigan'), (b'MN', b'Minnesota'), (b'MS', b'Mississippi'), (b'MO', b'Missouri'), (b'MT', b'Montana'), (b'NE', b'Nebraska'), (b'NV', b'Nevada'), (b'NH', b'New Hampshire'), (b'NJ', b'New Jersey'), (b'NM', b'New Mexico'), (b'NY', b'New York'), (b'NC', b'North Carolina'), (b'ND', b'North Dakota'), (b'MP', b'Northern Mariana Islands'), (b'OH', b'Ohio'), (b'OK', b'Oklahoma'), (b'OR', b'Oregon'), (b'PA', b'Pennsylvania'), (b'PR', b'Puerto Rico'), (b'RI', b'Rhode Island'), (b'SC', b'South Carolina'), (b'SD', b'South Dakota'), (b'TN', b'Tennessee'), (b'TX', b'Texas'), (b'UT', b'Utah'), (b'VT', b'Vermont'), (b'VI', b'Virgin Islands'), (b'VA', b'Virginia'), (b'WA', b'Washington'), (b'WV', b'West Virginia'), (b'WI', b'Wisconsin'), (b'WY', b'Wyoming')])),
('venue_zip', models.IntegerField(null=True, blank=True)),
('agenda_items', wagtail.wagtailcore.fields.StreamField([(b'item', wagtail.wagtailcore.blocks.StructBlock([(b'start_time', wagtail.wagtailcore.blocks.TimeBlock(required=False, label=b'Start')), (b'end_time', wagtail.wagtailcore.blocks.TimeBlock(required=False, label=b'End')), (b'description', wagtail.wagtailcore.blocks.CharBlock(max_length=100, required=False)), (b'location', wagtail.wagtailcore.blocks.CharBlock(max_length=100, required=False)), (b'speakers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'name', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.URLBlock(required=False))], required=False, icon=b'user')))]))], blank=True)),
('archive_image', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='wagtailimages.Image', null=True)),
('speech_transcript', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='wagtaildocs.Document', null=True)),
('video_transcript', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='wagtaildocs.Document', null=True)),
],
options={
'abstract': False,
},
bases=('v1.abstractfilterpage',),
),
migrations.CreateModel(
name='LearnPage',
fields=[
('abstractfilterpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.AbstractFilterPage')),
('content', wagtail.wagtailcore.fields.StreamField([(b'info_unit_group_25_75_only', wagtail.wagtailcore.blocks.StructBlock([(b'format', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'25/75 is the only allowed format for this page type.', choices=[(b'25-75', b'25/75')], label=b'Format')), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], required=False)), (b'intro', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'If this field is not empty, the Heading field must also be set.', required=False)), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=True, required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of info unit group.', default=False, required=False)), (b'lines_between_items', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to show horizontal rule lines between info units.', default=False, required=False, label=b'Show rule lines between items')), (b'info_units', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'heading', wagtail.wagtailcore.blocks.StructBlock([(b'text', v1.blocks.HeadingTextBlock(required=False)), (b'level', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'h2', b'H2'), (b'h3', b'H3'), (b'h4', b'H4')])), (b'icon', v1.blocks.HeadingIconBlock(help_text=b'Input the name of an icon to appear to the left of the heading. E.g., approved, help-round, etc. <a href="https://cfpb.github.io/capital-framework/components/cf-icons/#icons">See full list of icons</a>', required=False))], default={b'level': b'h3'}, required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False, blank=True)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False))]))), (b'sharing', wagtail.wagtailcore.blocks.StructBlock([(b'shareable', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'If checked, share links will be included below the items.', required=False, label=b'Include sharing links?')), (b'share_blurb', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Sets the tweet text, email subject line, and LinkedIn post text.', required=False))]))])), (b'image_text_25_75_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False, icon=b'title')), (b'link_image_and_heading', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b"Check this to link all images and headings to the URL of the first link in their unit's list, if there is a link.", default=False, required=False)), (b'image_texts', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))]), required=False)), (b'has_rule', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])))])), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'full_width_text', wagtail.wagtailcore.blocks.StreamBlock([(b'content_with_anchor', wagtail.wagtailcore.blocks.StructBlock([(b'content_block', wagtail.wagtailcore.blocks.RichTextBlock()), (b'anchor_link', wagtail.wagtailcore.blocks.StructBlock([(b'link_id', wagtail.wagtailcore.blocks.CharBlock(help_text=b'\n ID will be auto-generated on save.\n However, you may enter some human-friendly text that\n will be incorporated to make it easier to read.\n ', required=False, label=b'ID for this content block'))]))])), (b'content', wagtail.wagtailcore.blocks.RichTextBlock(icon=b'edit')), (b'media', wagtail.wagtailimages.blocks.ImageChooserBlock(icon=b'image')), (b'quote', wagtail.wagtailcore.blocks.StructBlock([(b'body', wagtail.wagtailcore.blocks.TextBlock()), (b'citation', wagtail.wagtailcore.blocks.TextBlock(required=False)), (b'is_large', wagtail.wagtailcore.blocks.BooleanBlock(required=False))])), (b'cta', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', 
wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'related_links', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'links', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'image_inset', wagtail.wagtailcore.blocks.StructBlock([(b'image', wagtail.wagtailcore.blocks.StructBlock([(b'upload', wagtail.wagtailimages.blocks.ImageChooserBlock(required=False)), (b'alt', wagtail.wagtailcore.blocks.CharBlock(help_text=b"If the image is decorative (i.e., if a screenreader wouldn't have anything useful to say about it), leave the Alt field blank.", required=False))])), (b'image_position', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'right', b'right'), (b'left', b'left')])), (b'is_image_decorative', wagtail.wagtailcore.blocks.BooleanBlock(required=False, label=b'Image decorative')), (b'image_width', wagtail.wagtailcore.blocks.ChoiceBlock(help_text=b'Default is 270px.', choices=[(170, b'170px'), (270, b'270px')], label=b'Image Width')), (b'text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_bottom_rule', wagtail.wagtailcore.blocks.BooleanBlock(default=True, required=False, label=b'Bottom Rule'))])), (b'reusable_text', v1.blocks.ReusableTextChooserBlock(b'v1.ReusableText'))])), (b'expandable', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', 
wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])), (b'expandable_group', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'body', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'is_accordion', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'has_top_rule_line', wagtail.wagtailcore.blocks.BooleanBlock(help_text=b'Check this to add a horizontal rule line to top of expandable group.', default=False, required=False)), (b'expandables', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'is_bordered', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_midtone', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'is_expanded', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'content', wagtail.wagtailcore.blocks.StreamBlock([(b'paragraph', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'well', wagtail.wagtailcore.blocks.StructBlock([(b'content', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Well'))])), (b'links', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'email', wagtail.wagtailcore.blocks.StructBlock([(b'emails', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])))])), (b'phone', wagtail.wagtailcore.blocks.StructBlock([(b'fax', wagtail.wagtailcore.blocks.BooleanBlock(default=False, required=False, label=b'Is this number a fax?')), (b'phones', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'number', wagtail.wagtailcore.blocks.CharBlock(max_length=15)), (b'extension', wagtail.wagtailcore.blocks.CharBlock(max_length=4, required=False)), (b'vanity', wagtail.wagtailcore.blocks.CharBlock(help_text=b'A phoneword version of the above number', max_length=15, required=False)), (b'tty', wagtail.wagtailcore.blocks.CharBlock(max_length=15, label=b'TTY', required=False)), (b'tty_ext', wagtail.wagtailcore.blocks.CharBlock(max_length=4, label=b'TTY Extension', required=False))])))])), (b'address', wagtail.wagtailcore.blocks.StructBlock([(b'label', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'title', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'street', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'city', wagtail.wagtailcore.blocks.CharBlock(max_length=50, required=False)), (b'state', wagtail.wagtailcore.blocks.CharBlock(max_length=25, required=False)), (b'zip_code', wagtail.wagtailcore.blocks.CharBlock(max_length=15, required=False))]))], blank=True))])))])), (b'table', wagtail.wagtailcore.blocks.StructBlock([(b'headers', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.CharBlock())), (b'rows', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StreamBlock([(b'hyperlink', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False))])), (b'text', wagtail.wagtailcore.blocks.CharBlock()), (b'text_blob', wagtail.wagtailcore.blocks.TextBlock()), (b'rich_text_blob', wagtail.wagtailcore.blocks.RichTextBlock())])))], editable=False)), (b'table_block', v1.atomic_elements.organisms.AtomicTableBlock(table_options={b'renderer': b'html'})), (b'call_to_action', wagtail.wagtailcore.blocks.StructBlock([(b'slug_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'paragraph_text', wagtail.wagtailcore.blocks.RichTextBlock(required=False)), (b'button', wagtail.wagtailcore.blocks.StructBlock([(b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'url', wagtail.wagtailcore.blocks.CharBlock(default=b'/', required=False)), (b'size', wagtail.wagtailcore.blocks.ChoiceBlock(choices=[(b'regular', b'Regular'), (b'large', b'Large Primary')]))]))])), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))])), (b'video_player', wagtail.wagtailcore.blocks.StructBlock([(b'video_url', wagtail.wagtailcore.blocks.RegexBlock(regex=b'^https:\\/\\/www\\.youtube\\.com\\/embed\\/.+$', default=b'https://www.youtube.com/embed/', required=True, error_messages={b'required': b'The YouTube URL field is required for video players.', b'invalid': b'The YouTube URL is in the wrong format. You must use the embed URL (https://www.youtube.com/embed/video_id), which can be obtained by clicking Share > Embed on the YouTube video page.'}, label=b'YouTube Embed URL'))])), (b'email_signup', wagtail.wagtailcore.blocks.StructBlock([(b'heading', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'gd_code', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'form_field', wagtail.wagtailcore.blocks.ListBlock(wagtail.wagtailcore.blocks.StructBlock([(b'btn_text', wagtail.wagtailcore.blocks.CharBlock(required=False)), (b'required', wagtail.wagtailcore.blocks.BooleanBlock(required=False)), (b'info', wagtail.wagtailcore.blocks.RichTextBlock(required=False, label=b'Disclaimer')), (b'label', wagtail.wagtailcore.blocks.CharBlock(required=True)), (b'type', wagtail.wagtailcore.blocks.ChoiceBlock(required=False, choices=[(b'text', b'Text'), (b'checkbox', b'Checkbox'), (b'email', b'Email'), (b'number', b'Number'), (b'url', b'URL'), (b'radio', b'Radio')])), (b'placeholder', wagtail.wagtailcore.blocks.CharBlock(required=False))]), required=False, icon=b'mail'))]))], blank=True)),
],
options={
'abstract': False,
},
bases=('v1.abstractfilterpage',),
),
migrations.CreateModel(
name='LegacyBlogPage',
fields=[
('abstractfilterpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.AbstractFilterPage')),
('content', wagtail.wagtailcore.fields.StreamField([(b'content', wagtail.wagtailcore.blocks.RawHTMLBlock(help_text=b'Content from WordPress unescaped.')), (b'feedback', wagtail.wagtailcore.blocks.StructBlock([(b'was_it_helpful_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Use this field only for feedback forms that use "Was this helpful?" radio buttons.', default=b'Was this page helpful to you?', required=False)), (b'intro_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional feedback intro', required=False)), (b'question_text', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Optional expansion on intro', required=False)), (b'radio_intro', wagtail.wagtailcore.blocks.CharBlock(help_text=b'Leave blank unless you are building a feedback form with extra radio-button prompts, as in /owning-a-home/help-us-improve/.', required=False)), (b'radio_text', wagtail.wagtailcore.blocks.CharBlock(default=b'This information helps us understand your question better.', required=False)), (b'radio_question_1', wagtail.wagtailcore.blocks.CharBlock(default=b'How soon do you expect to buy a home?', required=False)), (b'radio_question_2', wagtail.wagtailcore.blocks.CharBlock(default=b'Do you currently own a home?', required=False)), (b'button_text', wagtail.wagtailcore.blocks.CharBlock(default=b'Submit')), (b'contact_advisory', wagtail.wagtailcore.blocks.RichTextBlock(help_text=b'Use only for feedback forms that ask for a contact email', required=False))]))])),
],
options={
'abstract': False,
},
bases=('v1.abstractfilterpage',),
),
migrations.CreateModel(
name='NewsroomLandingPage',
fields=[
('browsefilterablepage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.BrowseFilterablePage')),
],
options={
'abstract': False,
},
bases=('v1.browsefilterablepage',),
),
migrations.AlterUniqueTogether(
name='cfgovrendition',
unique_together=set([('image', 'filter_spec', 'focal_point_key')]),
),
migrations.AddField(
model_name='abstractfilterpage',
name='preview_image',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='v1.CFGOVImage', null=True),
),
migrations.CreateModel(
name='LegacyNewsroomPage',
fields=[
('legacyblogpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.LegacyBlogPage')),
],
options={
'abstract': False,
},
bases=('v1.legacyblogpage',),
),
migrations.CreateModel(
name='NewsroomPage',
fields=[
('blogpage_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='v1.BlogPage')),
],
options={
'abstract': False,
},
bases=('v1.blogpage',),
),
]
| 306.117355 | 31353 | 0.746119 | 24440 | 185201 | 5.569435 | 0.051637 | 0.202723 | 0.264655 | 0.140372 | 0.893378 | 0.879919 | 0.865366 | 0.855132 | 0.842907 | 0.834157 | 0 | 0.00894 | 0.090388 | 185201 | 604 | 31354 | 306.624172 | 0.799063 | 0.000113 | 0 | 0.452261 | 0 | 0.19263 | 0.300273 | 0.030868 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.00335 | 0.041876 | 0 | 0.048576 | 0.005025 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10
4c02a1aa5e0f073c6a1ee2a16365721eabcf4d09 | 78 | py | Python | core/admin.py | mattmakai/txt2react | 80210fa90909fbf72ab9f908f815c9adcbdec503 | [
"MIT"
] | 17 | 2016-09-02T10:35:30.000Z | 2021-09-09T02:53:34.000Z | core/admin.py | makaimc/txt2react | 80210fa90909fbf72ab9f908f815c9adcbdec503 | [
"MIT"
] | null | null | null | core/admin.py | makaimc/txt2react | 80210fa90909fbf72ab9f908f815c9adcbdec503 | [
"MIT"
] | 7 | 2015-01-02T00:01:07.000Z | 2016-05-30T12:58:06.000Z | from django.contrib import admin
from django.contrib.sites.models import Site
| 26 | 44 | 0.846154 | 12 | 78 | 5.5 | 0.666667 | 0.30303 | 0.515152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 78 | 2 | 45 | 39 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4c62b34a8df14fd4979241a803b6d0960eb40aa7 | 8134 | py | Python | test/test_parseduri.py | fredrikhoyer/citizenshell | 87361758537d0aea215ac2c16eca349244aee832 | [
"MIT"
] | 14 | 2018-03-22T19:54:14.000Z | 2021-03-28T15:07:23.000Z | test/test_parseduri.py | fredrikhoyer/citizenshell | 87361758537d0aea215ac2c16eca349244aee832 | [
"MIT"
] | 15 | 2018-02-07T21:31:37.000Z | 2022-02-28T14:08:21.000Z | test/test_parseduri.py | fredrikhoyer/citizenshell | 87361758537d0aea215ac2c16eca349244aee832 | [
"MIT"
] | 7 | 2018-05-13T11:50:53.000Z | 2021-04-14T13:05:21.000Z | from citizenshell import ParsedUri
from pytest import raises
try:
from urllib.parse import quote_plus
except ImportError:
from urllib import quote_plus
def test_parse_uri_all_in_uri():
result = ParsedUri("myscheme://john:secretpassword@thehostname.com:1234")
assert result.scheme == "myscheme"
assert result.username == "john"
assert result.password == "secretpassword"
assert result.hostname == "thehostname.com"
assert result.port == 1234
def test_parse_uri_all_in_uri_password_with_weird_char():
password = "pass?::"
result = ParsedUri("myscheme://john:%s@thehostname.com:1234" % quote_plus(password))
assert result.scheme == "myscheme"
assert result.username == "john"
assert result.password == password
assert result.hostname == "thehostname.com"
assert result.port == 1234
def test_parse_uri_no_password_in_uri():
result = ParsedUri("myscheme://john@thehostname.com:1234")
assert result.scheme == "myscheme"
assert result.username == "john"
assert result.password == None
assert result.hostname == "thehostname.com"
assert result.port == 1234
def test_parse_uri_no_username_in_uri():
result = ParsedUri("myscheme://:secretpassword@thehostname.com:1234")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == "secretpassword"
assert result.hostname == "thehostname.com"
assert result.port == 1234
def test_parse_uri_no_userinfo_in_uri():
result = ParsedUri("myscheme://thehostname.com:1234")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == None
assert result.hostname == "thehostname.com"
assert result.port == 1234
def test_parse_uri_scheme_and_port_only():
result = ParsedUri("myscheme://:1234")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == None
assert result.hostname == None
assert result.port == 1234
def test_parse_uri_scheme_only():
result = ParsedUri("myscheme://")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == None
assert result.hostname == None
assert result.port == None
def test_parse_uri_scheme_only_no_slash_slash():
result = ParsedUri("myscheme")
assert result.scheme == None
assert result.username == None
assert result.password == None
assert result.hostname == None
assert result.port == None
def test_parse_uri_empty_string():
result = ParsedUri("")
assert result.scheme == "local"
assert result.username == None
assert result.password == None
assert result.hostname == None
assert result.port == None
def test_parse_uri_no_argument():
result = ParsedUri()
assert result.scheme == "local"
assert result.username == None
assert result.password == None
assert result.hostname == None
assert result.port == None
def test_parse_uri_port_as_arg():
result = ParsedUri("myscheme://thehostname.com", port=4567)
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == None
assert result.hostname == "thehostname.com"
assert result.port == 4567
def test_parse_uri_only_scheme_and_hostname():
result = ParsedUri("myscheme://thehostname.com")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == None
assert result.hostname == "thehostname.com"
assert result.port == None
def test_parse_uri_only_scheme_and_hostname_in_uri_username_as_arg():
result = ParsedUri("myscheme://thehostname.com", username="john")
assert result.scheme == "myscheme"
assert result.username == "john"
assert result.password == None
assert result.hostname == "thehostname.com"
assert result.port == None
def test_parse_uri_only_scheme_and_hostname_in_uri_password_as_arg():
result = ParsedUri("myscheme://thehostname.com", password="secretpassword")
assert result.scheme == "myscheme"
assert result.username == None
assert result.password == "secretpassword"
assert result.hostname == "thehostname.com"
assert result.port == None
def test_parsed_uri_telnet_no_username():
with raises(RuntimeError) as e:
ParsedUri("telnet://hostname")
assert e.value.args == ("scheme '%s' requires 'hostname' and 'username'", 'telnet')
def test_parsed_uri_telnet_username_as_arg():
ParsedUri("telnet://hostname", username="john")
def test_parsed_uri_ssh_no_username():
with raises(RuntimeError) as e:
ParsedUri("ssh://hostname")
assert e.value.args == ("scheme '%s' requires 'hostname' and 'username'", 'ssh')
def test_parsed_uri_ssh_username_as_arg():
ParsedUri("ssh://hostname", username="john")
def test_parsed_uri_fill_in_default_port():
assert ParsedUri("ssh://john@hostname").port == 22
assert ParsedUri("telnet://john@hostname").port == 23
assert ParsedUri("adb://hostname").port == 5555
assert ParsedUri("adb+tcp://hostname").port == 5555
assert ParsedUri("adb+usb://device").port == None
def test_parsed_uri_adb():
result = ParsedUri("adb://something:4444")
assert result.scheme == "adb"
assert result.port == 4444
assert result.hostname == "something"
assert result.device == None
def test_parsed_uri_adb_tcp():
result = ParsedUri("adb+tcp://something:4444")
assert result.scheme == "adb"
assert result.port == 4444
assert result.hostname == "something"
assert result.device == None
def test_parsed_uri_adb_usb():
result = ParsedUri("adb+usb://youpla")
assert result.scheme == "adb"
assert result.port == None
assert result.hostname == None
assert result.device == "youpla"
def test_parse_uri_username_in_uri_and_as_arg():
with raises(RuntimeError):
ParsedUri("myscheme://bender@thehostname.com", username="john")
def test_parse_uri_password_in_uri_and_as_arg():
with raises(RuntimeError):
ParsedUri("myscheme://bender:futurama@thehostname.com", password="futurama")
def test_parse_uri_serial_baudrate_no_username():
result = ParsedUri("serial:///dev/ttyUSB3?baudrate=115200")
assert result.scheme == "serial"
assert result.port == "/dev/ttyUSB3"
assert result.baudrate == 115200
def test_parse_uri_serial_baudrate_no_username_baudrate_kwargs():
result = ParsedUri("serial:///dev/ttyUSB3", baudrate=5252)
assert result.scheme == "serial"
assert result.port == "/dev/ttyUSB3"
assert result.baudrate == 5252
def test_parse_uri_serial_baudrate_with_username():
result = ParsedUri("serial://bender@/dev/ttyUSB3?baudrate=115200")
assert result.scheme == "serial"
assert result.port == "/dev/ttyUSB3"
assert result.baudrate == 115200
assert result.username == "bender"
def test_parse_uri_serial_baudrate_with_username_and_password():
result = ParsedUri("serial://bender:futurama@/dev/ttyUSB3?baudrate=115200")
assert result.scheme == "serial"
assert result.port == "/dev/ttyUSB3"
assert result.baudrate == 115200
assert result.username == "bender"
assert result.password == "futurama"
def test_parse_uri_serial_baudrate_with_username_and_password_kwargs():
result = ParsedUri("serial:///dev/ttyUSB3?baudrate=115200", username="bender", password="futurama")
assert result.scheme == "serial"
assert result.port == "/dev/ttyUSB3"
assert result.baudrate == 115200
assert result.username == "bender"
assert result.password == "futurama"
def test_parse_uri_serial_baudrate_with_username_and_password_windows_style():
result = ParsedUri("serial://bender:futurama@COM33?baudrate=115200")
assert result.scheme == "serial"
assert result.port == "COM33"
assert result.baudrate == 115200
assert result.username == "bender"
assert result.password == "futurama"
def test_parse_uri_check_xc():
result = ParsedUri("scheme://something", check_xc=True)
assert result.scheme == "scheme"
assert result.hostname == "something"
assert result.kwargs["check_xc"] == True | 35.519651 | 103 | 0.713425 | 992 | 8134 | 5.647177 | 0.083669 | 0.23563 | 0.079971 | 0.061585 | 0.833095 | 0.794716 | 0.744377 | 0.668868 | 0.622635 | 0.607462 | 0 | 0.024437 | 0.164864 | 8134 | 229 | 104 | 35.519651 | 0.800236 | 0 | 0 | 0.560847 | 0 | 0 | 0.189428 | 0.081991 | 0 | 0 | 0 | 0 | 0.619048 | 1 | 0.164021 | false | 0.164021 | 0.021164 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7
4c79fe63ac668bb9e3e3abb1d91025eeb729fb81 | 3424 | py | Python | voxinn/heightmap/ProceduralTerrain.py | djeof-1/VOXINN | 8aeae4e73c1013e5ff1562907c4381a1c5662dd7 | [
"MIT"
] | 1 | 2016-02-18T11:29:04.000Z | 2016-02-18T11:29:04.000Z | voxinn/heightmap/ProceduralTerrain.py | djeof-1/VOXINN | 8aeae4e73c1013e5ff1562907c4381a1c5662dd7 | [
"MIT"
] | null | null | null | voxinn/heightmap/ProceduralTerrain.py | djeof-1/VOXINN | 8aeae4e73c1013e5ff1562907c4381a1c5662dd7 | [
"MIT"
] | null | null | null | import random
from djinn import *
import os
import sys
class ProceduralTerrain:
    def __init__(self, heightmapList):
        self.heightmapList = heightmapList

    def generateHill(self):
        # NOTE: randindex is drawn but never used by the generation logic below.
        randindex = random.randint(0, len(self.heightmapList[0]) - 1)
        makemap = random.randint(-5, 1)
        ind, i, flag = 0, 0, 0
        count = 0
        num = 1
        while ind < len(self.heightmapList):
            if makemap >= 0:
                i = ind
                flag = 1
                # Carve a pyramid-shaped hill, one 9-column band per row.
                for j in range(9):
                    for index in range((9 - count) // 2):
                        self.heightmapList[i].append(0)
                    for index in range(9 - 2 * ((9 - count) // 2)):
                        self.heightmapList[i].append(num)
                        num += 1
                    num -= 2
                    while num > 0:
                        self.heightmapList[i].append(num)
                        num -= 1
                    for index in range((9 - count) // 2):
                        self.heightmapList[i].append(0)
                    count += 2
                    num = 1
                    i += 1
                index = 0
                # Pad the rows below the hill with flat ground.
                for j in range(len(self.heightmapList) - 9):
                    for x in range(9):
                        self.heightmapList[i].append(0)
                    i += 1  # advance the row cursor; without this, all padding lands on one row and later passes index past the map
            if flag == 1:
                ind = i
            if flag == 0:
                for ind in range(len(self.heightmapList)):
                    for k in range(9):
                        self.heightmapList[ind].append(0)
            count = 0
            num = 1
            ind += 1
        heightmap = self.heightmapList
        return heightmap

    def generateStar(self):
        # Same carving logic as generateHill, but it also counts how many
        # cells ("objects") were emitted and reports the total at the end.
        randindex = random.randint(0, len(self.heightmapList[0]) - 1)
        makemap = random.randint(0, 1)
        objects = 0
        ind, i, flag = 0, 0, 0
        count = 0
        num = 1
        while ind < len(self.heightmapList):
            if makemap:
                i = ind
                flag = 1
                for j in range(9):
                    for index in range((9 - count) // 2):
                        self.heightmapList[i].append(0)
                        objects += 1
                    for index in range(9 - 2 * ((9 - count) // 2)):
                        self.heightmapList[i].append(num)
                        num += 1
                        objects += num
                    num -= 2
                    while num > 0:
                        self.heightmapList[i].append(num)
                        num -= 1
                        objects += num
                    for index in range((9 - count) // 2):
                        self.heightmapList[i].append(0)
                        objects += 1
                    count += 2
                    num = 1
                    i += 1
                index = 0
                for j in range(len(self.heightmapList) - 9):
                    for x in range(9):
                        self.heightmapList[i].append(0)
                        objects += 1
                    i += 1  # advance the row cursor (see note in generateHill)
            if flag == 1:
                ind = i
            if flag == 0:
                for ind in range(len(self.heightmapList)):
                    for k in range(9):
                        objects += 1
                        self.heightmapList[ind].append(0)
            count = 0
            num = 1
            ind += 1
        print("Procedural Objects: ", objects)
        heightmap = self.heightmapList
        return heightmap
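
# Hedged usage sketch (not part of the original module): generateHill()
# indexes heightmapList[0], so each row must already contain at least one
# column. The 16-row map and the fixed seed here are illustrative assumptions.
if __name__ == "__main__":
    random.seed(42)
    terrain = ProceduralTerrain([[0] for _ in range(16)])
    hill_map = terrain.generateHill()
    print("rows:", len(hill_map), "first row:", hill_map[0])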
| 31.703704 | 66 | 0.407126 | 351 | 3424 | 3.960114 | 0.122507 | 0.293525 | 0.069065 | 0.172662 | 0.856115 | 0.797122 | 0.797122 | 0.791367 | 0.791367 | 0.776978 | 0 | 0.050292 | 0.500584 | 3424 | 107 | 67 | 32 | 0.762573 | 0 | 0 | 0.842105 | 1 | 0 | 0.005841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042105 | null | null | 0.010526 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12
4c7b997f665bbc6b7f79199e3fa821990df985a5 | 8712 | py | Python | test/test_alerts.py | hypostulate/mbta-api-client | f18903b6269c523c733a31574ff4579349fed3f8 | [
"MIT"
] | null | null | null | test/test_alerts.py | hypostulate/mbta-api-client | f18903b6269c523c733a31574ff4579349fed3f8 | [
"MIT"
] | null | null | null | test/test_alerts.py | hypostulate/mbta-api-client | f18903b6269c523c733a31574ff4579349fed3f8 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
MBTA
MBTA service API. https://www.mbta.com Source code: https://github.com/mbta/api # noqa: E501
The version of the OpenAPI document: 3.0
Contact: developer@mbta.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import datetime
import openapi_client
from openapi_client.models.alerts import Alerts # noqa: E501
from openapi_client.rest import ApiException
class TestAlerts(unittest.TestCase):
"""Alerts unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def make_instance(self, include_optional):
"""Test Alerts
include_optional is a boolean, when False only required
params are included, when True both required and
optional params are included """
# model = openapi_client.models.alerts.Alerts() # noqa: E501
if include_optional:
return Alerts(
links = openapi_client.models.schedules_links.Schedules_links(
self = '0',
prev = '0',
next = '0',
last = '0',
first = '0', ),
data = [
openapi_client.models.alert_resource.AlertResource(
type = '0',
relationships = openapi_client.models.alert_resource_relationships.AlertResource_relationships(
facility = openapi_client.models.alert_resource_relationships_facility.AlertResource_relationships_facility(
links = openapi_client.models.alert_resource_relationships_facility_links.AlertResource_relationships_facility_links(
self = '0',
related = '0', ),
data = openapi_client.models.alert_resource_relationships_facility_data.AlertResource_relationships_facility_data(
type = '0',
id = '0', ), ), ),
links = openapi_client.models.links.links(),
id = '0',
attributes = openapi_client.models.alert_resource_attributes.AlertResource_attributes(
url = 'http://www.mbta.com/uploadedfiles/Documents/Schedules_and_Maps/Commuter_Rail/fairmount.pdf?led=6/3/2017%201:22:09%20AM',
updated_at = '2017-08-14T14:54:01-04:00',
timeframe = 'Ongoing',
short_header = 'All weekend Fairmount Line trains will be bused between Morton St. & Readville due to construction of Blue Hill Ave Station.
',
severity = 10,
service_effect = 'Minor Route 216 delay',
lifecycle = 'Ongoing',
informed_entity = [
openapi_client.models.informed_entity.InformedEntity(
trip = 'CR-Weekday-Spring-17-517',
stop = 'Auburndale',
route_type = 2,
route = 'CR_Worcester',
direction_id = 56,
activities = ['BOARD', 'EXIT'], )
],
header = 'Starting 6/3, all weekend Fairmount Line trains will be bused between Morton St. and Readville in both directions due to construction of the new Blue Hill Avenue Station.\n',
effect_name = 'Delay',
effect = 'ACCESS_ISSUE',
description = 'If entering the station, cross Tremont Street to the Boston Common and use Park Street Elevator 978 to the Green Line westbound platform. Red Line platform access is available via the elevator beyond the fare gates. If exiting the station, please travel down the Winter Street Concourse toward Downtown Crossing Station, exit through the fare gates, and take Downtown Crossing Elevator 892 to the street level.\n',
created_at = '2017-08-14T14:54:01-04:00',
cause = 'ACCIDENT',
banner = 'All service suspended due to severe weather',
active_period = [
openapi_client.models.active_period.ActivePeriod(
start = '2017-08-14T14:54:01-04:00',
end = '2017-08-14T14:54:01-04:00', )
], ), )
]
)
else:
return Alerts(
data = [
openapi_client.models.alert_resource.AlertResource(
type = '0',
relationships = openapi_client.models.alert_resource_relationships.AlertResource_relationships(
facility = openapi_client.models.alert_resource_relationships_facility.AlertResource_relationships_facility(
links = openapi_client.models.alert_resource_relationships_facility_links.AlertResource_relationships_facility_links(
self = '0',
related = '0', ),
data = openapi_client.models.alert_resource_relationships_facility_data.AlertResource_relationships_facility_data(
type = '0',
id = '0', ), ), ),
links = openapi_client.models.links.links(),
id = '0',
attributes = openapi_client.models.alert_resource_attributes.AlertResource_attributes(
url = 'http://www.mbta.com/uploadedfiles/Documents/Schedules_and_Maps/Commuter_Rail/fairmount.pdf?led=6/3/2017%201:22:09%20AM',
updated_at = '2017-08-14T14:54:01-04:00',
timeframe = 'Ongoing',
short_header = 'All weekend Fairmount Line trains will be bused between Morton St. & Readville due to construction of Blue Hill Ave Station.\n',
severity = 10,
service_effect = 'Minor Route 216 delay',
lifecycle = 'Ongoing',
informed_entity = [
openapi_client.models.informed_entity.InformedEntity(
trip = 'CR-Weekday-Spring-17-517',
stop = 'Auburndale',
route_type = 2,
route = 'CR_Worcester',
direction_id = 56,
activities = ['BOARD', 'EXIT'], )
],
header = 'Starting 6/3, all weekend Fairmount Line trains will be bused between Morton St. and Readville in both directions due to construction of the new Blue Hill Avenue Station.\n',
effect_name = 'Delay',
effect = 'ACCESS_ISSUE',
description = 'If entering the station, cross Tremont Street to the Boston Common and use Park Street Elevator 978 to the Green Line westbound platform. Red Line platform access is available via the elevator beyond the fare gates. If exiting the station, please travel down the Winter Street Concourse toward Downtown Crossing Station, exit through the fare gates, and take Downtown Crossing Elevator 892 to the street level.\n',
created_at = '2017-08-14T14:54:01-04:00',
cause = 'ACCIDENT',
banner = 'All service suspended due to severe weather',
active_period = [
openapi_client.models.active_period.ActivePeriod(
start = '2017-08-14T14:54:01-04:00',
end = '2017-08-14T14:54:01-04:00', )
], ), )
],
)
def testAlerts(self):
"""Test Alerts"""
inst_req_only = self.make_instance(include_optional=False)
inst_req_and_optional = self.make_instance(include_optional=True)
if __name__ == '__main__':
unittest.main()
| 57.695364 | 453 | 0.518939 | 811 | 8712 | 5.408138 | 0.265105 | 0.068171 | 0.090971 | 0.065663 | 0.81099 | 0.796854 | 0.796854 | 0.796626 | 0.796626 | 0.796626 | 0 | 0.047749 | 0.411042 | 8712 | 150 | 454 | 58.08 | 0.807055 | 0.009527 | 0 | 0.793388 | 0 | 0.066116 | 0.093117 | 0.030426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.016529 | 0.049587 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
d5e0f4874a7dffd20fd93b92562a02c37d765509 | 7472 | py | Python | tf_quant_finance/experimental/instruments/bond_test.py | alexanu/tf-quant-finance | d0eb0e778d2422c6190844ef8f8c457ae25f9265 | [
"Apache-2.0"
] | 1 | 2021-09-01T06:27:02.000Z | 2021-09-01T06:27:02.000Z | tf_quant_finance/experimental/instruments/bond_test.py | alexanu/tf-quant-finance | d0eb0e778d2422c6190844ef8f8c457ae25f9265 | [
"Apache-2.0"
] | null | null | null | tf_quant_finance/experimental/instruments/bond_test.py | alexanu/tf-quant-finance | d0eb0e778d2422c6190844ef8f8c457ae25f9265 | [
"Apache-2.0"
] | 1 | 2021-09-01T06:26:57.000Z | 2021-09-01T06:26:57.000Z | # Lint as: python3
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for bond.py."""
from absl.testing import parameterized
import numpy as np
import tensorflow.compat.v2 as tf
import tf_quant_finance as tff
from tensorflow.python.framework import test_util # pylint: disable=g-direct-tensorflow-import
dates = tff.experimental.dates
instruments = tff.experimental.instruments
@test_util.run_all_in_graph_and_eager_modes
class BondTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_bond_correctness(self, dtype):
settlement_date = dates.convert_to_date_tensor([(2014, 1, 15)])
maturity_date = dates.convert_to_date_tensor([(2015, 1, 15)])
valuation_date = dates.convert_to_date_tensor([(2014, 1, 15)])
period_6m = dates.periods.PeriodTensor(6, dates.PeriodType.MONTH)
fix_spec = instruments.FixedCouponSpecs(
coupon_frequency=period_6m,
currency='usd',
notional=100.,
coupon_rate=0.06,
daycount_convention=instruments.DayCountConvention.ACTUAL_365,
businessday_rule=dates.BusinessDayConvention.NONE)
bond_inst = instruments.Bond(settlement_date, maturity_date, [fix_spec],
dtype=dtype)
curve_dates = valuation_date + dates.periods.PeriodTensor(
[0, 6, 12], dates.PeriodType.MONTH)
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.0, 0.005, 0.007], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = instruments.InterestRateMarket(discount_curve=reference_curve)
price = self.evaluate(bond_inst.price(valuation_date, market))
np.testing.assert_allclose(price, 105.27397754, atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_bond_many(self, dtype):
settlement_date = dates.convert_to_date_tensor([(2014, 1, 15),
(2014, 1, 15)])
maturity_date = dates.convert_to_date_tensor([(2015, 1, 15),
(2015, 1, 15)])
valuation_date = dates.convert_to_date_tensor([(2014, 1, 15)])
period_6m = dates.periods.PeriodTensor(6, dates.PeriodType.MONTH)
fix_spec = instruments.FixedCouponSpecs(
coupon_frequency=period_6m,
currency='usd',
notional=100.,
coupon_rate=0.06,
daycount_convention=instruments.DayCountConvention.ACTUAL_365,
businessday_rule=dates.BusinessDayConvention.NONE)
bond_inst = instruments.Bond(settlement_date, maturity_date,
[fix_spec, fix_spec],
dtype=dtype)
curve_dates = valuation_date + dates.periods.PeriodTensor(
[0, 6, 12], dates.PeriodType.MONTH)
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.0, 0.005, 0.007], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = instruments.InterestRateMarket(discount_curve=reference_curve)
price = self.evaluate(bond_inst.price(valuation_date, market))
np.testing.assert_allclose(price, [105.27397754, 105.27397754], atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_bond_stub_begin(self, dtype):
settlement_date = dates.convert_to_date_tensor([(2020, 1, 1)])
maturity_date = dates.convert_to_date_tensor([(2021, 2, 1)])
first_coupon_date = dates.convert_to_date_tensor([(2020, 2, 1)])
valuation_date = dates.convert_to_date_tensor([(2020, 1, 1)])
period_6m = dates.periods.PeriodTensor(6, dates.PeriodType.MONTH)
fix_spec = instruments.FixedCouponSpecs(
coupon_frequency=period_6m,
currency='usd',
notional=100.,
coupon_rate=0.06,
daycount_convention=instruments.DayCountConvention.ACTUAL_365,
businessday_rule=dates.BusinessDayConvention.NONE)
bond_inst = instruments.Bond(settlement_date, maturity_date,
[fix_spec],
first_coupon_date=first_coupon_date,
dtype=dtype)
curve_dates = valuation_date + dates.periods.PeriodTensor(
[0, 6, 12, 24], dates.PeriodType.MONTH)
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.0, 0.025, 0.03, 0.035], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = instruments.InterestRateMarket(discount_curve=reference_curve)
price = self.evaluate(bond_inst.price(valuation_date, market))
np.testing.assert_allclose(price, [103.12756228], atol=1e-6)
expected_coupon_dates = dates.convert_to_date_tensor([(2020, 2, 1),
(2020, 8, 1),
(2021, 2, 1)])
self.assertAllEqual(expected_coupon_dates.ordinal(),
bond_inst._cashflows.payment_dates.ordinal())
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_bond_stub_end(self, dtype):
settlement_date = dates.convert_to_date_tensor([(2020, 1, 1)])
maturity_date = dates.convert_to_date_tensor([(2021, 2, 1)])
last_coupon_date = dates.convert_to_date_tensor([(2021, 1, 1)])
valuation_date = dates.convert_to_date_tensor([(2020, 1, 1)])
period_6m = dates.periods.PeriodTensor(6, dates.PeriodType.MONTH)
fix_spec = instruments.FixedCouponSpecs(
coupon_frequency=period_6m,
currency='usd',
notional=100.,
coupon_rate=0.06,
daycount_convention=instruments.DayCountConvention.ACTUAL_365,
businessday_rule=dates.BusinessDayConvention.NONE)
bond_inst = instruments.Bond(settlement_date, maturity_date,
[fix_spec],
penultimate_coupon_date=last_coupon_date,
dtype=dtype)
curve_dates = valuation_date + dates.periods.PeriodTensor(
[0, 6, 12, 24], dates.PeriodType.MONTH)
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.0, 0.025, 0.03, 0.035], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = instruments.InterestRateMarket(discount_curve=reference_curve)
price = self.evaluate(bond_inst.price(valuation_date, market))
np.testing.assert_allclose(price, [103.12769595], atol=1e-6)
expected_coupon_dates = dates.convert_to_date_tensor([(2020, 7, 1),
(2021, 1, 1),
(2021, 2, 1)])
self.assertAllEqual(expected_coupon_dates.ordinal(),
bond_inst._cashflows.payment_dates.ordinal())
if __name__ == '__main__':
tf.test.main()
| 41.977528 | 95 | 0.659663 | 875 | 7472 | 5.392 | 0.217143 | 0.055108 | 0.047478 | 0.061043 | 0.805638 | 0.805638 | 0.805638 | 0.795464 | 0.788894 | 0.77554 | 0 | 0.058855 | 0.238223 | 7472 | 177 | 96 | 42.214689 | 0.770028 | 0.084047 | 0 | 0.717391 | 0 | 0 | 0.011723 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 1 | 0.028986 | false | 0 | 0.036232 | 0 | 0.072464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
91014a929b309e93bd9336d69a6233d9bddca440 | 125 | py | Python | os/test_filestat.py | badgeteam/micropython-lib | fca0235c166ebbada489d88c42fc549267832797 | [
"PSF-2.0"
] | null | null | null | os/test_filestat.py | badgeteam/micropython-lib | fca0235c166ebbada489d88c42fc549267832797 | [
"PSF-2.0"
] | null | null | null | os/test_filestat.py | badgeteam/micropython-lib | fca0235c166ebbada489d88c42fc549267832797 | [
"PSF-2.0"
] | 1 | 2018-12-30T01:03:20.000Z | 2018-12-30T01:03:20.000Z | import os
assert os.access("test_filestat.py", os.F_OK) == True
assert os.access("test_filestat.py-not", os.F_OK) == False
| 20.833333 | 58 | 0.72 | 23 | 125 | 3.73913 | 0.521739 | 0.186047 | 0.325581 | 0.418605 | 0.651163 | 0.651163 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112 | 125 | 5 | 59 | 25 | 0.774775 | 0 | 0 | 0 | 0 | 0 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
e68e7f59b94ac0c63e50074cfbfacd315d638506 | 151 | py | Python | modules/ckanext-ytp_recommendation/ckanext/ytp_recommendation/helpers.py | vrk-kpa/opendata-ckan | 8936e2d9e700b9e5534fe2a51eedc2d1ede8c10b | [
"MIT"
] | null | null | null | modules/ckanext-ytp_recommendation/ckanext/ytp_recommendation/helpers.py | vrk-kpa/opendata-ckan | 8936e2d9e700b9e5534fe2a51eedc2d1ede8c10b | [
"MIT"
] | 10 | 2021-12-02T10:33:42.000Z | 2022-03-31T11:00:54.000Z | modules/ckanext-ytp_recommendation/ckanext/ytp_recommendation/helpers.py | vrk-kpa/opendata-ckan | 8936e2d9e700b9e5534fe2a51eedc2d1ede8c10b | [
"MIT"
] | null | null | null | from ckan.common import config
def get_ytp_recommendation_recaptcha_sitekey():
return config.get('ckanext.ytp_recommendation.recaptcha_sitekey')
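
# A hedged sketch (not part of the original module) of how a helper like the
# one above is typically exposed to templates via CKAN's ITemplateHelpers
# plugin interface; `YtpRecommendationPlugin` is an illustrative name only.
# Kept as comments to avoid importing ckan.plugins at helper-module import time.
#
# import ckan.plugins as plugins
#
# class YtpRecommendationPlugin(plugins.SingletonPlugin):
#     plugins.implements(plugins.ITemplateHelpers)
#
#     def get_helpers(self):
#         return {
#             'get_ytp_recommendation_recaptcha_sitekey':
#                 get_ytp_recommendation_recaptcha_sitekey,
#         }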
| 25.166667 | 69 | 0.834437 | 19 | 151 | 6.315789 | 0.684211 | 0.283333 | 0.433333 | 0.55 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092715 | 151 | 5 | 70 | 30.2 | 0.875912 | 0 | 0 | 0 | 0 | 0 | 0.291391 | 0.291391 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 9 |
e6a5918cc19bf41d01a830ab83c3ed0b620c8b7a | 6793 | py | Python | RetinaNet/src/loss.py | chuanfuye/object_detection | 405085810875f6cb4097e43d90924089c1e3aef8 | [
"MIT"
] | null | null | null | RetinaNet/src/loss.py | chuanfuye/object_detection | 405085810875f6cb4097e43d90924089c1e3aef8 | [
"MIT"
] | null | null | null | RetinaNet/src/loss.py | chuanfuye/object_detection | 405085810875f6cb4097e43d90924089c1e3aef8 | [
"MIT"
] | null | null | null | from torch import nn, Tensor
import torch
class Loss(nn.Module):
"""
Implements the loss as the sum of the following:
1. Confidence Loss: All labels, with hard negative mining
2. Localization Loss: Only on positive labels
Suppose input dboxes has the shape 76725x4
"""
def __init__(self, dboxes):
super(Loss, self).__init__()
self.scale_xy = 1.0 / dboxes.scale_xy
self.scale_wh = 1.0 / dboxes.scale_wh
self.location_loss = nn.SmoothL1Loss(reduction='none')
# self.location_loss = nn.SmoothL1Loss(reduce=False)
self.dboxes = nn.Parameter(dboxes(order="xywh").transpose(0, 1).unsqueeze(dim=0),
requires_grad=False)
# The two scale factors are from the following link
# http://jany.st/post/2017-11-05-single-shot-detector-ssd-from-scratch-in-tensorflow.html
self.confidence_loss = nn.CrossEntropyLoss(reduction='none')
# self.confidence_loss = nn.CrossEntropyLoss(reduce=False)
def _location_vec(self, loc):
# type: (Tensor)
"""
Generate Location Vectors
Compute the regression targets of the ground-truth boxes relative to the default anchor boxes.
:param loc: ground-truth boxes in xywh order, Tensor [N, 4, num_anchors]
:return: encoded regression targets, Tensor [N, 4, num_anchors]
"""
gxy = self.scale_xy * (loc[:, :2, :] - self.dboxes[:, :2, :]) / self.dboxes[:, 2:, :]
gwh = self.scale_wh * (loc[:, 2:, :] / self.dboxes[:, 2:, :]).log()
return torch.cat((gxy, gwh), dim=1).contiguous()
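
# For reference: the encoding above is the standard SSD box parameterization,
# with dboxes.scale_xy / dboxes.scale_wh playing the role of the "variances":
#   t_xy = (gt_xy - anchor_xy) / (anchor_wh * scale_xy)
#   t_wh = log(gt_wh / anchor_wh) / scale_wh
# (self.scale_xy holds 1 / dboxes.scale_xy, so the multiplication above is
# equivalent to this division.)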
def forward(self, ploc, plabel, gloc, glabel):
        # type: (Tensor, Tensor, Tensor, Tensor) -> Tensor
"""
ploc, plabel: Nx4x76725, Nxlabel_numx76725
predicted location and labels
gloc, glabel: Nx4x76725, Nx76725
ground truth location and labels
"""
        # Positive-sample mask. Tensor: [N, 76725]
        mask = glabel > 0
        # mask1 = torch.nonzero(glabel)
        # Number of positive samples per image in the batch. Tensor: [N]
        pos_num = mask.sum(dim=1)
        # Location regression targets of the ground truth. Tensor: [N, 4, 76725]
        vec_gd = self._location_vec(gloc)
        # Sum over the four coordinates, then apply the positive mask:
        # the localization loss is computed on positive samples only.
loc_loss = self.location_loss(ploc, vec_gd).sum(dim=1) # Tensor: [N, 76725]
        loc_loss = (mask.float() * loc_loss).sum(dim=1)  # Tensor: [N]
        # Hard negative mining. Tensor: [N, 76725]
        con = self.confidence_loss(plabel, glabel)
        # Positions covered by the positive mask are never selected;
        # pick candidate negative samples from the rest.
con_neg = con.clone()
con_neg[mask] = torch.tensor(0.0)
        # Sort by confidence loss in descending order; con_idx (Tensor: [N, 76725])
_, con_idx = con_neg.sort(dim=1, descending=True)
        _, con_rank = con_idx.sort(dim=1)  # double argsort: gives each element's rank
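        # Hedged worked example (added comment): for con_neg = [0.2, 0.9, 0.5],
        # the descending sort gives con_idx = [1, 2, 0]; sorting con_idx then
        # gives con_rank = [2, 0, 1], i.e. each element's rank in descending
        # loss order, so "con_rank < neg_num" keeps the hardest negatives.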
        # The number of negatives used in the loss is three times the number of
        # positives (hard negative mining, as in the original paper), capped at
        # the total number of samples.
        neg_num = torch.clamp(3 * pos_num, max=mask.size(1)).unsqueeze(-1)
        neg_mask = con_rank < neg_num  # Tensor [N, 76725]
        # The final confidence loss sums the selected positive losses and the
        # selected hard-negative losses.
        con_loss = (con * (mask.float() + neg_mask.float())).sum(dim=1)  # Tensor [N]
        # Guard against images that contain no ground-truth boxes.
        total_loss = loc_loss + con_loss
        num_mask = (pos_num > 0).float()  # flags images with at least one GT box
        pos_num = pos_num.float().clamp(min=1e-6)  # avoid a zero denominator
        ret = (total_loss * num_mask / pos_num).mean(dim=0)  # average only over images with GT boxes
return ret
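# Hedged usage sketch (names assumed, not from this repo): given a default-box
# object `dboxes` exposing scale_xy, scale_wh and dboxes(order="xywh"), the
# criterion above would be driven roughly as:
#   criterion = Loss(dboxes)
#   total = criterion(ploc, plabel, gloc, glabel)  # scalar batch loss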
class FocalLoss(nn.Module):
"""
    Implements the loss as the sum of the following:
    1. Confidence Loss: All labels, with hard negative mining
    2. Localization Loss: Only on positive labels
    Assumes the input dboxes have the shape 76725x4.
"""
def __init__(self, dboxes):
super(FocalLoss, self).__init__()
self.scale_xy = 1.0 / dboxes.scale_xy
self.scale_wh = 1.0 / dboxes.scale_wh
self.location_loss = nn.SmoothL1Loss(reduction='none')
# self.location_loss = nn.SmoothL1Loss(reduce=False)
self.dboxes = nn.Parameter(dboxes(order="xywh").transpose(0, 1).unsqueeze(dim=0),
requires_grad=False)
        # The two scale factors are taken from the following link:
# http://jany.st/post/2017-11-05-single-shot-detector-ssd-from-scratch-in-tensorflow.html
self.confidence_loss = nn.CrossEntropyLoss(reduction='none')
# self.confidence_loss = nn.CrossEntropyLoss(reduce=False)
def _location_vec(self, loc):
        # type: (Tensor) -> Tensor
        """
        Generate location vectors:
        compute the regression targets of the ground truth relative to the anchors.
        :param loc: ground-truth locations, Tensor [N, 4, 76725]
        :return: encoded regression targets, Tensor [N, 4, 76725]
        """
gxy = self.scale_xy * (loc[:, :2, :] - self.dboxes[:, :2, :]) / self.dboxes[:, 2:, :]
gwh = self.scale_wh * (loc[:, 2:, :] / self.dboxes[:, 2:, :]).log()
return torch.cat((gxy, gwh), dim=1).contiguous()
def forward(self, ploc, plabel, gloc, glabel):
        # type: (Tensor, Tensor, Tensor, Tensor) -> Tensor
"""
ploc, plabel: Nx4x76725, Nxlabel_numx76725
predicted location and labels
gloc, glabel: Nx4x76725, Nx76725
ground truth location and labels
"""
        # Positive-sample mask. Tensor: [N, 76725]
        mask = glabel > 0
        # mask1 = torch.nonzero(glabel)
        # Number of positive samples per image in the batch. Tensor: [N]
        pos_num = mask.sum(dim=1)
        # Location regression targets of the ground truth. Tensor: [N, 4, 76725]
        vec_gd = self._location_vec(gloc)
        # Sum over the four coordinates, then apply the positive mask:
        # the localization loss is computed on positive samples only.
loc_loss = self.location_loss(ploc, vec_gd).sum(dim=1) # Tensor: [N, 76725]
        loc_loss = (mask.float() * loc_loss).sum(dim=1)  # Tensor: [N]
        # Hard negative mining. Tensor: [N, 76725]
        con = self.confidence_loss(plabel, glabel)
        # Positions covered by the positive mask are never selected;
        # pick candidate negative samples from the rest.
con_neg = con.clone()
con_neg[mask] = torch.tensor(0.0)
        # Sort by confidence loss in descending order; con_idx (Tensor: [N, 76725])
_, con_idx = con_neg.sort(dim=1, descending=True)
        _, con_rank = con_idx.sort(dim=1)  # double argsort: gives each element's rank
        # The number of negatives used in the loss is three times the number of
        # positives (hard negative mining, as in the original paper), capped at
        # the total number of samples.
        neg_num = torch.clamp(3 * pos_num, max=mask.size(1)).unsqueeze(-1)
        neg_mask = con_rank < neg_num  # Tensor [N, 76725]
        # The final confidence loss sums the selected positive losses and the
        # selected hard-negative losses.
        con_loss = (con * (mask.float() + neg_mask.float())).sum(dim=1)  # Tensor [N]
        # Guard against images that contain no ground-truth boxes.
        total_loss = loc_loss + con_loss
        num_mask = (pos_num > 0).float()  # flags images with at least one GT box
        pos_num = pos_num.float().clamp(min=1e-6)  # avoid a zero denominator
        ret = (total_loss * num_mask / pos_num).mean(dim=0)  # average only over images with GT boxes
        return ret
| 39.04023 | 97 | 0.600618 | 821 | 6,793 | 4.825822 | 0.199756 | 0.014134 | 0.02423 | 0.018173 | 0.982332 | 0.982332 | 0.982332 | 0.982332 | 0.982332 | 0.982332 | 0 | 0.041726 | 0.280289 | 6,793 | 174 | 98 | 39.04023 | 0.768664 | 0.389813 | 0 | 0.909091 | 0 | 0 | 0.006349 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.030303 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e6fe2521da4af9670f3f7569925243096d2700f2 | 10,381 | py | Python | piRNA_analysis/snakemake/13_map_consensus_L1s_IAPs.py | rberrens/SPOCD1-piRNA_directed_DNA_met | 8e795436197ef41f07159624e45d6b0fddb1ded8 | [
"MIT"
] | 4 | 2020-07-17T12:03:38.000Z | 2021-03-11T03:30:20.000Z | piRNA_analysis/snakemake/13_map_consensus_L1s_IAPs.py | rberrens/SPOCD1-piRNA_directed_DNA_met | 8e795436197ef41f07159624e45d6b0fddb1ded8 | [
"MIT"
] | null | null | null | piRNA_analysis/snakemake/13_map_consensus_L1s_IAPs.py | rberrens/SPOCD1-piRNA_directed_DNA_met | 8e795436197ef41f07159624e45d6b0fddb1ded8 | [
"MIT"
] | 1 | 2021-08-15T07:11:52.000Z | 2021-08-15T07:11:52.000Z | configfile: 'config_spocd1_pi_simple.yaml'
bowtie = "/usr/local/Cellar/bowtie/1.2.1.1/bin/bowtie"
rule all:
input:
expand("Processed/v3/mapped/{sample}_consensus_L1A_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1A_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1A_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1A_v3_gapcount.txt", sample = config["samples"]),
expand("Processed/v3/mapped/{sample}_consensus_L1T_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1T_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1T_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1T_v3_gapcount.txt", sample = config["samples"]),
expand("Processed/v3/mapped/{sample}_consensus_L1Gf_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_gapcount.txt", sample = config["samples"]),
expand("Processed/v3/mapped/{sample}_consensus_L1F2_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1F2_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1F2_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_L1F2_v3_gapcount.txt", sample = config["samples"]),
expand("Processed/v3/mapped/{sample}_consensus_IAPEy_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_gapcount.txt", sample = config["samples"]),
expand("Processed/v3/mapped/{sample}_consensus_IAPEz_v3.sam", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_plus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_minus.txt", sample = config["samples"]),
expand("Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_gapcount.txt", sample = config["samples"])
rule mapL1A:
input:
fasta = "Processed/v3/fasta/spocd1_pi_L1MdA_{sample}.fasta"
output:
L1 = "Processed/v3/mapped/{sample}_consensus_L1A_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 ../consensus/L1Md-A2 {input.fasta} \
{output.L1}
"""
rule L1A_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1A_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1A_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule L1A_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1A_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1A_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule L1A_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_L1A_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_L1A_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1A_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
rule mapL1T:
input:
fasta = "Processed/v3/fasta/spocd1_pi_L1MdT_{sample}.fasta"
output:
L1 = "Processed/v3/mapped/{sample}_consensus_L1T_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 /Users/Shared/ykabayam/tools/Mus_musculus/UCSC_mm10/Sequence/Consensus_repeat/L1_consensus/L1MdTf1_7398_2 {input.fasta} \
{output.L1}
"""
rule L1T_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1T_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1T_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule L1T_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1T_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1T_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule L1T_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_L1T_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_L1T_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1T_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
rule mapL1Gf:
input:
fasta = "Processed/v3/fasta/spocd1_pi_L1MdGf_{sample}.fasta"
output:
L1 = "Processed/v3/mapped/{sample}_consensus_L1Gf_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 /Users/Shared/ykabayam/tools/Mus_musculus/UCSC_mm10/Sequence/Consensus_repeat/L1_consensus/L1MdGf1_7085 {input.fasta} \
{output.L1}
"""
rule L1Gf_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1Gf_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule L1Gf_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1Gf_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule L1Gf_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1Gf_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
rule mapL1F2:
input:
fasta = "Processed/v3/fasta/spocd1_pi_L1MdF2_{sample}.fasta"
output:
L1 = "Processed/v3/mapped/{sample}_consensus_L1F2_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 /Users/Shared/ykabayam/tools/Mus_musculus/UCSC_mm10/Sequence/Consensus_repeat/L1_consensus/L1MdF1_6382 {input.fasta} \
{output.L1}
"""
rule L1F2_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1F2_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1F2_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule L1F2_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_L1F2_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1F2_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule L1F2_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_L1F2_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_L1F2_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_L1F2_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
rule mapIAPEz:
input:
fasta = "Processed/v3/fasta/spocd1_pi_IAPEz_{sample}.fasta"
output:
IAP = "Processed/v3/mapped/{sample}_consensus_IAPEz_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 ../consensus/IAPEZI {input.fasta} \
{output.IAP}
"""
rule IAPz_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_IAPEz_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule IAPz_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_IAPEz_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule IAPz_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEz_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
rule mapIAPEY:
input:
fasta = "Processed/v3/fasta/spocd1_pi_IAPEy_{sample}.fasta"
output:
IAP = "Processed/v3/mapped/{sample}_consensus_IAPEy_v3.sam"
shell:
"""
{bowtie} -v 3 -f --best -k 1 ../consensus/IAPEY {input.fasta} \
{output.IAP}
"""
rule IAPy_plus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_IAPEy_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_plus.txt"
shell:
"""
perl ../perl/filter_plus.pl {input.sam} > {output.list}
"""
rule IAPy_minus:
input:
sam = "Processed/v3/mapped/{sample}_consensus_IAPEy_v3.sam"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_minus.txt"
shell:
"""
perl ../perl/filter_minus.pl {input.sam} > {output.list}
"""
rule IAPy_pair:
input:
plus = "Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_plus.txt",
minus = "Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_minus.txt"
output:
list = "Processed/v3/pingpong/{sample}_consensus_IAPEy_v3_gapcount.txt"
shell:
"""
perl ../perl/Souatari_allgap.pl {input.minus} {input.plus} | perl ../perl/sum_count.pl | \
perl ../perl/count_last_column.pl | perl ../perl/ping-pong_range.pl > {output.list}
"""
| 34.374172 | 154 | 0.688758 | 1,375 | 10,381 | 4.937455 | 0.065455 | 0.126381 | 0.134335 | 0.176757 | 0.965385 | 0.945647 | 0.941965 | 0.905435 | 0.874208 | 0.744145 | 0 | 0.033678 | 0.153357 | 10,381 | 302 | 155 | 34.374172 | 0.738764 | 0 | 0 | 0.477528 | 0 | 0 | 0.604047 | 0.58168 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |