hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
133747d67ff329702143e06f5ac0f400ac60d0b7 | 55,280 | py | Python | quantlib/backends/cutie/grrules/ana/dporules.py | mdatres/quantlab | 09fb24ede78f49768f829afe0fac2ac291b8fd4f | [
"Apache-2.0"
] | null | null | null | quantlib/backends/cutie/grrules/ana/dporules.py | mdatres/quantlab | 09fb24ede78f49768f829afe0fac2ac291b8fd4f | [
"Apache-2.0"
] | null | null | null | quantlib/backends/cutie/grrules/ana/dporules.py | mdatres/quantlab | 09fb24ede78f49768f829afe0fac2ac291b8fd4f | [
"Apache-2.0"
] | 1 | 2022-01-02T10:10:46.000Z | 2022-01-02T10:10:46.000Z | #
# dporules.py
#
# Author(s):
# Matteo Spallanzani <spmatteo@iis.ee.ethz.ch>
#
# Copyright (c) 2020-2021 ETH Zurich.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import itertools
from collections import OrderedDict
import torch
import torch.nn as nn
import networkx as nx
from .lutactivation import LUTActivation
from .folding import fold_anaact_anaconv2d_bn2d_anaact, fold_anaact_analinear_bn1d_anaact
from quantlib.editing.graphs.graphs import Bipartite, PyTorchNode, __NODE_ID_FORMAT__
from quantlib.editing.graphs.grrules.dporules import DPORule
from quantlib.editing.graphs.grrules import Seeker
import quantlib.editing.graphs as qg
import quantlib.algorithms as qa
class FoldANAConvBNANAActRule(DPORule):
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPin': qg.graphs.HelperInput.__name__})
K_types.update({'HPTin': qg.graphs.HelperInputPrecisionTunnel.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAConv': qa.ana.ANAConv2d.__name__})
LK_types.update({'BatchNorm': nn.BatchNorm2d.__name__})
LK_types.update({'ANAActout': qa.ana.ANAActivation.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'TWConv': nn.Conv2d.__name__})
RK_types.update({'LUTAct': LUTActivation.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
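# Concretely, for this rule E_K2RK = {('K-term/HPin', 'R-term/TWConv')} and
# E_RK2K = {('R-term/LUTAct', 'K-term/HPTin')}: the fibre of F_K2RK over 'K-term/HPin'
# holds that single arc (and is empty over 'K-term/HPTin'), and symmetrically for
# F_RK2K, so `apply` can glue arcs per interface node without rescanning the full arc sets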
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FConvBNANA', __NODE_ID_FORMAT__.format(next(self._counter))])
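# e.g. 'FConvBNANA' followed by a zero-padded application counter (the exact digits
# depend on __NODE_ID_FORMAT__), so each application of this rule mints fresh node IDs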
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
mconv2d = nodes_dict[g_L2H['/'.join(['L-term', 'ANAConv'])]].nobj
mbn2d = nodes_dict[g_L2H['/'.join(['L-term', 'BatchNorm'])]].nobj
manaout = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActout'])]].nobj
# fold
tau, weight = fold_anaact_anaconv2d_bn2d_anaact(torch.Tensor([1.0]),
mconv2d.eps, mconv2d.weight_maybe_quant,
mbn2d.running_mean, mbn2d.running_var, mbn2d.eps, mbn2d.weight,
mbn2d.bias,
manaout.eps,
manaout.thresholds,
ceiltau=False)
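# `weight` becomes the kernel of the new convolution below and `tau` becomes the LUT
# thresholds; the leading torch.Tensor([1.0]) presumably stands in for the input quantum,
# since this rule anchors at the network input rather than at a preceding ANAActivation
# (the Type-A/Type-B rules below pass `manain.eps` instead)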
# build the new modules
mtwconv = nn.Conv2d(mconv2d.in_channels, mconv2d.out_channels, mconv2d.kernel_size,
stride=mconv2d.stride, padding=mconv2d.padding, dilation=mconv2d.dilation,
groups=mconv2d.groups,
bias=mconv2d.bias is not None).to(torch.device('cpu'))
mtwconv.weight.data = weight
mlutact = LUTActivation(tau, manaout.quant_levels)
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'TWConv'])]] = PyTorchNode(mtwconv)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'LUTAct'])]] = PyTorchNode(mlutact)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperInput.__name__:
pass
elif nodes_dict[vI].ntype == qg.graphs.HelperInputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperInputPrecisionTunnel(1.0))
else:
raise TypeError # interface nodes should be `qg.graphs.HelperInput` or `qg.graphs.HelperInputPrecisionTunnel` objects only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
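# A minimal usage sketch (commented out; `G` and `nodes_dict` are assumed to come from the
# surrounding QuantLab graph-editing pipeline, and re-seeking after each application is
# safer because a rewrite can invalidate previously found morphisms):
#
#     rule = FoldANAConvBNANAActRule()
#     gs = rule.seek(G, nodes_dict)
#     while gs:
#         G, nodes_dict = rule.apply(G, nodes_dict, gs[0])
#         gs = rule.seek(G, nodes_dict)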
class FoldANAActANAConvBNANAActTypeARule(DPORule): # w/o max pooling
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPTout': qg.graphs.HelperOutputPrecisionTunnel.__name__})
K_types.update({'HPTin': qg.graphs.HelperInputPrecisionTunnel.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAActin': qa.ana.ANAActivation.__name__})
LK_types.update({'ANAConv': qa.ana.ANAConv2d.__name__})
LK_types.update({'BatchNorm': nn.BatchNorm2d.__name__})
LK_types.update({'ANAActout': qa.ana.ANAActivation.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'TWConv': nn.Conv2d.__name__})
RK_types.update({'LUTAct': LUTActivation.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
# # glue together the (sub-)graphs L\K and R\K along the vertices of K
# self.S = nx.compose(self.L, self.RK)
# self.S.add_edges_from(E_K2RK2K)
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FANABNANATA', __NODE_ID_FORMAT__.format(next(self._counter))])
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
manain = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActin'])]].nobj
mconv2d = nodes_dict[g_L2H['/'.join(['L-term', 'ANAConv'])]].nobj
mbn2d = nodes_dict[g_L2H['/'.join(['L-term', 'BatchNorm'])]].nobj
manaout = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActout'])]].nobj
# fold
tau, weight = fold_anaact_anaconv2d_bn2d_anaact(manain.eps,
mconv2d.eps, mconv2d.weight_maybe_quant,
mbn2d.running_mean, mbn2d.running_var, mbn2d.eps, mbn2d.weight,
mbn2d.bias,
manaout.eps,
manaout.thresholds)
# build the new modules
mtwconv = nn.Conv2d(mconv2d.in_channels, mconv2d.out_channels, mconv2d.kernel_size,
stride=mconv2d.stride, padding=mconv2d.padding, dilation=mconv2d.dilation,
groups=mconv2d.groups,
bias=mconv2d.bias is not None).to(torch.device('cpu'))
mtwconv.weight.data = weight
mlutact = LUTActivation(tau, manaout.quant_levels)
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'TWConv'])]] = PyTorchNode(mtwconv)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'LUTAct'])]] = PyTorchNode(mlutact)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperOutputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperOutputPrecisionTunnel(1.0))
elif nodes_dict[vI].ntype == qg.graphs.HelperInputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperInputPrecisionTunnel(1.0))
else:
raise TypeError # interface nodes should be objects of class `qg.graphs.HelperPrecisionTunnel` only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
class FoldANAActANAConvBNANAActTypeBRule(DPORule): # w/ max pooling
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPTout': qg.graphs.HelperOutputPrecisionTunnel.__name__})
K_types.update({'HPTin': qg.graphs.HelperInputPrecisionTunnel.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAActin': qa.ana.ANAActivation.__name__})
LK_types.update({'MaxPool': nn.MaxPool2d.__name__})
LK_types.update({'ANAConv': qa.ana.ANAConv2d.__name__})
LK_types.update({'BatchNorm': nn.BatchNorm2d.__name__})
LK_types.update({'ANAActout': qa.ana.ANAActivation.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'MaxPool': nn.MaxPool2d.__name__})
RK_types.update({'TWConv': nn.Conv2d.__name__})
RK_types.update({'LUTAct': LUTActivation.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
# # glue together the (sub-)graphs L\K and R\K along the vertices of K
# self.S = nx.compose(self.L, self.RK)
# self.S.add_edges_from(E_K2RK2K)
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FANABNANATB', __NODE_ID_FORMAT__.format(next(self._counter))])
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
manain = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActin'])]].nobj
mmxpold = nodes_dict[g_L2H['/'.join(['L-term', 'MaxPool'])]].nobj
mconv2d = nodes_dict[g_L2H['/'.join(['L-term', 'ANAConv'])]].nobj
mbn2d = nodes_dict[g_L2H['/'.join(['L-term', 'BatchNorm'])]].nobj
manaout = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActout'])]].nobj
# fold
tau, weight = fold_anaact_anaconv2d_bn2d_anaact(manain.eps,
mconv2d.eps, mconv2d.weight_maybe_quant,
mbn2d.running_mean, mbn2d.running_var, mbn2d.eps, mbn2d.weight,
mbn2d.bias,
manaout.eps,
manaout.thresholds)
# build the new modules
mmxpnew = nn.MaxPool2d(kernel_size=mmxpold.kernel_size, stride=mmxpold.stride, padding=mmxpold.padding)
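# only the pooling geometry is copied here (dilation and ceil_mode are left at their
# defaults); max pooling takes no part in the folding, so it is simply re-instantiated
# and carried over into the replacement graph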
mtwconv = nn.Conv2d(mconv2d.in_channels, mconv2d.out_channels, mconv2d.kernel_size,
stride=mconv2d.stride, padding=mconv2d.padding, dilation=mconv2d.dilation,
groups=mconv2d.groups,
bias=mconv2d.bias is not None).to(torch.device('cpu'))
mtwconv.weight.data = weight
mlutact = LUTActivation(tau, manaout.quant_levels)
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'MaxPool'])]] = PyTorchNode(mmxpnew)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'TWConv'])]] = PyTorchNode(mtwconv)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'LUTAct'])]] = PyTorchNode(mlutact)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperOutputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperOutputPrecisionTunnel(1.0))
elif nodes_dict[vI].ntype == qg.graphs.HelperInputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperInputPrecisionTunnel(1.0))
else:
raise TypeError # interface nodes should be objects of class `qg.graphs.HelperPrecisionTunnel` only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
class FoldANAActANALinearBNANAActTypeARule(DPORule): # w/o pooling layers
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPTout': qg.graphs.HelperOutputPrecisionTunnel.__name__})
K_types.update({'HPTin': qg.graphs.HelperInputPrecisionTunnel.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAActin': qa.ana.ANAActivation.__name__})
LK_types.update({'ANALinear': qa.ana.ANALinear.__name__})
LK_types.update({'BatchNorm': nn.BatchNorm1d.__name__})
LK_types.update({'ANAActout': qa.ana.ANAActivation.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'TWLinear': nn.Linear.__name__})
RK_types.update({'LUTAct': LUTActivation.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
# # glue together the (sub-)graphs L\K and R\K along the vertices of K
# self.S = nx.compose(self.L, self.RK)
# self.S.add_edges_from(E_K2RK2K)
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FANABNANALinTA', __NODE_ID_FORMAT__.format(next(self._counter))])
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
manain = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActin'])]].nobj
mlinear = nodes_dict[g_L2H['/'.join(['L-term', 'ANALinear'])]].nobj
mbn1d = nodes_dict[g_L2H['/'.join(['L-term', 'BatchNorm'])]].nobj
manaout = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActout'])]].nobj
# fold
tau, weight = fold_anaact_analinear_bn1d_anaact(manain.eps,
mlinear.eps, mlinear.weight_maybe_quant,
mbn1d.running_mean, mbn1d.running_var, mbn1d.eps, mbn1d.weight,
mbn1d.bias,
manaout.eps,
manaout.thresholds)
# build the new modules
mtwlinear = nn.Linear(mlinear.in_features, mlinear.out_features,
bias=mlinear.bias is not None).to(torch.device('cpu'))
mtwlinear.weight.data = weight
mlutact = LUTActivation(tau, manaout.quant_levels)
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'TWLinear'])]] = PyTorchNode(mtwlinear)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'LUTAct'])]] = PyTorchNode(mlutact)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperOutputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperOutputPrecisionTunnel(1.0))
elif nodes_dict[vI].ntype == qg.graphs.HelperInputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperInputPrecisionTunnel(1.0))
else:
raise TypeError # interface nodes should be objects of class `qg.graphs.HelperPrecisionTunnel` only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
class FoldANAActANALinearBNANAActTypeBRule(DPORule): # w/ pooling layers
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPTout': qg.graphs.HelperOutputPrecisionTunnel.__name__})
K_types.update({'HPTin': qg.graphs.HelperInputPrecisionTunnel.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAActin': qa.ana.ANAActivation.__name__})
LK_types.update({'MaxPool': nn.MaxPool2d.__name__})
LK_types.update({'AvgPool': nn.AdaptiveAvgPool2d.__name__})
LK_types.update({'ViewFlattenNd': qg.graphs.modules.ViewFlattenNd.__name__})
LK_types.update({'ANALinear': qa.ana.ANALinear.__name__})
LK_types.update({'BatchNorm': nn.BatchNorm1d.__name__})
LK_types.update({'ANAActout': qa.ana.ANAActivation.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'MaxPool': nn.MaxPool2d.__name__})
RK_types.update({'TWLinear': nn.Linear.__name__})
RK_types.update({'LUTAct': LUTActivation.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
# # glue together the (sub-)graphs L\K and R\K along the vertices of K
# self.S = nx.compose(self.L, self.RK)
# self.S.add_edges_from(E_K2RK2K)
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FANABNANALinTB', __NODE_ID_FORMAT__.format(next(self._counter))])
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
manain = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActin'])]].nobj
mmxpold = nodes_dict[g_L2H['/'.join(['L-term', 'MaxPool'])]].nobj
mlinear = nodes_dict[g_L2H['/'.join(['L-term', 'ANALinear'])]].nobj
mbn1d = nodes_dict[g_L2H['/'.join(['L-term', 'BatchNorm'])]].nobj
manaout = nodes_dict[g_L2H['/'.join(['L-term', 'ANAActout'])]].nobj
# fold
tau, weight = fold_anaact_analinear_bn1d_anaact(manain.eps,
mlinear.eps, mlinear.weight_maybe_quant,
mbn1d.running_mean, mbn1d.running_var, mbn1d.eps, mbn1d.weight,
mbn1d.bias,
manaout.eps,
manaout.thresholds)
# build the new modules
mmxpnew = nn.MaxPool2d(kernel_size=mmxpold.kernel_size, stride=mmxpold.stride, padding=mmxpold.padding)
mtwlinear = nn.Linear(mlinear.in_features, mlinear.out_features,
bias=mlinear.bias is not None).to(torch.device('cpu'))
mtwlinear.weight.data = weight
mlutact = LUTActivation(tau, manaout.quant_levels)
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'MaxPool'])]] = PyTorchNode(mmxpnew)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'TWLinear'])]] = PyTorchNode(mtwlinear)
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'LUTAct'])]] = PyTorchNode(mlutact)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperOutputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperOutputPrecisionTunnel(1.0))
elif nodes_dict[vI].ntype == qg.graphs.HelperInputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperInputPrecisionTunnel(1.0))
else:
raise TypeError # interface nodes should be objects of class `qg.graphs.HelperPrecisionTunnel` only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
class FoldANAActLinearRule(DPORule):
def __init__(self, lut_entry_bits=16):
self._lut_entry_bits = lut_entry_bits
# Nodes of the interface
K_types = OrderedDict()
K_types.update({'HPTout': qg.graphs.HelperOutputPrecisionTunnel.__name__})
K_types.update({'HPout': qg.graphs.HelperOutput.__name__})
K_types = OrderedDict([('/'.join(['K-term', k]), v) for k, v in K_types.items()])
# Nodes in the core template graph
LK_types = OrderedDict()
LK_types.update({'ANAAct': qa.ana.ANAActivation.__name__})
LK_types.update({'Linear': nn.Linear.__name__})
LK_types = OrderedDict([('/'.join(['L-term', k]), v) for k, v in LK_types.items()])
# Nodes in the core replacement graph
RK_types = OrderedDict()
RK_types.update({'Linear': nn.Linear.__name__})
RK_types = OrderedDict([('/'.join(['R-term', k]), v) for k, v in RK_types.items()])
K_node_IDs = list(K_types.keys())
LK_node_IDs = list(LK_types.keys())
RK_node_IDs = list(RK_types.keys())
# define the template graph L [L-term]
L_node_IDs = [K_node_IDs[0]] + LK_node_IDs + [K_node_IDs[-1]]
self.L = nx.DiGraph()
# Define arcs between nodes in full template graph
self.L.add_edges_from({(u, v) for u, v in zip(L_node_IDs[:-1], L_node_IDs[1:])})
# Here, graph is only operation nodes
# Necessary for seeker
nx.set_node_attributes(self.L, {vL: Bipartite.KERNEL for vL in set(self.L.nodes)}, 'bipartite')
nx.set_node_attributes(self.L, {**K_types, **LK_types}, 'type')
# define the context (sub-)graph K [K-term]
VK = set(K_node_IDs) # precision tunnel nodes define the context graph
self.K = self.L.subgraph(VK)
# define the template (sub-)graph L\K
VLK = set(self.L.nodes).difference(set(self.K.nodes))
self.LK = self.L.subgraph(VLK)
# define the replacement (sub-)graph R\K ["gluing" R\K to K yields the graph R, i.e., the R-term]
self.RK = nx.DiGraph()
### NOTE: R\K has a single node here; initialising it from arcs alone would leave it empty, so the nodes are added explicitly first
self.RK.add_nodes_from(RK_node_IDs)
self.RK.add_edges_from({(u, v) for u, v in zip(RK_node_IDs[:-1], RK_node_IDs[1:])})
nx.set_node_attributes(self.RK, {vRK: Bipartite.KERNEL for vRK in set(self.RK.nodes)}, 'bipartite')
nx.set_node_attributes(self.RK, RK_types, 'type')
# define the arcs that go from the vertices of K to those of R\K, and vice versa
E_K2RK = {(K_node_IDs[0], RK_node_IDs[0])}
E_RK2K = {(RK_node_IDs[-1], K_node_IDs[-1])}
E_K2RK2K = E_K2RK | E_RK2K
# disintegrate `E_K2RK` and `E_RK2K` along fibres to speed up rule application
# A fibre fixes the interface-node endpoint of an arc and collects every arc that shares it (like fixing one argument of a two-argument function and enumerating the possible outputs)
self.F_K2RK = {vK: set(arc for arc in E_K2RK if arc[0] == vK) for vK in set(self.K.nodes)}
self.F_RK2K = {vK: set(arc for arc in E_RK2K if arc[1] == vK) for vK in set(self.K.nodes)}
# # glue together the (sub-)graphs L\K and R\K along the vertices of K
# self.S = nx.compose(self.L, self.RK)
# self.S.add_edges_from(E_K2RK2K)
# since the GRR's L-term has been modified, rebuild the seeker
self.seeker = Seeker(self.L)
# this machinery can generate always-new identifiers for different rule applications
self._counter = itertools.count()
def _get_rule_count(self):
rule_count = ''.join(['FANAActLinear', __NODE_ID_FORMAT__.format(next(self._counter))])
return rule_count
def core(self, HI, g, nodes_dict):
# generate the substitute (sub-)graph J\I
rule_count = self._get_rule_count()
g_RK2JI = {vRK: '_'.join([rule_count, vRK.replace('R-term/', '')]) for vRK in set(self.RK.nodes)}
JI = nx.relabel_nodes(self.RK, g_RK2JI, copy=True)
# get pointers to the old modules;
# these pointers will enable two actions:
# 1. extracting the arguments required to perform the folding
# 2. extracting the parameters to instantiate the new modules
g_L2H = {vL: vH for vH, vL in g.items()}
manain = nodes_dict[g_L2H['/'.join(['L-term', 'ANAAct'])]].nobj
mlinearold = nodes_dict[g_L2H['/'.join(['L-term', 'Linear'])]].nobj
# fold
weight = manain.eps.item() * mlinearold.weight.data
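# the fold here is just a rescaling: the incoming activation quantum `eps` is absorbed
# into the weights of the last linear layer, so no LUT activation follows it
# (interpretation based on this rule's R-term, which keeps only a plain nn.Linear;
# the bias is copied over unchanged below)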
# build the new modules
mlinearnew = nn.Linear(mlinearold.in_features, mlinearold.out_features,
bias=mlinearold.bias is not None).to(torch.device('cpu'))
mlinearnew.weight.data = weight
mlinearnew.bias.data = mlinearold.bias.data
# register the newly created nodes
vJI_2_ptnode = {}
vJI_2_ptnode[g_RK2JI['/'.join(['R-term', 'Linear'])]] = PyTorchNode(mlinearnew)
return JI, vJI_2_ptnode
# G: Full/original graph
# nodes_dict: Mapping between node identifiers of G and actual underlying objects
# g: one occurrence of the template in G, i.e. one application point for the replacement rule (one morphism)
def apply(self, G, nodes_dict, g):
# create new containers
G = G.copy()
# Dictionary mapping of node identifiers to a payload
# keys in nodes_dict should be the same as G.nodes
nodes_dict = {**nodes_dict}
# characterise the match graph H
# Occurrence of the template in the graph
# SPMATTEO: Some assumptions to discuss
VI = {vH for vH, vL in g.items() if vL in set(self.K.nodes)} # Occurrence of the context
VHI = {vH for vH, vL in g.items() if vL not in set(self.K.nodes)} # Occurrence of the core template
HI = G.subgraph(VHI) # HI is the subgraph induced by the set of nodes VHI
# generate the substitute (sub-)graph J\I (completely detached from G)
# Instantiate blueprint of the replacement graph
JI, vJI_2_ptnode = self.core(HI, g, nodes_dict)
# add the substitute (sub-)graph J\I to the main graph G
G = nx.compose(G, JI) # G now contains the original graph plus the still-unattached substitute subgraph
nodes_dict.update(vJI_2_ptnode) # Add new payloads from substitute graph
# glue the substitute (sub-)graph J\I to the interface (sub-)graph I
JI2RK_morphisms = Seeker(self.RK).get_morphisms(JI)
assert len(JI2RK_morphisms) == 1
g_JI2RK = JI2RK_morphisms[0]
g_RK2JI = {vRK: vJI for vJI, vRK in g_JI2RK.items()}
for vI in VI: # for each node in the interface subgraph of G
vK = g[vI]
G.add_edges_from({(vI, g_RK2JI[vRK]) for (_, vRK) in
self.F_K2RK[vK]}) # incoming interface connections from G to substitute graph
G.add_edges_from({(g_RK2JI[vRK], vI) for (vRK, _) in
self.F_RK2K[vK]}) # outgoing interface connections from substitute graph to G
# the new modules are fully integerized, so the precision tunnel should not embed integer numbers in floating point numbers
# Specific to the integer arithmetic transformation -> no relation to graph editing per se
if nodes_dict[vI].ntype == qg.graphs.HelperOutputPrecisionTunnel.__name__:
nodes_dict[vI] = PyTorchNode(qg.graphs.HelperOutputPrecisionTunnel(1.0))
elif nodes_dict[vI].ntype == qg.graphs.HelperOutput.__name__:
pass
else:
raise TypeError # interface nodes should be `qg.graphs.HelperOutputPrecisionTunnel` or `qg.graphs.HelperOutput` objects only
# discard the match (sub-)graph H\I
# Assumption: removing a node also removes all arcs pointing to or from that node
G.remove_nodes_from(set(HI.nodes))
# Remove the payloads, i.e. the underlying objects, accordingly
for vHI in VHI:
del nodes_dict[vHI]
return G, nodes_dict
def seek(self, G, nodes_dict):
gs = self.seeker.get_morphisms(G)
return gs
| 49.756976 | 135 | 0.625561 | 7,802 | 55,280 | 4.258011 | 0.05332 | 0.030071 | 0.010114 | 0.01174 | 0.950905 | 0.947232 | 0.946389 | 0.944674 | 0.943801 | 0.943801 | 0 | 0.011566 | 0.264888 | 55,280 | 1,110 | 136 | 49.801802 | 0.80594 | 0.316172 | 0 | 0.925865 | 0 | 0 | 0.03469 | 0 | 0 | 0 | 0 | 0 | 0.009885 | 1 | 0.049423 | false | 0.003295 | 0.019769 | 0 | 0.118616 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
133ecf9b2354c2770d0b9bfe2fa4981769b3c431 | 49 | py | Python | vk/bot_framework/middlewares/__init__.py | yilbegan/vk.py | 128029969edb57806b1d3d13a0a43613bc33abd3 | [
"MIT"
] | 3 | 2020-03-25T09:05:49.000Z | 2022-02-05T01:41:18.000Z | vk/bot_framework/middlewares/__init__.py | yilbegan/vk.py | 128029969edb57806b1d3d13a0a43613bc33abd3 | [
"MIT"
] | null | null | null | vk/bot_framework/middlewares/__init__.py | yilbegan/vk.py | 128029969edb57806b1d3d13a0a43613bc33abd3 | [
"MIT"
] | 1 | 2021-03-12T23:52:52.000Z | 2021-03-12T23:52:52.000Z | from .middlewares import SimpleLoggingMiddleware
| 24.5 | 48 | 0.897959 | 4 | 49 | 11 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.977778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
135b609b3e85426c479cf686da3c7b69afd65269 | 145 | py | Python | tests/schema/misc/gql/mutations/__init__.py | simonsobs/acondbs | 6ca11c2889d827ecdb2b54d0cf3b94b8cdd281e6 | [
"MIT"
] | null | null | null | tests/schema/misc/gql/mutations/__init__.py | simonsobs/acondbs | 6ca11c2889d827ecdb2b54d0cf3b94b8cdd281e6 | [
"MIT"
] | 24 | 2020-04-02T19:29:07.000Z | 2022-03-08T03:05:43.000Z | tests/schema/misc/gql/mutations/__init__.py | simonsobs/acondbs | 6ca11c2889d827ecdb2b54d0cf3b94b8cdd281e6 | [
"MIT"
] | 1 | 2020-04-08T15:48:28.000Z | 2020-04-08T15:48:28.000Z | # fmt: off
from .mutation_create_log import MUTATION_CREATE_LOG # noqa: F401
from .mutation_delete_log import MUTATION_DELETE_LOG # noqa: F401
| 36.25 | 66 | 0.813793 | 22 | 145 | 5 | 0.454545 | 0.218182 | 0.309091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.131034 | 145 | 3 | 67 | 48.333333 | 0.825397 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
13c9e9aa67b7f8cb238ce77d40d59122005387eb | 58,538 | py | Python | tensorflow/python/training/basic_session_run_hooks_test.py | wenming2014/tensorflow | a102a6a71844e194f3946f6318768c5367f1f16b | [
"Apache-2.0"
] | 5 | 2018-07-04T22:14:02.000Z | 2018-07-04T22:21:43.000Z | tensorflow/python/training/basic_session_run_hooks_test.py | wenming2014/tensorflow | a102a6a71844e194f3946f6318768c5367f1f16b | [
"Apache-2.0"
] | null | null | null | tensorflow/python/training/basic_session_run_hooks_test.py | wenming2014/tensorflow | a102a6a71844e194f3946f6318768c5367f1f16b | [
"Apache-2.0"
] | 1 | 2018-11-30T01:35:01.000Z | 2018-11-30T01:35:01.000Z | # pylint: disable=g-bad-file-header
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for basic_session_run_hooks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import shutil
import tempfile
import threading
import time
from tensorflow.contrib.framework.python.framework import checkpoint_utils
from tensorflow.contrib.framework.python.ops import variables
from tensorflow.contrib.testing.python.framework import fake_summary_writer
from tensorflow.python.client import session as session_lib
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import meta_graph
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.ops import variables as variables_lib
import tensorflow.python.ops.nn_grad # pylint: disable=unused-import
from tensorflow.python.platform import gfile
from tensorflow.python.platform import test
from tensorflow.python.platform import tf_logging
from tensorflow.python.summary import summary as summary_lib
from tensorflow.python.summary.writer import writer_cache
from tensorflow.python.training import basic_session_run_hooks
from tensorflow.python.training import monitored_session
from tensorflow.python.training import session_run_hook
from tensorflow.python.training import training_util
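# Test double for CheckpointSaverListener: counts each lifecycle callback and can
# request that training stop by returning True from after_save.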
class MockCheckpointSaverListener(
basic_session_run_hooks.CheckpointSaverListener):
def __init__(self):
self.begin_count = 0
self.before_save_count = 0
self.after_save_count = 0
self.end_count = 0
self.ask_for_stop = False
def begin(self):
self.begin_count += 1
def before_save(self, session, global_step):
self.before_save_count += 1
def after_save(self, session, global_step):
self.after_save_count += 1
if self.ask_for_stop:
return True
def end(self, session, global_step):
self.end_count += 1
def get_counts(self):
return {
'begin': self.begin_count,
'before_save': self.before_save_count,
'after_save': self.after_save_count,
'end': self.end_count
}
class SecondOrStepTimerTest(test.TestCase):
def test_raise_in_both_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SecondOrStepTimer(every_secs=2.0, every_steps=10)
def test_raise_in_none_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SecondOrStepTimer()
def test_every_secs(self):
timer = basic_session_run_hooks.SecondOrStepTimer(every_secs=1.0)
self.assertTrue(timer.should_trigger_for_step(1))
timer.update_last_triggered_step(1)
self.assertFalse(timer.should_trigger_for_step(1))
self.assertFalse(timer.should_trigger_for_step(2))
time.sleep(1.0)
self.assertFalse(timer.should_trigger_for_step(1))
self.assertTrue(timer.should_trigger_for_step(2))
def test_every_steps(self):
timer = basic_session_run_hooks.SecondOrStepTimer(every_steps=3)
self.assertTrue(timer.should_trigger_for_step(1))
timer.update_last_triggered_step(1)
self.assertFalse(timer.should_trigger_for_step(1))
self.assertFalse(timer.should_trigger_for_step(2))
self.assertFalse(timer.should_trigger_for_step(3))
self.assertTrue(timer.should_trigger_for_step(4))
def test_update_last_triggered_step(self):
timer = basic_session_run_hooks.SecondOrStepTimer(every_steps=1)
elapsed_secs, elapsed_steps = timer.update_last_triggered_step(1)
self.assertEqual(None, elapsed_secs)
self.assertEqual(None, elapsed_steps)
elapsed_secs, elapsed_steps = timer.update_last_triggered_step(5)
self.assertLess(0, elapsed_secs)
self.assertEqual(4, elapsed_steps)
elapsed_secs, elapsed_steps = timer.update_last_triggered_step(7)
self.assertLess(0, elapsed_secs)
self.assertEqual(2, elapsed_steps)
class StopAtStepTest(test.TestCase):
def test_raise_in_both_last_step_and_num_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.StopAtStepHook(num_steps=10, last_step=20)
def test_stop_based_on_last_step(self):
h = basic_session_run_hooks.StopAtStepHook(last_step=10)
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
no_op = control_flow_ops.no_op()
h.begin()
with session_lib.Session() as sess:
mon_sess = monitored_session._HookedSession(sess, [h])
sess.run(state_ops.assign(global_step, 5))
h.after_create_session(sess, None)
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 9))
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 10))
mon_sess.run(no_op)
self.assertTrue(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 11))
mon_sess._should_stop = False
mon_sess.run(no_op)
self.assertTrue(mon_sess.should_stop())
def test_stop_based_on_num_step(self):
h = basic_session_run_hooks.StopAtStepHook(num_steps=10)
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
no_op = control_flow_ops.no_op()
h.begin()
with session_lib.Session() as sess:
mon_sess = monitored_session._HookedSession(sess, [h])
sess.run(state_ops.assign(global_step, 5))
h.after_create_session(sess, None)
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 13))
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 14))
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 15))
mon_sess.run(no_op)
self.assertTrue(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 16))
mon_sess._should_stop = False
mon_sess.run(no_op)
self.assertTrue(mon_sess.should_stop())
def test_stop_based_with_multiple_steps(self):
h = basic_session_run_hooks.StopAtStepHook(num_steps=10)
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
no_op = control_flow_ops.no_op()
h.begin()
with session_lib.Session() as sess:
mon_sess = monitored_session._HookedSession(sess, [h])
sess.run(state_ops.assign(global_step, 5))
h.after_create_session(sess, None)
mon_sess.run(no_op)
self.assertFalse(mon_sess.should_stop())
sess.run(state_ops.assign(global_step, 15))
mon_sess.run(no_op)
self.assertTrue(mon_sess.should_stop())
class LoggingTensorHookTest(test.TestCase):
def setUp(self):
# Mock out logging calls so we can verify whether correct tensors are being
# monitored.
self._actual_log = tf_logging.info
self.logged_message = None
def mock_log(*args, **kwargs):
self.logged_message = args
self._actual_log(*args, **kwargs)
tf_logging.info = mock_log
def tearDown(self):
tf_logging.info = self._actual_log
def test_illegal_args(self):
with self.assertRaisesRegexp(ValueError, 'nvalid every_n_iter'):
basic_session_run_hooks.LoggingTensorHook(tensors=['t'], every_n_iter=0)
with self.assertRaisesRegexp(ValueError, 'nvalid every_n_iter'):
basic_session_run_hooks.LoggingTensorHook(tensors=['t'], every_n_iter=-10)
with self.assertRaisesRegexp(ValueError, 'xactly one of'):
basic_session_run_hooks.LoggingTensorHook(
tensors=['t'], every_n_iter=5, every_n_secs=5)
with self.assertRaisesRegexp(ValueError, 'xactly one of'):
basic_session_run_hooks.LoggingTensorHook(tensors=['t'])
def test_print_at_end_only(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
t = constant_op.constant(42.0, name='foo')
train_op = constant_op.constant(3)
hook = basic_session_run_hooks.LoggingTensorHook(
tensors=[t.name], at_end=True)
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.evaluate(variables_lib.global_variables_initializer())
self.logged_message = ''
for _ in range(3):
mon_sess.run(train_op)
# assertNotRegexpMatches is not supported by python 3.1 and later
self.assertEqual(str(self.logged_message).find(t.name), -1)
hook.end(sess)
self.assertRegexpMatches(str(self.logged_message), t.name)
def _validate_print_every_n_steps(self, sess, at_end):
t = constant_op.constant(42.0, name='foo')
train_op = constant_op.constant(3)
hook = basic_session_run_hooks.LoggingTensorHook(
tensors=[t.name], every_n_iter=10, at_end=at_end)
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.evaluate(variables_lib.global_variables_initializer())
mon_sess.run(train_op)
self.assertRegexpMatches(str(self.logged_message), t.name)
for _ in range(3):
self.logged_message = ''
for _ in range(9):
mon_sess.run(train_op)
# assertNotRegexpMatches is not supported by python 3.1 and later
self.assertEqual(str(self.logged_message).find(t.name), -1)
mon_sess.run(train_op)
self.assertRegexpMatches(str(self.logged_message), t.name)
# Add additional run to verify proper reset when called multiple times.
self.logged_message = ''
mon_sess.run(train_op)
# assertNotRegexpMatches is not supported by python 3.1 and later
self.assertEqual(str(self.logged_message).find(t.name), -1)
self.logged_message = ''
hook.end(sess)
if at_end:
self.assertRegexpMatches(str(self.logged_message), t.name)
else:
# assertNotRegexpMatches is not supported by python 3.1 and later
self.assertEqual(str(self.logged_message).find(t.name), -1)
def test_print_every_n_steps(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
self._validate_print_every_n_steps(sess, at_end=False)
# Verify proper reset.
self._validate_print_every_n_steps(sess, at_end=False)
def test_print_every_n_steps_and_end(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
self._validate_print_every_n_steps(sess, at_end=True)
# Verify proper reset.
self._validate_print_every_n_steps(sess, at_end=True)
def test_print_first_step(self):
    # If the hook runs every iteration, the first iteration has a None duration.
with ops.Graph().as_default(), session_lib.Session() as sess:
t = constant_op.constant(42.0, name='foo')
train_op = constant_op.constant(3)
hook = basic_session_run_hooks.LoggingTensorHook(
tensors={'foo': t}, every_n_iter=1)
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.evaluate(variables_lib.global_variables_initializer())
mon_sess.run(train_op)
self.assertRegexpMatches(str(self.logged_message), 'foo')
      # In the first run, the elapsed time is None.
self.assertEqual(str(self.logged_message).find('sec'), -1)
def _validate_print_every_n_secs(self, sess, at_end):
t = constant_op.constant(42.0, name='foo')
train_op = constant_op.constant(3)
hook = basic_session_run_hooks.LoggingTensorHook(
tensors=[t.name], every_n_secs=1.0, at_end=at_end)
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.evaluate(variables_lib.global_variables_initializer())
mon_sess.run(train_op)
self.assertRegexpMatches(str(self.logged_message), t.name)
# assertNotRegexpMatches is not supported by python 3.1 and later
self.logged_message = ''
mon_sess.run(train_op)
self.assertEqual(str(self.logged_message).find(t.name), -1)
time.sleep(1.0)
self.logged_message = ''
mon_sess.run(train_op)
self.assertRegexpMatches(str(self.logged_message), t.name)
self.logged_message = ''
hook.end(sess)
if at_end:
self.assertRegexpMatches(str(self.logged_message), t.name)
else:
# assertNotRegexpMatches is not supported by python 3.1 and later
self.assertEqual(str(self.logged_message).find(t.name), -1)
def test_print_every_n_secs(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
self._validate_print_every_n_secs(sess, at_end=False)
# Verify proper reset.
self._validate_print_every_n_secs(sess, at_end=False)
def test_print_every_n_secs_and_end(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
self._validate_print_every_n_secs(sess, at_end=True)
# Verify proper reset.
self._validate_print_every_n_secs(sess, at_end=True)
def test_print_formatter(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
t = constant_op.constant(42.0, name='foo')
train_op = constant_op.constant(3)
hook = basic_session_run_hooks.LoggingTensorHook(
tensors=[t.name], every_n_iter=10,
formatter=lambda items: 'qqq=%s' % items[t.name])
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.evaluate(variables_lib.global_variables_initializer())
mon_sess.run(train_op)
self.assertEqual(self.logged_message[0], 'qqq=42.0')
class CheckpointSaverHookTest(test.TestCase):
def setUp(self):
self.model_dir = tempfile.mkdtemp()
self.graph = ops.Graph()
with self.graph.as_default():
self.scaffold = monitored_session.Scaffold()
self.global_step = variables.get_or_create_global_step()
self.train_op = training_util._increment_global_step(1)
def tearDown(self):
shutil.rmtree(self.model_dir, ignore_errors=True)
def test_saves_when_saver_and_scaffold_both_missing(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=1)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_raise_when_saver_and_scaffold_both_present(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, saver=self.scaffold.saver, scaffold=self.scaffold)
def test_raise_in_both_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_secs=10, save_steps=20)
def test_raise_in_none_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.CheckpointSaverHook(self.model_dir)
def test_save_secs_saves_in_first_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_secs=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_save_secs_calls_listeners_at_begin_and_end(self):
with self.graph.as_default():
listener = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_secs=2,
scaffold=self.scaffold,
listeners=[listener])
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op) # hook runs here
mon_sess.run(self.train_op) # hook won't run here, so it does at end
hook.end(sess) # hook runs here
self.assertEqual({
'begin': 1,
'before_save': 2,
'after_save': 2,
'end': 1
}, listener.get_counts())
def test_listener_with_monitored_session(self):
with ops.Graph().as_default():
scaffold = monitored_session.Scaffold()
global_step = variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
listener = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=1,
scaffold=scaffold,
listeners=[listener])
with monitored_session.SingularMonitoredSession(
hooks=[hook],
scaffold=scaffold,
checkpoint_dir=self.model_dir) as sess:
sess.run(train_op)
sess.run(train_op)
global_step_val = sess.raw_session().run(global_step)
listener_counts = listener.get_counts()
self.assertEqual(2, global_step_val)
self.assertEqual({
'begin': 1,
'before_save': 3,
'after_save': 3,
'end': 1
}, listener_counts)
def test_listener_stops_training_in_after_save(self):
with ops.Graph().as_default():
scaffold = monitored_session.Scaffold()
variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
listener = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=1, scaffold=scaffold, listeners=[listener])
with monitored_session.SingularMonitoredSession(
hooks=[hook], scaffold=scaffold,
checkpoint_dir=self.model_dir) as sess:
sess.run(train_op)
self.assertFalse(sess.should_stop())
sess.run(train_op)
self.assertFalse(sess.should_stop())
listener.ask_for_stop = True
sess.run(train_op)
self.assertTrue(sess.should_stop())
def test_listener_with_default_saver(self):
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
listener = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=1,
listeners=[listener])
with monitored_session.SingularMonitoredSession(
hooks=[hook],
checkpoint_dir=self.model_dir) as sess:
sess.run(train_op)
sess.run(train_op)
global_step_val = sess.raw_session().run(global_step)
listener_counts = listener.get_counts()
self.assertEqual(2, global_step_val)
self.assertEqual({
'begin': 1,
'before_save': 3,
'after_save': 3,
'end': 1
}, listener_counts)
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
with monitored_session.SingularMonitoredSession(
checkpoint_dir=self.model_dir) as sess2:
global_step_saved_val = sess2.run(global_step)
self.assertEqual(2, global_step_saved_val)
def test_two_listeners_with_default_saver(self):
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
listener1 = MockCheckpointSaverListener()
listener2 = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=1,
listeners=[listener1, listener2])
with monitored_session.SingularMonitoredSession(
hooks=[hook],
checkpoint_dir=self.model_dir) as sess:
sess.run(train_op)
sess.run(train_op)
global_step_val = sess.raw_session().run(global_step)
listener1_counts = listener1.get_counts()
listener2_counts = listener2.get_counts()
self.assertEqual(2, global_step_val)
self.assertEqual({
'begin': 1,
'before_save': 3,
'after_save': 3,
'end': 1
}, listener1_counts)
self.assertEqual(listener1_counts, listener2_counts)
with ops.Graph().as_default():
global_step = variables.get_or_create_global_step()
with monitored_session.SingularMonitoredSession(
checkpoint_dir=self.model_dir) as sess2:
global_step_saved_val = sess2.run(global_step)
self.assertEqual(2, global_step_saved_val)
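  # The time-based tests patch time.time and advance mock_time.return_value manually
  # so the save_secs-driven triggering can be exercised deterministically.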
@test.mock.patch.object(time, 'time')
def test_save_secs_saves_periodically(self, mock_time):
# Let's have a realistic start time
current_time = 1484695987.209386
with self.graph.as_default():
mock_time.return_value = current_time
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_secs=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mock_time.return_value = current_time
mon_sess.run(self.train_op) # Saved.
mock_time.return_value = current_time + 0.5
mon_sess.run(self.train_op) # Not saved.
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
# Simulate 2.5 seconds of sleep.
mock_time.return_value = current_time + 2.5
mon_sess.run(self.train_op) # Saved.
mock_time.return_value = current_time + 2.6
mon_sess.run(self.train_op) # Not saved.
mock_time.return_value = current_time + 2.7
mon_sess.run(self.train_op) # Not saved.
self.assertEqual(3,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
        # Simulate 7.5 more seconds of sleep (10 seconds from start).
mock_time.return_value = current_time + 10
mon_sess.run(self.train_op) # Saved.
self.assertEqual(6,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
@test.mock.patch.object(time, 'time')
def test_save_secs_calls_listeners_periodically(self, mock_time):
# Let's have a realistic start time
current_time = 1484695987.209386
with self.graph.as_default():
mock_time.return_value = current_time
listener = MockCheckpointSaverListener()
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_secs=2,
scaffold=self.scaffold,
listeners=[listener])
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mock_time.return_value = current_time + 0.5
mon_sess.run(self.train_op) # hook runs here
mock_time.return_value = current_time + 0.5
mon_sess.run(self.train_op)
mock_time.return_value = current_time + 3.0
mon_sess.run(self.train_op) # hook runs here
mock_time.return_value = current_time + 3.5
mon_sess.run(self.train_op)
mock_time.return_value = current_time + 4.0
mon_sess.run(self.train_op)
mock_time.return_value = current_time + 6.5
mon_sess.run(self.train_op) # hook runs here
mock_time.return_value = current_time + 7.0
mon_sess.run(self.train_op) # hook won't run here, so it does at end
mock_time.return_value = current_time + 7.5
hook.end(sess) # hook runs here
self.assertEqual({
'begin': 1,
'before_save': 4,
'after_save': 4,
'end': 1
}, listener.get_counts())
def test_save_steps_saves_in_first_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_save_steps_saves_periodically(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
mon_sess.run(self.train_op)
# Not saved
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# saved
self.assertEqual(3,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Not saved
self.assertEqual(3,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# saved
self.assertEqual(5,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_save_saves_at_end(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_secs=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
mon_sess.run(self.train_op)
hook.end(sess)
self.assertEqual(2,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_summary_writer_defs(self):
fake_summary_writer.FakeSummaryWriter.install()
writer_cache.FileWriterCache.clear()
summary_writer = writer_cache.FileWriterCache.get(self.model_dir)
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
hook.after_create_session(sess, None)
mon_sess.run(self.train_op)
summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.model_dir,
expected_added_meta_graphs=[
meta_graph.create_meta_graph_def(
graph_def=self.graph.as_graph_def(add_shapes=True),
saver_def=self.scaffold.saver.saver_def)
])
fake_summary_writer.FakeSummaryWriter.uninstall()
def test_save_checkpoint_before_first_train_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
mon_sess = monitored_session._HookedSession(sess, [hook])
sess.run(self.scaffold.init_op)
hook.after_create_session(sess, None)
# Verifies that checkpoint is saved at step 0.
self.assertEqual(0,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
# Verifies that no checkpoint is saved after one training step.
mon_sess.run(self.train_op)
self.assertEqual(0,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
# Verifies that checkpoint is saved after save_steps.
mon_sess.run(self.train_op)
self.assertEqual(2,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
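# In the multi-step variants below, each run of train_op advances the global step by
# steps_per_run, and save_steps is scaled to 2 * steps_per_run accordingly.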
class CheckpointSaverHookMultiStepTest(test.TestCase):
def setUp(self):
self.model_dir = tempfile.mkdtemp()
self.graph = ops.Graph()
self.steps_per_run = 5
with self.graph.as_default():
self.scaffold = monitored_session.Scaffold()
self.global_step = variables.get_or_create_global_step()
self.train_op = training_util._increment_global_step(self.steps_per_run)
def tearDown(self):
shutil.rmtree(self.model_dir, ignore_errors=True)
def test_save_steps_saves_in_first_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=2*self.steps_per_run,
scaffold=self.scaffold)
hook._set_steps_per_run(self.steps_per_run)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
self.assertEqual(5,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_save_steps_saves_periodically(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=2*self.steps_per_run,
scaffold=self.scaffold)
hook._set_steps_per_run(self.steps_per_run)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
# Saved (step=5)
self.assertEqual(5,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Not saved (step=10)
self.assertEqual(5,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Saved (step=15)
self.assertEqual(15,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Not saved (step=20)
self.assertEqual(15,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Saved (step=25)
self.assertEqual(25,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
def test_save_steps_saves_at_end(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir,
save_steps=2*self.steps_per_run,
scaffold=self.scaffold)
hook._set_steps_per_run(self.steps_per_run)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
mon_sess.run(self.train_op)
hook.end(sess)
self.assertEqual(10,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
class ResourceCheckpointSaverHookTest(test.TestCase):
def setUp(self):
self.model_dir = tempfile.mkdtemp()
self.graph = ops.Graph()
with self.graph.as_default():
self.scaffold = monitored_session.Scaffold()
with variable_scope.variable_scope('foo', use_resource=True):
self.global_step = training_util.get_or_create_global_step()
self.train_op = training_util._increment_global_step(1)
def test_save_steps_saves_periodically(self):
with self.graph.as_default():
hook = basic_session_run_hooks.CheckpointSaverHook(
self.model_dir, save_steps=2, scaffold=self.scaffold)
hook.begin()
self.scaffold.finalize()
with session_lib.Session() as sess:
sess.run(self.scaffold.init_op)
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(self.train_op)
mon_sess.run(self.train_op)
# Not saved
self.assertEqual(1,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# saved
self.assertEqual(3,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# Not saved
self.assertEqual(3,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
mon_sess.run(self.train_op)
# saved
self.assertEqual(5,
checkpoint_utils.load_variable(self.model_dir,
self.global_step.name))
class StepCounterHookTest(test.TestCase):
def setUp(self):
self.log_dir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.log_dir, ignore_errors=True)
def test_step_counter_every_n_steps(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
summary_writer = fake_summary_writer.FakeSummaryWriter(self.log_dir, g)
hook = basic_session_run_hooks.StepCounterHook(
summary_writer=summary_writer, every_n_steps=10)
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
with test.mock.patch.object(tf_logging, 'warning') as mock_log:
for _ in range(30):
time.sleep(0.01)
mon_sess.run(train_op)
# logging.warning should not be called.
self.assertIsNone(mock_log.call_args)
hook.end(sess)
summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertItemsEqual([11, 21], summary_writer.summaries.keys())
for step in [11, 21]:
summary_value = summary_writer.summaries[step][0].value[0]
self.assertEqual('global_step/sec', summary_value.tag)
self.assertGreater(summary_value.simple_value, 0)
def test_step_counter_every_n_secs(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
variables.get_or_create_global_step()
train_op = training_util._increment_global_step(1)
summary_writer = fake_summary_writer.FakeSummaryWriter(self.log_dir, g)
hook = basic_session_run_hooks.StepCounterHook(
summary_writer=summary_writer, every_n_steps=None, every_n_secs=0.1)
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(train_op)
time.sleep(0.2)
mon_sess.run(train_op)
time.sleep(0.2)
mon_sess.run(train_op)
hook.end(sess)
summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertTrue(summary_writer.summaries, 'No summaries were created.')
self.assertItemsEqual([2, 3], summary_writer.summaries.keys())
for summary in summary_writer.summaries.values():
summary_value = summary[0].value[0]
self.assertEqual('global_step/sec', summary_value.tag)
self.assertGreater(summary_value.simple_value, 0)
def test_global_step_name(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
with variable_scope.variable_scope('bar'):
variable_scope.get_variable(
'foo',
initializer=0,
trainable=False,
collections=[
ops.GraphKeys.GLOBAL_STEP, ops.GraphKeys.GLOBAL_VARIABLES
])
train_op = training_util._increment_global_step(1)
summary_writer = fake_summary_writer.FakeSummaryWriter(self.log_dir, g)
hook = basic_session_run_hooks.StepCounterHook(
summary_writer=summary_writer, every_n_steps=1, every_n_secs=None)
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(train_op)
mon_sess.run(train_op)
hook.end(sess)
summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertTrue(summary_writer.summaries, 'No summaries were created.')
self.assertItemsEqual([2], summary_writer.summaries.keys())
summary_value = summary_writer.summaries[2][0].value[0]
self.assertEqual('bar/foo/sec', summary_value.tag)
def test_log_warning_if_global_step_not_increased(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
variables.get_or_create_global_step()
train_op = training_util._increment_global_step(0) # keep same.
self.evaluate(variables_lib.global_variables_initializer())
hook = basic_session_run_hooks.StepCounterHook(
every_n_steps=1, every_n_secs=None)
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
mon_sess.run(train_op) # Run one step to record global step.
with test.mock.patch.object(tf_logging, 'warning') as mock_log:
for _ in range(30):
mon_sess.run(train_op)
self.assertRegexpMatches(
str(mock_log.call_args),
'global step.*has not been increased')
hook.end(sess)
def _setup_steps_per_run_test(self,
every_n_steps,
steps_per_run,
graph,
sess):
variables.get_or_create_global_step()
self.train_op = training_util._increment_global_step(steps_per_run)
self.summary_writer = fake_summary_writer.FakeSummaryWriter(
self.log_dir, graph)
self.hook = basic_session_run_hooks.StepCounterHook(
summary_writer=self.summary_writer, every_n_steps=every_n_steps)
self.hook._set_steps_per_run(steps_per_run)
self.hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
self.mon_sess = monitored_session._HookedSession(sess, [self.hook])
def test_steps_per_run_less_than_every_n_steps(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
self._setup_steps_per_run_test(10, 5, g, sess)
# Logs at 15, 25
for _ in range(5):
time.sleep(0.01)
self.mon_sess.run(self.train_op)
self.hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertItemsEqual([15, 25], self.summary_writer.summaries.keys())
for step in [15, 25]:
summary_value = self.summary_writer.summaries[step][0].value[0]
self.assertEqual('global_step/sec', summary_value.tag)
self.assertGreater(summary_value.simple_value, 0)
def test_steps_per_run_equal_every_n_steps(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
self._setup_steps_per_run_test(5, 5, g, sess)
# Logs at 10, 15, 20, 25
for _ in range(5):
time.sleep(0.01)
self.mon_sess.run(self.train_op)
self.hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertItemsEqual([10, 15, 20, 25],
self.summary_writer.summaries.keys())
for step in [10, 15, 20, 25]:
summary_value = self.summary_writer.summaries[step][0].value[0]
self.assertEqual('global_step/sec', summary_value.tag)
self.assertGreater(summary_value.simple_value, 0)
def test_steps_per_run_greater_than_every_n_steps(self):
with ops.Graph().as_default() as g, session_lib.Session() as sess:
self._setup_steps_per_run_test(5, 10, g, sess)
# Logs at 20, 30, 40, 50
for _ in range(5):
time.sleep(0.01)
self.mon_sess.run(self.train_op)
self.hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_graph=g,
expected_summaries={})
self.assertItemsEqual([20, 30, 40, 50],
self.summary_writer.summaries.keys())
for step in [20, 30, 40, 50]:
summary_value = self.summary_writer.summaries[step][0].value[0]
self.assertEqual('global_step/sec', summary_value.tag)
self.assertGreater(summary_value.simple_value, 0)
class SummarySaverHookTest(test.TestCase):
def setUp(self):
test.TestCase.setUp(self)
self.log_dir = 'log/dir'
self.summary_writer = fake_summary_writer.FakeSummaryWriter(self.log_dir)
var = variables_lib.Variable(0.0)
tensor = state_ops.assign_add(var, 1.0)
tensor2 = tensor * 2
self.summary_op = summary_lib.scalar('my_summary', tensor)
self.summary_op2 = summary_lib.scalar('my_summary2', tensor2)
variables.get_or_create_global_step()
self.train_op = training_util._increment_global_step(1)
def test_raise_when_scaffold_and_summary_op_both_missing(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SummarySaverHook()
def test_raise_when_scaffold_and_summary_op_both_present(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SummarySaverHook(
scaffold=monitored_session.Scaffold(), summary_op=self.summary_op)
def test_raise_in_both_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SummarySaverHook(
save_secs=10, save_steps=20, summary_writer=self.summary_writer)
def test_raise_in_none_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.SummarySaverHook(
save_secs=None, save_steps=None, summary_writer=self.summary_writer)
def test_save_steps(self):
hook = basic_session_run_hooks.SummarySaverHook(
save_steps=8,
summary_writer=self.summary_writer,
summary_op=self.summary_op)
with self.cached_session() as sess:
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
for _ in range(30):
mon_sess.run(self.train_op)
hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_summaries={
1: {
'my_summary': 1.0
},
9: {
'my_summary': 2.0
},
17: {
'my_summary': 3.0
},
25: {
'my_summary': 4.0
},
})
def test_multiple_summaries(self):
hook = basic_session_run_hooks.SummarySaverHook(
save_steps=8,
summary_writer=self.summary_writer,
summary_op=[self.summary_op, self.summary_op2])
with self.cached_session() as sess:
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
for _ in range(10):
mon_sess.run(self.train_op)
hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_summaries={
1: {
'my_summary': 1.0,
'my_summary2': 2.0
},
9: {
'my_summary': 2.0,
'my_summary2': 4.0
},
})
def test_save_secs_saving_once_every_step(self):
hook = basic_session_run_hooks.SummarySaverHook(
save_secs=0.5,
summary_writer=self.summary_writer,
summary_op=self.summary_op)
with self.cached_session() as sess:
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
for _ in range(4):
mon_sess.run(self.train_op)
time.sleep(0.5)
hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_summaries={
1: {
'my_summary': 1.0
},
2: {
'my_summary': 2.0
},
3: {
'my_summary': 3.0
},
4: {
'my_summary': 4.0
},
})
@test.mock.patch.object(time, 'time')
def test_save_secs_saving_once_every_three_steps(self, mock_time):
mock_time.return_value = 1484695987.209386
hook = basic_session_run_hooks.SummarySaverHook(
save_secs=9.,
summary_writer=self.summary_writer,
summary_op=self.summary_op)
with self.cached_session() as sess:
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
for _ in range(8):
mon_sess.run(self.train_op)
mock_time.return_value += 3.1
hook.end(sess)
      # 24.8 seconds passed (3.1 * 8); the hook saves every 9 seconds, starting from the first step:
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_summaries={
1: {
'my_summary': 1.0
},
4: {
'my_summary': 2.0
},
7: {
'my_summary': 3.0
},
})
class GlobalStepWaiterHookTest(test.TestCase):
def test_not_wait_for_step_zero(self):
with ops.Graph().as_default():
variables.get_or_create_global_step()
hook = basic_session_run_hooks.GlobalStepWaiterHook(wait_until_step=0)
hook.begin()
with session_lib.Session() as sess:
        # before_run should return immediately, without waiting for a gstep increment.
hook.before_run(
session_run_hook.SessionRunContext(
original_args=None, session=sess))
def test_wait_for_step(self):
with ops.Graph().as_default():
gstep = variables.get_or_create_global_step()
hook = basic_session_run_hooks.GlobalStepWaiterHook(wait_until_step=1000)
hook.begin()
with session_lib.Session() as sess:
self.evaluate(variables_lib.global_variables_initializer())
waiter = threading.Thread(
target=hook.before_run,
args=(session_run_hook.SessionRunContext(
original_args=None, session=sess),))
waiter.daemon = True
waiter.start()
time.sleep(1.0)
self.assertTrue(waiter.is_alive())
sess.run(state_ops.assign(gstep, 500))
time.sleep(1.0)
self.assertTrue(waiter.is_alive())
sess.run(state_ops.assign(gstep, 1100))
time.sleep(1.2)
self.assertFalse(waiter.is_alive())
class FinalOpsHookTest(test.TestCase):
def test_final_ops_is_scalar_tensor(self):
with ops.Graph().as_default():
expected_value = 4
final_ops = constant_op.constant(expected_value)
hook = basic_session_run_hooks.FinalOpsHook(final_ops)
hook.begin()
with session_lib.Session() as session:
hook.end(session)
self.assertEqual(expected_value,
hook.final_ops_values)
def test_final_ops_is_tensor(self):
with ops.Graph().as_default():
expected_values = [1, 6, 3, 5, 2, 4]
final_ops = constant_op.constant(expected_values)
hook = basic_session_run_hooks.FinalOpsHook(final_ops)
hook.begin()
with session_lib.Session() as session:
hook.end(session)
self.assertListEqual(expected_values,
hook.final_ops_values.tolist())
def test_final_ops_triggers_out_of_range_error(self):
with ops.Graph().as_default():
dataset = dataset_ops.Dataset.range(1)
iterator = dataset.make_one_shot_iterator()
read_ops = iterator.get_next()
final_ops = read_ops
hook = basic_session_run_hooks.FinalOpsHook(final_ops)
hook.begin()
with session_lib.Session() as session:
session.run(read_ops)
with test.mock.patch.object(tf_logging, 'warning') as mock_log:
with self.assertRaisesRegexp(errors.OutOfRangeError,
'End of sequence'):
hook.end(session)
self.assertRegexpMatches(
str(mock_log.call_args),
'dependency back to some input source')
def test_final_ops_with_dictionary(self):
with ops.Graph().as_default():
expected_values = [4, -3]
final_ops = array_ops.placeholder(dtype=dtypes.float32)
final_ops_feed_dict = {final_ops: expected_values}
hook = basic_session_run_hooks.FinalOpsHook(
final_ops, final_ops_feed_dict)
hook.begin()
with session_lib.Session() as session:
hook.end(session)
self.assertListEqual(expected_values,
hook.final_ops_values.tolist())
class ResourceSummarySaverHookTest(test.TestCase):
def setUp(self):
test.TestCase.setUp(self)
self.log_dir = 'log/dir'
self.summary_writer = fake_summary_writer.FakeSummaryWriter(self.log_dir)
var = variable_scope.get_variable('var', initializer=0.0, use_resource=True)
tensor = state_ops.assign_add(var, 1.0)
self.summary_op = summary_lib.scalar('my_summary', tensor)
with variable_scope.variable_scope('foo', use_resource=True):
variables.create_global_step()
self.train_op = training_util._increment_global_step(1)
def test_save_steps(self):
hook = basic_session_run_hooks.SummarySaverHook(
save_steps=8,
summary_writer=self.summary_writer,
summary_op=self.summary_op)
with self.cached_session() as sess:
hook.begin()
self.evaluate(variables_lib.global_variables_initializer())
mon_sess = monitored_session._HookedSession(sess, [hook])
for _ in range(30):
mon_sess.run(self.train_op)
hook.end(sess)
self.summary_writer.assert_summaries(
test_case=self,
expected_logdir=self.log_dir,
expected_summaries={
1: {
'my_summary': 1.0
},
9: {
'my_summary': 2.0
},
17: {
'my_summary': 3.0
},
25: {
'my_summary': 4.0
},
})
class FeedFnHookTest(test.TestCase):
def test_feeding_placeholder(self):
with ops.Graph().as_default(), session_lib.Session() as sess:
x = array_ops.placeholder(dtype=dtypes.float32)
y = x + 1
hook = basic_session_run_hooks.FeedFnHook(
feed_fn=lambda: {x: 1.0})
hook.begin()
mon_sess = monitored_session._HookedSession(sess, [hook])
self.assertEqual(mon_sess.run(y), 2)
class ProfilerHookTest(test.TestCase):
def setUp(self):
super(ProfilerHookTest, self).setUp()
self.output_dir = tempfile.mkdtemp()
self.graph = ops.Graph()
self.filepattern = os.path.join(self.output_dir, 'timeline-*.json')
with self.graph.as_default():
self.global_step = variables.get_or_create_global_step()
self.train_op = state_ops.assign_add(self.global_step, 1)
def tearDown(self):
super(ProfilerHookTest, self).tearDown()
shutil.rmtree(self.output_dir, ignore_errors=True)
def _count_timeline_files(self):
return len(gfile.Glob(self.filepattern))
def test_raise_in_both_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.ProfilerHook(save_secs=10, save_steps=20)
def test_raise_in_none_secs_and_steps(self):
with self.assertRaises(ValueError):
basic_session_run_hooks.ProfilerHook(save_secs=None, save_steps=None)
def test_save_secs_does_not_save_in_first_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.ProfilerHook(
save_secs=2, output_dir=self.output_dir)
with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
sess.run(self.train_op)
self.assertEqual(0, self._count_timeline_files())
@test.mock.patch.object(time, 'time')
def test_save_secs_saves_periodically(self, mock_time):
# Pick a fixed start time.
current_time = 1484863632.
with self.graph.as_default():
mock_time.return_value = current_time
hook = basic_session_run_hooks.ProfilerHook(
save_secs=2, output_dir=self.output_dir)
with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
sess.run(self.train_op) # Not saved.
self.assertEqual(0, self._count_timeline_files())
# Simulate 2.5 seconds of sleep.
mock_time.return_value = current_time + 2.5
sess.run(self.train_op) # Saved.
self.assertEqual(1, self._count_timeline_files())
# Pretend some small amount of time has passed.
mock_time.return_value = current_time + 2.6
sess.run(self.train_op) # Not saved.
# Edge test just before we should save the timeline.
mock_time.return_value = current_time + 4.4
sess.run(self.train_op) # Not saved.
self.assertEqual(1, self._count_timeline_files())
mock_time.return_value = current_time + 4.5
sess.run(self.train_op) # Saved.
self.assertEqual(2, self._count_timeline_files())
def test_save_steps_does_not_save_in_first_step(self):
with self.graph.as_default():
hook = basic_session_run_hooks.ProfilerHook(
save_steps=1, output_dir=self.output_dir)
with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
sess.run(self.train_op) # Not saved.
self.assertEqual(0, self._count_timeline_files())
def test_save_steps_saves_periodically(self):
with self.graph.as_default():
hook = basic_session_run_hooks.ProfilerHook(
save_steps=2, output_dir=self.output_dir)
with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
self.assertEqual(0, self._count_timeline_files())
sess.run(self.train_op) # Not saved.
self.assertEqual(0, self._count_timeline_files())
sess.run(self.train_op) # Saved.
self.assertEqual(1, self._count_timeline_files())
sess.run(self.train_op) # Not saved.
self.assertEqual(1, self._count_timeline_files())
sess.run(self.train_op) # Saved.
self.assertEqual(2, self._count_timeline_files())
sess.run(self.train_op) # Not saved.
self.assertEqual(2, self._count_timeline_files())
def test_run_metadata_saves(self):
writer_cache.FileWriterCache.clear()
fake_summary_writer.FakeSummaryWriter.install()
fake_writer = writer_cache.FileWriterCache.get(self.output_dir)
with self.graph.as_default():
hook = basic_session_run_hooks.ProfilerHook(
save_steps=1, output_dir=self.output_dir)
with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
sess.run(self.train_op) # Not saved.
sess.run(self.train_op) # Saved.
self.assertEqual(
list(fake_writer._added_run_metadata.keys()), ['step_2'])
fake_summary_writer.FakeSummaryWriter.uninstall()
if __name__ == '__main__':
test.main()
| 38.110677 | 80 | 0.666934 | 7,468 | 58,538 | 4.916443 | 0.060927 | 0.024594 | 0.021517 | 0.038131 | 0.842984 | 0.804254 | 0.779088 | 0.754984 | 0.727258 | 0.701493 | 0 | 0.014236 | 0.235625 | 58,538 | 1,535 | 81 | 38.135505 | 0.806325 | 0.046585 | 0 | 0.719811 | 0 | 0 | 0.014753 | 0 | 0 | 0 | 0 | 0 | 0.120758 | 1 | 0.072612 | false | 0 | 0.026046 | 0.001579 | 0.112076 | 0.014207 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b91bb70f8f70d6095333cbdf9f7c583498e677b7 | 953 | py | Python | benchmarks/test_calibration.py | sedders123/zoloto | 7084dade1c39f13fe583c3d0f1f0224ec3e1a708 | [
"BSD-3-Clause"
] | null | null | null | benchmarks/test_calibration.py | sedders123/zoloto | 7084dade1c39f13fe583c3d0f1f0224ec3e1a708 | [
"BSD-3-Clause"
] | null | null | null | benchmarks/test_calibration.py | sedders123/zoloto | 7084dade1c39f13fe583c3d0f1f0224ec3e1a708 | [
"BSD-3-Clause"
] | null | null | null | from pathlib import Path
from zoloto.calibration import parse_calibration_file, save_calibrations
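# Each test times saving or parsing calibration files in JSON and XML formats using
# the `benchmark` fixture (from pytest-benchmark).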
def test_save_calibrations_json(benchmark, fake_calibration_params, make_temp_file):
benchmark(save_calibrations, fake_calibration_params, Path(make_temp_file(".json")))
def test_save_calibrations_xml(benchmark, fake_calibration_params, make_temp_file):
benchmark(save_calibrations, fake_calibration_params, Path(make_temp_file(".xml")))
def test_parse_calibrations_xml(benchmark, fake_calibration_params, make_temp_file):
temp_file = Path(make_temp_file(".xml"))
save_calibrations(fake_calibration_params, temp_file)
benchmark(parse_calibration_file.__wrapped__, temp_file)
def test_parse_calibrations_json(benchmark, fake_calibration_params, make_temp_file):
temp_file = Path(make_temp_file(".json"))
save_calibrations(fake_calibration_params, temp_file)
benchmark(parse_calibration_file.__wrapped__, temp_file)
| 39.708333 | 88 | 0.832109 | 125 | 953 | 5.792 | 0.16 | 0.154696 | 0.232044 | 0.165746 | 0.820442 | 0.801105 | 0.801105 | 0.801105 | 0.801105 | 0.71547 | 0 | 0 | 0.090241 | 953 | 23 | 89 | 41.434783 | 0.835063 | 0 | 0 | 0.285714 | 0 | 0 | 0.018888 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b91caccbfeaf8dd1905d7a715349e074a9835edd | 13,097 | py | Python | tests/test_cli.py | pminervini/kg-reasoning-alpha | c796853d273c1c488eda583ac63022a82ba17ce0 | [
"MIT"
] | 1 | 2021-06-27T11:11:18.000Z | 2021-06-27T11:11:18.000Z | tests/test_cli.py | pminervini/kg-reasoning-alpha | c796853d273c1c488eda583ac63022a82ba17ce0 | [
"MIT"
] | null | null | null | tests/test_cli.py | pminervini/kg-reasoning-alpha | c796853d273c1c488eda583ac63022a82ba17ce0 | [
"MIT"
] | null | null | null |
import os
import sys
import numpy as np
import subprocess
import pytest
@pytest.mark.light
def test_cqd():
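    # Run main.py end-to-end as a subprocess on the tiny NELL-betae split and compare
    # the logged per-task MRR values against reference numbers.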
env = os.environ.copy()
env['PYTHONPATH'] = '.'
cmd_str = 'python3 main.py --do_test --data_path data/NELL-betae-tiny -n 1 -b 1000 -d 1000 -lr 0.1 ' \
'--max_steps 1000 --cpu_num 0 --geo cqd --valid_steps 20 ' \
'--tasks 1p.2p.3p.2i.3i.ip.pi.2in.3in.inp.pin.pni.2u.up --print_on_screen --test_batch_size 1 ' \
'--optimizer adagrad --reg_weight 0.05 --log_steps 5 --checkpoint_path models/nell-betae ' \
'--cqd discrete --cqd-t-norm prod --cqd-k 4'
cmd = cmd_str.split()
p = subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
sys.stdout = sys.stderr
lines = out.decode("utf-8").split("\n")
sanity_check_flag_1 = False
sanity_check_flag_2 = False
sanity_check_flag_3 = False
sanity_check_flag_4 = False
sanity_check_flag_5 = False
sanity_check_flag_6 = False
sanity_check_flag_7 = False
sanity_check_flag_8 = False
sanity_check_flag_9 = False
sanity_check_flag_10 = False
sanity_check_flag_11 = False
sanity_check_flag_12 = False
sanity_check_flag_13 = False
sanity_check_flag_14 = False
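    # Each expected metric is read from the subprocess log output; the value is
    # the 10th whitespace-separated token on the matching line.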
for line in lines:
print(line)
if 'Test 1p MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.576179, atol=1e-4, rtol=1e-4)
sanity_check_flag_1 = True
elif 'Test 2p MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.217309, atol=1e-4, rtol=1e-4)
sanity_check_flag_2 = True
elif 'Test 3p MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.138355, atol=1e-4, rtol=1e-4)
sanity_check_flag_3 = True
elif 'Test 2i MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.448377, atol=1e-4, rtol=1e-4)
sanity_check_flag_4 = True
elif 'Test 3i MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.472417, atol=1e-4, rtol=1e-4)
sanity_check_flag_5 = True
elif 'Test ip MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.266512, atol=1e-4, rtol=1e-4)
sanity_check_flag_6 = True
elif 'Test pi MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.225731, atol=1e-4, rtol=1e-4)
sanity_check_flag_7 = True
elif 'Test 2in MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.003777, atol=1e-4, rtol=1e-4)
sanity_check_flag_8 = True
elif 'Test 3in MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.000139, atol=1e-4, rtol=1e-4)
sanity_check_flag_9 = True
elif 'Test inp MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.036229, atol=1e-4, rtol=1e-4)
sanity_check_flag_10 = True
elif 'Test pin MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.003973, atol=1e-4, rtol=1e-4)
sanity_check_flag_11 = True
elif 'Test pni MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.024000, atol=1e-4, rtol=1e-4)
sanity_check_flag_12 = True
elif 'Test 2u-DNF MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.014596, atol=1e-4, rtol=1e-4)
sanity_check_flag_13 = True
elif 'Test up-DNF MRR at step 99999' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.018403, atol=1e-4, rtol=1e-4)
sanity_check_flag_14 = True
assert sanity_check_flag_1
assert sanity_check_flag_2
assert sanity_check_flag_3
assert sanity_check_flag_4
assert sanity_check_flag_5
assert sanity_check_flag_6
assert sanity_check_flag_7
assert sanity_check_flag_8
assert sanity_check_flag_9
assert sanity_check_flag_10
assert sanity_check_flag_11
assert sanity_check_flag_12
assert sanity_check_flag_13
assert sanity_check_flag_14
@pytest.mark.light
def test_train_cqd():
env = os.environ.copy()
env['PYTHONPATH'] = '.'
cmd_str = 'python3 main.py --do_train --do_valid --do_test --data_path data/NELL-betae-tiny -n 1 -b 100 -d 200 ' \
'-lr 0.1 --warm_up_steps 0 --max_steps 10 --cpu_num 0 --geo cqd --valid_steps 500 --tasks 1p ' \
'--print_on_screen --test_batch_size 1000 --optimizer adagrad --reg_weight 0.1 --log_steps 1 ' \
'--use-qa-iterator --disable-saving'
cmd = cmd_str.split()
p = subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
sys.stdout = sys.stderr
lines = out.decode("utf-8").split("\n")
sanity_check_flag_1 = False
sanity_check_flag_2 = False
sanity_check_flag_3 = False
sanity_check_flag_4 = False
sanity_check_flag_5 = False
sanity_check_flag_6 = False
sanity_check_flag_7 = False
sanity_check_flag_8 = False
sanity_check_flag_9 = False
sanity_check_flag_10 = False
sanity_check_flag_11 = False
sanity_check_flag_12 = False
for line in lines:
print(line)
if 'Training average loss at step 0:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.113211, atol=1e-4, rtol=1e-4)
sanity_check_flag_1 = True
elif 'Training average loss at step 1:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.144814, atol=1e-4, rtol=1e-4)
sanity_check_flag_2 = True
elif 'Training average loss at step 2:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.136446, atol=1e-4, rtol=1e-4)
sanity_check_flag_3 = True
elif 'Training average loss at step 3' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.133833, atol=1e-4, rtol=1e-4)
sanity_check_flag_4 = True
elif 'Training average loss at step 4' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.135809, atol=1e-4, rtol=1e-4)
sanity_check_flag_5 = True
elif 'Training average loss at step 5' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.139427, atol=1e-4, rtol=1e-4)
sanity_check_flag_6 = True
elif 'Training average loss at step 6' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.138998, atol=1e-4, rtol=1e-4)
sanity_check_flag_7 = True
elif 'Training average loss at step 7' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.136427, atol=1e-4, rtol=1e-4)
sanity_check_flag_8 = True
elif 'Training average loss at step 8' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.141205, atol=1e-4, rtol=1e-4)
sanity_check_flag_9 = True
elif 'Training average loss at step 9' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.141012, atol=1e-4, rtol=1e-4)
sanity_check_flag_10 = True
elif 'Valid average MRR at step 9:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.004659, atol=1e-4, rtol=1e-4)
sanity_check_flag_11 = True
elif 'Test average MRR at step 9:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.016892, atol=1e-4, rtol=1e-4)
sanity_check_flag_12 = True
assert sanity_check_flag_1
assert sanity_check_flag_2
assert sanity_check_flag_3
assert sanity_check_flag_4
assert sanity_check_flag_5
assert sanity_check_flag_6
assert sanity_check_flag_7
assert sanity_check_flag_8
assert sanity_check_flag_9
assert sanity_check_flag_10
assert sanity_check_flag_11
assert sanity_check_flag_12
@pytest.mark.light
def test_train_cqd_no_warmup():
env = os.environ.copy()
env['PYTHONPATH'] = '.'
cmd_str = 'python3 main.py --do_train --do_valid --do_test --data_path data/NELL-betae-tiny -n 1 -b 100 -d 200 ' \
'-lr 0.1 --warm_up_steps 0 --max_steps 10 --cpu_num 0 --geo cqd --valid_steps 500 --tasks 1p ' \
'--print_on_screen --test_batch_size 1000 --optimizer adagrad --reg_weight 0.1 --log_steps 1 ' \
'--use-qa-iterator --disable-saving --disable-warmup'
cmd = cmd_str.split()
p = subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
sys.stdout = sys.stderr
lines = out.decode("utf-8").split("\n")
sanity_check_flag_1 = False
sanity_check_flag_2 = False
sanity_check_flag_3 = False
sanity_check_flag_4 = False
sanity_check_flag_5 = False
sanity_check_flag_6 = False
sanity_check_flag_7 = False
sanity_check_flag_8 = False
sanity_check_flag_9 = False
sanity_check_flag_10 = False
sanity_check_flag_11 = False
sanity_check_flag_12 = False
for line in lines:
print(line)
if 'Training average loss at step 0:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.113211, atol=1e-4, rtol=1e-4)
sanity_check_flag_1 = True
elif 'Training average loss at step 1:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.144814, atol=1e-4, rtol=1e-4)
sanity_check_flag_2 = True
elif 'Training average loss at step 2:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.231329, atol=1e-4, rtol=1e-4)
sanity_check_flag_3 = True
elif 'Training average loss at step 3' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.390648, atol=1e-4, rtol=1e-4)
sanity_check_flag_4 = True
elif 'Training average loss at step 4' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.762291, atol=1e-4, rtol=1e-4)
sanity_check_flag_5 = True
elif 'Training average loss at step 5' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 23.459116, atol=1e-4, rtol=1e-4)
sanity_check_flag_6 = True
elif 'Training average loss at step 6' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 23.847559, atol=1e-4, rtol=1e-4)
sanity_check_flag_7 = True
elif 'Training average loss at step 7' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 22.674259, atol=1e-4, rtol=1e-4)
sanity_check_flag_8 = True
elif 'Training average loss at step 8' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 23.303553, atol=1e-4, rtol=1e-4)
sanity_check_flag_9 = True
elif 'Training average loss at step 9' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 23.157228, atol=1e-4, rtol=1e-4)
sanity_check_flag_10 = True
elif 'Valid average MRR at step 9:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.042730, atol=1e-4, rtol=1e-4)
sanity_check_flag_11 = True
elif 'Test average MRR at step 9:' in line:
value = float(line.split()[9])
np.testing.assert_allclose(value, 0.090229, atol=1e-4, rtol=1e-4)
sanity_check_flag_12 = True
assert sanity_check_flag_1
assert sanity_check_flag_2
assert sanity_check_flag_3
assert sanity_check_flag_4
assert sanity_check_flag_5
assert sanity_check_flag_6
assert sanity_check_flag_7
assert sanity_check_flag_8
assert sanity_check_flag_9
assert sanity_check_flag_10
assert sanity_check_flag_11
assert sanity_check_flag_12
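# --- Illustrative sketch (not part of the original test module) --------------
# The twelve sanity_check_flag_* variables and asserts above can be expressed
# as one table-driven helper. This sketch assumes the same log format as the
# checks above (the numeric value is whitespace token 9) and that numpy is
# imported as np at the top of this file, as its use above implies.
def _assert_expected_metrics(lines, expected):
    seen = set()
    for line in lines:
        for prefix, target in expected.items():
            if prefix in line:
                value = float(line.split()[9])
                np.testing.assert_allclose(value, target, atol=1e-4, rtol=1e-4)
                seen.add(prefix)
    missing = sorted(set(expected) - seen)
    assert not missing, f'expected log lines not found: {missing}'
# Example usage (values taken from the test above):
# _assert_expected_metrics(lines, {'Training average loss at step 0:': 22.113211})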
if __name__ == '__main__':
pytest.main([__file__])
# test_train_cqd()
| 40.800623 | 118 | 0.632359 | 1,978 | 13,097 | 3.960061 | 0.08999 | 0.160092 | 0.218307 | 0.07762 | 0.918167 | 0.912039 | 0.90872 | 0.898506 | 0.894676 | 0.887272 | 0 | 0.084216 | 0.264717 | 13,097 | 320 | 119 | 40.928125 | 0.72918 | 0.001222 | 0 | 0.747331 | 0 | 0.021352 | 0.166769 | 0.003517 | 0 | 0 | 0 | 0 | 0.270463 | 1 | 0.010676 | false | 0 | 0.017794 | 0 | 0.02847 | 0.021352 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b91fa49de273ae168cd0b21214bc33fbfa3c77fc | 1,065 | py | Python | mlutils/core/event/handler.py | marcopodda/mldatautils | 57bf5d6ee2fb62d9dffd4b344d7d91eb8795457d | [
"MIT"
] | 2 | 2020-03-06T19:55:53.000Z | 2020-03-07T14:14:53.000Z | mlutils/core/event/handler.py | marcopodda/mldatautils | 57bf5d6ee2fb62d9dffd4b344d7d91eb8795457d | [
"MIT"
] | null | null | null | mlutils/core/event/handler.py | marcopodda/mldatautils | 57bf5d6ee2fb62d9dffd4b344d7d91eb8795457d | [
"MIT"
] | null | null | null | class EventHandler:
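    """Base class of no-op lifecycle hooks; subclasses override only the events they need."""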
def on_fit_start(self, state):
pass
def on_fit_end(self, state):
pass
def on_epoch_start(self, state):
pass
def on_epoch_end(self, state):
pass
def on_training_epoch_start(self, state):
pass
def on_training_epoch_end(self, state):
pass
def on_validation_epoch_start(self, state):
pass
def on_validation_epoch_end(self, state):
pass
def on_training_batch_start(self, state):
pass
def on_training_batch_end(self, state):
pass
def on_validation_batch_start(self, state):
pass
def on_validation_batch_end(self, state):
pass
def on_backward(self, state):
pass
def on_test_epoch_start(self, state):
pass
def on_test_epoch_end(self, state):
pass
def on_test_batch_start(self, state):
pass
def on_test_batch_end(self, state):
pass
def state_dict(self):
return {}
def load_state_dict(self, state_dict):
pass
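# Illustrative sketch (not part of the original module): a concrete handler
# overrides only the hooks it cares about. The class name and messages below
# are hypothetical examples, not part of mldatautils.
class PrintingHandler(EventHandler):
    def on_fit_start(self, state):
        print("fit started")

    def on_training_batch_end(self, state):
        print("finished a training batch")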
| 18.362069 | 47 | 0.622535 | 145 | 1,065 | 4.234483 | 0.131034 | 0.263844 | 0.359935 | 0.442997 | 0.856678 | 0.856678 | 0.754072 | 0.110749 | 0 | 0 | 0 | 0 | 0.304225 | 1,065 | 57 | 48 | 18.684211 | 0.82861 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.487179 | false | 0.461538 | 0 | 0.025641 | 0.538462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
b94e8a78a26cc827cac77d5663f0ecb94e974634 | 126,294 | py | Python | src/oci/_vendor/chardet/langrussianmodel.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 249 | 2017-09-11T22:06:05.000Z | 2022-03-04T17:09:29.000Z | src/oci/_vendor/chardet/langrussianmodel.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 228 | 2017-09-11T23:07:26.000Z | 2022-03-23T10:58:50.000Z | src/oci/_vendor/chardet/langrussianmodel.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 224 | 2017-09-27T07:32:43.000Z | 2022-03-25T16:55:42.000Z | # coding: utf-8
# Modified Work: Copyright (c) 2018, 2021, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
# Original Work: Copyright (c) 2018 Character Encoding Detector contributors. https://github.com/chardet
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from .sbcharsetprober import SingleByteCharSetModel
# 3: Positive
# 2: Likely
# 1: Unlikely
# 0: Negative
RUSSIAN_LANG_MODEL = {
37: { # 'А'
37: 0, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 1, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 2, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 1, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 1, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 0, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 0, # 'и'
23: 1, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 0, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 2, # 'ф'
26: 2, # 'х'
28: 0, # 'ц'
22: 1, # 'ч'
25: 2, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
44: { # 'Б'
37: 1, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 1, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 2, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
33: { # 'В'
37: 2, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 1, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 2, # 'а'
21: 1, # 'б'
10: 1, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 2, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 1, # 'ъ'
18: 3, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 0, # 'ю'
16: 1, # 'я'
},
46: { # 'Г'
37: 1, # 'А'
44: 1, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 2, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 1, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
41: { # 'Д'
37: 1, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 2, # 'Е'
56: 1, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 2, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 3, # 'ж'
20: 1, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
48: { # 'Е'
37: 1, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 1, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 2, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 2, # 'Р'
32: 2, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 0, # 'а'
21: 0, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 2, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 0, # 'и'
23: 2, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 1, # 'н'
1: 0, # 'о'
15: 1, # 'п'
9: 1, # 'р'
7: 3, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 2, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
56: { # 'Ж'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 1, # 'б'
10: 0, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 2, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 1, # 'м'
5: 0, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 2, # 'ю'
16: 0, # 'я'
},
51: { # 'З'
37: 1, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 0, # 'г'
13: 2, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 1, # 'л'
12: 1, # 'м'
5: 2, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 1, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 1, # 'я'
},
42: { # 'И'
37: 1, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 2, # 'Е'
56: 1, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 2, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 1, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 1, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 2, # 'з'
4: 1, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 1, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 1, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
60: { # 'Й'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 1, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
36: { # 'К'
37: 2, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 2, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 1, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 0, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
49: { # 'Л'
37: 2, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 1, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 1, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 0, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 0, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 1, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 1, # 'л'
12: 0, # 'м'
5: 1, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 2, # 'ю'
16: 1, # 'я'
},
38: { # 'М'
37: 1, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 1, # 'Ф'
55: 1, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 0, # 'Ь'
47: 1, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 1, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 1, # 'л'
12: 1, # 'м'
5: 2, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 1, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
31: { # 'Н'
37: 2, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 1, # 'З'
42: 2, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 1, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 1, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 3, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
34: { # 'О'
37: 0, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 2, # 'Д'
48: 1, # 'Е'
56: 1, # 'Ж'
51: 1, # 'З'
42: 1, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 2, # 'Л'
38: 1, # 'М'
31: 2, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 2, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 1, # 'Ф'
55: 1, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 1, # 'а'
21: 2, # 'б'
10: 1, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 0, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 0, # 'и'
23: 1, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 0, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 1, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 2, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
35: { # 'П'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 2, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 1, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 0, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 3, # 'р'
7: 1, # 'с'
6: 1, # 'т'
14: 2, # 'у'
39: 1, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 0, # 'ю'
16: 2, # 'я'
},
45: { # 'Р'
37: 2, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 2, # 'Е'
56: 1, # 'Ж'
51: 0, # 'З'
42: 2, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 2, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 1, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 2, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 2, # 'я'
},
32: { # 'С'
37: 1, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 2, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 1, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 2, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 2, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 1, # 'с'
6: 3, # 'т'
14: 2, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 1, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 1, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
40: { # 'Т'
37: 1, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 2, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 1, # 'Ь'
47: 1, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 1, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 1, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
52: { # 'У'
37: 1, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 1, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 1, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 0, # 'Я'
3: 1, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 1, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 2, # 'и'
23: 1, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 1, # 'н'
1: 2, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 0, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
53: { # 'Ф'
37: 1, # 'А'
44: 1, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 1, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
55: { # 'Х'
37: 1, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 2, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 0, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 1, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 1, # 'ь'
30: 1, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
58: { # 'Ц'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 1, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 1, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 0, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 1, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
50: { # 'Ч'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 1, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 1, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 1, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 1, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 3, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 1, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
57: { # 'Ш'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 1, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 1, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 1, # 'н'
1: 2, # 'о'
15: 2, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
63: { # 'Щ'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 1, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 1, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 1, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 1, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 1, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 1, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
62: { # 'Ы'
37: 0, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 1, # 'Ц'
50: 0, # 'Ч'
57: 1, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 0, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 0, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
61: { # 'Ь'
37: 0, # 'А'
44: 1, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 1, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 1, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 1, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 1, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 0, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 0, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 0, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
47: { # 'Э'
37: 0, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 1, # 'Й'
36: 1, # 'К'
49: 1, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 1, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 1, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 0, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 2, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 0, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 1, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
59: { # 'Ю'
37: 1, # 'А'
44: 1, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 1, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 0, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 1, # 'б'
10: 0, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 0, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 2, # 'н'
1: 0, # 'о'
15: 1, # 'п'
9: 1, # 'р'
7: 1, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
43: { # 'Я'
37: 0, # 'А'
44: 0, # 'Б'
33: 1, # 'В'
46: 1, # 'Г'
41: 0, # 'Д'
48: 1, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 1, # 'С'
40: 1, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 1, # 'Х'
58: 0, # 'Ц'
50: 1, # 'Ч'
57: 0, # 'Ш'
63: 1, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 1, # 'Ю'
43: 1, # 'Я'
3: 0, # 'а'
21: 1, # 'б'
10: 1, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 0, # 'е'
24: 0, # 'ж'
20: 1, # 'з'
4: 0, # 'и'
23: 1, # 'й'
11: 1, # 'к'
8: 1, # 'л'
12: 1, # 'м'
5: 2, # 'н'
1: 0, # 'о'
15: 1, # 'п'
9: 1, # 'р'
7: 1, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
3: { # 'а'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 1, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 3, # 'и'
23: 3, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 3, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 2, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
21: { # 'б'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 1, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 1, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 0, # 'ф'
26: 2, # 'х'
28: 1, # 'ц'
22: 1, # 'ч'
25: 2, # 'ш'
29: 3, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 2, # 'ю'
16: 3, # 'я'
},
10: { # 'в'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 3, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 3, # 'ш'
29: 2, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 3, # 'я'
},
19: { # 'г'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 3, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
13: { # 'д'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 3, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 3, # 'ц'
22: 2, # 'ч'
25: 2, # 'ш'
29: 1, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 1, # 'э'
27: 2, # 'ю'
16: 3, # 'я'
},
2: { # 'е'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 2, # 'и'
23: 3, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 2, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 3, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 2, # 'ю'
16: 3, # 'я'
},
24: { # 'ж'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 1, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 1, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 0, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
20: { # 'з'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 3, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 1, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 3, # 'я'
},
4: { # 'и'
37: 1, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 3, # 'и'
23: 3, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 2, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 3, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 2, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
23: { # 'й'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 1, # 'а'
21: 1, # 'б'
10: 1, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 2, # 'з'
4: 1, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 2, # 'ф'
26: 1, # 'х'
28: 2, # 'ц'
22: 3, # 'ч'
25: 2, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 2, # 'я'
},
11: { # 'к'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 3, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 2, # 'ц'
22: 1, # 'ч'
25: 2, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 1, # 'ы'
17: 1, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
8: { # 'л'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 3, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 1, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 2, # 'х'
28: 1, # 'ц'
22: 3, # 'ч'
25: 2, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 1, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
12: { # 'м'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 1, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 2, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 2, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 3, # 'я'
},
5: { # 'н'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 2, # 'х'
28: 3, # 'ц'
22: 3, # 'ч'
25: 2, # 'ш'
29: 2, # 'щ'
54: 1, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 1, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
1: { # 'о'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 3, # 'и'
23: 3, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 2, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 2, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
15: { # 'п'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 3, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 3, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 0, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 1, # 'ш'
29: 1, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 2, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 3, # 'я'
},
9: { # 'р'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 2, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 3, # 'ш'
29: 2, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 2, # 'э'
27: 2, # 'ю'
16: 3, # 'я'
},
7: { # 'с'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 1, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 3, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 3, # 'ч'
25: 2, # 'ш'
29: 1, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 2, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
6: { # 'т'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 2, # 'б'
10: 3, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 2, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 2, # 'ш'
29: 2, # 'щ'
54: 2, # 'ъ'
18: 3, # 'ы'
17: 3, # 'ь'
30: 2, # 'э'
27: 2, # 'ю'
16: 3, # 'я'
},
14: { # 'у'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 3, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 2, # 'и'
23: 2, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 2, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 2, # 'э'
27: 3, # 'ю'
16: 2, # 'я'
},
39: { # 'ф'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 0, # 'в'
19: 1, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 1, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 2, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 1, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 2, # 'ы'
17: 1, # 'ь'
30: 2, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
26: { # 'х'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 3, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 1, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 1, # 'п'
9: 3, # 'р'
7: 2, # 'с'
6: 2, # 'т'
14: 2, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 1, # 'ч'
25: 2, # 'ш'
29: 0, # 'щ'
54: 1, # 'ъ'
18: 0, # 'ы'
17: 1, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
28: { # 'ц'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 1, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 1, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 2, # 'к'
8: 1, # 'л'
12: 1, # 'м'
5: 1, # 'н'
1: 3, # 'о'
15: 0, # 'п'
9: 1, # 'р'
7: 0, # 'с'
6: 1, # 'т'
14: 3, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 1, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 3, # 'ы'
17: 1, # 'ь'
30: 0, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
22: { # 'ч'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 2, # 'л'
12: 1, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 1, # 'с'
6: 3, # 'т'
14: 3, # 'у'
39: 1, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 1, # 'ч'
25: 2, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 3, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
25: { # 'ш'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 1, # 'б'
10: 2, # 'в'
19: 1, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 2, # 'м'
5: 3, # 'н'
1: 3, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 1, # 'с'
6: 2, # 'т'
14: 3, # 'у'
39: 2, # 'ф'
26: 1, # 'х'
28: 1, # 'ц'
22: 1, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 3, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 0, # 'я'
},
29: { # 'щ'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 3, # 'а'
21: 0, # 'б'
10: 1, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 3, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 3, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 1, # 'м'
5: 2, # 'н'
1: 1, # 'о'
15: 0, # 'п'
9: 2, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 2, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 2, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 0, # 'я'
},
54: { # 'ъ'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 0, # 'б'
10: 0, # 'в'
19: 0, # 'г'
13: 0, # 'д'
2: 2, # 'е'
24: 0, # 'ж'
20: 0, # 'з'
4: 0, # 'и'
23: 0, # 'й'
11: 0, # 'к'
8: 0, # 'л'
12: 0, # 'м'
5: 0, # 'н'
1: 0, # 'о'
15: 0, # 'п'
9: 0, # 'р'
7: 0, # 'с'
6: 0, # 'т'
14: 0, # 'у'
39: 0, # 'ф'
26: 0, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 0, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 1, # 'ю'
16: 2, # 'я'
},
18: { # 'ы'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 3, # 'б'
10: 3, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 2, # 'и'
23: 3, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 1, # 'о'
15: 3, # 'п'
9: 3, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 0, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 3, # 'ч'
25: 3, # 'ш'
29: 2, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 0, # 'ю'
16: 2, # 'я'
},
17: { # 'ь'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 2, # 'б'
10: 2, # 'в'
19: 2, # 'г'
13: 2, # 'д'
2: 3, # 'е'
24: 1, # 'ж'
20: 3, # 'з'
4: 2, # 'и'
23: 0, # 'й'
11: 3, # 'к'
8: 0, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 2, # 'о'
15: 2, # 'п'
9: 1, # 'р'
7: 3, # 'с'
6: 2, # 'т'
14: 0, # 'у'
39: 2, # 'ф'
26: 1, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 3, # 'ш'
29: 2, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 3, # 'ю'
16: 3, # 'я'
},
30: { # 'э'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 1, # 'М'
31: 1, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 1, # 'Р'
32: 1, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 1, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 1, # 'б'
10: 1, # 'в'
19: 1, # 'г'
13: 2, # 'д'
2: 1, # 'е'
24: 0, # 'ж'
20: 1, # 'з'
4: 0, # 'и'
23: 2, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 0, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 2, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 2, # 'ф'
26: 1, # 'х'
28: 0, # 'ц'
22: 0, # 'ч'
25: 1, # 'ш'
29: 0, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 1, # 'ю'
16: 1, # 'я'
},
27: { # 'ю'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 2, # 'а'
21: 3, # 'б'
10: 1, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 1, # 'е'
24: 2, # 'ж'
20: 2, # 'з'
4: 1, # 'и'
23: 1, # 'й'
11: 2, # 'к'
8: 2, # 'л'
12: 2, # 'м'
5: 2, # 'н'
1: 1, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 0, # 'у'
39: 1, # 'ф'
26: 2, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 2, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 1, # 'э'
27: 2, # 'ю'
16: 1, # 'я'
},
16: { # 'я'
37: 0, # 'А'
44: 0, # 'Б'
33: 0, # 'В'
46: 0, # 'Г'
41: 0, # 'Д'
48: 0, # 'Е'
56: 0, # 'Ж'
51: 0, # 'З'
42: 0, # 'И'
60: 0, # 'Й'
36: 0, # 'К'
49: 0, # 'Л'
38: 0, # 'М'
31: 0, # 'Н'
34: 0, # 'О'
35: 0, # 'П'
45: 0, # 'Р'
32: 0, # 'С'
40: 0, # 'Т'
52: 0, # 'У'
53: 0, # 'Ф'
55: 0, # 'Х'
58: 0, # 'Ц'
50: 0, # 'Ч'
57: 0, # 'Ш'
63: 0, # 'Щ'
62: 0, # 'Ы'
61: 0, # 'Ь'
47: 0, # 'Э'
59: 0, # 'Ю'
43: 0, # 'Я'
3: 0, # 'а'
21: 2, # 'б'
10: 3, # 'в'
19: 2, # 'г'
13: 3, # 'д'
2: 3, # 'е'
24: 3, # 'ж'
20: 3, # 'з'
4: 2, # 'и'
23: 2, # 'й'
11: 3, # 'к'
8: 3, # 'л'
12: 3, # 'м'
5: 3, # 'н'
1: 0, # 'о'
15: 2, # 'п'
9: 2, # 'р'
7: 3, # 'с'
6: 3, # 'т'
14: 1, # 'у'
39: 1, # 'ф'
26: 3, # 'х'
28: 2, # 'ц'
22: 2, # 'ч'
25: 2, # 'ш'
29: 3, # 'щ'
54: 0, # 'ъ'
18: 0, # 'ы'
17: 0, # 'ь'
30: 0, # 'э'
27: 2, # 'ю'
16: 2, # 'я'
},
}
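# Added note: the nested 0-3 values in RUSSIAN_LANG_MODEL above grade how plausible
# each ordered pair of letters is in Russian text (roughly 3 = very common,
# 2 = common, 1 = rare, 0 = essentially unseen in the training corpus); the prober
# sums these pair scores over a byte stream to judge how Russian-like it looks.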
# 255: Undefined characters that did not exist in training text
# 254: Carriage return / line feed
# 253: Symbols (punctuation) that do not belong to a word
# 252: Digits 0 - 9
# 251: Control characters
# Character Mapping Table(s):
IBM866_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 37, # 'А'
129: 44, # 'Б'
130: 33, # 'В'
131: 46, # 'Г'
132: 41, # 'Д'
133: 48, # 'Е'
134: 56, # 'Ж'
135: 51, # 'З'
136: 42, # 'И'
137: 60, # 'Й'
138: 36, # 'К'
139: 49, # 'Л'
140: 38, # 'М'
141: 31, # 'Н'
142: 34, # 'О'
143: 35, # 'П'
144: 45, # 'Р'
145: 32, # 'С'
146: 40, # 'Т'
147: 52, # 'У'
148: 53, # 'Ф'
149: 55, # 'Х'
150: 58, # 'Ц'
151: 50, # 'Ч'
152: 57, # 'Ш'
153: 63, # 'Щ'
154: 70, # 'Ъ'
155: 62, # 'Ы'
156: 61, # 'Ь'
157: 47, # 'Э'
158: 59, # 'Ю'
159: 43, # 'Я'
160: 3, # 'а'
161: 21, # 'б'
162: 10, # 'в'
163: 19, # 'г'
164: 13, # 'д'
165: 2, # 'е'
166: 24, # 'ж'
167: 20, # 'з'
168: 4, # 'и'
169: 23, # 'й'
170: 11, # 'к'
171: 8, # 'л'
172: 12, # 'м'
173: 5, # 'н'
174: 1, # 'о'
175: 15, # 'п'
176: 191, # '░'
177: 192, # '▒'
178: 193, # '▓'
179: 194, # '│'
180: 195, # '┤'
181: 196, # '╡'
182: 197, # '╢'
183: 198, # '╖'
184: 199, # '╕'
185: 200, # '╣'
186: 201, # '║'
187: 202, # '╗'
188: 203, # '╝'
189: 204, # '╜'
190: 205, # '╛'
191: 206, # '┐'
192: 207, # '└'
193: 208, # '┴'
194: 209, # '┬'
195: 210, # '├'
196: 211, # '─'
197: 212, # '┼'
198: 213, # '╞'
199: 214, # '╟'
200: 215, # '╚'
201: 216, # '╔'
202: 217, # '╩'
203: 218, # '╦'
204: 219, # '╠'
205: 220, # '═'
206: 221, # '╬'
207: 222, # '╧'
208: 223, # '╨'
209: 224, # '╤'
210: 225, # '╥'
211: 226, # '╙'
212: 227, # '╘'
213: 228, # '╒'
214: 229, # '╓'
215: 230, # '╫'
216: 231, # '╪'
217: 232, # '┘'
218: 233, # '┌'
219: 234, # '█'
220: 235, # '▄'
221: 236, # '▌'
222: 237, # '▐'
223: 238, # '▀'
224: 9, # 'р'
225: 7, # 'с'
226: 6, # 'т'
227: 14, # 'у'
228: 39, # 'ф'
229: 26, # 'х'
230: 28, # 'ц'
231: 22, # 'ч'
232: 25, # 'ш'
233: 29, # 'щ'
234: 54, # 'ъ'
235: 18, # 'ы'
236: 17, # 'ь'
237: 30, # 'э'
238: 27, # 'ю'
239: 16, # 'я'
240: 239, # 'Ё'
241: 68, # 'ё'
242: 240, # 'Є'
243: 241, # 'є'
244: 242, # 'Ї'
245: 243, # 'ї'
246: 244, # 'Ў'
247: 245, # 'ў'
248: 246, # '°'
249: 247, # '∙'
250: 248, # '·'
251: 249, # '√'
252: 250, # '№'
253: 251, # '¤'
254: 252, # '■'
255: 255, # '\xa0'
}
IBM866_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='IBM866',
language='Russian',
char_to_order_map=IBM866_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 191, # 'Ђ'
129: 192, # 'Ѓ'
130: 193, # '‚'
131: 194, # 'ѓ'
132: 195, # '„'
133: 196, # '…'
134: 197, # '†'
135: 198, # '‡'
136: 199, # '€'
137: 200, # '‰'
138: 201, # 'Љ'
139: 202, # '‹'
140: 203, # 'Њ'
141: 204, # 'Ќ'
142: 205, # 'Ћ'
143: 206, # 'Џ'
144: 207, # 'ђ'
145: 208, # '‘'
146: 209, # '’'
147: 210, # '“'
148: 211, # '”'
149: 212, # '•'
150: 213, # '–'
151: 214, # '—'
152: 215, # None
153: 216, # '™'
154: 217, # 'љ'
155: 218, # '›'
156: 219, # 'њ'
157: 220, # 'ќ'
158: 221, # 'ћ'
159: 222, # 'џ'
160: 223, # '\xa0'
161: 224, # 'Ў'
162: 225, # 'ў'
163: 226, # 'Ј'
164: 227, # '¤'
165: 228, # 'Ґ'
166: 229, # '¦'
167: 230, # '§'
168: 231, # 'Ё'
169: 232, # '©'
170: 233, # 'Є'
171: 234, # '«'
172: 235, # '¬'
173: 236, # '\xad'
174: 237, # '®'
175: 238, # 'Ї'
176: 239, # '°'
177: 240, # '±'
178: 241, # 'І'
179: 242, # 'і'
180: 243, # 'ґ'
181: 244, # 'µ'
182: 245, # '¶'
183: 246, # '·'
184: 68, # 'ё'
185: 247, # '№'
186: 248, # 'є'
187: 249, # '»'
188: 250, # 'ј'
189: 251, # 'Ѕ'
190: 252, # 'ѕ'
191: 253, # 'ї'
192: 37, # 'А'
193: 44, # 'Б'
194: 33, # 'В'
195: 46, # 'Г'
196: 41, # 'Д'
197: 48, # 'Е'
198: 56, # 'Ж'
199: 51, # 'З'
200: 42, # 'И'
201: 60, # 'Й'
202: 36, # 'К'
203: 49, # 'Л'
204: 38, # 'М'
205: 31, # 'Н'
206: 34, # 'О'
207: 35, # 'П'
208: 45, # 'Р'
209: 32, # 'С'
210: 40, # 'Т'
211: 52, # 'У'
212: 53, # 'Ф'
213: 55, # 'Х'
214: 58, # 'Ц'
215: 50, # 'Ч'
216: 57, # 'Ш'
217: 63, # 'Щ'
218: 70, # 'Ъ'
219: 62, # 'Ы'
220: 61, # 'Ь'
221: 47, # 'Э'
222: 59, # 'Ю'
223: 43, # 'Я'
224: 3, # 'а'
225: 21, # 'б'
226: 10, # 'в'
227: 19, # 'г'
228: 13, # 'д'
229: 2, # 'е'
230: 24, # 'ж'
231: 20, # 'з'
232: 4, # 'и'
233: 23, # 'й'
234: 11, # 'к'
235: 8, # 'л'
236: 12, # 'м'
237: 5, # 'н'
238: 1, # 'о'
239: 15, # 'п'
240: 9, # 'р'
241: 7, # 'с'
242: 6, # 'т'
243: 14, # 'у'
244: 39, # 'ф'
245: 26, # 'х'
246: 28, # 'ц'
247: 22, # 'ч'
248: 25, # 'ш'
249: 29, # 'щ'
250: 54, # 'ъ'
251: 18, # 'ы'
252: 17, # 'ь'
253: 30, # 'э'
254: 27, # 'ю'
255: 16, # 'я'
}
WINDOWS_1251_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='windows-1251',
language='Russian',
char_to_order_map=WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
IBM855_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 191, # 'ђ'
129: 192, # 'Ђ'
130: 193, # 'ѓ'
131: 194, # 'Ѓ'
132: 68, # 'ё'
133: 195, # 'Ё'
134: 196, # 'є'
135: 197, # 'Є'
136: 198, # 'ѕ'
137: 199, # 'Ѕ'
138: 200, # 'і'
139: 201, # 'І'
140: 202, # 'ї'
141: 203, # 'Ї'
142: 204, # 'ј'
143: 205, # 'Ј'
144: 206, # 'љ'
145: 207, # 'Љ'
146: 208, # 'њ'
147: 209, # 'Њ'
148: 210, # 'ћ'
149: 211, # 'Ћ'
150: 212, # 'ќ'
151: 213, # 'Ќ'
152: 214, # 'ў'
153: 215, # 'Ў'
154: 216, # 'џ'
155: 217, # 'Џ'
156: 27, # 'ю'
157: 59, # 'Ю'
158: 54, # 'ъ'
159: 70, # 'Ъ'
160: 3, # 'а'
161: 37, # 'А'
162: 21, # 'б'
163: 44, # 'Б'
164: 28, # 'ц'
165: 58, # 'Ц'
166: 13, # 'д'
167: 41, # 'Д'
168: 2, # 'е'
169: 48, # 'Е'
170: 39, # 'ф'
171: 53, # 'Ф'
172: 19, # 'г'
173: 46, # 'Г'
174: 218, # '«'
175: 219, # '»'
176: 220, # '░'
177: 221, # '▒'
178: 222, # '▓'
179: 223, # '│'
180: 224, # '┤'
181: 26, # 'х'
182: 55, # 'Х'
183: 4, # 'и'
184: 42, # 'И'
185: 225, # '╣'
186: 226, # '║'
187: 227, # '╗'
188: 228, # '╝'
189: 23, # 'й'
190: 60, # 'Й'
191: 229, # '┐'
192: 230, # '└'
193: 231, # '┴'
194: 232, # '┬'
195: 233, # '├'
196: 234, # '─'
197: 235, # '┼'
198: 11, # 'к'
199: 36, # 'К'
200: 236, # '╚'
201: 237, # '╔'
202: 238, # '╩'
203: 239, # '╦'
204: 240, # '╠'
205: 241, # '═'
206: 242, # '╬'
207: 243, # '¤'
208: 8, # 'л'
209: 49, # 'Л'
210: 12, # 'м'
211: 38, # 'М'
212: 5, # 'н'
213: 31, # 'Н'
214: 1, # 'о'
215: 34, # 'О'
216: 15, # 'п'
217: 244, # '┘'
218: 245, # '┌'
219: 246, # '█'
220: 247, # '▄'
221: 35, # 'П'
222: 16, # 'я'
223: 248, # '▀'
224: 43, # 'Я'
225: 9, # 'р'
226: 45, # 'Р'
227: 7, # 'с'
228: 32, # 'С'
229: 6, # 'т'
230: 40, # 'Т'
231: 14, # 'у'
232: 52, # 'У'
233: 24, # 'ж'
234: 56, # 'Ж'
235: 10, # 'в'
236: 33, # 'В'
237: 17, # 'ь'
238: 61, # 'Ь'
239: 249, # '№'
240: 250, # '\xad'
241: 18, # 'ы'
242: 62, # 'Ы'
243: 20, # 'з'
244: 51, # 'З'
245: 25, # 'ш'
246: 57, # 'Ш'
247: 30, # 'э'
248: 47, # 'Э'
249: 29, # 'щ'
250: 63, # 'Щ'
251: 22, # 'ч'
252: 50, # 'Ч'
253: 251, # '§'
254: 252, # '■'
255: 255, # '\xa0'
}
IBM855_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='IBM855',
language='Russian',
char_to_order_map=IBM855_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
KOI8_R_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 191, # '─'
129: 192, # '│'
130: 193, # '┌'
131: 194, # '┐'
132: 195, # '└'
133: 196, # '┘'
134: 197, # '├'
135: 198, # '┤'
136: 199, # '┬'
137: 200, # '┴'
138: 201, # '┼'
139: 202, # '▀'
140: 203, # '▄'
141: 204, # '█'
142: 205, # '▌'
143: 206, # '▐'
144: 207, # '░'
145: 208, # '▒'
146: 209, # '▓'
147: 210, # '⌠'
148: 211, # '■'
149: 212, # '∙'
150: 213, # '√'
151: 214, # '≈'
152: 215, # '≤'
153: 216, # '≥'
154: 217, # '\xa0'
155: 218, # '⌡'
156: 219, # '°'
157: 220, # '²'
158: 221, # '·'
159: 222, # '÷'
160: 223, # '═'
161: 224, # '║'
162: 225, # '╒'
163: 68, # 'ё'
164: 226, # '╓'
165: 227, # '╔'
166: 228, # '╕'
167: 229, # '╖'
168: 230, # '╗'
169: 231, # '╘'
170: 232, # '╙'
171: 233, # '╚'
172: 234, # '╛'
173: 235, # '╜'
174: 236, # '╝'
175: 237, # '╞'
176: 238, # '╟'
177: 239, # '╠'
178: 240, # '╡'
179: 241, # 'Ё'
180: 242, # '╢'
181: 243, # '╣'
182: 244, # '╤'
183: 245, # '╥'
184: 246, # '╦'
185: 247, # '╧'
186: 248, # '╨'
187: 249, # '╩'
188: 250, # '╪'
189: 251, # '╫'
190: 252, # '╬'
191: 253, # '©'
192: 27, # 'ю'
193: 3, # 'а'
194: 21, # 'б'
195: 28, # 'ц'
196: 13, # 'д'
197: 2, # 'е'
198: 39, # 'ф'
199: 19, # 'г'
200: 26, # 'х'
201: 4, # 'и'
202: 23, # 'й'
203: 11, # 'к'
204: 8, # 'л'
205: 12, # 'м'
206: 5, # 'н'
207: 1, # 'о'
208: 15, # 'п'
209: 16, # 'я'
210: 9, # 'р'
211: 7, # 'с'
212: 6, # 'т'
213: 14, # 'у'
214: 24, # 'ж'
215: 10, # 'в'
216: 17, # 'ь'
217: 18, # 'ы'
218: 20, # 'з'
219: 25, # 'ш'
220: 30, # 'э'
221: 29, # 'щ'
222: 22, # 'ч'
223: 54, # 'ъ'
224: 59, # 'Ю'
225: 37, # 'А'
226: 44, # 'Б'
227: 58, # 'Ц'
228: 41, # 'Д'
229: 48, # 'Е'
230: 53, # 'Ф'
231: 46, # 'Г'
232: 55, # 'Х'
233: 42, # 'И'
234: 60, # 'Й'
235: 36, # 'К'
236: 49, # 'Л'
237: 38, # 'М'
238: 31, # 'Н'
239: 34, # 'О'
240: 35, # 'П'
241: 43, # 'Я'
242: 45, # 'Р'
243: 32, # 'С'
244: 40, # 'Т'
245: 52, # 'У'
246: 56, # 'Ж'
247: 33, # 'В'
248: 61, # 'Ь'
249: 62, # 'Ы'
250: 51, # 'З'
251: 57, # 'Ш'
252: 47, # 'Э'
253: 63, # 'Щ'
254: 50, # 'Ч'
255: 70, # 'Ъ'
}
KOI8_R_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='KOI8-R',
language='Russian',
char_to_order_map=KOI8_R_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 37, # 'А'
129: 44, # 'Б'
130: 33, # 'В'
131: 46, # 'Г'
132: 41, # 'Д'
133: 48, # 'Е'
134: 56, # 'Ж'
135: 51, # 'З'
136: 42, # 'И'
137: 60, # 'Й'
138: 36, # 'К'
139: 49, # 'Л'
140: 38, # 'М'
141: 31, # 'Н'
142: 34, # 'О'
143: 35, # 'П'
144: 45, # 'Р'
145: 32, # 'С'
146: 40, # 'Т'
147: 52, # 'У'
148: 53, # 'Ф'
149: 55, # 'Х'
150: 58, # 'Ц'
151: 50, # 'Ч'
152: 57, # 'Ш'
153: 63, # 'Щ'
154: 70, # 'Ъ'
155: 62, # 'Ы'
156: 61, # 'Ь'
157: 47, # 'Э'
158: 59, # 'Ю'
159: 43, # 'Я'
160: 191, # '†'
161: 192, # '°'
162: 193, # 'Ґ'
163: 194, # '£'
164: 195, # '§'
165: 196, # '•'
166: 197, # '¶'
167: 198, # 'І'
168: 199, # '®'
169: 200, # '©'
170: 201, # '™'
171: 202, # 'Ђ'
172: 203, # 'ђ'
173: 204, # '≠'
174: 205, # 'Ѓ'
175: 206, # 'ѓ'
176: 207, # '∞'
177: 208, # '±'
178: 209, # '≤'
179: 210, # '≥'
180: 211, # 'і'
181: 212, # 'µ'
182: 213, # 'ґ'
183: 214, # 'Ј'
184: 215, # 'Є'
185: 216, # 'є'
186: 217, # 'Ї'
187: 218, # 'ї'
188: 219, # 'Љ'
189: 220, # 'љ'
190: 221, # 'Њ'
191: 222, # 'њ'
192: 223, # 'ј'
193: 224, # 'Ѕ'
194: 225, # '¬'
195: 226, # '√'
196: 227, # 'ƒ'
197: 228, # '≈'
198: 229, # '∆'
199: 230, # '«'
200: 231, # '»'
201: 232, # '…'
202: 233, # '\xa0'
203: 234, # 'Ћ'
204: 235, # 'ћ'
205: 236, # 'Ќ'
206: 237, # 'ќ'
207: 238, # 'ѕ'
208: 239, # '–'
209: 240, # '—'
210: 241, # '“'
211: 242, # '”'
212: 243, # '‘'
213: 244, # '’'
214: 245, # '÷'
215: 246, # '„'
216: 247, # 'Ў'
217: 248, # 'ў'
218: 249, # 'Џ'
219: 250, # 'џ'
220: 251, # '№'
221: 252, # 'Ё'
222: 68, # 'ё'
223: 16, # 'я'
224: 3, # 'а'
225: 21, # 'б'
226: 10, # 'в'
227: 19, # 'г'
228: 13, # 'д'
229: 2, # 'е'
230: 24, # 'ж'
231: 20, # 'з'
232: 4, # 'и'
233: 23, # 'й'
234: 11, # 'к'
235: 8, # 'л'
236: 12, # 'м'
237: 5, # 'н'
238: 1, # 'о'
239: 15, # 'п'
240: 9, # 'р'
241: 7, # 'с'
242: 6, # 'т'
243: 14, # 'у'
244: 39, # 'ф'
245: 26, # 'х'
246: 28, # 'ц'
247: 22, # 'ч'
248: 25, # 'ш'
249: 29, # 'щ'
250: 54, # 'ъ'
251: 18, # 'ы'
252: 17, # 'ь'
253: 30, # 'э'
254: 27, # 'ю'
255: 255, # '€'
}
MACCYRILLIC_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='MacCyrillic',
language='Russian',
char_to_order_map=MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
ISO_8859_5_RUSSIAN_CHAR_TO_ORDER = {
0: 255, # '\x00'
1: 255, # '\x01'
2: 255, # '\x02'
3: 255, # '\x03'
4: 255, # '\x04'
5: 255, # '\x05'
6: 255, # '\x06'
7: 255, # '\x07'
8: 255, # '\x08'
9: 255, # '\t'
10: 254, # '\n'
11: 255, # '\x0b'
12: 255, # '\x0c'
13: 254, # '\r'
14: 255, # '\x0e'
15: 255, # '\x0f'
16: 255, # '\x10'
17: 255, # '\x11'
18: 255, # '\x12'
19: 255, # '\x13'
20: 255, # '\x14'
21: 255, # '\x15'
22: 255, # '\x16'
23: 255, # '\x17'
24: 255, # '\x18'
25: 255, # '\x19'
26: 255, # '\x1a'
27: 255, # '\x1b'
28: 255, # '\x1c'
29: 255, # '\x1d'
30: 255, # '\x1e'
31: 255, # '\x1f'
32: 253, # ' '
33: 253, # '!'
34: 253, # '"'
35: 253, # '#'
36: 253, # '$'
37: 253, # '%'
38: 253, # '&'
39: 253, # "'"
40: 253, # '('
41: 253, # ')'
42: 253, # '*'
43: 253, # '+'
44: 253, # ','
45: 253, # '-'
46: 253, # '.'
47: 253, # '/'
48: 252, # '0'
49: 252, # '1'
50: 252, # '2'
51: 252, # '3'
52: 252, # '4'
53: 252, # '5'
54: 252, # '6'
55: 252, # '7'
56: 252, # '8'
57: 252, # '9'
58: 253, # ':'
59: 253, # ';'
60: 253, # '<'
61: 253, # '='
62: 253, # '>'
63: 253, # '?'
64: 253, # '@'
65: 142, # 'A'
66: 143, # 'B'
67: 144, # 'C'
68: 145, # 'D'
69: 146, # 'E'
70: 147, # 'F'
71: 148, # 'G'
72: 149, # 'H'
73: 150, # 'I'
74: 151, # 'J'
75: 152, # 'K'
76: 74, # 'L'
77: 153, # 'M'
78: 75, # 'N'
79: 154, # 'O'
80: 155, # 'P'
81: 156, # 'Q'
82: 157, # 'R'
83: 158, # 'S'
84: 159, # 'T'
85: 160, # 'U'
86: 161, # 'V'
87: 162, # 'W'
88: 163, # 'X'
89: 164, # 'Y'
90: 165, # 'Z'
91: 253, # '['
92: 253, # '\\'
93: 253, # ']'
94: 253, # '^'
95: 253, # '_'
96: 253, # '`'
97: 71, # 'a'
98: 172, # 'b'
99: 66, # 'c'
100: 173, # 'd'
101: 65, # 'e'
102: 174, # 'f'
103: 76, # 'g'
104: 175, # 'h'
105: 64, # 'i'
106: 176, # 'j'
107: 177, # 'k'
108: 77, # 'l'
109: 72, # 'm'
110: 178, # 'n'
111: 69, # 'o'
112: 67, # 'p'
113: 179, # 'q'
114: 78, # 'r'
115: 73, # 's'
116: 180, # 't'
117: 181, # 'u'
118: 79, # 'v'
119: 182, # 'w'
120: 183, # 'x'
121: 184, # 'y'
122: 185, # 'z'
123: 253, # '{'
124: 253, # '|'
125: 253, # '}'
126: 253, # '~'
127: 253, # '\x7f'
128: 191, # '\x80'
129: 192, # '\x81'
130: 193, # '\x82'
131: 194, # '\x83'
132: 195, # '\x84'
133: 196, # '\x85'
134: 197, # '\x86'
135: 198, # '\x87'
136: 199, # '\x88'
137: 200, # '\x89'
138: 201, # '\x8a'
139: 202, # '\x8b'
140: 203, # '\x8c'
141: 204, # '\x8d'
142: 205, # '\x8e'
143: 206, # '\x8f'
144: 207, # '\x90'
145: 208, # '\x91'
146: 209, # '\x92'
147: 210, # '\x93'
148: 211, # '\x94'
149: 212, # '\x95'
150: 213, # '\x96'
151: 214, # '\x97'
152: 215, # '\x98'
153: 216, # '\x99'
154: 217, # '\x9a'
155: 218, # '\x9b'
156: 219, # '\x9c'
157: 220, # '\x9d'
158: 221, # '\x9e'
159: 222, # '\x9f'
160: 223, # '\xa0'
161: 224, # 'Ё'
162: 225, # 'Ђ'
163: 226, # 'Ѓ'
164: 227, # 'Є'
165: 228, # 'Ѕ'
166: 229, # 'І'
167: 230, # 'Ї'
168: 231, # 'Ј'
169: 232, # 'Љ'
170: 233, # 'Њ'
171: 234, # 'Ћ'
172: 235, # 'Ќ'
173: 236, # '\xad'
174: 237, # 'Ў'
175: 238, # 'Џ'
176: 37, # 'А'
177: 44, # 'Б'
178: 33, # 'В'
179: 46, # 'Г'
180: 41, # 'Д'
181: 48, # 'Е'
182: 56, # 'Ж'
183: 51, # 'З'
184: 42, # 'И'
185: 60, # 'Й'
186: 36, # 'К'
187: 49, # 'Л'
188: 38, # 'М'
189: 31, # 'Н'
190: 34, # 'О'
191: 35, # 'П'
192: 45, # 'Р'
193: 32, # 'С'
194: 40, # 'Т'
195: 52, # 'У'
196: 53, # 'Ф'
197: 55, # 'Х'
198: 58, # 'Ц'
199: 50, # 'Ч'
200: 57, # 'Ш'
201: 63, # 'Щ'
202: 70, # 'Ъ'
203: 62, # 'Ы'
204: 61, # 'Ь'
205: 47, # 'Э'
206: 59, # 'Ю'
207: 43, # 'Я'
208: 3, # 'а'
209: 21, # 'б'
210: 10, # 'в'
211: 19, # 'г'
212: 13, # 'д'
213: 2, # 'е'
214: 24, # 'ж'
215: 20, # 'з'
216: 4, # 'и'
217: 23, # 'й'
218: 11, # 'к'
219: 8, # 'л'
220: 12, # 'м'
221: 5, # 'н'
222: 1, # 'о'
223: 15, # 'п'
224: 9, # 'р'
225: 7, # 'с'
226: 6, # 'т'
227: 14, # 'у'
228: 39, # 'ф'
229: 26, # 'х'
230: 28, # 'ц'
231: 22, # 'ч'
232: 25, # 'ш'
233: 29, # 'щ'
234: 54, # 'ъ'
235: 18, # 'ы'
236: 17, # 'ь'
237: 30, # 'э'
238: 27, # 'ю'
239: 16, # 'я'
240: 239, # '№'
241: 68, # 'ё'
242: 240, # 'ђ'
243: 241, # 'ѓ'
244: 242, # 'є'
245: 243, # 'ѕ'
246: 244, # 'і'
247: 245, # 'ї'
248: 246, # 'ј'
249: 247, # 'љ'
250: 248, # 'њ'
251: 249, # 'ћ'
252: 250, # 'ќ'
253: 251, # '§'
254: 252, # 'ў'
255: 255, # 'џ'
}
ISO_8859_5_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-5',
language='Russian',
char_to_order_map=ISO_8859_5_RUSSIAN_CHAR_TO_ORDER,
language_model=RUSSIAN_LANG_MODEL,
typical_positive_ratio=0.976601,
keep_ascii_letters=False,
alphabet='ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё')
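# --- Usage sketch (added for illustration; not part of the generated tables) ---
# Each *_CHAR_TO_ORDER map converts a raw byte of its encoding into a frequency
# "order" (a low order means a frequent Cyrillic letter, while 251-255 flag control
# characters, digits, punctuation and unseen bytes, as listed above the first
# table). RUSSIAN_LANG_MODEL is then indexed by two consecutive orders. The probe
# below only illustrates that flow and assumes Python 3 byte iteration.
if __name__ == "__main__":
    sample = "привет".encode("windows-1251")
    orders = [WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER[byte] for byte in sample]
    pair_scores = [
        RUSSIAN_LANG_MODEL.get(first, {}).get(second, 0)
        for first, second in zip(orders, orders[1:])
    ]
    print(orders, pair_scores)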
| 22.063941 | 245 | 0.218728 | 16,814 | 126,294 | 1.64702 | 0.03503 | 0.007511 | 0.008378 | 0.010472 | 0.826599 | 0.810963 | 0.802369 | 0.801141 | 0.796338 | 0.796338 | 0 | 0.354898 | 0.552782 | 126,294 | 5,723 | 246 | 22.067797 | 0.131602 | 0.187634 | 0 | 0.899086 | 0 | 0 | 0.005041 | 0.004082 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000176 | 0 | 0.000176 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
b9666f2924267ce1f99b7840142baaeffc9a74c5 | 2,490 | py | Python | PythonScripts/models.py | DelbertWang2/TemporalGraphSR | 5ff7bdef523ae17be85e30211568492de5e2bb29 | [
"MIT"
] | null | null | null | PythonScripts/models.py | DelbertWang2/TemporalGraphSR | 5ff7bdef523ae17be85e30211568492de5e2bb29 | [
"MIT"
] | null | null | null | PythonScripts/models.py | DelbertWang2/TemporalGraphSR | 5ff7bdef523ae17be85e30211568492de5e2bb29 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from layers import GraphConvolution
class PF_GCN(nn.Module):
def __init__(self, DAD_matrix):
super().__init__()
node_size = DAD_matrix.shape[0]
self.node_size = node_size
self.GC1 = GraphConvolution(in_features=64, out_features=128, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC2 = GraphConvolution(in_features=128, out_features=256, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC3 = GraphConvolution(in_features=256, out_features=512, node_size=node_size, DAD_matrix=DAD_matrix)
self.norm4 = nn.BatchNorm1d(33)
self.GC4 = GraphConvolution(in_features=512, out_features=256, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC5 = GraphConvolution(in_features=256, out_features=128, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC6 = GraphConvolution(in_features=128, out_features=64, node_size=node_size, DAD_matrix=DAD_matrix)
def forward(self, x):
x = F.relu(self.GC1(x))
x = F.relu(self.GC2(x))
x = F.relu(self.GC3(x))
x = self.norm4(x)
x = F.relu(self.GC4(x))
x = F.relu(self.GC5(x))
x = torch.sigmoid(self.GC6(x))
return x
class VM_GCN(nn.Module):
def __init__(self, DAD_matrix):
super().__init__()
node_size = DAD_matrix.shape[0]
self.node_size = node_size
self.GC1 = GraphConvolution(in_features=64, out_features=128, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC2 = GraphConvolution(in_features=128, out_features=256, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC3 = GraphConvolution(in_features=256, out_features=512, node_size=node_size, DAD_matrix=DAD_matrix)
self.norm4 = nn.BatchNorm1d(33).cuda()
self.GC4 = GraphConvolution(in_features=512, out_features=256, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC5 = GraphConvolution(in_features=256, out_features=128, node_size=node_size, DAD_matrix=DAD_matrix)
self.GC6 = GraphConvolution(in_features=128, out_features=64, node_size=node_size, DAD_matrix=DAD_matrix)
def forward(self, x):
x = F.relu(self.GC1(x))
x = F.relu(self.GC2(x))
x = F.relu(self.GC3(x))
x = self.norm4(x)
x = F.relu(self.GC4(x))
x = F.relu(self.GC5(x))
x = torch.sigmoid(self.GC6(x))
return x
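# --- Usage sketch (added for illustration) ---
# PF_GCN and VM_GCN are identical symmetric GCN stacks (64 -> 128 -> 256 -> 512 ->
# 256 -> 128 -> 64 features per node); the only difference is that VM_GCN pins its
# BatchNorm layer to the GPU. The hard-coded BatchNorm1d(33) suggests graphs with
# 33 nodes and an input layout of (batch, nodes, features); both points are
# assumptions here, since GraphConvolution itself lives in layers.py.
if __name__ == "__main__":
    num_nodes = 33
    adjacency = torch.eye(num_nodes)  # stand-in for the real normalized D^-1/2 A D^-1/2
    model = PF_GCN(DAD_matrix=adjacency)
    features = torch.randn(8, num_nodes, 64)  # assumed (batch, nodes, in_features) layout
    print(model(features).shape)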
| 45.272727 | 115 | 0.670683 | 370 | 2,490 | 4.243243 | 0.124324 | 0.152866 | 0.098089 | 0.151592 | 0.933758 | 0.933758 | 0.933758 | 0.933758 | 0.933758 | 0.933758 | 0 | 0.05317 | 0.214458 | 2,490 | 54 | 116 | 46.111111 | 0.749489 | 0 | 0 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b97743460099ea0301a1600a28f95c745ed457ef | 6,605 | py | Python | tests/unittests/test_stat_nova.py | buddwm/hubble | b384ee48556ca144ae6f09dd0b45db29288e5293 | [
"Apache-2.0"
] | 363 | 2017-01-10T22:02:47.000Z | 2022-03-21T10:44:40.000Z | tests/unittests/test_stat_nova.py | buddwm/hubble | b384ee48556ca144ae6f09dd0b45db29288e5293 | [
"Apache-2.0"
] | 439 | 2017-01-12T22:39:42.000Z | 2021-10-11T18:43:28.000Z | tests/unittests/test_stat_nova.py | buddwm/hubble | b384ee48556ca144ae6f09dd0b45db29288e5293 | [
"Apache-2.0"
] | 138 | 2017-01-05T22:10:59.000Z | 2021-09-01T14:35:00.000Z | import os
import hubblestack.files.hubblestack_nova.stat_nova
class TestStatNova():
def test_virtual(self):
expected_val = True
val = hubblestack.files.hubblestack_nova.stat_nova.__virtual__()
assert expected_val == val
def test_merge_yaml(self):
ret = {}
data = {
'stat': {'passwd_owner_group': {
'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'data': {'Ubuntu-16.04': [{'/etc/passwd': {'gid': 0,
'tag': 'CIS-12.4',
'group': 'root',
'uid': 0,
'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}}
profile = 'ubuntu-1604-level-1-scored-v1-0-0'
val = hubblestack.files.hubblestack_nova.stat_nova._merge_yaml(ret, data, profile)
assert val['stat'] == [{'passwd_owner_group': {
'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'data': {'Ubuntu-16.04': [{'/etc/passwd': {'group': 'root',
'gid': 0,
'tag': 'CIS-12.4',
'uid': 0,
'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}]
    def test_merge_yaml_recursive(self):
ret = {}
profile = 'ubuntu-1604-level-1-scored-v1-0-0'
data1 = {'stat': {'passwd_owner_group1': {'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'data': {'Ubuntu-16.04': [{'/etc/passwd': {'gid': 0, 'tag': 'CIS-12.4', 'group': 'root', 'uid': 0, 'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}}
data2 = {'stat': {'passwd_owner_group2': {'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'data': {'Ubuntu-16.04': [{'/etc/passwd': {'gid': 0, 'tag': 'CIS-12.4', 'group': 'root', 'uid': 0, 'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}}
data_list = [data1, data2]
for data in data_list:
val = hubblestack.files.hubblestack_nova.stat_nova._merge_yaml(ret, data, profile)
assert (len(val['stat'])) == 2
def test_get_tags(self):
data = {'stat': [{'passwd_owner_group': {'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'data': {'Ubuntu-16.04': [{'/etc/passwd': {'gid': 0, 'tag': 'CIS-12.4', 'group': 'root', 'uid': 0, 'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}]}
hubblestack.files.hubblestack_nova.stat_nova.__grains__ = {'osfinger': 'Ubuntu-16.04'}
ret = hubblestack.files.hubblestack_nova.stat_nova._get_tags(data)
assert ret['CIS-12.4'] == [{'nova_profile': 'ubuntu-1604-level-1-scored-v1-0-0',
'tag': 'CIS-12.4', 'group': 'root', 'name': '/etc/passwd', 'uid': 0, 'gid': 0,
'description': 'Verify User/Group Ownership on /etc/passwd', 'module': 'stat', 'user': 'root'}]
def test_get_tags_for_empty_data(self):
data = {'stat': []}
hubblestack.files.hubblestack_nova.stat_nova.__grains__ = {'osfinger': 'Ubuntu-16.04'}
ret = hubblestack.files.hubblestack_nova.stat_nova._get_tags(data)
assert ret == {}
def test_audit_for_success(self):
val = {}
data_list = [('ubuntu-1604-level-1-scored-v1-0-0', {'stat':
{'passwd_owner_group': {'data': {'Ubuntu-16.04': [{'/etc/passwd': {'gid': 0, 'tag': 'CIS-12.4', 'group': 'root', 'uid': 0, 'user': 'root'}}]},
'description': 'Verify User/Group Ownership on /etc/passwd'}}})]
__tags__ = 'CIS-12.4'
__mods__ = {}
def file_stats(name):
return {'size': 26, 'group': 'root', 'uid': 0, 'type': 'file', 'mode': '0644', 'gid': 0, 'target': '/etc/issue', 'user': 'root', 'mtime': 1486511757.0, 'atime': 1507221810.408013, 'inode': 1322, 'ctime': 1491870657.914388}
__mods__['file.stats'] = file_stats
hubblestack.files.hubblestack_nova.stat_nova.__mods__ = __mods__
hubblestack.files.hubblestack_nova.stat_nova.__grains__ = {'osfinger': 'Ubuntu-16.04'}
val = hubblestack.files.hubblestack_nova.stat_nova.audit(data_list, __tags__, [], debug=False)
assert len(val['Success']) != 0
def test_audit_for_incorrect_input(self):
val = {}
data_list = []
__tags__ = ''
__mods__ = {}
expected_val = {'Failure': [], 'Controlled': [], 'Success': []}
def file_stats(name):
return {'size': 26, 'group': 'root', 'uid': 0, 'type': 'file', 'mode': '0644', 'gid': 0, 'target': '/etc/issue', 'user': 'root', 'mtime': 1486511757.0, 'atime': 1507221810.408013, 'inode': 1322, 'ctime': 1491870657.914388}
__mods__['file.stats'] = file_stats
hubblestack.files.hubblestack_nova.stat_nova.__mods__ = __mods__
hubblestack.files.hubblestack_nova.stat_nova.__grains__ = {'osfinger': 'Ubuntu-16.04'}
val = hubblestack.files.hubblestack_nova.stat_nova.audit(data_list, __tags__, [], debug=False)
assert val == expected_val
def test_audit_for_value_error(self):
val = {}
data_list = 'wrong_test_data'
__tags__ = 'CIS-12.4'
__mods__ = {}
def file_stats(name):
return {'size': 26, 'group': 'root', 'uid': 0, 'type': 'file', 'mode': '0644', 'gid': 0, 'target': '/etc/issue', 'user': 'root', 'mtime': 1486511757.0, 'atime': 1507221810.408013, 'inode': 1322, 'ctime': 1491870657.914388}
__mods__['file.stats'] = file_stats
hubblestack.files.hubblestack_nova.stat_nova.__mods__ = __mods__
hubblestack.files.hubblestack_nova.stat_nova.__grains__ = {'osfinger': 'Ubuntu-16.04'}
try:
val = hubblestack.files.hubblestack_nova.stat_nova.audit(data_list, __tags__, [], debug=False)
except ValueError:
pass
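# Added note: these tests drive stat_nova directly instead of going through the
# Hubble/Salt loader, so the dunders the loader would normally inject (__grains__
# and __mods__) are patched onto the module by hand before each call to audit().
# The final test only checks that a malformed data_list raises nothing worse than
# ValueError; it passes whether or not the exception actually occurs.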
| 59.504505 | 234 | 0.518849 | 714 | 6,605 | 4.519608 | 0.138655 | 0.084289 | 0.142237 | 0.16331 | 0.812209 | 0.812209 | 0.796095 | 0.777502 | 0.755191 | 0.734738 | 0 | 0.073316 | 0.312339 | 6,605 | 110 | 235 | 60.045455 | 0.637164 | 0 | 0 | 0.520833 | 0 | 0 | 0.264042 | 0.044966 | 0 | 0 | 0 | 0 | 0.072917 | 1 | 0.114583 | false | 0.208333 | 0.020833 | 0.03125 | 0.177083 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
b983c92d36b5db9afe0b645a97b68e2f03027e9a | 20,236 | py | Python | src/py42/sdk/queries/query_filter.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | 1 | 2020-08-18T22:00:22.000Z | 2020-08-18T22:00:22.000Z | src/py42/sdk/queries/query_filter.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | null | null | null | src/py42/sdk/queries/query_filter.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | 1 | 2021-05-10T23:33:34.000Z | 2021-05-10T23:33:34.000Z | from collections import OrderedDict
from datetime import datetime
from py42._internal.compat import str
from py42._internal.compat import string_type
from py42.util import convert_datetime_to_timestamp_str
from py42.util import convert_timestamp_to_str
def create_query_filter(term, operator, value=None):
"""Creates a :class:`~py42.sdk.queries.query_filter.QueryFilter` object. Useful for
programmatically crafting query filters, such as filters not yet defined in py42.
Args:
term (str): The term of the filter, such as ``actor`` or ``sharedWith``.
        operator (str): The operator between ``term`` and ``value``, such as ``IS`` or ``IS_NOT``.
value (str): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.QueryFilter`
"""
return QueryFilter(term, operator, value)
def create_filter_group(query_filter_list, filter_clause):
"""Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` object. Useful for
programmatically crafting query filters, such as filters not yet defined in py42.
Alternatively, if you want to create custom filter groups with already defined
operators (such as `IS` or `IS_IN`), see the other methods in this module, such as
:meth:`~py42.sdk.queries.query_filter.create_eq_filter_group()`.
Args:
query_filter_list (list): a list of :class:`~py42.sdk.queries.query_filter.QueryFilter`
objects.
filter_clause (str): The clause joining the filters, such as ``AND`` or ``OR``.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return FilterGroup(query_filter_list, filter_clause)
def create_eq_filter_group(term, value):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` equals the given value. Useful for creating ``IS``
filters that are not yet supported in py42 or programmatically crafting filter groups.
Args:
term: (str): The term of the filter, such as ``actor`` or ``sharedWith``.
value (str): The value used to match on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"IS", value)]
return create_filter_group(filter_list, u"AND")
def create_not_eq_filter_group(term, value):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` does not equal the given value. Useful for creating
``IS_NOT`` filters that are not yet supported in py42 or programmatically crafting filter
groups.
Args:
term: (str): The term of the filter, such as ``actor`` or ``sharedWith``.
value (str): The value used to exclude on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"IS_NOT", value)]
return create_filter_group(filter_list, u"AND")
def create_is_in_filter_group(term, value_list):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` is one of several values. Useful for creating ``IS_IN``
filters that are not yet supported in py42 or programmatically crafting filter groups.
Args:
term: (str): The term of the filter, such as ``actor`` or ``sharedWith``.
value_list (list): The list of values to match on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"IS", value) for value in value_list]
return create_filter_group(filter_list, u"OR" if len(filter_list) > 1 else u"AND")
def create_not_in_filter_group(term, value_list):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` is not one of several values. Useful for creating
``NOT_IN`` filters that are not yet supported in py42 or programmatically crafting
filter groups.
Args:
term: (str): The term of the filter, such as ``actor`` or ``sharedWith``.
value_list (list): The list of values to exclude on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"IS_NOT", value) for value in value_list]
return create_filter_group(filter_list, u"AND")
def create_on_or_after_filter_group(term, value):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` is on or after the given value. Examples include
values describing dates. Useful for creating ``ON_OR_AFTER`` filters that are not yet
supported in py42 or programmatically crafting filter groups.
Args:
term: (str): The term of the filter, such as ``eventTimestamp``.
value (str or int): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"ON_OR_AFTER", value)]
return create_filter_group(filter_list, u"AND")
def create_on_or_before_filter_group(term, value):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` is on or before the given value. Examples include
values describing dates. Useful for creating ``ON_OR_BEFORE`` filters that are not
yet supported in py42 or programmatically crafting filter groups.
Args:
term: (str): The term of the filter, such as ``eventTimestamp``.
value (str or int): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"ON_OR_BEFORE", value)]
return create_filter_group(filter_list, u"AND")
def create_in_range_filter_group(term, start_value, end_value):
    """Creates a :class:`~py42.sdk.queries.query_filter.FilterGroup` for filtering results
where the value with key ``term`` is in the given range. Examples include values describing
dates. Useful for creating a combination of ``ON_OR_AFTER`` and ``ON_OR_BEFORE`` filters
that are not yet supported in py42 or programmatically crafting filter groups.
Args:
term: (str): The term of the filter, such as ``eventTimestamp``.
start_value (str or int): The start value used to filter results.
end_value (str or int): The end value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [
create_query_filter(term, u"ON_OR_AFTER", start_value),
create_query_filter(term, u"ON_OR_BEFORE", end_value),
]
return create_filter_group(filter_list, u"AND")
def create_within_the_last_filter_group(term, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the key ``term`` is an ``EventTimestamp._term``
    and the value is one of the ``EventTimestamp`` attributes as ``value``.
    Args:
        term (str): The term of the filter, such as ``eventTimestamp``.
        value (str): ``EventTimestamp`` attribute.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [create_query_filter(term, u"WITHIN_THE_LAST", value)]
return create_filter_group(filter_list, u"AND")
def filter_attributes(cls):
return [
cls().__getattribute__(attr)
for attr in dir(cls)
if not callable(cls().__getattribute__(attr)) and not attr.startswith(u"_")
]
class QueryFilterStringField(object):
"""Helper class for creating filters where the search value is a string."""
_term = u"override_string_field_name"
@classmethod
def eq(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` equals the provided
``value``.
Args:
value (str): The value to match on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_eq_filter_group(cls._term, value)
@classmethod
def not_eq(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` does not equal the provided ``value``.
Args:
value (str): The value to exclude on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_not_eq_filter_group(cls._term, value)
@classmethod
def is_in(cls, value_list):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with the key ``self._term`` is in the provided
``value_list``.
Args:
value_list (list): The list of values to match on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_is_in_filter_group(cls._term, value_list)
@classmethod
def not_in(cls, value_list):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with the key ``self._term`` is not in the provided
``value_list``.
Args:
value_list (list): The list of values to exclude on.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_not_in_filter_group(cls._term, value_list)
class QueryFilterTimestampField(object):
"""Helper class for creating filters where the search value is a timestamp."""
_term = u"override_timestamp_field_name"
@classmethod
def on_or_after(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
        for finding results where the value with key ``self._term`` is on or after the
provided ``value``.
Args:
value (str or int): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
formatted_timestamp = convert_timestamp_to_str(value)
return create_on_or_after_filter_group(cls._term, formatted_timestamp)
@classmethod
def on_or_before(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` is on or before the
provided ``value``.
Args:
value (str or int): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
formatted_timestamp = convert_timestamp_to_str(value)
return create_on_or_before_filter_group(cls._term, formatted_timestamp)
@classmethod
def in_range(cls, start_value, end_value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` is in range between
the provided ``start_value`` and ``end_value``.
Args:
start_value (str or int): The start value used to filter results.
end_value (str or int): The end value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
formatted_start_time = convert_timestamp_to_str(start_value)
formatted_end_time = convert_timestamp_to_str(end_value)
return create_in_range_filter_group(
cls._term, formatted_start_time, formatted_end_time
)
@classmethod
def on_same_day(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` is within the same
calendar day as the provided ``value``.
Args:
value (str or int): The value used to filter results.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
date_from_value = datetime.utcfromtimestamp(value)
start_time = datetime(
date_from_value.year, date_from_value.month, date_from_value.day, 0, 0, 0
)
end_time = datetime(
date_from_value.year, date_from_value.month, date_from_value.day, 23, 59, 59
)
formatted_start_time = convert_datetime_to_timestamp_str(start_time)
formatted_end_time = convert_datetime_to_timestamp_str(end_time)
return create_in_range_filter_group(
cls._term, formatted_start_time, formatted_end_time
)
@classmethod
def within_the_last(cls, value):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the key ``self._term`` is an ``EventTimestamp._term``
and the value is one of the ``EventTimestamp`` attributes as ``value``.
Args:
            value (str): ``EventTimestamp`` attribute.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_within_the_last_filter_group(cls._term, value)
class QueryFilterBooleanField(object):
"""Helper class for creating filters where the search value is a boolean."""
_term = u"override_boolean_field_name"
@classmethod
def is_true(cls):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` is True.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_eq_filter_group(cls._term, u"TRUE")
@classmethod
def is_false(cls):
"""Returns a :class:`~py42.sdk.queries.query_filter.FilterGroup` that is useful
for finding results where the value with key ``self._term`` is False.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
return create_eq_filter_group(cls._term, u"FALSE")
class QueryFilter(object):
"""Class for constructing a single filter object for use in a search query.
When :func:`str()` is called on a :class:`~py42.sdk.queries.query_filter.QueryFilter`
instance, the (``term``, ``operator``, ``value``) attribute combination is transformed
into a JSON string to be used as part of a Forensic Search or Alert query.
When :func:`dict()` is called on a :class:`~py42.sdk.queries.query_filter.QueryFilter`
instance, the (``term``, ``operator``, ``value``) attribute combination is transformed
into the Python `dict` equivalent of their JSON representation. This can be useful
for programmatically manipulating a :class:`~py42.sdk.queries.query_filter.QueryFilter`
after it's been created.
"""
_term = None
def __init__(self, term, operator, value=None):
self._term = term
self._operator = operator
self._value = value
@classmethod
def from_dict(cls, _dict):
"""Creates an instance of :class:`~py42.sdk.queries.query_filter.QueryFilter` from
the values found in ``_dict``. ``_dict`` must contain keys ``term``, ``operator``,
and ``value``.
Args:
_dict (dict): A dictionary containing keys ``term``, ``operator``, and ``value``.
Returns:
:class:`~py42.sdk.queries.query_filter.QueryFilter`
"""
return cls(_dict[u"term"], _dict[u"operator"], value=_dict.get(u"value"))
@property
def term(self):
"""The term of the filter, such as ``actor`` or ``sharedWith``."""
return self._term
@property
    def operator(self):
        """The operator between ``term`` and ``value``, such as ``IS`` or ``IS_NOT``."""
return self._operator
@property
def value(self):
"""The value used to filter results."""
return self._value
def __str__(self):
value = u"null" if self._value is None else u'"{}"'.format(self._value)
return u'{{"operator":"{0}", "term":"{1}", "value":{2}}}'.format(
self._operator, self._term, value
)
def __iter__(self):
output_dict = OrderedDict()
output_dict[u"operator"] = self._operator
output_dict[u"term"] = self._term
output_dict[u"value"] = self._value
for key in output_dict:
yield key, output_dict[key]
def __eq__(self, other):
if isinstance(other, (QueryFilter, tuple, list)):
return tuple(self) == tuple(other)
elif isinstance(other, string_type):
return str(self) == other
else:
return False
def __hash__(self):
return hash(str(self))
class FilterGroup(object):
"""Class for constructing a logical sub-group of related filters from a list of
:class:`~py42.sdk.queries.query_filter.QueryFilter` objects. Takes a list of
:class:`~py42.sdk.queries.query_filter.QueryFilter` objects and combines them
logically using the passed in filter clause (``AND`` or ``OR``).
When :func:`str()` is called on a :class:`FilterGroup` instance, the combined filter items are
transformed into a JSON string to be used as part of a Forensic Search or Alert query.
When :func:`dict()` is called on a :class:`~py42.sdk.queries.query_filter.FilterGroup`
instance, the combined filter items are transformed into the Python `dict` equivalent
of their JSON representation. This can be useful for programmatically manipulating a
:class:`~py42.sdk.queries.query_filter.FilterGroup` after it's been created.
"""
def __init__(self, filter_list, filter_clause=u"AND"):
self._filter_list = filter_list
self._filter_clause = filter_clause
@classmethod
def from_dict(cls, _dict):
"""Creates an instance of :class:`~py42.sdk.queries.query_filter.FilterGroup`
from the values found in ``_dict``. ``_dict`` must contain keys ``filters`` and
``filterClause``.
Args:
            _dict (dict): A dictionary containing keys ``filterClause`` and ``filters``.
Returns:
:class:`~py42.sdk.queries.query_filter.FilterGroup`
"""
filter_list = [QueryFilter.from_dict(item) for item in _dict[u"filters"]]
return cls(filter_list, filter_clause=_dict[u"filterClause"])
@property
def filter_list(self):
"""The list of :class:`~py42.sdk.queries.query_filter.QueryFilter` objects in this
group."""
return self._filter_list
@property
def filter_clause(self):
"""The clause joining the filters, such as ``AND`` or ``OR``."""
return self._filter_clause
@filter_clause.setter
def filter_clause(self, value):
"""The clause joining the filters, such as ``AND`` or ``OR``."""
self._filter_clause = value
@property
def _filter_set(self):
return sorted(list(set(self.filter_list)), key=str)
def __str__(self):
filters_string = u",".join(str(filter_item) for filter_item in self._filter_set)
return u'{{"filterClause":"{0}", "filters":[{1}]}}'.format(
self._filter_clause, filters_string
)
def __iter__(self):
filter_list = [dict(item) for item in self._filter_set]
output_dict = {u"filterClause": self._filter_clause, u"filters": filter_list}
for key in output_dict:
yield key, output_dict[key]
def __eq__(self, other):
if isinstance(other, FilterGroup):
return (
self.filter_clause == other.filter_clause
and self._filter_set == other._filter_set
)
elif isinstance(other, (tuple, list)):
return tuple(self) == tuple(other)
elif isinstance(other, string_type):
return str(self) == other
else:
return False
def __contains__(self, item):
return item in self._filter_set
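# --- Usage sketch (added for illustration; mirrors the docstrings above) ---
# py42's concrete filter classes subclass the helpers in this module and only
# override _term. Building and serializing a filter group by hand looks like this
# ("md5Checksum" is just an example term, not something defined in this file):
if __name__ == "__main__":
    class Md5(QueryFilterStringField):
        _term = u"md5Checksum"
    group = Md5.is_in([u"d41d8cd98f00b204e9800998ecf8427e", u"0cc175b9c0f1b6a831c399e269772661"])
    print(str(group))   # JSON fragment usable inside a Forensic Search or Alert query
    print(dict(group))  # the same data as a plain Python dict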
| 37.335793 | 106 | 0.667128 | 2,690 | 20,236 | 4.828996 | 0.077695 | 0.05843 | 0.060354 | 0.081909 | 0.804927 | 0.756736 | 0.741878 | 0.726251 | 0.697845 | 0.674981 | 0 | 0.009785 | 0.227318 | 20,236 | 541 | 107 | 37.404806 | 0.82099 | 0.545958 | 0 | 0.296703 | 0 | 0 | 0.046597 | 0.013332 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225275 | false | 0 | 0.032967 | 0.021978 | 0.532967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
b9af6974ad17149ede9c9fcdc8cf1975e960a57c | 7,062 | py | Python | dlp/google/cloud/dlp_v2/gapic/dlp_service_client_config.py | deryrahman/google-cloud-python | b55058c4b2328fde32f29bfd8ea04708fcc578e0 | [
"Apache-2.0"
] | 1 | 2020-10-25T04:39:41.000Z | 2020-10-25T04:39:41.000Z | dlp/google/cloud/dlp_v2/gapic/dlp_service_client_config.py | deryrahman/google-cloud-python | b55058c4b2328fde32f29bfd8ea04708fcc578e0 | [
"Apache-2.0"
] | 4 | 2018-11-13T22:15:36.000Z | 2018-12-07T18:31:38.000Z | dlp/google/cloud/dlp_v2/gapic/dlp_service_client_config.py | deryrahman/google-cloud-python | b55058c4b2328fde32f29bfd8ea04708fcc578e0 | [
"Apache-2.0"
] | null | null | null | config = {
"interfaces": {
"google.privacy.dlp.v2.DlpService": {
"retry_codes": {
"idempotent": ["DEADLINE_EXCEEDED", "UNAVAILABLE"],
"non_idempotent": []
},
"retry_params": {
"default": {
"initial_retry_delay_millis": 100,
"retry_delay_multiplier": 1.3,
"max_retry_delay_millis": 60000,
"initial_rpc_timeout_millis": 20000,
"rpc_timeout_multiplier": 1.0,
"max_rpc_timeout_millis": 20000,
"total_timeout_millis": 600000
}
},
"methods": {
"InspectContent": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"RedactImage": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeidentifyContent": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ReidentifyContent": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ListInfoTypes": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"CreateInspectTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"UpdateInspectTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"GetInspectTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ListInspectTemplates": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeleteInspectTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"CreateDeidentifyTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"UpdateDeidentifyTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"GetDeidentifyTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ListDeidentifyTemplates": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeleteDeidentifyTemplate": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"CreateDlpJob": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"ListDlpJobs": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"GetDlpJob": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeleteDlpJob": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"CancelDlpJob": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"ListJobTriggers": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"GetJobTrigger": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeleteJobTrigger": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"UpdateJobTrigger": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"CreateJobTrigger": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"CreateStoredInfoType": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"UpdateStoredInfoType": {
"timeout_millis": 300000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default"
},
"GetStoredInfoType": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"ListStoredInfoTypes": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
},
"DeleteStoredInfoType": {
"timeout_millis": 300000,
"retry_codes_name": "idempotent",
"retry_params_name": "default"
}
}
}
}
}
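
# Illustrative usage (editor's sketch, not part of the generated GAPIC config file).
# It only reads keys that appear literally in the dict above: each method entry names a
# retry-codes group and a retry-params profile that are resolved from the service block.
if __name__ == "__main__":
    service = config["interfaces"]["google.privacy.dlp.v2.DlpService"]
    inspect_content = service["methods"]["InspectContent"]
    # Codes retried for idempotent calls: ['DEADLINE_EXCEEDED', 'UNAVAILABLE']
    print(service["retry_codes"][inspect_content["retry_codes_name"]])
    # The "default" backoff/timeout profile shared by every method
    print(service["retry_params"][inspect_content["retry_params_name"]])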
| 40.586207 | 67 | 0.421694 | 423 | 7,062 | 6.609929 | 0.1513 | 0.153433 | 0.232833 | 0.257511 | 0.708155 | 0.708155 | 0.708155 | 0.708155 | 0.708155 | 0.708155 | 0 | 0.056624 | 0.477344 | 7,062 | 173 | 68 | 40.820809 | 0.700894 | 0 | 0 | 0.520231 | 0 | 0 | 0.391957 | 0.049703 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b9d50e324614597b181f7bda976d8f3b9d2323b5 | 196 | py | Python | boa3_test/test_sc/interop_test/crypto/VerifyWithECDsaSecp256k1Bytes.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | boa3_test/test_sc/interop_test/crypto/VerifyWithECDsaSecp256k1Bytes.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | boa3_test/test_sc/interop_test/crypto/VerifyWithECDsaSecp256k1Bytes.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | from boa3.builtin import public
from boa3.builtin.interop.crypto import verify_with_ecdsa_secp256k1
@public
def Main():
verify_with_ecdsa_secp256k1(b'unit test', b'publickey', b'signature')
| 24.5 | 73 | 0.80102 | 29 | 196 | 5.206897 | 0.62069 | 0.10596 | 0.198676 | 0.317881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.107143 | 196 | 7 | 74 | 28 | 0.805714 | 0 | 0 | 0 | 0 | 0 | 0.137755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b9e4c032d66bc5caf0eb4c09cbc2679742b0379d | 129 | py | Python | src/main/resources/docs/tests/C0102.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/tests/C0102.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/tests/C0102.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | ##Patterns: C0102
##Info: C0102
def foo(name):
print 'Hello', name
def goodFunctionName(name):
print 'Hello', name | 18.428571 | 28 | 0.643411 | 16 | 129 | 5.1875 | 0.5625 | 0.216867 | 0.337349 | 0.433735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 0.217054 | 129 | 7 | 29 | 18.428571 | 0.742574 | 0.20155 | 0 | 0.5 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
b9e5d094d269211883b57414adff87984f013fa3 | 1,059 | py | Python | letters.py | somervda/uPyLoRaWAN | 34fabf69908c14c7a9a05cc88a1ec319cefc4fe7 | [
"Apache-2.0"
] | 4 | 2020-10-20T20:01:37.000Z | 2021-07-11T22:59:56.000Z | letters.py | somervda/uPyLoRaWAN | 34fabf69908c14c7a9a05cc88a1ec319cefc4fe7 | [
"Apache-2.0"
] | null | null | null | letters.py | somervda/uPyLoRaWAN | 34fabf69908c14c7a9a05cc88a1ec319cefc4fe7 | [
"Apache-2.0"
] | 1 | 2021-04-29T04:36:04.000Z | 2021-04-29T04:36:04.000Z | characters = {"S": [[0, 1, 1, 1, 1], [1, 0, 0, 0, 0], [1, 1, 1, 1, 1], [0, 0, 0, 0, 1], [1, 1, 1, 1, 0]],
"O": [[0, 1, 1, 1, 0], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1], [0, 1, 1, 1, 0]],
"L": [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 1, 1, 1, 1]],
"I": [[0, 1, 1, 1, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 1, 1, 1, 0]],
"D": [[1, 1, 1, 1, 0], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 0]],
"A": [[0, 1, 1, 1, 0], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]],
"R": [[1, 1, 1, 1, 0], [1, 0, 0, 0, 1], [1, 1, 1, 1, 0], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]],
"T": [[1, 1, 1, 1, 1], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0]],
"Y": [[1, 0, 0, 0, 1], [0, 1, 0, 1, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0]],
"!": [[0, 1, 0, 1, 0], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [0, 1, 1, 1, 0], [0, 0, 1, 0, 0]]} | 105.9 | 105 | 0.254013 | 260 | 1,059 | 1.034615 | 0.046154 | 0.579926 | 0.490706 | 0.460967 | 0.929368 | 0.925651 | 0.914498 | 0.888476 | 0.881041 | 0.802974 | 0 | 0.372024 | 0.365439 | 1,059 | 10 | 106 | 105.9 | 0.028274 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
6a0efc3d7b151c51413dc83c51b530157dea8858 | 260 | py | Python | pagesext/admin/__init__.py | dlancer/django-pages-cms-extensions | 4aa6f2780abef9543ced20258ede01a9662167b3 | [
"BSD-3-Clause"
] | 1 | 2016-07-08T07:23:20.000Z | 2016-07-08T07:23:20.000Z | pagesext/admin/__init__.py | dlancer/django-pages-cms-extensions | 4aa6f2780abef9543ced20258ede01a9662167b3 | [
"BSD-3-Clause"
] | null | null | null | pagesext/admin/__init__.py | dlancer/django-pages-cms-extensions | 4aa6f2780abef9543ced20258ede01a9662167b3 | [
"BSD-3-Clause"
] | null | null | null | from pagesext.admin.pagetagscontent import PageTagsContentAdmin
from pagesext.admin.pageimagecontent import PageImageContentAdmin
from pagesext.admin.pagefilecontent import PageFileContentAdmin
from pagesext.admin.pagevideocontent import PageVideoContentAdmin
| 52 | 65 | 0.907692 | 24 | 260 | 9.833333 | 0.5 | 0.20339 | 0.288136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 260 | 4 | 66 | 65 | 0.967213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6a3fbbc8edb06758bebba7763f81dfb196b23fee | 2,503 | py | Python | project/project3/tests/q3_2_1.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | project/project3/tests/q3_2_1.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | project/project3/tests/q3_2_1.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
] | null | null | null | test = { 'name': 'q3_2_1',
'points': 1,
'suites': [ { 'cases': [ { 'code': '>>> # This test just checks to see if your classify function works correctly;\n'
'>>> # with k = 5 nearest neighbors;\n'
'>>> from collections import Counter;\n'
">>> g = train_movies.column('Genre');\n"
'>>> def check(r, k):\n'
'... t = test_my_features.row(r)\n'
'... return classify(t, train_my_features, g, k) == Counter(np.take(g, np.argsort(fast_distances(t, '
'train_my_features))[:k])).most_common(1)[0][0];\n'
'>>> check_5_nn = [check(i, 5) for i in np.arange(11)];\n'
'>>> all(check_5_nn)\n'
'True',
'hidden': False,
'locked': False},
{ 'code': '>>> # This test just checks to see if your classify function works correctly;\n'
'>>> # with k = 11 nearest neighbors;\n'
'>>> from collections import Counter;\n'
">>> g = train_movies.column('Genre');\n"
'>>> def check(r, k):\n'
'... t = test_my_features.row(r)\n'
'... return classify(t, train_my_features, g, k) == Counter(np.take(g, np.argsort(fast_distances(t, '
'train_my_features))[:k])).most_common(1)[0][0];\n'
'>>> check_11_nn = [check(i, 11) for i in np.arange(11)];\n'
'>>> all(check_11_nn)\n'
'True',
'hidden': False,
'locked': False}],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'}]}
| 75.848485 | 152 | 0.310028 | 201 | 2,503 | 3.721393 | 0.333333 | 0.080214 | 0.042781 | 0.085562 | 0.877005 | 0.877005 | 0.877005 | 0.799465 | 0.799465 | 0.73262 | 0 | 0.02381 | 0.563724 | 2,503 | 32 | 153 | 78.21875 | 0.661172 | 0 | 0 | 0.5 | 0 | 0.0625 | 0.42469 | 0.106272 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dbecb0a0c8f415990467f756107ca955c94d3d75 | 11,428 | py | Python | samcli/local/docker/lambda_debug_settings.py | awsed/aws-sam-cli | 6becd25c06caaa96a79d6c9211da05501dadd132 | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | samcli/local/docker/lambda_debug_settings.py | awsed/aws-sam-cli | 6becd25c06caaa96a79d6c9211da05501dadd132 | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | samcli/local/docker/lambda_debug_settings.py | awsed/aws-sam-cli | 6becd25c06caaa96a79d6c9211da05501dadd132 | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | """
Represents Lambda debug entrypoints.
"""
import json
from collections import namedtuple
from samcli.lib.utils.feature_flag import extensions_preview_enabled
from samcli.local.docker.lambda_image import Runtime
class DebuggingNotSupported(Exception):
pass
DebugSettings = namedtuple("DebugSettings", ["entrypoint", "debug_env_vars"])
class LambdaDebugSettings:
@staticmethod
def get_debug_settings(debug_port, debug_args_list, runtime, options):
"""
Get Debug settings based on the Runtime
Parameters
----------
debug_port int
Port to open for debugging in the container
debug_args_list list(str)
Additional debug args
runtime str
Lambda Function runtime
        options dict
            Additional options needed (i.e. delvePath)
Returns
-------
tuple:DebugSettings (list, dict)
Tuple of debug entrypoint and debug env vars
"""
extensions_preview_on = extensions_preview_enabled()
if extensions_preview_on:
entry = ["/var/rapid/init", "--log-level", "error"]
entrypoint_mapping = {
Runtime.java8.value: DebugSettings(
entry,
debug_env_vars={
"_JAVA_OPTIONS": f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,quiet=y,address={debug_port} -XX:MaxHeapSize=2834432k -XX:MaxMetaspaceSize=163840k -XX:ReservedCodeCacheSize=81920k -XX:+UseSerialGC -XX:-TieredCompilation -Djava.net.preferIPv4Stack=true -Xshare:off"
+ " ".join(debug_args_list)
},
),
Runtime.java11.value: DebugSettings(
entry,
debug_env_vars={
"_JAVA_OPTIONS": f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,quiet=y,address=*:{debug_port} -XX:MaxHeapSize=2834432k -XX:MaxMetaspaceSize=163840k -XX:ReservedCodeCacheSize=81920k -XX:+UseSerialGC -XX:-TieredCompilation -Djava.net.preferIPv4Stack=true"
+ " ".join(debug_args_list)
},
),
Runtime.dotnetcore21.value: DebugSettings(
entry + ["/var/runtime/bootstrap"] + debug_args_list,
debug_env_vars={"_AWS_LAMBDA_DOTNET_DEBUGGING": "1"},
),
Runtime.go1x.value: DebugSettings(
["/var/runtime/aws-lambda-go"]
+ debug_args_list
+ ["-debug=true", "-delvePort=" + str(debug_port), "-delvePath=" + options.get("delvePath")],
debug_env_vars={},
),
Runtime.nodejs10x.value: DebugSettings(
entry
+ ["/var/lang/bin/node"]
+ debug_args_list
+ [
"/var/runtime/index.js",
],
debug_env_vars={
"NODE_PATH": "/opt/nodejs/node_modules:/opt/nodejs/node10/node_modules:/var/runtime/node_module",
"NODE_OPTIONS": f"--inspect-brk=0.0.0.0:{str(debug_port)} --no-lazy --expose-gc --max-http-header-size 81920",
},
),
Runtime.nodejs12x.value: DebugSettings(
entry
+ ["/var/lang/bin/node"]
+ debug_args_list
+ [
"/var/runtime/index.js",
],
debug_env_vars={
"NODE_PATH": "/opt/nodejs/node_modules:/opt/nodejs/node12/node_modules:/var/runtime/node_module",
"NODE_OPTIONS": f"--inspect-brk=0.0.0.0:{str(debug_port)} --no-lazy --expose-gc --max-http-header-size 81920",
},
),
Runtime.python27.value: DebugSettings(
entry + ["/var/lang/bin/python2.7"] + debug_args_list + ["/var/runtime/bootstrap.py"],
debug_env_vars={},
),
Runtime.python36.value: DebugSettings(
entry + ["/var/lang/bin/python3.6"] + debug_args_list + ["/var/runtime/bootstrap.py"],
debug_env_vars={},
),
Runtime.python37.value: DebugSettings(
entry + ["/var/lang/bin/python3.7"] + debug_args_list + ["/var/runtime/bootstrap.py"],
debug_env_vars={},
),
Runtime.python38.value: DebugSettings(
entry + ["/var/lang/bin/python3.8"] + debug_args_list + ["/var/runtime/bootstrap.py"],
debug_env_vars={},
),
}
else:
entry = "/var/rapid/init"
entrypoint_mapping = {
Runtime.java8.value: DebugSettings(
entry,
debug_env_vars={
"_JAVA_OPTIONS": f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,quiet=y,address={debug_port} -XX:MaxHeapSize=2834432k -XX:MaxMetaspaceSize=163840k -XX:ReservedCodeCacheSize=81920k -XX:+UseSerialGC -XX:-TieredCompilation -Djava.net.preferIPv4Stack=true -Xshare:off"
+ " ".join(debug_args_list)
},
),
Runtime.java8al2.value: DebugSettings(
entry,
debug_env_vars={
"_JAVA_OPTIONS": f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,quiet=y,address={debug_port} -XX:MaxHeapSize=2834432k -XX:MaxMetaspaceSize=163840k -XX:ReservedCodeCacheSize=81920k -XX:+UseSerialGC -XX:-TieredCompilation -Djava.net.preferIPv4Stack=true -Xshare:off"
+ " ".join(debug_args_list)
},
),
Runtime.java11.value: DebugSettings(
entry,
debug_env_vars={
"_JAVA_OPTIONS": f"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,quiet=y,address=*:{debug_port} -XX:MaxHeapSize=2834432k -XX:MaxMetaspaceSize=163840k -XX:ReservedCodeCacheSize=81920k -XX:+UseSerialGC -XX:-TieredCompilation -Djava.net.preferIPv4Stack=true"
+ " ".join(debug_args_list)
},
),
Runtime.dotnetcore21.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/runtime/bootstrap",
"--bootstrap-args",
json.dumps(debug_args_list),
],
debug_env_vars={"_AWS_LAMBDA_DOTNET_DEBUGGING": "1"},
),
Runtime.dotnetcore31.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/runtime/bootstrap",
"--bootstrap-args",
json.dumps(debug_args_list),
],
debug_env_vars={"_AWS_LAMBDA_DOTNET_DEBUGGING": "1"},
),
Runtime.go1x.value: DebugSettings(
["/var/runtime/aws-lambda-go"]
+ debug_args_list
+ ["-debug=true", "-delvePort=" + str(debug_port), "-delvePath=" + options.get("delvePath")],
debug_env_vars={},
),
Runtime.nodejs10x.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/lang/bin/node",
"--bootstrap-args",
json.dumps(
debug_args_list
+ [
"--inspect-brk=0.0.0.0:" + str(debug_port),
"--nolazy",
"--expose-gc",
"--max-http-header-size",
"81920",
"/var/runtime/index.js",
]
),
],
debug_env_vars={
"NODE_PATH": "/opt/nodejs/node_modules:/opt/nodejs/node10/node_modules:/var/runtime/node_modules"
},
),
Runtime.nodejs12x.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/lang/bin/node",
"--bootstrap-args",
json.dumps(
debug_args_list
+ [
"--inspect-brk=0.0.0.0:" + str(debug_port),
"--nolazy",
"--expose-gc",
"--max-http-header-size",
"81920",
"/var/runtime/index.js",
]
),
],
debug_env_vars={
"NODE_PATH": "/opt/nodejs/node_modules:/opt/nodejs/node12/node_modules:/var/runtime/node_modules"
},
),
Runtime.python27.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/usr/bin/python2.7",
"--bootstrap-args",
json.dumps(debug_args_list + ["/var/runtime/awslambda/bootstrap.py"]),
],
debug_env_vars={},
),
Runtime.python36.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/lang/bin/python3.6",
"--bootstrap-args",
json.dumps(debug_args_list + ["/var/runtime/awslambda/bootstrap.py"]),
],
debug_env_vars={},
),
Runtime.python37.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/lang/bin/python3.7",
"--bootstrap-args",
json.dumps(debug_args_list + ["/var/runtime/bootstrap"]),
],
debug_env_vars={},
),
Runtime.python38.value: DebugSettings(
[
"/var/rapid/init",
"--bootstrap",
"/var/lang/bin/python3.8",
"--bootstrap-args",
json.dumps(debug_args_list + ["/var/runtime/bootstrap.py"]),
],
debug_env_vars={},
),
}
try:
return entrypoint_mapping[runtime]
except KeyError as ex:
raise DebuggingNotSupported("Debugging is not currently supported for {}".format(runtime)) from ex
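
# Illustrative usage (editor's sketch, not part of the SAM CLI source). The port number
# and the delvePath value are hypothetical; only runtimes present in the mapping above
# resolve, and delvePath is consulted only for the go1.x entry.
if __name__ == "__main__":
    settings = LambdaDebugSettings.get_debug_settings(
        debug_port=5858,
        debug_args_list=[],
        runtime=Runtime.python38.value,
        options={"delvePath": "/tmp/delve"},
    )
    print(settings.entrypoint)
    print(settings.debug_env_vars)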
| 45.349206 | 297 | 0.453273 | 925 | 11,428 | 5.419459 | 0.179459 | 0.044883 | 0.057451 | 0.031917 | 0.80371 | 0.802912 | 0.78855 | 0.759226 | 0.759226 | 0.745661 | 0 | 0.030546 | 0.432797 | 11,428 | 251 | 298 | 45.52988 | 0.742826 | 0.036052 | 0 | 0.738532 | 0 | 0.03211 | 0.305372 | 0.211461 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004587 | false | 0.004587 | 0.018349 | 0 | 0.036697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e02e1a1dbea5ba684f3edd4ff5e7451ae2a81681 | 63,283 | py | Python | src/bpp/migrations/0001_initial.py | iplweb/django-bpp | 85f183a99d8d5027ae4772efac1e4a9f21675849 | [
"BSD-3-Clause"
] | 1 | 2017-04-27T19:50:02.000Z | 2017-04-27T19:50:02.000Z | src/bpp/migrations/0001_initial.py | mpasternak/django-bpp | 434338821d5ad1aaee598f6327151aba0af66f5e | [
"BSD-3-Clause"
] | 41 | 2019-11-07T00:07:02.000Z | 2022-02-27T22:09:39.000Z | src/bpp/migrations/0001_initial.py | iplweb/bpp | f027415cc3faf1ca79082bf7bacd4be35b1a6fdf | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from django.db import models, migrations
import datetime
import autoslug.fields
from decimal import Decimal
from django.db.migrations.operations.special import RunPython, RunSQL
import django.utils.timezone
from django.contrib.postgres.search import SearchVectorField
from django.contrib.postgres.fields import ArrayField
import django.core.validators
from bpp.migration_util import load_custom_sql
class Migration(migrations.Migration):
dependencies = [
('contenttypes', '__first__'),
('auth', '__first__'),
]
operations = [
RunPython(
lambda *args, **kw: load_custom_sql("0001_collation")),
migrations.CreateModel(
name='BppUser',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('password', models.CharField(max_length=128, verbose_name='password')),
('last_login', models.DateTimeField(default=django.utils.timezone.now, verbose_name='last login')),
('is_superuser', models.BooleanField(default=False, help_text='Designates that this user has all permissions without explicitly assigning them.', verbose_name='superuser status')),
('username', models.CharField(help_text='Required. 30 characters or fewer. Letters, digits and @/./+/-/_ only.', unique=True, max_length=30, verbose_name='username', validators=[django.core.validators.RegexValidator('^[\\w.@+-]+$', 'Enter a valid username.', 'invalid')])),
('first_name', models.CharField(max_length=30, verbose_name='first name', blank=True)),
('last_name', models.CharField(max_length=30, verbose_name='last name', blank=True)),
('email', models.EmailField(max_length=75, verbose_name='email address', blank=True)),
('is_staff', models.BooleanField(default=False, help_text='Designates whether the user can log into this admin site.', verbose_name='staff status')),
('is_active', models.BooleanField(default=True, help_text='Designates whether this user should be treated as active. Unselect this instead of deleting accounts.', verbose_name='active')),
('date_joined', models.DateTimeField(default=django.utils.timezone.now, verbose_name='date joined')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('active_charmap_tab', models.IntegerField(default=0)),
('per_page', models.IntegerField(default=20, verbose_name=b'Ilo\xc5\x9b\xc4\x87 wy\xc5\x9bwietlanych rekord\xc3\xb3w na stronie')),
('multiseek_format', models.CharField(max_length=200, null=True, verbose_name=b'Ostatnio wybrany format wy\xc5\x9bwietlania w Multiseeku', blank=True)),
('multiseek_order_1', models.CharField(max_length=200, null=True, verbose_name=b'Ostatnio wybrane pole sortowania w Multiseeku', blank=True)),
('groups', models.ManyToManyField(to='auth.Group', verbose_name='groups', blank=True)),
('user_permissions', models.ManyToManyField(to='auth.Permission', verbose_name='user permissions', blank=True)),
],
options={
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Autor',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('imiona', models.CharField(max_length=512, db_index=True)),
('nazwisko', models.CharField(max_length=256, db_index=True)),
('pokazuj_na_stronach_jednostek', models.BooleanField(default=True)),
('email', models.EmailField(max_length=128, null=True, verbose_name=b'E-mail', blank=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'WWW', blank=True)),
('urodzony', models.DateField(null=True, blank=True)),
('zmarl', models.DateField(null=True, blank=True)),
('poprzednie_nazwiska', models.CharField(help_text=b'Je\xc5\xbceli ten\n autor(-ka) posiada nazwisko panie\xc5\x84skie, pod kt\xc3\xb3rym ukazywa\xc5\x82y\n si\xc4\x99 publikacje lub zmienia\xc5\x82 nazwisko z innych powod\xc3\xb3w, wpisz tutaj\n wszystkie poprzednie nazwiska, oddzielaj\xc4\x85c je przecinkami.', max_length=1024, null=True, db_index=True, blank=True)),
('search', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('slug', autoslug.fields.AutoSlugField(unique=True, max_length=1024, editable=False)),
('sort', models.TextField()),
],
options={
'ordering': [b'sort'],
'verbose_name': b'autor',
'verbose_name_plural': b'autorzy',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Autor_Jednostka',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('rozpoczal_prace', models.DateField(db_index=True, null=True, verbose_name=b'Rozpocz\xc4\x85\xc5\x82 prac\xc4\x99', blank=True)),
('zakonczyl_prace', models.DateField(db_index=True, null=True, verbose_name=b'Zako\xc5\x84czy\xc5\x82 prac\xc4\x99', blank=True)),
('autor', models.ForeignKey(to='bpp.Autor', on_delete=models.CASCADE)),
],
options={
'ordering': [b'autor__nazwisko', b'jednostka__nazwa', b'rozpoczal_prace'],
'verbose_name': b'powi\xc4\x85zanie autor-jednostka',
'verbose_name_plural': b'powi\xc4\x85zania autor-jednostka',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Charakter_Formalny',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
('publikacja', models.BooleanField(default=False, help_text=b'Jest charakterem dla publikacji')),
('streszczenie', models.BooleanField(default=False, help_text=b'Jest charakterem dla streszcze\xc5\x84')),
],
options={
'ordering': [b'nazwa'],
'verbose_name': b'charakter formalny',
'verbose_name_plural': b'charaktery formalne',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Funkcja_Autora',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'ordering': [b'nazwa'],
'verbose_name': b'funkcja w jednostce',
'verbose_name_plural': b'funkcje w jednostkach',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='autor_jednostka',
name='funkcja',
field=models.ForeignKey(blank=True, to='bpp.Funkcja_Autora', null=True, on_delete=models.CASCADE),
preserve_default=True,
),
migrations.CreateModel(
name='Jednostka',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('rozpoczecie_funkcjonowania', models.DateField(null=True, verbose_name=b'Rozpocz\xc4\x99cie funkcjonowania', blank=True)),
('zakonczenie_funkcjonowania', models.DateField(null=True, verbose_name=b'Zako\xc5\x84czenie funkcjonowania', blank=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128, verbose_name=b'Skr\xc3\xb3t')),
('opis', models.TextField(null=True, blank=True)),
('slug', autoslug.fields.AutoSlugField(unique=True, editable=False)),
('widoczna', models.BooleanField(default=True, db_index=True)),
('wchodzi_do_raportow', models.BooleanField(default=True, db_index=True, verbose_name=b'Wchodzi do raport\xc3\xb3w')),
('email', models.EmailField(max_length=128, null=True, verbose_name=b'E-mail', blank=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'WWW', blank=True)),
('search', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
],
options={
'ordering': ['nazwa'],
'verbose_name': b'jednostka',
'verbose_name_plural': b'jednostki',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='autor_jednostka',
name='jednostka',
field=models.ForeignKey(to='bpp.Jednostka', on_delete=models.CASCADE),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='autor_jednostka',
unique_together=set([('autor', 'jednostka', 'rozpoczal_prace')]),
),
migrations.AddField(
model_name='autor',
name='jednostki',
field=models.ManyToManyField(to='bpp.Jednostka', through='bpp.Autor_Jednostka'),
preserve_default=True,
),
migrations.AddField(
model_name='autor',
name='aktualna_jednostka',
field=models.ForeignKey(blank=True, to='bpp.Jednostka', null=True, on_delete=models.CASCADE),
preserve_default=True,
),
migrations.CreateModel(
name='Jezyk',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'ordering': ['nazwa'],
'verbose_name': b'j\xc4\x99zyk',
'verbose_name_plural': b'j\xc4\x99zyki',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Opi_2012_Afiliacja_Do_Wydzialu',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('rok', models.IntegerField(db_index=True)),
('autor', models.ForeignKey(to='bpp.Autor', on_delete=models.CASCADE)),
],
options={
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Opi_2012_Tytul_Cache',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('tytul_oryginalny_cache', models.TextField(db_index=True)),
],
options={
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Patent',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('opis_bibliograficzny_cache', models.TextField(default=b'')),
('opis_bibliograficzny_autorzy_cache', ArrayField(models.TextField())),
('opis_bibliograficzny_zapisani_autorzy_cache', models.TextField(default=b'')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('rok', models.IntegerField(help_text=b'Rok uwzgl\xc4\x99dniany przy wyszukiwaniu i raportach\n KBN/MNiSW)', db_index=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'Adres WWW', blank=True)),
('afiliowana', models.BooleanField(default=False)),
('recenzowana', models.BooleanField(default=False)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('weryfikacja_punktacji', models.BooleanField(default=False)),
('informacje', models.TextField(null=True, verbose_name=b'Informacje', blank=True)),
('szczegoly', models.CharField(help_text=b'Np. str. 23-45', max_length=512, null=True, verbose_name=b'Szczeg\xc3\xb3\xc5\x82y', blank=True)),
('uwagi', models.TextField(db_index=True, null=True, blank=True)),
('slowa_kluczowe', models.TextField(null=True, verbose_name=b'S\xc5\x82owa kluczowe', blank=True)),
('utworzono', models.DateTimeField(default=datetime.datetime(1970, 1, 1, 0, 0), verbose_name=b'Utworzono', auto_now_add=True)),
('search_index', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('tytul_oryginalny_sort', models.TextField(default=b'', db_index=True)),
('tytul_oryginalny', models.TextField(verbose_name=b'Tytu\xc5\x82 oryginalny', db_index=True)),
('numer', models.CharField(max_length=255, null=True, blank=True)),
('z_dnia', models.DateField(null=True, blank=True)),
],
options={
'verbose_name': b'patent',
'verbose_name_plural': b'patenty',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Patent_Autor',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('kolejnosc', models.IntegerField(default=0, verbose_name=b'Kolejno\xc5\x9b\xc4\x87')),
('zapisany_jako', models.CharField(max_length=512)),
('autor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
('jednostka', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Jednostka')),
],
options={
'ordering': ('kolejnosc',),
'verbose_name': b'powi\xc4\x85zanie autora z patentem',
'verbose_name_plural': b'powi\xc4\x85zania autor\xc3\xb3w z patentami',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='patent',
name='autorzy',
field=models.ManyToManyField(to='bpp.Autor', through='bpp.Patent_Autor'),
preserve_default=True,
),
migrations.AddField(
model_name='patent_autor',
name='rekord',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Patent'),
preserve_default=True,
),
migrations.CreateModel(
name='Plec',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'verbose_name': b'p\xc5\x82e\xc4\x87',
'verbose_name_plural': b'p\xc5\x82cie',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='autor',
name='plec',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Plec', null=True),
preserve_default=True,
),
migrations.CreateModel(
name='Praca_Doktorska',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('opis_bibliograficzny_cache', models.TextField(default=b'')),
('opis_bibliograficzny_autorzy_cache', ArrayField(models.TextField())),
('opis_bibliograficzny_zapisani_autorzy_cache', models.TextField(default=b'')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('isbn', models.CharField(max_length=64, null=True, verbose_name=b'ISBN', blank=True)),
('e_isbn', models.CharField(max_length=64, null=True, verbose_name=b'E-ISBN', blank=True)),
('tytul_oryginalny', models.TextField(verbose_name=b'Tytu\xc5\x82 oryginalny', db_index=True)),
('tytul', models.TextField(db_index=True, null=True, verbose_name=b'Tytu\xc5\x82', blank=True)),
('rok', models.IntegerField(help_text=b'Rok uwzgl\xc4\x99dniany przy wyszukiwaniu i raportach\n KBN/MNiSW)', db_index=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'Adres WWW', blank=True)),
('afiliowana', models.BooleanField(default=False)),
('recenzowana', models.BooleanField(default=False)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('weryfikacja_punktacji', models.BooleanField(default=False)),
('informacje', models.TextField(null=True, verbose_name=b'Informacje', blank=True)),
('szczegoly', models.CharField(help_text=b'Np. str. 23-45', max_length=512, null=True, verbose_name=b'Szczeg\xc3\xb3\xc5\x82y', blank=True)),
('uwagi', models.TextField(db_index=True, null=True, blank=True)),
('slowa_kluczowe', models.TextField(null=True, verbose_name=b'S\xc5\x82owa kluczowe', blank=True)),
('utworzono', models.DateTimeField(default=datetime.datetime(1970, 1, 1, 0, 0), verbose_name=b'Utworzono', auto_now_add=True)),
('search_index', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('tytul_oryginalny_sort', models.TextField(default=b'', db_index=True)),
('miejsce_i_rok', models.CharField(help_text=b'Przyk\xc5\x82adowo:\n Warszawa 2012. Wpisz prosz\xc4\x99 najpierw miejsce potem rok; oddziel\n spacj\xc4\x85.', max_length=256, null=True, blank=True)),
('wydawnictwo', models.CharField(max_length=256, null=True, blank=True)),
('redakcja', models.TextField(null=True, blank=True)),
('autor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
('jednostka', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Jednostka')),
('jezyk', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'J\xc4\x99zyk', to='bpp.Jezyk')),
],
options={
'verbose_name': b'praca doktorska',
'verbose_name_plural': b'prace doktorskie',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Praca_Habilitacyjna',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('opis_bibliograficzny_cache', models.TextField(default=b'')),
('opis_bibliograficzny_autorzy_cache', ArrayField(models.TextField())),
('opis_bibliograficzny_zapisani_autorzy_cache', models.TextField(default=b'')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('isbn', models.CharField(max_length=64, null=True, verbose_name=b'ISBN', blank=True)),
('e_isbn', models.CharField(max_length=64, null=True, verbose_name=b'E-ISBN', blank=True)),
('tytul_oryginalny', models.TextField(verbose_name=b'Tytu\xc5\x82 oryginalny', db_index=True)),
('tytul', models.TextField(db_index=True, null=True, verbose_name=b'Tytu\xc5\x82', blank=True)),
('rok', models.IntegerField(help_text=b'Rok uwzgl\xc4\x99dniany przy wyszukiwaniu i raportach\n KBN/MNiSW)', db_index=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'Adres WWW', blank=True)),
('afiliowana', models.BooleanField(default=False)),
('recenzowana', models.BooleanField(default=False)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('weryfikacja_punktacji', models.BooleanField(default=False)),
('informacje', models.TextField(null=True, verbose_name=b'Informacje', blank=True)),
('szczegoly', models.CharField(help_text=b'Np. str. 23-45', max_length=512, null=True, verbose_name=b'Szczeg\xc3\xb3\xc5\x82y', blank=True)),
('uwagi', models.TextField(db_index=True, null=True, blank=True)),
('slowa_kluczowe', models.TextField(null=True, verbose_name=b'S\xc5\x82owa kluczowe', blank=True)),
('utworzono', models.DateTimeField(default=datetime.datetime(1970, 1, 1, 0, 0), verbose_name=b'Utworzono', auto_now_add=True)),
('search_index', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('tytul_oryginalny_sort', models.TextField(default=b'', db_index=True)),
('miejsce_i_rok', models.CharField(help_text=b'Przyk\xc5\x82adowo:\n Warszawa 2012. Wpisz prosz\xc4\x99 najpierw miejsce potem rok; oddziel\n spacj\xc4\x85.', max_length=256, null=True, blank=True)),
('wydawnictwo', models.CharField(max_length=256, null=True, blank=True)),
('redakcja', models.TextField(null=True, blank=True)),
('autor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
('jednostka', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Jednostka')),
('jezyk', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'J\xc4\x99zyk', to='bpp.Jezyk')),
],
options={
'verbose_name': b'praca habilitacyjna',
'verbose_name_plural': b'prace habilitacyjne',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Publikacja_Habilitacyjna',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('kolejnosc', models.IntegerField(default=0, verbose_name=b'Kolejno\xc5\x9b\xc4\x87')),
('object_id', models.PositiveIntegerField()),
('content_type', models.ForeignKey(on_delete=models.CASCADE, to='contenttypes.ContentType')),
('praca_habilitacyjna', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Praca_Habilitacyjna')),
],
options={
'ordering': ('kolejnosc',),
'verbose_name': b'powi\xc4\x85zanie publikacji z habilitacj\xc4\x85',
'verbose_name_plural': b'powi\xc4\x85zania publikacji z habilitacj\xc4\x85',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Punktacja_Zrodla',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('rok', models.IntegerField()),
],
options={
'ordering': ['zrodlo__nazwa', 'rok'],
'verbose_name': b'punktacja \xc5\xbar\xc3\xb3d\xc5\x82a',
'verbose_name_plural': b'punktacja \xc5\xbar\xc3\xb3d\xc5\x82a',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Redakcja_Zrodla',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('od_roku', models.IntegerField()),
('do_roku', models.IntegerField(null=True, blank=True)),
('redaktor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
],
options={
'verbose_name': b'redaktor \xc5\xbar\xc3\xb3d\xc5\x82a',
'verbose_name_plural': b'redaktorzy \xc5\xbar\xc3\xb3d\xc5\x82a',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Rodzaj_Zrodla',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
],
options={
'verbose_name': b'rodzaj \xc5\xbar\xc3\xb3d\xc5\x82a',
'verbose_name_plural': b'rodzaje \xc5\xbar\xc3\xb3de\xc5\x82',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Status_Korekty',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
],
options={
'verbose_name': b'status korekty',
'verbose_name_plural': b'statusy korekty',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='praca_habilitacyjna',
name='status_korekty',
field=models.ForeignKey(on_delete=models.CASCADE, default=1, to='bpp.Status_Korekty'),
preserve_default=True,
),
migrations.AddField(
model_name='praca_doktorska',
name='status_korekty',
field=models.ForeignKey(on_delete=models.CASCADE, default=1, to='bpp.Status_Korekty'),
preserve_default=True,
),
migrations.AddField(
model_name='patent',
name='status_korekty',
field=models.ForeignKey(on_delete=models.CASCADE, default=1, to='bpp.Status_Korekty'),
preserve_default=True,
),
migrations.CreateModel(
name='Typ_KBN',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'ordering': ['nazwa'],
'verbose_name': b'typ KBN',
'verbose_name_plural': b'typy KBN',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='praca_habilitacyjna',
name='typ_kbn',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ KBN', to='bpp.Typ_KBN'),
preserve_default=True,
),
migrations.AddField(
model_name='praca_doktorska',
name='typ_kbn',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ KBN', to='bpp.Typ_KBN'),
preserve_default=True,
),
migrations.CreateModel(
name='Typ_Odpowiedzialnosci',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'ordering': [b'nazwa'],
'verbose_name': b'typ odpowiedzialno\xc5\x9bci autora',
'verbose_name_plural': b'typy odpowiedzialno\xc5\x9bci autor\xc3\xb3w',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='patent_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ odpowiedzialno\xc5\x9bci', to='bpp.Typ_Odpowiedzialnosci'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='patent_autor',
unique_together=set([('rekord', 'autor', 'typ_odpowiedzialnosci', 'kolejnosc')]),
),
migrations.CreateModel(
name='Tytul',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
],
options={
'verbose_name': b'tytu\xc5\x82',
'verbose_name_plural': b'tytu\xc5\x82y',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='autor',
name='tytul',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Tytul', null=True),
preserve_default=True,
),
migrations.CreateModel(
name='Uczelnia',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=128)),
('nazwa_dopelniacz_field', models.CharField(max_length=512, null=True, verbose_name='Nazwa w dope\u0142niaczu', blank=True)),
('slug', autoslug.fields.AutoSlugField(unique=True, editable=False)),
('logo_www', models.ImageField(help_text=b'Plik w formacie bitmapowym, np. JPEG lub PNG,\n w rozdzielczo\xc5\x9bci maks. 100x100', upload_to=b'logo', null=True, verbose_name=b'Logo na stron\xc4\x99 WWW', blank=True)),
('logo_svg', models.FileField(upload_to=b'logo_svg', null=True, verbose_name=b'Logo wektorowe (SVG)', blank=True)),
('favicon_ico', models.FileField(upload_to=b'favicon', null=True, verbose_name=b'Ikona ulubionych (favicon)', blank=True)),
],
options={
'verbose_name': b'uczelnia',
'verbose_name_plural': b'uczelnie',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Wydawnictwo_Ciagle',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('opis_bibliograficzny_cache', models.TextField(default=b'')),
('opis_bibliograficzny_autorzy_cache', ArrayField(models.TextField())),
('opis_bibliograficzny_zapisani_autorzy_cache', models.TextField(default=b'')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('issn', models.CharField(max_length=32, null=True, verbose_name=b'ISSN', blank=True)),
('e_issn', models.CharField(max_length=32, null=True, verbose_name=b'e-ISSN', blank=True)),
('tytul_oryginalny', models.TextField(verbose_name=b'Tytu\xc5\x82 oryginalny', db_index=True)),
('tytul', models.TextField(db_index=True, null=True, verbose_name=b'Tytu\xc5\x82', blank=True)),
('rok', models.IntegerField(help_text=b'Rok uwzgl\xc4\x99dniany przy wyszukiwaniu i raportach\n KBN/MNiSW)', db_index=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'Adres WWW', blank=True)),
('afiliowana', models.BooleanField(default=False)),
('recenzowana', models.BooleanField(default=False)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('weryfikacja_punktacji', models.BooleanField(default=False)),
('informacje', models.TextField(null=True, verbose_name=b'Informacje', blank=True)),
('szczegoly', models.CharField(help_text=b'Np. str. 23-45', max_length=512, null=True, verbose_name=b'Szczeg\xc3\xb3\xc5\x82y', blank=True)),
('uwagi', models.TextField(db_index=True, null=True, blank=True)),
('slowa_kluczowe', models.TextField(null=True, verbose_name=b'S\xc5\x82owa kluczowe', blank=True)),
('utworzono', models.DateTimeField(default=datetime.datetime(1970, 1, 1, 0, 0), verbose_name=b'Utworzono', auto_now_add=True)),
('search_index', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('tytul_oryginalny_sort', models.TextField(default=b'', db_index=True)),
('uzupelnij_punktacje', models.BooleanField(default=False)),
('charakter_formalny', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Charakter formalny', to='bpp.Charakter_Formalny')),
('jezyk', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'J\xc4\x99zyk', to='bpp.Jezyk')),
('status_korekty', models.ForeignKey(on_delete=models.CASCADE, default=1, to='bpp.Status_Korekty')),
('typ_kbn', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ KBN', to='bpp.Typ_KBN')),
],
options={
'verbose_name': b'wydawnictwo ci\xc4\x85g\xc5\x82e',
'verbose_name_plural': b'wydawnictwa ci\xc4\x85g\xc5\x82e',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Wydawnictwo_Ciagle_Autor',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('kolejnosc', models.IntegerField(default=0, verbose_name=b'Kolejno\xc5\x9b\xc4\x87')),
('zapisany_jako', models.CharField(max_length=512)),
('autor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
('jednostka', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Jednostka')),
],
options={
'ordering': (b'kolejnosc',),
'verbose_name': b'powi\xc4\x85zanie autora z wyd. ci\xc4\x85g\xc5\x82ym',
'verbose_name_plural': b'powi\xc4\x85zania autor\xc3\xb3w z wyd. ci\xc4\x85g\xc5\x82ymi',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='wydawnictwo_ciagle',
name='autorzy',
field=models.ManyToManyField(to='bpp.Autor', through='bpp.Wydawnictwo_Ciagle_Autor'),
preserve_default=True,
),
migrations.AddField(
model_name='wydawnictwo_ciagle_autor',
name='rekord',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Wydawnictwo_Ciagle'),
preserve_default=True,
),
migrations.AddField(
model_name='wydawnictwo_ciagle_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ odpowiedzialno\xc5\x9bci', to='bpp.Typ_Odpowiedzialnosci'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='wydawnictwo_ciagle_autor',
unique_together=set([('rekord', 'autor', 'typ_odpowiedzialnosci', 'kolejnosc')]),
),
migrations.CreateModel(
name='Wydawnictwo_Zwarte',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('opis_bibliograficzny_cache', models.TextField(default=b'')),
('opis_bibliograficzny_autorzy_cache', ArrayField(models.TextField())),
('opis_bibliograficzny_zapisani_autorzy_cache', models.TextField(default=b'')),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('isbn', models.CharField(max_length=64, null=True, verbose_name=b'ISBN', blank=True)),
('e_isbn', models.CharField(max_length=64, null=True, verbose_name=b'E-ISBN', blank=True)),
('tytul_oryginalny', models.TextField(verbose_name=b'Tytu\xc5\x82 oryginalny', db_index=True)),
('tytul', models.TextField(db_index=True, null=True, verbose_name=b'Tytu\xc5\x82', blank=True)),
('rok', models.IntegerField(help_text=b'Rok uwzgl\xc4\x99dniany przy wyszukiwaniu i raportach\n KBN/MNiSW)', db_index=True)),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'Adres WWW', blank=True)),
('afiliowana', models.BooleanField(default=False)),
('recenzowana', models.BooleanField(default=False)),
('impact_factor', models.DecimalField(default=Decimal('0.000'), max_digits=6, decimal_places=3, db_index=True)),
('punkty_kbn', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punkty KBN', max_digits=6, decimal_places=2, db_index=True)),
('index_copernicus', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Index Copernicus', max_digits=6, decimal_places=2, db_index=True)),
('punktacja_wewnetrzna', models.DecimalField(default=Decimal('0.00'), verbose_name=b'Punktacja wewn\xc4\x99trzna', max_digits=6, decimal_places=2, db_index=True)),
('kc_impact_factor', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', null=True, verbose_name=b'KC: Impact factor', db_index=True)),
('kc_punkty_kbn', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Punkty KBN', db_index=True)),
('kc_index_copernicus', models.DecimalField(decimal_places=2, default=None, max_digits=6, blank=True, help_text=b'Je\xc5\xbceli wpiszesz\n warto\xc5\x9b\xc4\x87 w to pole, to zostanie ona u\xc5\xbcyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', null=True, verbose_name=b'KC: Index Copernicus')),
('weryfikacja_punktacji', models.BooleanField(default=False)),
('informacje', models.TextField(null=True, verbose_name=b'Informacje', blank=True)),
('szczegoly', models.CharField(help_text=b'Np. str. 23-45', max_length=512, null=True, verbose_name=b'Szczeg\xc3\xb3\xc5\x82y', blank=True)),
('uwagi', models.TextField(db_index=True, null=True, blank=True)),
('slowa_kluczowe', models.TextField(null=True, verbose_name=b'S\xc5\x82owa kluczowe', blank=True)),
('utworzono', models.DateTimeField(default=datetime.datetime(1970, 1, 1, 0, 0), verbose_name=b'Utworzono', auto_now_add=True)),
('search_index', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('tytul_oryginalny_sort', models.TextField(default=b'', db_index=True)),
('miejsce_i_rok', models.CharField(help_text=b'Przyk\xc5\x82adowo:\n Warszawa 2012. Wpisz prosz\xc4\x99 najpierw miejsce potem rok; oddziel\n spacj\xc4\x85.', max_length=256, null=True, blank=True)),
('wydawnictwo', models.CharField(max_length=256, null=True, blank=True)),
('redakcja', models.TextField(null=True, blank=True)),
('liczba_znakow_wydawniczych', models.IntegerField(null=True, verbose_name=b'Liczba znak\xc3\xb3w wydawniczych', blank=True)),
('charakter_formalny', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Charakter formalny', to='bpp.Charakter_Formalny')),
('jezyk', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'J\xc4\x99zyk', to='bpp.Jezyk')),
('status_korekty', models.ForeignKey(on_delete=models.CASCADE, default=1, to='bpp.Status_Korekty')),
('typ_kbn', models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ KBN', to='bpp.Typ_KBN')),
('wydawnictwo_nadrzedne', models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Wydawnictwo_Zwarte', help_text=b'Je\xc5\xbceli dodajesz rozdzia\xc5\x82,\n tu wybierz prac\xc4\x99, w ramach kt\xc3\xb3rej dany rozdzia\xc5\x82 wyst\xc4\x99puje.', null=True)),
],
options={
'verbose_name': b'wydawnictwo zwarte',
'verbose_name_plural': b'wydawnictwa zwarte',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='opi_2012_tytul_cache',
name='wydawnictwo_zwarte',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Wydawnictwo_Zwarte'),
preserve_default=True,
),
migrations.CreateModel(
name='Wydawnictwo_Zwarte_Autor',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('kolejnosc', models.IntegerField(default=0, verbose_name=b'Kolejno\xc5\x9b\xc4\x87')),
('zapisany_jako', models.CharField(max_length=512)),
('autor', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Autor')),
('jednostka', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Jednostka')),
],
options={
'ordering': (b'kolejnosc',),
'verbose_name': b'powi\xc4\x85zanie autora z wyd. zwartym',
'verbose_name_plural': b'powi\xc4\x85zania autor\xc3\xb3w z wyd. zwartymi',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='wydawnictwo_zwarte',
name='autorzy',
field=models.ManyToManyField(to='bpp.Autor', through='bpp.Wydawnictwo_Zwarte_Autor'),
preserve_default=True,
),
migrations.AddField(
model_name='wydawnictwo_zwarte_autor',
name='rekord',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Wydawnictwo_Zwarte'),
preserve_default=True,
),
migrations.AddField(
model_name='wydawnictwo_zwarte_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Typ odpowiedzialno\xc5\x9bci', to='bpp.Typ_Odpowiedzialnosci'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='wydawnictwo_zwarte_autor',
unique_together=set([('rekord', 'autor', 'typ_odpowiedzialnosci', 'kolejnosc')]),
),
migrations.CreateModel(
name='Wydzial',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('rozpoczecie_funkcjonowania', models.DateField(null=True, verbose_name=b'Rozpocz\xc4\x99cie funkcjonowania', blank=True)),
('zakonczenie_funkcjonowania', models.DateField(null=True, verbose_name=b'Zako\xc5\x84czenie funkcjonowania', blank=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
('skrot', models.CharField(unique=True, max_length=4, verbose_name=b'Skr\xc3\xb3t')),
('opis', models.TextField(null=True, blank=True)),
('slug', autoslug.fields.AutoSlugField(unique=True, max_length=512, editable=False)),
('kolejnosc', models.IntegerField(default=0, verbose_name=b'Kolejno\xc5\x9b\xc4\x87')),
('widoczny', models.BooleanField(default=True)),
('uczelnia', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Uczelnia')),
],
options={
'ordering': [b'kolejnosc', b'skrot'],
'verbose_name': b'wydzia\xc5\x82',
'verbose_name_plural': b'wydzia\xc5\x82y',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='opi_2012_afiliacja_do_wydzialu',
name='wydzial',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Wydzial'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='opi_2012_afiliacja_do_wydzialu',
unique_together=set([('autor', 'wydzial', 'rok')]),
),
migrations.AddField(
model_name='jednostka',
name='wydzial',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'Wydzia\xc5\x82', to='bpp.Wydzial'),
preserve_default=True,
),
migrations.CreateModel(
name='Zasieg_Zrodla',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
],
options={
'verbose_name': b'zasi\xc4\x99g \xc5\xbar\xc3\xb3d\xc5\x82a',
'verbose_name_plural': b'zasi\xc4\x99g \xc5\xbar\xc3\xb3de\xc5\x82',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Zrodlo',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ostatnio_zmieniony', models.DateTimeField(auto_now=True, auto_now_add=True, null=True, db_index=True)),
('adnotacje', models.TextField(help_text=b'Pole do u\xc5\xbcytku wewn\xc4\x99trznego -\n wpisane tu informacje nie s\xc4\x85 wy\xc5\x9bwietlane na stronach WWW dost\xc4\x99pnych\n dla u\xc5\xbcytkownik\xc3\xb3w ko\xc5\x84cowych.', null=True, db_index=True, blank=True)),
('issn', models.CharField(max_length=32, null=True, verbose_name=b'ISSN', blank=True)),
('e_issn', models.CharField(max_length=32, null=True, verbose_name=b'e-ISSN', blank=True)),
('nazwa', models.CharField(max_length=1024, db_index=True)),
('skrot', models.CharField(max_length=512, verbose_name=b'Skr\xc3\xb3t')),
('www', models.URLField(max_length=1024, null=True, verbose_name=b'WWW', blank=True)),
('poprzednia_nazwa', models.CharField(db_index=True, max_length=1024, null=True, verbose_name=b'Poprzedni tytu\xc5\x82', blank=True)),
('search', SearchVectorField(default=b'', serialize=False, null=True, editable=False, db_index=True)),
('slug', autoslug.fields.AutoSlugField(unique=True, editable=False)),
('rodzaj', models.ForeignKey(on_delete=models.CASCADE, to='bpp.Rodzaj_Zrodla')),
('zasieg', models.ForeignKey(on_delete=models.CASCADE, default=None, blank=True, to='bpp.Zasieg_Zrodla', null=True)),
],
options={
'ordering': [b'nazwa'],
'verbose_name': b'\xc5\xbar\xc3\xb3d\xc5\x82o',
'verbose_name_plural': b'\xc5\xbar\xc3\xb3d\xc5\x82a',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='wydawnictwo_ciagle',
name='zrodlo',
field=models.ForeignKey(on_delete=models.CASCADE, verbose_name=b'\xc5\xb9r\xc3\xb3d\xc5\x82o', to='bpp.Zrodlo', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='redakcja_zrodla',
name='zrodlo',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Zrodlo'),
preserve_default=True,
),
migrations.AddField(
model_name='punktacja_zrodla',
name='zrodlo',
field=models.ForeignKey(on_delete=models.CASCADE, to='bpp.Zrodlo'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='punktacja_zrodla',
unique_together=set([('zrodlo', 'rok')]),
),
migrations.CreateModel(
name='Zrodlo_Informacji',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('nazwa', models.CharField(unique=True, max_length=512)),
],
options={
'verbose_name': b'\xc5\xbar\xc3\xb3d\xc5\x82o informacji o bibliografii',
'verbose_name_plural': b'\xc5\xbar\xc3\xb3d\xc5\x82a informacji o bibliografii',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='wydawnictwo_zwarte',
name='informacja_z',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Zrodlo_Informacji', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='wydawnictwo_ciagle',
name='informacja_z',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Zrodlo_Informacji', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='praca_habilitacyjna',
name='informacja_z',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Zrodlo_Informacji', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='praca_doktorska',
name='informacja_z',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Zrodlo_Informacji', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='patent',
name='informacja_z',
field=models.ForeignKey(on_delete=models.CASCADE, blank=True, to='bpp.Zrodlo_Informacji', null=True),
preserve_default=True,
),
RunSQL("CREATE OR REPLACE LANGUAGE plpython3u"),
RunPython(lambda *args, **kw: load_custom_sql("0001_indeksy")),
RunPython(lambda *args, **kw: load_custom_sql("0001_tytul_oryginalny_sort_triggers")),
RunPython(lambda *args, **kw: load_custom_sql("0001_widoki_kronika")),
RunPython(lambda *args, **kw: load_custom_sql("0001_widoki_sumy")),
RunPython(lambda *args, **kw: load_custom_sql("0001_widoki_rekord")),
RunPython(lambda *args, **kw: load_custom_sql("0001_widoki_autorzy")),
RunPython(lambda *args, **kw: load_custom_sql("0001_fulltext")),
RunPython(lambda *args, **kw: load_custom_sql("0001_cache_init")),
RunPython(lambda *args, **kw: load_custom_sql("0001_cache_functions"))
]
| 70.786353 | 408 | 0.625176 | 7,484 | 63,283 | 5.127873 | 0.066943 | 0.063918 | 0.047216 | 0.035151 | 0.883055 | 0.863408 | 0.839357 | 0.816843 | 0.788832 | 0.762983 | 0 | 0.02733 | 0.236193 | 63,283 | 893 | 409 | 70.865622 | 0.766634 | 0.000332 | 0 | 0.723669 | 0 | 0.041903 | 0.287812 | 0.053588 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.001133 | 0.011325 | 0 | 0.014723 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
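A note on the RunSQL/RunPython tail of the migration record above: each RunPython call wraps load_custom_sql in a lambda and passes no reverse_code, so the migration cannot be unapplied. A minimal sketch of the reversible form, assuming load_custom_sql keeps the signature used above (the forwards function and Migration class below are illustrative, not the original file):

from django.db import migrations

def load_custom_sql(name):
    # Hypothetical stand-in for the helper imported by the original migration.
    ...

def forwards(apps, schema_editor):
    load_custom_sql("0001_indeksy")

class Migration(migrations.Migration):
    operations = [
        # reverse_code=RunPython.noop lets the migration be rolled back cleanly.
        migrations.RunPython(forwards, reverse_code=migrations.RunPython.noop),
    ]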
0ecd30c55b742b8eed8c07b955e2482ec91cc0c1 | 18,759 | py | Python | tests/unit/cli/commands/test_vimdriver.py | rajahaidar/lmctl | 48984047d3656eca51a382bdfb936304cf48d5aa | [
"Apache-2.0"
] | null | null | null | tests/unit/cli/commands/test_vimdriver.py | rajahaidar/lmctl | 48984047d3656eca51a382bdfb936304cf48d5aa | [
"Apache-2.0"
] | null | null | null | tests/unit/cli/commands/test_vimdriver.py | rajahaidar/lmctl | 48984047d3656eca51a382bdfb936304cf48d5aa | [
"Apache-2.0"
] | null | null | null | import os
import tests.unit.cli.commands.command_testing as command_testing
import lmctl.drivers.lm.base as lm_drivers
import lmctl.cli.commands.vimdriver as vimdriver_cmds
from unittest.mock import patch
from tests.common.simulations.lm_simulator import LmSimulator
class TestVimDriverCommands(command_testing.CommandTestCase):
def setUp(self):
super().setUp()
# Created simulated LM session when requested
self.lm_sim = LmSimulator().start()
create_lm_session_patcher = patch('lmctl.cli.ctlmgmt.create_lm_session')
self.mock_create_lm_session = create_lm_session_patcher.start()
self.mock_create_lm_session.return_value = self.lm_sim.as_mocked_session()
self.addCleanup(create_lm_session_patcher.stop)
def test_add_with_defaults(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|--------------------------------------+----------------------+-------------------------------|'
expected_output += '\n| {0} | Openstack | http://mockdriver.example.com |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.add_vim_driver.assert_called_once_with({'baseUri': 'http://mockdriver.example.com', 'infrastructureType': 'Openstack'})
def test_add_with_type(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '--type', 'Kubernetes'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|--------------------------------------+----------------------+-------------------------------|'
expected_output += '\n| {0} | Kubernetes | http://mockdriver.example.com |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.add_vim_driver.assert_called_once_with({'baseUri': 'http://mockdriver.example.com', 'infrastructureType': 'Kubernetes'})
def test_add_with_certificate(self):
certificate_pem_file = os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir, 'resources', 'certificate.pem')
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '--type', 'Kubernetes', '--certificate', certificate_pem_file])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|--------------------------------------+----------------------+-------------------------------|'
expected_output += '\n| {0} | Kubernetes | http://mockdriver.example.com |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.add_vim_driver.assert_called_once_with({'baseUri': 'http://mockdriver.example.com', 'infrastructureType': 'Kubernetes', 'certificate': '-----BEGIN CERTIFICATE-----\\nMIIDDzCCAfegAwIBAgIQXgj9XfKMhQRCDLhG4/BGSDANBgkqhkiG9w0BAQsFADAj\\nMSEwHwYDVQQDExhhbnNpYmxlLWxpZmVjeWNsZS1kcml2ZXIwHhcNMjAwMTE1MDcz\\nMzQzWhcNMzAwMTEyMDczMzQzWjAjMSEwHwYDVQQDExhhbnNpYmxlLWxpZmVjeWNs\\nZS1kcml2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAk6+0/uLm\\n2H8KQmApSgWGtehVUIyq2iIDxfQrRkF3HiS/9UzMKVdLaafX+vPvkJniLDs162Ch\\ngkO8JKejmwopO2pzYUFS/yhnCS8Ys+BMjYfX+5Wpuq/mBVQODuBVJV3n/evheuj1\\nr8t97kPbgNTQxygSAI/C/QdzbuC6GG4cw9seiM/1kVEqb1D9z53DvVftq6yJELDj\\nbItJiY57reDfUj3raUh7GfNt68d1DSRBMYGmyu0o7uHVEL5PCeqRJpOmiL7DyoH6\\nTYg7QEjEFdao4X3ohWdw9rxxO4PKw5g5zL5yjFqygBH0XkZ/9TVefpcAn5d/uaTY\\n4bv36qT3BF6LAgMBAAGjPzA9MA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggr\\nBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADANBgkqhkiG9w0BAQsFAAOC\\nAQEAhzczPnzCCWCXg8O5K6CmIqXPyOrNEobegTifjdGdGBayFYFfp2ybLJX+XK8O\\nJiuoOqY/ti0ZkBFiV7JbfmUl4uRTEqBdax5sU0UlR6YxyRbiSM152uPUjYQwZkMM\\nfSqPjcvIoLCcznHe0z7ECgfJPjgti9YZlnhBTGW3WDelhXgQyU94A+c7NLBn2cK5\\nVfcvyunmSiAUVzSjmjpGBZ/xX2I4JjmteLrr8WsxSllg8DAo0AI+7jeecty3BG4q\\ne4p06LPHR/j8yBaHyMHweAolrn01cZXP7h5aRiE3xPRBK/Rccr6xYDTgqBLdwhfX\\nkF8OyMZPlY0Jf7/zbjbHi/D93A==\\n-----END CERTIFICATE-----'})
def test_add_with_missing_certificate(self):
certificate_pem_file = 'certificate.pem'
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '--type', 'Kubernetes', '--certificate', certificate_pem_file])
self.assert_has_system_exit(result)
self.assert_output(result, "Error: reading certificate: [Errno 2] No such file or directory: 'certificate.pem'")
def test_add_with_config(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|--------------------------------------+----------------------+-------------------------------|'
expected_output += '\n| {0} | Openstack | http://mockdriver.example.com |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_add_with_pwd(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '--pwd', 'secret'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|--------------------------------------+----------------------+-------------------------------|'
expected_output += '\n| {0} | Openstack | http://mockdriver.example.com |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_add_with_output_json_format(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '-f', 'json'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = '{'
expected_output += '\n \"infrastructureType\": \"Openstack\",'
expected_output += '\n \"baseUri\": \"http://mockdriver.example.com\",'
expected_output += '\n \"id\": \"{0}\"'.format(expected_id)
expected_output += '\n}'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_add_with_output_yaml_format(self):
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com', '-f', 'yaml'])
self.assert_no_errors(result)
expected_id = None
for vim_id, vim_driver in self.lm_sim.vim_drivers.items():
expected_id = vim_id
expected_output = 'infrastructureType: Openstack'
expected_output += '\nbaseUri: http://mockdriver.example.com'
expected_output += '\nid: {0}\n'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_add_handles_lm_driver_error(self):
self.mock_create_lm_session.return_value.vim_driver_mgmt_driver.add_vim_driver.side_effect = lm_drivers.LmDriverException('Mocked error')
result = self.runner.invoke(vimdriver_cmds.add, ['TestEnv', '--url', 'http://mockdriver.example.com'])
self.assert_has_system_exit(result)
expected_output = 'LM error occurred: Mocked error'
self.assert_output(result, expected_output)
def test_delete_with_defaults(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id})
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', vim_driver_id])
self.assert_no_errors(result)
expected_output = 'Deleting VIM driver: {0}...'.format(vim_driver_id)
expected_output += '\nDeleted VIM driver: {0}'.format(vim_driver_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.delete_vim_driver.assert_called_once_with(vim_driver_id)
def test_delete_with_config(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id})
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', vim_driver_id, '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_output = 'Deleting VIM driver: {0}...'.format(vim_driver_id)
expected_output += '\nDeleted VIM driver: {0}'.format(vim_driver_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_delete_with_pwd(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id})
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', vim_driver_id, '--pwd', 'secret'])
self.assert_no_errors(result)
expected_output = 'Deleting VIM driver: {0}...'.format(vim_driver_id)
expected_output += '\nDeleted VIM driver: {0}'.format(vim_driver_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_delete_handles_lm_driver_error(self):
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', '987'])
self.assert_has_system_exit(result)
expected_output = 'Deleting VIM driver: 987...'
expected_output += '\nLM error occurred: No VIM driver with id 987'
self.assert_output(result, expected_output)
def test_delete_by_type(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack'})
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', '--type', 'Openstack'])
self.assert_no_errors(result)
expected_output = 'Found VIM driver matching type \'Openstack\'. Id: 123'
expected_output += '\nDeleting VIM driver: {0}...'.format(vim_driver_id)
expected_output += '\nDeleted VIM driver: {0}'.format(vim_driver_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.get_vim_driver_by_type.assert_called_once_with('Openstack')
mock_vim_mgmt_driver.delete_vim_driver.assert_called_once_with(vim_driver_id)
def test_delete_by_type_not_found(self):
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv', '--type', 'Openstack'])
self.assert_has_system_exit(result)
expected_output = 'LM error occurred: No VIM driver with infrastructure type Openstack'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_delete_without_id_or_type_fails(self):
result = self.runner.invoke(vimdriver_cmds.delete, ['TestEnv'])
self.assert_has_system_exit(result)
expected_output = 'Error: Must specify driver-id argument or type option'
self.assert_output(result, expected_output)
def test_get_with_defaults(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', vim_driver_id])
self.assert_no_errors(result)
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|------+----------------------+-------------|'
expected_output += '\n| 123 | Openstack | example.com |'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.get_vim_driver.assert_called_once_with(vim_driver_id)
def test_get_with_config(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', vim_driver_id, '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|------+----------------------+-------------|'
expected_output += '\n| 123 | Openstack | example.com |'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_get_with_pwd(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', vim_driver_id, '--pwd', 'secret'])
self.assert_no_errors(result)
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|------+----------------------+-------------|'
expected_output += '\n| 123 | Openstack | example.com |'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_get_handles_lm_driver_error(self):
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', '987'])
self.assert_has_system_exit(result)
expected_output = 'LM error occurred: No VIM driver with id 987'
self.assert_output(result, expected_output)
def test_get_by_type(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', '--type', 'Openstack'])
self.assert_no_errors(result)
expected_output = '| id | infrastructureType | baseUri |'
expected_output += '\n|------+----------------------+-------------|'
expected_output += '\n| 123 | Openstack | example.com |'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_vim_mgmt_driver = self.mock_create_lm_session.return_value.vim_driver_mgmt_driver
mock_vim_mgmt_driver.get_vim_driver_by_type.assert_called_once_with('Openstack')
def test_get_by_type_not_found(self):
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', '--type', 'Openstack'])
self.assert_has_system_exit(result)
expected_output = 'LM error occurred: No VIM driver with infrastructure type Openstack'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_get_without_id_or_type_fails(self):
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv'])
self.assert_has_system_exit(result)
expected_output = 'Error: Must specify driver-id argument or type option'
self.assert_output(result, expected_output)
def test_get_with_output_json_format(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', vim_driver_id, '-f', 'json'])
self.assert_no_errors(result)
expected_output = '{'
expected_output += '\n \"id\": \"123\",'
expected_output += '\n \"infrastructureType\": \"Openstack\",'
expected_output += '\n \"baseUri\": \"example.com\"'
expected_output += '\n}'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_get_with_output_yaml_format(self):
vim_driver_id = '123'
self.lm_sim.add_vim_driver({'id': vim_driver_id, 'infrastructureType': 'Openstack', 'baseUri': 'example.com'})
result = self.runner.invoke(vimdriver_cmds.get, ['TestEnv', vim_driver_id, '-f', 'yaml'])
self.assert_no_errors(result)
expected_output = 'id: \'123\''
expected_output += '\ninfrastructureType: Openstack'
expected_output += '\nbaseUri: example.com\n'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None) | 66.521277 | 1,326 | 0.670558 | 2,163 | 18,759 | 5.464632 | 0.082293 | 0.066244 | 0.045601 | 0.039256 | 0.815567 | 0.799239 | 0.787817 | 0.784433 | 0.776227 | 0.758968 | 0 | 0.013166 | 0.182099 | 18,759 | 282 | 1,327 | 66.521277 | 0.757218 | 0.002292 | 0 | 0.678571 | 0 | 0.003968 | 0.302378 | 0.10179 | 0 | 0 | 0 | 0 | 0.305556 | 1 | 0.103175 | false | 0 | 0.02381 | 0 | 0.130952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
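The tests above all follow one pattern: patch lmctl.cli.ctlmgmt.create_lm_session to return a simulated session, invoke the command through a runner, then assert on the rendered output and on the mocked session. A self-contained sketch of that pattern, assuming a Click-style runner as the tests suggest (the ping command and create_session helper are illustrative, not part of lmctl):

import click
from click.testing import CliRunner
from unittest.mock import patch

def create_session(environment):
    # Stand-in for a session factory that would normally hit the network.
    raise RuntimeError("should be patched in tests")

@click.command()
@click.argument("environment")
def ping(environment):
    create_session(environment)          # looked up at call time, so patchable
    click.echo(f"Connected to {environment}")

def test_ping_uses_patched_session():
    with patch(f"{__name__}.create_session") as mock_create:
        result = CliRunner().invoke(ping, ["TestEnv"])
    assert result.exit_code == 0
    assert "Connected to TestEnv" in result.output
    mock_create.assert_called_once_with("TestEnv")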
0ef117a8af9892593b9ed3d123e666a752aca86d | 209 | py | Python | exp_configs/__init__.py | jqueguiner/covid19_weak_supervision | 229bdd55647b822d869b2cea76733a4615ebf315 | [
"Apache-2.0"
] | null | null | null | exp_configs/__init__.py | jqueguiner/covid19_weak_supervision | 229bdd55647b822d869b2cea76733a4615ebf315 | [
"Apache-2.0"
] | null | null | null | exp_configs/__init__.py | jqueguiner/covid19_weak_supervision | 229bdd55647b822d869b2cea76733a4615ebf315 | [
"Apache-2.0"
] | null | null | null | from . import baseline_exps, weakly_exps, weakly_exps_pau
EXP_GROUPS = {}
EXP_GROUPS.update(weakly_exps.EXP_GROUPS)
EXP_GROUPS.update(weakly_exps_pau.EXP_GROUPS)
| 29.857143 | 57 | 0.851675 | 34 | 209 | 4.764706 | 0.264706 | 0.388889 | 0.240741 | 0.296296 | 0.802469 | 0.802469 | 0.802469 | 0.802469 | 0.802469 | 0.802469 | 0 | 0 | 0.062201 | 209 | 6 | 58 | 34.833333 | 0.826531 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
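The registry above is built by merging per-module EXP_GROUPS dicts with dict.update, so later calls silently overwrite earlier entries that share a key. A small worked example of that merge semantics (the group names and configs below are made up):

weakly = {"covid_seg": [{"lr": 1e-3}]}
weakly_pau = {"covid_seg": [{"lr": 1e-4}], "covid_seg_pau": [{"lr": 1e-3}]}

registry = {}
registry.update(weakly)
registry.update(weakly_pau)

assert registry["covid_seg"] == [{"lr": 1e-4}]   # the later update wins
assert sorted(registry) == ["covid_seg", "covid_seg_pau"]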
1625feea2777f77a3c1efe0defbb1b7145494405 | 225 | py | Python | clinica/iotools/abstract_converter.py | MatthieuJoulot/clinica | c82f8ba6fd3d3c11076cb175ada13a4810c39d8b | [
"MIT"
] | 135 | 2019-05-17T14:16:40.000Z | 2022-03-19T03:08:05.000Z | clinica/iotools/abstract_converter.py | MatthieuJoulot/clinica | c82f8ba6fd3d3c11076cb175ada13a4810c39d8b | [
"MIT"
] | 391 | 2019-06-03T09:32:17.000Z | 2022-03-31T15:10:26.000Z | clinica/iotools/abstract_converter.py | MatthieuJoulot/clinica | c82f8ba6fd3d3c11076cb175ada13a4810c39d8b | [
"MIT"
] | 57 | 2019-05-20T08:38:01.000Z | 2022-02-11T12:14:32.000Z | import abc
class Converter(abc.ABC):
    """Interface for dataset converters: imaging data and clinical data."""

    @abc.abstractmethod
    def convert_images(self, src, dst):
        pass

    @abc.abstractmethod
    def convert_clinical_data(self, src, dst):
        pass
| 16.071429 | 46 | 0.662222 | 26 | 225 | 5.461538 | 0.615385 | 0.239437 | 0.28169 | 0.380282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.262222 | 225 | 13 | 47 | 17.307692 | 0.855422 | 0 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.222222 | 0.111111 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
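Converter only declares the interface; with an ABC base class, a subclass must implement both abstract methods before it can be instantiated. A minimal sketch of how such a converter would be subclassed (DummyConverter and its print bodies are illustrative, not one of Clinica's real converters):

import abc

class Converter(abc.ABC):
    @abc.abstractmethod
    def convert_images(self, src, dst): ...

    @abc.abstractmethod
    def convert_clinical_data(self, src, dst): ...

class DummyConverter(Converter):
    def convert_images(self, src, dst):
        print(f"imaging: {src} -> {dst}")

    def convert_clinical_data(self, src, dst):
        print(f"clinical: {src} -> {dst}")

DummyConverter().convert_images("raw_dataset/", "bids/")   # works
# Converter() would raise TypeError: can't instantiate abstract class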
168282505a9bf38366aa698ded83a76e08ad832b | 5,573 | py | Python | solarwindpy/fitfunctions/power_laws.py | blalterman/SolarWindPy | c906f1ea1b833fedc717d906d14d2531e6c03d66 | [
"BSD-3-Clause"
] | null | null | null | solarwindpy/fitfunctions/power_laws.py | blalterman/SolarWindPy | c906f1ea1b833fedc717d906d14d2531e6c03d66 | [
"BSD-3-Clause"
] | 4 | 2020-03-24T16:53:54.000Z | 2022-03-12T00:58:31.000Z | solarwindpy/fitfunctions/power_laws.py | blalterman/SolarWindPy | c906f1ea1b833fedc717d906d14d2531e6c03d66 | [
"BSD-3-Clause"
] | 1 | 2021-11-24T23:10:32.000Z | 2021-11-24T23:10:32.000Z | #!/usr/bin/env python
r""":py:mod:`Exponential` and similar `FitFunction` subclasses.
"""
import pdb # noqa: F401
from .core import FitFunction
class PowerLaw(FitFunction):
def __init__(self, xobs, yobs, **kwargs):
super().__init__(xobs, yobs, **kwargs)
@property
def function(self):
def power_law(x, A, b):
return A * (x ** b)
return power_law
@property
def p0(self):
r"""Calculate the initial guess for the Exponential parameters.
Return
------
p0 : list
            The initial guesses as [A, b].
"""
assert self.sufficient_data
# y = self.yobs
# c = 1.0
# try:
# A = y.max()
# except ValueError as e:
# chk = (
# r"zero-size array to reduction operation maximum "
# "which has no identity"
# )
# if e.message.startswith(chk):
# msg = (
# "There is no maximum of a zero-size array. "
# "Please check input data."
# )
# raise ValueError(msg)
p0 = [1, 1]
return p0
@property
def TeX_function(self):
TeX = r"f(x)=A x^b"
return TeX
class PowerLawPlusC(FitFunction):
def __init__(self, xobs, yobs, **kwargs):
super().__init__(xobs, yobs, **kwargs)
@property
def function(self):
def power_law(x, A, b, c):
return (A * (x ** b)) + c
return power_law
@property
def p0(self):
r"""Calculate the initial guess for the Exponential parameters.
Return
------
p0 : list
            The initial guesses as [A, b, c].
"""
assert self.sufficient_data
# y = self.yobs
# c = 1.0
# try:
# A = y.max()
# except ValueError as e:
# chk = (
# r"zero-size array to reduction operation maximum "
# "which has no identity"
# )
# if e.message.startswith(chk):
# msg = (
# "There is no maximum of a zero-size array. "
# "Please check input data."
# )
# raise ValueError(msg)
p0 = [1, 1, 0]
return p0
@property
def TeX_function(self):
TeX = r"f(x)=A x^b + c"
return TeX
class PowerLawOffCenter(FitFunction):
def __init__(self, xobs, yobs, **kwargs):
r""":py:class:`Fitfunction` for a power law centered at (x - x_0) with no constant offset."""
super().__init__(xobs, yobs, **kwargs)
@property
def function(self):
def power_law(x, A, b, x0):
return A * ((x - x0) ** b)
return power_law
@property
def p0(self):
r"""Calculate the initial guess for the Exponential parameters.
Return
------
p0 : list
            The initial guesses as [A, b, x0].
"""
assert self.sufficient_data
# y = self.yobs
# c = 1.0
# try:
# A = y.max()
# except ValueError as e:
# chk = (
# r"zero-size array to reduction operation maximum "
# "which has no identity"
# )
# if e.message.startswith(chk):
# msg = (
# "There is no maximum of a zero-size array. "
# "Please check input data."
# )
# raise ValueError(msg)
p0 = [1, 1, 0]
return p0
@property
def TeX_function(self):
TeX = r"f(x)=A (x - x_0)^b"
return TeX
# class PowerLaw2(FitFunction):
# def __init__(self, xobs, yobs, **kwargs):
# f""":py:class:`Fitfunction` for a power law centered at (x - x_0) with a constant offset.
# """
# super().__init__(xobs, yobs, **kwargs)
# @property
# def function(self):
# def power_law(x, A, b, c, x0):
# return (A * ((x - x0) ** b) + c)
# return power_law
# @property
# def p0(self):
# r"""Calculate the initial guess for the Exponential parameters.
# Return
# ------
# p0 : list
# The initial guesses as [c, A].
# """
# assert self.sufficient_data
# # y = self.yobs
# # c = 1.0
# # try:
# # A = y.max()
# # except ValueError as e:
# # chk = (
# # r"zero-size array to reduction operation maximum "
# # "which has no identity"
# # )
# # if e.message.startswith(chk):
# # msg = (
# # "There is no maximum of a zero-size array. "
# # "Please check input data."
# # )
# # raise ValueError(msg)
# p0 = [1, 1, 1, 1]
# return p0
# @property
# def TeX_function(self):
# TeX = r"f(x)=A (x - x_0)^b + c"
# return TeX
| 27.589109 | 101 | 0.424367 | 572 | 5,573 | 4.043706 | 0.159091 | 0.057069 | 0.048422 | 0.038046 | 0.904021 | 0.90013 | 0.889754 | 0.858625 | 0.858625 | 0.858625 | 0 | 0.016145 | 0.466535 | 5,573 | 201 | 102 | 27.726368 | 0.761857 | 0.604522 | 0 | 0.655172 | 0 | 0 | 0.021505 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 1 | 0.258621 | false | 0 | 0.034483 | 0.051724 | 0.551724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
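The p0 lists above follow the argument order of each model function after x, e.g. [A, b] for PowerLaw. A short sketch of how such a model is typically fit outside the package's FitFunction machinery, assuming a plain scipy.optimize.curve_fit workflow (the sample data below is synthetic):

import numpy as np
from scipy.optimize import curve_fit

def power_law(x, A, b):
    return A * (x ** b)

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = power_law(x, 2.5, -1.3) + rng.normal(0.0, 0.01, x.size)

# p0 is ordered exactly like the function signature after x: [A, b].
popt, pcov = curve_fit(power_law, x, y, p0=[1, 1])
print(popt)   # close to [2.5, -1.3]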
1684b2b4a47d35ca782314c878f393ae273d3625 | 95 | py | Python | src/lib/models/networks/DCN/__init__.py | wisematch/KDMOT | 03be0a148fc5d5a43c13a0427c429305b92e6838 | [
"MIT"
] | null | null | null | src/lib/models/networks/DCN/__init__.py | wisematch/KDMOT | 03be0a148fc5d5a43c13a0427c429305b92e6838 | [
"MIT"
] | null | null | null | src/lib/models/networks/DCN/__init__.py | wisematch/KDMOT | 03be0a148fc5d5a43c13a0427c429305b92e6838 | [
"MIT"
] | null | null | null | from .centernet_deconv import ModulatedDeformConvWithOff
from .centernet_deconv import Fake_DCN | 47.5 | 56 | 0.905263 | 11 | 95 | 7.545455 | 0.636364 | 0.313253 | 0.457831 | 0.60241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073684 | 95 | 2 | 57 | 47.5 | 0.943182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
16a517abb2549bf6b741016bfe3f1e42bedb7770 | 47,460 | py | Python | tests/unittest.py | orsessential/rest_crud | 7a960335a7d2205554289ccdf67199eb825ee9de | [
"MIT"
] | null | null | null | tests/unittest.py | orsessential/rest_crud | 7a960335a7d2205554289ccdf67199eb825ee9de | [
"MIT"
] | null | null | null | tests/unittest.py | orsessential/rest_crud | 7a960335a7d2205554289ccdf67199eb825ee9de | [
"MIT"
] | null | null | null | import unittest
import json
from app import app
from database.db import db
class TestOrder(unittest.TestCase):
def setUp(self):
self.app = app.test_client()
self.db = db.get_db()
def test_empty_response(self):
response = self.app.get('/orders')
self.assertListEqual(response.json, [])
self.assertEqual(response.status_code, 200)
def test_get(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "PAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
response = self.app.get('/orders')
new_payload = response.json[0]
print('----',new_payload['_id'])
self.assertEqual(order_payload['transaction_id'], new_payload['transaction_id'])
self.assertEqual(order_payload['customer_name'], new_payload['customer_name'])
self.assertEqual(200, response.status_code)
def test_get_by_id(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "PAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
response = self.app.get('/orders')
new_payload = response.json[0]
id = new_payload['_id']['$oid']
response = self.app.get('/orders/'+ id)
self.assertEqual(200, response.status_code)
def test_delete_order(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "PAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
response = self.app.get('/orders')
new_payload = response.json[0]
id = new_payload['_id']['$oid']
response = self.app.delete('/orders/'+ id)
self.assertEqual(200, response.status_code)
response = self.app.get('/orders')
self.assertListEqual(response.json, [])
def test_update_item(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "PAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
response = self.app.get('/orders')
new_payload = response.json[0]
id = new_payload['_id']['$oid']
order_payload_update = {"transaction_state": "UNPAID"}
response = self.app.patch('/orders/'+id,
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload_update))
self.assertEqual(200, response.status_code)
response = self.app.get('/orders')
new_payload = response.json[0]
self.assertEqual(order_payload_update['transaction_state'], new_payload['transaction_state'])
def test_add_order(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "PAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
self.assertEqual(200, response.status_code)
def test_update_order(self):
order_payload = {
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"customer_name": "PT. AMARA PRIMATIGA",
"customer_code": "1678593",
"transaction_amount": "70700",
"transaction_discount": "0",
"transaction_payment_type": "29",
"transaction_additional_field": "",
"transaction_state": "UNPAID",
"transaction_code": "CGKFT20200715121",
"transaction_order": 121,
"location_id": "5cecb20b6c49615b174c3e74",
"organization_id": 6,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"transaction_payment_type_name": "Invoice",
"transaction_cash_amount": 0,
"transaction_cash_change": 0,
"customer_attribute": {
"Nama_Sales": "Radit Fitrawikarsa",
"TOP": "14 Hari",
"Jenis_Pelanggan": "B2B"
},
"connote": {
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"connote_number": 1,
"connote_service": "ECO",
"connote_service_price": 70700,
"connote_amount": 70700,
"connote_code": "AWB00100209082020",
"connote_booking_code": "",
"connote_order": 326931,
"connote_state": "PAID",
"connote_state_id": 2,
"zone_code_from": "CGKFT",
"zone_code_to": "SMG",
"surcharge_amount": "",
"transaction_id": "d0090c40-539f-479a-8274-899b9970bddc",
"actual_weight": 20,
"volume_weight": 0,
"chargeable_weight": 20,
"created_at": "2020-07-15T11:11:12+0700",
"updated_at": "2020-07-15T11:11:22+0700",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74",
"connote_total_package": "3",
"connote_surcharge_amount": "0",
"connote_sla_day": "4",
"location_name": "Hub Jakarta Selatan",
"location_type": "HUB",
"source_tariff_db": "tariff_customers",
"id_source_tariff": "1576868",
"pod": "",
"history": []
},
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"origin_data": {
"customer_name": "PT. NARA OKA PRAKARSA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 100, SEMARANG TENGAH 12420",
"customer_email": "info@naraoka.co.id",
"customer_phone": "024-1234567",
"customer_address_detail": "",
"customer_zip_code": "12420",
"zone_code": "CGKFT",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"destination_data": {
"customer_name": "PT AMARIS HOTEL SIMPANG LIMA",
"customer_address": "JL. KH. AHMAD DAHLAN NO. 01, SEMARANG TENGAH",
"customer_email": "",
"customer_phone": "0248453499",
"customer_address_detail": "KOTA SEMARANG SEMARANG TENGAH KARANGKIDUL",
"customer_zip_code": "50241",
"zone_code": "SMG",
"organization_id": 6,
"location_id": "5cecb20b6c49615b174c3e74"
},
"koli_data": [
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.1",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "e2cb6d86-0bb9-409b-a1f0-389ed4f2df2d",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.1"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.2",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 9,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "V WARP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 9,
"koli_id": "3600f10b-4144-4e58-a024-cc3178e7a709",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.2"
},
{
"koli_length": 0,
"awb_url": "https://tracking.mile.app/label/AWB00100209082020.3",
"created_at": "2020-07-15 11:11:13",
"koli_chargeable_weight": 2,
"koli_width": 0,
"koli_surcharge": [],
"koli_height": 0,
"updated_at": "2020-07-15 11:11:13",
"koli_description": "LID HOT CUP",
"koli_formula_id": "",
"connote_id": "f70670b1-c3ef-4caf-bc4f-eefa702092ed",
"koli_volume": 0,
"koli_weight": 2,
"koli_id": "2937bdbf-315e-4c5e-b139-fd39a3dfd15f",
"koli_custom_field": {
"awb_sicepat": "",
"harga_barang": ""
},
"koli_code": "AWB00100209082020.3"
}
],
"custom_field": {
"catatan_tambahan": "JANGAN DI BANTING / DI TINDIH"
},
"currentLocation": {
"name": "Hub Jakarta Selatan",
"code": "JKTS01",
"type": "Agent"
}
}
response = self.app.post('/orders',
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload))
response = self.app.get('/orders')
new_payload = response.json[0]
id = new_payload['_id']['$oid']
order_payload_update = {"transaction_state": "PAID"}
response = self.app.put('/orders/'+id,
headers={"Content-Type": "application/json"},
data=json.dumps(order_payload_update))
self.assertEqual(200, response.status_code)
response = self.app.get('/orders')
new_payload = response.json[0]
self.assertEqual(order_payload_update['transaction_state'], new_payload['transaction_state'])
def tearDown(self):
for collection in self.db.list_collection_names():
self.db.drop_collection(collection)
if __name__ == "__main__":
unittest.main() | 46.302439 | 101 | 0.433375 | 3,754 | 47,460 | 5.205381 | 0.060469 | 0.018423 | 0.024564 | 0.018423 | 0.974566 | 0.971445 | 0.971445 | 0.971445 | 0.964024 | 0.959623 | 0 | 0.144896 | 0.446249 | 47,460 | 1,025 | 102 | 46.302439 | 0.598645 | 0 | 0 | 0.872233 | 0 | 0 | 0.4104 | 0.108342 | 0 | 0 | 0 | 0 | 0.013078 | 1 | 0.009054 | false | 0 | 0.004024 | 0 | 0.014085 | 0.001006 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
16e2468580a8c9bf0a9e5fa65c4b0327f9aa3d10 | 38 | py | Python | tests/api/totp_test.py | felbinger/PythonFlaskLogin | 3f55c2ce358331c1f182ee1a03fe3a13a53e3f69 | [
"MIT"
] | 2 | 2020-07-13T08:26:46.000Z | 2021-05-23T00:13:34.000Z | tests/api/totp_test.py | felbinger/FlaskBasic | 803fc5b07638e7d85eddccd00ca20567e57519f0 | [
"MIT"
] | 1 | 2020-07-04T17:10:29.000Z | 2020-07-10T18:55:43.000Z | tests/api/totp_test.py | felbinger/FlaskBasic | 803fc5b07638e7d85eddccd00ca20567e57519f0 | [
"MIT"
] | null | null | null | from tests.utils import Utils
# TODO
| 9.5 | 29 | 0.763158 | 6 | 38 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 3 | 30 | 12.666667 | 0.935484 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
bc646c242b51448cb05827679694c36ae7872526 | 146,885 | py | Python | Task 3.1.3.2 Integrated - Equestrianism : pace.py | varipon/Work-Plan-3-multi-legs | 60fd9f624e40f53ebe97e8ae8e90f4ff0ad9b11d | [
"MIT"
] | null | null | null | Task 3.1.3.2 Integrated - Equestrianism : pace.py | varipon/Work-Plan-3-multi-legs | 60fd9f624e40f53ebe97e8ae8e90f4ff0ad9b11d | [
"MIT"
] | null | null | null | Task 3.1.3.2 Integrated - Equestrianism : pace.py | varipon/Work-Plan-3-multi-legs | 60fd9f624e40f53ebe97e8ae8e90f4ff0ad9b11d | [
"MIT"
] | null | null | null | # ================
# SOFTWARE LICENSE
# ================
# The MIT License (MIT)
# Copyright (c) 2021 Yutaka Sawai (Varipon)
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ==============================================================
# LICENSE FOR CONTENT PROCEDURALLY GENERATED USING THIS SOFTWARE
# ==============================================================
# All content procedurally generated by this software and its permutations
# are licensed under Creative Commons Attribution By 3.0:
# https://creativecommons.org/licenses/by/3.0/
#!/usr/bin/python
import bpy
from bpy import *
import mathutils
import math
from mathutils import *
from math import *
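# The classes below build Blender armatures for a multi-bar linkage (Formula) and
# its body-part variants (Costa, Spine, LowerForelimb, UpperForelimb). Each
# constructor reads the module-level globals `interval`, `frame_start` and
# `frame_end`, which are presumably assigned elsewhere in this script, and the
# code assumes the .blend file already contains 'movement' and 'link' collections
# plus the 'joint.*' template meshes that configLink copies.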
class Formula:
def __init__(self, P, A, J, move, part, helicity, start, end):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# joint number
self.J = J
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints α(n) -> a[n], β(n) -> b[n], γ(n) -> y[n], δ(n) -> o[n]
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A, self.J, self.helicity, self.rig, self.move, self.part)
def configMovement(self, P, A, J, a, b, y, o):
mat_a = [0 for i in range(4)] # Joint α matrix
mat_b = [0 for i in range(self.J)] # Joint β matrix
mat_y = [0 for i in range(self.J)] # Joint γ matrix
mat_o = [0 for i in range(self.J)] # Joint δ matrix
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
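# These spans grow geometrically from the 2A x 2A base square spanned by b1, o1,
# y1 and a2: B = 2*sqrt(2)*A is that square's diagonal, C = B*(1 + sqrt(2)),
# D = C*sqrt(2), and E = C + D = B*(3 + 2*sqrt(2)) ~ 16.49*A, used next to place
# the outer pivot a0.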
a[0] = mathutils.Euler((-A - E + (D * 0.5), -A - (D * 0.5), 0.0), 'XYZ')
print ("a0 =", a[0])
mat_a[0] = Matrix.Translation(a[0])
a[3] = mathutils.Euler((0-a[0].x, 0-a[0].y, 0-a[0].z), 'XYZ')
print ("a3 =", a[3])
mat_a[3] = Matrix.Translation(a[3])
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
mat_y[1] = Matrix.Translation(y[1])
### pattern A
b[2] = mathutils.Euler((a[0].x + E + (A * 2), a[0].y + (A * 2), 0.0), 'XYZ')
print ("b2 =", b[2])
mat_b[2] = Matrix.Translation(b[2])
b[3] = mathutils.Euler((a[0].x + E - (D * 0.5), a[0].y - (A * 2), 0.0), 'XYZ')
print ("b3 =", b[3])
mat_b[3] = Matrix.Translation(b[3])
y[2] = mathutils.Euler((a[0].x + E, a[0].y, 0.0), 'XYZ')
print ("y2 =", y[2])
mat_y[2] = Matrix.Translation(y[2])
y[3] = mathutils.Euler((a[0].x + E - (D * 0.5), a[0].y - (D * 0.5), 0.0), 'XYZ')
print ("y3 =", y[3])
mat_y[3] = Matrix.Translation(y[3])
o[2] = mathutils.Euler((a[0].x + E + (A * 2), a[0].y - (A * 2), 0.0), 'XYZ')
print ("o2 =", o[2])
mat_o[2] = Matrix.Translation(o[2])
o[3] = mathutils.Euler((a[0].x + E - (D * 0.5) - (A * 2), a[0].y - (D * 0.5) - (A * 2), 0.0), 'XYZ')
print ("o3 =", o[3])
mat_o[3] = Matrix.Translation(o[3])
### pattern A end
org_rot_mat = Matrix.Rotation(math.radians(0), 4, 'Z')
# rotation step applied between successive joint sets: -45 degrees about Z
rot_mat = Matrix.Rotation(math.radians(-45), 4, 'Z')
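# Each pass of the loop below applies T(a0) @ R(-45 deg) @ T(-a0) to the previous
# matrices (note a[3] = -a[0]), so joints j+2 are joints j rotated a further
# -45 degrees about the fixed pivot a0, fanning the repeated cells out around it.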
for j in range(2, J - 2):
mat_y[j + 2] = mat_a[0] @ org_rot_mat @ rot_mat @ mat_a[3] @ mat_y[j]
# obj.matrix_world = mat_y[j + 2]
# extract components back out of the matrix
loc, rot, sca = mat_y[j + 2].decompose()
y[j + 2] = mathutils.Euler(loc, 'XYZ')
print("y"+str(j + 2)+" = ", y[j + 2], rot, sca)
mat_b[j + 2] = mat_a[0] @ org_rot_mat @ rot_mat @ mat_a[3] @ mat_b[j]
# obj.matrix_world = mat_b[j + 2]
# extract components back out of the matrix
loc, rot, sca = mat_b[j + 2].decompose()
b[j + 2] = mathutils.Euler(loc, 'XYZ')
print("b"+str(j + 2)+" = ", b[j + 2], rot, sca)
mat_o[j + 2] = mat_a[0] @ org_rot_mat @ rot_mat @ mat_a[3] @ mat_o[j]
# obj.matrix_world = mat_o[j + 2]
# extract components back out of the matrix
loc, rot, sca = mat_o[j + 2].decompose()
o[j + 2] = mathutils.Euler(loc, 'XYZ')
print("o"+str(j + 2)+" = ", o[j + 2], rot, sca)
def constructMovement(self, J, helicity, amt, rig, a, b, y, o):
# Linkages
aa = [[0 for i in range(4)] for j in range(4)] # Link α(i) - α(j)
ab = [[0 for i in range(4)] for j in range(4)] # Link α(i) - β(j)
ya = [[0 for i in range(4)] for j in range(4)] # Link γ(i) - α(j)
ao = [[0 for i in range(4)] for j in range(4)] # Link α(i) - δ(j)
ob = [[0 for i in range(self.J)] for j in range(self.J)] # Link δ(i) - β(j)
yy = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - γ(j)
by = [[0 for i in range(self.J)] for j in range(self.J)] # Link β(i) - γ(j)
yo = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - δ(j)
rig.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
rig.show_in_front = True
amt.show_names = True
amt.display_type = 'STICK'
# amt.display_type = 'BBONE'
# Link object to scene
bpy.data.collections['movement'].objects.link(rig)
bpy.context.view_layer.objects.active = rig
bpy.context.view_layer.update()
# Edit
bpy.ops.object.editmode_toggle()
# Construction Linkage
aa[2][1] = amt.edit_bones.new('a2a1')
aa[2][1].head = a[2]
aa[2][1].tail = a[1]
ab[1][1] = amt.edit_bones.new('a1b1')
ab[1][1].head = a[1]
ab[1][1].tail = b[1]
ab[1][1].parent = aa[2][1]
by[1][1] = amt.edit_bones.new('b1y1')
by[1][1].head = b[1]
by[1][1].tail = y[1]
by[1][1].parent = ab[1][1]
by[1][1].use_inherit_rotation = False
ya[1][2] = amt.edit_bones.new('y1a2')
ya[1][2].head = y[1]
ya[1][2].tail = a[2]
ya[1][2].parent = by[1][1]
ao[2][1] = amt.edit_bones.new('a2o1')
ao[2][1].head = a[2]
ao[2][1].tail = o[1]
ao[2][1].parent = ya[1][2]
ob[1][2] = amt.edit_bones.new('o1b2')
ob[1][2].head = o[1]
ob[1][2].tail = b[2]
ob[1][2].parent = ao[2][1]
yy[1][2] = amt.edit_bones.new('y1y2')
yy[1][2].head = y[1]
yy[1][2].tail = y[2]
yy[1][2].parent = by[1][1]
for j in range(2, J - 1):
by[j][j] = amt.edit_bones.new('b'+ str(j) + 'y'+ str(j))
by[j][j].head = b[j]
by[j][j].tail = y[j]
by[j][j].parent = ob[j-1][j]
yo[j][j] = amt.edit_bones.new('y'+ str(j) + 'o'+ str(j))
yo[j][j].head = y[j]
yo[j][j].tail = o[j]
yo[j][j].parent = yy[j-1][j]
yy[j][j+1] = amt.edit_bones.new('y'+ str(j) + 'y'+ str(j+1))
yy[j][j+1].head = y[j]
yy[j][j+1].tail = y[j+1]
yy[j][j+1].parent = by[j][j]
if j < (J-2):
ob[j][j+1] = amt.edit_bones.new('o'+ str(j) + 'b'+ str(j+1))
ob[j][j+1].head = o[j]
ob[j][j+1].tail = b[j+1]
ob[j][j+1].parent = yo[j][j]
# select all bones
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action="SELECT")
# Edit
bpy.ops.object.editmode_toggle()
if helicity == 'right':
bpy.ops.armature.calculate_roll(type='GLOBAL_POS_Z')
else:
bpy.ops.armature.calculate_roll(type='GLOBAL_NEG_Z')
# IK constraint
cns = rig.pose.bones['y1a2'].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'a2a1'
cns.chain_count = 2
cns.use_stretch = False
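# The IK constraints keep every loop of the linkage closed: the two-bone chain
# ending in 'y1a2' is pinned to the root bone 'a2a1' above, and each 'b.y.' chain
# below is pinned to its matching 'y.o.' bone. With stretching disabled, only the
# keyframed 'a1b1' bone is actively rotated and the rest of the mechanism follows.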
for j in range(2, J - 1):
cns = rig.pose.bones['b'+str(j) +'y'+str(j)].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'y'+str(j)+'o'+str(j)
cns.iterations = 500
cns.chain_count = 2
cns.use_stretch = False
bpy.ops.object.mode_set(mode='OBJECT')
def configRotation(self, rig, interval, frame_start, frame_end, start, end):
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
# key insert
keyframe_insert_interval = interval
rig.pose.bones["a1b1"].rotation_mode = 'XYZ'
rig.pose.bones["a1b1"].rotation_euler.z = math.radians(start)
rig.pose.bones["a1b1"].keyframe_insert(data_path="rotation_euler",frame=frame_start)
rig.pose.bones["a1b1"].rotation_mode = 'XYZ'
rig.pose.bones["a1b1"].rotation_euler.z = math.radians(end)
rig.pose.bones["a1b1"].keyframe_insert(data_path="rotation_euler",frame=frame_end)
for curve in bpy.context.active_object.animation_data.action.fcurves:
cycles = curve.modifiers.new(type='CYCLES')
cycles.mode_before = 'REPEAT_OFFSET'
cycles.mode_after = 'REPEAT_OFFSET'
for keyframe in curve.keyframe_points:
keyframe.interpolation = 'LINEAR'
bpy.ops.object.mode_set(mode='OBJECT')
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
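# Q (about 0.333*A) appears to be an empirical spacing: the joint meshes copied
# below are offset along Z in multiples of Q, with the alternating (n % 2) terms
# placing odd and even links in separate parallel planes so they do not intersect.
# Z is an extra global offset, currently 0 (a -Q*2 alternative is commented out).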
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
def constructLink(self, A, J, helicity, rig, move, part):
# For each movement bone (except the last 'o..b..' link) build a one-bone link
# armature with 'head'/'tail' helpers, bind the matching '*.mesh.*' object via an
# ARMATURE modifier, and constrain it in pose mode to follow the movement bone.
bpy.context.view_layer.objects.active = rig
Y = 1.1838*A
for n in rig.pose.bones:
if n.name != "o" + str(J-2) + "b" + str(J-1):
# we can get the object from the pose bone
obj = n.id_data
matrix_final = obj.matrix_world @ n.matrix
# Create armature and object
lnk = bpy.data.armatures.new(n.name[:len(n.name)]+'.data.' + helicity)
lnk_rig = bpy.data.objects.new(n.name[:len(n.name)]+'.link.' + helicity, lnk)
lnk_rig.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
# rig.show_in_front = True
lnk.show_names = True
lnk.display_type = 'STICK'
bpy.data.collections['link'].objects.link(lnk_rig)
bpy.context.view_layer.objects.active = lnk_rig
bpy.context.view_layer.update()
# Create bones
# mode='EDIT'
bpy.ops.object.editmode_toggle()
link = lnk.edit_bones.new(n.name[:len(n.name)])
link.head = (0.0, 0.0, 0.0)
link.tail = (0.0, Y, 0.0)
link_head = lnk.edit_bones.new('head')
link_head.head = (0.0, 0.0, 0.1)
link_head.tail = (0.0, 0.0, 0.0)
link_head.parent = link
link_head.use_inherit_scale = False
link_tail = lnk.edit_bones.new('tail')
link_tail.head = (0.0, Y, 0.0)
link_tail.tail = (0.0, Y, -0.1)
link_tail.parent = link
link_tail.use_inherit_scale = False
bpy.ops.object.mode_set(mode='OBJECT')
ob = bpy.data.objects[n.name[:len(n.name)]+'.mesh.' + move + '.' + part +'.' + helicity]
ob.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
# Give mesh object an armature modifier, using vertex groups but
# not envelopes
mod = ob.modifiers.new('MyRigModif', 'ARMATURE')
mod.object = lnk_rig
mod.use_bone_envelopes = False
mod.use_vertex_groups = True
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
# Copy location constraint: the link bone follows its movement bone
pBase = lnk_rig.pose.bones[n.name[:len(n.name)]]
cns = pBase.constraints.new('COPY_LOCATION')
cns.name = 'Copy_Location'
cns.target = rig
cns.subtarget = n.name[:len(n.name)]
cns.owner_space = 'WORLD'
cns.target_space = 'WORLD'
# Copy rotation constraint: the link bone matches its movement bone's orientation
pBase = lnk_rig.pose.bones[n.name[:len(n.name)]]
cns = pBase.constraints.new('COPY_ROTATION')
cns.name = 'Copy_Rotation'
cns.target = rig
cns.subtarget = n.name[:len(n.name)]
cns.owner_space = 'WORLD'
cns.target_space = 'WORLD'
# StretchTo constraint toward the movement bone's tail (head_tail = 1, full influence)
cns1 = pBase.constraints.new('STRETCH_TO')
cns1.name = 'Stretch'
cns1.target = rig
cns1.subtarget = n.name[:len(n.name)]
cns1.head_tail = 1
cns1.rest_length = Y
cns1.influence = 1
cns1.keep_axis = 'PLANE_Z'
cns1.volume = 'NO_VOLUME'
bpy.ops.object.mode_set(mode='OBJECT')
class Costa(Formula):
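# Costa: a fixed 4-joint variant. configMovement/constructMovement are overridden
# with hard-coded coordinates (the chain ends at y2/o2 and uses 'y1o1' instead of
# 'a2o1'), setParent hangs the two disciple rigs off bone 'y1o1', and configLink
# uses the dedicated 'joint.copper.y1o1' and 'joint.green.b2y2' template meshes.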
J = 4 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple, disciple2):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
self.disciple2 = disciple2
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(self.J)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig,
self.disciple_loc, self.disciple_rot, self.disciple, self.disciple2)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.25*self.A*0.4, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.25*self.A*0.4, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
y[2] = mathutils.Euler((-A, (-1.72423/1.28082)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
o[1] = mathutils.Euler(((-10.6563/1.28082)*A, -A, 0.0), 'XYZ')
print ("o1 =", o[1])
b[2] = mathutils.Euler(((-10.6563/1.28082)*A, (-1.72423/1.28082)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
o[2] = mathutils.Euler((-A, (-1.97185/1.28082)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
def constructMovement(self, J, helicity, amt, rig, a, b, y, o):
# Linkages
aa = [[0 for i in range(4)] for j in range(4)] # Link α(i) - α(j)
ab = [[0 for i in range(4)] for j in range(4)] # Link α(i) - β(j)
ya = [[0 for i in range(4)] for j in range(4)] # Link γ(i) - α(j)
# ao = [[0 for i in range(4)] for j in range(4)] # Link α(i) - δ(j)
ob = [[0 for i in range(self.J)] for j in range(self.J)] # Link δ(i) - β(j)
yy = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - γ(j)
by = [[0 for i in range(self.J)] for j in range(self.J)] # Link β(i) - γ(j)
yo = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - δ(j)
rig.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
rig.show_in_front = True
amt.show_names = True
amt.display_type = 'STICK'
# amt.display_type = 'BBONE'
# Link object to scene
bpy.data.collections['movement'].objects.link(rig)
bpy.context.view_layer.objects.active = rig
bpy.context.view_layer.update()
# Edit
bpy.ops.object.editmode_toggle()
# Construction Linkage
aa[2][1] = amt.edit_bones.new('a2a1')
aa[2][1].head = a[2]
aa[2][1].tail = a[1]
ab[1][1] = amt.edit_bones.new('a1b1')
ab[1][1].head = a[1]
ab[1][1].tail = b[1]
ab[1][1].parent = aa[2][1]
by[1][1] = amt.edit_bones.new('b1y1')
by[1][1].head = b[1]
by[1][1].tail = y[1]
by[1][1].parent = ab[1][1]
by[1][1].use_inherit_rotation = False
ya[1][2] = amt.edit_bones.new('y1a2')
ya[1][2].head = y[1]
ya[1][2].tail = a[2]
ya[1][2].parent = by[1][1]
yo[1][1] = amt.edit_bones.new('y1o1')
yo[1][1].head = y[1]
yo[1][1].tail = o[1]
yo[1][1].parent = ya[1][2]
ob[1][2] = amt.edit_bones.new('o1b2')
ob[1][2].head = o[1]
ob[1][2].tail = b[2]
ob[1][2].parent = yo[1][1]
yy[1][2] = amt.edit_bones.new('y1y2')
yy[1][2].head = y[1]
yy[1][2].tail = y[2]
yy[1][2].parent = by[1][1]
by[2][2] = amt.edit_bones.new('b'+ str(2) + 'y'+ str(2))
by[2][2].head = b[2]
by[2][2].tail = y[2]
by[2][2].parent = ob[1][2]
yo[2][2] = amt.edit_bones.new('y'+ str(2) + 'o'+ str(2))
yo[2][2].head = y[2]
yo[2][2].tail = o[2]
yo[2][2].parent = yy[1][2]
# select all bones
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action="SELECT")
# Edit
bpy.ops.object.editmode_toggle()
if helicity == 'right':
bpy.ops.armature.calculate_roll(type='GLOBAL_POS_Z')
else:
bpy.ops.armature.calculate_roll(type='GLOBAL_NEG_Z')
# IK constraint
cns = rig.pose.bones['y1a2'].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'a2a1'
cns.chain_count = 2
cns.use_stretch = False
cns = rig.pose.bones['b2y2'].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'y2o2'
cns.iterations = 500
cns.chain_count = 2
cns.use_stretch = False
bpy.ops.object.mode_set(mode='OBJECT')
# Parent set disciple to master
def setParent(self, helicity, move, rig,
disciple_loc, disciple_rot, disciple, disciple2):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y1o1' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
disciple2.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig #the active object will be the parent of all selected object
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
# disciple2 position
disciple2.rig.location.x += disciple_loc[0]
disciple2.rig.location.y += disciple_loc[1]
disciple2.rig.location.z += disciple_loc[2]
disciple2.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.y1o1"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
n = 1
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
n = 2
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.b2y2"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class Spine(Formula):
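# Spine: a 7-joint variant with hard-coded geometry in which several joints
# coincide (o2 = b2, y4 = y2, o4 = b4, y5 = y1, o5 = b5). setParent attaches one
# disciple rig to bone 'y5y6' and two more (disciple2, disciple3) to bone 'y3y4';
# bone construction itself is inherited from Formula.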
J = 7 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple,
disciple2_loc, disciple2_rot, disciple2, disciple3):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple position
self.disciple2_loc = disciple2_loc
self.disciple2_rot = disciple2_rot
# disciple
self.disciple = disciple
self.disciple2 = disciple2
self.disciple3 = disciple3
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig,
self.disciple_loc, self.disciple_rot, self.disciple,
self.disciple2_loc, self.disciple2_rot, self.disciple2, self.disciple3)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A*0.5, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A*0.5, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((10.0046/1.71652)*A, (-6.57156/1.71652)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((10.0046/1.71652)*A, (-18.2927/1.71652)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
b[4] = mathutils.Euler(((3.13855/1.71652)*A, (-13.4376/1.71652)*A, 0.0), 'XYZ')
print ("b4 =", b[4])
y[2] = mathutils.Euler(((6.57156/1.71652)*A, (-10.0046/1.71652)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((14.8597/1.71652)*A, (-18.2927/1.71652)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
o[2] = b[2]
print ("o2 =", o[2])
o[3] = mathutils.Euler(((14.8597/1.71652)*A, (-13.4376/1.71652)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
y[4] = y[2]
print ("y4 =", y[4])
o[4] = b[4]
print ("o4 =", o[4])
b[5] = mathutils.Euler(((-5.14955/1.71652)*A, (-5.14955/1.71652)*A, 0.0), 'XYZ')
print ("b5 =", b[5])
y[5] = y[1]
print ("y5 =", y[5])
o[5] = b[5]
y[6] = mathutils.Euler(((-10.0046/1.71652)*A, (6.57156/1.71652)*A, 0.0), 'XYZ')
print ("y6 =", y[6])
# Parent set disciple to master
def setParent(self, helicity, move, rig,
disciple_loc, disciple_rot, disciple,
disciple2_loc, disciple2_rot, disciple2, disciple3):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y5y6' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig #the active object will be the parent of all selected object
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y3y4' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple2.rig.select_set(state=True)
disciple3.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig #the active object will be the parent of all selected object
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
# disciple2 position
disciple2.rig.location.x += disciple2_loc[0]
disciple2.rig.location.y += disciple2_loc[1]
disciple2.rig.location.z += disciple2_loc[2]
disciple2.rig.rotation_euler = disciple2_rot
# disciple3 position
disciple3.rig.location.x += disciple2_loc[0]
disciple3.rig.location.y += disciple2_loc[1]
disciple3.rig.location.z += disciple2_loc[2]
disciple3.rig.rotation_euler = disciple2_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.spine.a2a1"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n >= (3):
N=-Q*5
else:
N=-Q*0
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, N-Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 2 of yy
if n == (2):
obj_joint = bpy.data.objects["joint.gold.spine.y2y3"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, N+Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 1 of ob
if n == (2):
obj_joint = bpy.data.objects["joint.blue.spine.o2b3"].copy()
else:
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, N-Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
if n == (2):
obj_joint = bpy.data.objects["joint.copper.spine.y3o3"].copy()
else:
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, N-Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.gold.spine.y5y6"].copy()
obj_joint.location = (0.0, 0.0, N+Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y5y6.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class LowerForelimb(Formula):
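# LowerForelimb: a 6-joint variant that only overrides configMovement (its own
# joint coordinates) and configLink (left/right specific 'a2a1' template meshes);
# bone construction, rotation keyframing and constructLink come from Formula.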
J = 6 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((1.05/0.35)*A, A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((2.2642/0.35)*A, (-4.97125/0.35)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
b[4] = mathutils.Euler(((6.42315/0.35)*A, (-5.57265/0.35)*A, 0.0), 'XYZ')
print ("b4 =", b[4])
y[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((4.97125/0.35)*A, (-4.97125/0.35)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((5.9979/0.35)*A, (-5.9979/0.35)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
y[5] = mathutils.Euler(((7.653072/0.35)*A, (-7.653072/0.35)*A, 0.0), 'XYZ')
print ("y5 =", y[5])
o[2] = mathutils.Euler(((2.2642/0.35)*A, (1.56419/0.35)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((4.97125/0.35)*A, (-5.57265/0.35)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
o[4] = mathutils.Euler(((6.8484/0.35)*A, (-5.9979/0.35)*A, 0.0), 'XYZ')
print ("o4 =", o[4])
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
if part == 'right-lowerforelimb':
obj_joint = bpy.data.objects["joint.gold.a2a1.lowerforelimb-right"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.a2a1.lowerforelimb-left"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class UpperForelimb(Formula):
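# UpperForelimb: a 5-joint variant like LowerForelimb (own coordinates and
# left/right 'a2a1' meshes), plus a setParent that hangs one disciple rig,
# presumably the matching lower-forelimb rig, off bone 'y3y4'.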
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end, disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot, self.disciple)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.25*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.25*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((9.4/0.6)*A, (-8.2/0.6)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((7.399085/0.6)*A, (-19.717056/0.6)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((8.2/0.6)*A, (-9.4/0.6)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((9.08969/0.6)*A, (-19.569149/0.6)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((9.2376/0.6)*A, (-21.259787/0.6)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
o[2] = mathutils.Euler(((6.509395/0.6)*A, (-9.547907/0.6)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((10.38971/0.6)*A, (-20.659994/0.6)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y3y4' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
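# The adjustments above run after bone parenting with keep_transform=True, so the
# disciple rig first keeps its world pose and is then offset by the supplied
# disciple_loc / disciple_rot values.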
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
if part == 'right-upperforelimb':
obj_joint = bpy.data.objects["joint.gold.a2a1.upperforelimb-right"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.a2a1.upperforelimb-left"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
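# The loop below places one link mesh per bone pair. The (n % 2) and ((n+1) % 2)
# terms alternate the Z offset (and the gold template suffix 001/002) between odd
# and even n, so consecutive link meshes are staggered along the local Z axis.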
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
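# Hedged sketch: every configLink in this file repeats the same five-step pattern
# (copy a template joint object, offset it along Z, scale it by A, rename it after
# its bone, link it into the 'link' collection). A minimal helper expressing that
# pattern could look like this; it is illustrative only and is not called by the
# classes in this file.
def place_link_mesh(template_name, z_offset, A, bone, move, part, helicity):
    # copy the shared template object
    obj = bpy.data.objects[template_name].copy()
    # stack it along the local Z axis and scale it to the rig's scale factor
    obj.location = (0.0, 0.0, z_offset)
    obj.scale = (A, A, A)
    # name it after the bone it belongs to, matching the naming scheme above
    obj.name = bone + ".mesh." + move + '.' + part + '.' + helicity
    bpy.data.collections['link'].objects.link(obj)
    return obj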
class RightShoulder(Formula):
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot, self.disciple)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((A*3/0.512329)*A, (A/0.512329)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((-A/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((A/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
o[2] = mathutils.Euler(((-A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((A/0.512329)*A, (-4.03054/0.512329)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
y[4] = mathutils.Euler(((A*3/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'b3y3' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*0+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*4+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*6+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*1+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
if n == 1:
obj_joint = bpy.data.objects["joint.green.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*2 + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
else:
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n == 1:
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*1 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
else:
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.gold.spine.y3y4"].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (3 % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y3y4.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
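# Hedged usage sketch (argument values are illustrative assumptions, not taken
# from this file): limb segments are built first and handed to the next segment
# as its "disciple", which setParent() then bone-parents to the new rig, e.g.
#   forelimb = UpperForelimb(0.6, 0.6, 'walk', 'right-upperforelimb', 'right',
#                            0.0, 1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), lower)
#   shoulder = RightShoulder(0.6, 0.6, 'walk', 'right-shoulder', 'right',
#                            0.0, 1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), forelimb)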
class LeftShoulder(RightShoulder):
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple, disciple2_loc, disciple2_rot, disciple2):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# disciple2 position
self.disciple2_loc = disciple2_loc
self.disciple2_rot = disciple2_rot
# disciple2
self.disciple2 = disciple2
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot,
self.disciple, self.disciple2_loc, self.disciple2_rot, self.disciple2)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((A*3/0.512329)*A, (A/0.512329)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((-A/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((A/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
o[2] = mathutils.Euler(((-A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((A/0.512329)*A, (3/0.512329)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
y[4] = mathutils.Euler(((A*3/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple,
disciple2_loc, disciple2_rot, disciple2):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'a2a1' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
#disciple
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'b3y3' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
#disciple2
disciple2.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple2 position
disciple2.rig.location.x += disciple2_loc[0]
disciple2.rig.location.y += disciple2_loc[1]
disciple2.rig.location.z += disciple2_loc[2]
disciple2.rig.rotation_euler = disciple2_rot
class Head(Formula):
J = 6 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.4*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.4*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((3.354023/0.476741)*A, (-2.400706/0.476741)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((14.608374/0.476741)*A, (-8.410717/0.476741)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
b[4] = mathutils.Euler(((6.742701/0.476741)*A, (-17.577248/0.476741)*A, 0.0), 'XYZ')
print ("b4 =", b[4])
y[2] = mathutils.Euler(((2.400629/0.476741)*A, (-3.3541/0.476741)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((11.032801/0.476741)*A, (-11.9863/0.476741)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((5.394285/0.476741)*A, (-15.241665/0.476741)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
y[5] = mathutils.Euler(((4.080432/0.476741)*A, (-15.544949/0.476741)*A, 0.0), 'XYZ')
print ("y5 =", y[5])
o[2] = mathutils.Euler(((5.976194/0.476741)*A, (0.221443/0.476741)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((12.381183/0.476741)*A, (-14.321769/0.476741)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
o[4] = mathutils.Euler(((4.226511/0.476741)*A, (-15.915896/0.476741)*A, 0.0), 'XYZ')
print ("o4 =", o[4])
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.a2a1.head"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
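# Head carries no disciple rig, so unlike the limb and shoulder classes it takes
# no disciple arguments and does not override setParent().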
class Neck(Formula):
J = 3 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(self.J)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig,
self.disciple_loc, self.disciple_rot, self.disciple)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.25*self.A*0.4, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.25*self.A*0.4, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
y[2] = mathutils.Euler((-A, -(0.470026/0.953482)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
o[1] = mathutils.Euler(((-2.10399/0.953482)*A, -A, 0.0), 'XYZ')
print ("o1 =", o[1])
def constructMovement(self, J, helicity, amt, rig, a, b, y, o):
# Linkages
aa = [[0 for i in range(4)] for j in range(4)] # Link α(i) - α(j)
ab = [[0 for i in range(4)] for j in range(4)] # Link α(i) - β(j)
ya = [[0 for i in range(4)] for j in range(4)] # Link γ(i) - α(j)
yy = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - γ(j)
by = [[0 for i in range(self.J)] for j in range(self.J)] # Link β(i) - γ(j)
yo = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - δ(j)
rig.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
rig.show_in_front = True
amt.show_names = True
amt.display_type = 'STICK'
# amt.display_type = 'BBONE'
# Link object to scene
bpy.data.collections['movement'].objects.link(rig)
bpy.context.view_layer.objects.active = rig
bpy.context.view_layer.update()
# Edit
bpy.ops.object.editmode_toggle()
# Construction Linkage
aa[2][1] = amt.edit_bones.new('a2a1')
aa[2][1].head = a[2]
aa[2][1].tail = a[1]
ab[1][1] = amt.edit_bones.new('a1b1')
ab[1][1].head = a[1]
ab[1][1].tail = b[1]
ab[1][1].parent = aa[2][1]
by[1][1] = amt.edit_bones.new('b1y1')
by[1][1].head = b[1]
by[1][1].tail = y[1]
by[1][1].parent = ab[1][1]
by[1][1].use_inherit_rotation = False
ya[1][2] = amt.edit_bones.new('y1a2')
ya[1][2].head = y[1]
ya[1][2].tail = a[2]
ya[1][2].parent = by[1][1]
yo[1][1] = amt.edit_bones.new('y1o1')
yo[1][1].head = y[1]
yo[1][1].tail = o[1]
yo[1][1].parent = by[1][1]
yy[1][2] = amt.edit_bones.new('y1y2')
yy[1][2].head = y[1]
yy[1][2].tail = y[2]
yy[1][2].parent = by[1][1]
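# Resulting bone hierarchy for the neck: a2a1 -> a1b1 -> b1y1 -> (y1a2, y1o1, y1y2),
# with rotation inheritance disabled on b1y1.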
# all bones select
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action="SELECT")
# Edit
bpy.ops.object.editmode_toggle()
if helicity == 'right':
bpy.ops.armature.calculate_roll(type='GLOBAL_POS_Z')
else:
bpy.ops.armature.calculate_roll(type='GLOBAL_NEG_Z')
# IK constraint
cns = rig.pose.bones['y1a2'].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'a2a1'
cns.chain_count = 2
cns.use_stretch = False
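# The IK chain spans two bones (y1a2 and its parent b1y1) and solves toward the
# a2a1 bone of the same armature, with stretching disabled.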
bpy.ops.object.mode_set(mode='OBJECT')
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y1a2' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.y1o1"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
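# With only three joints the neck needs just the n = 1 by/yy link meshes, so the
# generic loop used by the other parts is written out directly here.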
n = 1
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class LowerHindlimb(Formula):
J = 6 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((1.05/0.35)*A, A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((0.7/0.35)*A, (-1.19497/0.35)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
b[4] = mathutils.Euler(((5.0818/0.35)*A, (-6.77175/0.35)*A, 0.0), 'XYZ')
print ("b4 =", b[4])
y[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((1.19497/0.35)*A, (-1.19497/0.35)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((5.92677/0.35)*A, (-5.92677/0.35)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
y[5] = mathutils.Euler(((7.633759/0.35)*A, (-7.633759/0.35)*A, 0.0), 'XYZ')
print ("y5 =", y[5])
o[2] = mathutils.Euler(((0.7/0.35)*A, 0.0, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((0.35/0.35)*A, (-2.03995/0.35)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
o[4] = mathutils.Euler(((5.92677/0.35)*A, (-7.12175/0.35)*A, 0.0), 'XYZ')
print ("o4 =", o[4])
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
if part == 'right-lowerhindlimb':
obj_joint = bpy.data.objects["joint.gold.a2a1.lowerhindlimb-right"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.a2a1.lowerhindlimb-left"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class UpperHindlimb(Formula):
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end, disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot, self.disciple)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(1.25*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(1.25*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((7.6/0.6)*A, (-6.4/0.6)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((7.471703/0.6)*A, (-19.577606/0.6)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((6.4/0.6)*A, (-7.6/0.6)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((5.7769/0.6)*A, (-19.488718/0.6)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((5.688129/0.6)*A, (-21.183546/0.6)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
o[2] = mathutils.Euler(((8.094728/0.6)*A, (-7.688817/0.6)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((6.91248/0.6)*A, (-20.749912/0.6)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y3y4' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
if part == 'right-upperhindlimb':
obj_joint = bpy.data.objects["joint.gold.a2a1.upperhindlimb-right"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.a2a1.upperhindlimb-left"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class LeftIlium(Formula):
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end, disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot, self.disciple)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((A*3/0.512329)*A, (A/0.512329)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((-A/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((A/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
o[2] = mathutils.Euler(((-A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((A/0.512329)*A, (-4.03054/0.512329)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
y[4] = mathutils.Euler(((A*3/0.512329)*A, (-3/0.512329)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'b3y3' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*0+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*4+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*6+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*1+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
if n == 1:
obj_joint = bpy.data.objects["joint.green.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*2 + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
else:
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n == 1:
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.002"].copy()
obj_joint.location = (0.0, 0.0, +Q*1 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
else:
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.gold.spine.y3y4"].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (3 % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y3y4.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state=True, view_layer=None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
class RightIlium(LeftIlium):
J = 5 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple, disciple2_loc, disciple2_rot, disciple2):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# disciple2 position
self.disciple2_loc = disciple2_loc
self.disciple2_rot = disciple2_rot
# disciple2
self.disciple2 = disciple2
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig, self.disciple_loc, self.disciple_rot,
self.disciple, self.disciple2_loc, self.disciple2_rot, self.disciple2)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A*0.8, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((A*3/0.512329)*A, (A/0.512329)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((-A/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
y[2] = mathutils.Euler(((A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((A/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
o[2] = mathutils.Euler(((-A/0.512329)*A, (-A/0.512329)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((A/0.512329)*A, (3/0.512329)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
y[4] = mathutils.Euler(((A*3/0.512329)*A, (1.97543/0.512329)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple,
disciple2_loc, disciple2_rot, disciple2):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y1y2' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
#disciple
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'b3y3' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
#disciple2
disciple2.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig # the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple2 position
disciple2.rig.location.x += disciple2_loc[0]
disciple2.rig.location.y += disciple2_loc[1]
disciple2.rig.location.z += disciple2_loc[2]
disciple2.rig.rotation_euler = disciple2_rot
class Tail(Formula):
J = 6 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(4)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Configuration Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(0.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(0.8*self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
o[1] = mathutils.Euler((A, A, 0.0), 'XYZ')
print ("o1 =", o[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
b[2] = mathutils.Euler(((4.08/0.7)*A, (-2.68/0.7)*A, 0.0), 'XYZ')
print ("b2 =", b[2])
b[3] = mathutils.Euler(((2.520382/0.7)*A, (-7.734981/0.7)*A, 0.0), 'XYZ')
print ("b3 =", b[3])
b[4] = mathutils.Euler(((4.650852/0.7)*A, (-10.086805/0.7)*A, 0.0), 'XYZ')
print ("b4 =", b[4])
y[2] = mathutils.Euler(((2.68/0.7)*A, (-4.08/0.7)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
y[3] = mathutils.Euler(((4.314873/0.7)*A, (-8.571764/0.7)*A, 0.0), 'XYZ')
print ("y3 =", y[3])
y[4] = mathutils.Euler(((4.065916/0.7)*A, (-9.98368/0.7)*A, 0.0), 'XYZ')
print ("y4 =", y[4])
y[5] = mathutils.Euler(((3.816914/0.7)*A, (-11.395846/0.7)*A, 0.0), 'XYZ')
print ("y5 =", y[5])
o[2] = mathutils.Euler(((4.5405/0.7)*A, (-3.402836/0.7)*A, 0.0), 'XYZ')
print ("o2 =", o[2])
o[3] = mathutils.Euler(((4.899491/0.7)*A, (-8.674883/0.7)*A, 0.0), 'XYZ')
print ("o3 =", o[3])
o[4] = b[4]
print ("o4 =", o[4])
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
if part == 'tail':
obj_joint = bpy.data.objects["joint.gold.a2a1.tail"].copy()
else:
obj_joint = bpy.data.objects["joint.gold.a2a1.tail"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for n in range(1, J - 1):
if n <= (J-2):
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
if n <= (J-3):
# Pattern 1 of ob
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2 + Q*(n % 2)*6 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "o"+str(n)+"b"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yo
obj_joint = bpy.data.objects["joint.copper.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n+1)+"o"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
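# Sacrum: Formula subclass with J = 3 joints. Besides building its own
# armature, it bone-parents a child ("disciple") rig to its 'y1o1' bone via
# setParent(), then offsets that rig by disciple_loc / disciple_rot.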
class Sacrum(Formula):
J = 3 #joint number
# Overriding
def __init__(self, P, A, move, part, helicity, start, end,
disciple_loc, disciple_rot, disciple):
global interval
global frame_start
global frame_end
self.interval = interval
self.frame_start = frame_start
self.frame_end = frame_end
# pivot factor
self.P = P
# scale factor
self.A = A
# name
self.move = move
# element
self.part = part
# element helicity
self.helicity = helicity
self.start = start
self.end = end
# disciple position
self.disciple_loc = disciple_loc
self.disciple_rot = disciple_rot
# disciple
self.disciple = disciple
# Create armature and object
self.amt = bpy.data.armatures.new(move + '.' + part + '.' + helicity + '.data')
self.rig = bpy.data.objects.new(move + '.' + part + '.' + helicity, self.amt)
# Joints
self.a = [0 for i in range(self.J)] # Joint α
self.b = [0 for i in range(self.J)] # Joint β
self.y = [0 for i in range(self.J)] # Joint γ
self.o = [0 for i in range(self.J)] # Joint δ
# Configuration Movement
self.configMovement(self.P, self.A, self.J, self.a, self.b, self.y, self.o)
# Construction Movement
self.constructMovement(self.J, self.helicity, self.amt, self.rig, self.a, self.b, self.y, self.o)
# Parent set disciple to master
self.setParent(self.helicity, self.move, self.rig,
self.disciple_loc, self.disciple_rot, self.disciple)
# Construction Rotation
self.configRotation(self.rig, self.interval, self.frame_start, self.frame_end, self.start, self.end)
# Configuration Linkage
self.configLink(self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Construction Linkage
self.constructLink(self.A, self.J, self.helicity, self.rig, self.move, self.part)
# Overriding Configuration Movement
def configMovement(self, P, A, J, a, b, y, o):
a[1] = mathutils.Euler((P, A, 0.0), 'XYZ')
print ("a1 =", a[1])
a[2] = mathutils.Euler((A, -A, 0.0), 'XYZ')
print ("a2 =", a[2])
b[1] = mathutils.Euler((-A, A, 0.0), 'XYZ')
print ("b1 =", b[1])
B = A * 2 * sqrt (2)
C = B + (B * sqrt (2))
D = C * sqrt (2)
E = C + D
y[1] = mathutils.Euler((-A, -A, 0.0), 'XYZ')
print ("y1 =", y[1])
y[2] = mathutils.Euler((-A, -(0.173028/0.431828)*A, 0.0), 'XYZ')
print ("y2 =", y[2])
o[1] = mathutils.Euler(((-0.77453/0.431828)*A, -A, 0.0), 'XYZ')
print ("o1 =", o[1])
def constructMovement(self, J, helicity, amt, rig, a, b, y, o):
# Linkages
aa = [[0 for i in range(4)] for j in range(4)] # Link α(i) - α(j)
ab = [[0 for i in range(4)] for j in range(4)] # Link α(i) - β(j)
ya = [[0 for i in range(4)] for j in range(4)] # Link γ(i) - α(j)
yy = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - γ(j)
by = [[0 for i in range(self.J)] for j in range(self.J)] # Link β(i) - γ(j)
yo = [[0 for i in range(self.J)] for j in range(self.J)] # Link γ(i) - δ(j)
rig.location = mathutils.Euler((0.0, 0.0, 0.0), 'XYZ')
rig.show_in_front = True
amt.show_names = True
amt.display_type = 'STICK'
# amt.display_type = 'BBONE'
# Link object to scene
bpy.data.collections['movement'].objects.link(rig)
bpy.context.view_layer.objects.active = rig
bpy.context.view_layer.update()
# Edit
bpy.ops.object.editmode_toggle()
# Construction Linkage
aa[2][1] = amt.edit_bones.new('a2a1')
aa[2][1].head = a[2]
aa[2][1].tail = a[1]
ab[1][1] = amt.edit_bones.new('a1b1')
ab[1][1].head = a[1]
ab[1][1].tail = b[1]
ab[1][1].parent = aa[2][1]
by[1][1] = amt.edit_bones.new('b1y1')
by[1][1].head = b[1]
by[1][1].tail = y[1]
by[1][1].parent = ab[1][1]
by[1][1].use_inherit_rotation = False
ya[1][2] = amt.edit_bones.new('y1a2')
ya[1][2].head = y[1]
ya[1][2].tail = a[2]
ya[1][2].parent = by[1][1]
yo[1][1] = amt.edit_bones.new('y1o1')
yo[1][1].head = y[1]
yo[1][1].tail = o[1]
yo[1][1].parent = ya[1][2]
yy[1][2] = amt.edit_bones.new('y1y2')
yy[1][2].head = y[1]
yy[1][2].tail = y[2]
yy[1][2].parent = by[1][1]
# all bones select
# Bone constraints. Armature must be in pose mode.
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action="SELECT")
# Edit
bpy.ops.object.editmode_toggle()
if helicity == 'right':
bpy.ops.armature.calculate_roll(type='GLOBAL_POS_Z')
else:
bpy.ops.armature.calculate_roll(type='GLOBAL_NEG_Z')
# IK constraint
cns = rig.pose.bones['y1a2'].constraints.new('IK')
cns.name = 'Ik'
cns.target = rig
cns.subtarget = 'a2a1'
cns.chain_count = 2
cns.use_stretch = False
bpy.ops.object.mode_set(mode='OBJECT')
# Parent set disciple to master
def setParent(self, helicity, move, rig, disciple_loc, disciple_rot, disciple):
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.scene.frame_current = 0
bpy.ops.object.select_all(action='DESELECT')
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.editmode_toggle()
parent_bone = 'y1o1' # choose the bone name which you want to be the parent
rig.data.edit_bones.active = rig.data.edit_bones[parent_bone]
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
disciple.rig.select_set(state=True)
rig.select_set(state=True)
bpy.context.view_layer.objects.active = rig #the active object will be the parent of all selected objects
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
bpy.ops.object.select_all(action='DESELECT') #deselect all objects
# disciple position
disciple.rig.location.x += disciple_loc[0]
disciple.rig.location.y += disciple_loc[1]
disciple.rig.location.z += disciple_loc[2]
disciple.rig.rotation_euler = disciple_rot
def configLink(self, A, J, helicity, rig, move, part):
bpy.ops.object.mode_set(mode='OBJECT')
Q = (0.18648+0.146446)*A
# Z = -Q*2
Z = 0.0
obj_joint = bpy.data.objects["joint.gold.000"].copy()
obj_joint.location = (0.0, 0.0, -Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a2a1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.silver.001"].copy()
obj_joint.location = (0.0, 0.0, +Q+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1a2.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.copper.y1o1.sacrum.B"].copy()
obj_joint.location = (0.0, 0.0, +Q*3+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y1o1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
obj_joint = bpy.data.objects["joint.blue.001"].copy()
obj_joint.location = (0.0, 0.0, -Q*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "a1b1.mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
n = 1
# Pattern 2 of by
obj_joint = bpy.data.objects["joint.green.001"].copy()
obj_joint.location = (0.0, 0.0, -Q + Q*((n+1) % 2)*4 +Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "b"+str(n)+"y"+str(n)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
# Pattern 2 of yy
obj_joint = bpy.data.objects["joint.gold.00"+str(1 + (n+1) % 2)].copy()
obj_joint.location = (0.0, 0.0, +Q*(1 - (n % 2))*2+Z)
obj_joint.scale = (A, A, A)
obj_joint.name = "y"+str(n)+"y"+str(n+1)+".mesh." + move + '.' + part +'.' + helicity
bpy.data.collections['link'].objects.link(obj_joint)
for ob in bpy.data.collections['link'].objects:
if "mesh" in ob.name:
ob.select_set(state = True, view_layer = None)
bpy.ops.object.make_single_user(type='SELECTED_OBJECTS', object=True, obdata=True, material=True, animation=True)
bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
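# Builder functions: each one instantiates the rig for a single body part of
# the 'equestrianism-pace' movement and, where needed, passes previously
# built rigs (kept in module-level globals) so they can be bone-parented
# to the rig being created.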
def formula():
# pivot factor
P = 0
# scale factor
A = 1
# joint number
J = 6
# name
move = 'formula'
# element
part = 'universe'
# left or right
helicity = 'left'
start = 0
end = start+360
formula = Formula(P, A, J, move, part, helicity, start, end)
def lowerforelimbs():
# scale factor
A = 0.35
# pivot factor
P = ((0.75*-0.35)/0.35)*A
# name
move = 'equestrianism-pace'
# element
part = 'right-lowerforelimb'
# left or right
helicity = 'right'
start = -85
end = start-720
global lowerforelimb_right
lowerforelimb_right = LowerForelimb(P, A, move, part, helicity, start, end)
# element
part = 'left-lowerforelimb'
# left or right
helicity = 'left'
start = -85
end = start+720
global lowerforelimb_left
lowerforelimb_left = LowerForelimb(P, A, move, part, helicity, start, end)
def upperforelimbs():
# scale factor
A = 0.6
# pivot factor
P = 0.0
# name
move = 'equestrianism-pace'
# element
part = 'right-upperforelimb'
# left or right
helicity = 'right'
start = 90
end = start-720
global lowerforelimb_right
lowerforelimb = lowerforelimb_right
lowerforelimb_loc = ((9.657187/0.6)*A, (-20.120846/0.6)*A, (-0.031213/0.6)*A)
lowerforelimb_rot = mathutils.Euler((math.radians(-180), math.radians(180), math.radians(185)), 'XYZ')
global upperforelimb_right
upperforelimb_right = UpperForelimb(P, A, move, part, helicity, start, end,
lowerforelimb_loc, lowerforelimb_rot, lowerforelimb)
# element
part = 'left-upperforelimb'
# left or right
helicity = 'left'
start = 90
end = start+720
global lowerforelimb_left
lowerforelimb = lowerforelimb_left
lowerforelimb_loc = ((9.657187/0.6)*A, (-20.120846/0.6)*A, (-0.031213/0.6)*A)
lowerforelimb_rot = mathutils.Euler((math.radians(-180), math.radians(180), math.radians(185)), 'XYZ')
global upperforelimb_left
upperforelimb_left = UpperForelimb(P, A, move, part, helicity, start, end,
lowerforelimb_loc, lowerforelimb_rot, lowerforelimb)
def shoulder():
start = 44
end = start+720
# name
move = 'equestrianism-pace'
# scale factor
A = 0.512329
# pivot factor
P = (-0.467885/0.512329)*A
# element
part = 'right-shoulder'
# left or right
helicity = 'left'
global upperforelimb_right
upperforelimb = upperforelimb_right
upperforelimb_loc = ((1.841208/0.512329)*A, (-4.782617/0.512329)*A, (-1.980514/0.512329)*A)
upperforelimb_rot = mathutils.Euler((math.radians(-269.253), math.radians(-257.073), math.radians(-538.019)), 'XYZ')
global shoulder_right
shoulder_right = RightShoulder(P, A, move, part, helicity, start, end,
upperforelimb_loc, upperforelimb_rot, upperforelimb)
start = 44
end = start+720
# element
part = 'left-shoulder'
global neck
neck_loc = ((1.518864/0.512329)*A, (-1.409492/0.512329)*A, (0.676093/0.512329)*A)
neck_rot = mathutils.Euler((math.radians(-720), math.radians(180), math.radians(180)), 'XYZ')
global upperforelimb_left
upperforelimb = upperforelimb_left
upperforelimb_loc = ((2.049464/0.512329)*A, (2.827685/0.512329)*A, (-2.198156/0.512329)*A)
upperforelimb_rot = mathutils.Euler((math.radians(-89.2696), math.radians(76.9094), math.radians(-1076.87)), 'XYZ')
global shoulder_left
shoulder_left = LeftShoulder(P, A, move, part, helicity, start, end,
neck_loc, neck_rot, neck, upperforelimb_loc, upperforelimb_rot, upperforelimb)
def head():
# scale factor
A = 0.476741
# pivot factor
P = (-0.327763/0.476741)*A
# name
move = 'equestrianism-pace'
# element
part = 'head'
# left or right
helicity = 'right'
start = -90
end = start-720*2
global head
head = Head(P, A, move, part, helicity, start, end)
def neck():
# scale factor
A = 0.953482
# pivot factor
P = 0
# name
move = 'equestrianism-pace'
# neck element
part = 'neck'
# helicity
helicity = 'left'
start = 0
end = start+0
head_loc = ((-3.369717/0.953482)*A, (-0.875949/0.953482)*A, (-0.790696/0.953482)*A)
head_rot = mathutils.Euler((math.radians(270), math.radians(-172.554), math.radians(0)), 'XYZ')
global head
global neck
neck = Neck(P, A, move, part, helicity, start, end, head_loc, head_rot, head)
def lowerhindlimbs():
# scale factor
A = 0.35
# pivot factor
P = ((0.75*-0.35)/0.35)*A
# name
move = 'equestrianism-pace'
# element
part = 'right-lowerhindlimb'
# left or right
helicity = 'left'
start = 64
end = start+720
global lowerhindlimb_right
lowerhindlimb_right = LowerHindlimb(P, A, move, part, helicity, start, end)
# element
part = 'left-lowerhindlimb'
# left or right
helicity = 'right'
start = -244
end = start-720
global lowerhindlimb_left
lowerhindlimb_left = LowerHindlimb(P, A, move, part, helicity, start, end)
def upperhindlimbs():
# scale factor
A = 0.6
# pivot factor
P = 0.0
# name
move = 'equestrianism-pace'
# element
part = 'right-upperhindlimb'
# left or right
helicity = 'left'
start = -135
end = start-720
global lowerhindlimb_right
lowerhindlimb = lowerhindlimb_right
lowerhindlimb_loc = ((6.346961/0.6)*A, (-20.121071/0.6)*A, 0.0)
lowerhindlimb_rot = mathutils.Euler((math.radians(-180), math.radians(180), math.radians(192)), 'XYZ')
global upperhindlimb_right
upperhindlimb_right = UpperHindlimb(P, A, move, part, helicity, start, end,
lowerhindlimb_loc, lowerhindlimb_rot, lowerhindlimb)
# element
part = 'left-upperhindlimb'
# left or right
helicity = 'right'
start = -45
end = start+720
global lowerhindlimb_left
lowerhindlimb = lowerhindlimb_left
lowerhindlimb_loc = ((6.346961/0.6)*A, (-20.121071/0.6)*A, 0.0)
lowerhindlimb_rot = mathutils.Euler((math.radians(-180), math.radians(180), math.radians(192)), 'XYZ')
global upperhindlimb_left
upperhindlimb_left = UpperHindlimb(P, A, move, part, helicity, start, end,
lowerhindlimb_loc, lowerhindlimb_rot, lowerhindlimb)
def ilium():
start = 179
end = start+720
# name
move = 'equestrianism-pace'
# scale factor
A = 0.512329
# pivot factor
P = (-0.467885/0.512329)*A
# element
part = 'left-ilium'
# left or right
helicity = 'left'
global upperhindlimb_left
upperhindlimb = upperhindlimb_left
upperhindlimb_loc = ((3.293583/0.512329)*A, (-4.316476/0.512329)*A, (1.055012/0.512329)*A)
upperhindlimb_rot = mathutils.Euler((math.radians(79.4694), math.radians(81.7266), math.radians(892.052)), 'XYZ')
global ilium_left
ilium_left = LeftIlium(P, A, move, part, helicity, start, end,
upperhindlimb_loc, upperhindlimb_rot, upperhindlimb)
start = 179
end = start+720
# element
part = 'right-ilium'
global tail
tail_loc = ((1.095053/0.512329)*A, (-0.236876/0.512329)*A, (7.257094/0.512329)*A)
tail_rot = mathutils.Euler((math.radians(90), math.radians(252.044), math.radians(0)), 'XYZ')
global upperhindlimb_right
upperhindlimb = upperhindlimb_right
upperhindlimb_loc = ((3.028501/0.512329)*A, (3.383841/0.512329)*A, (1.280923/0.512329)*A)
upperhindlimb_rot = mathutils.Euler((math.radians(-270.286), math.radians(81.7909), math.radians(182.527)), 'XYZ')
global ilium_right
ilium_right = RightIlium(P, A, move, part, helicity, start, end,
tail_loc, tail_rot, tail, upperhindlimb_loc, upperhindlimb_rot, upperhindlimb)
def tail():
# scale factor
A = 0.7
# pivot factor
P = (-0.437499/0.7)*A
# name
move = 'equestrianism-pace'
# element
part = 'tail'
# left or right
helicity = 'right'
start = 0
end = start+720*2
global tail
tail = Tail(P, A, move, part, helicity, start, end)
def costa():
# scale factor
A = 1.28082
# pivot factor
P = (-1.20397/1.28082)*A
# name
move = 'equestrianism-pace'
# element
part = 'costa'
# left or right
helicity = 'left'
start = 360
end = start
global shoulder_left
global shoulder_right
shoulder_loc = ((-8.841815/1.28082)*A, (-1.016781/1.28082)*A, (1.557212/1.28082)*A)
shoulder_rot = mathutils.Euler((math.radians(338.534), math.radians(273.483), math.radians(21.1521)), 'XYZ')
global costa
costa = Costa(P, A, move, part, helicity, start, end,
shoulder_loc, shoulder_rot, shoulder_left, shoulder_right)
# shoulder_loc, shoulder_rot, shoulder_left, shoulder_right,
# neck_loc, neck_rot, neck)
def spine():
# scale factor
A = 1.71652
# pivot factor
P = (-1.656175/1.71652)*A
# name
move = 'equestrianism-pace'
# element
part = 'spine'
# left or right
helicity = 'left'
start = 180
end = start-720*2
global costa
global ilium_left
global ilium_right
costa_loc = ((-2.62224/1.71652)*A, (-0.810857/1.71652)*A, (1.28082/1.71652)*A)
costa_rot = mathutils.Euler((math.radians(-270), math.radians(0), math.radians(315)), 'XYZ')
ilium_loc = ((8.423421/1.71652)*A, (-14.897697/1.71652)*A, (-0.813787/1.71652)*A)
ilium_rot = mathutils.Euler((math.radians(90), math.radians(-180), math.radians(432.166)), 'XYZ')
global spine
spine = Spine(P, A, move, part, helicity, start, end,
costa_loc, costa_rot, costa, ilium_loc, ilium_rot, ilium_left, ilium_right)
def sacrum():
# scale factor
A = 0.215914
# pivot factor
P = 0
# name
move = 'equestrianism-pace'
# element
part = 'sacrum'
# left or right
helicity = 'left'
start = 0
end = start+0
global spine
spine_loc = ((14.937735/0.215914)*A, (-0.30611/0.215914)*A, (9.682981/0.215914)*A)
spine_rot = mathutils.Euler((math.radians(-270), math.radians(-44.5509), math.radians(180)), 'XYZ')
global sacrum
sacrum = Sacrum(P, A, move, part, helicity, start, end,
spine_loc, spine_rot, spine)
sacrum_loc = ((6.310129/0.215914)*A, (4.989754/0.215914)*A, (10.548208/0.215914)*A)
sacrum_rot = mathutils.Euler((math.radians(-90.0), math.radians(180.0), math.radians(0.0)), 'XYZ')
# position
sacrum.rig.location.x += sacrum_loc[0]
sacrum.rig.location.y += sacrum_loc[1]
sacrum.rig.location.z += sacrum_loc[2]
sacrum.rig.rotation_euler = sacrum_rot
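# main(): creates the 'movement' and 'link' collections, sets the global
# animation frame range shared by all rigs, and builds every body part in
# dependency order (child rigs before the parents that attach them).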
def main(origin):
# create new collection
newCol = bpy.data.collections.new('movement')
# link the newCol to the scene
bpy.context.scene.collection.children.link(newCol)
newCol = bpy.data.collections.new('link')
bpy.context.scene.collection.children.link(newCol)
global interval
global frame_start
global frame_end
frame_start = 0
frame_end = 240
interval = frame_end - frame_start
# formula()
head()
neck()
lowerforelimbs()
upperforelimbs()
shoulder()
tail()
lowerhindlimbs()
upperhindlimbs()
ilium()
costa()
spine()
sacrum()
if __name__ == "__main__":
# renaming of imported Collada objects
# for ob in context.collection.objects:
# if "joint_" in ob.name:
# ob.name = ob.name.replace("_", ".")
main((0.0, 0.0, 0.0))
| 33.550708 | 121 | 0.545474 | 21,205 | 146,885 | 3.700259 | 0.029238 | 0.016288 | 0.011814 | 0.009023 | 0.911845 | 0.898055 | 0.885349 | 0.866359 | 0.854953 | 0.846707 | 0 | 0.059971 | 0.287075 | 146,885 | 4,377 | 122 | 33.558373 | 0.689318 | 0.090874 | 0 | 0.837637 | 0 | 0 | 0.056434 | 0.003394 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02856 | false | 0 | 0.002347 | 0 | 0.042254 | 0.072379 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bcb1db89e8bdbf1bef79478045e185fd32126c64 | 330 | py | Python | src/ipnetblocks/exceptions/__init__.py | whois-api-llc/ip-netblocks-py | 8f815437caf06a402aadf296c95c3a808a0f93cf | [
"MIT"
] | null | null | null | src/ipnetblocks/exceptions/__init__.py | whois-api-llc/ip-netblocks-py | 8f815437caf06a402aadf296c95c3a808a0f93cf | [
"MIT"
] | null | null | null | src/ipnetblocks/exceptions/__init__.py | whois-api-llc/ip-netblocks-py | 8f815437caf06a402aadf296c95c3a808a0f93cf | [
"MIT"
] | null | null | null | __all__ = ['ParameterError', 'HttpApiError', 'IpNetblocksApiError',
'ApiAuthError', 'ResponseError', 'EmptyApiKeyError',
'UnparsableApiResponseError']
from .error import ParameterError, HttpApiError, \
IpNetblocksApiError, ApiAuthError, ResponseError, \
EmptyApiKeyError, UnparsableApiResponseError
| 41.25 | 67 | 0.745455 | 18 | 330 | 13.444444 | 0.611111 | 0.214876 | 0.371901 | 0.471074 | 0.92562 | 0.92562 | 0.92562 | 0 | 0 | 0 | 0 | 0 | 0.157576 | 330 | 7 | 68 | 47.142857 | 0.870504 | 0 | 0 | 0 | 0 | 0 | 0.339394 | 0.078788 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
bcc7c83d33e4c1818781da86f6538e9c9d5791d6 | 127 | py | Python | boa3_test/test_sc/interop_test/stdlib/Base58CheckDecodeMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/interop_test/stdlib/Base58CheckDecodeMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/interop_test/stdlib/Base58CheckDecodeMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin.interop.stdlib import base58_check_decode
def main(key: int) -> bytes:
return base58_check_decode(key)
| 21.166667 | 59 | 0.779528 | 19 | 127 | 5 | 0.789474 | 0.231579 | 0.357895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.133858 | 127 | 5 | 60 | 25.4 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
bccf99a4223608e68d51e3483f01a888e0501eb2 | 31,584 | py | Python | lib/models/axialnet.py | rajib216/Medical-Transformer | e9ef562cea29aab8e4e3026e7f241558217f347a | [
"MIT"
] | null | null | null | lib/models/axialnet.py | rajib216/Medical-Transformer | e9ef562cea29aab8e4e3026e7f241558217f347a | [
"MIT"
] | null | null | null | lib/models/axialnet.py | rajib216/Medical-Transformer | e9ef562cea29aab8e4e3026e7f241558217f347a | [
"MIT"
] | null | null | null | import pdb
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from .utils import *
import pdb
import matplotlib.pyplot as plt
import random
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
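# AxialAttention: multi-head self-attention restricted to a single axis
# (height, or width when width=True) with learned relative position
# embeddings for queries, keys and values, as in Axial-DeepLab / MedT.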
class AxialAttention(nn.Module):
def __init__(self, in_planes, out_planes, groups=8, kernel_size=56,
stride=1, bias=False, width=False):
assert (in_planes % groups == 0) and (out_planes % groups == 0)
super(AxialAttention, self).__init__()
self.in_planes = in_planes
self.out_planes = out_planes
self.groups = groups
self.group_planes = out_planes // groups
self.kernel_size = kernel_size
self.stride = stride
self.bias = bias
self.width = width
# Multi-head self attention
self.qkv_transform = qkv_transform(in_planes, out_planes * 2, kernel_size=1, stride=1,
padding=0, bias=False)
self.bn_qkv = nn.BatchNorm1d(out_planes * 2)
self.bn_similarity = nn.BatchNorm2d(groups * 3)
self.bn_output = nn.BatchNorm1d(out_planes * 2)
# Position embedding
self.relative = nn.Parameter(torch.randn(self.group_planes * 2, kernel_size * 2 - 1), requires_grad=True)
query_index = torch.arange(kernel_size).unsqueeze(0)
key_index = torch.arange(kernel_size).unsqueeze(1)
relative_index = key_index - query_index + kernel_size - 1
self.register_buffer('flatten_index', relative_index.view(-1))
if stride > 1:
self.pooling = nn.AvgPool2d(stride, stride=stride)
self.reset_parameters()
def forward(self, x):
# pdb.set_trace()
if self.width:
x = x.permute(0, 2, 1, 3)
else:
x = x.permute(0, 3, 1, 2) # N, W, C, H
N, W, C, H = x.shape
x = x.contiguous().view(N * W, C, H)
# Transformations
qkv = self.bn_qkv(self.qkv_transform(x))
q, k, v = torch.split(qkv.reshape(N * W, self.groups, self.group_planes * 2, H), [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=2)
# Calculate position embedding
all_embeddings = torch.index_select(self.relative, 1, self.flatten_index).view(self.group_planes * 2, self.kernel_size, self.kernel_size)
q_embedding, k_embedding, v_embedding = torch.split(all_embeddings, [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=0)
qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
kr = torch.einsum('bgci,cij->bgij', k, k_embedding).transpose(2, 3)
qk = torch.einsum('bgci, bgcj->bgij', q, k)
stacked_similarity = torch.cat([qk, qr, kr], dim=1)
stacked_similarity = self.bn_similarity(stacked_similarity).view(N * W, 3, self.groups, H, H).sum(dim=1)
#stacked_similarity = self.bn_qr(qr) + self.bn_kr(kr) + self.bn_qk(qk)
# (N, groups, H, H, W)
similarity = F.softmax(stacked_similarity, dim=3)
sv = torch.einsum('bgij,bgcj->bgci', similarity, v)
sve = torch.einsum('bgij,cij->bgci', similarity, v_embedding)
stacked_output = torch.cat([sv, sve], dim=-1).view(N * W, self.out_planes * 2, H)
output = self.bn_output(stacked_output).view(N, W, self.out_planes, 2, H).sum(dim=-2)
if self.width:
output = output.permute(0, 2, 1, 3)
else:
output = output.permute(0, 2, 3, 1)
if self.stride > 1:
output = self.pooling(output)
return output
def reset_parameters(self):
self.qkv_transform.weight.data.normal_(0, math.sqrt(1. / self.in_planes))
#nn.init.uniform_(self.relative, -0.1, 0.1)
nn.init.normal_(self.relative, 0., math.sqrt(1. / self.group_planes))
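# AxialAttention_dynamic: gated variant of the axial attention above. The
# scalar gates f_qr, f_kr, f_sv, f_sve weight the positional terms so that
# unreliable position encodings can be down-weighted; here the gates are
# created with requires_grad=False, i.e. kept at their initial values.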
class AxialAttention_dynamic(nn.Module):
def __init__(self, in_planes, out_planes, groups=8, kernel_size=56,
stride=1, bias=False, width=False):
assert (in_planes % groups == 0) and (out_planes % groups == 0)
super(AxialAttention_dynamic, self).__init__()
self.in_planes = in_planes
self.out_planes = out_planes
self.groups = groups
self.group_planes = out_planes // groups
self.kernel_size = kernel_size
self.stride = stride
self.bias = bias
self.width = width
# Multi-head self attention
self.qkv_transform = qkv_transform(in_planes, out_planes * 2, kernel_size=1, stride=1,
padding=0, bias=False)
self.bn_qkv = nn.BatchNorm1d(out_planes * 2)
self.bn_similarity = nn.BatchNorm2d(groups * 3)
self.bn_output = nn.BatchNorm1d(out_planes * 2)
# Priority on encoding
## Initial values
self.f_qr = nn.Parameter(torch.tensor(0.1), requires_grad=False)
self.f_kr = nn.Parameter(torch.tensor(0.1), requires_grad=False)
self.f_sve = nn.Parameter(torch.tensor(0.1), requires_grad=False)
self.f_sv = nn.Parameter(torch.tensor(1.0), requires_grad=False)
# Position embedding
self.relative = nn.Parameter(torch.randn(self.group_planes * 2, kernel_size * 2 - 1), requires_grad=True)
query_index = torch.arange(kernel_size).unsqueeze(0)
key_index = torch.arange(kernel_size).unsqueeze(1)
relative_index = key_index - query_index + kernel_size - 1
self.register_buffer('flatten_index', relative_index.view(-1))
if stride > 1:
self.pooling = nn.AvgPool2d(stride, stride=stride)
self.reset_parameters()
# self.print_para()
def forward(self, x):
if self.width:
x = x.permute(0, 2, 1, 3)
else:
x = x.permute(0, 3, 1, 2) # N, W, C, H
N, W, C, H = x.shape
x = x.contiguous().view(N * W, C, H)
# Transformations
qkv = self.bn_qkv(self.qkv_transform(x))
q, k, v = torch.split(qkv.reshape(N * W, self.groups, self.group_planes * 2, H), [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=2)
# Calculate position embedding
all_embeddings = torch.index_select(self.relative, 1, self.flatten_index).view(self.group_planes * 2, self.kernel_size, self.kernel_size)
q_embedding, k_embedding, v_embedding = torch.split(all_embeddings, [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=0)
qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
kr = torch.einsum('bgci,cij->bgij', k, k_embedding).transpose(2, 3)
qk = torch.einsum('bgci, bgcj->bgij', q, k)
# multiply by factors
qr = torch.mul(qr, self.f_qr)
kr = torch.mul(kr, self.f_kr)
stacked_similarity = torch.cat([qk, qr, kr], dim=1)
stacked_similarity = self.bn_similarity(stacked_similarity).view(N * W, 3, self.groups, H, H).sum(dim=1)
#stacked_similarity = self.bn_qr(qr) + self.bn_kr(kr) + self.bn_qk(qk)
# (N, groups, H, H, W)
similarity = F.softmax(stacked_similarity, dim=3)
sv = torch.einsum('bgij,bgcj->bgci', similarity, v)
sve = torch.einsum('bgij,cij->bgci', similarity, v_embedding)
# multiply by factors
sv = torch.mul(sv, self.f_sv)
sve = torch.mul(sve, self.f_sve)
stacked_output = torch.cat([sv, sve], dim=-1).view(N * W, self.out_planes * 2, H)
output = self.bn_output(stacked_output).view(N, W, self.out_planes, 2, H).sum(dim=-2)
if self.width:
output = output.permute(0, 2, 1, 3)
else:
output = output.permute(0, 2, 3, 1)
if self.stride > 1:
output = self.pooling(output)
return output
def reset_parameters(self):
self.qkv_transform.weight.data.normal_(0, math.sqrt(1. / self.in_planes))
#nn.init.uniform_(self.relative, -0.1, 0.1)
nn.init.normal_(self.relative, 0., math.sqrt(1. / self.group_planes))
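# AxialAttention_wopos: axial attention without any positional encoding;
# MedT uses it in the local (patch-level) branch, where positions inside a
# small patch carry little information.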
class AxialAttention_wopos(nn.Module):
def __init__(self, in_planes, out_planes, groups=8, kernel_size=56,
stride=1, bias=False, width=False):
assert (in_planes % groups == 0) and (out_planes % groups == 0)
super(AxialAttention_wopos, self).__init__()
self.in_planes = in_planes
self.out_planes = out_planes
self.groups = groups
self.group_planes = out_planes // groups
self.kernel_size = kernel_size
self.stride = stride
self.bias = bias
self.width = width
# Multi-head self attention
self.qkv_transform = qkv_transform(in_planes, out_planes * 2, kernel_size=1, stride=1,
padding=0, bias=False)
self.bn_qkv = nn.BatchNorm1d(out_planes * 2)
self.bn_similarity = nn.BatchNorm2d(groups )
self.bn_output = nn.BatchNorm1d(out_planes * 1)
if stride > 1:
self.pooling = nn.AvgPool2d(stride, stride=stride)
self.reset_parameters()
def forward(self, x):
if self.width:
x = x.permute(0, 2, 1, 3)
else:
x = x.permute(0, 3, 1, 2) # N, W, C, H
N, W, C, H = x.shape
x = x.contiguous().view(N * W, C, H)
# Transformations
qkv = self.bn_qkv(self.qkv_transform(x))
q, k, v = torch.split(qkv.reshape(N * W, self.groups, self.group_planes * 2, H), [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=2)
qk = torch.einsum('bgci, bgcj->bgij', q, k)
stacked_similarity = self.bn_similarity(qk).reshape(N * W, 1, self.groups, H, H).sum(dim=1).contiguous()
similarity = F.softmax(stacked_similarity, dim=3)
sv = torch.einsum('bgij,bgcj->bgci', similarity, v)
sv = sv.reshape(N*W,self.out_planes * 1, H).contiguous()
output = self.bn_output(sv).reshape(N, W, self.out_planes, 1, H).sum(dim=-2).contiguous()
if self.width:
output = output.permute(0, 2, 1, 3)
else:
output = output.permute(0, 2, 3, 1)
if self.stride > 1:
output = self.pooling(output)
return output
def reset_parameters(self):
self.qkv_transform.weight.data.normal_(0, math.sqrt(1. / self.in_planes))
#nn.init.uniform_(self.relative, -0.1, 0.1)
# nn.init.normal_(self.relative, 0., math.sqrt(1. / self.group_planes))
#end of attn definition
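# AxialBlock / AxialBlock_dynamic / AxialBlock_wopos: residual bottleneck
# blocks that stack one height-axis and one width-axis attention layer
# between 1x1 convolutions, mirroring a ResNet bottleneck with the 3x3
# convolution replaced by axial attention.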
class AxialBlock(nn.Module):
expansion = 2
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None, kernel_size=56):
super(AxialBlock, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
width = int(planes * (base_width / 64.))
# Both the width-axis attention block and the self.downsample layer downsample the input when stride != 1
self.conv_down = conv1x1(inplanes, width)
self.bn1 = norm_layer(width)
self.hight_block = AxialAttention(width, width, groups=groups, kernel_size=kernel_size)
self.width_block = AxialAttention(width, width, groups=groups, kernel_size=kernel_size, stride=stride, width=True)
self.conv_up = conv1x1(width, planes * self.expansion)
self.bn2 = norm_layer(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv_down(x)
out = self.bn1(out)
out = self.relu(out)
# print(out.shape)
out = self.hight_block(out)
out = self.width_block(out)
out = self.relu(out)
out = self.conv_up(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class AxialBlock_dynamic(nn.Module):
expansion = 2
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None, kernel_size=56):
super(AxialBlock_dynamic, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
width = int(planes * (base_width / 64.))
# Both the width-axis attention block and the self.downsample layer downsample the input when stride != 1
self.conv_down = conv1x1(inplanes, width)
self.bn1 = norm_layer(width)
self.hight_block = AxialAttention_dynamic(width, width, groups=groups, kernel_size=kernel_size)
self.width_block = AxialAttention_dynamic(width, width, groups=groups, kernel_size=kernel_size, stride=stride, width=True)
self.conv_up = conv1x1(width, planes * self.expansion)
self.bn2 = norm_layer(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv_down(x)
out = self.bn1(out)
out = self.relu(out)
out = self.hight_block(out)
out = self.width_block(out)
out = self.relu(out)
out = self.conv_up(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class AxialBlock_wopos(nn.Module):
expansion = 2
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None, kernel_size=56):
super(AxialBlock_wopos, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
# print(kernel_size)
width = int(planes * (base_width / 64.))
# Both the width-axis attention block and the self.downsample layer downsample the input when stride != 1
self.conv_down = conv1x1(inplanes, width)
self.conv1 = nn.Conv2d(width, width, kernel_size = 1)
self.bn1 = norm_layer(width)
self.hight_block = AxialAttention_wopos(width, width, groups=groups, kernel_size=kernel_size)
self.width_block = AxialAttention_wopos(width, width, groups=groups, kernel_size=kernel_size, stride=stride, width=True)
self.conv_up = conv1x1(width, planes * self.expansion)
self.bn2 = norm_layer(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
# pdb.set_trace()
out = self.conv_down(x)
out = self.bn1(out)
out = self.relu(out)
# print(out.shape)
out = self.hight_block(out)
out = self.width_block(out)
out = self.relu(out)
out = self.conv_up(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
#end of block definition
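# ResAxialAttentionUNet: encoder-decoder segmentation network. The encoder
# is a convolutional stem followed by four stages of axial-attention blocks;
# the decoder upsamples with bilinear interpolation + 3x3 convolutions and
# adds skip connections from the matching encoder stage.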
class ResAxialAttentionUNet(nn.Module):
def __init__(self, block, layers, num_classes=2, zero_init_residual=True,
groups=8, width_per_group=64, replace_stride_with_dilation=None,
norm_layer=None, s=0.125, img_size = 128,imgchan = 3):
super(ResAxialAttentionUNet, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
self._norm_layer = norm_layer
self.inplanes = int(64 * s)
self.dilation = 1
if replace_stride_with_dilation is None:
replace_stride_with_dilation = [False, False, False]
if len(replace_stride_with_dilation) != 3:
raise ValueError("replace_stride_with_dilation should be None "
"or a 3-element tuple, got {}".format(replace_stride_with_dilation))
self.groups = groups
self.base_width = width_per_group
self.conv1 = nn.Conv2d(imgchan, self.inplanes, kernel_size=7, stride=2, padding=3,
bias=False)
self.conv2 = nn.Conv2d(self.inplanes, 128, kernel_size=3, stride=1, padding=1, bias=False)
self.conv3 = nn.Conv2d(128, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = norm_layer(self.inplanes)
self.bn2 = norm_layer(128)
self.bn3 = norm_layer(self.inplanes)
self.relu = nn.ReLU(inplace=True)
# self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, int(128 * s), layers[0], kernel_size= (img_size//2))
self.layer2 = self._make_layer(block, int(256 * s), layers[1], stride=2, kernel_size=(img_size//2),
dilate=replace_stride_with_dilation[0])
self.layer3 = self._make_layer(block, int(512 * s), layers[2], stride=2, kernel_size=(img_size//4),
dilate=replace_stride_with_dilation[1])
self.layer4 = self._make_layer(block, int(1024 * s), layers[3], stride=2, kernel_size=(img_size//8),
dilate=replace_stride_with_dilation[2])
# Decoder
self.decoder1 = nn.Conv2d(int(1024 *2*s) , int(1024*2*s), kernel_size=3, stride=2, padding=1)
self.decoder2 = nn.Conv2d(int(1024 *2*s) , int(1024*s), kernel_size=3, stride=1, padding=1)
self.decoder3 = nn.Conv2d(int(1024*s), int(512*s), kernel_size=3, stride=1, padding=1)
self.decoder4 = nn.Conv2d(int(512*s) , int(256*s), kernel_size=3, stride=1, padding=1)
self.decoder5 = nn.Conv2d(int(256*s) , int(128*s) , kernel_size=3, stride=1, padding=1)
self.adjust = nn.Conv2d(int(128*s) , num_classes, kernel_size=1, stride=1, padding=0)
self.soft = nn.Softmax(dim=1)
def _make_layer(self, block, planes, blocks, kernel_size=56, stride=1, dilate=False):
norm_layer = self._norm_layer
downsample = None
previous_dilation = self.dilation
if dilate:
self.dilation *= stride
stride = 1
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
norm_layer(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, groups=self.groups,
base_width=self.base_width, dilation=previous_dilation,
norm_layer=norm_layer, kernel_size=kernel_size))
self.inplanes = planes * block.expansion
if stride != 1:
kernel_size = kernel_size // 2
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes, groups=self.groups,
base_width=self.base_width, dilation=self.dilation,
norm_layer=norm_layer, kernel_size=kernel_size))
return nn.Sequential(*layers)
def _forward_impl(self, x):
# AxialAttention Encoder
# pdb.set_trace()
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.relu(x)
x1 = self.layer1(x)
x2 = self.layer2(x1)
# print(x2.shape)
x3 = self.layer3(x2)
# print(x3.shape)
x4 = self.layer4(x3)
x = F.relu(F.interpolate(self.decoder1(x4), scale_factor=(2,2), mode ='bilinear'))
x = torch.add(x, x4)
x = F.relu(F.interpolate(self.decoder2(x) , scale_factor=(2,2), mode ='bilinear'))
x = torch.add(x, x3)
x = F.relu(F.interpolate(self.decoder3(x) , scale_factor=(2,2), mode ='bilinear'))
x = torch.add(x, x2)
x = F.relu(F.interpolate(self.decoder4(x) , scale_factor=(2,2), mode ='bilinear'))
x = torch.add(x, x1)
x = F.relu(F.interpolate(self.decoder5(x) , scale_factor=(2,2), mode ='bilinear'))
x = self.adjust(F.relu(x))
# pdb.set_trace()
return x
def forward(self, x):
return self._forward_impl(x)
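# medt_net: MedT architecture with two branches. The global branch (block)
# runs a shallow encoder-decoder on the full image; the local branch
# (block_2) runs a deeper encoder-decoder on a 4x4 grid of image patches.
# The two outputs are summed and fused by decoderf before the final 1x1
# adjust convolution.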
class medt_net(nn.Module):
def __init__(self, block, block_2, layers, num_classes=2, zero_init_residual=True,
groups=8, width_per_group=64, replace_stride_with_dilation=None,
norm_layer=None, s=0.125, img_size = 512,imgchan = 3):
super(medt_net, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
self._norm_layer = norm_layer
self.inplanes = int(64 * s)
self.dilation = 1
if replace_stride_with_dilation is None:
replace_stride_with_dilation = [False, False, False]
if len(replace_stride_with_dilation) != 3:
raise ValueError("replace_stride_with_dilation should be None "
"or a 3-element tuple, got {}".format(replace_stride_with_dilation))
self.groups = groups
self.base_width = width_per_group
self.conv1 = nn.Conv2d(imgchan, self.inplanes, kernel_size=7, stride=2, padding=3,
bias=False)
self.conv2 = nn.Conv2d(self.inplanes, 128, kernel_size=3, stride=1, padding=1, bias=False)
self.conv3 = nn.Conv2d(128, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = norm_layer(self.inplanes)
self.bn2 = norm_layer(128)
self.bn3 = norm_layer(self.inplanes)
# self.conv1 = nn.Conv2d(1, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = norm_layer(self.inplanes)
self.relu = nn.ReLU(inplace=True)
# self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, int(128 * s), layers[0], kernel_size= (img_size//2))
self.layer2 = self._make_layer(block, int(256 * s), layers[1], stride=2, kernel_size=(img_size//2),
dilate=replace_stride_with_dilation[0])
# self.layer3 = self._make_layer(block, int(512 * s), layers[2], stride=2, kernel_size=(img_size//4),
# dilate=replace_stride_with_dilation[1])
# self.layer4 = self._make_layer(block, int(1024 * s), layers[3], stride=2, kernel_size=(img_size//8),
# dilate=replace_stride_with_dilation[2])
# Decoder
# self.decoder1 = nn.Conv2d(int(1024 *2*s) , int(1024*2*s), kernel_size=3, stride=2, padding=1)
# self.decoder2 = nn.Conv2d(int(1024 *2*s) , int(1024*s), kernel_size=3, stride=1, padding=1)
# self.decoder3 = nn.Conv2d(int(1024*s), int(512*s), kernel_size=3, stride=1, padding=1)
self.decoder4 = nn.Conv2d(int(512*s) , int(256*s), kernel_size=3, stride=1, padding=1)
self.decoder5 = nn.Conv2d(int(256*s) , int(128*s) , kernel_size=3, stride=1, padding=1)
self.adjust = nn.Conv2d(int(128*s) , num_classes, kernel_size=1, stride=1, padding=0)
self.soft = nn.Softmax(dim=1)
self.conv1_p = nn.Conv2d(imgchan, self.inplanes, kernel_size=7, stride=2, padding=3,
bias=False)
self.conv2_p = nn.Conv2d(self.inplanes,128, kernel_size=3, stride=1, padding=1,
bias=False)
self.conv3_p = nn.Conv2d(128, self.inplanes, kernel_size=3, stride=1, padding=1,
bias=False)
# self.conv1 = nn.Conv2d(1, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1_p = norm_layer(self.inplanes)
self.bn2_p = norm_layer(128)
self.bn3_p = norm_layer(self.inplanes)
self.relu_p = nn.ReLU(inplace=True)
img_size_p = img_size // 4
self.layer1_p = self._make_layer(block_2, int(128 * s), layers[0], kernel_size= (img_size_p//2))
self.layer2_p = self._make_layer(block_2, int(256 * s), layers[1], stride=2, kernel_size=(img_size_p//2),
dilate=replace_stride_with_dilation[0])
self.layer3_p = self._make_layer(block_2, int(512 * s), layers[2], stride=2, kernel_size=(img_size_p//4),
dilate=replace_stride_with_dilation[1])
self.layer4_p = self._make_layer(block_2, int(1024 * s), layers[3], stride=2, kernel_size=(img_size_p//8),
dilate=replace_stride_with_dilation[2])
# Decoder
self.decoder1_p = nn.Conv2d(int(1024 *2*s) , int(1024*2*s), kernel_size=3, stride=2, padding=1)
self.decoder2_p = nn.Conv2d(int(1024 *2*s) , int(1024*s), kernel_size=3, stride=1, padding=1)
self.decoder3_p = nn.Conv2d(int(1024*s), int(512*s), kernel_size=3, stride=1, padding=1)
self.decoder4_p = nn.Conv2d(int(512*s) , int(256*s), kernel_size=3, stride=1, padding=1)
self.decoder5_p = nn.Conv2d(int(256*s) , int(128*s) , kernel_size=3, stride=1, padding=1)
self.decoderf = nn.Conv2d(int(128*s) , int(128*s) , kernel_size=3, stride=1, padding=1)
self.adjust_p = nn.Conv2d(int(128*s) , num_classes, kernel_size=1, stride=1, padding=0)
self.soft_p = nn.Softmax(dim=1)
def _make_layer(self, block, planes, blocks, kernel_size=56, stride=1, dilate=False):
norm_layer = self._norm_layer
downsample = None
previous_dilation = self.dilation
if dilate:
self.dilation *= stride
stride = 1
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
norm_layer(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, groups=self.groups,
base_width=self.base_width, dilation=previous_dilation,
norm_layer=norm_layer, kernel_size=kernel_size))
self.inplanes = planes * block.expansion
if stride != 1:
kernel_size = kernel_size // 2
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes, groups=self.groups,
base_width=self.base_width, dilation=self.dilation,
norm_layer=norm_layer, kernel_size=kernel_size))
return nn.Sequential(*layers)
def _forward_impl(self, x):
xin = x.clone()
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.conv3(x)
x = self.bn3(x)
# x = F.max_pool2d(x,2,2)
x = self.relu(x)
# x = self.maxpool(x)
# pdb.set_trace()
x1 = self.layer1(x)
# print(x1.shape)
x2 = self.layer2(x1)
# print(x2.shape)
# x3 = self.layer3(x2)
# # print(x3.shape)
# x4 = self.layer4(x3)
# # print(x4.shape)
# x = F.relu(F.interpolate(self.decoder1(x4), scale_factor=(2,2), mode ='bilinear'))
# x = torch.add(x, x4)
# x = F.relu(F.interpolate(self.decoder2(x4) , scale_factor=(2,2), mode ='bilinear'))
# x = torch.add(x, x3)
# x = F.relu(F.interpolate(self.decoder3(x3) , scale_factor=(2,2), mode ='bilinear'))
# x = torch.add(x, x2)
x = F.relu(F.interpolate(self.decoder4(x2) , scale_factor=(2,2), mode ='bilinear'))
x = torch.add(x, x1)
x = F.relu(F.interpolate(self.decoder5(x) , scale_factor=(2,2), mode ='bilinear'))
# print(x.shape)
# end of full image training
# y_out = torch.ones((1,2,128,128))
x_loc = x.clone()
# x = F.relu(F.interpolate(self.decoder5(x) , scale_factor=(2,2), mode ='bilinear'))
#start
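# Local branch: split the input into a 4x4 grid of 128x128 patches (the
# hard-coded slicing assumes the default img_size of 512), run each patch
# through the patch-level encoder-decoder, and write the result into x_loc.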
for i in range(0,4):
for j in range(0,4):
x_p = xin[:,:,128*i:64*2*(i+1),64*2*j:64*2*(j+1)]
# begin patch wise
x_p = self.conv1_p(x_p)
x_p = self.bn1_p(x_p)
# x = F.max_pool2d(x,2,2)
x_p = self.relu(x_p)
x_p = self.conv2_p(x_p)
x_p = self.bn2_p(x_p)
# x = F.max_pool2d(x,2,2)
x_p = self.relu(x_p)
x_p = self.conv3_p(x_p)
x_p = self.bn3_p(x_p)
# x = F.max_pool2d(x,2,2)
x_p = self.relu(x_p)
# x = self.maxpool(x)
# pdb.set_trace()
x1_p = self.layer1_p(x_p)
# print(x1.shape)
x2_p = self.layer2_p(x1_p)
# print(x2.shape)
x3_p = self.layer3_p(x2_p)
# # print(x3.shape)
x4_p = self.layer4_p(x3_p)
x_p = F.relu(F.interpolate(self.decoder1_p(x4_p), scale_factor=(2,2), mode ='bilinear'))
x_p = torch.add(x_p, x4_p)
x_p = F.relu(F.interpolate(self.decoder2_p(x_p) , scale_factor=(2,2), mode ='bilinear'))
x_p = torch.add(x_p, x3_p)
x_p = F.relu(F.interpolate(self.decoder3_p(x_p) , scale_factor=(2,2), mode ='bilinear'))
x_p = torch.add(x_p, x2_p)
x_p = F.relu(F.interpolate(self.decoder4_p(x_p) , scale_factor=(2,2), mode ='bilinear'))
x_p = torch.add(x_p, x1_p)
x_p = F.relu(F.interpolate(self.decoder5_p(x_p) , scale_factor=(2,2), mode ='bilinear'))
x_loc[:,:,64*2*i:64*2*(i+1),64*2*j:64*2*(j+1)] = x_p
x = torch.add(x,x_loc)
x = F.relu(self.decoderf(x))
x = self.adjust(F.relu(x))
# pdb.set_trace()
return x
def forward(self, x):
return self._forward_impl(x)
def axialunet(pretrained=False, **kwargs):
model = ResAxialAttentionUNet(AxialBlock, [1, 2, 4, 1], s= 0.125, **kwargs)
return model
def gated(pretrained=False, **kwargs):
model = ResAxialAttentionUNet(AxialBlock_dynamic, [1, 2, 4, 1], s= 0.125, **kwargs)
return model
def MedT(pretrained=False, **kwargs):
model = medt_net(AxialBlock_dynamic,AxialBlock_wopos, [1, 2, 4, 1], s= 0.125, **kwargs)
return model
def logo(pretrained=False, **kwargs):
model = medt_net(AxialBlock,AxialBlock, [1, 2, 4, 1], s= 0.125, **kwargs)
return model
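# Illustrative usage sketch (added for documentation, not part of the
# original training code): build the MedT model and run a forward pass.
# The default img_size of 512 matches the hard-coded 4x4 patch grid above.
#
#   import torch
#   model = MedT(num_classes=2, imgchan=3)    # img_size defaults to 512
#   x = torch.randn(1, 3, 512, 512)
#   logits = model(x)                         # -> (1, 2, 512, 512)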
# EOF
| 43.206566 | 165 | 0.578267 | 4,300 | 31,584 | 4.076279 | 0.060698 | 0.057052 | 0.023962 | 0.025217 | 0.923152 | 0.919785 | 0.903298 | 0.890632 | 0.882816 | 0.871349 | 0 | 0.043251 | 0.296511 | 31,584 | 730 | 166 | 43.265753 | 0.745623 | 0.095238 | 0 | 0.762279 | 0 | 0 | 0.015963 | 0.002018 | 0 | 0 | 0 | 0 | 0.005894 | 1 | 0.05501 | false | 0 | 0.017682 | 0.003929 | 0.127701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bcfcb2f57b157e8f9d2526c8d8a21c10ff07f605 | 197 | py | Python | stable_baselines3/custom_td3/__init__.py | sandipan1/stable-baselines3 | 5fe6d54f3bbdfade1e90ff5fc9b2506f3facdc37 | [
"MIT"
] | null | null | null | stable_baselines3/custom_td3/__init__.py | sandipan1/stable-baselines3 | 5fe6d54f3bbdfade1e90ff5fc9b2506f3facdc37 | [
"MIT"
] | null | null | null | stable_baselines3/custom_td3/__init__.py | sandipan1/stable-baselines3 | 5fe6d54f3bbdfade1e90ff5fc9b2506f3facdc37 | [
"MIT"
] | null | null | null | from stable_baselines3.custom_td3.policies import CustomTD3Policy
from stable_baselines3.custom_td3.td3 import TD3
from stable_baselines3.custom_td3.feature_extractor import CustomCombinedExtractor | 65.666667 | 82 | 0.913706 | 25 | 197 | 6.92 | 0.44 | 0.17341 | 0.346821 | 0.450867 | 0.50289 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048387 | 0.055838 | 197 | 3 | 82 | 65.666667 | 0.88172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4c11bfc98c33b9e0cfc37c8daf9ec25181cbc567 | 4,334 | py | Python | tests/tasks/test_multiplex_node_classification.py | zhjhr181/cogdl | 42e22cb891c4877029f881a1ed8ea028237fa625 | [
"MIT"
] | 1 | 2021-05-13T14:30:26.000Z | 2021-05-13T14:30:26.000Z | tests/tasks/test_multiplex_node_classification.py | LuChengTHU/cogdl | 90e6ea74209fd8a8a310efc4d7e7060bcb313a5e | [
"MIT"
] | null | null | null | tests/tasks/test_multiplex_node_classification.py | LuChengTHU/cogdl | 90e6ea74209fd8a8a310efc4d7e7060bcb313a5e | [
"MIT"
] | null | null | null | from cogdl.tasks import build_task
from cogdl.datasets import build_dataset
from cogdl.models import build_model
from cogdl.utils import build_args_from_dict
def get_default_args():
default_dict = {"hidden_size": 16, "cpu": True, "enhance": False, "save_dir": "./embedding", "checkpoint": False}
return build_args_from_dict(default_dict)
# def add_args_for_gcc(args):
# args.load_path = "./saved/gcc_pretrained.pth"
# return args
# def test_gcc_imdb():
# args = get_default_args()
# args = add_args_for_gcc(args)
# args.task = 'multiplex_node_classification'
# args.dataset = 'gtn-imdb'
# args.model = 'gcc'
# dataset = build_dataset(args)
# args.num_features = dataset.num_features
# args.num_classes = dataset.num_classes
# args.num_edge = dataset.num_edge
# args.num_nodes = dataset.num_nodes
# args.num_channels = 2
# args.num_layers = 2
# model = build_model(args)
# task = build_task(args)
# ret = task.train()
# assert ret['f1'] >= 0 and ret['f1'] <= 1
# def test_gcc_acm():
# args = get_default_args()
# args = add_args_for_gcc(args)
# args.task = 'multiplex_node_classification'
# args.dataset = 'gtn-acm'
# args.model = 'gcc'
# dataset = build_dataset(args)
# args.num_features = dataset.num_features
# args.num_classes = dataset.num_classes
# args.num_edge = dataset.num_edge
# args.num_nodes = dataset.num_nodes
# args.num_channels = 2
# args.num_layers = 2
# model = build_model(args)
# task = build_task(args)
# ret = task.train()
# assert ret['f1'] >= 0 and ret['f1'] <= 1
# def test_gcc_dblp():
# args = get_default_args()
# args = add_args_for_gcc(args)
# args.task = 'multiplex_node_classification'
# args.dataset = 'gtn-dblp'
# args.model = 'gcc'
# dataset = build_dataset(args)
# args.num_features = dataset.num_features
# args.num_classes = dataset.num_classes
# args.num_edge = dataset.num_edge
# args.num_nodes = dataset.num_nodes
# args.num_channels = 2
# args.num_layers = 2
# model = build_model(args)
# task = build_task(args)
# ret = task.train()
# assert ret['f1'] >= 0 and ret['f1'] <= 1
def test_metapath2vec_gtn_acm():
args = get_default_args()
args.task = "multiplex_node_classification"
args.dataset = "gtn-acm"
args.model = "metapath2vec"
args.walk_length = 5
args.walk_num = 1
args.window_size = 3
args.worker = 5
args.iteration = 1
args.schema = "No"
task = build_task(args)
ret = task.train()
assert ret["f1"] > 0
def test_metapath2vec_gtn_imdb():
args = get_default_args()
args.task = "multiplex_node_classification"
args.dataset = "gtn-imdb"
args.model = "metapath2vec"
args.walk_length = 5
args.walk_num = 1
args.window_size = 3
args.worker = 5
args.iteration = 1
args.schema = "No"
task = build_task(args)
ret = task.train()
assert ret["f1"] > 0
def test_pte_gtn_imdb():
args = get_default_args()
args.task = "multiplex_node_classification"
args.dataset = "gtn-imdb"
args.model = "pte"
args.walk_length = 5
args.walk_num = 1
args.negative = 3
args.batch_size = 10
args.alpha = 0.025
args.order = "No"
task = build_task(args)
ret = task.train()
assert ret["f1"] > 0
def test_pte_gtn_dblp():
args = get_default_args()
args.task = "multiplex_node_classification"
args.dataset = "gtn-dblp"
args.model = "pte"
args.walk_length = 5
args.walk_num = 1
args.negative = 3
args.batch_size = 10
args.alpha = 0.025
args.order = "No"
task = build_task(args)
ret = task.train()
assert ret["f1"] > 0
def test_hin2vec_dblp():
args = get_default_args()
args.task = "multiplex_node_classification"
args.dataset = "gtn-dblp"
args.model = "hin2vec"
args.walk_length = 5
args.walk_num = 1
args.negative = 3
args.batch_size = 1000
args.hop = 2
args.epochs = 1
args.lr = 0.025
args.cpu = True
task = build_task(args)
ret = task.train()
assert ret["f1"] > 0
if __name__ == "__main__":
test_metapath2vec_gtn_acm()
test_metapath2vec_gtn_imdb()
test_pte_gtn_imdb()
test_pte_gtn_dblp()
test_hin2vec_dblp()
| 27.257862 | 117 | 0.646516 | 604 | 4,334 | 4.370861 | 0.139073 | 0.047727 | 0.047727 | 0.054545 | 0.825758 | 0.814015 | 0.800758 | 0.800758 | 0.800758 | 0.800758 | 0 | 0.023981 | 0.230272 | 4,334 | 158 | 118 | 27.43038 | 0.767386 | 0.408629 | 0 | 0.705882 | 0 | 0 | 0.118421 | 0.057815 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0.070588 | false | 0 | 0.047059 | 0 | 0.129412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4c17e907a35434b6955dd0ad31fc187559968eb9 | 344 | py | Python | python/anyascii/_data/_1f8.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | python/anyascii/_data/_1f8.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | python/anyascii/_data/_1f8.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | b='< ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v < ^ > v \\ / \\ / - | < ^ > v \\ / \\ / < ^ > v \\ / \\ / < ^ > v \\ / \\ / < ^ > v \\ / \\ / < ^ > v \\ / \\ / < ^ > v < ^ > v < ^ > v - - - - < > < > < > < > < > < > - - \\ /' | 344 | 344 | 0.078488 | 27 | 344 | 1 | 0.074074 | 1.851852 | 2.666667 | 3.407407 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0 | 0 | 0.514535 | 344 | 1 | 344 | 344 | 0.161677 | 0 | 0 | 0 | 0 | 1 | 0.985507 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17 |
4c29b37387a58190c851de15810531e941019c93 | 9,870 | py | Python | tests/proquest/fixtures.py | arielmorelli/circulation | 6008f79c502f58484dd8c800ccc90c34b6d5e9d7 | [
"Apache-2.0"
] | null | null | null | tests/proquest/fixtures.py | arielmorelli/circulation | 6008f79c502f58484dd8c800ccc90c34b6d5e9d7 | [
"Apache-2.0"
] | null | null | null | tests/proquest/fixtures.py | arielmorelli/circulation | 6008f79c502f58484dd8c800ccc90c34b6d5e9d7 | [
"Apache-2.0"
] | null | null | null | import datetime
from webpub_manifest_parser.core.ast import CollectionList, PresentationMetadata
from webpub_manifest_parser.opds2.ast import (
OPDS2Feed,
OPDS2FeedMetadata,
OPDS2Group,
OPDS2Publication,
)
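# Static OPDS 2.0 publication and feed fixtures shared by the ProQuest tests.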
PROQUEST_PUBLICATION_1 = OPDS2Publication(
metadata=PresentationMetadata(
identifier="urn:proquest.com/document-id/1",
modified=datetime.datetime(2020, 1, 31, 0, 0, 0),
)
)
PROQUEST_PUBLICATION_2 = OPDS2Publication(
metadata=PresentationMetadata(
identifier="urn:proquest.com/document-id/2",
modified=datetime.datetime(2020, 1, 30, 0, 0, 0),
)
)
PROQUEST_PUBLICATION_3 = OPDS2Publication(
metadata=PresentationMetadata(
identifier="urn:proquest.com/document-id/3",
modified=datetime.datetime(2020, 1, 29, 0, 0, 0),
)
)
PROQUEST_PUBLICATION_4 = OPDS2Publication(
metadata=PresentationMetadata(
identifier="urn:proquest.com/document-id/4",
modified=datetime.datetime(2020, 1, 28, 0, 0, 0),
)
)
PROQUEST_FEED_PAGE_1 = OPDS2Feed(
metadata=OPDS2FeedMetadata(
title="Page # 1", current_page=1, items_per_page=10, number_of_items=20
),
groups=CollectionList(
[
OPDS2Group(
publications=CollectionList(
[PROQUEST_PUBLICATION_1, PROQUEST_PUBLICATION_2]
)
)
]
),
)
PROQUEST_FEED_PAGE_2 = OPDS2Feed(
metadata=OPDS2FeedMetadata(
title="Page # 2", current_page=2, items_per_page=10, number_of_items=20
),
groups=CollectionList(
[
OPDS2Group(
publications=CollectionList(
[PROQUEST_PUBLICATION_3, PROQUEST_PUBLICATION_4]
)
)
]
),
)
PROQUEST_RAW_PUBLICATION_1_ID = "12345"
PROQUEST_RAW_PUBLICATION_1_COVER_HREF = "http://proquest.com/covers/12345-m.jpg"
PROQUEST_RAW_PUBLICATION_2_ID = "12346"
PROQUEST_RAW_PUBLICATION_2_COVER_HREF = "http://proquest.com/covers/12346-m.jpg"
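# Raw OPDS 2.0 feed template: str.format() below fills {0}-{3} with the publication IDs
# and cover URLs defined above; the doubled braces keep the literal JSON braces.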
PROQUEST_RAW_FEED = """{{
"metadata": {{
"title": "Test Feed",
"itemsPerPage": 1,
"numberOfItems": 1
}},
"links": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"rel": "self",
"alternate": [],
"children": []
}}],
"publications": [],
"navigation": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"title": "Test",
"rel": "self",
"alternate": [],
"children": []
}}],
"facets": [],
"groups": [{{
"metadata": {{
"title": "Test Group"
}},
"links": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"rel": "self",
"alternate": [],
"children": []
}}],
"publications": [{{
"metadata": {{
"identifier": "urn:proquest.com/document-id/{0}",
"@type": "http://schema.org/Book",
"title": "Test Book 1",
"modified": "2020-11-19T08:00:00.000Z",
"published": "2020-01-15T08:06:00.000Z",
"language": [
"eng"
],
"author": [{{
"name": "Test, Author",
"links": [{{
"href": "https://catalog.feedbooks.com/catalog/index.json",
"type": "application/opds+json",
"alternate": [],
"children": []
}}]
}}],
"publisher": {{
"name": "Test Publisher",
"links": []
}},
"subject": [],
"readingProgression": "ltr"
}},
"links": [{{
"href": "https://proquest.com/lib/detail.action?docID={0}",
"type": "application/vnd.adobe.adept+xml",
"rel": "http://opds-spec.org/acquisition",
"properties": {{
"indirectAcquisition": [{{
"type": "application/epub+zip",
"alternate": [],
"children": []
}}]
}},
"language": [
"eng"
],
"alternate": [],
"children": []
}}],
"images": [{{
"href": "{1}",
"type": "image/jpeg",
"language": [
"eng"
],
"alternate": [],
"children": []
}}]
}},
{{
"metadata": {{
"identifier": "urn:proquest.com/document-id/{2}",
"@type": "http://schema.org/Book",
"title": "Test Book 2",
"modified": "2020-11-19T08:00:00.000Z",
"published": "2020-01-15T08:06:00.000Z",
"language": [
"eng"
],
"author": [{{
"name": "Test, Author",
"links": [{{
"href": "https://catalog.feedbooks.com/catalog/index.json",
"type": "application/opds+json",
"alternate": [],
"children": []
}}]
}}],
"publisher": {{
"name": "Test Publisher",
"links": []
}},
"subject": [],
"readingProgression": "ltr"
}},
"links": [{{
"href": "https://proquest.com/lib/detail.action?docID={2}",
"type": "application/vnd.adobe.adept+xml",
"rel": "http://opds-spec.org/acquisition",
"properties": {{
"indirectAcquisition": [{{
"type": "application/epub+zip",
"alternate": [],
"children": []
}}]
}},
"language": [
"eng"
],
"alternate": [],
"children": []
}}],
"images": [{{
"href": "{3}",
"type": "image/jpeg",
"language": [
"eng"
],
"alternate": [],
"children": []
}}]
}}]
}}]
}}
""".format(
PROQUEST_RAW_PUBLICATION_1_ID,
PROQUEST_RAW_PUBLICATION_1_COVER_HREF,
PROQUEST_RAW_PUBLICATION_2_ID,
PROQUEST_RAW_PUBLICATION_2_COVER_HREF,
)
PROQUEST_RAW_PUBLICATION_3_ID = "12347"
PROQUEST_RAW_PUBLICATION_3_COVER_HREF = "http://proquest.com/covers/12347-m.jpg"
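# Variant of the raw feed in which the second publication is replaced by publication 3,
# simulating an item that was removed from the original feed.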
PROQUEST_RAW_FEED_WITH_A_REMOVED_PUBLICATION = """{{
"metadata": {{
"title": "Test Feed",
"itemsPerPage": 1,
"numberOfItems": 1
}},
"links": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"rel": "self",
"alternate": [],
"children": []
}}],
"publications": [],
"navigation": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"title": "Test",
"rel": "self",
"alternate": [],
"children": []
}}],
"facets": [],
"groups": [{{
"metadata": {{
"title": "Test Group"
}},
"links": [{{
"href": "https://drafts.opds.io/schema/feed.schema.json",
"type": "application/opds+json",
"rel": "self",
"alternate": [],
"children": []
}}],
"publications": [{{
"metadata": {{
"identifier": "urn:proquest.com/document-id/{0}",
"@type": "http://schema.org/Book",
"title": "Test Book 1",
"modified": "2020-11-19T08:00:00.000Z",
"published": "2020-01-15T08:06:00.000Z",
"language": [
"eng"
],
"author": [{{
"name": "Test, Author",
"links": [{{
"href": "https://catalog.feedbooks.com/catalog/index.json",
"type": "application/opds+json",
"alternate": [],
"children": []
}}]
}}],
"publisher": {{
"name": "Test Publisher",
"links": []
}},
"subject": [],
"readingProgression": "ltr"
}},
"links": [{{
"href": "https://proquest.com/lib/detail.action?docID={0}",
"type": "application/vnd.adobe.adept+xml",
"rel": "http://opds-spec.org/acquisition",
"properties": {{
"indirectAcquisition": [{{
"type": "application/epub+zip",
"alternate": [],
"children": []
}}]
}},
"language": [
"eng"
],
"alternate": [],
"children": []
}}],
"images": [{{
"href": "{1}",
"type": "image/jpeg",
"language": [
"eng"
],
"alternate": [],
"children": []
}}]
}},
{{
"metadata": {{
"identifier": "urn:proquest.com/document-id/{2}",
"@type": "http://schema.org/Book",
"title": "Test Book 3",
"modified": "2020-11-19T08:00:00.000Z",
"published": "2020-01-15T08:06:00.000Z",
"language": [
"eng"
],
"author": [{{
"name": "Test, Author",
"links": [{{
"href": "https://catalog.feedbooks.com/catalog/index.json",
"type": "application/opds+json",
"alternate": [],
"children": []
}}]
}}],
"publisher": {{
"name": "Test Publisher",
"links": []
}},
"subject": [],
"readingProgression": "ltr"
}},
"links": [{{
"href": "https://proquest.com/lib/detail.action?docID={2}",
"type": "application/vnd.adobe.adept+xml",
"rel": "http://opds-spec.org/acquisition",
"properties": {{
"indirectAcquisition": [{{
"type": "application/epub+zip",
"alternate": [],
"children": []
}}]
}},
"language": [
"eng"
],
"alternate": [],
"children": []
}}],
"images": [{{
"href": "{3}",
"type": "image/jpeg",
"language": [
"eng"
],
"alternate": [],
"children": []
}}]
}}]
}}]
}}
""".format(
PROQUEST_RAW_PUBLICATION_1_ID,
PROQUEST_RAW_PUBLICATION_1_COVER_HREF,
PROQUEST_RAW_PUBLICATION_3_ID,
PROQUEST_RAW_PUBLICATION_3_COVER_HREF,
)
| 26.32 | 80 | 0.490476 | 847 | 9,870 | 5.589138 | 0.138135 | 0.079003 | 0.065061 | 0.048585 | 0.921208 | 0.845374 | 0.800169 | 0.792142 | 0.792142 | 0.726236 | 0 | 0.041556 | 0.314894 | 9,870 | 374 | 81 | 26.390374 | 0.658533 | 0 | 0 | 0.834254 | 0 | 0 | 0.767882 | 0.11307 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008287 | 0 | 0.008287 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4c3f0409944ec90ed0c6d88e75712e1bfc356b0a | 5,234 | py | Python | tests/test_mq/test_mq_users.py | symroe/moto | 4e106995af6f2820273528fca8a4e9ee288690a5 | [
"Apache-2.0"
] | null | null | null | tests/test_mq/test_mq_users.py | symroe/moto | 4e106995af6f2820273528fca8a4e9ee288690a5 | [
"Apache-2.0"
] | 1 | 2022-02-19T02:10:45.000Z | 2022-02-19T02:15:52.000Z | tests/test_mq/test_mq_users.py | symroe/moto | 4e106995af6f2820273528fca8a4e9ee288690a5 | [
"Apache-2.0"
] | null | null | null | import boto3
import pytest
import sure # noqa # pylint: disable=unused-import
from botocore.exceptions import ClientError
from moto import mock_mq
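# Every test provisions a broker first, since MQ users always belong to a broker.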
@mock_mq
def test_create_user():
client = boto3.client("mq", region_name="us-east-1")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[],
)["BrokerId"]
client.create_user(BrokerId=broker_id, Username="admin", Password="adm1n")
resp = client.describe_broker(BrokerId=broker_id)
resp.should.have.key("Users").equals([{"Username": "admin"}])
@mock_mq
def test_describe_user():
client = boto3.client("mq", region_name="us-east-1")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[],
)["BrokerId"]
client.create_user(
BrokerId=broker_id,
Username="admin",
Password="adm1n",
ConsoleAccess=True,
Groups=["group1", "group2"],
)
resp = client.describe_user(BrokerId=broker_id, Username="admin")
resp.should.have.key("BrokerId").equals(broker_id)
resp.should.have.key("ConsoleAccess").equals(True)
resp.should.have.key("Groups").equals(["group1", "group2"])
resp.should.have.key("Username").equals("admin")
@mock_mq
def test_describe_user_unknown():
client = boto3.client("mq", region_name="us-east-2")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[],
)["BrokerId"]
with pytest.raises(ClientError) as exc:
client.describe_user(BrokerId=broker_id, Username="unknown")
err = exc.value.response["Error"]
err["Code"].should.equal("NotFoundException")
err["Message"].should.equal(
"Can't find requested user [unknown]. Make sure your user exists."
)
@mock_mq
def test_list_users_empty():
client = boto3.client("mq", region_name="us-east-1")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[],
)["BrokerId"]
resp = client.list_users(BrokerId=broker_id)
resp.should.have.key("BrokerId").equals(broker_id)
resp.should.have.key("Users").equals([])
@mock_mq
def test_list_users():
client = boto3.client("mq", region_name="us-east-1")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[{"Username": "admin", "Password": "adm1n"}],
)["BrokerId"]
client.create_user(BrokerId=broker_id, Username="user1", Password="us3r1")
resp = client.list_users(BrokerId=broker_id)
resp.should.have.key("BrokerId").equals(broker_id)
resp.should.have.key("Users").length_of(2)
resp["Users"].should.contain({"Username": "admin"})
resp["Users"].should.contain({"Username": "user1"})
@mock_mq
def test_update_user():
client = boto3.client("mq", region_name="us-east-2")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[{"Username": "admin", "Password": "adm1n"}],
)["BrokerId"]
client.update_user(BrokerId=broker_id, Username="admin", Groups=["administrators"])
resp = client.describe_user(BrokerId=broker_id, Username="admin")
resp.should.have.key("BrokerId").equals(broker_id)
resp.should.have.key("Groups").equals(["administrators"])
resp.should.have.key("Username").equals("admin")
@mock_mq
def test_delete_user():
client = boto3.client("mq", region_name="us-east-1")
broker_id = client.create_broker(
AutoMinorVersionUpgrade=False,
BrokerName="testbroker",
DeploymentMode="dm",
EngineType="ACTIVEMQ",
EngineVersion="version",
HostInstanceType="hit",
PubliclyAccessible=True,
Users=[{"Username": "admin", "Password": "adm1n"}],
)["BrokerId"]
client.create_user(BrokerId=broker_id, Username="user1", Password="us3r1")
client.delete_user(BrokerId=broker_id, Username="admin")
resp = client.list_users(BrokerId=broker_id)
resp.should.have.key("BrokerId").equals(broker_id)
resp.should.have.key("Users").length_of(1)
resp["Users"].should.contain({"Username": "user1"})
| 30.254335 | 87 | 0.657241 | 559 | 5,234 | 6.014311 | 0.157424 | 0.059488 | 0.058299 | 0.070791 | 0.849197 | 0.840274 | 0.784355 | 0.748364 | 0.735872 | 0.734682 | 0 | 0.008049 | 0.192969 | 5,234 | 172 | 88 | 30.430233 | 0.787879 | 0.006687 | 0 | 0.724638 | 0 | 0 | 0.156851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050725 | false | 0.050725 | 0.036232 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
4c5d5820131c0d3f670ce061ec8c03a3bfe4d7c3 | 31,123 | py | Python | sdk/python/pulumi_azure/media/content_key_policy.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/media/content_key_policy.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/media/content_key_policy.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ContentKeyPolicyArgs', 'ContentKeyPolicy']
@pulumi.input_type
class ContentKeyPolicyArgs:
def __init__(__self__, *,
media_services_account_name: pulumi.Input[str],
policy_options: pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]],
resource_group_name: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ContentKeyPolicy resource.
:param pulumi.Input[str] media_services_account_name: The Media Services account name. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]] policy_options: One or more `policy_option` blocks as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[str] description: A description for the Policy.
:param pulumi.Input[str] name: The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
"""
pulumi.set(__self__, "media_services_account_name", media_services_account_name)
pulumi.set(__self__, "policy_options", policy_options)
pulumi.set(__self__, "resource_group_name", resource_group_name)
if description is not None:
pulumi.set(__self__, "description", description)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="mediaServicesAccountName")
def media_services_account_name(self) -> pulumi.Input[str]:
"""
The Media Services account name. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "media_services_account_name")
@media_services_account_name.setter
def media_services_account_name(self, value: pulumi.Input[str]):
pulumi.set(self, "media_services_account_name", value)
@property
@pulumi.getter(name="policyOptions")
def policy_options(self) -> pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]]:
"""
One or more `policy_option` blocks as defined below.
"""
return pulumi.get(self, "policy_options")
@policy_options.setter
def policy_options(self, value: pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]]):
pulumi.set(self, "policy_options", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description for the Policy.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _ContentKeyPolicyState:
def __init__(__self__, *,
description: Optional[pulumi.Input[str]] = None,
media_services_account_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_options: Optional[pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ContentKeyPolicy resources.
:param pulumi.Input[str] description: A description for the Policy.
:param pulumi.Input[str] media_services_account_name: The Media Services account name. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[str] name: The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]] policy_options: One or more `policy_option` blocks as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
if description is not None:
pulumi.set(__self__, "description", description)
if media_services_account_name is not None:
pulumi.set(__self__, "media_services_account_name", media_services_account_name)
if name is not None:
pulumi.set(__self__, "name", name)
if policy_options is not None:
pulumi.set(__self__, "policy_options", policy_options)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description for the Policy.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="mediaServicesAccountName")
def media_services_account_name(self) -> Optional[pulumi.Input[str]]:
"""
The Media Services account name. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "media_services_account_name")
@media_services_account_name.setter
def media_services_account_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "media_services_account_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="policyOptions")
def policy_options(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]]]:
"""
One or more `policy_option` blocks as defined below.
"""
return pulumi.get(self, "policy_options")
@policy_options.setter
def policy_options(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ContentKeyPolicyPolicyOptionArgs']]]]):
pulumi.set(self, "policy_options", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
class ContentKeyPolicy(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
media_services_account_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_options: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ContentKeyPolicyPolicyOptionArgs']]]]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a Content Key Policy.
## Example Usage
```python
import pulumi
import json
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_account = azure.storage.Account("exampleAccount",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
account_tier="Standard",
account_replication_type="GRS")
example_service_account = azure.media.ServiceAccount("exampleServiceAccount",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
storage_accounts=[azure.media.ServiceAccountStorageAccountArgs(
id=example_account.id,
is_primary=True,
)])
example_content_key_policy = azure.media.ContentKeyPolicy("exampleContentKeyPolicy",
resource_group_name=example_resource_group.name,
media_services_account_name=example_service_account.name,
policy_options=[
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="fairPlay",
fairplay_configuration=azure.media.ContentKeyPolicyPolicyOptionFairplayConfigurationArgs(
ask="bb566284cc124a21c435a92cd3c108c4",
pfx="MIIG7gIBAzCCBqoGCSqGSIb3DQEHAaCCBpsEggaXMIIGkzCCA7wGCSqGSIb3DQEHAaCCA60EggOpMIIDpTCCA6EGCyqGSIb3DQEMCgECoIICtjCCArIwHAYKKoZIhvcNAQwBAzAOBAiV65vFfxLDVgICB9AEggKQx2dxWefICYodVhRLSQVMJRYy5QkM1VySPAXGP744JHrb+s0Y8i/6a+a5itZGlXw3kvxyflHtSsuuBCaYJ1WOCp9jspixJEliFHXTcel96AgZlT5tB7vC6pdZnz8rb+lyxFs99x2CW52EsadoDlRsYrmkmKdnB0cx2JHJbLeXuKV/fjuRJSqCFcDa6Nre8AlBX0zKGIYGLJ1Cfpora4kNTXxu0AwEowzGmoCxqrpKbO1QDi1hZ1qHrtZ1ienAKfiTXaGH4AMQzyut0AaymxalrRbXibJYuefLRvXqx0oLZKVLAX8fR1gnac6Mrr7GkdHaKCsk4eOi98acR7bjiyRRVYYS4B6Y0tCeRJNe6zeYVmLdtatuOlOEVDT6AKrJJMFMyITVS+2D771ge6m37FbJ36K3/eT/HRq1YDsxfD/BY+X7eMIwQrVnD5nK7avXfbIni57n5oWLkE9Vco8uBlMdrx4xHt9vpe42Pz2Yh2O4WtvxcgxrAknvPpV1ZsAJCfvm9TTcg8qZpjyePn3B9TvFVSXMJHn/rzu6OJAgFgVFAe1tPGLh1XBxAvwpB8EqcycIIUUFUBy4HgYCicjI2jp6s8Kk293Uc/TA2623LrWgP/Xm5hVB7lP1k6W9LDivOlAA96D0Cbk08Yv6arkCYj7ONFO8VZbO0zKAAOLHMw/ZQRIutGLrDlqgTDeRXRuReX7TNjDBxp2rzJBY0uU5g9BMFxQrbQwEx9HsnO4dVFG4KLbHmYWhlwS2V2uZtY6D6elOXY3SX50RwhC4+0trUMi/ODtOxAc+lMQk2FNDcNeKIX5wHwFRS+sFBu5Um4Jfj6Ua4w1izmu2KiPfDd3vJsm5Dgcci3fPfdSfpIq4uR6d3JQxgdcwEwYJKoZIhvcNAQkVMQYEBAEAAAAwWwYJKoZIhvcNAQkUMU4eTAB7ADcAMQAxADAANABBADgARgAtADQAQgBFADAALQA0AEEAMgA4AC0AOAAyADIANQAtAEYANwBBADcAMwBGAEMAQQAwAEMARABEAH0wYwYJKwYBBAGCNxEBMVYeVABNAGkAYwByAG8AcwBvAGYAdAAgAEIAYQBzAGUAIABDAHIAeQBwAHQAbwBnAHIAYQBwAGgAaQBjACAAUAByAG8AdgBpAGQAZQByACAAdgAxAC4AMDCCAs8GCSqGSIb3DQEHBqCCAsAwggK8AgEAMIICtQYJKoZIhvcNAQcBMBwGCiqGSIb3DQEMAQMwDgQISS7mG/riQJkCAgfQgIICiPSGg5axP4JM+GmiVEqOHTVAPw2AM8OPnn1q0mIw54oC2WOJw3FFThYHmxTQzQ1feVmnkVCv++eFp+BYTcWTa+ehl/3/Nvr5uLTzDxmCShacKwoWXOKtSLh6mmgydvMqSf6xv1bPsloodtrRxhprI2lBNBW2uw8az9eLdvURYmhjGPf9klEy/6OCA5jDT5XZMunwiQT5mYNMF7wAQ5PCz2dJQqm1n72A6nUHPkHEusN7iH/+mv5d3iaKxn7/ShxLKHfjMd+r/gv27ylshVHiN4mVStAg+MiLrVvr5VH46p6oosImvS3ZO4D5wTmh/6wtus803qN4QB/Y9n4rqEJ4Dn619h+6O7FChzWkx7kvYIzIxvfnj1PCFTEjUwc7jbuF013W/z9zQi2YEq9AzxMcGro0zjdt2sf30zXSfaRNt0UHHRDkLo7yFUJG5Ka1uWU8paLuXUUiiMUf24Bsfdg2A2n+3Qa7g25OvAM1QTpMwmMWL9sY2hxVUGIKVrnj8c4EKuGJjVDXrze5g9O/LfZr5VSjGu5KsN0eYI3mcePF7XM0azMtTNQYVRmeWxYW+XvK5MaoLEkrFG8C5+JccIlN588jowVIPqP321S/EyFiAmrRdAWkqrc9KH+/eINCFqjut2YPkCaTM9mnJAAqWgggUWkrOKT/ByS6IAQwyEBNFbY0TWyxKt6vZL1EW/6HgZCsxeYycNhnPr2qJNZZMNzmdMRp2GRLcfBH8KFw1rAyua0VJoTLHb23ZAsEY74BrEEiK9e/oOjXkHzQjlmrfQ9rSN2eQpRrn0W8I229WmBO2suG+AQ3aY8kDtBMkjmJno7txUh1K5D6tJTO7MQp343A2AhyJkhYA7NPnDA7MB8wBwYFKw4DAhoEFPO82HDlCzlshWlnMoQPStm62TMEBBQsPmvwbZ5OlwC9+NDF1AC+t67WTgICB9A=",
pfx_password="password",
rental_duration_seconds=2249,
rental_and_lease_key_type="PersistentUnlimited",
),
open_restriction_enabled=True,
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="playReady",
playready_configuration_licenses=[azure.media.ContentKeyPolicyPolicyOptionPlayreadyConfigurationLicenseArgs(
allow_test_devices=True,
begin_date="2017-10-16T18:22:53Z",
play_right=azure.media.ContentKeyPolicyPolicyOptionPlayreadyConfigurationLicensePlayRightArgs(
scms_restriction=2,
digital_video_only_content_restriction=False,
image_constraint_for_analog_component_video_restriction=False,
image_constraint_for_analog_computer_monitor_restriction=False,
allow_passing_video_content_to_unknown_output="NotAllowed",
uncompressed_digital_video_opl=100,
uncompressed_digital_audio_opl=100,
analog_video_opl=150,
compressed_digital_audio_opl=150,
),
license_type="Persistent",
content_type="UltraVioletDownload",
content_key_location_from_header_enabled=True,
)],
open_restriction_enabled=True,
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="clearKey",
clear_key_configuration_enabled=True,
token_restriction=azure.media.ContentKeyPolicyPolicyOptionTokenRestrictionArgs(
issuer="urn:issuer",
audience="urn:audience",
token_type="Swt",
primary_symmetric_token_key="AAAAAAAAAAAAAAAAAAAAAA==",
),
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="widevine",
widevine_configuration_template=json.dumps({
"allowed_track_types": "SD_HD",
"content_key_specs": [{
"track_type": "SD",
"security_level": 1,
"required_output_protection": {
"hdcp": "HDCP_V2",
},
}],
"policy_overrides": {
"can_play": True,
"can_persist": True,
"can_renew": False,
},
}),
open_restriction_enabled=True,
),
])
```
## Import
Resource Groups can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:media/contentKeyPolicy:ContentKeyPolicy example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Media/mediaservices/account1/contentkeypolicies/policy1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A description for the Policy.
:param pulumi.Input[str] media_services_account_name: The Media Services account name. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[str] name: The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ContentKeyPolicyPolicyOptionArgs']]]] policy_options: One or more `policy_option` blocks as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ContentKeyPolicyArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Content Key Policy.
## Example Usage
```python
import pulumi
import json
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_account = azure.storage.Account("exampleAccount",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
account_tier="Standard",
account_replication_type="GRS")
example_service_account = azure.media.ServiceAccount("exampleServiceAccount",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
storage_accounts=[azure.media.ServiceAccountStorageAccountArgs(
id=example_account.id,
is_primary=True,
)])
example_content_key_policy = azure.media.ContentKeyPolicy("exampleContentKeyPolicy",
resource_group_name=example_resource_group.name,
media_services_account_name=example_service_account.name,
policy_options=[
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="fairPlay",
fairplay_configuration=azure.media.ContentKeyPolicyPolicyOptionFairplayConfigurationArgs(
ask="bb566284cc124a21c435a92cd3c108c4",
pfx="MIIG7gIBAzCCBqoGCSqGSIb3DQEHAaCCBpsEggaXMIIGkzCCA7wGCSqGSIb3DQEHAaCCA60EggOpMIIDpTCCA6EGCyqGSIb3DQEMCgECoIICtjCCArIwHAYKKoZIhvcNAQwBAzAOBAiV65vFfxLDVgICB9AEggKQx2dxWefICYodVhRLSQVMJRYy5QkM1VySPAXGP744JHrb+s0Y8i/6a+a5itZGlXw3kvxyflHtSsuuBCaYJ1WOCp9jspixJEliFHXTcel96AgZlT5tB7vC6pdZnz8rb+lyxFs99x2CW52EsadoDlRsYrmkmKdnB0cx2JHJbLeXuKV/fjuRJSqCFcDa6Nre8AlBX0zKGIYGLJ1Cfpora4kNTXxu0AwEowzGmoCxqrpKbO1QDi1hZ1qHrtZ1ienAKfiTXaGH4AMQzyut0AaymxalrRbXibJYuefLRvXqx0oLZKVLAX8fR1gnac6Mrr7GkdHaKCsk4eOi98acR7bjiyRRVYYS4B6Y0tCeRJNe6zeYVmLdtatuOlOEVDT6AKrJJMFMyITVS+2D771ge6m37FbJ36K3/eT/HRq1YDsxfD/BY+X7eMIwQrVnD5nK7avXfbIni57n5oWLkE9Vco8uBlMdrx4xHt9vpe42Pz2Yh2O4WtvxcgxrAknvPpV1ZsAJCfvm9TTcg8qZpjyePn3B9TvFVSXMJHn/rzu6OJAgFgVFAe1tPGLh1XBxAvwpB8EqcycIIUUFUBy4HgYCicjI2jp6s8Kk293Uc/TA2623LrWgP/Xm5hVB7lP1k6W9LDivOlAA96D0Cbk08Yv6arkCYj7ONFO8VZbO0zKAAOLHMw/ZQRIutGLrDlqgTDeRXRuReX7TNjDBxp2rzJBY0uU5g9BMFxQrbQwEx9HsnO4dVFG4KLbHmYWhlwS2V2uZtY6D6elOXY3SX50RwhC4+0trUMi/ODtOxAc+lMQk2FNDcNeKIX5wHwFRS+sFBu5Um4Jfj6Ua4w1izmu2KiPfDd3vJsm5Dgcci3fPfdSfpIq4uR6d3JQxgdcwEwYJKoZIhvcNAQkVMQYEBAEAAAAwWwYJKoZIhvcNAQkUMU4eTAB7ADcAMQAxADAANABBADgARgAtADQAQgBFADAALQA0AEEAMgA4AC0AOAAyADIANQAtAEYANwBBADcAMwBGAEMAQQAwAEMARABEAH0wYwYJKwYBBAGCNxEBMVYeVABNAGkAYwByAG8AcwBvAGYAdAAgAEIAYQBzAGUAIABDAHIAeQBwAHQAbwBnAHIAYQBwAGgAaQBjACAAUAByAG8AdgBpAGQAZQByACAAdgAxAC4AMDCCAs8GCSqGSIb3DQEHBqCCAsAwggK8AgEAMIICtQYJKoZIhvcNAQcBMBwGCiqGSIb3DQEMAQMwDgQISS7mG/riQJkCAgfQgIICiPSGg5axP4JM+GmiVEqOHTVAPw2AM8OPnn1q0mIw54oC2WOJw3FFThYHmxTQzQ1feVmnkVCv++eFp+BYTcWTa+ehl/3/Nvr5uLTzDxmCShacKwoWXOKtSLh6mmgydvMqSf6xv1bPsloodtrRxhprI2lBNBW2uw8az9eLdvURYmhjGPf9klEy/6OCA5jDT5XZMunwiQT5mYNMF7wAQ5PCz2dJQqm1n72A6nUHPkHEusN7iH/+mv5d3iaKxn7/ShxLKHfjMd+r/gv27ylshVHiN4mVStAg+MiLrVvr5VH46p6oosImvS3ZO4D5wTmh/6wtus803qN4QB/Y9n4rqEJ4Dn619h+6O7FChzWkx7kvYIzIxvfnj1PCFTEjUwc7jbuF013W/z9zQi2YEq9AzxMcGro0zjdt2sf30zXSfaRNt0UHHRDkLo7yFUJG5Ka1uWU8paLuXUUiiMUf24Bsfdg2A2n+3Qa7g25OvAM1QTpMwmMWL9sY2hxVUGIKVrnj8c4EKuGJjVDXrze5g9O/LfZr5VSjGu5KsN0eYI3mcePF7XM0azMtTNQYVRmeWxYW+XvK5MaoLEkrFG8C5+JccIlN588jowVIPqP321S/EyFiAmrRdAWkqrc9KH+/eINCFqjut2YPkCaTM9mnJAAqWgggUWkrOKT/ByS6IAQwyEBNFbY0TWyxKt6vZL1EW/6HgZCsxeYycNhnPr2qJNZZMNzmdMRp2GRLcfBH8KFw1rAyua0VJoTLHb23ZAsEY74BrEEiK9e/oOjXkHzQjlmrfQ9rSN2eQpRrn0W8I229WmBO2suG+AQ3aY8kDtBMkjmJno7txUh1K5D6tJTO7MQp343A2AhyJkhYA7NPnDA7MB8wBwYFKw4DAhoEFPO82HDlCzlshWlnMoQPStm62TMEBBQsPmvwbZ5OlwC9+NDF1AC+t67WTgICB9A=",
pfx_password="password",
rental_duration_seconds=2249,
rental_and_lease_key_type="PersistentUnlimited",
),
open_restriction_enabled=True,
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="playReady",
playready_configuration_licenses=[azure.media.ContentKeyPolicyPolicyOptionPlayreadyConfigurationLicenseArgs(
allow_test_devices=True,
begin_date="2017-10-16T18:22:53Z",
play_right=azure.media.ContentKeyPolicyPolicyOptionPlayreadyConfigurationLicensePlayRightArgs(
scms_restriction=2,
digital_video_only_content_restriction=False,
image_constraint_for_analog_component_video_restriction=False,
image_constraint_for_analog_computer_monitor_restriction=False,
allow_passing_video_content_to_unknown_output="NotAllowed",
uncompressed_digital_video_opl=100,
uncompressed_digital_audio_opl=100,
analog_video_opl=150,
compressed_digital_audio_opl=150,
),
license_type="Persistent",
content_type="UltraVioletDownload",
content_key_location_from_header_enabled=True,
)],
open_restriction_enabled=True,
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="clearKey",
clear_key_configuration_enabled=True,
token_restriction=azure.media.ContentKeyPolicyPolicyOptionTokenRestrictionArgs(
issuer="urn:issuer",
audience="urn:audience",
token_type="Swt",
primary_symmetric_token_key="AAAAAAAAAAAAAAAAAAAAAA==",
),
),
azure.media.ContentKeyPolicyPolicyOptionArgs(
name="widevine",
widevine_configuration_template=json.dumps({
"allowed_track_types": "SD_HD",
"content_key_specs": [{
"track_type": "SD",
"security_level": 1,
"required_output_protection": {
"hdcp": "HDCP_V2",
},
}],
"policy_overrides": {
"can_play": True,
"can_persist": True,
"can_renew": False,
},
}),
open_restriction_enabled=True,
),
])
```
## Import
Resource Groups can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:media/contentKeyPolicy:ContentKeyPolicy example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Media/mediaservices/account1/contentkeypolicies/policy1
```
:param str resource_name: The name of the resource.
:param ContentKeyPolicyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ContentKeyPolicyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
media_services_account_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_options: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ContentKeyPolicyPolicyOptionArgs']]]]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ContentKeyPolicyArgs.__new__(ContentKeyPolicyArgs)
__props__.__dict__["description"] = description
if media_services_account_name is None and not opts.urn:
raise TypeError("Missing required property 'media_services_account_name'")
__props__.__dict__["media_services_account_name"] = media_services_account_name
__props__.__dict__["name"] = name
if policy_options is None and not opts.urn:
raise TypeError("Missing required property 'policy_options'")
__props__.__dict__["policy_options"] = policy_options
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
super(ContentKeyPolicy, __self__).__init__(
'azure:media/contentKeyPolicy:ContentKeyPolicy',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
media_services_account_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_options: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ContentKeyPolicyPolicyOptionArgs']]]]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None) -> 'ContentKeyPolicy':
"""
Get an existing ContentKeyPolicy resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A description for the Policy.
:param pulumi.Input[str] media_services_account_name: The Media Services account name. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[str] name: The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ContentKeyPolicyPolicyOptionArgs']]]] policy_options: One or more `policy_option` blocks as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ContentKeyPolicyState.__new__(_ContentKeyPolicyState)
__props__.__dict__["description"] = description
__props__.__dict__["media_services_account_name"] = media_services_account_name
__props__.__dict__["name"] = name
__props__.__dict__["policy_options"] = policy_options
__props__.__dict__["resource_group_name"] = resource_group_name
return ContentKeyPolicy(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
A description for the Policy.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="mediaServicesAccountName")
def media_services_account_name(self) -> pulumi.Output[str]:
"""
The Media Services account name. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "media_services_account_name")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name which should be used for this Content Key Policy. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="policyOptions")
def policy_options(self) -> pulumi.Output[Sequence['outputs.ContentKeyPolicyPolicyOption']]:
"""
One or more `policy_option` blocks as defined below.
"""
return pulumi.get(self, "policy_options")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the Resource Group where the Content Key Policy should exist. Changing this forces a new Content Key Policy to be created.
"""
return pulumi.get(self, "resource_group_name")
| 58.173832 | 2,403 | 0.683482 | 2,751 | 31,123 | 7.474736 | 0.119229 | 0.043865 | 0.036765 | 0.047853 | 0.908574 | 0.899771 | 0.88776 | 0.878277 | 0.867821 | 0.85897 | 0 | 0.035287 | 0.243325 | 31,123 | 534 | 2,404 | 58.282772 | 0.837877 | 0.580953 | 0 | 0.642512 | 1 | 0 | 0.143159 | 0.067401 | 0 | 1 | 0 | 0 | 0 | 1 | 0.154589 | false | 0.004831 | 0.033816 | 0 | 0.280193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
d5bf55dc847932fbca686790affcc584a63a5b8e | 123 | py | Python | elm/nn/__init__.py | jinxu06/gsubsampling | 2e0cace553cf43835709a34a11f9c15b08c15004 | [
"Apache-2.0"
] | 12 | 2021-06-11T12:17:58.000Z | 2021-12-16T07:36:47.000Z | elm/nn/__init__.py | jinxu06/gsubsampling | 2e0cace553cf43835709a34a11f9c15b08c15004 | [
"Apache-2.0"
] | null | null | null | elm/nn/__init__.py | jinxu06/gsubsampling | 2e0cace553cf43835709a34a11f9c15b08c15004 | [
"Apache-2.0"
] | 1 | 2022-01-31T19:39:06.000Z | 2022-01-31T19:39:06.000Z | from .mlp import *
from .conv_nn import *
from .eqv_conv_nn import *
from .eqv_gconv_nn import *
from .gconv_nn import *
| 20.5 | 27 | 0.739837 | 21 | 123 | 4.047619 | 0.333333 | 0.470588 | 0.423529 | 0.376471 | 0.447059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178862 | 123 | 5 | 28 | 24.6 | 0.841584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
d5fa93693d5bfc0db502b84f0d6940851232fe3c | 96 | py | Python | cd_perf_promotion/__init__.py | CDKGlobal/cd-performance-plugin | 58176139ef744535b156b8ef5f187f38b683b2a5 | [
"MIT"
] | null | null | null | cd_perf_promotion/__init__.py | CDKGlobal/cd-performance-plugin | 58176139ef744535b156b8ef5f187f38b683b2a5 | [
"MIT"
] | null | null | null | cd_perf_promotion/__init__.py | CDKGlobal/cd-performance-plugin | 58176139ef744535b156b8ef5f187f38b683b2a5 | [
"MIT"
] | null | null | null | import cd_perf_promotion.engines
import cd_perf_promotion.modules
import cd_perf_promotion.main
| 24 | 32 | 0.90625 | 15 | 96 | 5.4 | 0.466667 | 0.296296 | 0.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 96 | 3 | 33 | 32 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
91171a4ac2828663926df89b4fe3da2479822c5a | 5,024 | py | Python | modules/boost/simd/arithmetic/unit/py/tb_details.py | pbrunet/nt2 | 2aeca0f6a315725b335efd5d9dc95d72e10a7fb7 | [
"BSL-1.0"
] | 2 | 2016-09-14T00:23:53.000Z | 2018-01-14T12:51:18.000Z | modules/boost/simd/arithmetic/unit/py/tb_details.py | pbrunet/nt2 | 2aeca0f6a315725b335efd5d9dc95d72e10a7fb7 | [
"BSL-1.0"
] | null | null | null | modules/boost/simd/arithmetic/unit/py/tb_details.py | pbrunet/nt2 | 2aeca0f6a315725b335efd5d9dc95d72e10a7fb7 | [
"BSL-1.0"
] | null | null | null | replct = {
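    # One entry per tested function: "rnges" gives the input sampling ranges and "specf"
    # the special values to exercise; "second_call" is assumed None (no value specified).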
"abs" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"amul" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"arg" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"average" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"ceil" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"correct_fma" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"dist" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"fam" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"fast_hypot" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"floor" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"fma" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"hypot" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"iceil" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"idivceil" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"idivfix" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"idivfloor" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"idivround" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"ifloor" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"iround2even" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"iround" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"ldiv" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"ldivide" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"logical_xor" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"madd" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"max" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"min" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"minmod" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"minusone" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"mod" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"oneminus" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"oneplus" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"random" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"rdivide" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"rec" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"remainder" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"rem" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"remquo" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"round" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"rsqrt" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"sqr" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"sqrt1pm1" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"sqrt" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"trunc" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"two_add" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"two_prod" :{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]},
"two_split":{"second_call : ,"rnges" : [[["-inf","inf"]]],"specf" = ["Zero","One","Mone","Inf","Minf","Nan"]}
}
| 102.530612 | 113 | 0.495422 | 605 | 5,024 | 4.028099 | 0.097521 | 0.188757 | 0.283135 | 0.339762 | 0.897415 | 0.897415 | 0.897415 | 0.897415 | 0.897415 | 0.897415 | 0 | 0.000645 | 0.073846 | 5,024 | 48 | 114 | 104.666667 | 0.523103 | 0 | 0 | 0 | 0 | 0 | 0.375199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
9154352721cae2f6da911de514c8c369cc6d33a0 | 164 | py | Python | campy/stanford/__init__.py | TristenSeth/campy | 9e726c342d682239e1c19e6f5645c0b2167d7fab | [
"MIT"
] | null | null | null | campy/stanford/__init__.py | TristenSeth/campy | 9e726c342d682239e1c19e6f5645c0b2167d7fab | [
"MIT"
] | null | null | null | campy/stanford/__init__.py | TristenSeth/campy | 9e726c342d682239e1c19e6f5645c0b2167d7fab | [
"MIT"
] | null | null | null | """Get it?! This is the `campy.stanford` subpackage. Like Camp Stanford!"""
print('Did you mean camp.stanford?')
print('Like "Camp Stanford!"')
print("I'm funny!")
| 32.8 | 75 | 0.689024 | 25 | 164 | 4.52 | 0.68 | 0.318584 | 0.451327 | 0.371681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115854 | 164 | 4 | 76 | 41 | 0.77931 | 0.420732 | 0 | 0 | 0 | 0 | 0.651685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
e672886083ade9b46fbafc45a3e3f86230aa2a21 | 9,737 | py | Python | python/example/water_snake_ppo.py | brokencuph/diff_pd | 2c30ecfa39762c5fc78dea9c7a226000e9fc5c15 | [
"MIT"
] | 4 | 2022-03-11T20:13:17.000Z | 2022-03-31T00:49:59.000Z | python/example/water_snake_ppo.py | srl-ethz/diffPD_sim2real | e491668995a163b8ff7542d99f0b4e0c0f4ed2df | [
"MIT"
] | null | null | null | python/example/water_snake_ppo.py | srl-ethz/diffPD_sim2real | e491668995a163b8ff7542d99f0b4e0c0f4ed2df | [
"MIT"
] | 2 | 2022-03-11T20:13:24.000Z | 2022-03-12T03:38:46.000Z | import os
from pathlib import Path
import sys
import time
from functools import partial
import math
import random
import copy
from collections import deque
import logging
sys.path.append(str(Path(__file__).resolve().parent.parent))
import scipy
import scipy.optimize
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.proj3d import proj_transform
import matplotlib.animation as animation
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
from py_diff_pd.core.py_diff_pd_core import HexMesh3d, HexDeformable, StdRealVector
from py_diff_pd.common.common import create_folder, ndarray, print_info
from py_diff_pd.common.hex_mesh import generate_hex_mesh, get_boundary_face
from py_diff_pd.common.display import export_gif, Arrow3D
from py_diff_pd.common.rl_sim import DiffPDTask, make_water_snake_3d, tensor, MyGaussianActorCriticNet, get_logger, MeanStdNormalizer, AdaSim, IndSim
from deep_rl.utils import generate_tag, Config, set_one_thread
from deep_rl.agent import BaseNet, FCBody, PPOAgent
from deep_rl.network import GaussianActorCriticNet
def ppo_ada():
seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.set_default_dtype(torch.float64)
set_one_thread()
folder = Path('water_snake').resolve() / 'Ada_PPO'
ckpt_folder = folder / 'checkpoints'
video_folder = folder / 'videos'
folder.mkdir(parents=True, exist_ok=True)
ckpt_folder.mkdir(parents=True, exist_ok=True)
video_folder.mkdir(parents=True, exist_ok=True)
kwargs = {
'game': 'water_snake'
}
generate_tag(kwargs)
kwargs.setdefault('log_level', 0)
config = Config()
config.merge(kwargs)
config.num_workers = 4
config.task_fn = lambda: DiffPDTask(make_water_snake_3d, AdaSim, seed, config.num_workers, False) # pylint: disable=no-member
config.eval_env = DiffPDTask(make_water_snake_3d, AdaSim, seed, 1, True)
config.network_fn = lambda: MyGaussianActorCriticNet(
config.state_dim, config.action_dim,
actor_body=FCBody(config.state_dim, hidden_units=(64, 64), gate=torch.tanh),
critic_body=FCBody(config.state_dim, hidden_units=(64, 64), gate=torch.tanh))
config.actor_opt_fn = lambda params: torch.optim.Adam(params, 3e-4)
config.critic_opt_fn = lambda params: torch.optim.Adam(params, 1e-3)
config.discount = 0.99
config.use_gae = True
config.gae_tau = 0.95
config.gradient_clip = 0.5
config.rollout_length = 1000
config.eval_interval = config.rollout_length * config.num_workers
config.optimization_epochs = 10
config.mini_batch_size = 64
config.ppo_ratio_clip = 0.2
config.log_interval = config.rollout_length * config.num_workers
config.save_interval = config.rollout_length * config.num_workers * 10
config.max_steps = 1e6
config.target_kl = 0.01
config.state_normalizer = MeanStdNormalizer(read_only=True)
agent = PPOAgent(config)
agent.logger = get_logger(folder)
config = agent.config
init_ckpt = torch.load(folder.parent / 'Ada' / 'checkpoints' / '0.pth', map_location='cpu')['state_dict']
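    # Copy the pretrained 'Ada' policy weights into the actor body and action head before PPO fine-tuning.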
with torch.no_grad():
for name, param in agent.network.named_parameters():
if name == 'actor_body.layers.0.weight':
param.copy_(init_ckpt['layers.0.linear.weight'])
elif name == 'actor_body.layers.0.bias':
param.copy_(init_ckpt['layers.0.linear.bias'])
elif name == 'actor_body.layers.1.weight':
param.copy_(init_ckpt['layers.1.linear.weight'])
elif name == 'actor_body.layers.1.bias':
param.copy_(init_ckpt['layers.1.linear.bias'])
elif name == 'fc_action.weight':
param.copy_(init_ckpt['layers.2.weight'])
elif name == 'fc_action.bias':
param.copy_(init_ckpt['layers.2.bias'])
print(agent.network)
log = []
t0 = time.time()
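    # Training loop: checkpoint, log throughput, and evaluate the deterministic (mean) policy
    # at fixed intervals, then take one PPO step.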
while True:
last_step = config.max_steps and agent.total_steps >= config.max_steps
if last_step or agent.total_steps % config.save_interval == 0:
agent.save(ckpt_folder / f'{agent.total_steps}.pth')
if last_step or agent.total_steps % config.log_interval == 0:
agent.logger.info('steps %d, %.2f steps/s' % (agent.total_steps, config.log_interval / (time.time() - t0)))
t0 = time.time()
if last_step or agent.total_steps % config.eval_interval == 0:
config.state_normalizer.set_read_only()
state = config.eval_env.reset()
total_reward = 0.0
with torch.no_grad():
while True:
state = config.state_normalizer(state)
action = agent.network(state)['mean'].cpu().detach().numpy()
state, reward, done, info = config.eval_env.step(action)
total_reward += reward
if done:
break
agent.logger.add_scalar('episode_reward', total_reward, agent.total_steps)
log.append([agent.total_steps, total_reward])
if last_step:
agent.close()
break
config.state_normalizer.set_read_only()
agent.step()
torch.save(log, folder / 'log.pth')
def ppo_ind():
seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.set_default_dtype(torch.float64)
set_one_thread()
folder = Path('water_snake').resolve() / 'Ind_PPO'
ckpt_folder = folder / 'checkpoints'
video_folder = folder / 'videos'
folder.mkdir(parents=True, exist_ok=True)
ckpt_folder.mkdir(parents=True, exist_ok=True)
video_folder.mkdir(parents=True, exist_ok=True)
kwargs = {
'game': 'water_snake'
}
generate_tag(kwargs)
kwargs.setdefault('log_level', 0)
config = Config()
config.merge(kwargs)
config.num_workers = 4
config.task_fn = lambda: DiffPDTask(make_water_snake_3d, IndSim, seed, config.num_workers, False) # pylint: disable=no-member
config.eval_env = DiffPDTask(make_water_snake_3d, IndSim, seed, 1, True)
config.network_fn = lambda: MyGaussianActorCriticNet(
config.state_dim, config.action_dim,
actor_body=FCBody(config.state_dim, hidden_units=(64, 64), gate=torch.tanh),
critic_body=FCBody(config.state_dim, hidden_units=(64, 64), gate=torch.tanh))
config.actor_opt_fn = lambda params: torch.optim.Adam(params, 3e-4)
config.critic_opt_fn = lambda params: torch.optim.Adam(params, 1e-3)
config.discount = 0.99
config.use_gae = True
config.gae_tau = 0.95
config.gradient_clip = 0.5
config.rollout_length = 1000
config.eval_interval = config.rollout_length * config.num_workers
config.optimization_epochs = 10
config.mini_batch_size = 64
config.ppo_ratio_clip = 0.2
config.log_interval = config.rollout_length * config.num_workers
config.save_interval = config.rollout_length * config.num_workers * 10
config.max_steps = 1e6
config.target_kl = 0.01
config.state_normalizer = MeanStdNormalizer(read_only=True)
agent = PPOAgent(config)
agent.logger = get_logger(folder)
config = agent.config
init_ckpt = torch.load(folder.parent / 'Ind' / 'checkpoints' / '0.pth', map_location='cpu')['state_dict']
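    # Copy the pretrained 'Ind' policy weights into the actor body and action head before PPO fine-tuning.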
with torch.no_grad():
for name, param in agent.network.named_parameters():
if name == 'actor_body.layers.0.weight':
param.copy_(init_ckpt['layers.0.linear.weight'])
elif name == 'actor_body.layers.0.bias':
param.copy_(init_ckpt['layers.0.linear.bias'])
elif name == 'actor_body.layers.1.weight':
param.copy_(init_ckpt['layers.1.linear.weight'])
elif name == 'actor_body.layers.1.bias':
param.copy_(init_ckpt['layers.1.linear.bias'])
elif name == 'fc_action.weight':
param.copy_(init_ckpt['layers.2.weight'])
elif name == 'fc_action.bias':
param.copy_(init_ckpt['layers.2.bias'])
print(agent.network)
log = []
t0 = time.time()
while True:
last_step = config.max_steps and agent.total_steps >= config.max_steps
if last_step or agent.total_steps % config.save_interval == 0:
agent.save(ckpt_folder / f'{agent.total_steps}.pth')
if last_step or agent.total_steps % config.log_interval == 0:
agent.logger.info('steps %d, %.2f steps/s' % (agent.total_steps, config.log_interval / (time.time() - t0)))
t0 = time.time()
if last_step or agent.total_steps % config.eval_interval == 0:
config.state_normalizer.set_read_only()
state = config.eval_env.reset()
total_reward = 0.0
with torch.no_grad():
while True:
state = config.state_normalizer(state)
action = agent.network(state)['mean'].cpu().detach().numpy()
state, reward, done, info = config.eval_env.step(action)
total_reward += reward
if done:
break
agent.logger.add_scalar('episode_reward', total_reward, agent.total_steps)
log.append([agent.total_steps, total_reward])
if last_step:
agent.close()
break
config.state_normalizer.set_read_only()
agent.step()
torch.save(log, folder / 'log.pth')
if __name__ == "__main__":
ppo_ada()
| 37.74031 | 149 | 0.662524 | 1,291 | 9,737 | 4.769171 | 0.175058 | 0.025987 | 0.03898 | 0.033133 | 0.842618 | 0.826701 | 0.826701 | 0.822803 | 0.822803 | 0.822803 | 0 | 0.018496 | 0.228202 | 9,737 | 257 | 150 | 37.88716 | 0.800798 | 0.005238 | 0 | 0.787037 | 0 | 0 | 0.084065 | 0.034493 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009259 | false | 0 | 0.143519 | 0 | 0.152778 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e687b3602ab2c3c80e1ecc7c1f9a31f268a13b52 | 1,161 | py | Python | day7hangman/stages.py | ready-1/100-Days-Of-Python | 17e7f810c9cca4f1c1678eae432d6c11c8b61593 | [
"MIT"
] | null | null | null | day7hangman/stages.py | ready-1/100-Days-Of-Python | 17e7f810c9cca4f1c1678eae432d6c11c8b61593 | [
"MIT"
] | null | null | null | day7hangman/stages.py | ready-1/100-Days-Of-Python | 17e7f810c9cca4f1c1678eae432d6c11c8b61593 | [
"MIT"
] | null | null | null | stages = ['''
_________
| |
| |
O |
| |
\|/ |
| |
/ \ |
|
=================
''',
'''
_________
| |
| |
O |
| |
\|/ |
| |
/ |
|
=================
''',
'''
_________
| |
| |
O |
| |
\|/ |
| |
|
|
=================
''',
'''
_________
| |
| |
O |
| |
\|/ |
|
|
|
=================
''',
'''
_________
| |
| |
O |
| |
\| |
|
|
|
=================
''',
'''
_________
| |
| |
O |
|
|
|
|
|
=================
''',
'''
_________
| |
| |
|
|
|
|
|
|
=================
'''
] | 13.658824 | 21 | 0.064599 | 7 | 1,161 | 1.714286 | 0.285714 | 0.833333 | 1 | 1 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.707149 | 1,161 | 85 | 22 | 13.658824 | 0.035294 | 0 | 0 | 0.222222 | 0 | 0 | 0.735577 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e6b4426f38c9cbc3564f4f37b2638cc22d681abc | 252 | py | Python | optimizer_utils/__init__.py | dgreving/optimizer_utils | 91576c56fdb88899fd0e0474e8ecd187065b21d1 | [
"MIT"
] | null | null | null | optimizer_utils/__init__.py | dgreving/optimizer_utils | 91576c56fdb88899fd0e0474e8ecd187065b21d1 | [
"MIT"
] | null | null | null | optimizer_utils/__init__.py | dgreving/optimizer_utils | 91576c56fdb88899fd0e0474e8ecd187065b21d1 | [
"MIT"
] | null | null | null | from optimizer_utils.datastructures import parameter
from optimizer_utils.datastructures.parameter_controller import ParameterController
from optimizer_utils.datastructures.dataset import Dataset
from optimizer_utils.datastructures.fitter import Fitter | 63 | 83 | 0.912698 | 28 | 252 | 8.035714 | 0.357143 | 0.231111 | 0.32 | 0.568889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 252 | 4 | 84 | 63 | 0.949367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
e6cc937e562d440de0af85b4d10fa7e229f9dbcd | 4,591 | py | Python | tests/test_import_hook.py | lummax/pyrefgraph | f6631fbe0a6ad00be7a92305a5928bc48d4f5b10 | [
"MIT"
] | 1 | 2020-04-11T15:00:12.000Z | 2020-04-11T15:00:12.000Z | tests/test_import_hook.py | lummax/pyrefgraph | f6631fbe0a6ad00be7a92305a5928bc48d4f5b10 | [
"MIT"
] | 22 | 2019-02-11T04:37:36.000Z | 2020-04-15T04:16:37.000Z | tests/test_import_hook.py | lummax/pyrefgraph | f6631fbe0a6ad00be7a92305a5928bc48d4f5b10 | [
"MIT"
] | null | null | null | # coding=utf-8
import pytest
from reference_graph.util import builtins, importlib
from reference_graph.analysis import import_hook, objects, graph
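# Test-only hooks: each skips ImportHook.setup() via super(import_hook.ImportHook, ...)
# so that only one import mechanism (__import__ or importlib.import_module) is patched.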
class ImportImportHook(import_hook.ImportHook):
def setup(self):
super(import_hook.ImportHook, self).setup()
self._cleanup_callbacks.extend(self._monkey_patch_import())
class ImportLibImportHook(import_hook.ImportHook):
def setup(self):
super(import_hook.ImportHook, self).setup()
self._cleanup_callbacks.extend(self._monkey_patch_importlib())
def test_setup_cleanup():
def get_values():
return (builtins.__import__, getattr(importlib, "import_module", None))
old = get_values()
hook = import_hook.ImportHook(None, None)
hook.setup()
hook.cleanup()
assert get_values() == old
with import_hook.ImportHook(None, None):
pass
assert get_values() == old
def test_basic_import():
om = objects.ObjectManager()
with ImportImportHook(graph.Graph(), om):
import this
assert om.lookup_module("this") == objects.Module.from_imported(this)
def test_nested_import():
om = objects.ObjectManager()
with ImportImportHook(graph.Graph(), om):
import email.mime.message
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
def test_from_import():
om = objects.ObjectManager()
with ImportImportHook(graph.Graph(), om):
from email.mime import message
import email.mime
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
def test_from_import_with_nonmodule():
om = objects.ObjectManager()
with ImportImportHook(graph.Graph(), om):
from email.mime.message import MIMEMessage
import email.mime
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
def test_complex_from_import():
om = objects.ObjectManager()
with ImportImportHook(graph.Graph(), om):
from email.mime import message, image
import email.mime
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
assert om.lookup_module("email.mime.image") == objects.Module.from_imported(
email.mime.image
)
@pytest.mark.skipif(
not hasattr(importlib, "import_module"),
reason="importlib.import_module not available",
)
def test_basic_import_module():
om = objects.ObjectManager()
with ImportLibImportHook(graph.Graph(), om):
this = importlib.import_module("this")
assert om.lookup_module("this") == objects.Module.from_imported(this)
@pytest.mark.skipif(
not hasattr(importlib, "import_module"),
reason="importlib.import_module not available",
)
def test_nested_importlib():
om = objects.ObjectManager()
with ImportLibImportHook(graph.Graph(), om):
_message = importlib.import_module("email.mime.message")
import email.mime.message
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
@pytest.mark.skipif(
not hasattr(importlib, "import_module"),
reason="importlib.import_module not available",
)
def test_relative_importlib():
om = objects.ObjectManager()
with ImportLibImportHook(graph.Graph(), om):
_image = importlib.import_module("..message", "email.mime.image")
import email.mime.message
assert om.lookup_module("email") == objects.Module.from_imported(email)
assert om.lookup_module("email.mime") == objects.Module.from_imported(email.mime)
assert om.lookup_module("email.mime.message") == objects.Module.from_imported(
email.mime.message
)
| 31.445205 | 85 | 0.714006 | 560 | 4,591 | 5.669643 | 0.105357 | 0.104882 | 0.092598 | 0.132283 | 0.818898 | 0.80126 | 0.789291 | 0.789291 | 0.763465 | 0.72189 | 0 | 0.00026 | 0.162927 | 4,591 | 145 | 86 | 31.662069 | 0.825917 | 0.002614 | 0 | 0.59434 | 0 | 0 | 0.094385 | 0.015075 | 0 | 0 | 0 | 0 | 0.216981 | 1 | 0.113208 | false | 0.009434 | 0.641509 | 0.009434 | 0.783019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
e6eb1936f153279d02d457589e7a74cd165811d0 | 30,671 | py | Python | unet_module.py | esteng/guiding-multi-step | 3f0db0ba70b5851cc83878f4ed48cf82342a2ddf | [
"BSD-2-Clause"
] | null | null | null | unet_module.py | esteng/guiding-multi-step | 3f0db0ba70b5851cc83878f4ed48cf82342a2ddf | [
"BSD-2-Clause"
] | null | null | null | unet_module.py | esteng/guiding-multi-step | 3f0db0ba70b5851cc83878f4ed48cf82342a2ddf | [
"BSD-2-Clause"
] | null | null | null | #
# partially ripped from https://github.com/lil-lab/ciff/
from collections import deque
import pdb
import torch
import torch.nn.functional as F
from image_encoder import FinalClassificationLayer
from mlp import MLP
from language import SourceAttention
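# U-Net encoder/decoders that predict the next block position from an image of the
# previous state. BaseUNet is vision-only; the subclasses condition on language by tiling
# a sentence encoding (UNetWithLanguage), add a block-identity head (UNetWithBlocks),
# attend over token states (UNetWithAttention), disable the instance norms (UNetNoNorm),
# or consume precomputed BERT embeddings (UNetForBERT).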
class BaseUNet(torch.nn.Module):
def __init__(self,
in_channels: int,
out_channels: int,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
dropout: float = 0.20,
depth: int = 7,
device: torch.device = "cpu"):
super(BaseUNet, self).__init__()
# placeholders
self.compute_block_dist = False
# device
self.device = device
# data
self.num_blocks = num_blocks
self.depth = depth
# model
pad = int(kernel_size / 2)
self.num_layers = num_layers
self.hc_large = hc_large
self.hc_small = hc_small
self.activation = torch.nn.LeakyReLU()
self.dropout = torch.nn.Dropout2d(dropout)
self.downconv_modules = []
self.upconv_modules = []
self.upconv_results = []
self.downnorms = []
self.upnorms = []
# exception at first layer for shape
first_downconv = torch.nn.Conv2d(in_channels, hc_large, kernel_size, stride=stride, padding=pad)
first_upconv = torch.nn.ConvTranspose2d(hc_large, hc_large, kernel_size, stride=stride, padding=pad)
first_downnorm = torch.nn.InstanceNorm2d(hc_large)
first_upnorm = torch.nn.InstanceNorm2d(hc_large)
self.downconv_modules.append(first_downconv)
self.upconv_modules.append(first_upconv)
self.downnorms.append(first_downnorm)
self.upnorms.append(first_upnorm)
for i in range(num_layers-3):
downconv = torch.nn.Conv2d(hc_large, hc_large, kernel_size, stride=stride, padding=pad)
downnorm = torch.nn.InstanceNorm2d(hc_large)
upconv = torch.nn.ConvTranspose2d(2*hc_large, hc_large, kernel_size, stride=stride, padding = pad)
upnorm = torch.nn.InstanceNorm2d(hc_large)
self.downconv_modules.append(downconv)
self.upconv_modules.append(upconv)
self.downnorms.append(downnorm)
self.upnorms.append(upnorm)
penult_downconv = torch.nn.Conv2d(hc_large, hc_large, kernel_size, stride=stride, padding=pad)
penult_downnorm = torch.nn.InstanceNorm2d(hc_large)
penult_upconv = torch.nn.ConvTranspose2d(2*hc_large, hc_small, kernel_size, stride=stride, padding=pad)
penult_upnorm = torch.nn.InstanceNorm2d(hc_small)
self.downconv_modules.append(penult_downconv)
self.upconv_modules.append(penult_upconv)
self.downnorms.append(penult_downnorm)
self.upnorms.append(penult_upnorm)
final_downconv = torch.nn.Conv2d(hc_large, hc_large, kernel_size, stride=stride, padding=pad)
final_upconv = torch.nn.ConvTranspose2d(hc_large + hc_small, out_channels, kernel_size, stride=stride, padding=pad)
self.downconv_modules.append(final_downconv)
self.upconv_modules.append(final_upconv)
self.downconv_modules = torch.nn.ModuleList(self.downconv_modules)
self.upconv_modules = torch.nn.ModuleList(self.upconv_modules)
self.downnorms = torch.nn.ModuleList(self.downnorms)
self.upnorms = torch.nn.ModuleList(self.upnorms)
self.final_layer = FinalClassificationLayer(int(out_channels/self.depth), out_channels, self.num_blocks + 1, depth = self.depth)
# make cuda compatible
self.downconv_modules = self.downconv_modules.to(self.device)
self.upconv_modules = self.upconv_modules.to(self.device)
self.downnorms = self.downnorms.to(self.device)
self.upnorms = self.upnorms.to(self.device)
self.final_layer = self.final_layer.to(self.device)
self.activation = self.activation.to(self.device)
#self._init_weights()
def _init_weights(self):
for i in range(len(self.upconv_modules)):
torch.nn.init.xavier_uniform_(self.upconv_modules[i].weight)
self.upconv_modules[i].bias.data.fill_(0)
torch.nn.init.xavier_uniform_(self.downconv_modules[i].weight)
self.downconv_modules[i].bias.data.fill_(0)
def forward(self, input_dict):
image_input = input_dict["prev_pos_input"]
# store downconv results in stack
downconv_results = deque()
# start with image input
out = image_input
# get down outputs, going down U
for i in range(self.num_layers):
downconv = self.downconv_modules[i]
out = self.activation(downconv(out))
# last layer has no norm
if i < self.num_layers-1:
downnorm = self.downnorms[i-1]
out = downnorm(out)
downconv_results.append(out)
out = self.dropout(out)
# go back up the U, concatenating residuals back in
for i in range(self.num_layers):
# concat the corresponding side of the U
upconv = self.upconv_modules[i]
if i > 0:
resid_data = downconv_results.pop()
out = torch.cat([resid_data, out], 1)
if i < self.num_layers-1:
desired_size = downconv_results[-1].size()
else:
desired_size = image_input.size()
out = self.activation(upconv(out, output_size = desired_size))
# last layer has no norm
if i < self.num_layers:
upnorm = self.upnorms[i-1]
out = upnorm(out)
out = self.dropout(out)
out = self.final_layer(out)
to_ret = {"next_position": out,
"pred_block_logits": None}
return to_ret
class UNetWithLanguage(BaseUNet):
def __init__(self,
in_channels: int,
out_channels: int,
lang_embedder: torch.nn.Module,
lang_encoder: torch.nn.Module,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
dropout: float = 0.20,
depth: int = 7,
device: torch.device = "cpu"):
super(UNetWithLanguage, self).__init__(in_channels=in_channels,
out_channels=out_channels,
hc_large=hc_large,
hc_small=hc_small,
kernel_size=kernel_size,
stride=stride,
num_layers=num_layers,
num_blocks=num_blocks,
dropout=dropout,
depth=depth,
device=device)
pad = int(kernel_size / 2)
self.lang_embedder = lang_embedder
self.lang_encoder = lang_encoder
self.lang_embedder.set_device(self.device)
self.lang_encoder.set_device(self.device)
self.lang_projections = []
for i in range(self.num_layers):
lang_proj = torch.nn.Linear(self.lang_encoder.output_size, hc_large)
self.lang_projections.append(lang_proj)
self.lang_projections = torch.nn.ModuleList(self.lang_projections)
self.lang_projections = self.lang_projections.to(self.device)
self.upconv_modules = torch.nn.ModuleList()
# need extra dims for concatenating the language features
first_upconv = torch.nn.ConvTranspose2d(2*hc_large, hc_large, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(first_upconv)
for i in range(num_layers-3):
upconv = torch.nn.ConvTranspose2d(3*hc_large, hc_large, kernel_size, stride=stride, padding = pad)
self.upconv_modules.append(upconv)
penult_upconv = torch.nn.ConvTranspose2d(3*hc_large, hc_small, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(penult_upconv)
final_upconv = torch.nn.ConvTranspose2d(2*hc_large + hc_small, out_channels, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(final_upconv)
def forward(self, data_batch):
lang_input = data_batch["command"]
lang_length = data_batch["length"]
# tensorize lengths
lengths = torch.tensor(lang_length).float()
lengths = lengths.to(self.device)
# embed language
lang_embedded = torch.cat([self.lang_embedder(lang_input[i]).unsqueeze(0) for i in range(len(lang_input))],
dim=0)
# encode
lang_output = self.lang_encoder(lang_embedded, lengths)
# get language output as sentence embedding
sent_encoding = lang_output["sentence_encoding"]
image_input = data_batch["prev_pos_input"]
image_input = image_input.to(self.device)
# store downconv results in stack
downconv_results = deque()
lang_results = deque()
downconv_sizes = deque()
# start with image input
out = image_input
# get down outputs, going down U
for i in range(self.num_layers):
downconv = self.downconv_modules[i]
out = self.activation(downconv(out))
# last layer has no norm
if i < self.num_layers-1:
downnorm = self.downnorms[i-1]
out = downnorm(out)
out = self.dropout(out)
# get language projection at that layer
lang_proj = self.lang_projections[i]
lang = lang_proj(sent_encoding)
# expand language for tiling
bsz, __, width, height = out.shape
lang = lang.view((bsz, -1, 1, 1))
lang = lang.repeat((1, 1, width, height))
lang_results.append(lang)
# concat language in
downconv_sizes.append(out.size())
out_with_lang = torch.cat([out, lang], 1)
out_with_lang = self.dropout(out_with_lang)
downconv_results.append(out_with_lang)
if i == self.num_layers-1:
# at the deepest layer, continue with the language-augmented features
out = out_with_lang
# pop off last one
downconv_sizes.pop()
downconv_results.pop()
# go back up the U, concatenating residuals and language
for i in range(self.num_layers):
# concat the corresponding side of the U
upconv = self.upconv_modules[i]
if i > 0:
resid_data = downconv_results.pop()
out = torch.cat([resid_data, out], 1)
if i < self.num_layers-1:
desired_size = downconv_sizes.pop()
else:
desired_size = image_input.size()
out = self.activation(upconv(out, output_size = desired_size))
# last layer has no norm
if i < self.num_layers:
upnorm = self.upnorms[i-1]
out = upnorm(out)
out = self.dropout(out)
out = self.final_layer(out)
to_ret = {"next_position": out,
"pred_block_logits": None}
return to_ret
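# Adds a block-identity head: an MLP over the flattened deepest (language-augmented)
# encoder features predicts a distribution over 21 block classes, alongside the position map.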
class UNetWithBlocks(UNetWithLanguage):
def __init__(self,
in_channels: int,
out_channels: int,
lang_embedder: torch.nn.Module,
lang_encoder: torch.nn.Module,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
mlp_num_layers: int = 3,
dropout: float = 0.20,
resolution: int = None,
depth: int = 7,
device: torch.device = "cpu"):
super(UNetWithBlocks, self).__init__(in_channels=in_channels,
out_channels=out_channels,
lang_embedder=lang_embedder,
lang_encoder=lang_encoder,
hc_large=hc_large,
hc_small=hc_small,
kernel_size=kernel_size,
stride=stride,
num_layers=num_layers,
num_blocks=num_blocks,
dropout=dropout,
depth=depth,
device=device)
self.compute_block_dist = True
self.resolution = resolution
# (elias): automatically infer this size when the num_layers is different
width = int(self.resolution**(1/(num_layers-1)))
self.block_prediction_module = MLP(input_dim = 2*width*width*hc_large,
hidden_dim = 2*hc_large,
output_dim = 21,
num_layers = mlp_num_layers,
dropout = dropout)
def forward(self, data_batch):
lang_input = data_batch["command"]
lang_length = data_batch["length"]
# tensorize lengths
lengths = torch.tensor(lang_length).float()
lengths = lengths.to(self.device)
# embed language
lang_embedded = torch.cat([self.lang_embedder(lang_input[i]).unsqueeze(0) for i in range(len(lang_input))],
dim=0)
# encode
lang_output = self.lang_encoder(lang_embedded, lengths)
# get language output as sentence embedding
sent_encoding = lang_output["sentence_encoding"]
#image_input = data_batch["prev_pos_input"]
image_input = data_batch["prev_pos_input"]
#image_input = image_input.reshape((-1, 1, 64, 64))
#image_input = image_input.repeat((1,2, 1, 1))
image_input = image_input.to(self.device)
# store downconv results in stack
downconv_results = deque()
lang_results = deque()
downconv_sizes = deque()
# start with image input
out = image_input
# get down outputs, going down U
for i in range(self.num_layers):
downconv = self.downconv_modules[i]
out = self.activation(downconv(out))
# last layer has no norm
if i < self.num_layers-1:
downnorm = self.downnorms[i-1]
out = downnorm(out)
out = self.dropout(out)
# get language projection at that layer
lang_proj = self.lang_projections[i]
lang = lang_proj(sent_encoding)
# expand language for tiling
bsz, __, width, height = out.shape
lang = lang.view((bsz, -1, 1, 1))
lang = lang.repeat((1, 1, width, height))
lang_results.append(lang)
# concat language in
downconv_sizes.append(out.size())
out_with_lang = torch.cat([out, lang], 1)
out_with_lang = self.dropout(out_with_lang)
downconv_results.append(out_with_lang)
if i == self.num_layers-1:
# at the deepest layer, continue with the language-augmented features
out = out_with_lang
# predict blocks from deepest downconv
out_for_blocks = out.view((bsz, -1))
pred_block_logits = self.block_prediction_module(out_for_blocks)
# pop off last one
downconv_sizes.pop()
downconv_results.pop()
# go back up the U, concatenating residuals and language
for i in range(self.num_layers):
# concat the corresponding side of the U
upconv = self.upconv_modules[i]
if i > 0:
resid_data = downconv_results.pop()
out = torch.cat([resid_data, out], 1)
if i < self.num_layers-1:
desired_size = downconv_sizes.pop()
else:
desired_size = image_input.size()
out = self.activation(upconv(out, output_size = desired_size))
# last layer has no norm
if i < self.num_layers:
upnorm = self.upnorms[i-1]
out = upnorm(out)
out = self.dropout(out)
out = self.final_layer(out)
to_ret = {"next_position": out,
"pred_block_logits": pred_block_logits}
return to_ret
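# Replaces the tiled sentence embedding with per-layer source attention over the token
# hidden states; optionally reconstructs the input through a second classification head.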
class UNetWithAttention(BaseUNet):
def __init__(self,
in_channels: int,
out_channels: int,
lang_embedder: torch.nn.Module,
lang_encoder: torch.nn.Module,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
dropout: float = 0.20,
depth: int = 7,
device: torch.device = "cpu",
do_reconstruction: bool = False):
super(UNetWithAttention, self).__init__(in_channels=in_channels,
out_channels=out_channels,
hc_large=hc_large,
hc_small=hc_small,
kernel_size=kernel_size,
stride=stride,
num_layers=num_layers,
num_blocks=num_blocks,
dropout=dropout,
depth=depth,
device=device)
pad = int(kernel_size / 2)
self.lang_embedder = lang_embedder
self.lang_encoder = lang_encoder
self.lang_embedder.set_device(self.device)
self.lang_encoder.set_device(self.device)
self.do_reconstruction = do_reconstruction
self.lang_projections = []
self.lang_attentions = []
for i in range(self.num_layers):
lang_proj = torch.nn.Linear(self.lang_encoder.output_size, hc_large)
self.lang_projections.append(lang_proj)
src_attn_module = SourceAttention(hc_large, hc_large, hc_large)
self.lang_attentions.append(src_attn_module)
self.lang_projections = torch.nn.ModuleList(self.lang_projections)
self.lang_projections = self.lang_projections.to(self.device)
self.lang_attentions = torch.nn.ModuleList(self.lang_attentions)
self.lang_attentions = self.lang_attentions.to(self.device)
self.upconv_modules = torch.nn.ModuleList()
# need extra dims for concatenating the language features
first_upconv = torch.nn.ConvTranspose2d(2*hc_large, hc_large, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(first_upconv)
for i in range(num_layers-3):
upconv = torch.nn.ConvTranspose2d(3*hc_large, hc_large, kernel_size, stride=stride, padding = pad)
self.upconv_modules.append(upconv)
penult_upconv = torch.nn.ConvTranspose2d(3*hc_large, hc_small, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(penult_upconv)
final_upconv = torch.nn.ConvTranspose2d(2*hc_large + hc_small, out_channels, kernel_size, stride=stride, padding=pad)
self.upconv_modules.append(final_upconv)
if self.do_reconstruction:
self.recon_layer = FinalClassificationLayer(int(out_channels/self.depth), out_channels, 8, depth = self.depth)
def forward(self, data_batch):
lang_input = data_batch["command"]
lang_length = data_batch["length"]
# tensorize lengths
lengths = torch.tensor(lang_length).float()
lengths = lengths.to(self.device)
# embed language
lang_embedded = torch.cat([self.lang_embedder(lang_input[i]).unsqueeze(0) for i in range(len(lang_input))],
dim=0)
# encode
lang_output = self.lang_encoder(lang_embedded, lengths)
# get language output as a sequence of hidden states
lang_states = lang_output["output"]
image_input = data_batch["prev_pos_input"]
image_input = image_input.to(self.device)
# store downconv results in stack
downconv_results = deque()
lang_results = deque()
downconv_sizes = deque()
# start with image input
out = image_input
# get down outputs, going down U
for i in range(self.num_layers):
downconv = self.downconv_modules[i]
out = self.activation(downconv(out))
# last layer has no norm
if i < self.num_layers-1:
downnorm = self.downnorms[i-1]
out = downnorm(out)
out = self.dropout(out)
downconv_sizes.append(out.size())
# get language projection at that layer
lang_proj = self.lang_projections[i]
lang = lang_proj(lang_states)
# get attention layer
lang_attn = self.lang_attentions[i]
# get weighted language input
lang_by_image = lang_attn(out, lang, lang)
# concat weighted language in
out_with_lang = torch.cat([out, lang_by_image], 1)
out_with_lang = self.dropout(out_with_lang)
downconv_results.append(out_with_lang)
if i == self.num_layers-1:
# at the deepest layer, continue with the language-augmented features
out = out_with_lang
# pop off last one
downconv_sizes.pop()
downconv_results.pop()
# go back up the U, concatenating residuals and language
for i in range(self.num_layers):
# concat the corresponding side of the U
upconv = self.upconv_modules[i]
if i > 0:
resid_data = downconv_results.pop()
out = torch.cat([resid_data, out], 1)
if i < self.num_layers-1:
desired_size = downconv_sizes.pop()
else:
desired_size = image_input.size()
out = self.activation(upconv(out, output_size = desired_size))
# last layer has no norm
if i < self.num_layers:
upnorm = self.upnorms[i-1]
out = upnorm(out)
out = self.dropout(out)
pre_final = out
out = self.final_layer(pre_final)
if self.do_reconstruction:
recon_out = self.recon_layer(pre_final)
else:
recon_out = None
to_ret = {"next_position": out,
"reconstruction": recon_out,
"pred_block_logits": None}
return to_ret
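# Identity stand-in used by UNetNoNorm below to switch off instance normalisation
# without changing the parent class's forward pass.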
class IDLayer(torch.nn.Module):
def __init__(self):
super(IDLayer, self).__init__()
def forward(self, x):
return x
class UNetNoNorm(UNetWithLanguage):
def __init__(self,
in_channels: int,
out_channels: int,
lang_embedder: torch.nn.Module,
lang_encoder: torch.nn.Module,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
dropout: float = 0.20,
depth: int = 7,
device: torch.device = "cpu"):
super(UNetNoNorm, self).__init__(in_channels=in_channels,
out_channels=out_channels,
lang_embedder=lang_embedder,
lang_encoder=lang_encoder,
hc_large=hc_large,
hc_small=hc_small,
kernel_size=kernel_size,
stride=stride,
num_layers=num_layers,
num_blocks=num_blocks,
dropout=dropout,
depth=depth,
device=device)
# override with id layers
self.upnorms = torch.nn.ModuleList([IDLayer() for i in range(len(self.upnorms))])
self.downnorms = torch.nn.ModuleList([IDLayer() for i in range(len(self.downnorms))])
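# Attention U-Net variant whose language input is already encoded by BERT (768-d token
# embeddings), so forward() skips the recurrent language encoder.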
class UNetForBERT(UNetWithAttention):
def __init__(self,
in_channels: int,
out_channels: int,
lang_embedder: torch.nn.Module,
lang_encoder: torch.nn.Module,
hc_large: int,
hc_small: int,
kernel_size: int = 5,
stride: int = 2,
num_layers: int = 5,
num_blocks: int = 20,
dropout: float = 0.20,
depth: int = 7,
device: torch.device = "cpu"):
super(UNetForBERT, self).__init__(in_channels=in_channels,
out_channels=out_channels,
lang_embedder=lang_embedder,
lang_encoder=lang_encoder,
hc_large=hc_large,
hc_small=hc_small,
kernel_size=kernel_size,
stride=stride,
num_layers=num_layers,
num_blocks=num_blocks,
dropout=dropout,
depth=depth,
device=device)
self.lang_encoder.output_size = 768
# reset projections
for i in range(self.num_layers):
lang_proj = torch.nn.Linear(self.lang_encoder.output_size, hc_large)
self.lang_projections[i] = lang_proj
self.lang_projections = self.lang_projections.to(self.device)
def forward(self, data_batch):
lang_input = data_batch["command"]
lang_length = data_batch["length"]
# tensorize lengths
lengths = torch.tensor(lang_length).float()
lengths = lengths.to(self.device)
# embed language
lang_embedded = torch.cat([self.lang_embedder(lang_input[i]).unsqueeze(0) for i in range(len(lang_input))],
dim=0)
# already encoded with BERT!
lang_output = {"output": lang_embedded}
# get language output as a sequence of hidden states
lang_states = lang_output["output"]
image_input = data_batch["prev_pos_input"]
image_input = image_input.to(self.device)
# store downconv results in stack
downconv_results = deque()
lang_results = deque()
downconv_sizes = deque()
# start with image input
out = image_input
# get down outputs, going down U
for i in range(self.num_layers):
downconv = self.downconv_modules[i]
out = self.activation(downconv(out))
# last layer has no norm
if i < self.num_layers-1:
downnorm = self.downnorms[i-1]
out = downnorm(out)
out = self.dropout(out)
downconv_sizes.append(out.size())
# get language projection at that layer
lang_proj = self.lang_projections[i]
lang = lang_proj(lang_states)
# get attention layer
lang_attn = self.lang_attentions[i]
# get weighted language input
lang_by_image = lang_attn(out, lang, lang)
# concat weighted language in
out_with_lang = torch.cat([out, lang_by_image], 1)
out_with_lang = self.dropout(out_with_lang)
downconv_results.append(out_with_lang)
if i == self.num_layers-1:
# at the deepest layer, continue with the language-augmented features
out = out_with_lang
# pop off last one
downconv_sizes.pop()
downconv_results.pop()
# go back up the U, concatenating residuals and language
for i in range(self.num_layers):
# concat the corresponding side of the U
upconv = self.upconv_modules[i]
if i > 0:
resid_data = downconv_results.pop()
out = torch.cat([resid_data, out], 1)
if i < self.num_layers-1:
desired_size = downconv_sizes.pop()
else:
desired_size = image_input.size()
out = self.activation(upconv(out, output_size = desired_size))
# last layer has no norm
if i < self.num_layers:
upnorm = self.upnorms[i-1]
out = upnorm(out)
out = self.dropout(out)
out = self.final_layer(out)
to_ret = {"next_position": out,
"pred_block_logits": None}
return to_ret
| 40.677719 | 137 | 0.536174 | 3,324 | 30,671 | 4.714501 | 0.069194 | 0.03331 | 0.027375 | 0.016144 | 0.849531 | 0.820752 | 0.784953 | 0.777104 | 0.772254 | 0.751196 | 0 | 0.009269 | 0.384435 | 30,671 | 753 | 138 | 40.73174 | 0.820763 | 0.086759 | 0 | 0.787431 | 0 | 0 | 0.012753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025878 | false | 0 | 0.012939 | 0.001848 | 0.062847 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fc166dae3ef94cb1b8af5c86fd77057318269806 | 31,062 | py | Python | chengyubert/models/modeling_2stage.py | VisualJoyce/ChengyuBERT | 605db3a4b3241dd4d02baa41a68bf23b5b00b36d | [
"MIT"
] | 8 | 2020-12-11T13:06:16.000Z | 2022-03-01T13:47:51.000Z | chengyubert/models/modeling_2stage.py | VisualJoyce/ChengyuBERT | 605db3a4b3241dd4d02baa41a68bf23b5b00b36d | [
"MIT"
] | 18 | 2020-12-31T07:32:55.000Z | 2022-02-07T08:33:30.000Z | chengyubert/models/modeling_2stage.py | VisualJoyce/ChengyuBERT | 605db3a4b3241dd4d02baa41a68bf23b5b00b36d | [
"MIT"
] | 3 | 2021-03-25T01:08:56.000Z | 2022-03-22T09:05:57.000Z | from __future__ import absolute_import, division, print_function
import torch
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel
from chengyubert.models import register_model
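# Two-stage idiom-cloze models. Stage-1 ("pretrain") heads are trained to pick the
# ground-truth idiom out of the full idiom vocabulary; stage-2 heads are additionally
# trained on the per-example candidate options. '-mask' variants use only the blank-token
# state, '-cls' variants concatenate the [CLS] state, 'layernorm' variants normalise the
# idiom embedding table, and '-window' variants max-pool context-option logits in a
# window around the blank.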
@register_model('chengyubert-2stage-stage1')
class ChengyuBertTwoStagePretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 4, config.hidden_size)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
over_states = self.over_linear(torch.cat([blank_states,
cls_states,
blank_states * cls_states,
blank_states - cls_states], dim=-1))
over_logits = self.vocab(over_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-2stage-stage1-mask')
class ChengyuBertTwoStageMaskPretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
over_logits = self.vocab(blank_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-2stage-stage1-cls')
class ChengyuBertTwoStageCLSPretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 2, config.hidden_size)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
over_states = self.over_linear(torch.cat([blank_states,
cls_states], dim=-1))
over_logits = self.vocab(over_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-layernorm-2stage-stage1')
class ChengyuBertLayerNormTwoStagePretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 4, config.hidden_size)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.LayerNorm(self.idiom_embedding(self.enlarged_candidates))
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
over_states = self.over_linear(torch.cat([blank_states,
cls_states,
blank_states * cls_states,
blank_states - cls_states], dim=-1))
over_logits = self.vocab(over_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-layernorm-2stage-stage1-mask')
class ChengyuBertLayerNormTwoStageMaskPretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.LayerNorm(self.idiom_embedding(self.enlarged_candidates))
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
over_logits = self.vocab(blank_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-layernorm-2stage-stage1-cls')
class ChengyuBertLayerNormTwoStageCLSPretrain(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 2, config.hidden_size)
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.LayerNorm(self.idiom_embedding(self.enlarged_candidates))
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
over_states = self.over_linear(torch.cat([blank_states,
cls_states], dim=-1))
over_logits = self.vocab(over_states)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return cond_logits, over_logits
@register_model('chengyubert-2stage-stage2')
class ChengyuBertTwoStageFinetune(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 4, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
if option_ids is None and options_embeds is None:
raise ValueError('Either option_ids or options_embeds should be given.')
elif options_embeds is not None:
encoded_options = options_embeds
else:
encoded_options = self.idiom_embedding(option_ids)
over_states = self.over_linear(torch.cat([blank_states,
cls_states,
blank_states * cls_states,
blank_states - cls_states], dim=-1))
over_logits = self.vocab(over_states)
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
# encoded_context = encoded_layer
# mo_logits = torch.einsum('bld,bnd->bln', [encoded_context, encoded_options]) # (b, 256, 10)
# logits, _ = torch.max(mo_logits, dim=1)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(cond_logits, targets)
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return loss, over_loss
else:
return cond_logits, over_logits, cond_logits
@register_model('chengyubert-2stage-stage2-mask')
class ChengyuBertTwoStageMaskFinetune(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
if option_ids is None and options_embeds is None:
raise ValueError('Either option_ids or options_embeds should be given.')
elif options_embeds is not None:
encoded_options = options_embeds
else:
encoded_options = self.idiom_embedding(option_ids)
over_logits = self.vocab(blank_states)
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
# encoded_context = encoded_layer
# mo_logits = torch.einsum('bld,bnd->bln', [encoded_context, encoded_options]) # (b, 256, 10)
# logits, _ = torch.max(mo_logits, dim=1)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(cond_logits, targets)
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return loss, over_loss
else:
return cond_logits, over_logits, cond_logits
@register_model('chengyubert-2stage-stage2-cls')
class ChengyuBertTwoStageCLSFinetune(BertPreTrainedModel):
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 2, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.init_weights()
def vocab(self, blank_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
return torch.einsum('bd,nd->bn', [blank_states, idiom_embeddings]) # (b, len_idiom_vocab)
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
if option_ids is None and options_embeds is None:
raise ValueError('Either option_ids or options_embeds should be given.')
elif options_embeds is not None:
encoded_options = options_embeds
else:
encoded_options = self.idiom_embedding(option_ids)
over_states = self.over_linear(torch.cat([blank_states,
cls_states], dim=-1))
over_logits = self.vocab(over_states)
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
# encoded_context = encoded_layer
# mo_logits = torch.einsum('bld,bnd->bln', [encoded_context, encoded_options]) # (b, 256, 10)
# logits, _ = torch.max(mo_logits, dim=1)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(cond_logits, targets)
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return loss, over_loss
else:
return cond_logits, over_logits, cond_logits
@register_model('chengyubert-2stage-stage2-window')
class ChengyuBertTwoStageWindow(BertPreTrainedModel):
r"""
**labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
Labels for computing the sequence classification/regression loss.
Indices should be in ``[0, ..., config.num_labels - 1]``.
If ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),
If ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).
Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
**loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
Classification (or regression if config.num_labels==1) loss.
**logits**: ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)``
Classification (or regression if config.num_labels==1) scores (before SoftMax).
**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
of shape ``(batch_size, sequence_length, hidden_size)``:
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
**attentions**: (`optional`, returned when ``config.output_attentions=True``)
list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples::
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
"""
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.window_size = int(self.model_name.split('-')[-1])
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.over_linear = nn.Linear(config.hidden_size * 4, config.hidden_size)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.init_weights()
def vocab(self, over_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
c_mo_logits = torch.einsum('bd,nd->bn', [over_states, idiom_embeddings]) # (b, len_idiom_vocab)
return c_mo_logits
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
batch_size, length = input_ids.size()
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
cls_states = encoded_layer[:, 0]
if option_ids is None and options_embeds is None:
raise ValueError('Either option_ids or options_embeds should be given.')
elif options_embeds is not None:
encoded_options = options_embeds
else:
encoded_options = self.idiom_embedding(option_ids)
over_states = self.over_linear(torch.cat([blank_states,
cls_states,
blank_states * cls_states,
blank_states - cls_states], dim=-1))
over_logits = self.vocab(over_states)
encoded_context = encoded_layer
mo_logits = torch.einsum('bld,bnd->bln', [encoded_context, encoded_options]) # (b, 256, 10)
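# Pool the per-token option logits: max over the whole sequence when the window covers it,
# the blank position alone when window_size == 0, otherwise a max over window_size tokens
# on either side of the blank.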
if self.window_size > length:
logits, _ = torch.max(mo_logits, dim=1)
elif self.window_size == 0:
new_logits = []
for i, p in enumerate(positions):
new_logits.append(mo_logits[i, p])
logits = torch.stack(new_logits, dim=0)
else:
window_size = self.window_size
new_logits = []
for i, p in enumerate(positions):
if p >= window_size and p + window_size >= length:
new_logits.append(torch.max(mo_logits[i, p - window_size:], dim=0)[0])
elif p >= window_size and p + window_size < length:
new_logits.append(torch.max(mo_logits[i, (p - window_size): (p + window_size) + 1], dim=0)[0])
elif p < window_size:
new_logits.append(torch.max(mo_logits[i, : (p + window_size) + 1], dim=0)[0])
logits = torch.stack(new_logits, dim=0)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits, targets)
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return loss, over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return logits, over_logits, cond_logits
@register_model('chengyubert-2stage-stage2-mask-window')
class ChengyuBertTwoStageMaskWindow(BertPreTrainedModel):
r"""
**labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
Labels for computing the sequence classification/regression loss.
Indices should be in ``[0, ..., config.num_labels - 1]``.
If ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),
If ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).
Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
**loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
Classification (or regression if config.num_labels==1) loss.
**logits**: ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)``
Classification (or regression if config.num_labels==1) scores (before SoftMax).
**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
of shape ``(batch_size, sequence_length, hidden_size)``:
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
**attentions**: (`optional`, returned when ``config.output_attentions=True``)
list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples::
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
"""
def __init__(self, config, opts):
super().__init__(config)
self.model_name = opts.model
self.window_size = int(self.model_name.split('-')[-1])
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.register_buffer('enlarged_candidates', torch.arange(opts.len_idiom_vocab))
self.idiom_embedding = nn.Embedding(opts.len_idiom_vocab, config.hidden_size)
self.init_weights()
def vocab(self, over_states):
idiom_embeddings = self.idiom_embedding(self.enlarged_candidates)
c_mo_logits = torch.einsum('bd,nd->bn', [over_states, idiom_embeddings]) # (b, len_idiom_vocab)
return c_mo_logits
def forward(self, input_ids, token_type_ids, attention_mask, positions, option_ids,
inputs_embeds=None, options_embeds=None, compute_loss=False, targets=None):
batch_size, length = input_ids.size()
encoded_outputs = self.bert(input_ids,
token_type_ids=token_type_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds)
encoded_layer = encoded_outputs[0]
blank_states = encoded_layer[[i for i in range(len(positions))], positions] # [batch, hidden_state]
if option_ids is None and options_embeds is None:
raise ValueError('Either option_ids or options_embeds should be given.')
elif options_embeds is not None:
encoded_options = options_embeds
else:
encoded_options = self.idiom_embedding(option_ids)
over_logits = self.vocab(blank_states)
encoded_context = encoded_layer
mo_logits = torch.einsum('bld,bnd->bln', [encoded_context, encoded_options]) # (b, 256, 10)
if self.window_size > length:
logits, _ = torch.max(mo_logits, dim=1)
elif self.window_size == 0:
new_logits = []
for i, p in enumerate(positions):
new_logits.append(mo_logits[i, p])
logits = torch.stack(new_logits, dim=0)
else:
window_size = self.window_size
new_logits = []
for i, p in enumerate(positions):
if p >= window_size and p + window_size >= length:
new_logits.append(torch.max(mo_logits[i, p - window_size:], dim=0)[0])
elif p >= window_size and p + window_size < length:
new_logits.append(torch.max(mo_logits[i, (p - window_size): (p + window_size) + 1], dim=0)[0])
elif p < window_size:
new_logits.append(torch.max(mo_logits[i, : (p + window_size) + 1], dim=0)[0])
logits = torch.stack(new_logits, dim=0)
if compute_loss:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits, targets)
target = torch.gather(option_ids, dim=1, index=targets.unsqueeze(1))
over_loss = loss_fct(over_logits, target.squeeze(1))
return loss, over_loss
else:
cond_logits = torch.gather(over_logits, dim=1, index=option_ids)
return logits, over_logits, cond_logits
| 49.383148 | 134 | 0.637499 | 3,692 | 31,062 | 5.090195 | 0.054442 | 0.028096 | 0.021072 | 0.02634 | 0.96818 | 0.966264 | 0.965945 | 0.965945 | 0.965945 | 0.965945 | 0 | 0.010253 | 0.265244 | 31,062 | 628 | 135 | 49.461783 | 0.813171 | 0.152952 | 0 | 0.936681 | 0 | 0 | 0.036269 | 0.013472 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072052 | false | 0 | 0.010917 | 0 | 0.179039 | 0.002183 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fc2830a88e107375e77040ac05178c12a83d8948 | 2,950 | py | Python | raw/day11/part1.py | aesdeef/advent-of-code-2021 | 4561bcf12ac03d360f5b28c48ef80134f97613b9 | [
"MIT"
] | 2 | 2021-12-03T06:18:27.000Z | 2021-12-06T11:28:33.000Z | raw/day11/part1.py | aesdeef/advent-of-code-2021 | 4561bcf12ac03d360f5b28c48ef80134f97613b9 | [
"MIT"
] | null | null | null | raw/day11/part1.py | aesdeef/advent-of-code-2021 | 4561bcf12ac03d360f5b28c48ef80134f97613b9 | [
"MIT"
] | null | null | null | import re
from collections import Counter
from itertools import chain, count
INPUT_FILE = "../../input/11.txt"
# INPUT_FILE = "test.txt"
STEPS = 100
with open(INPUT_FILE) as f:
energy_levels = [[int(x) for x in line.strip()] for line in f]
flashes = 0
# part 1: simulate STEPS steps, counting every flash
for _ in range(STEPS):
energy_levels = [[x + 1 for x in line] for line in energy_levels]
flashed = [
[False for l in range(len(energy_levels[0]))] for j in range(len(energy_levels))
]
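# Keep sweeping the grid until a full pass triggers no new flash; each flash
# adds one unit of energy to every one of its eight neighbours.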
while True:
old_flashed = [row[:] for row in flashed]  # snapshot of flash state before this sweep
for (y, row) in enumerate(energy_levels):
for (x, cell) in enumerate(row):
if energy_levels[y][x] >= 10 and not flashed[y][x]:
flashed[y][x] = True
flashes += 1
for a, b in {
(y - 1, x - 1),
(y - 1, x),
(y - 1, x + 1),
(y, x - 1),
(y, x + 1),
(y + 1, x - 1),
(y + 1, x),
(y + 1, x + 1),
}:
if 0 <= a < len(energy_levels) and 0 <= b < len(
energy_levels[0]
):
energy_levels[a][b] += 1
if old_flashed == flashed:
break
# flashes += sum(x > 10 for x in chain(*energy_levels))
energy_levels = [[x if x < 10 else 0 for x in line] for line in energy_levels]
print(flashes)
with open(INPUT_FILE) as f:
energy_levels = [[int(x) for x in line.strip()] for line in f]
# part 2: find the first step on which every octopus flashes (all energy levels reset to 0)
for steps in count(1):
energy_levels = [[x + 1 for x in line] for line in energy_levels]
flashed = [
[False for l in range(len(energy_levels[0]))] for j in range(len(energy_levels))
]
while True:
old_flashed = [row[:] for row in flashed]
for (y, row) in enumerate(energy_levels):
for (x, cell) in enumerate(row):
if energy_levels[y][x] >= 10 and not flashed[y][x]:
flashed[y][x] = True
flashes += 1
for a, b in {
(y - 1, x - 1),
(y - 1, x),
(y - 1, x + 1),
(y, x - 1),
(y, x + 1),
(y + 1, x - 1),
(y + 1, x),
(y + 1, x + 1),
}:
if 0 <= a < len(energy_levels) and 0 <= b < len(
energy_levels[0]
):
energy_levels[a][b] += 1
if old_flashed == flashed:
break
energy_levels = [[x if x < 10 else 0 for x in line] for line in energy_levels]
if sum(sum(line) for line in energy_levels) == 0:
print(steps)
break
| 31.382979 | 88 | 0.424068 | 380 | 2,950 | 3.2 | 0.144737 | 0.256579 | 0.029605 | 0.026316 | 0.772204 | 0.772204 | 0.751645 | 0.751645 | 0.751645 | 0.751645 | 0 | 0.036432 | 0.460339 | 2,950 | 93 | 89 | 31.72043 | 0.727387 | 0.029492 | 0 | 0.763889 | 0 | 0 | 0.006298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0.027778 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fc4a573e0c5e58750eb2d881e9d5122e6aa541b9 | 12,381 | py | Python | tests/pipeline/ai/test_object_detect.py | vickywane/ambianic-edge | 45505eb42f27690646535206a1fb92624b9264c7 | [
"Apache-2.0"
] | 95 | 2019-12-12T02:20:40.000Z | 2022-03-30T18:23:52.000Z | tests/pipeline/ai/test_object_detect.py | vickywane/ambianic-edge | 45505eb42f27690646535206a1fb92624b9264c7 | [
"Apache-2.0"
] | 313 | 2019-11-04T21:31:26.000Z | 2022-01-01T11:00:38.000Z | tests/pipeline/ai/test_object_detect.py | githwd/ambianic-edge | 06ea327bed8c7e348210c3ddfb1c4ad6d13fa8fb | [
"Apache-2.0"
] | 49 | 2020-02-28T22:09:36.000Z | 2022-03-23T03:26:33.000Z | """Test object detection pipe element."""
import os
from ambianic.pipeline import PipeElement
from ambianic.pipeline.ai.object_detect import ObjectDetector
from PIL import Image
def _object_detect_config():
_dir = os.path.dirname(os.path.abspath(__file__))
_good_tflite_model = os.path.join(
_dir, "mobilenet_ssd_v2_coco_quant_postprocess.tflite"
)
_good_edgetpu_model = os.path.join(
_dir, "mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite"
)
_good_labels = os.path.join(_dir, "coco_labels.txt")
config = {
"model": {
"tflite": _good_tflite_model,
"edgetpu": _good_edgetpu_model,
},
"labels": _good_labels,
"top_k": 3,
"confidence_threshold": 0.8,
}
return config
def _get_image(file_name=None):
assert file_name
_dir = os.path.dirname(os.path.abspath(__file__))
image_file = os.path.join(_dir, file_name)
img = Image.open(image_file)
return img
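# Minimal sink element: records each sample it receives via a callback so the
# tests can inspect the detector's output.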
class _OutPipeElement(PipeElement):
def __init__(self, sample_callback=None):
super().__init__()
assert sample_callback
self._sample_callback = sample_callback
def receive_next_sample(self, **sample):
self._sample_callback(**sample)
def test_model_inputs():
"""Verify against known model inputs."""
config = _object_detect_config()
object_detector = ObjectDetector(**config)
tfe = object_detector._tfengine
samples = tfe.input_details[0]["shape"][0]
assert samples == 1
height = tfe.input_details[0]["shape"][1]
assert height == 300
width = tfe.input_details[0]["shape"][2]
assert width == 300
colors = tfe.input_details[0]["shape"][3]
assert colors == 3
def test_model_outputs():
"""Verify against known model outputs."""
config = _object_detect_config()
object_detector = ObjectDetector(**config)
tfe = object_detector._tfengine
assert tfe.output_details[0]["shape"][0] == 1
scores = tfe.output_details[0]["shape"][1]
assert scores == 20
assert tfe.output_details[1]["shape"][0] == 1
boxes = tfe.output_details[1]["shape"][1]
assert boxes == 20
assert tfe.output_details[2]["shape"][0] == 1
labels = tfe.output_details[2]["shape"][1]
assert labels == 20
num = tfe.output_details[3]["shape"][0]
assert num == 1
def test_background_image():
"""Expect to not detect anything interesting in a background image."""
config = _object_detect_config()
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="background.jpg")
object_detector.receive_next_sample(image=img)
assert not result
def test_one_person():
"""Expect to detect one person."""
config = _object_detect_config()
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 1
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.9
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_one_person_thermal():
"""Expect to detect one person."""
config = _object_detect_config()
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person_thermal_bw.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 1
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.8
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_no_sample():
"""Expect element to pass empty sample to next element."""
config = _object_detect_config()
result = "Something"
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = image is None and inference_result is None
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
object_detector.receive_next_sample()
assert result is True
def test_bad_sample_good_sample():
"""One bad sample should not prevent good samples from being processed."""
config = _object_detect_config()
result = "nothing passed to me"
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
# bad sample
object_detector.receive_next_sample(image=None)
assert result == "nothing passed to me"
# good sample
img = _get_image(file_name="person.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 1
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.9
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_one_person_no_face():
"""Expect to detect one person."""
config = _object_detect_config()
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-no-face.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 1
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.9
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_one_label_filter():
"""Expect to detect one person and no other objects."""
config = _object_detect_config()
confidence_threshold = 0.7
config["confidence_threshold"] = confidence_threshold
config["label_filter"] = ["person"]
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-couch.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 1
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > confidence_threshold
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_two_labels_filter():
"""Expect to detect one person and one couch."""
config = _object_detect_config()
config["confidence_threshold"] = 0.6
config["label_filter"] = ["person", "couch"]
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-couch.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 2
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.7
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
category = result[1]["label"]
confidence = result[1]["confidence"]
(x0, y0) = result[1]["box"]["xmin"], result[1]["box"]["ymin"]
(x1, y1) = result[1]["box"]["xmax"], result[1]["box"]["ymax"]
assert category == "couch"
assert confidence > 0.6
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_no_labels_filter():
"""Expect to detect all labeled objects - one person and one couch."""
config = _object_detect_config()
config["confidence_threshold"] = 0.6
# No label_filter set, which is the same as None
# config['label_filter'] = None
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-couch.jpg")
object_detector.receive_next_sample(image=img)
assert result
assert len(result) == 2
category = result[0]["label"]
confidence = result[0]["confidence"]
(x0, y0) = result[0]["box"]["xmin"], result[0]["box"]["ymin"]
(x1, y1) = result[0]["box"]["xmax"], result[0]["box"]["ymax"]
assert category == "person"
assert confidence > 0.7
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
category = result[1]["label"]
confidence = result[1]["confidence"]
(x0, y0) = result[1]["box"]["xmin"], result[1]["box"]["ymin"]
(x1, y1) = result[1]["box"]["xmax"], result[1]["box"]["ymax"]
assert category == "couch"
assert confidence > 0.6
assert x0 > 0 and x0 < x1
assert y0 > 0 and y0 < y1
def test_bad_label_filter():
"""Expect to detect nothing because the label is not in the training
label set."""
config = _object_detect_config()
config["confidence_threshold"] = 0.6
config["label_filter"] = ["SomeR@ndomJunk"]
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-couch.jpg")
object_detector.receive_next_sample(image=img)
assert not result
def test_one_label_not_in_picture():
"""Expect to detect nothing because there is no object with the given
label in the picture."""
config = _object_detect_config()
config["confidence_threshold"] = 0.6
config["label_filter"] = ["car"]
result = None
def sample_callback(image=None, inference_result=None, **kwargs):
nonlocal result
result = inference_result
object_detector = ObjectDetector(**config)
output = _OutPipeElement(sample_callback=sample_callback)
object_detector.connect_to_next_element(output)
img = _get_image(file_name="person-couch.jpg")
object_detector.receive_next_sample(image=img)
assert not result
| 31.909794 | 78 | 0.672967 | 1,588 | 12,381 | 5.013854 | 0.095718 | 0.036925 | 0.035167 | 0.039186 | 0.800804 | 0.740392 | 0.73587 | 0.729339 | 0.72105 | 0.72105 | 0 | 0.024162 | 0.197722 | 12,381 | 387 | 79 | 31.992248 | 0.777409 | 0.065019 | 0 | 0.705263 | 0 | 0 | 0.090656 | 0.010527 | 0 | 0 | 0 | 0 | 0.238596 | 1 | 0.098246 | false | 0.007018 | 0.014035 | 0 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fc559cf9facce4617d3b3a79cd41cb71e300b979 | 6,103 | py | Python | models/visionprescription_tests.py | elementechemlyn/CareConnectBuilder | c004fa94c1af64d636ee25de8f13e34fe723b5f3 | [
"MIT"
] | 1 | 2021-12-24T11:14:38.000Z | 2021-12-24T11:14:38.000Z | models/visionprescription_tests.py | elementechemlyn/CareConnectBuilder | c004fa94c1af64d636ee25de8f13e34fe723b5f3 | [
"MIT"
] | null | null | null | models/visionprescription_tests.py | elementechemlyn/CareConnectBuilder | c004fa94c1af64d636ee25de8f13e34fe723b5f3 | [
"MIT"
] | 1 | 2020-09-16T14:47:26.000Z | 2020-09-16T14:47:26.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Generated from FHIR 3.0.0.11832 on 2017-03-22.
# 2017, SMART Health IT.
import os
import io
import unittest
import json
from . import visionprescription
from .fhirdate import FHIRDate
class VisionPrescriptionTests(unittest.TestCase):
def instantiate_from(self, filename):
datadir = os.environ.get('FHIR_UNITTEST_DATADIR') or ''
with io.open(os.path.join(datadir, filename), 'r', encoding='utf-8') as handle:
js = json.load(handle)
self.assertEqual("VisionPrescription", js["resourceType"])
return visionprescription.VisionPrescription(js)
def testVisionPrescription1(self):
inst = self.instantiate_from("visionprescription-example-1.json")
self.assertIsNotNone(inst, "Must have instantiated a VisionPrescription instance")
self.implVisionPrescription1(inst)
js = inst.as_json()
self.assertEqual("VisionPrescription", js["resourceType"])
inst2 = visionprescription.VisionPrescription(js)
self.implVisionPrescription1(inst2)
def implVisionPrescription1(self, inst):
self.assertEqual(inst.dateWritten.date, FHIRDate("2014-06-15").date)
self.assertEqual(inst.dateWritten.as_json(), "2014-06-15")
self.assertEqual(inst.dispense[0].add, 1.75)
self.assertEqual(inst.dispense[0].axis, 160)
self.assertEqual(inst.dispense[0].backCurve, 8.7)
self.assertEqual(inst.dispense[0].brand, "OphthaGuard")
self.assertEqual(inst.dispense[0].color, "green")
self.assertEqual(inst.dispense[0].cylinder, -2.25)
self.assertEqual(inst.dispense[0].diameter, 14.0)
self.assertEqual(inst.dispense[0].duration.code, "month")
self.assertEqual(inst.dispense[0].duration.system, "http://unitsofmeasure.org")
self.assertEqual(inst.dispense[0].duration.unit, "month")
self.assertEqual(inst.dispense[0].duration.value, 1)
self.assertEqual(inst.dispense[0].eye, "right")
self.assertEqual(inst.dispense[0].note[0].text, "Shade treatment for extreme light sensitivity")
self.assertEqual(inst.dispense[0].power, -2.75)
self.assertEqual(inst.dispense[0].product.coding[0].code, "contact")
self.assertEqual(inst.dispense[0].product.coding[0].system, "http://hl7.org/fhir/ex-visionprescriptionproduct")
self.assertEqual(inst.dispense[1].add, 1.75)
self.assertEqual(inst.dispense[1].axis, 160)
self.assertEqual(inst.dispense[1].backCurve, 8.7)
self.assertEqual(inst.dispense[1].brand, "OphthaGuard")
self.assertEqual(inst.dispense[1].color, "green")
self.assertEqual(inst.dispense[1].cylinder, -3.5)
self.assertEqual(inst.dispense[1].diameter, 14.0)
self.assertEqual(inst.dispense[1].duration.code, "month")
self.assertEqual(inst.dispense[1].duration.system, "http://unitsofmeasure.org")
self.assertEqual(inst.dispense[1].duration.unit, "month")
self.assertEqual(inst.dispense[1].duration.value, 1)
self.assertEqual(inst.dispense[1].eye, "left")
self.assertEqual(inst.dispense[1].note[0].text, "Shade treatment for extreme light sensitivity")
self.assertEqual(inst.dispense[1].power, -2.75)
self.assertEqual(inst.dispense[1].product.coding[0].code, "contact")
self.assertEqual(inst.dispense[1].product.coding[0].system, "http://hl7.org/fhir/ex-visionprescriptionproduct")
self.assertEqual(inst.id, "33124")
self.assertEqual(inst.identifier[0].system, "http://www.happysight.com/prescription")
self.assertEqual(inst.identifier[0].value, "15014")
self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "myopia")
self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://samplevisionreasoncodes.com")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.text.div, "<div xmlns=\"http://www.w3.org/1999/xhtml\">Sample Contract Lens prescription</div>")
self.assertEqual(inst.text.status, "generated")
def testVisionPrescription2(self):
inst = self.instantiate_from("visionprescription-example.json")
self.assertIsNotNone(inst, "Must have instantiated a VisionPrescription instance")
self.implVisionPrescription2(inst)
js = inst.as_json()
self.assertEqual("VisionPrescription", js["resourceType"])
inst2 = visionprescription.VisionPrescription(js)
self.implVisionPrescription2(inst2)
def implVisionPrescription2(self, inst):
self.assertEqual(inst.dateWritten.date, FHIRDate("2014-06-15").date)
self.assertEqual(inst.dateWritten.as_json(), "2014-06-15")
self.assertEqual(inst.dispense[0].add, 2.0)
self.assertEqual(inst.dispense[0].base, "down")
self.assertEqual(inst.dispense[0].eye, "right")
self.assertEqual(inst.dispense[0].prism, 0.5)
self.assertEqual(inst.dispense[0].product.coding[0].code, "lens")
self.assertEqual(inst.dispense[0].product.coding[0].system, "http://hl7.org/fhir/ex-visionprescriptionproduct")
self.assertEqual(inst.dispense[0].sphere, -2.0)
self.assertEqual(inst.dispense[1].add, 2.0)
self.assertEqual(inst.dispense[1].axis, 180)
self.assertEqual(inst.dispense[1].base, "up")
self.assertEqual(inst.dispense[1].cylinder, -0.5)
self.assertEqual(inst.dispense[1].eye, "left")
self.assertEqual(inst.dispense[1].prism, 0.5)
self.assertEqual(inst.dispense[1].product.coding[0].code, "lens")
self.assertEqual(inst.dispense[1].product.coding[0].system, "http://hl7.org/fhir/ex-visionprescriptionproduct")
self.assertEqual(inst.dispense[1].sphere, -1.0)
self.assertEqual(inst.id, "33123")
self.assertEqual(inst.identifier[0].system, "http://www.happysight.com/prescription")
self.assertEqual(inst.identifier[0].value, "15013")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.text.status, "generated")
| 54.00885 | 126 | 0.687531 | 731 | 6,103 | 5.72777 | 0.194254 | 0.243611 | 0.294961 | 0.30953 | 0.807738 | 0.78075 | 0.670886 | 0.515405 | 0.515405 | 0.410795 | 0 | 0.041536 | 0.16369 | 6,103 | 112 | 127 | 54.491071 | 0.778801 | 0.018679 | 0 | 0.28125 | 1 | 0 | 0.162627 | 0.014207 | 0 | 0 | 0 | 0 | 0.729167 | 1 | 0.052083 | false | 0 | 0.0625 | 0 | 0.135417 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5db13ad9d0489e77b27ad80f0451bba1fc8e256f | 1,476 | py | Python | tests/test_setup_cfg.py | jnoortheen/pipm | 79998764e3f3d9c0c8c50ef38db6a7296997ed76 | [
"MIT"
] | 27 | 2017-10-21T12:59:15.000Z | 2022-03-28T07:34:30.000Z | tests/test_setup_cfg.py | jnoortheen/pipm | 79998764e3f3d9c0c8c50ef38db6a7296997ed76 | [
"MIT"
] | 8 | 2017-12-08T01:12:41.000Z | 2021-06-09T11:34:25.000Z | tests/test_setup_cfg.py | jnoortheen/pipm | 79998764e3f3d9c0c8c50ef38db6a7296997ed76 | [
"MIT"
] | 1 | 2019-11-04T05:19:30.000Z | 2019-11-04T05:19:30.000Z | from pipm import setup_cfg
from pytest import fixture
@fixture
def req_set_py(pkg_ir_py):
return [pkg_ir_py]
@fixture
def req_set_py_six(pkg_ir_py, pkg_ir_six):
return [pkg_ir_py, pkg_ir_six]
def test_add_requirements_with_existing_config(config, req_set_py):
config = setup_cfg.add_requirements(user_reqs=req_set_py)
assert config.get("options", "install_requires") == "\npy==1.0.0\nsix~=1.11.0"
assert config.get("options.extras_require", "dev") == "\npytest~=3.7.2"
def test_add_requirements_dev_with_existing_config(config, req_set_py):
config = setup_cfg.add_requirements(user_reqs=req_set_py, env="dev")
assert config.get("options", "install_requires") == "\nsix~=1.11.0"
assert config.get("options.extras_require", "dev") == "\npy==1.0.0\npytest~=3.7.2"
def test_add_requirements_no_config_file(chdir, req_set_py_six):
config = setup_cfg.add_requirements(user_reqs=req_set_py_six)
assert config.get("options", "install_requires") == "\npy==1.0.0\nsix~=1.11.0"
def test_add_dev_requirements_no_config_file(chdir, req_set_py_six):
config = setup_cfg.add_requirements(user_reqs=req_set_py_six, env="dev")
assert config.get("options.extras_require", "dev") == "\npy==1.0.0\nsix~=1.11.0"
def test_remove_requirements(config):
config = setup_cfg.remove_requirements({"six"})
assert config.get("options", "install_requires") == "\nsix~=1.11.0"
assert config.get("options.extras_require", "dev") == ""
| 35.142857 | 86 | 0.735772 | 241 | 1,476 | 4.157676 | 0.186722 | 0.05988 | 0.07984 | 0.175649 | 0.839321 | 0.803393 | 0.761477 | 0.758483 | 0.706587 | 0.706587 | 0 | 0.029008 | 0.112466 | 1,476 | 41 | 87 | 36 | 0.735878 | 0 | 0 | 0.230769 | 0 | 0 | 0.230352 | 0.126016 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.269231 | false | 0 | 0.076923 | 0.076923 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b900033183b293fc121cb88963fb272a6a73ed89 | 14,483 | bzl | Python | third_party/manifest.bzl | PaulSonOfLars/bazel-gazelle | 394f5355e0b91940f45bba9a705fb4382b234316 | [
"Apache-2.0"
] | 1 | 2020-01-22T10:54:09.000Z | 2020-01-22T10:54:09.000Z | third_party/manifest.bzl | lubinsz/gazelle | 56bd0dc6213cdca906975861ac59b93e60bd8c70 | [
"Apache-2.0"
] | null | null | null | third_party/manifest.bzl | lubinsz/gazelle | 56bd0dc6213cdca906975861ac59b93e60bd8c70 | [
"Apache-2.0"
] | 1 | 2020-09-02T08:00:55.000Z | 2020-09-02T08:00:55.000Z | # Maps checked-in BUILD file templates (labels under @bazel_gazelle//third_party)
# to the destination paths where they are written inside the generated
# org_golang_x_tools external repository.
manifest = {
"org_golang_x_tools": {
"@bazel_gazelle//third_party:org_golang_x_tools/present/BUILD.bazel.in": "present/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/benchmark/parse/BUILD.bazel.in": "benchmark/parse/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/present/BUILD.bazel.in": "cmd/present/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/titanic.biz/foo/BUILD.bazel.in": "cmd/fiximports/testdata/src/titanic.biz/foo/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/titanic.biz/bar/BUILD.bazel.in": "cmd/fiximports/testdata/src/titanic.biz/bar/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/new.com/one/BUILD.bazel.in": "cmd/fiximports/testdata/src/new.com/one/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/fruit.io/pear/BUILD.bazel.in": "cmd/fiximports/testdata/src/fruit.io/pear/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/fruit.io/banana/BUILD.bazel.in": "cmd/fiximports/testdata/src/fruit.io/banana/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/fruit.io/orange/BUILD.bazel.in": "cmd/fiximports/testdata/src/fruit.io/orange/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/testdata/src/old.com/one/BUILD.bazel.in": "cmd/fiximports/testdata/src/old.com/one/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/fiximports/BUILD.bazel.in": "cmd/fiximports/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/gorename/BUILD.bazel.in": "cmd/gorename/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/calls/BUILD.bazel.in": "cmd/guru/testdata/src/calls/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/implements-methods-json/BUILD.bazel.in": "cmd/guru/testdata/src/implements-methods-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/whicherrs/BUILD.bazel.in": "cmd/guru/testdata/src/whicherrs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/referrers-json/BUILD.bazel.in": "cmd/guru/testdata/src/referrers-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/alias/BUILD.bazel.in": "cmd/guru/testdata/src/alias/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/pointsto/BUILD.bazel.in": "cmd/guru/testdata/src/pointsto/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/implements/BUILD.bazel.in": "cmd/guru/testdata/src/implements/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/what-json/BUILD.bazel.in": "cmd/guru/testdata/src/what-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/softerrs/BUILD.bazel.in": "cmd/guru/testdata/src/softerrs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/implements-json/BUILD.bazel.in": "cmd/guru/testdata/src/implements-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/imports/BUILD.bazel.in": "cmd/guru/testdata/src/imports/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/describe/BUILD.bazel.in": "cmd/guru/testdata/src/describe/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/pointsto-json/BUILD.bazel.in": "cmd/guru/testdata/src/pointsto-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/freevars/BUILD.bazel.in": "cmd/guru/testdata/src/freevars/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/describe-json/BUILD.bazel.in": "cmd/guru/testdata/src/describe-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/implements-methods/BUILD.bazel.in": "cmd/guru/testdata/src/implements-methods/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/lib/BUILD.bazel.in": "cmd/guru/testdata/src/lib/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/lib/sublib/BUILD.bazel.in": "cmd/guru/testdata/src/lib/sublib/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/what/BUILD.bazel.in": "cmd/guru/testdata/src/what/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/peers/BUILD.bazel.in": "cmd/guru/testdata/src/peers/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/main/BUILD.bazel.in": "cmd/guru/testdata/src/main/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/definition-json/BUILD.bazel.in": "cmd/guru/testdata/src/definition-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/calls-json/BUILD.bazel.in": "cmd/guru/testdata/src/calls-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/referrers/BUILD.bazel.in": "cmd/guru/testdata/src/referrers/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/reflection/BUILD.bazel.in": "cmd/guru/testdata/src/reflection/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/testdata/src/peers-json/BUILD.bazel.in": "cmd/guru/testdata/src/peers-json/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/BUILD.bazel.in": "cmd/guru/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/guru/serial/BUILD.bazel.in": "cmd/guru/serial/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/toolstash/BUILD.bazel.in": "cmd/toolstash/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/compilebench/BUILD.bazel.in": "cmd/compilebench/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/goyacc/testdata/expr/BUILD.bazel.in": "cmd/goyacc/testdata/expr/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/goyacc/BUILD.bazel.in": "cmd/goyacc/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/ssadump/BUILD.bazel.in": "cmd/ssadump/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/stringer/testdata/BUILD.bazel.in": "cmd/stringer/testdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/stringer/BUILD.bazel.in": "cmd/stringer/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/html2article/BUILD.bazel.in": "cmd/html2article/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/gotype/BUILD.bazel.in": "cmd/gotype/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/heapview/BUILD.bazel.in": "cmd/heapview/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/heapview/internal/core/BUILD.bazel.in": "cmd/heapview/internal/core/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/godoc/BUILD.bazel.in": "cmd/godoc/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/godex/BUILD.bazel.in": "cmd/godex/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/bundle/testdata/src/initial/BUILD.bazel.in": "cmd/bundle/testdata/src/initial/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/bundle/testdata/src/domain.name/importdecl/BUILD.bazel.in": "cmd/bundle/testdata/src/domain.name/importdecl/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/bundle/BUILD.bazel.in": "cmd/bundle/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/gomvpkg/BUILD.bazel.in": "cmd/gomvpkg/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/tip/BUILD.bazel.in": "cmd/tip/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/goimports/BUILD.bazel.in": "cmd/goimports/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/go-contrib-init/BUILD.bazel.in": "cmd/go-contrib-init/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/callgraph/testdata/src/pkg/BUILD.bazel.in": "cmd/callgraph/testdata/src/pkg/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/callgraph/BUILD.bazel.in": "cmd/callgraph/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/eg/BUILD.bazel.in": "cmd/eg/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/getgo/BUILD.bazel.in": "cmd/getgo/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/getgo/server/BUILD.bazel.in": "cmd/getgo/server/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/benchcmp/BUILD.bazel.in": "cmd/benchcmp/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/stress/BUILD.bazel.in": "cmd/stress/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/digraph/BUILD.bazel.in": "cmd/digraph/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/cover/testdata/BUILD.bazel.in": "cmd/cover/testdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cmd/cover/BUILD.bazel.in": "cmd/cover/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/gcexportdata/BUILD.bazel.in": "go/gcexportdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/types/typeutil/BUILD.bazel.in": "go/types/typeutil/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/buildutil/BUILD.bazel.in": "go/buildutil/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/internal/gccgoimporter/BUILD.bazel.in": "go/internal/gccgoimporter/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/gccgoexportdata/BUILD.bazel.in": "go/gccgoexportdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/ssa/ssautil/BUILD.bazel.in": "go/ssa/ssautil/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/ssa/BUILD.bazel.in": "go/ssa/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/ssa/interp/BUILD.bazel.in": "go/ssa/interp/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/vcs/BUILD.bazel.in": "go/vcs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/loader/testdata/BUILD.bazel.in": "go/loader/testdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/loader/BUILD.bazel.in": "go/loader/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/ast/astutil/BUILD.bazel.in": "go/ast/astutil/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/gcimporter15/testdata/versions/BUILD.bazel.in": "go/gcimporter15/testdata/versions/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/gcimporter15/BUILD.bazel.in": "go/gcimporter15/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/pointer/testdata/BUILD.bazel.in": "go/pointer/testdata/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/pointer/BUILD.bazel.in": "go/pointer/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/callgraph/cha/BUILD.bazel.in": "go/callgraph/cha/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/callgraph/BUILD.bazel.in": "go/callgraph/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/callgraph/rta/BUILD.bazel.in": "go/callgraph/rta/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/go/callgraph/static/BUILD.bazel.in": "go/callgraph/static/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/playground/BUILD.bazel.in": "playground/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/playground/socket/BUILD.bazel.in": "playground/socket/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/BUILD.bazel.in": "godoc/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/vfs/BUILD.bazel.in": "godoc/vfs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/vfs/zipfs/BUILD.bazel.in": "godoc/vfs/zipfs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/vfs/gatefs/BUILD.bazel.in": "godoc/vfs/gatefs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/vfs/httpfs/BUILD.bazel.in": "godoc/vfs/httpfs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/vfs/mapfs/BUILD.bazel.in": "godoc/vfs/mapfs/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/analysis/BUILD.bazel.in": "godoc/analysis/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/util/BUILD.bazel.in": "godoc/util/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/static/BUILD.bazel.in": "godoc/static/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/godoc/redirect/BUILD.bazel.in": "godoc/redirect/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/imports/BUILD.bazel.in": "imports/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/blog/BUILD.bazel.in": "blog/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/blog/atom/BUILD.bazel.in": "blog/atom/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/container/intsets/BUILD.bazel.in": "container/intsets/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/refactor/satisfy/BUILD.bazel.in": "refactor/satisfy/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/refactor/rename/BUILD.bazel.in": "refactor/rename/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/refactor/eg/BUILD.bazel.in": "refactor/eg/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/refactor/importgraph/BUILD.bazel.in": "refactor/importgraph/BUILD.bazel",
"@bazel_gazelle//third_party:org_golang_x_tools/cover/BUILD.bazel.in": "cover/BUILD.bazel",
}
}
| 124.853448 | 181 | 0.762273 | 2,204 | 14,483 | 4.755898 | 0.055808 | 0.211792 | 0.10685 | 0.160275 | 0.892864 | 0.808243 | 0.790784 | 0.752337 | 0.688323 | 0.633658 | 0 | 0.000749 | 0.077677 | 14,483 | 115 | 182 | 125.93913 | 0.78395 | 0 | 0 | 0 | 0 | 0.313043 | 0.875233 | 0.861493 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.13913 | 0 | 0.13913 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
f8e81a70cebb983f178e00feda1e4950123465ed | 20 | py | Python | mpy/t2.py | jeelabs/monty | add6da22dd446cda5e93e023e90520cfdb3fc712 | [
"Unlicense"
] | 11 | 2021-02-02T02:32:50.000Z | 2021-12-30T12:55:41.000Z | mpy/t2.py | jeelabs/monty | add6da22dd446cda5e93e023e90520cfdb3fc712 | [
"Unlicense"
] | 75 | 2021-01-27T10:53:10.000Z | 2021-06-30T10:59:49.000Z | mpy/t2.py | jeelabs/monty | add6da22dd446cda5e93e023e90520cfdb3fc712 | [
"Unlicense"
] | 1 | 2021-09-25T11:18:38.000Z | 2021-09-25T11:18:38.000Z | assert 42 == 40 + 2
| 10 | 19 | 0.55 | 4 | 20 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.357143 | 0.3 | 20 | 1 | 20 | 20 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d497220d9b00543887b0edd6538358c48e7529d | 5,652 | py | Python | src/genie/libs/parser/nxos/tests/ShowIsis/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/nxos/tests/ShowIsis/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/nxos/tests/ShowIsis/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z |
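# Parsed structure that the ShowIsis parser is expected to produce for the
# golden device output stored alongside this test.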
expected_output = {
'instance': {
'test': {
'isis_process': 'test',
'instance_number': 1,
'uuid': '1090519320',
'process_id': 1581,
'vrf': {
'default': {
'vrf': 'default',
'system_id': '3333.33ff.6666',
'is_type': 'L1-L2',
'sap': 412,
'queue_handle': 15,
'maximum_lsp_mtu': 1492,
'stateful_ha': 'enabled',
'graceful_restart': {
'enable': True,
'state': 'Inactive',
'last_gr_status': 'none',
},
'start_mode': 'Complete',
'bfd_ipv4': 'globally disabled',
'bfd_ipv6': 'globally disabled',
'topology_mode': 'Multitopology',
'metric_type': {
'advertise': ['wide'],
'accept': ['narrow', 'wide'],
},
'area_address': ['49.0001'],
'process': 'up and running',
'vrf_id': 1,
'during_non_graceful_controlled_restart': 'Stale routes',
'resolution_of_l3_to_l2': 'Enable',
'sr_ipv4': 'not configured and disabled',
'sr_ipv6': 'not configured and disabled',
'supported_interfaces': ['Loopback0', 'Ethernet1/1.115', 'Ethernet1/2.115'],
'topology': {
0: {
'address_family': {
'ipv4_unicast': {
'number_of_interface': 3,
'distance': 115,
},
'ipv6_unicast': {
'number_of_interface': 0,
'distance': 115,
},
},
},
2: {
'address_family': {
'ipv6_unicast': {
'number_of_interface': 3,
'distance': 115,
},
},
},
},
'authentication': {
'level_1': {
'auth_check': 'set',
},
'level_2': {
'auth_check': 'set',
},
},
'l1_next_spf': '00:00:07',
'l2_next_spf': '00:00:04',
},
'VRF1': {
'vrf': 'VRF1',
'system_id': '3333.33ff.6666',
'is_type': 'L1-L2',
'sap': 412,
'queue_handle': 15,
'maximum_lsp_mtu': 1492,
'stateful_ha': 'enabled',
'graceful_restart': {
'enable': True,
'state': 'Inactive',
'last_gr_status': 'none',
},
'start_mode': 'Complete',
'bfd_ipv4': 'globally disabled',
'bfd_ipv6': 'globally disabled',
'topology_mode': 'Multitopology',
'metric_type': {
'advertise': ['wide'],
'accept': ['narrow', 'wide'],
},
'area_address': ['49.0001'],
'process': 'up and running',
'vrf_id': 3,
'during_non_graceful_controlled_restart': 'Stale routes',
'resolution_of_l3_to_l2': 'Enable',
'sr_ipv4': 'not configured and disabled',
'sr_ipv6': 'not configured and disabled',
'supported_interfaces': ['Loopback300', 'Ethernet1/1.415', 'Ethernet1/2.415'],
'topology': {
0: {
'address_family': {
'ipv4_unicast': {
'number_of_interface': 3,
'distance': 115,
},
'ipv6_unicast': {
'number_of_interface': 0,
'distance': 115,
},
},
},
2: {
'address_family': {
'ipv6_unicast': {
'number_of_interface': 3,
'distance': 115,
},
},
},
},
'authentication': {
'level_1': {
'auth_check': 'set',
},
'level_2': {
'auth_check': 'set',
},
},
'l1_next_spf': 'Inactive',
'l2_next_spf': 'Inactive',
},
},
},
},
}
| 40.661871 | 98 | 0.294763 | 320 | 5,652 | 4.90625 | 0.321875 | 0.049682 | 0.057325 | 0.09172 | 0.850955 | 0.850955 | 0.850955 | 0.850955 | 0.850955 | 0.850955 | 0 | 0.071335 | 0.595718 | 5,652 | 138 | 99 | 40.956522 | 0.615755 | 0 | 0 | 0.617647 | 0 | 0 | 0.28885 | 0.021239 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d636cc2b563d5075dd1015d7dbaceaadc9b40a2 | 69 | py | Python | b/__init__.py | NewPracticer/python_study_route | d661a48f4904c19e629b6a71d8db2a4874706d4d | [
"Apache-2.0"
] | null | null | null | b/__init__.py | NewPracticer/python_study_route | d661a48f4904c19e629b6a71d8db2a4874706d4d | [
"Apache-2.0"
] | null | null | null | b/__init__.py | NewPracticer/python_study_route | d661a48f4904c19e629b6a71d8db2a4874706d4d | [
"Apache-2.0"
] | null | null | null | import sys
import io
import sys
import io
## Packages and modules are only imported once; repeated imports are no-ops
## Avoid circular imports
5d67ab97fec83057d616db790dc98f0c5f792fbe | 9,635 | py | Python | venv/Lib/site-packages/numpy/typing/tests/data/reveal/fromnumeric.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 41 | 2021-06-19T13:57:18.000Z | 2021-12-02T17:08:53.000Z | venv/Lib/site-packages/numpy/typing/tests/data/reveal/fromnumeric.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 5 | 2021-05-07T10:31:27.000Z | 2021-05-07T10:33:37.000Z | venv/Lib/site-packages/numpy/typing/tests/data/reveal/fromnumeric.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 4 | 2021-07-02T03:09:51.000Z | 2021-11-25T13:00:10.000Z | """Tests for :mod:`numpy.core.fromnumeric`."""
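# numpy's typing test-suite checks each line below: the type mypy reveals for a
# reveal_type() call must match the trailing "# E:" comment on that line.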
import numpy as np
A = np.array(True, ndmin=2, dtype=bool)
B = np.array(1.0, ndmin=2, dtype=np.float32)
A.setflags(write=False)
B.setflags(write=False)
a = np.bool_(True)
b = np.float32(1.0)
c = 1.0
d = np.array(1.0, dtype=np.float32) # writeable
reveal_type(np.take(a, 0)) # E: Any
reveal_type(np.take(b, 0)) # E: Any
reveal_type(np.take(c, 0)) # E: Any
reveal_type(np.take(A, 0)) # E: Any
reveal_type(np.take(B, 0)) # E: Any
reveal_type(np.take(A, [0])) # E: Any
reveal_type(np.take(B, [0])) # E: Any
reveal_type(np.reshape(a, 1)) # E: numpy.ndarray
reveal_type(np.reshape(b, 1)) # E: numpy.ndarray
reveal_type(np.reshape(c, 1)) # E: numpy.ndarray
reveal_type(np.reshape(A, 1)) # E: numpy.ndarray
reveal_type(np.reshape(B, 1)) # E: numpy.ndarray
reveal_type(np.choose(a, [True, True])) # E: Any
reveal_type(np.choose(A, [True, True])) # E: Any
reveal_type(np.repeat(a, 1)) # E: numpy.ndarray
reveal_type(np.repeat(b, 1)) # E: numpy.ndarray
reveal_type(np.repeat(c, 1)) # E: numpy.ndarray
reveal_type(np.repeat(A, 1)) # E: numpy.ndarray
reveal_type(np.repeat(B, 1)) # E: numpy.ndarray
# TODO: Add tests for np.put()
reveal_type(np.swapaxes(A, 0, 0)) # E: numpy.ndarray
reveal_type(np.swapaxes(B, 0, 0)) # E: numpy.ndarray
reveal_type(np.transpose(a)) # E: numpy.ndarray
reveal_type(np.transpose(b)) # E: numpy.ndarray
reveal_type(np.transpose(c)) # E: numpy.ndarray
reveal_type(np.transpose(A)) # E: numpy.ndarray
reveal_type(np.transpose(B)) # E: numpy.ndarray
reveal_type(np.partition(a, 0, axis=None)) # E: numpy.ndarray
reveal_type(np.partition(b, 0, axis=None)) # E: numpy.ndarray
reveal_type(np.partition(c, 0, axis=None)) # E: numpy.ndarray
reveal_type(np.partition(A, 0)) # E: numpy.ndarray
reveal_type(np.partition(B, 0)) # E: numpy.ndarray
reveal_type(np.argpartition(a, 0)) # E: Any
reveal_type(np.argpartition(b, 0)) # E: Any
reveal_type(np.argpartition(c, 0)) # E: Any
reveal_type(np.argpartition(A, 0)) # E: Any
reveal_type(np.argpartition(B, 0)) # E: Any
reveal_type(np.sort(A, 0)) # E: numpy.ndarray
reveal_type(np.sort(B, 0)) # E: numpy.ndarray
reveal_type(np.argsort(A, 0)) # E: numpy.ndarray
reveal_type(np.argsort(B, 0)) # E: numpy.ndarray
reveal_type(np.argmax(A)) # E: numpy.signedinteger[Any]
reveal_type(np.argmax(B)) # E: numpy.signedinteger[Any]
reveal_type(np.argmax(A, axis=0)) # E: Any
reveal_type(np.argmax(B, axis=0)) # E: Any
reveal_type(np.argmin(A)) # E: numpy.signedinteger[Any]
reveal_type(np.argmin(B)) # E: numpy.signedinteger[Any]
reveal_type(np.argmin(A, axis=0)) # E: Any
reveal_type(np.argmin(B, axis=0)) # E: Any
reveal_type(np.searchsorted(A[0], 0)) # E: numpy.signedinteger[Any]
reveal_type(np.searchsorted(B[0], 0)) # E: numpy.signedinteger[Any]
reveal_type(np.searchsorted(A[0], [0])) # E: numpy.ndarray
reveal_type(np.searchsorted(B[0], [0])) # E: numpy.ndarray
reveal_type(np.resize(a, (5, 5))) # E: numpy.ndarray
reveal_type(np.resize(b, (5, 5))) # E: numpy.ndarray
reveal_type(np.resize(c, (5, 5))) # E: numpy.ndarray
reveal_type(np.resize(A, (5, 5))) # E: numpy.ndarray
reveal_type(np.resize(B, (5, 5))) # E: numpy.ndarray
reveal_type(np.squeeze(a)) # E: numpy.bool_
reveal_type(np.squeeze(b)) # E: numpy.floating[numpy.typing._32Bit]
reveal_type(np.squeeze(c)) # E: numpy.ndarray
reveal_type(np.squeeze(A)) # E: numpy.ndarray
reveal_type(np.squeeze(B)) # E: numpy.ndarray
reveal_type(np.diagonal(A)) # E: numpy.ndarray
reveal_type(np.diagonal(B)) # E: numpy.ndarray
reveal_type(np.trace(A)) # E: Any
reveal_type(np.trace(B)) # E: Any
reveal_type(np.ravel(a)) # E: numpy.ndarray
reveal_type(np.ravel(b)) # E: numpy.ndarray
reveal_type(np.ravel(c)) # E: numpy.ndarray
reveal_type(np.ravel(A)) # E: numpy.ndarray
reveal_type(np.ravel(B)) # E: numpy.ndarray
reveal_type(np.nonzero(a)) # E: tuple[numpy.ndarray]
reveal_type(np.nonzero(b)) # E: tuple[numpy.ndarray]
reveal_type(np.nonzero(c)) # E: tuple[numpy.ndarray]
reveal_type(np.nonzero(A)) # E: tuple[numpy.ndarray]
reveal_type(np.nonzero(B)) # E: tuple[numpy.ndarray]
reveal_type(np.shape(a)) # E: tuple[builtins.int]
reveal_type(np.shape(b)) # E: tuple[builtins.int]
reveal_type(np.shape(c)) # E: tuple[builtins.int]
reveal_type(np.shape(A)) # E: tuple[builtins.int]
reveal_type(np.shape(B)) # E: tuple[builtins.int]
reveal_type(np.compress([True], a)) # E: numpy.ndarray
reveal_type(np.compress([True], b)) # E: numpy.ndarray
reveal_type(np.compress([True], c)) # E: numpy.ndarray
reveal_type(np.compress([True], A)) # E: numpy.ndarray
reveal_type(np.compress([True], B)) # E: numpy.ndarray
reveal_type(np.clip(a, 0, 1.0)) # E: Any
reveal_type(np.clip(b, -1, 1)) # E: Any
reveal_type(np.clip(c, 0, 1)) # E: Any
reveal_type(np.clip(A, 0, 1)) # E: Any
reveal_type(np.clip(B, 0, 1)) # E: Any
reveal_type(np.sum(a)) # E: Any
reveal_type(np.sum(b)) # E: Any
reveal_type(np.sum(c)) # E: Any
reveal_type(np.sum(A)) # E: Any
reveal_type(np.sum(B)) # E: Any
reveal_type(np.sum(A, axis=0)) # E: Any
reveal_type(np.sum(B, axis=0)) # E: Any
reveal_type(np.all(a)) # E: numpy.bool_
reveal_type(np.all(b)) # E: numpy.bool_
reveal_type(np.all(c)) # E: numpy.bool_
reveal_type(np.all(A)) # E: numpy.bool_
reveal_type(np.all(B)) # E: numpy.bool_
reveal_type(np.all(A, axis=0)) # E: Any
reveal_type(np.all(B, axis=0)) # E: Any
reveal_type(np.all(A, keepdims=True)) # E: Any
reveal_type(np.all(B, keepdims=True)) # E: Any
reveal_type(np.any(a)) # E: numpy.bool_
reveal_type(np.any(b)) # E: numpy.bool_
reveal_type(np.any(c)) # E: numpy.bool_
reveal_type(np.any(A)) # E: numpy.bool_
reveal_type(np.any(B)) # E: numpy.bool_
reveal_type(np.any(A, axis=0)) # E: Any
reveal_type(np.any(B, axis=0)) # E: Any
reveal_type(np.any(A, keepdims=True)) # E: Any
reveal_type(np.any(B, keepdims=True)) # E: Any
reveal_type(np.cumsum(a)) # E: numpy.ndarray
reveal_type(np.cumsum(b)) # E: numpy.ndarray
reveal_type(np.cumsum(c)) # E: numpy.ndarray
reveal_type(np.cumsum(A)) # E: numpy.ndarray
reveal_type(np.cumsum(B)) # E: numpy.ndarray
reveal_type(np.ptp(a)) # E: Any
reveal_type(np.ptp(b)) # E: Any
reveal_type(np.ptp(c)) # E: Any
reveal_type(np.ptp(A)) # E: Any
reveal_type(np.ptp(B)) # E: Any
reveal_type(np.ptp(A, axis=0)) # E: Any
reveal_type(np.ptp(B, axis=0)) # E: Any
reveal_type(np.ptp(A, keepdims=True)) # E: Any
reveal_type(np.ptp(B, keepdims=True)) # E: Any
reveal_type(np.amax(a)) # E: Any
reveal_type(np.amax(b)) # E: Any
reveal_type(np.amax(c)) # E: Any
reveal_type(np.amax(A)) # E: Any
reveal_type(np.amax(B)) # E: Any
reveal_type(np.amax(A, axis=0)) # E: Any
reveal_type(np.amax(B, axis=0)) # E: Any
reveal_type(np.amax(A, keepdims=True)) # E: Any
reveal_type(np.amax(B, keepdims=True)) # E: Any
reveal_type(np.amin(a)) # E: Any
reveal_type(np.amin(b)) # E: Any
reveal_type(np.amin(c)) # E: Any
reveal_type(np.amin(A)) # E: Any
reveal_type(np.amin(B)) # E: Any
reveal_type(np.amin(A, axis=0)) # E: Any
reveal_type(np.amin(B, axis=0)) # E: Any
reveal_type(np.amin(A, keepdims=True)) # E: Any
reveal_type(np.amin(B, keepdims=True)) # E: Any
reveal_type(np.prod(a)) # E: Any
reveal_type(np.prod(b)) # E: Any
reveal_type(np.prod(c)) # E: Any
reveal_type(np.prod(A)) # E: Any
reveal_type(np.prod(B)) # E: Any
reveal_type(np.prod(A, axis=0)) # E: Any
reveal_type(np.prod(B, axis=0)) # E: Any
reveal_type(np.prod(A, keepdims=True)) # E: Any
reveal_type(np.prod(B, keepdims=True)) # E: Any
reveal_type(np.prod(b, out=d)) # E: Any
reveal_type(np.prod(B, out=d)) # E: Any
reveal_type(np.cumprod(a)) # E: numpy.ndarray
reveal_type(np.cumprod(b)) # E: numpy.ndarray
reveal_type(np.cumprod(c)) # E: numpy.ndarray
reveal_type(np.cumprod(A)) # E: numpy.ndarray
reveal_type(np.cumprod(B)) # E: numpy.ndarray
reveal_type(np.ndim(a)) # E: int
reveal_type(np.ndim(b)) # E: int
reveal_type(np.ndim(c)) # E: int
reveal_type(np.ndim(A)) # E: int
reveal_type(np.ndim(B)) # E: int
reveal_type(np.size(a)) # E: int
reveal_type(np.size(b)) # E: int
reveal_type(np.size(c)) # E: int
reveal_type(np.size(A)) # E: int
reveal_type(np.size(B)) # E: int
reveal_type(np.around(a)) # E: Any
reveal_type(np.around(b)) # E: Any
reveal_type(np.around(c)) # E: Any
reveal_type(np.around(A)) # E: Any
reveal_type(np.around(B)) # E: Any
reveal_type(np.mean(a)) # E: Any
reveal_type(np.mean(b)) # E: Any
reveal_type(np.mean(c)) # E: Any
reveal_type(np.mean(A)) # E: Any
reveal_type(np.mean(B)) # E: Any
reveal_type(np.mean(A, axis=0)) # E: Any
reveal_type(np.mean(B, axis=0)) # E: Any
reveal_type(np.mean(A, keepdims=True)) # E: Any
reveal_type(np.mean(B, keepdims=True)) # E: Any
reveal_type(np.mean(b, out=d)) # E: Any
reveal_type(np.mean(B, out=d)) # E: Any
reveal_type(np.std(a)) # E: Any
reveal_type(np.std(b)) # E: Any
reveal_type(np.std(c)) # E: Any
reveal_type(np.std(A)) # E: Any
reveal_type(np.std(B)) # E: Any
reveal_type(np.std(A, axis=0)) # E: Any
reveal_type(np.std(B, axis=0)) # E: Any
reveal_type(np.std(A, keepdims=True)) # E: Any
reveal_type(np.std(B, keepdims=True)) # E: Any
reveal_type(np.std(b, out=d)) # E: Any
reveal_type(np.std(B, out=d)) # E: Any
reveal_type(np.var(a)) # E: Any
reveal_type(np.var(b)) # E: Any
reveal_type(np.var(c)) # E: Any
reveal_type(np.var(A)) # E: Any
reveal_type(np.var(B)) # E: Any
reveal_type(np.var(A, axis=0)) # E: Any
reveal_type(np.var(B, axis=0)) # E: Any
reveal_type(np.var(A, keepdims=True)) # E: Any
reveal_type(np.var(B, keepdims=True)) # E: Any
reveal_type(np.var(b, out=d)) # E: Any
reveal_type(np.var(B, out=d)) # E: Any
| 36.358491 | 68 | 0.673171 | 1,831 | 9,635 | 3.419443 | 0.044784 | 0.338604 | 0.406325 | 0.28989 | 0.949689 | 0.94873 | 0.939467 | 0.889475 | 0.691902 | 0.641751 | 0 | 0.012661 | 0.131085 | 9,635 | 264 | 69 | 36.496212 | 0.735189 | 0.265179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003788 | 0 | 1 | 0 | false | 0 | 0.004525 | 0 | 0.004525 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
53d1bead5b470130f3e4b4659107a05cac7d49d4 | 236 | py | Python | contactnets/experiments/block3d/__init__.py | DAIRLab/contact-nets | b0e197cbb0ab5550628d71d851a6de1dab616fb6 | [
"BSD-3-Clause"
] | 16 | 2020-11-18T01:33:05.000Z | 2022-02-15T17:52:55.000Z | contactnets/experiments/block3d/__init__.py | DAIRLab/contact-nets | b0e197cbb0ab5550628d71d851a6de1dab616fb6 | [
"BSD-3-Clause"
] | null | null | null | contactnets/experiments/block3d/__init__.py | DAIRLab/contact-nets | b0e197cbb0ab5550628d71d851a6de1dab616fb6 | [
"BSD-3-Clause"
] | 1 | 2021-01-27T20:48:46.000Z | 2021-01-27T20:48:46.000Z | # flake8: noqa
from contactnets.experiments.block3d.deep_learnable import DeepLearnable
from contactnets.experiments.block3d.sim import Block3DParams
from contactnets.experiments.block3d.structured_learnable import StructuredLearnable
| 39.333333 | 84 | 0.885593 | 25 | 236 | 8.28 | 0.56 | 0.217391 | 0.376812 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.067797 | 236 | 5 | 85 | 47.2 | 0.918182 | 0.050847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
54d7c2fffe71056f36805968ef57eb27510990d7 | 47,786 | py | Python | tests/databank/test_databank.py | swordapp/python-client-sword2 | 59db54c03e4498dd6b001ac4f3a4167aa2fb8987 | [
"Apache-2.0"
] | 14 | 2015-02-02T18:39:41.000Z | 2020-07-10T15:03:57.000Z | tests/databank/test_databank.py | swordapp/python-client-sword2 | 59db54c03e4498dd6b001ac4f3a4167aa2fb8987 | [
"Apache-2.0"
] | 11 | 2015-07-20T09:03:31.000Z | 2021-02-25T22:05:53.000Z | tests/databank/test_databank.py | swordapp/python-client-sword2 | 59db54c03e4498dd6b001ac4f3a4167aa2fb8987 | [
"Apache-2.0"
] | 14 | 2015-07-16T12:39:57.000Z | 2021-02-24T18:42:52.000Z | import uuid
from . import TestController
from sword2 import Connection, Entry, Error_Document, Atom_Sword_Statement, Ore_Sword_Statement
from lxml import etree
PACKAGE = "tests/databank/example.zip"
PACKAGE_MIME = "application/zip"
SSS_URL = "http://localhost:5000/swordv2/service-document"
SSS_UN = "admin"
SSS_PW = "admin"
SSS_OBO = "obo"
DC = "{http://purl.org/dc/terms/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
class TestConnection(TestController):
def test_01_get_service_document(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
# given that the client is fully functional, testing that the
# service document parses and is valid is sufficient. This, obviously,
# doesn't test the validation routine itself.
assert conn.sd != None
assert conn.sd.parsed == True
assert conn.sd.valid == True
assert len(conn.sd.workspaces) == 1
def test_02_get_service_document_unauthorised(self):
conn = Connection(SSS_URL, user_name="alsdkfjsdz", user_pass="ZAKJKLASJDF")
conn.get_service_document()
assert conn.sd is None
"""
def test_03_basic_create_resource_with_package(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
"""
"""
def test_04_advanced_create_resource_with_package(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = True,
suggested_identifier = str(uuid.uuid4()))
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
"""
"""
def test_05_basic_create_resource_with_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Foo", id="asidjasidj", dcterms_abstract="abstract", dcterms_title="my title")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
"""
"""
def test_06_advanced_create_resource_with_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Foo", id="asidjasidj", dcterms_abstract="abstract", dcterms_title="my title")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = True,
suggested_identifier = str(uuid.uuid4()))
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
"""
def test_07_basic_create_resource_with_entry(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href,
metadata_entry = e)
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
def test_08_advanced_create_resource_with_entry(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj",
dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
e.register_namespace("oxds", "http://databank.ox.ac.uk/terms/")
e.add_field("oxds_whatever", "whatever")
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
in_progress = True,
suggested_identifier = str(uuid.uuid4()))
assert receipt.code == 201
assert receipt.location != None
# these last two assertions are contingent on if we actually get a
# receipt back from the server (which we might not legitimately get)
assert receipt.dom is None or receipt.parsed == True
assert receipt.dom is None or receipt.valid == True
def test_09_basic_retrieve_deposit_receipt(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
# we're going to work with the location
assert receipt.location != None
new_receipt = conn.get_deposit_receipt(receipt.location)
assert new_receipt.code == 200
assert new_receipt.parsed == True
assert new_receipt.valid == True
def test_10_advanced_retrieve_deposit_receipt(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
suggested_id = str(uuid.uuid4())
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e,
in_progress = True,
suggested_identifier = suggested_id)
# we're going to work with the location
assert receipt.location != None
new_receipt = conn.get_deposit_receipt(receipt.location)
assert new_receipt.code == 200
assert new_receipt.parsed == True
assert new_receipt.valid == True
print(new_receipt.to_xml())
# Here are some more things we can know about the receipt
# 1 - the links will all contain the suggested identifier
# 2 - the links will all contain the name of the silo
# 3 - the packaging will contain DataBankBagIt
# 4 - the DC metadata will be reflected back at us
# 5 - the atom metadata will be populated in some way
for rel, links in new_receipt.links.items():
for link in links:
assert suggested_id in link['href']
assert col.title in link['href']
assert "http://dataflow.ox.ac.uk/package/DataBankBagIt" in new_receipt.packaging
# check the atom metadata
assert new_receipt.title == "An entry only deposit"
assert new_receipt.summary == "abstract"
# check the DC metadata
assert "An entry only deposit" in new_receipt.metadata["dcterms_title"]
assert "abstract" in new_receipt.metadata["dcterms_abstract"]
assert "http://whatever/" in new_receipt.metadata["dcterms_identifier"]
"""
def test_11_basic_retrieve_content_cont_iri(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# we're going to work with the cont_iri
assert receipt.cont_iri is not None
resource = conn.get_resource(content_iri=receipt.cont_iri)
assert resource.code == 200
assert resource.content is not None
"""
"""
def test_12_basic_retrieve_content_em_iri(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# we're going to work with the edit_media iri
assert receipt.edit_media is not None
resource = conn.get_resource(content_iri=receipt.edit_media)
assert resource.code == 200
assert resource.content is not None
"""
"""
def test_13_advanced_retrieve_content_em_iri(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
packaging = 'http://purl.org/net/sword/package/SimpleZip'
if receipt.packaging is not None and len(receipt.packaging) > 0:
packaging = receipt.packaging[0]
resource = conn.get_resource(content_iri=receipt.edit_media, packaging=packaging, on_behalf_of=SSS_OBO)
assert resource.code == 200
assert resource.content is not None
"""
"""
def test_14_error_retrieve_content_em_iri(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW,
error_response_raises_exceptions=False)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
error = 'http://purl.org/net/sword/package/IJustMadeThisUp'
response = conn.get_resource(content_iri=receipt.edit_media, packaging=error)
assert response.code == 406
assert isinstance(response, Error_Document)
assert response.error_href == "http://purl.org/net/sword/error/ErrorContent"
"""
"""
def test_15_retrieve_content_em_iri_as_feed(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# we're going to work with the edit_media_feed iri
assert receipt.edit_media_feed is not None
response = conn.get_resource(content_iri=receipt.edit_media_feed)
assert response.code == 200
assert response.content is not None
# the response should be an xml document, so let's see if we can parse
# it. This should give us an exception which will fail the test if not
dom = etree.fromstring(response.content)
"""
def test_16_basic_replace_file_content(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
assert new_receipt.code == 204
assert new_receipt.dom is None
def test_17_advanced_replace_file_content(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip',
metadata_relevant=True)
assert new_receipt.code == 204
assert new_receipt.dom is None
"""
def test_18_basic_replace_metadata(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
ne = Entry(title="A metadata update", id="asidjasidj", dcterms_abstract="new abstract", dcterms_identifier="http://elsewhere/")
new_receipt = conn.update(dr=receipt, metadata_entry=ne)
assert new_receipt.code == 204 or new_receipt.code == 200
if new_receipt.code == 204:
assert new_receipt.dom is None
if new_receipt.code == 200:
assert new_receipt.parsed == True
assert new_receipt.valid == True
"""
"""
def test_19_advanced_replace_metadata(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
ne = Entry(title="A metadata update", id="asidjasidj", dcterms_abstract="new abstract", dcterms_identifier="http://elsewhere/")
new_receipt = conn.update(dr=receipt, metadata_entry=ne, in_progress=True)
assert new_receipt.code == 204 or new_receipt.code == 200
if new_receipt.code == 204:
assert new_receipt.dom is None
if new_receipt.code == 200:
assert new_receipt.parsed == True
assert new_receipt.valid == True
"""
"""
def test_20_basic_replace_with_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
ne = Entry(title="A multipart update", id="asidjasidj", dcterms_abstract="new abstract", dcterms_identifier="http://elsewhere/")
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
metadata_entry = ne,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
assert new_receipt.code == 204 or new_receipt.code == 200
if new_receipt.code == 204:
assert new_receipt.dom is None
if new_receipt.code == 200:
assert new_receipt.parsed == True
assert new_receipt.valid == True
def test_21_advanced_replace_with_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now do the replace
ne = Entry(title="A multipart update", id="asidjasidj", dcterms_abstract="new abstract", dcterms_identifier="http://elsewhere/")
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
metadata_entry = ne,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip',
in_progress=True)
assert new_receipt.code == 204 or new_receipt.code == 200
if new_receipt.code == 204:
assert new_receipt.dom is None
if new_receipt.code == 200:
assert new_receipt.parsed == True
assert new_receipt.valid == True
def test_22_delete_content(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
# now delete the content but not the container
new_receipt = conn.delete_content_of_resource(dr=receipt)
assert new_receipt.code == 204
assert new_receipt.dom is None
def test_23_basic_add_content_to_resource_single_file(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
receipt = conn.get_deposit_receipt(receipt.location)
with open(PACKAGE) as pkg:
new_receipt = conn.add_file_to_resource(receipt.edit_media, pkg, "addition.zip", mimetype=PACKAGE_MIME)
assert new_receipt.code >= 200 and new_receipt.code < 400
assert new_receipt.location is not None
assert new_receipt.location != receipt.edit_media
def test_24_advanced_add_content_to_resource_single_file(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
receipt = conn.get_deposit_receipt(receipt.location)
with open(PACKAGE) as pkg:
new_receipt = conn.add_file_to_resource(receipt.edit_media, pkg, "addition.zip",
mimetype=PACKAGE_MIME,
metadata_relevant=True)
assert new_receipt.code >= 200 and new_receipt.code < 400
assert new_receipt.location is not None
assert new_receipt.location != receipt.edit_media
def test_25_basic_add_content_to_resource_package(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
receipt = conn.get_deposit_receipt(receipt.location)
with open(PACKAGE) as pkg:
new_receipt = conn.add_file_to_resource(receipt.edit_media, pkg, "addition.zip",
mimetype=PACKAGE_MIME,
packaging="http://purl.org/net/sword/package/SimpleZip")
assert new_receipt.code >= 200 and new_receipt.code < 400
assert new_receipt.location is not None
assert new_receipt.location == receipt.edit_media
def test_26_advanced_add_content_to_resource_package(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
receipt = conn.get_deposit_receipt(receipt.location)
with open(PACKAGE) as pkg:
new_receipt = conn.add_file_to_resource(receipt.edit_media, pkg, "addition.zip",
mimetype=PACKAGE_MIME,
packaging="http://purl.org/net/sword/package/SimpleZip",
metadata_relevant=True)
assert new_receipt.code >= 200 and new_receipt.code < 400
assert new_receipt.location is not None
assert new_receipt.location == receipt.edit_media
def test_27_basic_add_metadata(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
ne = Entry(title="Multipart deposit", id="asidjasidj", dcterms_identifier="http://another/",
dcterms_creator="Me!", dcterms_rights="CC0")
new_receipt = conn.append(dr=receipt, metadata_entry=ne)
assert new_receipt.code == 200
def test_28_advanced_add_metadata(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
ne = Entry(title="Multipart deposit", id="asidjasidj", dcterms_identifier="http://another/",
dcterms_creator="Me!", dcterms_rights="CC0")
new_receipt = conn.append(dr=receipt, metadata_entry=ne, in_progress=True)
assert new_receipt.code == 200
def test_29_basic_add_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
ne = Entry(title="Multipart deposit", id="asidjasidj", dcterms_identifier="http://another/",
dcterms_creator="Me!", dcterms_rights="CC0")
with open(PACKAGE) as pkg:
new_receipt = conn.append(dr=receipt,
metadata_entry=ne,
payload=pkg,
filename="addition.zip",
mimetype=PACKAGE_MIME,
packaging="http://purl.org/net/sword/package/SimpleZip")
assert new_receipt.code >= 200 and new_receipt.code < 400
def test_30_advanced_add_multipart(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
ne = Entry(title="Multipart deposit", id="asidjasidj", dcterms_identifier="http://another/",
dcterms_creator="Me!", dcterms_rights="CC0")
with open(PACKAGE) as pkg:
new_receipt = conn.append(dr=receipt,
metadata_entry=ne,
payload=pkg,
filename="addition.zip",
mimetype=PACKAGE_MIME,
packaging="http://purl.org/net/sword/package/SimpleZip",
in_progress=True,
metadata_relevant=True)
assert new_receipt.code >= 200 and new_receipt.code < 400
# FIXME: this test just does not work, for no discernible reason. The
# final assert of a 404 fails, and the debug output of the client says
# that the server responded with a 200. Nonetheless, the server logs show
# that it responded with a 404, which would suggest a caching issue in the
# client. I have so far been unable to figure out where, though, despite
# having tried turning off httplib2 caching and passing cache-control
# headers in as per the httplib2 documentation. help?
def test_31_delete_container(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO,
error_response_raises_exceptions=False)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
edit_iri = receipt.location
receipt = conn.get_deposit_receipt(edit_iri)
# delete the container
new_receipt = conn.delete_container(dr=receipt)
assert new_receipt.code == 204
assert new_receipt.dom is None
# the next check is that this 404s appropriately now
another_receipt = conn.get_deposit_receipt(edit_iri)
# FIXME: this is the broken assert
#assert another_receipt.code == 404
"""
"""
def test_32_get_atom_statement(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Multipart deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
edit_iri = receipt.location
receipt = conn.get_deposit_receipt(edit_iri)
assert receipt.atom_statement_iri is not None
# get the statement
statement = conn.get_atom_sword_statement(receipt.atom_statement_iri)
assert isinstance(statement, Atom_Sword_Statement)
"""
def test_33_get_ore_statement(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
# ensure that we have a receipt (the server may not give us one
# by default)
receipt = conn.get_deposit_receipt(receipt.location)
assert receipt.ore_statement_iri is not None
# get the statement
statement = conn.get_ore_sword_statement(receipt.ore_statement_iri)
assert isinstance(statement, Ore_Sword_Statement)
# some specific things that we can assert about the Statement
# 1 - it should have the original deposits listed
# 2 - it should have the aggregated resources listed
# 3 - it should have the correct state
# 4 - the dom should contain all the relevant metadata
# check the original deposits
od_uri = None
assert len(statement.original_deposits) == 1
for od in statement.original_deposits:
assert "update.zip" in od.uri
assert od.is_original_deposit
assert od.deposited_on is not None
# assert od.deposited_by == SSS_UN # FIXME: this may not work until we get auth sorted out
assert od.deposited_on_behalf_of is None
od_uri = od.uri
# check the aggregated resources
assert len(statement.resources) == 1
for ar in statement.resources:
# should be the same resource
assert od_uri == ar.uri
# check the states
assert len(statement.states) == 1
assert statement.states[0][0] == "http://databank.ox.ac.uk/state/ZipFileAdded"
print(etree.tostring(statement.dom, pretty_print=True))
# check the metadata
md_count = 0
for e in statement.dom.findall(RDF + "Description"):
for element in e.getchildren():
if element.tag == DC + "title":
assert element.text.strip() == "An entry only deposit"
md_count += 1
elif element.tag == DC + "abstract":
assert element.text.strip() == "abstract"
md_count += 1
elif element.tag == DC + "identifier":
resource = element.attrib.get(RDF + "resource", None)
if resource is not None: # because we know that there is going to be more than one identifier
assert element.attrib.get(RDF + "resource") == "http://whatever/"
md_count += 1
print("Metadata Count: " + str(md_count))
assert md_count == 3
def test_34_check_metadata_only_state(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
statement = conn.get_ore_sword_statement(receipt.ore_statement_iri)
assert len(statement.states) == 1
assert statement.states[0][0] == "http://databank.ox.ac.uk/state/EmptyContainer"
def test_35_check_new_zip_state(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip')
statement = conn.get_ore_sword_statement(receipt.ore_statement_iri)
assert len(statement.states) == 1
assert statement.states[0][0] == "http://databank.ox.ac.uk/state/ZipFileAdded"
def test_36_check_md5(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="An entry only deposit", id="asidjasidj", dcterms_abstract="abstract", dcterms_identifier="http://whatever/")
receipt = conn.create(col_iri = col.href, metadata_entry = e)
with open(PACKAGE) as pkg:
new_receipt = conn.update(dr = receipt,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="update.zip",
packaging='http://purl.org/net/sword/package/SimpleZip',
md5sum="123456789") # pass in a known md5 (even though it is wrong)
statement = conn.get_ore_sword_statement(receipt.ore_statement_iri)
# need to try and extract the md5 from the dom
count = 0
for element in statement.dom.findall("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}Description/{http://vocab.ox.ac.uk/dataset/schema#}hasMD5"):
count += 1
assert element.text.strip() == "123456789"
assert count == 1
# FIXME: when we do the full swordv2 implementation, we need to do a number of
# checks to ensure that metadata and content states are properly treated
"""
def test_34_complete_deposit(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW, on_behalf_of=SSS_OBO)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
e = Entry(title="Foo", id="asidjasidj", dcterms_abstract="abstract", dcterms_title="my title")
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
metadata_entry = e,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = True,
suggested_identifier = str(uuid.uuid4()))
# ensure that we have a receipt (the server may not give us one
# by default)
edit_iri = receipt.location
receipt = conn.get_deposit_receipt(edit_iri)
response = conn.complete_deposit(dr=receipt)
assert response.code == 200
def test_35_error_checksum_mismatch(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW,
error_response_raises_exceptions=False)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = True,
suggested_identifier = str(uuid.uuid4()),
md5sum="123456789")
assert receipt.code == 412
assert isinstance(receipt, Error_Document)
assert receipt.error_href == "http://purl.org/net/sword/error/ErrorChecksumMismatch"
def test_36_error_bad_request(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW,
error_response_raises_exceptions=False)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = "Invalid", # the API seems to allow this!
suggested_identifier = str(uuid.uuid4()))
assert receipt.code == 400
assert isinstance(receipt, Error_Document)
assert receipt.error_href == "http://purl.org/net/sword/error/ErrorBadRequest"
def test_37_error_target_owner_unknown(self):
conn = Connection(SSS_URL, user_name=SSS_UN, user_pass=SSS_PW,
error_response_raises_exceptions=False)
conn.get_service_document()
col = conn.sd.workspaces[0][1][0]
with open(PACKAGE) as pkg:
receipt = conn.create(col_iri = col.href,
payload=pkg,
mimetype=PACKAGE_MIME,
filename="example.zip",
packaging = 'http://purl.org/net/sword/package/SimpleZip',
in_progress = True,
suggested_identifier = str(uuid.uuid4()),
on_behalf_of="richard") # we expressly set the wrong obo on the request rather than the connection
assert receipt.code == 403
assert isinstance(receipt, Error_Document)
assert receipt.error_href == "http://purl.org/net/sword/error/TargetOwnerUnknown"
def test_38_error_mediation_not_allowed(self):
# this is a placeholder; it's not possible to reliably test for this
pass
def test_39_error_method_not_allowed(self):
# this is a placeholder; it's not possible to reliably test for this
pass
def test_40_error_max_upload_size_exceeded(self):
# this is a placeholder; it's not possible to reliably test for this
pass
"""
| 44.78538 | 151 | 0.583686 | 5,642 | 47,786 | 4.75771 | 0.07391 | 0.034646 | 0.028611 | 0.022427 | 0.829714 | 0.817643 | 0.812428 | 0.807399 | 0.802518 | 0.795291 | 0 | 0.01461 | 0.325346 | 47,786 | 1,066 | 152 | 44.827392 | 0.818015 | 0.040075 | 0 | 0.57971 | 0 | 0.009662 | 0.130203 | 0.002101 | 0 | 0 | 0 | 0.002814 | 0.26087 | 1 | 0.057971 | false | 0.057971 | 0.019324 | 0 | 0.082126 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
54ec696345f81105b1ebefad5e46a229d6bcb23f | 7,254 | py | Python | rdflib_sqlalchemy/tables.py | gjhiggins/rdflib-sqlalchemy | d4c057934cd2675083d3df943103bdffb20341d4 | [
"BSD-3-Clause"
] | 112 | 2015-02-21T15:56:34.000Z | 2022-02-22T12:10:26.000Z | rdflib_sqlalchemy/tables.py | gjhiggins/rdflib-sqlalchemy | d4c057934cd2675083d3df943103bdffb20341d4 | [
"BSD-3-Clause"
] | 64 | 2015-01-22T12:40:11.000Z | 2021-12-27T19:15:14.000Z | rdflib_sqlalchemy/tables.py | gjhiggins/rdflib-sqlalchemy | d4c057934cd2675083d3df943103bdffb20341d4 | [
"BSD-3-Clause"
] | 28 | 2015-06-22T08:06:58.000Z | 2022-02-16T11:17:49.000Z | from sqlalchemy import Column, Table, Index, types
from rdflib_sqlalchemy.types import TermType
MYSQL_MAX_INDEX_LENGTH = 200
TABLE_NAME_TEMPLATES = [
"{interned_id}_asserted_statements",
"{interned_id}_literal_statements",
"{interned_id}_namespace_binds",
"{interned_id}_quoted_statements",
"{interned_id}_type_statements",
]
def get_table_names(interned_id):
return [
table_name_template.format(interned_id=interned_id)
for table_name_template in TABLE_NAME_TEMPLATES
]
def create_asserted_statements_table(interned_id, metadata):
return Table(
"{interned_id}_asserted_statements".format(interned_id=interned_id),
metadata,
Column("id", types.Integer, nullable=False, primary_key=True),
Column("subject", TermType, nullable=False),
Column("predicate", TermType, nullable=False),
Column("object", TermType, nullable=False),
Column("context", TermType, nullable=False),
Column("termcomb", types.Integer, nullable=False, key="termComb"),
Index(
"{interned_id}_A_s_index".format(interned_id=interned_id),
"subject",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_A_p_index".format(interned_id=interned_id),
"predicate",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_A_o_index".format(interned_id=interned_id),
"object",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_A_c_index".format(interned_id=interned_id),
"context",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_A_termComb_index".format(interned_id=interned_id),
"termComb",
),
Index(
"{interned_id}_asserted_spoc_key".format(interned_id=interned_id),
"subject",
"predicate",
"object",
"context",
unique=True,
mysql_length=191,
),
)
def create_type_statements_table(interned_id, metadata):
return Table(
"{interned_id}_type_statements".format(interned_id=interned_id),
metadata,
Column("id", types.Integer, nullable=False, primary_key=True),
Column("member", TermType, nullable=False),
Column("klass", TermType, nullable=False),
Column("context", TermType, nullable=False),
Column("termcomb", types.Integer, nullable=False, key="termComb"),
Index(
"{interned_id}_member_index".format(interned_id=interned_id),
"member",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_klass_index".format(interned_id=interned_id),
"klass",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_c_index".format(interned_id=interned_id),
"context",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_T_termComb_index".format(interned_id=interned_id),
"termComb",
),
Index(
"{interned_id}_type_mkc_key".format(interned_id=interned_id),
"member",
"klass",
"context",
unique=True,
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
)
def create_literal_statements_table(interned_id, metadata):
return Table(
"{interned_id}_literal_statements".format(interned_id=interned_id),
metadata,
Column("id", types.Integer, nullable=False, primary_key=True),
Column("subject", TermType, nullable=False),
Column("predicate", TermType, nullable=False),
Column("object", TermType),
Column("context", TermType, nullable=False),
Column("termcomb", types.Integer, nullable=False, key="termComb"),
Column("objlanguage", types.String(255), key="objLanguage"),
Column("objdatatype", types.String(255), key="objDatatype"),
Index(
"{interned_id}_L_s_index".format(interned_id=interned_id),
"subject",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_L_p_index".format(interned_id=interned_id),
"predicate",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_L_c_index".format(interned_id=interned_id),
"context",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_L_termComb_index".format(interned_id=interned_id),
"termComb",
),
Index(
"{interned_id}_literal_spoc_key".format(interned_id=interned_id),
"subject",
"predicate",
"object",
"objLanguage",
"context",
unique=True,
mysql_length=153,
),
)
def create_quoted_statements_table(interned_id, metadata):
return Table(
"{interned_id}_quoted_statements".format(interned_id=interned_id),
metadata,
Column("id", types.Integer, nullable=False, primary_key=True),
Column("subject", TermType, nullable=False),
Column("predicate", TermType, nullable=False),
Column("object", TermType),
Column("context", TermType, nullable=False),
Column("termcomb", types.Integer, nullable=False, key="termComb"),
Column("objlanguage", types.String(255), key="objLanguage"),
Column("objdatatype", types.String(255), key="objDatatype"),
Index(
"{interned_id}_Q_s_index".format(interned_id=interned_id),
"subject",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_Q_p_index".format(interned_id=interned_id),
"predicate",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_Q_o_index".format(interned_id=interned_id),
"object",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_Q_c_index".format(interned_id=interned_id),
"context",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
),
Index(
"{interned_id}_Q_termComb_index".format(interned_id=interned_id),
"termComb",
),
Index(
"{interned_id}_quoted_spoc_key".format(interned_id=interned_id),
"subject",
"predicate",
"object",
"objLanguage",
"context",
unique=True,
mysql_length=153,
),
)
def create_namespace_binds_table(interned_id, metadata):
return Table(
"{interned_id}_namespace_binds".format(interned_id=interned_id),
metadata,
Column("prefix", types.String(20), unique=True, nullable=False, primary_key=True),
Column("uri", types.Text),
Index(
"{interned_id}_uri_index".format(interned_id=interned_id),
"uri",
mysql_length=MYSQL_MAX_INDEX_LENGTH,
)
)
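# A hedged usage sketch (not part of the original module): wiring all five
# tables for one interned store id into a single MetaData and emitting the
# DDL against an in-memory SQLite engine. The interned id "kb_1" and the
# engine URL are illustrative assumptions, not values the library mandates.
if __name__ == "__main__":
    from sqlalchemy import MetaData, create_engine
    metadata = MetaData()
    for build_table in (
        create_asserted_statements_table,
        create_type_statements_table,
        create_literal_statements_table,
        create_quoted_statements_table,
        create_namespace_binds_table,
    ):
        build_table("kb_1", metadata)
    metadata.create_all(create_engine("sqlite:///:memory:"))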
| 33.897196 | 90 | 0.600772 | 745 | 7,254 | 5.469799 | 0.091275 | 0.238037 | 0.113865 | 0.170798 | 0.853006 | 0.835583 | 0.779877 | 0.770061 | 0.759264 | 0.684172 | 0 | 0.005005 | 0.283843 | 7,254 | 213 | 91 | 34.056338 | 0.779403 | 0 | 0 | 0.690355 | 0 | 0 | 0.196719 | 0.122967 | 0 | 0 | 0 | 0 | 0.020305 | 1 | 0.030457 | false | 0 | 0.010152 | 0.030457 | 0.071066 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
07488cf9f4ad8eea2264543388a9c959c6aa8aeb | 12,523 | py | Python | v6.0.5/system_snmp/test_fortios_system_snmp_user.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 14 | 2018-09-25T20:35:25.000Z | 2021-07-14T04:30:54.000Z | v6.0.6/system_snmp/test_fortios_system_snmp_user.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 32 | 2018-10-09T04:13:42.000Z | 2020-05-11T07:20:28.000Z | v6.0.5/system_snmp/test_fortios_system_snmp_user.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | [
"Apache-2.0"
] | 11 | 2018-10-09T00:14:53.000Z | 2021-11-03T10:54:09.000Z | # Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
from mock import ANY
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
try:
from ansible.modules.network.fortios import fortios_system_snmp_user
except ImportError:
pytest.skip("Could not load required modules for testing", allow_module_level=True)
@pytest.fixture(autouse=True)
def connection_mock(mocker):
connection_class_mock = mocker.patch('ansible.modules.network.fortios.fortios_system_snmp_user.Connection')
return connection_class_mock
fos_instance = FortiOSHandler(connection_mock)
def test_system_snmp_user_creation(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_snmp_user': {
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
expected_data = {
'auth-proto': 'md5',
'auth-pwd': 'test_value_4',
'ha-direct': 'enable',
'name': 'default_name_6',
'priv-proto': 'aes',
'priv-pwd': 'test_value_8',
'queries': 'enable',
'query-port': '10',
'security-level': 'no-auth-no-priv',
'source-ip': '84.230.14.12',
'source-ipv6': 'test_value_13',
'status': 'enable',
'trap-lport': '15',
'trap-rport': '16',
'trap-status': 'enable'
}
set_method_mock.assert_called_with('system.snmp', 'user', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
def test_system_snmp_user_creation_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_snmp_user': {
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
expected_data = {
'auth-proto': 'md5',
'auth-pwd': 'test_value_4',
'ha-direct': 'enable',
'name': 'default_name_6',
'priv-proto': 'aes',
'priv-pwd': 'test_value_8',
'queries': 'enable',
'query-port': '10',
'security-level': 'no-auth-no-priv',
'source-ip': '84.230.14.12',
'source-ipv6': 'test_value_13',
'status': 'enable',
'trap-lport': '15',
'trap-rport': '16',
'trap-status': 'enable'
}
set_method_mock.assert_called_with('system.snmp', 'user', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
def test_system_snmp_user_removal(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
delete_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
delete_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.delete', return_value=delete_method_result)
input_data = {
'username': 'admin',
'state': 'absent',
'system_snmp_user': {
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
delete_method_mock.assert_called_with('system.snmp', 'user', mkey=ANY, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
def test_system_snmp_user_deletion_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
delete_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
delete_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.delete', return_value=delete_method_result)
input_data = {
'username': 'admin',
'state': 'absent',
'system_snmp_user': {
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
delete_method_mock.assert_called_with('system.snmp', 'user', mkey=ANY, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
def test_system_snmp_user_idempotent(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'DELETE', 'http_status': 404}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_snmp_user': {
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
expected_data = {
'auth-proto': 'md5',
'auth-pwd': 'test_value_4',
'ha-direct': 'enable',
'name': 'default_name_6',
'priv-proto': 'aes',
'priv-pwd': 'test_value_8',
'queries': 'enable',
'query-port': '10',
'security-level': 'no-auth-no-priv',
'source-ip': '84.230.14.12',
'source-ipv6': 'test_value_13',
'status': 'enable',
'trap-lport': '15',
'trap-rport': '16',
'trap-status': 'enable'
}
set_method_mock.assert_called_with('system.snmp', 'user', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 404
def test_system_snmp_user_filter_foreign_attributes(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_snmp_user': {
'random_attribute_not_valid': 'tag',
'auth_proto': 'md5',
'auth_pwd': 'test_value_4',
'ha_direct': 'enable',
'name': 'default_name_6',
'priv_proto': 'aes',
'priv_pwd': 'test_value_8',
'queries': 'enable',
'query_port': '10',
'security_level': 'no-auth-no-priv',
'source_ip': '84.230.14.12',
'source_ipv6': 'test_value_13',
'status': 'enable',
'trap_lport': '15',
'trap_rport': '16',
'trap_status': 'enable'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_snmp_user.fortios_system_snmp(input_data, fos_instance)
expected_data = {
'auth-proto': 'md5',
'auth-pwd': 'test_value_4',
'ha-direct': 'enable',
'name': 'default_name_6',
'priv-proto': 'aes',
'priv-pwd': 'test_value_8',
'queries': 'enable',
'query-port': '10',
'security-level': 'no-auth-no-priv',
'source-ip': '84.230.14.12',
'source-ipv6': 'test_value_13',
'status': 'enable',
'trap-lport': '15',
'trap-rport': '16',
'trap-status': 'enable'
}
set_method_mock.assert_called_with('system.snmp', 'user', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
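# A hedged sketch of the key translation the expectations above encode: the
# Ansible-facing keys use underscores while the FortiOS API payload uses
# hyphens. The helper below is a stand-in written for illustration, not
# necessarily the actual conversion function inside fortios_system_snmp_user.
def _underscore_to_hyphen_sketch(data):
    return {key.replace('_', '-'): value for key, value in data.items()}
def test_key_conversion_sketch():
    converted = _underscore_to_hyphen_sketch({'auth_proto': 'md5', 'trap_lport': '15'})
    assert converted == {'auth-proto': 'md5', 'trap-lport': '15'}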
| 36.832353 | 142 | 0.59243 | 1,434 | 12,523 | 4.880056 | 0.133891 | 0.045727 | 0.052015 | 0.046442 | 0.853387 | 0.84667 | 0.827808 | 0.827808 | 0.827808 | 0.827808 | 0 | 0.028506 | 0.26607 | 12,523 | 339 | 143 | 36.941003 | 0.732891 | 0.053022 | 0 | 0.862816 | 0 | 0 | 0.354893 | 0.073714 | 0 | 0 | 0 | 0 | 0.129964 | 1 | 0.025271 | false | 0 | 0.028881 | 0 | 0.057762 | 0.00361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0758bbc08b685546c94dd9d2a62abf4e74319e25 | 45,586 | py | Python | business_register/migrations/0068_auto_20210312_1130.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | business_register/migrations/0068_auto_20210312_1130.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | business_register/migrations/0068_auto_20210312_1130.py | OlexandrTopuzov/Data_converter | 0ac2319ccaae790af35ab2202724c65d83d32ecc | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2021-03-12 11:30
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('location_register', '0019_auto_20210305_1702'),
('data_ocean', '0023_auto_20210312_1130'),
('business_register', '0067_auto_20210305_1702'),
]
operations = [
migrations.AlterField(
model_name='assignee',
name='company',
field=models.ForeignKey(help_text='Company is the legal successor', on_delete=django.db.models.deletion.CASCADE, related_name='assignees', to='business_register.Company', verbose_name='є правонаступником'),
),
migrations.AlterField(
model_name='assignee',
name='edrpou',
field=models.CharField(help_text='EDRPOU number as string', max_length=11, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='assignee',
name='name',
field=models.CharField(help_text='Assignee name in Ukrainian', max_length=610, null=True, verbose_name='name'),
),
migrations.AlterField(
model_name='bancruptcyreadjustment',
name='company',
field=models.ForeignKey(help_text='Bankruptcy readjustment', on_delete=django.db.models.deletion.CASCADE, related_name='bancruptcy_readjustment', to='business_register.Company'),
),
migrations.AlterField(
model_name='bancruptcyreadjustment',
name='head_name',
field=models.CharField(help_text='Head name', max_length=515, null=True),
),
migrations.AlterField(
model_name='bancruptcyreadjustment',
name='op_date',
field=models.DateField(help_text='Date of bankruptcy readjustment as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='bancruptcyreadjustment',
name='reason',
field=models.TextField(help_text='Reason of bankruptcy', null=True, verbose_name='reason'),
),
migrations.AlterField(
model_name='bancruptcyreadjustment',
name='sbj_state',
field=models.CharField(help_text='Subject state', max_length=345, null=True),
),
migrations.AlterField(
model_name='bylaw',
name='name',
field=models.CharField(help_text='Company name in Ukrainian', max_length=320, null=True, unique=True, verbose_name='name'),
),
migrations.AlterField(
model_name='company',
name='address',
field=models.CharField(help_text='Registration address in Ukrainian', max_length=1000, null=True, verbose_name='address'),
),
migrations.AlterField(
model_name='company',
name='antac_id',
field=models.PositiveIntegerField(blank=True, db_index=True, default=None, help_text='ID from ANTACs DB', null=True, unique=True, verbose_name='id from ANTACs DB'),
),
migrations.AlterField(
model_name='company',
name='authority',
field=models.ForeignKey(help_text='Authorized state agency which register the company', null=True, on_delete=django.db.models.deletion.CASCADE, to='data_ocean.Authority', verbose_name='registration authority'),
),
migrations.AlterField(
model_name='company',
name='authorized_capital',
field=models.FloatField(help_text='Authorized capital as number', null=True, verbose_name='share capital'),
),
migrations.AlterField(
model_name='company',
name='boss',
field=models.CharField(blank=True, default='', help_text='CEO of the company', max_length=100, null=True, verbose_name='CEO'),
),
migrations.AlterField(
model_name='company',
name='bylaw',
field=models.ForeignKey(help_text='By law', null=True, on_delete=django.db.models.deletion.CASCADE, to='business_register.Bylaw', verbose_name='charter'),
),
migrations.AlterField(
model_name='company',
name='code',
field=models.CharField(db_index=True, help_text='Our code', max_length=510, verbose_name='our code'),
),
migrations.AlterField(
model_name='company',
name='company_type',
field=models.ForeignKey(help_text='Type of the company', null=True, on_delete=django.db.models.deletion.CASCADE, to='business_register.CompanyType', verbose_name='company type'),
),
migrations.AlterField(
model_name='company',
name='contact_info',
field=models.CharField(help_text='Info about contacts', max_length=310, null=True, verbose_name='contacts'),
),
migrations.AlterField(
model_name='company',
name='country',
field=models.ForeignKey(help_text='Country of origin', max_length=60, null=True, on_delete=django.db.models.deletion.CASCADE, to='location_register.Country', verbose_name='country'),
),
migrations.AlterField(
model_name='company',
name='edrpou',
field=models.CharField(db_index=True, help_text='EDRPOU number as string', max_length=260, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='company',
name='from_antac_only',
field=models.BooleanField(help_text='If this field has "true" - Data provided by the Anti-Corruption Action Center.', null=True),
),
migrations.AlterField(
model_name='company',
name='name',
field=models.CharField(help_text='Company name in Ukrainian', max_length=500, null=True, verbose_name='name'),
),
migrations.AlterField(
model_name='company',
name='parent',
field=models.ForeignKey(help_text='Company that has a controlling interest in the company', null=True, on_delete=django.db.models.deletion.CASCADE, to='business_register.Company', verbose_name='parent company'),
),
migrations.AlterField(
model_name='company',
name='registration_date',
field=models.DateField(help_text='Registration date as string in YYYY-MM-DD format', null=True, verbose_name='registration date'),
),
migrations.AlterField(
model_name='company',
name='registration_info',
field=models.CharField(help_text='Registration info of the company', max_length=450, null=True, verbose_name='registration info'),
),
migrations.AlterField(
model_name='company',
name='short_name',
field=models.CharField(help_text='Short name of the company in Ukrainian', max_length=500, null=True, verbose_name='short name'),
),
migrations.AlterField(
model_name='company',
name='source',
field=models.CharField(blank=True, choices=[('ukr', 'The United State Register of Legal Entities, Individual Entrepreneurs and Public Organizations of Ukraine'), ('gb', 'Company House (UK companies` register)'), ('antac', 'ANTAC')], db_index=True, default=None, help_text='Source', max_length=5, null=True, verbose_name='source'),
),
migrations.AlterField(
model_name='company',
name='status',
field=models.ForeignKey(help_text='Company legal status', null=True, on_delete=django.db.models.deletion.CASCADE, to='data_ocean.Status', verbose_name='status'),
),
migrations.AlterField(
model_name='companydetail',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='company_detail', to='business_register.Company'),
),
migrations.AlterField(
model_name='companydetail',
name='executive_power',
field=models.CharField(help_text='Executive power of the company', max_length=390, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='founding_document_number',
field=models.CharField(help_text='Founding document number as string', max_length=375, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='managing_paper',
field=models.CharField(help_text='Managing paper of the company', max_length=360, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='superior_management',
field=models.CharField(help_text='Superior management of the company', max_length=620, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='terminated_info',
field=models.CharField(help_text='Info about termination', max_length=600, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='termination_cancel_info',
field=models.CharField(help_text='Info about termination cancellation', max_length=570, null=True),
),
migrations.AlterField(
model_name='companydetail',
name='vp_dates',
field=models.TextField(help_text='Array of dates as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='category',
field=models.CharField(blank=True, choices=[('bank_customer', 'Bank client'), ('owner', 'Owner'), ('by_position', 'By position'), ('manager', 'Manager'), ('other', 'Other')], default=None, help_text='Type of connection between the person and this company Can be: bank_customer, owner, manager, by_position, other.', max_length=15, null=True, verbose_name='connection`s category'),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='company',
field=models.ForeignKey(help_text='The company associated with this person.', on_delete=django.db.models.deletion.CASCADE, related_name='relationships_with_peps', to='business_register.Company', verbose_name='associated with PEP company'),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='confirmation_date',
field=models.CharField(help_text='Date of confirmation of connection in the "Anti-Corruption Action Center" database.', max_length=12, null=True, verbose_name='connection`s confirmation date'),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='end_date',
field=models.CharField(help_text='Date of termination of connection between the person and this company', max_length=12, null=True, verbose_name='connection`s end date'),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='is_state_company',
field=models.BooleanField(help_text='Boolean type. If its true - the company is state-owned,if its false - the company is private.', null=True),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='relationship_type',
field=models.CharField(help_text='Type of connection between the person and this company', max_length=550, null=True, verbose_name='connection`s type'),
),
migrations.AlterField(
model_name='companylinkwithpep',
name='start_date',
field=models.CharField(help_text="Date of the beginning of the person's connection with the company.", max_length=12, null=True, verbose_name='connection`s start date'),
),
migrations.AlterField(
model_name='companytokved',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='kveds', to='business_register.Company'),
),
migrations.AlterField(
model_name='companytokved',
name='kved',
field=models.ForeignKey(help_text='NACE as string', on_delete=django.db.models.deletion.CASCADE, to='business_register.Kved', verbose_name='NACE'),
),
migrations.AlterField(
model_name='companytokved',
name='primary_kved',
field=models.BooleanField(default=False, help_text='Primary NACE as string', verbose_name='declared as primary'),
),
migrations.AlterField(
model_name='companytopredecessor',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='predecessors', to='business_register.Company'),
),
migrations.AlterField(
model_name='companytopredecessor',
name='predecessor',
field=models.ForeignKey(help_text='Predecessor name in Ukrainian', on_delete=django.db.models.deletion.CASCADE, to='business_register.Predecessor'),
),
migrations.AlterField(
model_name='companytype',
name='name',
field=models.CharField(help_text='Company name in Ukrainian', max_length=270, null=True, unique=True, verbose_name='name'),
),
migrations.AlterField(
model_name='companytype',
name='name_eng',
field=models.CharField(help_text='Company name in Company House (UK companies` register)', max_length=270, null=True, unique=True, verbose_name='name in Company House (UK companies` register)'),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='authority',
field=models.ForeignKey(help_text='Authorized state agency which register the company', on_delete=django.db.models.deletion.CASCADE, to='data_ocean.Authority', verbose_name='registration authority'),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='exchange_data', to='business_register.Company'),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='end_date',
field=models.DateField(help_text='End date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='end_number',
field=models.CharField(help_text='End number', max_length=555, null=True),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='start_date',
field=models.DateField(help_text='Start date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='start_number',
field=models.CharField(help_text='Start number', max_length=555, null=True),
),
migrations.AlterField(
model_name='exchangedatacompany',
name='taxpayer_type',
field=models.ForeignKey(help_text='Taxpayer type of the company', null=True, on_delete=django.db.models.deletion.CASCADE, to='data_ocean.TaxpayerType'),
),
migrations.AlterField(
model_name='founder',
name='address',
field=models.CharField(blank=True, default='', help_text='Founder address in Ukrainian', max_length=2015, null=True, verbose_name='address'),
),
migrations.AlterField(
model_name='founder',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='founders', to='business_register.Company', verbose_name='owner'),
),
migrations.AlterField(
model_name='founder',
name='country',
field=models.CharField(blank=True, default='', help_text='Country of origin', max_length=100, null=True, verbose_name='country'),
),
migrations.AlterField(
model_name='founder',
name='edrpou',
field=models.CharField(blank=True, db_index=True, default='', help_text='EDRPOU number as string', max_length=9, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='founder',
name='equity',
field=models.FloatField(blank=True, help_text='Equity', null=True, verbose_name='equity'),
),
migrations.AlterField(
model_name='founder',
name='info',
field=models.CharField(help_text='Info', max_length=2015, verbose_name='info'),
),
migrations.AlterField(
model_name='founder',
name='info_additional',
field=models.CharField(help_text='Additional info', max_length=2015, null=True, verbose_name='additional info'),
),
migrations.AlterField(
model_name='founder',
name='info_beneficiary',
field=models.CharField(help_text='Beneficiary Info', max_length=2015, null=True, verbose_name='beneficiary info'),
),
migrations.AlterField(
model_name='founder',
name='is_beneficiary',
field=models.BooleanField(blank=True, default=False, help_text='Is beneficiary of the company', verbose_name='is beneficiary'),
),
migrations.AlterField(
model_name='founder',
name='is_founder',
field=models.BooleanField(blank=True, default=False, help_text='Is founder of the company', verbose_name='is owner'),
),
migrations.AlterField(
model_name='founder',
name='name',
field=models.TextField(db_index=True, help_text='Founder name in Ukrainian', verbose_name='name or full name'),
),
migrations.AlterField(
model_name='historicalassignee',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company is the legal successor', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company', verbose_name='є правонаступником'),
),
migrations.AlterField(
model_name='historicalassignee',
name='edrpou',
field=models.CharField(help_text='EDRPOU number as string', max_length=11, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='historicalassignee',
name='name',
field=models.CharField(help_text='Assignee name in Ukrainian', max_length=610, null=True, verbose_name='name'),
),
migrations.AlterField(
model_name='historicalbancruptcyreadjustment',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Bankruptcy readjustment', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalbancruptcyreadjustment',
name='head_name',
field=models.CharField(help_text='Head name', max_length=515, null=True),
),
migrations.AlterField(
model_name='historicalbancruptcyreadjustment',
name='op_date',
field=models.DateField(help_text='Date of bankruptcy readjustment as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalbancruptcyreadjustment',
name='reason',
field=models.TextField(help_text='Reason of bankruptcy', null=True, verbose_name='reason'),
),
migrations.AlterField(
model_name='historicalbancruptcyreadjustment',
name='sbj_state',
field=models.CharField(help_text='Subject state', max_length=345, null=True),
),
migrations.AlterField(
model_name='historicalcompany',
name='address',
field=models.CharField(help_text='Registration address in Ukrainian', max_length=1000, null=True, verbose_name='address'),
),
migrations.AlterField(
model_name='historicalcompany',
name='antac_id',
field=models.PositiveIntegerField(blank=True, db_index=True, default=None, help_text='ID from ANTACs DB', null=True, verbose_name='id from ANTACs DB'),
),
migrations.AlterField(
model_name='historicalcompany',
name='authority',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Authorized state agency which register the company', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='data_ocean.Authority', verbose_name='registration authority'),
),
migrations.AlterField(
model_name='historicalcompany',
name='authorized_capital',
field=models.FloatField(help_text='Authorized capital as number', null=True, verbose_name='share capital'),
),
migrations.AlterField(
model_name='historicalcompany',
name='boss',
field=models.CharField(blank=True, default='', help_text='CEO of the company', max_length=100, null=True, verbose_name='CEO'),
),
migrations.AlterField(
model_name='historicalcompany',
name='bylaw',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='By law', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Bylaw', verbose_name='charter'),
),
migrations.AlterField(
model_name='historicalcompany',
name='code',
field=models.CharField(db_index=True, help_text='Our code', max_length=510, verbose_name='our code'),
),
migrations.AlterField(
model_name='historicalcompany',
name='company_type',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Type of the company', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.CompanyType', verbose_name='company type'),
),
migrations.AlterField(
model_name='historicalcompany',
name='contact_info',
field=models.CharField(help_text='Info about contacts', max_length=310, null=True, verbose_name='contacts'),
),
migrations.AlterField(
model_name='historicalcompany',
name='country',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Country of origin', max_length=60, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='location_register.Country', verbose_name='country'),
),
migrations.AlterField(
model_name='historicalcompany',
name='edrpou',
field=models.CharField(db_index=True, help_text='EDRPOU number as string', max_length=260, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='historicalcompany',
name='from_antac_only',
field=models.BooleanField(help_text='If this field has "true" - Data provided by the Anti-Corruption Action Center.', null=True),
),
migrations.AlterField(
model_name='historicalcompany',
name='name',
field=models.CharField(help_text='Company name in Ukrainian', max_length=500, null=True, verbose_name='name'),
),
migrations.AlterField(
model_name='historicalcompany',
name='parent',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company that has a controlling interest in the company', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company', verbose_name='parent company'),
),
migrations.AlterField(
model_name='historicalcompany',
name='registration_date',
field=models.DateField(help_text='Registration date as string in YYYY-MM-DD format', null=True, verbose_name='registration date'),
),
migrations.AlterField(
model_name='historicalcompany',
name='registration_info',
field=models.CharField(help_text='Registration info of the company', max_length=450, null=True, verbose_name='registration info'),
),
migrations.AlterField(
model_name='historicalcompany',
name='short_name',
field=models.CharField(help_text='Short name of the company in Ukrainian', max_length=500, null=True, verbose_name='short name'),
),
migrations.AlterField(
model_name='historicalcompany',
name='source',
field=models.CharField(blank=True, choices=[('ukr', 'The United State Register of Legal Entities, Individual Entrepreneurs and Public Organizations of Ukraine'), ('gb', 'Company House (UK companies` register)'), ('antac', 'ANTAC')], db_index=True, default=None, help_text='Source', max_length=5, null=True, verbose_name='source'),
),
migrations.AlterField(
model_name='historicalcompany',
name='status',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company legal status', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='data_ocean.Status', verbose_name='status'),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='executive_power',
field=models.CharField(help_text='Executive power of the company', max_length=390, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='founding_document_number',
field=models.CharField(help_text='Founding document number as string', max_length=375, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='managing_paper',
field=models.CharField(help_text='Managing paper of the company', max_length=360, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='superior_management',
field=models.CharField(help_text='Superior management of the company', max_length=620, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='terminated_info',
field=models.CharField(help_text='Info about termination', max_length=600, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='termination_cancel_info',
field=models.CharField(help_text='Info about termination cancellation', max_length=570, null=True),
),
migrations.AlterField(
model_name='historicalcompanydetail',
name='vp_dates',
field=models.TextField(help_text='Array of dates as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalcompanytokved',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalcompanytokved',
name='kved',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='NACE as string', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Kved', verbose_name='NACE'),
),
migrations.AlterField(
model_name='historicalcompanytokved',
name='primary_kved',
field=models.BooleanField(default=False, help_text='Primary NACE as string', verbose_name='declared as primary'),
),
migrations.AlterField(
model_name='historicalcompanytopredecessor',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalcompanytopredecessor',
name='predecessor',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Predecessor name in Ukrainian', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Predecessor'),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='authority',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Authorized state agency which register the company', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='data_ocean.Authority', verbose_name='registration authority'),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='end_date',
field=models.DateField(help_text='End date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='end_number',
field=models.CharField(help_text='End number', max_length=555, null=True),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='start_date',
field=models.DateField(help_text='Start date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='start_number',
field=models.CharField(help_text='Start number', max_length=555, null=True),
),
migrations.AlterField(
model_name='historicalexchangedatacompany',
name='taxpayer_type',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Taxpayer type of the company', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='data_ocean.TaxpayerType'),
),
migrations.AlterField(
model_name='historicalfounder',
name='address',
field=models.CharField(blank=True, default='', help_text='Founder address in Ukrainian', max_length=2015, null=True, verbose_name='address'),
),
migrations.AlterField(
model_name='historicalfounder',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company', verbose_name='owner'),
),
migrations.AlterField(
model_name='historicalfounder',
name='country',
field=models.CharField(blank=True, default='', help_text='Country of origin', max_length=100, null=True, verbose_name='country'),
),
migrations.AlterField(
model_name='historicalfounder',
name='edrpou',
field=models.CharField(blank=True, db_index=True, default='', help_text='EDRPOU number as string', max_length=9, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='historicalfounder',
name='equity',
field=models.FloatField(blank=True, help_text='Equity', null=True, verbose_name='equity'),
),
migrations.AlterField(
model_name='historicalfounder',
name='info',
field=models.CharField(help_text='Info', max_length=2015, verbose_name='info'),
),
migrations.AlterField(
model_name='historicalfounder',
name='info_additional',
field=models.CharField(help_text='Additional info', max_length=2015, null=True, verbose_name='additional info'),
),
migrations.AlterField(
model_name='historicalfounder',
name='info_beneficiary',
field=models.CharField(help_text='Beneficiary Info', max_length=2015, null=True, verbose_name='beneficiary info'),
),
migrations.AlterField(
model_name='historicalfounder',
name='is_beneficiary',
field=models.BooleanField(blank=True, default=False, help_text='Is beneficiary of the company', verbose_name='is beneficiary'),
),
migrations.AlterField(
model_name='historicalfounder',
name='is_founder',
field=models.BooleanField(blank=True, default=False, help_text='Is founder of the company', verbose_name='is owner'),
),
migrations.AlterField(
model_name='historicalfounder',
name='name',
field=models.TextField(db_index=True, help_text='Founder name in Ukrainian', verbose_name='name or full name'),
),
migrations.AlterField(
model_name='historicalpep',
name='criminal_proceedings',
            field=models.TextField(help_text='Known criminal proceedings against the person. If it is null, the person has no criminal proceedings against him.', null=True, verbose_name='known criminal proceedings against the person'),
),
migrations.AlterField(
model_name='historicalpep',
name='reason_of_termination',
field=models.CharField(blank=True, choices=[('died', 'Is dead'), ('resigned', 'Resigned or term ended'), ('linked pep died', 'Associated PEP is dead'), ('linked pep resigned', 'Associated person is no more PEP'), ('legislation changed', 'Legislation was changed'), ('company status changed', 'Company is no more state')], help_text='PEP status reason of termination. Can be "Is dead", "Resigned or term ended", "Associated PEP is dead", "Legislation was changed", "Company is no more state" or null.', max_length=125, null=True, verbose_name='reason of termination'),
),
migrations.AlterField(
model_name='historicalpep',
name='termination_date',
field=models.CharField(help_text='PEP status termination date in YYYY-MM-DD format.', max_length=10, null=True, verbose_name='PEP status termination date '),
),
migrations.AlterField(
model_name='historicalsigner',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalsigner',
name='name',
field=models.CharField(help_text='Signer name in Ukrainian', max_length=390, null=True, verbose_name='full name'),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='company',
field=models.ForeignKey(blank=True, db_constraint=False, help_text='Company name', null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='business_register.Company'),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='creditor_reg_end_date',
field=models.DateField(help_text='Creditor registration end date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='op_date',
field=models.DateField(help_text='Date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='reason',
field=models.TextField(help_text='Reason of termination', null=True, verbose_name='reason'),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='sbj_state',
field=models.CharField(help_text='State of the company', max_length=530, null=True),
),
migrations.AlterField(
model_name='historicalterminationstarted',
name='signer_name',
field=models.CharField(help_text='Signer name in Ukrainian', max_length=480, null=True),
),
migrations.AlterField(
model_name='kved',
name='code',
field=models.CharField(db_index=True, help_text='Code in the classification of economic activities.', max_length=10, verbose_name='code'),
),
migrations.AlterField(
model_name='kved',
name='division',
field=models.ForeignKey(help_text='Division title in the classification of economic activities.', on_delete=django.db.models.deletion.CASCADE, to='business_register.KvedDivision', verbose_name='division'),
),
migrations.AlterField(
model_name='kved',
name='group',
field=models.ForeignKey(help_text='Group title in the classification of economic activities.', on_delete=django.db.models.deletion.CASCADE, to='business_register.KvedGroup', verbose_name='group'),
),
migrations.AlterField(
model_name='kved',
name='name',
field=models.CharField(help_text='Name of the type of economic activity.', max_length=500, verbose_name='name'),
),
migrations.AlterField(
model_name='kved',
name='section',
field=models.ForeignKey(help_text='Section title in the classification of economic activities.', on_delete=django.db.models.deletion.CASCADE, to='business_register.KvedSection', verbose_name='section'),
),
migrations.AlterField(
model_name='pep',
name='criminal_proceedings',
            field=models.TextField(help_text='Known criminal proceedings against the person. If it is null, the person has no criminal proceedings against him.', null=True, verbose_name='known criminal proceedings against the person'),
),
migrations.AlterField(
model_name='pep',
name='reason_of_termination',
field=models.CharField(blank=True, choices=[('died', 'Is dead'), ('resigned', 'Resigned or term ended'), ('linked pep died', 'Associated PEP is dead'), ('linked pep resigned', 'Associated person is no more PEP'), ('legislation changed', 'Legislation was changed'), ('company status changed', 'Company is no more state')], help_text='PEP status reason of termination. Can be "Is dead", "Resigned or term ended", "Associated PEP is dead", "Legislation was changed", "Company is no more state" or null.', max_length=125, null=True, verbose_name='reason of termination'),
),
migrations.AlterField(
model_name='pep',
name='termination_date',
field=models.CharField(help_text='PEP status termination date in YYYY-MM-DD format.', max_length=10, null=True, verbose_name='PEP status termination date '),
),
migrations.AlterField(
model_name='predecessor',
name='edrpou',
field=models.CharField(help_text='EDRPOU number as string', max_length=405, null=True, verbose_name='number'),
),
migrations.AlterField(
model_name='predecessor',
name='name',
field=models.CharField(help_text='Predecessor name in Ukrainian', max_length=500, null=True, verbose_name='name'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='category',
field=models.CharField(blank=True, choices=[('family', 'Family'), ('business', 'Business'), ('personal', 'Personal')], help_text='The category of the relationship with the related person. Can be: family, business, personal.', max_length=20, null=True, verbose_name='connection`s category'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='confirmation_date',
field=models.CharField(help_text='Date of confirmation of connection in the "Anti-Corruption Action Center" database.', max_length=12, null=True, verbose_name='connection`s confirmation date'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='end_date',
field=models.CharField(help_text='The date the relationship ends.', max_length=12, null=True, verbose_name='connection`s end date'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='from_person',
field=models.ForeignKey(help_text='From which person the connection is established.', on_delete=django.db.models.deletion.CASCADE, related_name='from_person_links', to='business_register.Pep', verbose_name='associated person'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='from_person_relationship_type',
field=models.CharField(help_text='The type of relationship with a related person.', max_length=90, null=True, verbose_name='connection`s type'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='start_date',
field=models.CharField(help_text='Date of the beginning of the relationship.', max_length=12, null=True, verbose_name='connection`s start date'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='to_person',
field=models.ForeignKey(help_text='With what person the connection is established.', on_delete=django.db.models.deletion.CASCADE, related_name='to_person_links', to='business_register.Pep', verbose_name='another associated person'),
),
migrations.AlterField(
model_name='relatedpersonslink',
name='to_person_relationship_type',
field=models.CharField(help_text='The type of relationship with a related person.', max_length=90, null=True, verbose_name='another person`s connection`s type'),
),
migrations.AlterField(
model_name='signer',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='signers', to='business_register.Company'),
),
migrations.AlterField(
model_name='signer',
name='name',
field=models.CharField(help_text='Signer name in Ukrainian', max_length=390, null=True, verbose_name='full name'),
),
migrations.AlterField(
model_name='terminationstarted',
name='company',
field=models.ForeignKey(help_text='Company name', on_delete=django.db.models.deletion.CASCADE, related_name='termination_started', to='business_register.Company'),
),
migrations.AlterField(
model_name='terminationstarted',
name='creditor_reg_end_date',
field=models.DateField(help_text='Creditor registration end date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='terminationstarted',
name='op_date',
field=models.DateField(help_text='Date as string in YYYY-MM-DD format', null=True),
),
migrations.AlterField(
model_name='terminationstarted',
name='reason',
field=models.TextField(help_text='Reason of termination', null=True, verbose_name='reason'),
),
migrations.AlterField(
model_name='terminationstarted',
name='sbj_state',
field=models.CharField(help_text='State of the company', max_length=530, null=True),
),
migrations.AlterField(
model_name='terminationstarted',
name='signer_name',
field=models.CharField(help_text='Signer name in Ukrainian', max_length=480, null=True),
),
]
| 54.790865 | 579 | 0.644562 | 4,888 | 45,586 | 5.845131 | 0.062602 | 0.114102 | 0.142627 | 0.165447 | 0.959259 | 0.947954 | 0.876623 | 0.851178 | 0.794302 | 0.787372 | 0 | 0.008633 | 0.237705 | 45,586 | 831 | 580 | 54.856799 | 0.813554 | 0.000987 | 0 | 0.907879 | 1 | 0.007273 | 0.290718 | 0.049584 | 0.02303 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002424 | 0 | 0.006061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
4adb7534efca0fe42e7cd8dc8b1fec9e5393bce1 | 127 | py | Python | python-3/beginner/1096.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | python-3/beginner/1096.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | python-3/beginner/1096.py | MisaelAugusto/uri | 22bee72edf44f939d7a290383336b4d061faecbb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
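# Solution for URI Online Judge problem 1096 (beginner set): for each odd I
# from 1 to 9, print the pairs (I, J) with J counting down from 7 to 5.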
I = 1
while (I <= 9):
print("I=%d J=7" % I)
print("I=%d J=6" % I)
print("I=%d J=5" % I)
I += 2 | 15.875 | 23 | 0.401575 | 28 | 127 | 1.821429 | 0.5 | 0.352941 | 0.411765 | 0.470588 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076087 | 0.275591 | 127 | 8 | 24 | 15.875 | 0.478261 | 0.165354 | 0 | 0 | 0 | 0 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
ab2b7963fab585a1b3bcf08262eee3819d7e55f0 | 5,407 | py | Python | django/Avinash/College_Management_System/CMS_project/CMS_app/forms.py | profMsaif/web_applications_2022 | 849cfeb396b82551e2553028d03fe9693773fc49 | [
"MIT"
] | null | null | null | django/Avinash/College_Management_System/CMS_project/CMS_app/forms.py | profMsaif/web_applications_2022 | 849cfeb396b82551e2553028d03fe9693773fc49 | [
"MIT"
] | null | null | null | django/Avinash/College_Management_System/CMS_project/CMS_app/forms.py | profMsaif/web_applications_2022 | 849cfeb396b82551e2553028d03fe9693773fc49 | [
"MIT"
] | 4 | 2022-03-12T10:17:00.000Z | 2022-03-26T08:40:43.000Z | from django import forms
from .models import Courses, SessionYearModel
class DateInput(forms.DateInput):
input_type = "date"
class AddStudentForm(forms.Form):
email = forms.EmailField(label="Email",
max_length=50,
widget=forms.EmailInput(attrs={"class": "form-control"}))
password = forms.CharField(label="Password",
max_length=50,
widget=forms.PasswordInput(attrs={"class": "form-control"}))
first_name = forms.CharField(label="First Name",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
last_name = forms.CharField(label="Last Name",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
username = forms.CharField(label="Username",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
address = forms.CharField(label="Address",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
# For Displaying Courses
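    # Note: these choice lists are built from the database when the module is first
    # imported, so newly added Courses/SessionYears only appear after a reload.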
try:
courses = Courses.objects.all()
course_list = []
for course in courses:
single_course = (course.id, course.course_name)
course_list.append(single_course)
    except Exception:
course_list = []
# For Displaying Session Years
try:
session_years = SessionYearModel.objects.all()
session_year_list = []
for session_year in session_years:
single_session_year = (session_year.id, str(
session_year.session_start_year)+" to "+str(session_year.session_end_year))
session_year_list.append(single_session_year)
    except Exception:
session_year_list = []
gender_list = (
('Male', 'Male'),
('Female', 'Female')
)
course_id = forms.ChoiceField(label="Course",
choices=course_list,
widget=forms.Select(attrs={"class": "form-control"}))
gender = forms.ChoiceField(label="Gender",
choices=gender_list,
widget=forms.Select(attrs={"class": "form-control"}))
session_year_id = forms.ChoiceField(label="Session Year",
choices=session_year_list,
widget=forms.Select(attrs={"class": "form-control"}))
profile_pic = forms.FileField(label="Profile Pic",
required=False,
widget=forms.FileInput(attrs={"class": "form-control"}))
class EditStudentForm(forms.Form):
email = forms.EmailField(label="Email",
max_length=50,
widget=forms.EmailInput(attrs={"class": "form-control"}))
first_name = forms.CharField(label="First Name",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
last_name = forms.CharField(label="Last Name",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
username = forms.CharField(label="Username",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
address = forms.CharField(label="Address",
max_length=50,
widget=forms.TextInput(attrs={"class": "form-control"}))
# For Displaying Courses
try:
courses = Courses.objects.all()
course_list = []
for course in courses:
single_course = (course.id, course.course_name)
course_list.append(single_course)
    except Exception:
course_list = []
# For Displaying Session Years
try:
session_years = SessionYearModel.objects.all()
session_year_list = []
for session_year in session_years:
single_session_year = (session_year.id, str(
session_year.session_start_year)+" to "+str(session_year.session_end_year))
session_year_list.append(single_session_year)
    except Exception:
session_year_list = []
gender_list = (
('Male', 'Male'),
('Female', 'Female')
)
course_id = forms.ChoiceField(label="Course",
choices=course_list,
widget=forms.Select(attrs={"class": "form-control"}))
gender = forms.ChoiceField(label="Gender",
choices=gender_list,
widget=forms.Select(attrs={"class": "form-control"}))
session_year_id = forms.ChoiceField(label="Session Year",
choices=session_year_list,
widget=forms.Select(attrs={"class": "form-control"}))
profile_pic = forms.FileField(label="Profile Pic",
required=False,
widget=forms.FileInput(attrs={"class": "form-control"}))
| 42.574803 | 93 | 0.525985 | 499 | 5,407 | 5.527054 | 0.134269 | 0.095722 | 0.096447 | 0.14467 | 0.928571 | 0.920595 | 0.920595 | 0.920595 | 0.920595 | 0.920595 | 0 | 0.006373 | 0.361568 | 5,407 | 126 | 94 | 42.912698 | 0.792584 | 0.019049 | 0 | 0.896226 | 0 | 0 | 0.100962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.018868 | 0.018868 | 0 | 0.254717 | 0.009434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
db52486785c5b51ba66825dc2ea17e926301caaf | 9,869 | py | Python | tests/tests_geomstats/test_kernel_density_estimation_classifier.py | SaitejaUtpala/geomstats | 5d4e16b3f30a86aab4725142f2263d8f10a30508 | [
"MIT"
] | null | null | null | tests/tests_geomstats/test_kernel_density_estimation_classifier.py | SaitejaUtpala/geomstats | 5d4e16b3f30a86aab4725142f2263d8f10a30508 | [
"MIT"
] | null | null | null | tests/tests_geomstats/test_kernel_density_estimation_classifier.py | SaitejaUtpala/geomstats | 5d4e16b3f30a86aab4725142f2263d8f10a30508 | [
"MIT"
] | null | null | null | """Unit tests for the KDE classifier."""
import geomstats.backend as gs
import geomstats.tests
from geomstats.geometry.euclidean import Euclidean
from geomstats.geometry.hyperboloid import Hyperboloid
from geomstats.geometry.hypersphere import Hypersphere
from geomstats.geometry.poincare_ball import PoincareBall
from geomstats.learning.kernel_density_estimation_classifier import \
KernelDensityEstimationClassifier
from geomstats.learning.radial_kernel_functions import triangular_radial_kernel
class TestKernelDensityEstimationClassifier(geomstats.tests.TestCase):
"""Class defining the Kernel Density Estimation Classifier tests."""
def setUp(self):
"""Define the parameters to test."""
gs.random.seed(1234)
self.dim = 2
self.space = Euclidean(dim=self.dim)
self.distance = self.space.metric.dist
def test_predict(self):
"""Test the 'predict' class method."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict(gs.array([[1.1, 0.0]]))
expected = gs.array([0])
self.assertAllClose(expected, result)
def test_predict_one_dimensional_data(self):
"""Test the 'predict' class method."""
training_dataset = gs.array(
[[0.0],
[1.0],
[2.0],
[3.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
distance='minkowski')
kde.fit(training_dataset, labels)
result = kde.predict(gs.array([1.1]))
expected = gs.array([0])
self.assertAllClose(expected, result)
@geomstats.tests.np_only
def test_predict_one_dimensional_data_callable_distance(self):
"""Test the 'predict' class method on one dimensional data."""
training_dataset = gs.array([0, 1, 2, 3])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict(gs.array([1.1]))
expected = gs.array([0])
self.assertAllClose(expected, result)
@geomstats.tests.np_only
def test_predict_proba_uniform_kernel_one_dimensional_data(self):
"""Test the 'predict_proba' class method using the 'uniform' kernel.
Test the 'predict_proba' class method using the 'uniform' kernel on
        one-dimensional data of shape [n_samples,].
"""
training_dataset = gs.array([0, 1, 2, 3])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel='uniform',
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict_proba(gs.array([0.9]))
expected = gs.array([[1 / 2, 1 / 2]])
self.assertAllClose(expected, result, atol=gs.atol)
def test_predict_proba_uniform_kernel(self):
"""Test the 'predict_proba' class method using the 'uniform' kernel."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel='uniform',
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict_proba(gs.array([[0.9, 0.0]]))
expected = gs.array([[1 / 2, 1 / 2]])
self.assertAllClose(expected, result, atol=gs.atol)
def test_predict_proba_distance_kernel(self):
"""Test the 'predict_proba' class method using 'distance' kernel."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel='distance',
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict_proba(gs.array([[1.0, 0.0]]))
expected = gs.array([[1, 0]])
self.assertAllClose(expected, result, atol=gs.atol)
@geomstats.tests.np_and_pytorch_only
def test_predict_proba_triangular_kernel(self):
"""Test the 'predict_proba' class method using a triangular kernel."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel=triangular_radial_kernel,
bandwidth=2.0,
p=2,
distance='minkowski')
kde.fit(training_dataset, labels)
result = kde.predict_proba(gs.array([[1.0, 0.0]]))
expected = gs.array([[3 / 4, 1 / 4]])
self.assertAllClose(expected, result, atol=gs.atol)
@geomstats.tests.np_and_pytorch_only
def test_predict_proba_triangular_kernel_callable_distance(self):
"""Test the 'predict_proba' class method using a triangular kernel."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel=triangular_radial_kernel,
bandwidth=2.0,
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict_proba(gs.array([[1.0, 0.0]]))
expected = gs.array([[3 / 4, 1 / 4]])
self.assertAllClose(expected, result, atol=gs.atol)
@geomstats.tests.np_and_pytorch_only
def test_predict_triangular_kernel_callable_distance(self):
"""Test the 'predict' class method using a triangular kernel."""
training_dataset = gs.array(
[[0.0, 0.0],
[1.0, 0.0],
[2.0, 0.0],
[3.0, 0.0]])
labels = [0, 0, 1, 1]
kde = KernelDensityEstimationClassifier(
kernel=triangular_radial_kernel,
bandwidth=2.0,
distance=self.distance)
kde.fit(training_dataset, labels)
result = kde.predict(gs.array([[1.0, 0.0], [1.0, 0.0]]))
expected = gs.array([0, 0])
self.assertAllClose(expected, result, atol=gs.atol)
def test_predict_hypersphere_distance(self):
"""Test the 'predict' class method using the hypersphere distance."""
dim = 2
space = Hypersphere(dim=dim)
distance = space.metric.dist
training_dataset = gs.array(
[[1, 0, 0],
[3 ** (1 / 2) / 2, 1 / 2, 0],
[3 ** (1 / 2) / 2, - 1 / 2, 0],
[0, 0, 1],
[0, 1 / 2, 3 ** (1 / 2) / 2],
[0, - 1 / 2, 3 ** (1 / 2) / 2]])
labels = [0, 0, 0, 1, 1, 1]
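        # Class 0 training points lie near the +x pole of the unit sphere and
        # class 1 points near the +z pole.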
kde = KernelDensityEstimationClassifier(
distance=distance)
kde.fit(training_dataset, labels)
target_dataset = gs.array(
[[2 ** (1 / 2) / 2, 2 ** (1 / 2) / 2, 0],
[0, 1 / 2, - 3 ** (1 / 2) / 2],
[0, - 1 / 2, - 3 ** (1 / 2) / 2],
[- 3 ** (1 / 2) / 2, 1 / 2, 0],
[- 3 ** (1 / 2) / 2, - 1 / 2, 0],
[0, 2 ** (1 / 2) / 2, 2 ** (1 / 2) / 2]])
result = kde.predict(target_dataset)
expected = [0, 0, 0, 1, 1, 1]
self.assertAllClose(expected, result)
def test_predict_poincare_ball_distance(self):
"""Test the 'predict' class method using the Poincare ball distance."""
dim = 2
space = PoincareBall(dim=dim)
distance = space.metric.dist
training_dataset = gs.array(
[[1 / 2, 1 / 4],
[1 / 2, 0],
[1 / 2, - 1 / 4],
[- 1 / 2, 1 / 4],
[- 1 / 2, 0],
[- 1 / 2, - 1 / 4]])
labels = [0, 0, 0, 1, 1, 1]
kde = KernelDensityEstimationClassifier(
distance=distance,
kernel='distance')
kde.fit(training_dataset, labels)
target_dataset = gs.array(
[[1 / 2, 1 / 5],
[1 / 2, 0],
[1 / 2, - 1 / 5],
[- 1 / 2, 1 / 5],
[- 1 / 2, 0],
[- 1 / 2, - 1 / 5]])
result = kde.predict(target_dataset)
expected = [0, 0, 0, 1, 1, 1]
self.assertAllClose(expected, result)
def test_predict_hyperboloid_distance(self):
"""Test the 'predict' class method using the hyperboloid distance."""
dim = 2
space = Hyperboloid(dim=dim)
distance = space.metric.dist
training_dataset_intrinsic = gs.array(
[[1 / 2, 1 / 4],
[1 / 2, 0],
[1 / 2, - 1 / 4],
[- 1 / 2, 1 / 4],
[- 1 / 2, 0],
[- 1 / 2, - 1 / 4]])
training_dataset = space.change_coordinates_system(
training_dataset_intrinsic,
from_coordinates_system='intrinsic',
to_coordinates_system='extrinsic')
labels = [0, 0, 0, 1, 1, 1]
kde = KernelDensityEstimationClassifier(
distance=distance,
kernel='distance')
kde.fit(training_dataset, labels)
target_dataset_intrinsic = gs.array(
[[1 / 2, 1 / 5],
[1 / 2, 0],
[1 / 2, - 1 / 5],
[- 1 / 2, 1 / 5],
[- 1 / 2, 0],
[- 1 / 2, - 1 / 5]])
target_dataset = space.change_coordinates_system(
target_dataset_intrinsic,
from_coordinates_system='intrinsic',
to_coordinates_system='extrinsic')
result = kde.predict(target_dataset)
expected = [0, 0, 0, 1, 1, 1]
self.assertAllClose(expected, result)
| 38.104247 | 79 | 0.544128 | 1,190 | 9,869 | 4.396639 | 0.076471 | 0.035933 | 0.024083 | 0.011468 | 0.818234 | 0.804281 | 0.794916 | 0.777714 | 0.735283 | 0.690749 | 0 | 0.063076 | 0.318877 | 9,869 | 258 | 80 | 38.251938 | 0.715263 | 0.09545 | 0 | 0.721973 | 0 | 0 | 0.010424 | 0 | 0 | 0 | 0 | 0 | 0.053812 | 1 | 0.058296 | false | 0 | 0.035874 | 0 | 0.098655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
db95bced900d4ceb274f157ac1413320f11094f6 | 160 | py | Python | pyscf/mrpt/__init__.py | nmardirossian/pyscf | 57c8912dcfcc1157a822feede63df54ed1067115 | [
"BSD-2-Clause"
] | 1 | 2018-05-02T19:55:30.000Z | 2018-05-02T19:55:30.000Z | pyscf/mrpt/__init__.py | nmardirossian/pyscf | 57c8912dcfcc1157a822feede63df54ed1067115 | [
"BSD-2-Clause"
] | null | null | null | pyscf/mrpt/__init__.py | nmardirossian/pyscf | 57c8912dcfcc1157a822feede63df54ed1067115 | [
"BSD-2-Clause"
] | 1 | 2018-12-06T03:10:50.000Z | 2018-12-06T03:10:50.000Z | from pyscf.mrpt import nevpt2
from pyscf.mrpt.nevpt2 import NEVPT
#TODO: remove it in future release
from pyscf.mrpt.nevpt2 import sc_nevpt
NEVPT2 = sc_nevpt
| 20 | 38 | 0.80625 | 27 | 160 | 4.703704 | 0.481481 | 0.212598 | 0.307087 | 0.299213 | 0.393701 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029197 | 0.14375 | 160 | 7 | 39 | 22.857143 | 0.89781 | 0.20625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
dbd4ce7da4d4db2b51c3e0d1713951c3f0ab1b30 | 6,731 | py | Python | tests/test_iwdr.py | drkennetz/cwltool-singularity23-fix | e2ec740fccc81ff7071dcd607c5c158fbc0dfb90 | [
"Apache-2.0"
] | null | null | null | tests/test_iwdr.py | drkennetz/cwltool-singularity23-fix | e2ec740fccc81ff7071dcd607c5c158fbc0dfb90 | [
"Apache-2.0"
] | null | null | null | tests/test_iwdr.py | drkennetz/cwltool-singularity23-fix | e2ec740fccc81ff7071dcd607c5c158fbc0dfb90 | [
"Apache-2.0"
] | null | null | null | import tempfile
import os
from cwltool.main import main
from cwltool import load_tool
from .util import (get_data, get_windows_safe_factory, windows_needs_docker,
needs_docker, temp_dir, needs_singularity)
@windows_needs_docker
def test_newline_in_entry():
"""
    Test that files in InitialWorkingDirectory are created with a newline character.
"""
factory = get_windows_safe_factory()
echo = factory.make(get_data("tests/wf/iwdr-entry.cwl"))
assert echo(message="hello") == {"out": "CONFIGVAR=hello\n"}
@needs_docker
def test_iwdr_permutations():
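    # Drive the iwdr_permutations workflow with four File and four Directory
    # inputs and check that cwltool exits successfully (return code 0).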
saved_tempdir = tempfile.tempdir
with temp_dir() as misc:
tempfile.tempdir = os.path.realpath(misc)
with temp_dir() as fifth:
with temp_dir() as sixth:
with temp_dir() as seventh:
with temp_dir() as eighth:
with tempfile.NamedTemporaryFile() as first:
with tempfile.NamedTemporaryFile() as second:
with tempfile.NamedTemporaryFile() as third:
with tempfile.NamedTemporaryFile() as fourth:
with temp_dir() as outdir:
assert(main(
['--outdir', outdir,
get_data("tests/wf/iwdr_permutations.cwl"),
'--first', first.name,
'--second', second.name,
'--third', third.name,
'--fourth', fourth.name,
'--fifth', fifth,
'--sixth', sixth,
'--seventh', seventh,
'--eighth', eighth]) == 0)
tempfile.tempdir = saved_tempdir
@needs_docker
def test_iwdr_permutations_inplace():
saved_tempdir = tempfile.tempdir
with temp_dir() as misc:
tempfile.tempdir = os.path.realpath(misc)
with temp_dir() as fifth:
with temp_dir() as sixth:
with temp_dir() as seventh:
with temp_dir() as eighth:
with tempfile.NamedTemporaryFile() as first:
with tempfile.NamedTemporaryFile() as second:
with tempfile.NamedTemporaryFile() as third:
with tempfile.NamedTemporaryFile() as fourth:
with temp_dir() as outdir:
assert(main(
['--outdir', outdir,
'--enable-ext',
'--overrides',
get_data("tests/wf/iwdr_permutations_inplace.yml"),
get_data("tests/wf/iwdr_permutations.cwl"),
'--first', first.name,
'--second', second.name,
'--third', third.name,
'--fourth', fourth.name,
'--fifth', fifth,
'--sixth', sixth,
'--seventh', seventh,
'--eighth', eighth]) == 0)
tempfile.tempdir = saved_tempdir
@needs_singularity
def test_iwdr_permutations_singularity():
with temp_dir() as fifth:
with temp_dir() as sixth:
with temp_dir() as seventh:
with temp_dir() as eighth:
with tempfile.NamedTemporaryFile() as first:
with tempfile.NamedTemporaryFile() as second:
with tempfile.NamedTemporaryFile() as third:
with tempfile.NamedTemporaryFile() as fourth:
with temp_dir() as outdir:
assert(main(
['--outdir', outdir,
'--singularity',
get_data("tests/wf/iwdr_permutations.cwl"),
'--first', first.name,
'--second', second.name,
'--third', third.name,
'--fourth', fourth.name,
'--fifth', fifth,
'--sixth', sixth,
'--seventh', seventh,
'--eighth', eighth]) == 0)
@needs_singularity
def test_iwdr_permutations_singularity_inplace():
with temp_dir() as fifth:
with temp_dir() as sixth:
with temp_dir() as seventh:
with temp_dir() as eighth:
with tempfile.NamedTemporaryFile() as first:
with tempfile.NamedTemporaryFile() as second:
with tempfile.NamedTemporaryFile() as third:
with tempfile.NamedTemporaryFile() as fourth:
with temp_dir() as outdir:
assert(main(
['--outdir', outdir,
'--singularity',
'--enable-ext',
'--overrides',
get_data("tests/wf/iwdr_permutations_inplace.yml"),
get_data("tests/wf/iwdr_permutations.cwl"),
'--first', first.name,
'--second', second.name,
'--third', third.name,
'--fourth', fourth.name,
'--fifth', fifth,
'--sixth', sixth,
'--seventh', seventh,
'--eighth', eighth]) == 0)
| 54.282258 | 100 | 0.387758 | 472 | 6,731 | 5.370763 | 0.144068 | 0.063511 | 0.095464 | 0.112821 | 0.850493 | 0.843393 | 0.81854 | 0.781065 | 0.781065 | 0.781065 | 0 | 0.001263 | 0.529639 | 6,731 | 123 | 101 | 54.723577 | 0.799431 | 0.011737 | 0 | 0.869565 | 0 | 0 | 0.08921 | 0.033002 | 0 | 0 | 0 | 0 | 0.043478 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
dbde1081dd56c1873b79b4367d801f8a52ff840b | 46,843 | py | Python | BASIC/non_converge_calc.py | kianpu34593/base4gpaw | 31003e426e0fd9413115c60ed9a462763f2add1b | [
"MIT"
] | 1 | 2021-04-13T18:00:46.000Z | 2021-04-13T18:00:46.000Z | BASIC/non_converge_calc.py | kianpu34593/base4gpaw | 31003e426e0fd9413115c60ed9a462763f2add1b | [
"MIT"
] | null | null | null | BASIC/non_converge_calc.py | kianpu34593/base4gpaw | 31003e426e0fd9413115c60ed9a462763f2add1b | [
"MIT"
] | null | null | null | from copy import Error
import os
from typing import Type
from ase.parallel import paropen, parprint, world
from ase.db import connect
from ase.io import read
from glob import glob
import numpy as np
from gpaw import restart
import BASIC.optimizer as opt
import sys
from ase.constraints import FixAtoms,FixedLine
import pandas as pd
from BASIC.utils import detect_cluster
def pbc_checker(slab):
    angles_arg=[angle != 90.0000 for angle in np.round(slab.cell.angles(),decimals=4)[:2]]
    if np.any(angles_arg):
slab.pbc=[1,1,1]
else:
slab.pbc=[1,1,0]
# def detect_cluster(slab,tol=0.1):
# n=len(slab)
# dist_matrix=np.zeros((n, n))
# slab_c=np.sort(slab.get_positions()[:,2])
# for i, j in itertools.combinations(list(range(n)), 2):
# if i != j:
# cdist = np.abs(slab_c[i] - slab_c[j])
# dist_matrix[i, j] = cdist
# dist_matrix[j, i] = cdist
# condensed_m = squareform(dist_matrix)
# z = linkage(condensed_m)
# clusters = fcluster(z, tol, criterion="distance")
# return slab_c,list(clusters)
def apply_magmom(opt_slab_magmom,ads_slab,adatom=1):
if adatom == 1:
magmom_ls=np.append(opt_slab_magmom,0)
elif adatom == 2:
magmom_ls=np.append(opt_slab_magmom,0)
magmom_ls=np.append(magmom_ls,0)
ads_slab.set_initial_magnetic_moments(magmom_ls)
return ads_slab
def get_clean_slab(element,
miller_index,
report_location,
target_dir,
size,
fix_layer,
solver_fmax,
solver_maxstep,
gpaw_calc):
f = paropen(report_location,'a')
parprint('Start clean slab calculation: ', file=f)
if size != '1x1':
clean_slab_gpw_path=target_dir+'/clean_slab/slab.gpw'
if os.path.isfile(clean_slab_gpw_path):
opt_slab, pre_calc = restart(clean_slab_gpw_path)
pre_kpts=list(pre_calc.__dict__['parameters']['kpts'])
set_kpts=list(gpaw_calc.__dict__['parameters']['kpts'])
if pre_kpts == set_kpts:
parprint('\t'+size+' clean slab is pre-calculated with kpts matched.',file=f)
else:
parprint('\t'+size+' clean slab pre-calculated has different kpts. Clean slab needs to re-calculate.', file=f)
parprint('\t'+'Calculating '+size+' clean slab...',file=f)
clean_slab=read(target_dir+'/clean_slab/input.traj')
opt_slab=clean_slab_calculator(clean_slab,fix_layer,gpaw_calc,target_dir,solver_fmax,solver_maxstep)
else:
parprint('\t'+size+' clean slab is not pre-calculated.',file=f)
parprint('\t'+'Calculating '+size+' clean slab...',file=f)
interm_gpw=target_dir+'/clean_slab/slab_interm.gpw'
if os.path.isfile(interm_gpw):
clean_slab, gpaw_calc=restart(interm_gpw)
else:
clean_slab=read(target_dir+'/clean_slab/input.traj')
opt_slab=clean_slab_calculator(clean_slab,fix_layer,gpaw_calc,target_dir,solver_fmax,solver_maxstep)
else:
parprint('\tslab size is 1x1. Clean slab calculation is skipped.', file=f)
opt_slab=connect('final_database'+'/'+'surf.db').get_atoms(simple_name=element+'_'+miller_index)
parprint(' ',file=f)
f.close()
return opt_slab.get_potential_energy(), opt_slab.get_magnetic_moments()
def clean_slab_calculator(clean_slab,
fix_layer,
gpaw_calc,
target_dir,
solver_fmax,
solver_maxstep,
fix_option='bottom'):
pbc_checker(clean_slab)
calc_dict=gpaw_calc.__dict__['parameters']
if calc_dict['spinpol']:
clean_slab.set_initial_magnetic_moments([0]*len(clean_slab))
slab_c_coord,cluster=detect_cluster(clean_slab)
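    # detect_cluster groups slab atoms into z-layers; everything at or below the top of
    # the fix_layer-th layer from the bottom (+0.05 Ang tolerance) is frozen during the relaxation below.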
if fix_option == 'bottom':
unique_cluster_index=sorted(set(cluster), key=cluster.index)[fix_layer-1]
max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
fix_mask=clean_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
fixed_atom_constrain=FixAtoms(mask=fix_mask)
clean_slab.set_constraint(fixed_atom_constrain)
clean_slab.set_calculator(gpaw_calc)
opt.relax(clean_slab,target_dir+'/clean_slab',fmax=solver_fmax,maxstep=solver_maxstep)
return clean_slab
def adsorption_energy_calculator(traj_file,
report_location,
opt_slab_energy,
adatom_pot_energy,
opt_slab_magmom,
gpaw_calc,
solver_fmax,
solver_maxstep,
calc_type,
fix_layer,
fix_option = 'bottom'):
interm_gpw='/'.join(traj_file.split('/')[:-1]+['slab_interm.gpw'])
if os.path.isfile(interm_gpw):
ads_slab, gpaw_calc=restart(interm_gpw)
else:
ads_slab=read(traj_file)
pbc_checker(ads_slab)
calc_dict=gpaw_calc.__dict__['parameters']
if calc_dict['spinpol']:
ads_slab=apply_magmom(opt_slab_magmom,ads_slab)
fixed_line_constrain=FixedLine(a=-1,direction=[0,0,1])
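    # Constrain the adsorbate (last atom, index -1) to move only along z; this is
    # combined with the fixed bottom layers when calc_type is 'grid'.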
slab_c_coord,cluster=detect_cluster(ads_slab)
if fix_option == 'bottom':
unique_cluster_index=sorted(set(cluster), key=cluster.index)[fix_layer-1]
max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
fix_mask=ads_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
if calc_type == 'grid':
fixed_atom_constrain=FixAtoms(mask=fix_mask)
ads_slab.set_constraint([fixed_atom_constrain,fixed_line_constrain])
elif calc_type == 'normal' and fix_option == 'bottom':
fixed_atom_constrain=FixAtoms(mask=fix_mask)
ads_slab.set_constraint(fixed_atom_constrain)
ads_slab.set_calculator(gpaw_calc)
location='/'.join(traj_file.split('/')[:-1])
f=paropen(report_location,'a')
parprint('Calculating '+('/'.join(location.split('/')[-2:]))+' adsorption site...',file=f)
f.close()
opt.relax(ads_slab,location,fmax=solver_fmax,maxstep=solver_maxstep)
init_ads_site=traj_file.split('/')[-2]
E_slab_ads=ads_slab.get_potential_energy()
opt_slab_energy=opt_slab_energy
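    # Adsorption energy: E(slab + adsorbate) - (E(clean slab) + E(isolated adatom))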
adsorption_energy=E_slab_ads-(opt_slab_energy+adatom_pot_energy)
final_ads_site=list(np.round(ads_slab.get_positions()[-1][:2],decimals=3))
final_ads_site_str='_'.join([str(i) for i in final_ads_site])
return init_ads_site, adsorption_energy, final_ads_site_str
def skip_ads_calculated(report_location,
all_gpw_files,
init_adsorbates_site_lst,
adsorption_energy_lst,
final_adsorbates_site_lst,
opt_slab_energy,
adatom_pot_energy):
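    # Restart helper: for every site that already has a slab.gpw, re-read the relaxed
    # structure and recompute its adsorption energy instead of relaxing it again.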
f = paropen(report_location,'a')
parprint('Restarting...',file=f)
for gpw_file in all_gpw_files:
location='/'.join(gpw_file.split('/')[:-1])
parprint('Skipping '+('/'.join(location.split('/')[-2:]))+' adsorption site...',file=f)
atoms=restart(gpw_file)[0]
init_adsorbates_site_lst.append(gpw_file.split('/')[-2])
E_slab_ads=atoms.get_potential_energy()
adsorption_energy=E_slab_ads-(opt_slab_energy+adatom_pot_energy)
adsorption_energy_lst.append(adsorption_energy)
final_ads_site=list(np.round(atoms.get_positions()[-1][:2],decimals=3))
final_ads_site_str='_'.join([str(i) for i in final_ads_site])
final_adsorbates_site_lst.append(final_ads_site_str)
parprint(' ',file=f)
f.close()
return init_adsorbates_site_lst,adsorption_energy_lst,final_adsorbates_site_lst
def initialize_report(report_location,gpaw_calc):
calc_dict=gpaw_calc.__dict__['parameters']
if world.rank==0 and os.path.isfile(report_location):
os.remove(report_location)
f = paropen(report_location,'a')
parprint('Initial Parameters:', file=f)
parprint('\t'+'xc: '+calc_dict['xc'],file=f)
parprint('\t'+'h: '+str(calc_dict['h']),file=f)
parprint('\t'+'kpts: '+str(calc_dict['kpts']),file=f)
parprint('\t'+'sw: '+str(calc_dict['occupations']),file=f)
parprint('\t'+'spin polarized: '+str(calc_dict['spinpol']),file=f)
if calc_dict['spinpol']:
parprint('\t'+'magmom: initialize magnetic moment from slab calculation.',file=f)
parprint(' ',file=f)
f.close()
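# A minimal driver sketch (hypothetical values; assumes a configured GPAW calculator and
# the results/<element>/ads/<size>/<miller_index> directory layout used below):
#   from gpaw import GPAW, FermiDirac
#   gpaw_calc = GPAW(xc='PBE', h=0.16, kpts=(4, 4, 1), occupations=FermiDirac(0.1), spinpol=False)
#   ads_auto_select('Cu', '1_1_1', gpaw_calc, ads='Li', adatom_pot_energy=-1.89,
#                   solver_fmax=0.01, solver_max_step=0.05, restart_calc=False)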
class ads_auto_select:
def __init__(self,
element,
miller_index_tight,
gpaw_calc,
ads,
adatom_pot_energy,
solver_fmax,
solver_max_step,
restart_calc,
size=(1,1), #xy size
fix_layer=2,
fix_option='bottom'):
        #initialize variables
size_xy=str(size[0])+'x'+str(size[1])
target_dir='results/'+element+'/'+'ads/'+size_xy+'/'+miller_index_tight
report_location=target_dir+'_autocat_results_report.txt'
all_ads_file_loc=target_dir+'/'+'adsorbates/'+str(ads)+'/'
## TO-DO: need to figure out how to calculate adsorption energy for larger system
# self.gpaw_calc=gpaw_calc
# self.calc_dict=self.gpaw_calc.__dict__['parameters']
# self.ads=ads
# self.all_ads_file_loc=self.target_dir+'/'+'adsorbates/'+str(self.ads)+'/'
# self.adatom_pot_energy=adatom_pot_energy
##generate report
initialize_report(report_location, gpaw_calc)
##compute clean slab energy
opt_slab_energy, opt_slab_magmom=get_clean_slab(element, miller_index_tight,
report_location, target_dir,size_xy,
fix_layer,solver_fmax,solver_max_step,
gpaw_calc)
#opt_slab=self.get_clean_slab()
##start adsorption calculation
adsorption_energy_dict={}
init_adsorbates_site_lst=[]
final_adsorbates_site_lst=[]
adsorption_energy_lst=[]
all_bridge_traj_files=glob(all_ads_file_loc+'bridge/*/input.traj')
all_ontop_traj_files=glob(all_ads_file_loc+'ontop/*/input.traj')
all_hollow_traj_files=glob(all_ads_file_loc+'hollow/*/input.traj')
all_traj_files=all_bridge_traj_files+all_ontop_traj_files+all_hollow_traj_files
all_bridge_gpw_files=glob(all_ads_file_loc+'bridge/*/slab.gpw')
all_ontop_gpw_files=glob(all_ads_file_loc+'ontop/*/slab.gpw')
all_hollow_gpw_files=glob(all_ads_file_loc+'hollow/*/slab.gpw')
all_gpw_files=all_bridge_gpw_files+all_ontop_gpw_files+all_hollow_gpw_files
## restart
if restart_calc==True and len(all_gpw_files)>=1:
init_adsorbates_site_lst,adsorption_energy_lst,final_adsorbates_site_lst=skip_ads_calculated(report_location,
all_gpw_files,
init_adsorbates_site_lst,
adsorption_energy_lst,
final_adsorbates_site_lst,
opt_slab_energy,
adatom_pot_energy)
all_gpw_files_ads_site=['/'.join(i.split('/')[:-1]) for i in all_gpw_files]
all_traj_files=[i for i in all_traj_files if '/'.join(i.split('/')[:-1]) not in all_gpw_files_ads_site]
for traj_file in all_traj_files:
#init_adsobates_site, adsorption_energy, final_adsorbates_site=self.adsorption_energy_calculator(traj_file,opt_slab)
output_lst=adsorption_energy_calculator(traj_file,report_location,
opt_slab_energy,adatom_pot_energy,
opt_slab_magmom,gpaw_calc,
solver_fmax,solver_max_step,
calc_type='normal',
fix_layer=fix_layer,fix_option = fix_option,
)
init_adsorbates_site_lst.append(output_lst[0])
adsorption_energy_lst.append(output_lst[1])
final_adsorbates_site_lst.append(output_lst[2])
adsorption_energy_dict['init_sites[x_y](Ang)']=init_adsorbates_site_lst
adsorption_energy_dict['final_sites[x_y](Ang)']=final_adsorbates_site_lst
adsorption_energy_dict['adsorption_energy(eV)']=adsorption_energy_lst
ads_df=pd.DataFrame(adsorption_energy_dict)
# ads_df.set_index('init_adsorbates_sites[x_y](Ang)',inplace=True)
ads_df.sort_values(by=['adsorption_energy(eV)'],inplace=True)
pd.set_option("display.max_rows", None, "display.max_columns", None)
f=paropen(report_location,'a')
parprint(ads_df,file=f)
parprint('',file=f)
f.close()
min_adsorbates_site=ads_df.iloc[[0]]['init_sites[x_y](Ang)'].to_list()[0]
lowest_ads_energy_slab=read(glob(all_ads_file_loc+'*/'+min_adsorbates_site+'/slab.traj')[0])
#finalize
final_slab_simple_name=element+'_'+miller_index_tight
ads_db=connect('final_database/ads_'+size_xy+'.db')
id=ads_db.reserve(name=final_slab_simple_name)
if id is None:
id=ads_db.get(name=final_slab_simple_name).id
ads_db.update(id=id,atoms=lowest_ads_energy_slab,name=final_slab_simple_name,
ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
else:
ads_db.write(lowest_ads_energy_slab,
id=id,
name=final_slab_simple_name,
ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
f=paropen(report_location,'a')
parprint('Adsorption energy calculation complete.',file=f)
parprint('Selected ads site is: ',file=f)
parprint(min_adsorbates_site,file=f)
f.close()
# def get_clean_slab(self):
# f = paropen(self.report_location,'a')
# parprint('Start clean slab calculation: ', file=f)
# if self.size != '1x1':
# clean_slab_gpw_path=self.target_dir+'/clean_slab/slab.gpw'
# clean_slab=read(self.target_dir+'/clean_slab/input.traj')
# if os.path.isfile(clean_slab_gpw_path):
# opt_slab, pre_calc = restart(clean_slab_gpw_path)
# pre_kpts=pre_calc.__dict__['parameters']['kpts']
# set_kpts=self.calc_dict['kpts']
# if pre_kpts == set_kpts:
# parprint('\t'+self.size+' clean slab is pre-calculated with kpts matched.',file=f)
# else:
# parprint('\t'+self.size+' clean slab pre-calculated has different kpts. Clean slab needs to re-calculate.', file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('\t'+self.size+' clean slab is not pre-calculated.',file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('slab size is 1x1. Clean slab calculation is skipped.', file=f)
# opt_slab=connect('final_database'+'/'+'surf.db').get_atoms(simple_name=self.element+'_'+self.miller_index_tight)
# f.close()
# return opt_slab
# def clean_slab_calculator(self,clean_slab):
# pbc_checker(clean_slab)
# if self.calc_dict['spinpol']:
# clean_slab.set_initial_magnetic_moments([0]*len(clean_slab))
# slab_c_coord,cluster=detect_cluster(clean_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=clean_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# clean_slab.set_constraint(fixed_atom_constrain)
# clean_slab.set_calculator(self.gpaw_calc)
# opt.relax(clean_slab,self.target_dir+'/clean_slab',fmax=self.solver_fmax,maxstep=self.solver_max_step)
# return clean_slab
# def adsorption_energy_calculator(self,traj_file,opt_slab):
# ads_slab=read(traj_file)
# pbc_checker(ads_slab)
# if self.calc_dict['spinpol']:
# ads_slab=apply_magmom(opt_slab,ads_slab)
# slab_c_coord,cluster=detect_cluster(ads_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=ads_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# ads_slab.set_constraint(fixed_atom_constrain)
# ads_slab.set_calculator(self.gpaw_calc)
# location='/'.join(traj_file.split('/')[:-1])
# f=paropen(self.report_location,'a')
# parprint('Calculating '+('/'.join(location.split('/')[-2:]))+' adsorption site...',file=f)
# f.close()
# opt.relax(ads_slab,location,fmax=self.solver_fmax,maxstep=self.solver_max_step)
# init_ads_site=traj_file.split('/')[-2]
# E_slab_ads=ads_slab.get_potential_energy()
# opt_slab_energy=opt_slab.get_potential_energy()*int(self.size[0])*int(self.size[2])
# adsorption_energy=E_slab_ads-(opt_slab_energy+self.adatom_pot_energy)
# final_ads_site=list(np.round(ads_slab.get_positions()[-1][:2],decimals=3))
# final_ads_site_str='_'.join([str(i) for i in final_ads_site])
# return init_ads_site, adsorption_energy, final_ads_site_str
# def apply_magmom(self,opt_slab,ads_slab):
# slab_formula=ads_slab.get_chemical_symbols()
# magmom=opt_slab.get_magnetic_moments()
# magmom_ls=np.append(magmom,np.mean(magmom))
# magmom_ls[slab_formula.index(self.ads)]=0
# ads_slab.set_initial_magnetic_moments(magmom_ls)
# def initialize_report(self,report_location,gpaw_calc):
# calc_dict=gpaw_calc.__dict__['parameters']
# if world.rank==0 and os.path.isfile(report_location):
# os.remove(report_location)
# f = paropen(report_location,'a')
# parprint('Initial Parameters:', file=f)
# parprint('\t'+'xc: '+calc_dict['xc'],file=f)
# parprint('\t'+'h: '+str(calc_dict['h']),file=f)
# parprint('\t'+'kpts: '+str(calc_dict['kpts']),file=f)
# parprint('\t'+'sw: '+str(calc_dict['occupations']),file=f)
# parprint('\t'+'spin polarized: '+str(calc_dict['spinpol']),file=f)
# if calc_dict['spinpol']:
# parprint('\t'+'magmom: initialize magnetic moment from slab calculation.',file=f)
# parprint(' ',file=f)
# f.close()
class ads_grid_calc:
def __init__(self,
element,
miller_index_tight,
gpaw_calc,
ads,
adatom_pot_energy,
solver_fmax,
solver_max_step,
restart_calc,
size,
fix_layer=2,
fix_option='bottom'):
        #initialize variables
size_xy=str(size[0])+'x'+str(size[1])
target_dir='results/'+element+'/'+'ads/'+size_xy+'/'+miller_index_tight
report_location=target_dir+'_grid_results_report.txt'
all_ads_file_loc=target_dir+'/'+'adsorbates/'+str(ads)+'/'
## TO-DO: need to figure out how to calculate adsorption energy for larger system
# self.gpaw_calc=gpaw_calc
# self.calc_dict=self.gpaw_calc.__dict__['parameters']
# self.ads=ads
#self.all_ads_file_loc=self.target_dir+'/'+'adsorbates/'+str(self.ads)+'/'
#self.adatom_pot_energy=adatom_pot_energy
##generate report
initialize_report(report_location,gpaw_calc)
##compute clean slab energy
opt_slab_energy, opt_slab_magmom=get_clean_slab(element, miller_index_tight,
report_location, target_dir, size_xy,
fix_layer,solver_fmax,solver_max_step,
gpaw_calc)
##start adsorption calculation
adsorption_energy_dict={}
init_adsorbates_site_lst=[]
adsorption_energy_lst=[]
final_adsorbates_site_lst=[]
all_traj_files=glob(all_ads_file_loc+'grid/*/input.traj')
all_gpw_files=glob(all_ads_file_loc+'grid/*/slab.gpw')
## restart
if restart_calc==True and len(all_gpw_files)>=1:
init_adsorbates_site_lst,adsorption_energy_lst=skip_ads_calculated(report_location,
all_gpw_files,
init_adsorbates_site_lst,
adsorption_energy_lst,
final_adsorbates_site_lst,
opt_slab_energy,
adatom_pot_energy)[0:2]
all_gpw_files_ads_site=['/'.join(i.split('/')[:-1]) for i in all_gpw_files]
all_traj_files=[i for i in all_traj_files if '/'.join(i.split('/')[:-1]) not in all_gpw_files_ads_site]
for traj_file in all_traj_files:
output_lst=adsorption_energy_calculator(traj_file,report_location,
opt_slab_energy,adatom_pot_energy,
opt_slab_magmom,gpaw_calc,
solver_fmax,solver_max_step,
calc_type='grid',
fix_layer=fix_layer,fix_option = 'bottom',
)
init_adsorbates_site_lst.append(output_lst[0])
adsorption_energy_lst.append(output_lst[1])
adsorption_energy_dict['init_sites[x_y](Ang)']=init_adsorbates_site_lst
adsorption_energy_dict['adsorption_energy(eV)']=adsorption_energy_lst
ads_df=pd.DataFrame(adsorption_energy_dict)
#ads_df.set_index('init_adsorbates_sites[x_y](Ang)',inplace=True)
ads_df.sort_values(by=['adsorption_energy(eV)'],inplace=True)
ads_df.to_csv(target_dir+'_ads_grid.csv')
pd.set_option("display.max_rows", None, "display.max_columns", None)
f=paropen(report_location,'a')
parprint(ads_df,file=f)
parprint('',file=f)
parprint('Grid adsorption energy calculation complete.',file=f)
f.close()
# def get_clean_slab(self):
# f = paropen(self.report_location,'a')
# parprint('Start clean slab calculation: ', file=f)
# if self.size != '1x1':
# clean_slab_gpw_path=self.target_dir+'/clean_slab/slab.gpw'
# clean_slab=read(self.target_dir+'/clean_slab/input.traj')
# if os.path.isfile(clean_slab_gpw_path):
# opt_slab, pre_calc = restart(clean_slab_gpw_path)
# pre_kpts=pre_calc.__dict__['parameters']['kpts']
# set_kpts=self.calc_dict['kpts']
# if pre_kpts == set_kpts:
# parprint('\t'+self.size+' clean slab is pre-calculated with kpts matched.',file=f)
# else:
# parprint('\t'+self.size+' clean slab pre-calculated has different kpts. Clean slab needs to re-calculate.', file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('\t'+self.size+' clean slab is not pre-calculated.',file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('slab size is 1x1. Clean slab calculation is skipped.', file=f)
# opt_slab=connect('final_database'+'/'+'surf.db').get_atoms(simple_name=self.element+'_'+self.miller_index_tight)
# f.close()
# return opt_slab
# def clean_slab_calculator(self,clean_slab):
# pbc_checker(clean_slab)
# if self.calc_dict['spinpol']:
# clean_slab.set_initial_magnetic_moments([0]*len(clean_slab))
# slab_c_coord,cluster=detect_cluster(clean_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=clean_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# clean_slab.set_constraint(fixed_atom_constrain)
# clean_slab.set_calculator(self.gpaw_calc)
# opt.relax(clean_slab,self.target_dir+'/clean_slab',fmax=self.solver_fmax,maxstep=self.solver_max_step)
# return clean_slab
# def adsorption_energy_calculator(self,traj_file,opt_slab):
# ads_slab=read(traj_file)
# pbc_checker(ads_slab)
# if self.calc_dict['spinpol']:
# ads_slab=apply_magmom(opt_slab,ads_slab)
# fixed_line_constrain=FixedLine(a=-1,direction=[0,0,1])
# slab_c_coord,cluster=detect_cluster(ads_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=ads_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# ads_slab.set_constraint([fixed_atom_constrain,fixed_line_constrain])
# ads_slab.set_calculator(self.gpaw_calc)
# location='/'.join(traj_file.split('/')[:-1])
# f=paropen(self.report_location,'a')
# parprint('Calculating '+('/'.join(location.split('/')[-2:]))+' adsorption site...',file=f)
# f.close()
# opt.relax(ads_slab,location,fmax=self.solver_fmax,maxstep=self.solver_max_step)
# init_ads_site=traj_file.split('/')[-2]
# adsorption_energy=ads_slab.get_potential_energy()-(opt_slab.get_potential_energy()+self.adatom_pot_energy)
# return init_ads_site, adsorption_energy
# def apply_magmom(self,opt_slab,ads_slab):
# slab_formula=ads_slab.get_chemical_symbols()
# magmom=opt_slab.get_magnetic_moments()
# magmom_ls=np.append(magmom,np.mean(magmom))
# magmom_ls[slab_formula.index(self.ads)]=0
# ads_slab.set_initial_magnetic_moments(magmom_ls)
# def initialize_report(self):
# if world.rank==0 and os.path.isfile(self.report_location):
# os.remove(self.report_location)
# f = paropen(self.report_location,'a')
# parprint('Initial Parameters:', file=f)
# parprint('\t'+'xc: '+self.calc_dict['xc'],file=f)
# parprint('\t'+'h: '+str(self.calc_dict['h']),file=f)
# parprint('\t'+'kpts: '+str(self.calc_dict['kpts']),file=f)
# parprint('\t'+'sw: '+str(self.calc_dict['occupations']),file=f)
# parprint('\t'+'spin polarized: '+str(self.calc_dict['spinpol']),file=f)
# if self.calc_dict['spinpol']:
# parprint('\t'+'magmom: initial magnetic moment from slab calculation.',file=f)
# parprint(' ',file=f)
# f.close()
class ads_lowest_ads_site_calc:
def __init__(self,
element,
miller_index_tight,
gpaw_calc,
ads,
adatom_pot_energy,
solver_fmax,
solver_max_step,
restart_calc,
size, #xy size
fix_layer=2,
fix_option='bottom'):
        #initialize
        ##globalize variables
size_xy=str(size[0])+'x'+str(size[1])
target_dir='results/'+element+'/'+'ads/'+size_xy+'/'+miller_index_tight
report_location=target_dir+'_lowest_ads_results_report.txt'
all_ads_file_loc=target_dir+'/'+'adsorbates/'+str(ads)+'/'
##generate report
initialize_report(report_location, gpaw_calc)
##compute clean slab energy
opt_slab_energy, opt_slab_magmom=get_clean_slab(element, miller_index_tight,
report_location, target_dir, size_xy,
fix_layer,solver_fmax,solver_max_step,
gpaw_calc)
##start adsorption calculation
adsorption_energy_dict={}
init_adsorbates_site_lst=[]
final_adsorbates_site_lst=[]
adsorption_energy_lst=[]
all_traj_files=glob(all_ads_file_loc+'lowest_ads_site/*/input.traj')
all_gpw_files=glob(all_ads_file_loc+'lowest_ads_site/*/slab.gpw')
if restart_calc==True and len(all_gpw_files)>=1:
init_adsorbates_site_lst,adsorption_energy_lst=skip_ads_calculated(report_location,
all_gpw_files,
init_adsorbates_site_lst,
adsorption_energy_lst,
final_adsorbates_site_lst,
opt_slab_energy,
adatom_pot_energy)[0:2]
all_gpw_files_ads_site=['/'.join(i.split('/')[:-1]) for i in all_gpw_files]
all_traj_files=[i for i in all_traj_files if '/'.join(i.split('/')[:-1]) not in all_gpw_files_ads_site]
for traj_file in all_traj_files:
output_lst=adsorption_energy_calculator(traj_file,report_location,
opt_slab_energy,adatom_pot_energy,
opt_slab_magmom,gpaw_calc,
solver_fmax,solver_max_step,
calc_type='normal',
fix_layer=fix_layer,fix_option = 'bottom',
)
init_adsorbates_site_lst.append(output_lst[0])
adsorption_energy_lst.append(output_lst[1])
final_adsorbates_site_lst.append(output_lst[2])
adsorption_energy_dict['init_sites[x_y](Ang)']=init_adsorbates_site_lst
adsorption_energy_dict['final_sites[x_y](Ang)']=final_adsorbates_site_lst
adsorption_energy_dict['adsorption_energy(eV)']=adsorption_energy_lst
ads_df=pd.DataFrame(adsorption_energy_dict)
# ads_df.set_index('init_adsorbates_sites[x_y](Ang)',inplace=True)
ads_df.sort_values(by=['adsorption_energy(eV)'],inplace=True)
pd.set_option("display.max_rows", None, "display.max_columns", None)
f=paropen(report_location,'a')
parprint(ads_df,file=f)
parprint('',file=f)
f.close()
min_adsorbates_site=ads_df.iloc[[0]]['init_sites[x_y](Ang)'].to_list()[0]
lowest_ads_energy_slab=read(glob(all_ads_file_loc+'*/'+min_adsorbates_site+'/slab.traj')[0])
#finalize
final_slab_simple_name=element+'_'+miller_index_tight
ads_db=connect('final_database/ads_'+size_xy+'.db')
id=ads_db.reserve(name=final_slab_simple_name)
if id is None:
id=ads_db.get(name=final_slab_simple_name).id
ads_db.update(id=id,atoms=lowest_ads_energy_slab,name=final_slab_simple_name,
ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
else:
ads_db.write(lowest_ads_energy_slab,
id=id,
name=final_slab_simple_name,
ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
f=paropen(report_location,'a')
parprint('Adsorption energy calculation complete.',file=f)
parprint('Selected ads site is: ',file=f)
parprint(min_adsorbates_site,file=f)
f.close()
# def get_clean_slab(self):
# f = paropen(self.report_location,'a')
# parprint('Start clean slab calculation: ', file=f)
# if self.size != '1x1':
# clean_slab_gpw_path=self.target_dir+'/clean_slab/slab.gpw'
# clean_slab=read(self.target_dir+'/clean_slab/input.traj')
# if os.path.isfile(clean_slab_gpw_path):
# opt_slab, pre_calc = restart(clean_slab_gpw_path)
# pre_kpts=pre_calc.__dict__['parameters']['kpts']
# set_kpts=self.calc_dict['kpts']
# if pre_kpts == set_kpts:
# parprint('\t'+self.size+' clean slab is pre-calculated with kpts matched.',file=f)
# else:
# parprint('\t'+self.size+' clean slab pre-calculated has different kpts. Clean slab needs to re-calculate.', file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('\t'+self.size+' clean slab is not pre-calculated.',file=f)
# parprint('\t'+'Calculating '+self.size+' clean slab...',file=f)
# opt_slab=self.clean_slab_calculator(clean_slab)
# else:
# parprint('slab size is 1x1. Clean slab calculation is skipped.', file=f)
# opt_slab=connect('final_database'+'/'+'surf.db').get_atoms(simple_name=self.element+'_'+self.miller_index_tight)
# parprint(' ',file=f)
# f.close()
# return opt_slab
# def clean_slab_calculator(self,clean_slab):
# pbc_checker(clean_slab)
# if self.calc_dict['spinpol']:
# clean_slab.set_initial_magnetic_moments([0]*len(clean_slab))
# slab_c_coord,cluster=detect_cluster(clean_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=clean_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# clean_slab.set_constraint(fixed_atom_constrain)
# clean_slab.set_calculator(self.gpaw_calc)
# opt.relax(clean_slab,self.target_dir+'/clean_slab',fmax=self.solver_fmax,maxstep=self.solver_max_step)
# return clean_slab
# def adsorption_energy_calculator(self,traj_file,opt_slab):
# ads_slab=read(traj_file)
# pbc_checker(ads_slab)
# if self.calc_dict['spinpol']:
# ads_slab=apply_magmom(opt_slab,ads_slab)
# slab_c_coord,cluster=detect_cluster(ads_slab)
# if self.fix_option == 'bottom':
# unique_cluster_index=sorted(set(cluster), key=cluster.index)[self.fix_layer-1]
# max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
# fix_mask=ads_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
# else:
# raise RuntimeError('Only bottom fix option available now.')
# fixed_atom_constrain=FixAtoms(mask=fix_mask)
# ads_slab.set_constraint(fixed_atom_constrain)
# ads_slab.set_calculator(self.gpaw_calc)
# location='/'.join(traj_file.split('/')[:-1])
# f=paropen(self.report_location,'a')
# parprint('\tCalculating '+('/'.join(location.split('/')[-2:]))+' adsorption site...',file=f)
# f.close()
# opt.relax(ads_slab,location,fmax=self.solver_fmax,maxstep=self.solver_max_step)
# init_ads_site=traj_file.split('/')[-2]
# E_slab_ads=ads_slab.get_potential_energy()
# opt_slab_energy=opt_slab.get_potential_energy()
# adsorption_energy=E_slab_ads-(opt_slab_energy+self.adatom_pot_energy)
# final_ads_site=list(np.round(ads_slab.get_positions()[-1][:2],decimals=3))
# final_ads_site_str='_'.join([str(i) for i in final_ads_site])
# return init_ads_site, adsorption_energy, final_ads_site_str
# def initialize_report(self):
# if world.rank==0 and os.path.isfile(self.report_location):
# os.remove(self.report_location)
# f = paropen(self.report_location,'a')
# parprint('Initial Parameters:', file=f)
# parprint('\t'+'xc: '+self.calc_dict['xc'],file=f)
# parprint('\t'+'h: '+str(self.calc_dict['h']),file=f)
# parprint('\t'+'kpts: '+str(self.calc_dict['kpts']),file=f)
# parprint('\t'+'sw: '+str(self.calc_dict['occupations']),file=f)
# parprint('\t'+'spin polarized: '+str(self.calc_dict['spinpol']),file=f)
# if self.calc_dict['spinpol']:
# parprint('\t'+'magmom: initial magnetic moment from slab calculation.',file=f)
# parprint(' ',file=f)
# f.close()
class ads_NN_interact_calc:
def __init__(self,
element,
miller_index_tight,
gpaw_calc,
ads,
solver_fmax,
solver_max_step,
restart_calc,
size, #xy size
sub_dir,
fix_layer=2,
fix_option='bottom'):
        #initialize
        ##globalize variables
size_xy=str(size[0])+'x'+str(size[1])
target_dir='results/'+element+'/'+'ads/'+size_xy+'/'+miller_index_tight
#report_location=target_dir+'_lowest_ads_results_report.txt'
all_ads_file_loc=target_dir+'/'+'adsorbates/'+str(ads)+'/'
##start adsorption calculation
# adsorption_energy_dict={}
# init_adsorbates_site_lst=[]
# final_adsorbates_site_lst=[]
# adsorption_energy_lst=[]
all_traj_files=glob(all_ads_file_loc+sub_dir+'/*/input.traj')
all_gpw_files=glob(all_ads_file_loc+sub_dir+'/*/slab.gpw')
if restart_calc==True and len(all_gpw_files)>=1:
all_gpw_files_ads_site=['/'.join(i.split('/')[:-1]) for i in all_gpw_files]
all_traj_files=[i for i in all_traj_files if '/'.join(i.split('/')[:-1]) not in all_gpw_files_ads_site]
for traj_file in all_traj_files:
interm_gpw='/'.join(traj_file.split('/')[:-1]+['slab_interm.gpw'])
if os.path.isfile(interm_gpw):
ads_slab, gpaw_calc=restart(interm_gpw)
else:
ads_slab=read(traj_file)
pbc_checker(ads_slab)
calc_dict=gpaw_calc.__dict__['parameters']
if calc_dict['spinpol']:
raise RuntimeError('spin polarization calculation not supported.')
slab_c_coord,cluster=detect_cluster(ads_slab)
if fix_option == 'bottom':
unique_cluster_index=sorted(set(cluster), key=cluster.index)[fix_layer-1]
max_height_fix=max(slab_c_coord[cluster==unique_cluster_index])
fix_mask=ads_slab.positions[:,2]<(max_height_fix+0.05) #add 0.05 Ang to make sure all bottom fixed
else:
raise RuntimeError('Only bottom fix option available now.')
fixed_atom_constrain=FixAtoms(mask=fix_mask)
ads_slab.set_constraint(fixed_atom_constrain)
ads_slab.set_calculator(gpaw_calc)
location='/'.join(traj_file.split('/')[:-1])
opt.relax(ads_slab,location,fmax=solver_fmax,maxstep=solver_max_step)
class ads_custom_ads_site_calc:
def __init__(self,
element,
miller_index_tight,
gpaw_calc,
ads,
adatom_pot_energy,
solver_fmax,
solver_max_step,
restart_calc,
size, #xy size
fix_layer=2,
fix_option='bottom'):
        #initialize
        ##globalize variables
size_xy=str(size[0])+'x'+str(size[1])
target_dir='results/'+element+'/'+'ads/'+size_xy+'/'+miller_index_tight
report_location=target_dir+'_custom_ads_results_report.txt'
all_ads_file_loc=target_dir+'/'+'adsorbates/'+str(ads)+'/'
##generate report
initialize_report(report_location, gpaw_calc)
##compute clean slab energy
opt_slab_energy, opt_slab_magmom=get_clean_slab(element, miller_index_tight,
report_location, target_dir, size_xy,
fix_layer,solver_fmax,solver_max_step,
gpaw_calc)
##start adsorption calculation
adsorption_energy_dict={}
init_adsorbates_site_lst=[]
final_adsorbates_site_lst=[]
adsorption_energy_lst=[]
all_traj_files=glob(all_ads_file_loc+'custom/*/input.traj')
all_gpw_files=glob(all_ads_file_loc+'custom/*/slab.gpw')
if restart_calc==True and len(all_gpw_files)>=1:
init_adsorbates_site_lst,adsorption_energy_lst=skip_ads_calculated(report_location,
all_gpw_files,
init_adsorbates_site_lst,
adsorption_energy_lst,
final_adsorbates_site_lst,
opt_slab_energy,
adatom_pot_energy)[0:2]
all_gpw_files_ads_site=['/'.join(i.split('/')[:-1]) for i in all_gpw_files]
all_traj_files=[i for i in all_traj_files if '/'.join(i.split('/')[:-1]) not in all_gpw_files_ads_site]
for traj_file in all_traj_files:
output_lst=adsorption_energy_calculator(traj_file,report_location,
opt_slab_energy,adatom_pot_energy,
opt_slab_magmom,gpaw_calc,
solver_fmax,solver_max_step,
calc_type='normal',
fix_layer=fix_layer,fix_option = 'bottom',
)
init_adsorbates_site_lst.append(output_lst[0])
adsorption_energy_lst.append(output_lst[1])
final_adsorbates_site_lst.append(output_lst[2])
adsorption_energy_dict['init_sites[x_y](Ang)']=init_adsorbates_site_lst
adsorption_energy_dict['final_sites[x_y](Ang)']=final_adsorbates_site_lst
adsorption_energy_dict['adsorption_energy(eV)']=adsorption_energy_lst
ads_df=pd.DataFrame(adsorption_energy_dict)
# ads_df.set_index('init_adsorbates_sites[x_y](Ang)',inplace=True)
ads_df.sort_values(by=['adsorption_energy(eV)'],inplace=True)
pd.set_option("display.max_rows", None, "display.max_columns", None)
f=paropen(report_location,'a')
parprint(ads_df,file=f)
parprint('',file=f)
f.close()
min_adsorbates_site=ads_df.iloc[[0]]['init_sites[x_y](Ang)'].to_list()[0]
#lowest_ads_energy_slab=read(glob(all_ads_file_loc+'*/'+min_adsorbates_site+'/slab.traj')[0])
#finalize
# final_slab_simple_name=element+'_'+miller_index_tight
# ads_db=connect('final_database/ads_'+size_xy+'.db')
# id=ads_db.reserve(name=final_slab_simple_name)
# if id is None:
# id=ads_db.get(name=final_slab_simple_name).id
# ads_db.update(id=id,atoms=lowest_ads_energy_slab,name=final_slab_simple_name,
# ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
# else:
# ads_db.write(lowest_ads_energy_slab,
# id=id,
# name=final_slab_simple_name,
# ads_pot_e=float(ads_df.iloc[[0]]['adsorption_energy(eV)'].to_list()[0]))
f=paropen(report_location,'a')
parprint('Adsorption energy calculation complete.',file=f)
parprint('Selected ads site is: ',file=f)
parprint(min_adsorbates_site,file=f)
f.close()
| 52.047778 | 137 | 0.591465 | 5,765 | 46,843 | 4.447875 | 0.047528 | 0.048085 | 0.0218 | 0.015287 | 0.937446 | 0.925006 | 0.912292 | 0.895991 | 0.887567 | 0.879924 | 0 | 0.007031 | 0.292573 | 46,843 | 899 | 138 | 52.105673 | 0.766763 | 0.359328 | 0 | 0.738 | 0 | 0 | 0.082206 | 0.018587 | 0.01 | 0 | 0 | 0 | 0 | 1 | 0.024 | false | 0 | 0.028 | 0 | 0.072 | 0.078 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
91b1f47d029b3f6c6617f393f3934ffebfee08c0 | 4,881 | py | Python | Python/DataStructures/DoublyLinkedList/test/test_doubly_linked_list.py | ThunderZ007/Data-Structures-and-Algorithms | 148415faf6472115f6848b1a4e21b660b6d327da | [
"MIT"
] | 245 | 2020-10-05T14:52:37.000Z | 2022-03-29T07:40:38.000Z | Python/DataStructures/DoublyLinkedList/test/test_doubly_linked_list.py | ThunderZ007/Data-Structures-and-Algorithms | 148415faf6472115f6848b1a4e21b660b6d327da | [
"MIT"
] | 521 | 2020-10-05T15:25:29.000Z | 2021-11-09T13:24:01.000Z | Python/DataStructures/DoublyLinkedList/test/test_doubly_linked_list.py | ThunderZ007/Data-Structures-and-Algorithms | 148415faf6472115f6848b1a4e21b660b6d327da | [
"MIT"
] | 521 | 2020-10-05T15:29:42.000Z | 2022-03-27T10:22:00.000Z | import unittest
from Python.DataStructures.DoublyLinkedList.doubly_linked_list import DoublyLinkedList
class TestDoublyLinkedList(unittest.TestCase):
def test_list_is_empty_on_initialization(self):
a_doubly_linked_list = DoublyLinkedList()
self.assertEqual(a_doubly_linked_list.length(), 0)
def test_adding_to_front_should_append_node_to_front(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
self.assertEqual(a_doubly_linked_list.length(), 1)
a_doubly_linked_list.insert_at_beginning(4)
self.assertEqual(a_doubly_linked_list.__str__(), "4 --> 5")
def test_adding_to_end_should_append_node_to_end(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
self.assertEqual(a_doubly_linked_list.length(), 1)
a_doubly_linked_list.insert_at_end(4)
self.assertEqual(a_doubly_linked_list.__str__(), "5 --> 4")
self.assertEqual(a_doubly_linked_list.length(), 2)
def test_insert_at_should_append_node_to_correct_position_for_a_short_list(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(3)
a_doubly_linked_list.insert_at(4, 1)
self.assertEqual(a_doubly_linked_list.__str__(), "3 --> 4 --> 5")
self.assertEqual(a_doubly_linked_list.length(), 3)
def test_insert_at_should_append_node_to_correct_position(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(2)
a_doubly_linked_list.insert_at_beginning(1)
a_doubly_linked_list.insert_at(3, 2)
self.assertEqual(a_doubly_linked_list.__str__(), "1 --> 2 --> 3 --> 4 --> 5")
self.assertEqual(a_doubly_linked_list.length(), 5)
def test_insert_at_should_insert_at_start(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at(3, 0)
self.assertEqual(a_doubly_linked_list.__str__(), "3 --> 4 --> 5")
self.assertEqual(a_doubly_linked_list.length(), 3)
def test_insert_at_should_insert_at_end_for_short_list(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at(6, 2)
self.assertEqual(a_doubly_linked_list.__str__(), "4 --> 5 --> 6")
self.assertEqual(a_doubly_linked_list.length(), 3)
def test_insert_at_should_insert_at_end(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at(6, 4)
self.assertEqual(a_doubly_linked_list.__str__(), "4 --> 4 --> 4 --> 5 --> 6")
self.assertEqual(a_doubly_linked_list.length(), 5)
def test_remove_val_should_remove_values_from_list(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(3)
a_doubly_linked_list.insert_at_beginning(2)
a_doubly_linked_list.insert_at_beginning(1)
a_doubly_linked_list.remove_val(1)
self.assertEqual(a_doubly_linked_list.length(), 3)
self.assertEqual(a_doubly_linked_list.__str__(), "2 --> 3 --> 4")
def test_remove_val_should_remove_values_from_list_when_values_are_all_the_same(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.remove_val(4)
self.assertEqual(a_doubly_linked_list.length(), 0)
self.assertEqual(a_doubly_linked_list.__str__(), "")
def test_remove_val_should_remove_values_from_list_when_values_are_all_the_same_except_one(self):
a_doubly_linked_list = DoublyLinkedList()
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(5)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.insert_at_beginning(4)
a_doubly_linked_list.remove_val(4)
self.assertEqual(a_doubly_linked_list.__str__(), "5")
self.assertEqual(a_doubly_linked_list.length(), 1)
self.assertEqual(a_doubly_linked_list.__str__(), "5")
| 45.616822 | 101 | 0.742471 | 702 | 4,881 | 4.554131 | 0.078348 | 0.274007 | 0.365343 | 0.382859 | 0.913043 | 0.913043 | 0.898655 | 0.877385 | 0.801376 | 0.732249 | 0 | 0.020479 | 0.169637 | 4,881 | 106 | 102 | 46.04717 | 0.76832 | 0 | 0 | 0.662791 | 0 | 0 | 0.024175 | 0 | 0 | 0 | 0 | 0 | 0.267442 | 1 | 0.127907 | false | 0 | 0.023256 | 0 | 0.162791 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
37f5e795681c599105fbede02d50ceecd106cfc6 | 8,098 | py | Python | tests/jtes_test.py | DEDDIAG/python-jtes | 5cb0893f44113c80d7ad143f474e58be35b43db5 | [
"MIT"
] | null | null | null | tests/jtes_test.py | DEDDIAG/python-jtes | 5cb0893f44113c80d7ad143f474e58be35b43db5 | [
"MIT"
] | null | null | null | tests/jtes_test.py | DEDDIAG/python-jtes | 5cb0893f44113c80d7ad143f474e58be35b43db5 | [
"MIT"
] | null | null | null | from unittest import TestCase
from jtes import jaccard_timespan_event_score
import numpy as np
class JTESTest(TestCase):
def test_both_empty(self):
self.assertEqual(1, jaccard_timespan_event_score(np.array([]), np.array([])))
def test_empty_prediction(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:10'))
])
self.assertEqual(0, jaccard_timespan_event_score(y_true, np.array([])))
def test_empty_true(self):
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:10'))
])
self.assertEqual(0, jaccard_timespan_event_score(np.array([]), y_pred))
def test_full_overlap(self):
y = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:10'))
])
self.assertEqual(1, jaccard_timespan_event_score(y, y))
def test_single_half_overlap(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T04:00:00'))
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T05:00:00')),
])
self.assertEqual(0.75, jaccard_timespan_event_score(y_true, y_pred))
def test_double_half_overlap(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T04:00:00'))
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T02:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T05:00:00')),
])
self.assertEqual(0.5, jaccard_timespan_event_score(y_true, y_pred))
def test_false_positive(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
(np.datetime64('1901-01-01T03:00:00'), np.datetime64('1901-01-01T03:00:05')),
])
self.assertEqual(2 / 3, jaccard_timespan_event_score(y_true, y_pred))
def test_false_negative(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
(np.datetime64('1901-01-01T03:00:00'), np.datetime64('1901-01-01T03:00:05')),
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
])
self.assertEqual(2 / 3, jaccard_timespan_event_score(y_true, y_pred))
def test_duplicates(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')), # dub
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')), # dub
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')), # dub
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')), # dub
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:05')), # dub
])
with self.assertRaises(ValueError):
jaccard_timespan_event_score(y_true, y_pred)
def test_simple_pred_split(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T00:20:00')),
(np.datetime64('1900-01-01T00:20:00'), np.datetime64('1900-01-01T01:00:00')),
])
self.assertEqual(0.5, jaccard_timespan_event_score(y_true, y_pred))
def test_pred_split(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T04:00:00'))
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T00:20:00')),
(np.datetime64('1900-01-01T00:20:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:10:00')),
(np.datetime64('1900-01-01T03:10:00'), np.datetime64('1900-01-01T03:45:00')),
(np.datetime64('1900-01-01T03:45:00'), np.datetime64('1900-01-01T04:00:00'))
])
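        # Each true span is scored by the mean Jaccard index of the predictions that
        # split it (e.g. 20/60 for the 20-minute piece of the first hour), and the
        # final score is the average over the true spans.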
self.assertEqual(((20/60 + 40/60)/2 + (10/60 + 35/60 + 15/60)/3)/2,
jaccard_timespan_event_score(y_true, y_pred))
def test_pred_overlap(self):
y_true = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T04:00:00'))
])
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T00:30:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:18:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T04:00:00')),
(np.datetime64('1900-01-01T03:10:00'), np.datetime64('1900-01-01T03:44:00'))
])
self.assertEqual(((0.5+1)/2 + (18/60+1+34/60)/3)/2, jaccard_timespan_event_score(y_true, y_pred))
y_pred = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T04:00:00')),
])
self.assertEqual((1/4 + 1/4)/2, jaccard_timespan_event_score(y_true, y_pred))
def test_t0_lt_t1(self):
j_wrong = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:10'), np.datetime64('1900-01-01T03:00:00')) # t0 > t1
])
y_correct = np.array([
(np.datetime64('1900-01-01T00:00:00'), np.datetime64('1900-01-01T01:00:00')),
(np.datetime64('1900-01-01T03:00:00'), np.datetime64('1900-01-01T03:00:10'))
])
with self.assertRaises(ValueError):
jaccard_timespan_event_score(j_wrong, np.array([]))
with self.assertRaises(ValueError):
jaccard_timespan_event_score(np.array([]), j_wrong)
with self.assertRaises(ValueError):
jaccard_timespan_event_score(j_wrong, j_wrong)
with self.assertRaises(ValueError):
jaccard_timespan_event_score(j_wrong, y_correct)
with self.assertRaises(ValueError):
jaccard_timespan_event_score(y_correct, j_wrong)
| 47.91716 | 105 | 0.599037 | 1,183 | 8,098 | 3.986475 | 0.064243 | 0.279898 | 0.359627 | 0.40458 | 0.932146 | 0.919847 | 0.914334 | 0.894402 | 0.88274 | 0.86408 | 0 | 0.283271 | 0.20573 | 8,098 | 168 | 106 | 48.202381 | 0.449938 | 0.003334 | 0 | 0.680851 | 0 | 0 | 0.259177 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 1 | 0.092199 | false | 0 | 0.021277 | 0 | 0.120567 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
37fcc82a7e6dd73bfb066278d3e420e84d169372 | 1,108 | py | Python | onlineshop/category/models.py | amitgit712/multi-ven-ecom | b395e80e5e5bb3c6817e20e179bf0810c6630689 | [
"MIT"
] | null | null | null | onlineshop/category/models.py | amitgit712/multi-ven-ecom | b395e80e5e5bb3c6817e20e179bf0810c6630689 | [
"MIT"
] | null | null | null | onlineshop/category/models.py | amitgit712/multi-ven-ecom | b395e80e5e5bb3c6817e20e179bf0810c6630689 | [
"MIT"
] | null | null | null | from django.db import models
class ProductCategory(models.Model):
name = models.CharField(max_length=250)
slug = models.SlugField(max_length=255, unique=True)
description = models.CharField(max_length=255)
image = models.ImageField(upload_to='product_category/images/', null=True)
active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name_plural = 'Product Categories'
def __str__(self):
return self.name
class BlogCategory(models.Model):
name = models.CharField(max_length=250)
slug = models.SlugField(max_length=255, unique=True)
description = models.CharField(max_length=255)
image = models.ImageField(upload_to='blog_category/images/', null=True)
active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name_plural = 'Blog Categories'
def __str__(self):
return self.name
| 32.588235 | 78 | 0.731949 | 139 | 1,108 | 5.604317 | 0.33813 | 0.06932 | 0.092426 | 0.123235 | 0.893453 | 0.893453 | 0.893453 | 0.806162 | 0.806162 | 0.806162 | 0 | 0.019523 | 0.16787 | 1,108 | 33 | 79 | 33.575758 | 0.82538 | 0 | 0 | 0.72 | 0 | 0 | 0.070397 | 0.040614 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.04 | 0.08 | 0.92 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
53133f14a2e213535262b73fafbee659359186f7 | 15,346 | py | Python | qtim_tools/qtim_utilities/dicom_util.py | QTIM-Lab/qtim_tools | 92bd15ec7a81c5eda70d11a015f74538f3c41e22 | [
"Apache-2.0"
] | 12 | 2017-03-29T18:17:24.000Z | 2020-03-19T05:28:56.000Z | qtim_tools/qtim_utilities/dicom_util.py | QTIM-Lab/qtim_tools | 92bd15ec7a81c5eda70d11a015f74538f3c41e22 | [
"Apache-2.0"
] | 7 | 2017-03-08T21:06:01.000Z | 2017-06-21T19:01:58.000Z | qtim_tools/qtim_utilities/dicom_util.py | QTIM-Lab/qtim_tools | 92bd15ec7a81c5eda70d11a015f74538f3c41e22 | [
"Apache-2.0"
] | 5 | 2017-03-02T09:08:21.000Z | 2019-10-26T05:37:39.000Z | """ This is a utility module for loading dicom data and headers. It borrows heavily from pydicom.
It will likely take a lot of DICOM knowledge to navigate..
"""
import pydicom
import numpy as np
import os
import glob
from collections import defaultdict
from subprocess import call
from qtim_tools.qtim_utilities.file_util import grab_files_recursive, sanitize_filename, replace_suffix
from qtim_tools.qtim_utilities.nifti_util import save_numpy_2_nifti
# factory function to create a suitable instance for accessing files
# def get_compressed_file(data):
# for cls in (ZIPFile, BZ2File, GZFile):
# if cls.is_magic(data):
# return cls(f)
# return None
def get_dicom_dictionary(input_filepath=[], dictionary_regex="*", return_type='name'):
""" Returns a dictionary of dicom tags using pydicom. TODO: let users return tag
dictionary.
"""
if os.path.isdir(input_filepath):
dictionary_file = glob.glob(os.path.join(input_filepath, '*'))[0]
else:
dictionary_file = input_filepath
img_dicom = pydicom.read_file(dictionary_file)
output_dictionary = {}
for key in img_dicom.dir():
try:
if key != 'PixelData':
output_dictionary[key] = img_dicom.data_element(key).value
except:
pass
return output_dictionary
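# Example usage (hypothetical path):
#   header = get_dicom_dictionary('/data/patient01/series001')
#   print header.get('SeriesDescription')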
def get_dicom_pixel_array(dicom, filename):
# Deal with data compression if necessary..
try:
return dicom.pixel_array
# except:
# current_dir = os.getcwd()
# os.chdir(os.path.dirname(filename))
# call("\"C:\\Program Files (x86)\\IrfanView\\i_view32.exe\" " + str(filename).replace('/', '\\') + " /convert=.\\temp.jpg", shell=True)
# array = misc.imread('temp.jpg')
# os.remove('temp.jpg')
# os.chdir(current_dir)
# return array
# except:
# return
except:
call("C:\\Users\\azb22\\Documents\\Software\\DCMTK\\dcmtk-3.6.2-win64-dynamic\\bin" + str(filename).replace('/', '\\') + " /convert=.\\temp.jpg", shell=True)
def get_uncompressed_dicom(filename):
data = pydicom.read_file(filename)
if data is not None and (data.file_meta.TransferSyntaxUID in pydicom.dataset.NotCompressedPixelTransferSyntaxes):
return data
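    # Otherwise decompress the JPEG transfer syntax with DCMTK's dcmdjpeg into a
    # temporary file, read that back with pydicom, and delete it.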
# +cl --conv-lossy convert YCbCr to RGB if lossy JPEG
# +cn --conv-never never convert color space
# +px --color-by-pixel always store color-by-pixel
call(['C:\\Users\\azb22\\Documents\\Software\\DCMTK\\dcmtk-3.6.2-win64-dynamic\\bin\\dcmdjpeg.exe', '+cl', '+px', filename, 'temp.dcm'])
data = pydicom.read_file('temp.dcm')
os.remove('temp.dcm')
return data
def dcm_2_numpy(input_folder, output_folder, verbose=False, naming_tags=['SeriesDescription'], folder_tags=['PatientID', 'StudyDate'], folder_mode='combine', prefix='', suffix='', harden_orientation=True):
""" Uses pydicom to stack an alphabetical list of DICOM files. TODO: Make it
take slice_order into account.
"""
if verbose:
print 'Searching for dicom files...'
found_files = grab_files_recursive(input_folder)
if verbose:
print 'Found', len(found_files), 'in directory. \n'
print 'Checking DICOM compatability...'
dicom_files = []
for file in found_files:
try:
temp_dicom = pydicom.read_file(file)
dicom_files += [[file, temp_dicom.data_element('SeriesInstanceUID').value]]
except:
continue
if verbose:
print 'Found', len(dicom_files), 'DICOM files in directory. \n'
print 'Counting volumes..'
unique_dicoms = defaultdict(list)
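    # Group the slice files into volumes keyed by SeriesInstanceUID.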
for dicom_file in dicom_files:
UID = dicom_file[1]
unique_dicoms[UID] += [dicom_file[0]]
if verbose:
print 'Found', len(unique_dicoms.keys()), 'unique volumes \n'
print 'Saving out files from these volumes.'
output_dict = {}
output_filenames = []
for UID in unique_dicoms.keys():
try:
# Grab DICOMs for a certain Instance
current_files = unique_dicoms[UID]
current_dicoms = [get_uncompressed_dicom(dcm) for dcm in unique_dicoms[UID]]
# print current_files
# Sort DICOMs by Instance.
dicom_instances = [x.data_element('InstanceNumber').value for x in current_dicoms]
current_dicoms = [x for _, x in sorted(zip(dicom_instances, current_dicoms))]
current_files = [x for _, x in sorted(zip(dicom_instances, current_files))]
first_dicom, last_dicom = current_dicoms[0], current_dicoms[-1]
print first_dicom.file_meta
print first_dicom.file_meta.TransferSyntaxUID
# Create a filename for the DICOM
volume_label = '_'.join([first_dicom.data_element(tag).value for tag in naming_tags]).replace(" ", "")
volume_label = prefix + sanitize_filename(volume_label) + suffix + '.nii.gz'
if verbose:
print 'Saving...', volume_label
except:
print 'Could not read DICOM volume SeriesDescription. Skipping UID...', str(UID)
continue
try:
# Extract patient position information for affine creation.
output_affine = np.eye(4)
image_position_patient = np.array(first_dicom.data_element('ImagePositionPatient').value).astype(float)
image_orientation_patient = np.array(first_dicom.data_element('ImageOrientationPatient').value).astype(float)
last_image_position_patient = np.array(last_dicom.data_element('ImagePositionPatient').value).astype(float)
pixel_spacing_patient = np.array(first_dicom.data_element('PixelSpacing').value).astype(float)
# Create DICOM Space affine (don't fully understand, TODO)
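            # Columns 0 and 1: in-plane direction cosines scaled by PixelSpacing;
            # column 2: slice step vector, (last - first ImagePositionPatient)/(n_slices - 1);
            # column 3: origin at the first slice's ImagePositionPatient.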
output_affine[0:3, 0] = pixel_spacing_patient[0] * image_orientation_patient[0:3]
output_affine[0:3, 1] = pixel_spacing_patient[1] * image_orientation_patient[3:6]
output_affine[0:3, 2] = (image_position_patient - last_image_position_patient) / (1 - len(current_dicoms))
output_affine[0:3, 3] = image_position_patient
# Transformations from DICOM to Nifti Space (don't fully understand, TODO)
cr_flip = np.eye(4)
cr_flip[0:2,0:2] = [[0,1],[1,0]]
neg_flip = np.eye(4)
neg_flip[0:2,0:2] = [[-1,0],[0,-1]]
output_affine = np.matmul(neg_flip, np.matmul(output_affine, cr_flip))
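# cr_flip swaps the row/column (i/j) axes and neg_flip negates x and y,
# which takes the DICOM LPS-oriented affine toward the RAS-style orientation
# expected by most NIfTI tools.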
# Create numpy array data...
output_shape = get_dicom_pixel_array(current_dicoms[0], current_files[0]).shape
output_numpy = []
for i in xrange(len(current_dicoms)):
try:
output_numpy += [get_dicom_pixel_array(current_dicoms[i], current_files[i])]
except:
print 'Warning, error at slice', i
output_numpy = np.stack(output_numpy, -1)
# If preferred, harden to identity matrix space (LPS, maybe?)
# Also unsure of the dynamic here, but they work.
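# Roughly: argmax picks, for each data axis, the world axis it mostly points
# along; the array and affine are permuted into that order, and any axis whose
# affine diagonal is negative gets flipped so data increases along +x/+y/+z.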
if harden_orientation is not None:
cx, cy, cz = np.argmax(np.abs(output_affine[0:3,0:3]), axis=0)
output_numpy = np.transpose(output_numpy, (cx,cy,cz))
harden_matrix = np.eye(4)
for dim, i in enumerate([cx,cy,cz]):
harden_matrix[i,i] = 0
harden_matrix[dim, i] = 1
output_affine = np.matmul(output_affine, harden_matrix)
flip_matrix = np.eye(4)
for i in xrange(3):
if output_affine[i,i] < 0:
flip_matrix[i,i] = -1
output_numpy = np.flip(output_numpy, i)
output_affine = np.matmul(output_affine, flip_matrix)
# Create output folder according to tags.
specific_folder = output_folder
for tag in folder_tags:
if specific_folder == output_folder or folder_mode == 'recursive':
specific_folder = os.path.join(specific_folder, sanitize_filename(first_dicom.data_element(tag).value))
elif folder_mode == 'combine':
specific_folder = specific_folder + '_' + sanitize_filename(first_dicom.data_element(tag).value)
if not os.path.exists(specific_folder):
os.makedirs(specific_folder)
# Save out file.
output_filename = os.path.join(specific_folder, volume_label)
if os.path.exists(output_filename) and output_filename in output_filenames:
output_filename = replace_suffix(output_filename, '', '_copy')
save_numpy_2_nifti(output_numpy, output_affine, output_filename)
output_filenames += [output_filename]
except:
print 'Could not read DICOM at SeriesDescription...', volume_label
return output_filenames
return output_dict
def dcm_2_nifti(input_folder, output_folder, verbose=True, naming_tags=['SeriesDescription'], folder_tags=['PatientID', 'StudyDate'], folder_mode='combine', prefix='', suffix='', write_header=False, header_suffix='_header', harden_orientation=True):
""" Uses pydicom to stack an alphabetical list of DICOM files. TODO: Make it
take slice_order into account.
"""
if verbose:
print 'Searching for dicom files...'
found_files = grab_files_recursive(input_folder)
if verbose:
print 'Found', len(found_files), 'in directory. \n'
print 'Checking DICOM compatibility...'
dicom_files = []
for file in found_files:
try:
temp_dicom = pydicom.read_file(file)
dicom_files += [[file, temp_dicom.data_element('SeriesInstanceUID').value]]
except:
continue
if verbose:
print 'Found', len(dicom_files), 'DICOM files in directory. \n'
print 'Counting volumes..'
dicom_headers = []
unique_dicoms = defaultdict(list)
for dicom_file in dicom_files:
UID = dicom_file[1]
unique_dicoms[UID] += [dicom_file[0]]
if verbose:
print 'Found', len(unique_dicoms.keys()), 'unique volumes \n'
print 'Saving out files from these volumes.'
output_dict = {}
output_filenames = []
for UID in unique_dicoms.keys():
try:
# Grab DICOMs for a certain Instance
current_files = unique_dicoms[UID]
current_dicoms = [get_uncompressed_dicom(dcm) for dcm in unique_dicoms[UID]]
# print current_files
# Sort DICOMs by Instance.
dicom_instances = [x.data_element('InstanceNumber').value for x in current_dicoms]
current_dicoms = [x for _,x in sorted(zip(dicom_instances,current_dicoms))]
current_files = [x for _,x in sorted(zip(dicom_instances,current_files))]
first_dicom, last_dicom = current_dicoms[0], current_dicoms[-1]
print first_dicom.file_meta
print first_dicom.file_meta.TransferSyntaxUID
# Create a filename for the DICOM
volume_label = '_'.join([first_dicom.data_element(tag).value for tag in naming_tags]).replace(" ", "")
volume_label = prefix + sanitize_filename(volume_label) + suffix + '.nii.gz'
if verbose:
print 'Saving...', volume_label
except:
print 'Could not read DICOM volume SeriesDescription. Skipping UID...', str(UID)
continue
try:
# Extract patient position information for affine creation.
output_affine = np.eye(4)
image_position_patient = np.array(first_dicom.data_element('ImagePositionPatient').value).astype(float)
image_orientation_patient = np.array(first_dicom.data_element('ImageOrientationPatient').value).astype(float)
last_image_position_patient = np.array(last_dicom.data_element('ImagePositionPatient').value).astype(float)
pixel_spacing_patient = np.array(first_dicom.data_element('PixelSpacing').value).astype(float)
# Create DICOM Space affine (don't fully understand, TODO)
output_affine[0:3, 0] = pixel_spacing_patient[0] * image_orientation_patient[0:3]
output_affine[0:3, 1] = pixel_spacing_patient[1] * image_orientation_patient[3:6]
output_affine[0:3, 2] = (image_position_patient - last_image_position_patient) / (1 - len(current_dicoms))
output_affine[0:3, 3] = image_position_patient
# Transformations from DICOM to Nifti Space (don't fully understand, TODO)
cr_flip = np.eye(4)
cr_flip[0:2,0:2] = [[0,1],[1,0]]
neg_flip = np.eye(4)
neg_flip[0:2,0:2] = [[-1,0],[0,-1]]
output_affine = np.matmul(neg_flip, np.matmul(output_affine, cr_flip))
# Create numpy array data...
output_shape = get_dicom_pixel_array(current_dicoms[0], current_files[0]).shape
output_numpy = []
for i in xrange(len(current_dicoms)):
try:
output_numpy += [get_dicom_pixel_array(current_dicoms[i], current_files[i])]
except:
print 'Warning, error at slice', i
output_numpy = np.stack(output_numpy, -1)
# If preferred, harden to identity matrix space (LPS, maybe?)
# Also unsure of the dynamic here, but they work.
if harden_orientation is not None:
cx, cy, cz = np.argmax(np.abs(output_affine[0:3,0:3]), axis=0)
output_numpy = np.transpose(output_numpy, (cx,cy,cz))
harden_matrix = np.eye(4)
for dim, i in enumerate([cx,cy,cz]):
harden_matrix[i,i] = 0
harden_matrix[dim, i] = 1
output_affine = np.matmul(output_affine, harden_matrix)
flip_matrix = np.eye(4)
for i in xrange(3):
if output_affine[i,i] < 0:
flip_matrix[i,i] = -1
output_numpy = np.flip(output_numpy, i)
output_affine = np.matmul(output_affine, flip_matrix)
# Create output folder according to tags.
specific_folder = output_folder
for tag in folder_tags:
if specific_folder == output_folder or folder_mode == 'recursive':
specific_folder = os.path.join(specific_folder, sanitize_filename(first_dicom.data_element(tag).value))
elif folder_mode == 'combine':
specific_folder = specific_folder + '_' + sanitize_filename(first_dicom.data_element(tag).value)
if not os.path.exists(specific_folder):
os.makedirs(specific_folder)
# Save out file.
output_filename = os.path.join(specific_folder, volume_label)
if os.path.exists(output_filename) and output_filename in output_filenames:
output_filename = replace_suffix(output_filename, '', '_copy')
save_numpy_2_nifti(output_numpy, output_affine, output_filename)
output_filenames += [output_filename]
except:
print 'Could not read DICOM at SeriesDescription...', volume_label
return output_filenames
if __name__ == '__main__':
pass | 41.032086 | 249 | 0.62355 | 1,889 | 15,346 | 4.838539 | 0.14505 | 0.036761 | 0.029759 | 0.027571 | 0.808753 | 0.803063 | 0.803063 | 0.803063 | 0.794092 | 0.794092 | 0 | 0.012485 | 0.274534 | 15,346 | 374 | 250 | 41.032086 | 0.808497 | 0.113776 | 0 | 0.857143 | 0 | 0.008658 | 0.093573 | 0.01626 | 0 | 0 | 0 | 0.010695 | 0 | 0 | null | null | 0.008658 | 0.034632 | null | null | 0.112554 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
728f29a66df3f52aec0272339b981c7e4801b9e2 | 6,081 | py | Python | resources/dot_PyCharm/system/python_stubs/-762174762/PySide/QtGui/QMatrix4x4.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | 1 | 2020-04-20T02:27:20.000Z | 2020-04-20T02:27:20.000Z | resources/dot_PyCharm/system/python_stubs/cache/8cdc475d469a13122bc4bc6c3ac1c215d93d5f120f5cc1ef33a8f3088ee54d8e/PySide/QtGui/QMatrix4x4.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | resources/dot_PyCharm/system/python_stubs/cache/8cdc475d469a13122bc4bc6c3ac1c215d93d5f120f5cc1ef33a8f3088ee54d8e/PySide/QtGui/QMatrix4x4.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | # encoding: utf-8
# module PySide.QtGui
# from C:\Python27\lib\site-packages\PySide\QtGui.pyd
# by generator 1.147
# no doc
# imports
import PySide.QtCore as __PySide_QtCore
import Shiboken as __Shiboken
class QMatrix4x4(__Shiboken.Object):
# no doc
def column(self, *args, **kwargs): # real signature unknown
pass
def copyDataTo(self, *args, **kwargs): # real signature unknown
pass
def data(self, *args, **kwargs): # real signature unknown
pass
def determinant(self, *args, **kwargs): # real signature unknown
pass
def fill(self, *args, **kwargs): # real signature unknown
pass
def flipCoordinates(self, *args, **kwargs): # real signature unknown
pass
def frustum(self, *args, **kwargs): # real signature unknown
pass
def inverted(self, *args, **kwargs): # real signature unknown
pass
def isIdentity(self, *args, **kwargs): # real signature unknown
pass
def lookAt(self, *args, **kwargs): # real signature unknown
pass
def map(self, *args, **kwargs): # real signature unknown
pass
def mapRect(self, *args, **kwargs): # real signature unknown
pass
def mapVector(self, *args, **kwargs): # real signature unknown
pass
def normalMatrix(self, *args, **kwargs): # real signature unknown
pass
def optimize(self, *args, **kwargs): # real signature unknown
pass
def ortho(self, *args, **kwargs): # real signature unknown
pass
def perspective(self, *args, **kwargs): # real signature unknown
pass
def rotate(self, *args, **kwargs): # real signature unknown
pass
def row(self, *args, **kwargs): # real signature unknown
pass
def scale(self, *args, **kwargs): # real signature unknown
pass
def setColumn(self, *args, **kwargs): # real signature unknown
pass
def setRow(self, *args, **kwargs): # real signature unknown
pass
def setToIdentity(self, *args, **kwargs): # real signature unknown
pass
def toAffine(self, *args, **kwargs): # real signature unknown
pass
def toTransform(self, *args, **kwargs): # real signature unknown
pass
def translate(self, *args, **kwargs): # real signature unknown
pass
def transposed(self, *args, **kwargs): # real signature unknown
pass
def __add__(self, y): # real signature unknown; restored from __doc__
""" x.__add__(y) <==> x+y """
pass
def __copy__(self, *args, **kwargs): # real signature unknown
pass
def __div__(self, y): # real signature unknown; restored from __doc__
""" x.__div__(y) <==> x/y """
pass
def __eq__(self, y): # real signature unknown; restored from __doc__
""" x.__eq__(y) <==> x==y """
pass
def __getitem__(self, y): # real signature unknown; restored from __doc__
""" x.__getitem__(y) <==> x[y] """
pass
def __ge__(self, y): # real signature unknown; restored from __doc__
""" x.__ge__(y) <==> x>=y """
pass
def __gt__(self, y): # real signature unknown; restored from __doc__
""" x.__gt__(y) <==> x>y """
pass
def __iadd__(self, y): # real signature unknown; restored from __doc__
""" x.__iadd__(y) <==> x+=y """
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
def __isub__(self, y): # real signature unknown; restored from __doc__
""" x.__isub__(y) <==> x-=y """
pass
def __le__(self, y): # real signature unknown; restored from __doc__
""" x.__le__(y) <==> x<=y """
pass
def __lshift__(self, y): # real signature unknown; restored from __doc__
""" x.__lshift__(y) <==> x<<y """
pass
def __lt__(self, y): # real signature unknown; restored from __doc__
""" x.__lt__(y) <==> x<y """
pass
def __mul__(self, y): # real signature unknown; restored from __doc__
""" x.__mul__(y) <==> x*y """
pass
def __neg__(self): # real signature unknown; restored from __doc__
""" x.__neg__() <==> -x """
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
def __ne__(self, y): # real signature unknown; restored from __doc__
""" x.__ne__(y) <==> x!=y """
pass
def __radd__(self, y): # real signature unknown; restored from __doc__
""" x.__radd__(y) <==> y+x """
pass
def __rdiv__(self, y): # real signature unknown; restored from __doc__
""" x.__rdiv__(y) <==> y/x """
pass
def __reduce__(self, *args, **kwargs): # real signature unknown
pass
def __repr__(self): # real signature unknown; restored from __doc__
""" x.__repr__() <==> repr(x) """
pass
def __rlshift__(self, y): # real signature unknown; restored from __doc__
""" x.__rlshift__(y) <==> y<<x """
pass
def __rmul__(self, y): # real signature unknown; restored from __doc__
""" x.__rmul__(y) <==> y*x """
pass
def __rrshift__(self, y): # real signature unknown; restored from __doc__
""" x.__rrshift__(y) <==> y>>x """
pass
def __rshift__(self, y): # real signature unknown; restored from __doc__
""" x.__rshift__(y) <==> x>>y """
pass
def __rsub__(self, y): # real signature unknown; restored from __doc__
""" x.__rsub__(y) <==> y-x """
pass
def __rtruediv__(self, y): # real signature unknown; restored from __doc__
""" x.__rtruediv__(y) <==> y/x """
pass
def __sub__(self, y): # real signature unknown; restored from __doc__
""" x.__sub__(y) <==> x-y """
pass
def __truediv__(self, y): # real signature unknown; restored from __doc__
""" x.__truediv__(y) <==> x/y """
pass
| 28.957143 | 78 | 0.587568 | 732 | 6,081 | 4.423497 | 0.143443 | 0.22483 | 0.345893 | 0.16677 | 0.774552 | 0.706609 | 0.6958 | 0.6958 | 0.291229 | 0 | 0 | 0.002045 | 0.276106 | 6,081 | 209 | 79 | 29.095694 | 0.73353 | 0.446473 | 0 | 0.482759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.482759 | false | 0.482759 | 0.017241 | 0 | 0.508621 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
72de913fb39a48c5e529c940a97355db7ce283bd | 2,715 | py | Python | tests/test_read_gedcom.py | mustaqimM/life_line_chart | a9bbbbdeb5568aa0cc3b3b585337a3d655f4b2d6 | [
"MIT"
] | 3 | 2020-04-28T08:27:34.000Z | 2022-02-25T12:49:47.000Z | tests/test_read_gedcom.py | mustaqimM/life_line_chart | a9bbbbdeb5568aa0cc3b3b585337a3d655f4b2d6 | [
"MIT"
] | 1 | 2022-03-19T17:20:07.000Z | 2022-03-19T17:20:07.000Z | tests/test_read_gedcom.py | mustaqimM/life_line_chart | a9bbbbdeb5568aa0cc3b3b585337a3d655f4b2d6 | [
"MIT"
] | 3 | 2020-04-28T08:27:43.000Z | 2022-02-25T08:55:36.000Z | # import pytest
# from life_line_chart import GedcomParsing
from life_line_chart import ReadGedcom
import os
import sys
def test_read_sample_file():
data = ReadGedcom.read_data(os.path.join(
os.path.dirname(__file__), 'gramps_sample.ged'))
assert len(data[0]) == 42
assert len(data[1]) == 15
assert str(data[0]['@I1@']) == "OrderedDict([('tag_data', 'INDI'), ('NAME', OrderedDict([('tag_data', 'Keith Lloyd /Smith/'), ('GIVN', OrderedDict([('tag_data', 'Keith Lloyd')])), ('SURN', OrderedDict([('tag_data', 'Smith')]))])), ('SEX', OrderedDict([('tag_data', 'M')])), ('BIRT', OrderedDict([('tag_data', ''), ('TYPE', OrderedDict([('tag_data', 'Birth of Keith Lloyd Smith')])), ('DATE', OrderedDict([('tag_data', '11 AUG 1966')])), ('PLAC', OrderedDict([('tag_data', 'San Francisco, San Francisco Co., CA')]))])), ('FAMC', OrderedDict([('tag_data', '@F8@')])), ('CHAN', OrderedDict([('tag_data', ''), ('DATE', OrderedDict([('tag_data', '21 DEC 2007'), ('TIME', OrderedDict([('tag_data', '01:35:26')]))]))]))])"
assert str(data[1]['@F1@']) == "OrderedDict([('tag_data', 'FAM'), ('HUSB', OrderedDict([('tag_data', '@I27@')])), ('WIFE', OrderedDict([('tag_data', '@I25@')])), ('MARR', OrderedDict([('tag_data', ''), ('TYPE', OrderedDict([('tag_data', 'Marriage of Ingeman Smith and Marta Ericsdotter')])), ('DATE', OrderedDict([('tag_data', 'ABT 1790')])), ('PLAC', OrderedDict([('tag_data', 'Sweden')]))])), ('CHIL', OrderedDict([('tag_data', '@I39@')])), ('CHAN', OrderedDict([('tag_data', ''), ('DATE', OrderedDict([('tag_data', '21 DEC 2007'), ('TIME', OrderedDict([('tag_data', '01:35:26')]))]))]))])"
def test_read_testdata_file():
data = ReadGedcom.read_data(os.path.join(
os.path.dirname(__file__), 'autogenerated.ged'))
assert len(data[0]) == 1361
assert len(data[1]) == 498
assert str(data[0]['@I1@']) == "OrderedDict([('tag_data', 'INDI'), ('NAME', OrderedDict([('tag_data', 'Stephen /Demetro/')])), ('SEX', OrderedDict([('tag_data', 'M')])), ('BIRT', OrderedDict([('tag_data', ''), ('DATE', OrderedDict([('tag_data', '1 JUN 1001')])), ('PLAC', OrderedDict([('tag_data', 'Paris')]))])), ('DEAT', OrderedDict([('tag_data', ''), ('DATE', OrderedDict([('tag_data', '1 JUN 1060')])), ('PLAC', OrderedDict([('tag_data', 'Bruegge')]))])), ('FAMS', OrderedDict([('tag_data', '@F1@')]))])"
assert str(data[1]['@F1@']) == "OrderedDict([('tag_data', 'FAM'), ('HUSB', OrderedDict([('tag_data', '@I1@')])), ('WIFE', OrderedDict([('tag_data', '@I2@')])), ('MARR', OrderedDict([('tag_data', ''), ('DATE', OrderedDict([('tag_data', '1 MAY 1021')])), ('PLAC', OrderedDict([('tag_data', 'Tokio')]))])), ('CHIL', OrderedDict([('tag_data', '@I3@\\n@I4@\\n@I5@')]))])"
| 108.6 | 719 | 0.59558 | 329 | 2,715 | 4.726444 | 0.306991 | 0.369132 | 0.474598 | 0.099035 | 0.569132 | 0.493248 | 0.493248 | 0.453376 | 0.42701 | 0.325402 | 0 | 0.034765 | 0.099448 | 2,715 | 24 | 720 | 113.125 | 0.601227 | 0.020258 | 0 | 0.117647 | 0 | 0.235294 | 0.790286 | 0.412274 | 0 | 0 | 0 | 0 | 0.470588 | 1 | 0.117647 | false | 0 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f401525bd7d58cc08e1d3425bf324090e7781e71 | 101 | py | Python | BackendBaggie/utils.py | Baggie-App/Updateapi | 80f200d7ffd4695e6348ce6bb9a7a31a6b821e77 | [
"MIT"
] | null | null | null | BackendBaggie/utils.py | Baggie-App/Updateapi | 80f200d7ffd4695e6348ce6bb9a7a31a6b821e77 | [
"MIT"
] | null | null | null | BackendBaggie/utils.py | Baggie-App/Updateapi | 80f200d7ffd4695e6348ce6bb9a7a31a6b821e77 | [
"MIT"
] | null | null | null | import random
def create_new_ref_number():
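# Returns a 10-digit numeric reference as a string; random.randint is
# inclusive of both endpoints, so the result always has exactly 10 digits.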
return str(random.randint(1000000000, 9999999999))
| 20.2 | 56 | 0.772277 | 13 | 101 | 5.769231 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229885 | 0.138614 | 101 | 4 | 57 | 25.25 | 0.632184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
f44d75ba655d39114c8b48aeabbeb542330b9694 | 109 | py | Python | src/py/basic/test_append.py | gpk/program-world | 255c2532574ca2b78dcb57c1f9b96e20abe0e118 | [
"Apache-2.0"
] | 1 | 2020-06-30T14:17:46.000Z | 2020-06-30T14:17:46.000Z | src/py/basic/test_append.py | gpk/code | 255c2532574ca2b78dcb57c1f9b96e20abe0e118 | [
"Apache-2.0"
] | 10 | 2020-06-10T23:42:31.000Z | 2022-01-22T12:26:58.000Z | src/py/basic/test_append.py | gpk/code | 255c2532574ca2b78dcb57c1f9b96e20abe0e118 | [
"Apache-2.0"
] | null | null | null | from append import append
def test_append() -> None:
assert "hello world" == append("hello ", "world")
| 18.166667 | 53 | 0.66055 | 14 | 109 | 5.071429 | 0.642857 | 0.28169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192661 | 109 | 5 | 54 | 21.8 | 0.806818 | 0 | 0 | 0 | 0 | 0 | 0.201835 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f45ea2f636e7ffa813251b8d5a27c1417b037c94 | 4,216 | py | Python | 2015/day_11/python/day11.py | josephroquedev/advent-of-code | bb217deb7a5f5ed5c8c04cb726ddadb5b042ee4d | [
"MIT"
] | null | null | null | 2015/day_11/python/day11.py | josephroquedev/advent-of-code | bb217deb7a5f5ed5c8c04cb726ddadb5b042ee4d | [
"MIT"
] | 2 | 2021-06-02T00:41:38.000Z | 2021-11-30T10:05:29.000Z | 2015/day_11/python/day11.py | autoreleasefool/advent-of-code | bb217deb7a5f5ed5c8c04cb726ddadb5b042ee4d | [
"MIT"
] | null | null | null | from aoc import AOC
aoc = AOC(year=2015, day=11)
## Part 1
# The original password
PUZZLE_INPUT = ["v", "z", "b", "x", "k", "g", "h", "b"]
def three_straight_letters(password):
# Checks for a row of 3 letters in the password
for i in range(len(password) - 3):
for j in range(2):
if not ord(password[i + j]) + 1 == ord(password[i + j + 1]):
break
if j == 1:
return True
return False
def has_double_doubles(password):
# Checks for 2 different sets of doubles in the password
double_count = 0
last_double = None
for i in range(len(password) - 1):
if password[i] != last_double and password[i] == password[i + 1]:
double_count += 1
if double_count == 2:
return True
last_double = password[i]
return False
def increment_by_one(position, password):
# Move the letter at position up by 1
# If the letter is 'z', make it 'a' and increment the previous letter
# Skip the letters 'i', 'o' and 'l'
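# For example: incrementing the last position of ['x', 'z'] wraps 'z' to 'a'
# and bumps the previous letter, giving ['y', 'a']; a result of 'i', 'o' or
# 'l' is skipped one further letter.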
if password[position] == "z":
password[position] = "a"
increment_by_one(position - 1, password)
else:
password[position] = chr(ord(password[position]) + 1)
if password[position] in {"i", "o", "l"}:
password[position] = chr(ord(password[position]) + 1)
return password
def increment_all_until_valid(password):
# Skips any letters in the entire string which are 'i', 'o', or 'l'
for i, letter in enumerate(password):
if letter in {"i", "o", "l"}:
increment_by_one(i)
for j in range(i + 1, len(password)):
password[j] = "a"
return password
# Increment until the password is valid
password = increment_by_one(7, PUZZLE_INPUT)
password = increment_all_until_valid(password)
while not has_double_doubles(password) or not three_straight_letters(password):
password = increment_by_one(7, password)
aoc.p1("".join(password))
## Part 2
# The original password
PUZZLE_INPUT = ["v", "z", "b", "x", "k", "g", "h", "b"]
def three_straight_letters(password):
# Checks for a row of 3 letters in the password
for i in range(len(password) - 3):
for j in range(2):
if not ord(password[i + j]) + 1 == ord(password[i + j + 1]):
break
if j == 1:
return True
return False
def has_double_doubles(password):
# Checks for 2 different sets of doubles in the password
double_count = 0
last_double = None
for i in range(len(password) - 1):
if password[i] != last_double and password[i] == password[i + 1]:
double_count += 1
if double_count == 2:
return True
last_double = password[i]
return False
def increment_by_one(position, password):
# Move the letter at position up by 1
# If the letter is 'z', make it 'a' and increment the previous letter
# Skip the letters 'i', 'o' and 'l'
if password[position] == "z":
password[position] = "a"
increment_by_one(position - 1, password)
else:
password[position] = chr(ord(password[position]) + 1)
if password[position] in {"i", "o", "l"}:
password[position] = chr(ord(password[position]) + 1)
return password
def increment_all_until_valid(password):
# Skips any letters in the entire string which are 'i', 'o', or 'l'
for i, letter in enumerate(password):
if letter in {"i", "o", "l"}:
increment_by_one(i)
for j in range(i + 1, len(password)):
password[j] = "a"
return password
# Increment until the password is valid
password = increment_by_one(7, PUZZLE_INPUT)
password = increment_all_until_valid(password)
while not has_double_doubles(password) or not three_straight_letters(password):
password = increment_by_one(7, password)
# Increment until the password is valid
password = increment_by_one(7, password)
password = increment_all_until_valid(password)
while not has_double_doubles(password) or not three_straight_letters(password):
password = increment_by_one(7, password)
aoc.p1("".join(password))
| 31 | 79 | 0.620731 | 598 | 4,216 | 4.250836 | 0.143813 | 0.08812 | 0.06609 | 0.051928 | 0.982297 | 0.982297 | 0.97915 | 0.97915 | 0.97915 | 0.97915 | 0 | 0.01616 | 0.266129 | 4,216 | 135 | 80 | 31.22963 | 0.80543 | 0.185247 | 0 | 0.976744 | 0 | 0 | 0.009962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0.627907 | 0.011628 | 0 | 0.244186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
f4846b933734122133b5a0e747d44b6170d4be8d | 2,654 | py | Python | tests/symmetry/test_spglib_symmetry_finder.py | kijanac/Materia | b49af518c8eff7d3a8c6caff39783e3daf80a7a0 | [
"MIT"
] | null | null | null | tests/symmetry/test_spglib_symmetry_finder.py | kijanac/Materia | b49af518c8eff7d3a8c6caff39783e3daf80a7a0 | [
"MIT"
] | null | null | null | tests/symmetry/test_spglib_symmetry_finder.py | kijanac/Materia | b49af518c8eff7d3a8c6caff39783e3daf80a7a0 | [
"MIT"
] | null | null | null | # import materia as mtr
# import numpy as np
# class StructureTestClass(mtr.Structure):
# def __init__(self, *atoms):
# setattr(self, "atoms", atoms)
# def atoms(self):
# return self.atoms
# def test_align_axes_with_molecule_he():
# ssf = mtr.SpglibSymmetryFinder()
# test_result = ssf._align_rotations_with_molecule(
# inertia_tensor=np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
# )
# check_result = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# assert (test_result == check_result).all()
# def test_align_axes_with_molecule_h2o_norot():
# ssf = mtr.SpglibSymmetryFinder()
# test_result = ssf._align_rotations_with_molecule(
# inertia_tensor=np.array(
# [
# [0.6148148259597002, 0.0, 0.0],
# [0.0, 1.1552667840000002, 0.0],
# [0.0, 0.0, 1.7700816099597003],
# ]
# )
# )
# check_result = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# assert (test_result == check_result).all()
# def test_align_axes_with_molecule_h2o_rot():
# ssf = mtr.SpglibSymmetryFinder()
# test_result = ssf._align_rotations_with_molecule(
# inertia_tensor=np.array(
# [
# [0.6148148259597002, 0.0, 0.0],
# [0.0, 1.7700816099597003, 0.0],
# [0.0, 0.0, 1.1552667840000002],
# ]
# )
# )
# check_result = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
# assert (test_result == check_result).all()
# def test_molecular_pointgroup_h2o_norot():
# ssf = mtr.SpglibSymmetryFinder()
# o = mtr.Atom(element="O", position=(0.000, 0.000, 0.000) * mtr.angstrom)
# h1 = mtr.Atom(element="H", position=(0.757, 0.586, 0.000) * mtr.angstrom)
# h2 = mtr.Atom(element="H", position=(-0.757, 0.586, 0.000) * mtr.angstrom)
# h2o = mtr.Molecule(StructureTestClass(o, h1, h2))
# test_result = ssf.molecular_pointgroup(molecule=h2o)
# check_result = "C2v"
# assert test_result == check_result
# def test_symfinder_molecular_pointgroup_h2o_rot():
# ssf = mtr.SpglibSymmetryFinder()
# o = mtr.Atom(element="O", position=(0.000, 0.000, 0.000) * mtr.angstrom)
# h1 = mtr.Atom(element="H", position=(0.757, 0.000, 0.586) * mtr.angstrom)
# h2 = mtr.Atom(element="H", position=(-0.757, 0.000, 0.586) * mtr.angstrom)
# h2o = mtr.Molecule(StructureTestClass(o, h1, h2))
# test_result = ssf.molecular_pointgroup(molecule=h2o)
# check_result = "C2v"
# assert test_result == check_result
| 29.820225 | 86 | 0.584401 | 376 | 2,654 | 3.946809 | 0.143617 | 0.098383 | 0.123315 | 0.132075 | 0.873315 | 0.858491 | 0.839623 | 0.837601 | 0.78504 | 0.759434 | 0 | 0.142645 | 0.239261 | 2,654 | 88 | 87 | 30.159091 | 0.592372 | 0.944612 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
be7c2d1a5d82b3c3ed53acb89a0617dca0ee66cd | 14,411 | py | Python | submissions/available/Johnson-CausalTesting/Holmes/fuzzers/Peach/Mutators/size.py | brittjay0104/rose6icse | 7b24743b7a805b9ed094b67e4a08bad7894f0e84 | [
"Unlicense"
] | null | null | null | submissions/available/Johnson-CausalTesting/Holmes/fuzzers/Peach/Mutators/size.py | brittjay0104/rose6icse | 7b24743b7a805b9ed094b67e4a08bad7894f0e84 | [
"Unlicense"
] | null | null | null | submissions/available/Johnson-CausalTesting/Holmes/fuzzers/Peach/Mutators/size.py | brittjay0104/rose6icse | 7b24743b7a805b9ed094b67e4a08bad7894f0e84 | [
"Unlicense"
] | null | null | null | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
from Peach.Generators.data import *
from Peach.mutator import *
class SizedVarianceMutator(Mutator):
"""
Change the length of sizes to count - N to count + N.
"""
def __init__(self, peach, node):
Mutator.__init__(self)
SizedVarianceMutator.weight = 2
self.isFinite = True
self.name = "SizedVarianceMutator"
self._peach = peach
self._dataElementName = node.getFullname()
self._n = self._getN(node, 50)
self._range = range(0 - self._n, self._n)
self._currentCount = 0
def _getN(self, node, n):
for c in node.hints:
if c.name == ('{}-N'.format(self.name)):
try:
n = int(c.value)
except:
raise PeachException("Expected numerical value for Hint "
"named [{}]".format(c.name))
return n
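# N defaults to the value passed in (50 here) but can be overridden per
# element with a Hint named "SizedVarianceMutator-N"; in a Pit file this
# would look roughly like <Hint name="SizedVarianceMutator-N" value="200"/>
# (illustrative value, not taken from this file).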
def next(self):
self._currentCount += 1
if self._currentCount >= len(self._range):
raise MutatorCompleted()
def getCount(self):
return len(self._range)
@staticmethod
def supportedDataElement(node):
if isinstance(node, DataElement) and node._HasSizeofRelation(node) \
and node.isMutable:
return True
return False
def sequentialMutation(self, node):
self.changedName = node.getFullnameInDataModel()
self._performMutation(node, self._range[self._currentCount])
def randomMutation(self, node, rand):
self.changedName = node.getFullnameInDataModel()
count = rand.choice(self._range)
self._performMutation(node, count)
def _performMutation(self, node, count):
"""
Perform array mutation using count
"""
relation = node._GetSizeofRelation()
nodeOf = relation.getOfElement()
size = int(node.getInternalValue())
realSize = len(nodeOf.getValue())
n = size + count
# In cases where expressionSet/Get changes the size value +/- some
# amount, we need to take that into account if possible.
diff = size - realSize
## Can we make the value?
if n - diff < 0:
# We can't make N the # we want, so do our best and get out of
# here w/o an assert check.
nodeOf.currentValue = ""
return
## Otherwise Make the value
if n <= 0:
nodeOf.currentValue = ""
elif n < size:
nodeOf.currentValue = str(nodeOf.getInternalValue())[:n - diff]
elif size == 0:
nodeOf.currentValue = "A" * (n - diff)
else:
try:
nodeOf.currentValue = \
(str(nodeOf.getInternalValue()) *
(((n - diff) / realSize) + 2))[:n - diff]
except ZeroDivisionError:
nodeOf.currentValue = ""
# Verify things worked out okay
#try:
# assert((n == long(node.getInternalValue()) and (n-diff) == len(nodeOf.getValue())) or n < 0)
#except:
# print "realSize:", realSize
# print "diff:", diff
# print "node.name:", node.name
# print "nodeOf.name:", nodeOf.name
# print "nodeOf:", nodeOf
# print "n:", n
# print "long(node.getInternalValue()):",long(node.getInternalValue())
# print "len(nodeOf.getValue()):", len(nodeOf.getValue())
# print "repr(nodeOf.getValue()):", repr(nodeOf.getValue())
# raise
class SizedNumericalEdgeCasesMutator(Mutator):
"""
Change the length of sizes to numerical edge cases
"""
def __init__(self, peach, node):
Mutator.__init__(self)
SizedNumericalEdgeCasesMutator.weight = 2
self.isFinite = True
self.name = "SizedNumericalEdgeCasesMutator"
self._peach = peach
self._dataElementName = node.getFullname()
self._n = self._getN(node, 50)
self._range = self._populateValues(node)
self._currentCount = 0
def _populateValues(self, node):
if isinstance(node, Number):
size = node.size
elif isinstance(node, Flag):
size = node.length
if size < 16:
size = 8
elif size < 32:
size = 16
elif size < 64:
size = 32
else:
size = 64
else:
size = 64 # In the case of strings or blobs
nums = []
try:
if size < 16:
gen = BadNumbers8()
else:
gen = BadNumbers16(None, self._n)
# Only if we are testing large memory
#gen = BadNumbers24(None, self._n)
#gen = BadNumbers32(None, self._n)
#gen = BadNumbers(None, self._n)
while True:
nums.append(int(gen.getValue()))
gen.next()
except:
pass
return nums
def _getN(self, node, n):
for c in node.hints:
if c.name == ('{}-N'.format(self.name)):
try:
n = int(c.value)
except:
raise PeachException("Expected numerical value for Hint "
"named [{}]".format(c.name))
return n
def next(self):
self._currentCount += 1
if self._currentCount >= len(self._range):
raise MutatorCompleted()
def getCount(self):
return len(self._range)
@staticmethod
def supportedDataElement(node):
# This will pick up numbers, strings, etc. that have a size-of
# relation.
if isinstance(node, DataElement) and node._HasSizeofRelation(node) \
and node.isMutable:
return True
return False
def sequentialMutation(self, node):
self.changedName = node.getFullnameInDataModel()
self._performMutation(node, self._range[self._currentCount])
def randomMutation(self, node, rand):
self.changedName = node.getFullnameInDataModel()
count = rand.choice(self._range)
self._performMutation(node, count)
def _performMutation(self, node, count):
"""
Perform array mutation using count
"""
relation = node._GetSizeofRelation()
nodeOf = relation.getOfElement()
size = int(node.getInternalValue())
realSize = len(nodeOf.getValue())
n = size + count
# In cases where expressionSet/Get changes the size value +/- some
# amount, we need to take that into account if possible.
diff = size - realSize
## Can we make the value?
if n - diff < 0:
# We can't make N the # we want, so do our best and get out of
# here w/o an assert check.
nodeOf.currentValue = ""
return
## Otherwise make the value
if n <= 0:
nodeOf.currentValue = ""
elif n < size:
nodeOf.currentValue = nodeOf.getInternalValue()[:n - diff]
elif size == 0:
nodeOf.currentValue = "A" * (n - diff)
else:
try:
nodeOf.currentValue = \
(str(nodeOf.getInternalValue()) *
(((n - diff) / realSize) + 2))[:n - diff]
except ZeroDivisionError:
nodeOf.currentValue = ""
# Verify things worked out okay
##try:
## assert((n == long(node.getInternalValue()) and (n-diff) == len(nodeOf.getValue())) or n < 0)
##except:
## print "realSize:", realSize
## print "diff:", diff
## print "node.name:", node.name
## print "nodeOf.name:", nodeOf.name
## print "nodeOf:", nodeOf
## print "n:", n
## print "long(node.getInternalValue()):",long(node.getInternalValue())
## print "len(nodeOf.getValue()):", len(nodeOf.getValue())
## print "repr(nodeOf.getValue()):", repr(nodeOf.getValue())[:100]
## raise
class SizedDataVarianceMutator(Mutator):
"""
Change the length of sized data from count - N to count + N.
Size indicator will stay the same
"""
def __init__(self, peach, node):
Mutator.__init__(self)
SizedDataVarianceMutator.weight = 2
self.isFinite = True
self.name = "SizedDataVarianceMutator"
self._peach = peach
self._dataElementName = node.getFullname()
self._n = self._getN(node, 50)
self._range = range(0 - self._n, self._n)
self._currentCount = 0
def _getN(self, node, n):
for c in node.hints:
if c.name == ('{}-N'.format(self.name)):
try:
n = int(c.value)
except:
raise PeachException("Expected numerical value for Hint "
"named [{}]".format(c.name))
return n
def next(self):
self._currentCount += 1
if self._currentCount >= len(self._range):
raise MutatorCompleted()
def getCount(self):
return len(self._range)
@staticmethod
def supportedDataElement(node):
if isinstance(node, DataElement) and node._HasSizeofRelation(node) \
and node.isMutable:
return True
return False
def sequentialMutation(self, node):
self.changedName = node.getFullnameInDataModel()
self._performMutation(node, self._range[self._currentCount])
def randomMutation(self, node, rand):
self.changedName = node.getFullnameInDataModel()
count = rand.choice(self._range)
self._performMutation(node, count)
def _performMutation(self, node, count):
"""
Perform array mutation using count
"""
relation = node._GetSizeofRelation()
nodeOf = relation.getOfElement()
size = int(node.getInternalValue())
realSize = len(nodeOf.getValue())
# Keep size indicator the same
node.value = node.getValue()
node.currentValue = node.getInternalValue()
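# Caching value/currentValue pins the size field to its original value, so
# once the data element is grown or shrunk below, the declared size no longer
# matches the actual data length; that mismatch is the case being fuzzed.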
# Modify data
n = size + count
if n == 0:
nodeOf.value = ""
elif n < size:
nodeOf.value = nodeOf.getValue()[:n]
elif size == 0:
nodeOf.value = "A" * n
else:
nodeOf.value = (nodeOf.getValue() * ((n / realSize) + 1))[:n]
# Verify things worked out okay
#assert(size == long(node.getInternalValue()) and n == len(nodeOf.getValue()))
class SizedDataNumericalEdgeCasesMutator(Mutator):
"""
Change the length of sizes to numerical edge cases
"""
def __init__(self, peach, node):
Mutator.__init__(self)
SizedDataNumericalEdgeCasesMutator.weight = 2
self.isFinite = True
self.name = "SizedDataNumericalEdgeCasesMutator"
self._peach = peach
self._dataElementName = node.getFullname()
self._n = self._getN(node, 50)
self._range = self._populateValues(node)
self._currentCount = 0
def _populateValues(self, node):
if isinstance(node, Number):
size = node.size
elif isinstance(node, Flag):
size = node.length
if size < 16:
size = 8
elif size < 32:
size = 16
elif size < 64:
size = 32
else:
size = 64
else:
size = 64 # In the case of strings or blobs
nums = []
try:
if size < 16:
gen = BadNumbers8()
else:
gen = BadNumbers16(None, self._n)
# Only if we are testing large memory
#gen = BadNumbers24(None, self._n)
#gen = BadNumbers32(None, self._n)
#gen = BadNumbers(None, self._n)
while True:
nums.append(int(gen.getValue()))
gen.next()
except:
pass
return nums
def _getN(self, node, n):
for c in node.hints:
if c.name == ('{}-N'.format(self.name)):
try:
n = int(c.value)
except:
raise PeachException("Expected numerical value for Hint "
"named [{}]".format(c.name))
return n
def next(self):
self._currentCount += 1
if self._currentCount >= len(self._range):
raise MutatorCompleted()
def getCount(self):
return len(self._range)
@staticmethod
def supportedDataElement(node):
# This will pick up numbers, strings, etc. that have a size-of
# relation.
if isinstance(node, DataElement) and node._HasSizeofRelation(node) \
and node.isMutable:
return True
return False
def sequentialMutation(self, node):
self.changedName = node.getFullnameInDataModel()
self._performMutation(node, self._range[self._currentCount])
def randomMutation(self, node, rand):
self.changedName = node.getFullnameInDataModel()
count = rand.choice(self._range)
self._performMutation(node, count)
def _performMutation(self, node, count):
"""
Perform array mutation using count
"""
relation = node._GetSizeofRelation()
nodeOf = relation.getOfElement()
size = int(node.getInternalValue())
# Keep size indicator the same
node.value = node.getValue()
node.currentValue = node.getInternalValue()
n = count
if n == 0:
nodeOf.value = ""
elif n < size:
nodeOf.value = nodeOf.getValue()[:n]
elif size == 0:
nodeOf.value = "A" * n
else:
nodeOf.value = (nodeOf.getValue() * ((n / size) + 1))[:n]
# Verify things worked out okay
#assert(size == long(node.getInternalValue()) and n == len(nodeOf.getValue()))
| 34.068558 | 107 | 0.544931 | 1,471 | 14,411 | 5.254249 | 0.131883 | 0.023289 | 0.024195 | 0.042438 | 0.920041 | 0.916936 | 0.913055 | 0.892612 | 0.883555 | 0.883555 | 0 | 0.009935 | 0.350427 | 14,411 | 422 | 108 | 34.149289 | 0.815725 | 0.194851 | 0 | 0.934256 | 0 | 0 | 0.026751 | 0.007744 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.00692 | 0.00692 | 0.013841 | 0.207612 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
be858094847a3bd9925b4fb0b598d5f85ca1f0af | 6,853 | py | Python | tests/blackbox/app/test_v1api_rules_metadata.py | biocatchltd/Heksher | b50b3659a606cb188437adb1f95747efb3ba7b59 | [
"MIT"
] | 3 | 2021-01-21T11:41:06.000Z | 2021-10-20T06:51:53.000Z | tests/blackbox/app/test_v1api_rules_metadata.py | biocatchltd/Heksher | b50b3659a606cb188437adb1f95747efb3ba7b59 | [
"MIT"
] | 18 | 2021-02-01T06:38:53.000Z | 2022-02-14T13:46:33.000Z | tests/blackbox/app/test_v1api_rules_metadata.py | biocatchltd/Heksher | b50b3659a606cb188437adb1f95747efb3ba7b59 | [
"MIT"
] | null | null | null | import json
from pytest import mark
@mark.asyncio
async def test_post_rule_metadata(example_rule, app_client):
res = await app_client.post(f'/api/v1/rules/{example_rule}/metadata', data=json.dumps({
'metadata': {'test': False, 'second_key': 12}
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'test': False, 'second_key': 12}
}
@mark.asyncio
async def test_post_rule_metadata_new_key(example_rule, app_client):
res = await app_client.post(f'/api/v1/rules/{example_rule}/metadata', data=json.dumps({
'metadata': {'second_key': 12}
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'test': True, 'second_key': 12}
}
@mark.asyncio
async def test_post_not_existing_rule_metadata(app_client):
res = await app_client.post('/api/v1/rules/1234/metadata', data=json.dumps({
'metadata': {'test': True}
}))
assert res.status_code == 404
@mark.asyncio
async def test_post_rule_first_metadata(example_rule, app_client):
await app_client.put('/api/v1/settings/declare', data=json.dumps({
'name': 'test_setting',
'configurable_features': ['user', 'theme'],
'type': 'int',
'default_value': 0,
'metadata': {}
}))
post_rule_rep = await app_client.post('/api/v1/rules', data=json.dumps({
'setting': 'test_setting',
'feature_values': {'theme': 'bright'},
'value': 0,
'metadata': {}
}))
post_rule_rep.raise_for_status()
j_result = post_rule_rep.json()
rule_id = j_result.pop('rule_id')
res = await app_client.post(f'/api/v1/rules/{rule_id}/metadata', data=json.dumps({
'metadata': {'test': True}
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{rule_id}')
assert rule.json() == {
'setting': 'test_setting',
'value': 0,
'feature_values': [['theme', 'bright']],
'metadata': {'test': True}
}
@mark.asyncio
async def test_put_rule_metadata(example_rule, app_client):
res = await app_client.put(f'/api/v1/rules/{example_rule}/metadata', data=json.dumps({
'metadata': {'first': 'yes', 'second': 'no'}
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'first': 'yes', 'second': 'no'}
}
@mark.asyncio
async def test_put_not_existing_rule_metadata(app_client):
res = await app_client.put('/api/v1/rules/12345/metadata', data=json.dumps({
'metadata': {'test': True}
}))
assert res.status_code == 404
@mark.asyncio
async def test_put_rule_empty_metadata(example_rule, app_client):
res = await app_client.put(f'/api/v1/rules/{example_rule}/metadata', data=json.dumps({
'metadata': {}
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {}
}
@mark.asyncio
async def test_put_rule_metadata_existing_key(example_rule, app_client):
res = await app_client.put(f'/api/v1/rules/{example_rule}/metadata/test', data=json.dumps({
'value': 1000
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'test': 1000}
}
@mark.asyncio
async def test_put_rule_metadata_not_existing_key(example_rule, app_client):
res = await app_client.put(f'/api/v1/rules/{example_rule}/metadata/hello', data=json.dumps({
'value': 'world'
}))
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'test': True, 'hello': 'world'}
}
@mark.asyncio
async def test_delete_rule_metadata(example_rule, app_client):
res = await app_client.delete(f'/api/v1/rules/{example_rule}/metadata')
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {}
}
@mark.asyncio
async def test_delete_not_existing_rule_metadata(app_client):
res = await app_client.delete('/api/v1/rules/1234/metadata')
assert res.status_code == 404
@mark.asyncio
async def test_delete_specific_key_from_rule_metadata(example_rule, app_client):
await app_client.put(f'/api/v1/rules/{example_rule}/metadata/hello', data=json.dumps({
'value': 'world'
}))
res = await app_client.delete(f'/api/v1/rules/{example_rule}/metadata/test')
res.raise_for_status()
rule = await app_client.get(f'/api/v1/rules/{example_rule}')
rule.raise_for_status()
assert rule.json() == {
'setting': 'size_limit',
'value': 10,
'feature_values': [['theme', 'bright']],
'metadata': {'hello': 'world'}
}
@mark.asyncio
async def test_get_rule_metadata(example_rule, app_client):
res = await app_client.get(f'/api/v1/rules/{example_rule}/metadata')
res.raise_for_status()
assert res.json() == {
'metadata': {'test': True}
}
@mark.asyncio
async def test_get_rule_no_metadata(app_client):
await app_client.put('/api/v1/settings/declare', data=json.dumps({
'name': 'test_setting',
'configurable_features': ['theme', 'user'],
'type': 'int'
}))
post_rule_rep = await app_client.post('/api/v1/rules', data=json.dumps({
'setting': 'test_setting',
'feature_values': {'theme': 'bright'},
'value': 0,
'metadata': {}
}))
post_rule_rep.raise_for_status()
j_result = post_rule_rep.json()
rule_id = j_result.pop('rule_id')
res = await app_client.get(f'/api/v1/rules/{rule_id}/metadata')
res.raise_for_status()
assert res.json() == {
'metadata': {}
}
| 31.15 | 96 | 0.626003 | 892 | 6,853 | 4.55157 | 0.08296 | 0.093103 | 0.096552 | 0.056897 | 0.95468 | 0.940394 | 0.920443 | 0.875862 | 0.833251 | 0.78202 | 0 | 0.015731 | 0.202247 | 6,853 | 219 | 97 | 31.292237 | 0.726907 | 0 | 0 | 0.731183 | 0 | 0 | 0.268496 | 0.127681 | 0 | 0 | 0 | 0 | 0.075269 | 1 | 0 | false | 0 | 0.010753 | 0 | 0.010753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
be959db1f91d8b362ddd67c3b3135b9584790bd8 | 3,357 | py | Python | test_script/test_h5py.py | zyxwvu321/CT_seg | 970d7f9b805ee89930d72a8fb5d60c0c6ba0c4fe | [
"MIT"
] | 2 | 2019-12-25T10:30:18.000Z | 2020-01-08T13:28:19.000Z | test_script/test_h5py.py | zyxwvu321/CT_seg | 970d7f9b805ee89930d72a8fb5d60c0c6ba0c4fe | [
"MIT"
] | 1 | 2019-12-25T10:30:44.000Z | 2019-12-25T10:31:42.000Z | test_script/test_h5py.py | zyxwvu321/CT_seg | 970d7f9b805ee89930d72a8fb5d60c0c6ba0c4fe | [
"MIT"
] | 1 | 2020-05-10T11:20:57.000Z | 2020-05-10T11:20:57.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Jun 11 17:58:02 2019
@author: minjie
"""
import h5py
path = './resources/sample_patch.h5'
path_o = 'sample_patch_o.h5'
with h5py.File(path, 'r') as f:
label = f['label'][...]
raw = f['raw'][...]
with h5py.File(path, 'r') as f:
label = f['label'][...]
raw = f['raw'][...]
import os
with h5py.File('./resources/ct01m.h5', 'r') as f:
label = f['label'][...]
raw = f['raw'][...]
weight = f['weight'][...]
with h5py.File('./resources/ct01m_c1.h5', 'r') as f:
label = f['label'][...]
raw = f['raw'][...]
from sklearn.metrics import confusion_matrix
with h5py.File('resources/ct01_predictions.h5', 'r') as f:
mask0 = f['predictions'][...][1]
with h5py.File('resources/ct01.h5', 'r') as f:
label = f['label'][...]
mask = (mask0>=0.5).astype('int')
label = label.astype('int')
cm = confusion_matrix(label.flatten(), mask.flatten())
import matplotlib.pyplot as plt
idx = 100
plt.imshow(mask0[idx,:,:])
plt.imshow(label[idx,:,:].astype('float'))
#%%
import h5py
with h5py.File('resources/ct01m_predictions.h5', 'r') as f:
mask0 = f['predictions'][...]
with h5py.File('resources/ct01m.h5', 'r') as f:
label = f['label'][...]
from sklearn.metrics import confusion_matrix
for idx in range(4):
mask_t = (mask0[idx]>=0.25).astype('int')
label_t = label[idx]
cm = confusion_matrix(label_t.flatten(), mask_t.flatten())
print(cm)
#%%
import h5py
with h5py.File('resources/ct01m_c1_predictions.h5', 'r') as f:
mask0 = f['predictions'][...]
with h5py.File('resources/ct01m_c1.h5', 'r') as f:
label = f['label'][...]
from sklearn.metrics import confusion_matrix
mask_t = (mask0[1]>=0.5).astype('int')
label_t = label
cm = confusion_matrix(label_t.flatten(), mask_t.flatten())
print('ce cm')
print(cm)
recall = cm[1,1]/(cm[1,0]+cm[1,1])
precision = cm[1,1]/(cm[0,1]+cm[1,1])
fscore = 2*recall*precision/(recall+precision)
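# With sklearn's confusion_matrix (rows = true label, columns = prediction),
# cm[1,1] is TP, cm[1,0] is FN and cm[0,1] is FP, so these are the usual
# recall = TP/(TP+FN), precision = TP/(TP+FP) and F1 = 2*P*R/(P+R).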
print(f'recall: {recall:.4f} precision: {precision:.4f} Fscore: {fscore :.4f}')
#%%
import h5py
with h5py.File('resources/ct01m_c1_predictions_doubleconv.h5', 'r') as f:
mask0 = f['predictions'][...]
with h5py.File('resources/ct01m_c1.h5', 'r') as f:
label = f['label'][...]
from sklearn.metrics import confusion_matrix
mask_t = (mask0[1]>=0.5).astype('int')
label_t = label
cm = confusion_matrix(label_t.flatten(), mask_t.flatten())
print('ce cm')
print(cm)
recall = cm[1,1]/(cm[1,0]+cm[1,1])
precision = cm[1,1]/(cm[0,1]+cm[1,1])
fscore = 2*recall*precision/(recall+precision)
print(f'recall: {recall:.4f} precision: {precision:.4f} Fscore: {fscore :.4f}')
#%%
import h5py
with h5py.File('resources/ct01m_c1_predictions.h5', 'r') as f:
mask0 = f['predictions'][...]
with h5py.File('resources/ct01m_c1.h5', 'r') as f:
label = f['label'][...]
from sklearn.metrics import confusion_matrix
#%%
mask_t = (mask0[1]>=0.5).astype('int')
label_t = label
cm = confusion_matrix(label_t.flatten(), mask_t.flatten())
print('ce cm')
print(cm)
recall = cm[1,1]/(cm[1,0]+cm[1,1])
precision = cm[1,1]/(cm[0,1]+cm[1,1])
fscore = 2*recall*precision/(recall+precision)
print(f'recall: {recall:.4f} precision: {precision:.4f} Fscore: {fscore :.4f}')
| 17.856383 | 79 | 0.612154 | 512 | 3,357 | 3.929688 | 0.144531 | 0.053678 | 0.083499 | 0.125249 | 0.836978 | 0.794235 | 0.774851 | 0.748509 | 0.73161 | 0.73161 | 0 | 0.055974 | 0.169794 | 3,357 | 188 | 80 | 17.856383 | 0.665949 | 0.031576 | 0 | 0.741176 | 0 | 0.035294 | 0.225757 | 0.087091 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.141176 | 0 | 0.141176 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe253aabf09ddcb38cdbd3e07f34a2af7ccc838d | 8,738 | py | Python | tests/test_click_commands.py | Paperspace/paperspace-python | 93bdacab520ffc538ecf4d142c5f84c40446d619 | [
"0BSD"
] | 47 | 2017-07-07T11:29:13.000Z | 2021-03-15T21:49:56.000Z | tests/test_click_commands.py | Paperspace/paperspace-python | 93bdacab520ffc538ecf4d142c5f84c40446d619 | [
"0BSD"
] | 14 | 2018-04-11T10:12:54.000Z | 2019-05-31T16:17:28.000Z | tests/test_click_commands.py | Paperspace/paperspace-python | 93bdacab520ffc538ecf4d142c5f84c40446d619 | [
"0BSD"
] | 7 | 2017-08-27T11:21:35.000Z | 2019-06-03T23:52:47.000Z | import mock
from click.testing import CliRunner
from paperspace import constants
from paperspace.cli import cli
@mock.patch("paperspace.client.API")
@mock.patch("paperspace.commands.experiments.CreateExperimentCommand.execute")
def test_should_execute_create_experiment_command_when_cli_singlenode_command_was_executed(command_patched,
api_patched):
api_patched.return_value = mock.MagicMock()
runner = CliRunner()
command = "experiments create singlenode " \
"--name exp1 " \
"--projectId testHandle " \
"--container testContainer " \
"--machineType testType " \
"--command testCommand " \
"--workspaceUrl wUrl " \
"--apiKey some_key"
expected_kwargs = {"name": u"exp1",
"projectHandle": u"testHandle",
"container": u"testContainer",
"machineType": u"testType",
"command": u"testCommand",
"experimentTypeId": constants.ExperimentType.SINGLE_NODE,
"workspaceUrl": "wUrl",
}
result = runner.invoke(cli.cli, command.split())
assert result.exit_code == 0
command_patched.assert_called_once_with(expected_kwargs)
@mock.patch("paperspace.client.API")
@mock.patch("paperspace.commands.experiments.CreateExperimentCommand.execute")
def test_should_execute_create_experiment_command_when_cli_multinode_mpi_command_was_executed(command_patched,
api_patched):
api_patched.return_value = mock.MagicMock()
runner = CliRunner()
command = "experiments create multinode " \
"--name exp1 " \
"--projectId testHandle " \
"--experimentType MPI " \
"--workerContainer testWorkerContainer " \
"--workerMachineType testWorkerMachineType " \
"--workerCommand testWorkerCommand " \
"--workerCount 2 " \
"--parameterServerContainer testParameterServerContainer " \
"--parameterServerMachineType testParameterServerMachineType " \
"--parameterServerCommand testParameterServerCommand " \
"--parameterServerCount 3 " \
"--workspaceUrl wUrl " \
"--apiKey some_key"
expected_kwargs = {"name": u"exp1",
"projectHandle": u"testHandle",
"experimentTypeId": constants.ExperimentType.MPI_MULTI_NODE,
"workerContainer": u"testWorkerContainer",
"workerMachineType": u"testWorkerMachineType",
"workerCommand": u"testWorkerCommand",
"workerCount": 2,
"parameterServerContainer": u"testParameterServerContainer",
"parameterServerMachineType": u"testParameterServerMachineType",
"parameterServerCommand": u"testParameterServerCommand",
"parameterServerCount": 3,
"workspaceUrl": "wUrl",
}
result = runner.invoke(cli.cli, command.split())
assert result.exit_code == 0
command_patched.assert_called_once_with(expected_kwargs)
@mock.patch("paperspace.client.API")
@mock.patch("paperspace.commands.experiments.CreateExperimentCommand.execute")
def test_should_execute_create_experiment_command_when_cli_multinode_grpc_command_was_executed(command_patched,
api_patched):
api_patched.return_value = mock.MagicMock()
runner = CliRunner()
command = "experiments create multinode " \
"--name exp1 " \
"--projectId testHandle " \
"--experimentType GRPC " \
"--workerContainer testWorkerContainer " \
"--workerMachineType testWorkerMachineType " \
"--workerCommand testWorkerCommand " \
"--workerCount 2 " \
"--parameterServerContainer testParameterServerContainer " \
"--parameterServerMachineType testParameterServerMachineType " \
"--parameterServerCommand testParameterServerCommand " \
"--parameterServerCount 3 " \
"--workspaceUrl wUrl"
expected_kwargs = {"name": u"exp1",
"projectHandle": u"testHandle",
"experimentTypeId": constants.ExperimentType.GRPC_MULTI_NODE,
"workerContainer": u"testWorkerContainer",
"workerMachineType": u"testWorkerMachineType",
"workerCommand": u"testWorkerCommand",
"workerCount": 2,
"parameterServerContainer": u"testParameterServerContainer",
"parameterServerMachineType": u"testParameterServerMachineType",
"parameterServerCommand": u"testParameterServerCommand",
"parameterServerCount": 3,
"workspaceUrl": "wUrl",
}
result = runner.invoke(cli.cli, command.split())
assert result.exit_code == 0
command_patched.assert_called_once_with(expected_kwargs)
@mock.patch("paperspace.client.API")
@mock.patch("paperspace.commands.experiments.CreateAndStartExperimentCommand.execute")
def test_should_execute_create_experiment_command_when_cli_create_and_start_singlenode_command_was_executed(
command_patched, api_patched):
api_patched.return_value = mock.MagicMock()
runner = CliRunner()
command = "experiments createAndStart singlenode " \
"--name exp1 " \
"--projectId testHandle " \
"--container testContainer " \
"--machineType testType " \
"--command testCommand " \
"--workspaceUrl wUrl " \
"--apiKey some_key " \
"--no-logs"
expected_kwargs = {"name": u"exp1",
"projectHandle": u"testHandle",
"container": u"testContainer",
"machineType": u"testType",
"command": u"testCommand",
"experimentTypeId": constants.ExperimentType.SINGLE_NODE,
"workspaceUrl": "wUrl",
}
result = runner.invoke(cli.cli, command.split())
assert result.exit_code == 0
command_patched.assert_called_once_with(expected_kwargs)
@mock.patch("paperspace.client.API")
@mock.patch("paperspace.commands.experiments.CreateAndStartExperimentCommand.execute")
def test_should_execute_create_experiment_command_when_cli_create_and_start_multinode_mpi_command_was_executed(
command_patched, api_patched):
api_patched.return_value = mock.MagicMock()
runner = CliRunner()
command = "experiments createAndStart multinode " \
"--name exp1 " \
"--projectId testHandle " \
"--experimentType MPI " \
"--workerContainer testWorkerContainer " \
"--workerMachineType testWorkerMachineType " \
"--workerCommand testWorkerCommand " \
"--workerCount 2 " \
"--parameterServerContainer testParameterServerContainer " \
"--parameterServerMachineType testParameterServerMachineType " \
"--parameterServerCommand testParameterServerCommand " \
"--parameterServerCount 3 " \
"--workspaceUrl wUrl " \
"--no-logs"
expected_kwargs = {"name": u"exp1",
"projectHandle": u"testHandle",
"experimentTypeId": constants.ExperimentType.MPI_MULTI_NODE,
"workerContainer": u"testWorkerContainer",
"workerMachineType": u"testWorkerMachineType",
"workerCommand": u"testWorkerCommand",
"workerCount": 2,
"parameterServerContainer": u"testParameterServerContainer",
"parameterServerMachineType": u"testParameterServerMachineType",
"parameterServerCommand": u"testParameterServerCommand",
"parameterServerCount": 3,
"workspaceUrl": "wUrl",
}
result = runner.invoke(cli.cli, command.split())
assert result.exit_code == 0
command_patched.assert_called_once_with(expected_kwargs)
| 47.48913 | 111 | 0.584916 | 601 | 8,738 | 8.287854 | 0.148087 | 0.018069 | 0.038145 | 0.063843 | 0.978318 | 0.978318 | 0.978318 | 0.978318 | 0.978318 | 0.978318 | 0 | 0.004559 | 0.322156 | 8,738 | 183 | 112 | 47.748634 | 0.8364 | 0 | 0 | 0.87037 | 0 | 0 | 0.391165 | 0.181048 | 0 | 0 | 0 | 0 | 0.061728 | 1 | 0.030864 | false | 0 | 0.024691 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe5d0616fda4ea1055a3bd6b8b7b1c3e8f3760a9 | 135,788 | py | Python | Analysis/pin_pin_beam_equations_classes.py | hotmailbox/Structural-Engineering | f34dcaec728fbb3e3a05c6f29ed5dabc621550cb | [
"BSD-3-Clause"
] | 152 | 2017-08-14T10:06:19.000Z | 2022-03-07T04:48:49.000Z | Analysis/pin_pin_beam_equations_classes.py | hotmailbox/Structural-Engineering | f34dcaec728fbb3e3a05c6f29ed5dabc621550cb | [
"BSD-3-Clause"
] | 15 | 2017-08-13T23:30:18.000Z | 2021-03-25T05:08:49.000Z | Analysis/pin_pin_beam_equations_classes.py | hotmailbox/Structural-Engineering | f34dcaec728fbb3e3a05c6f29ed5dabc621550cb | [
"BSD-3-Clause"
] | 52 | 2017-11-09T09:58:07.000Z | 2022-02-09T16:58:38.000Z | '''
BSD 3-Clause License
Copyright (c) 2019, Donald N. Bockoven III
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
from __future__ import division
from numpy import sign
from numpy import zeros
import numpy as np
import math
def PieceFunctionString(piece_set):
'''
# Returns the general piecewise function in the form of a string
# INPUT: List
# List makeup:
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# the full input list will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
output = ''
for func in piece_set:
i=0
if all(c == 0 for c in func[0]):
line = '0'
else:
line = ''
for c in func[0]:
if c == 0:
pass
elif i == 0:
line = line + '{0:0.4f}'.format(c)
elif i == 1:
if line == '':
line = line + '{0:0.4f}*x'.format(c)
elif c < 0:
line = line + '-{0:0.4f}*x'.format(abs(c))
else:
line = line + '+{0:0.4f}*x'.format(c)
else:
if line == '':
line = line + '{0:0.4f}*x^{1}'.format(c,i)
elif c < 0:
line = line + '-{0:0.4f}*x^{1}'.format(abs(c),i)
else:
line = line + '+{0:0.4f}*x^{1}'.format(c,i)
i+=1
output = output + '{0:0.4f} < x <= {1:0.4f}:\n'.format(func[1][0],func[1][1]) + line + '\n'
return output
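# Illustrative example (hypothetical coefficients, not taken from any particular load):
# PieceFunctionString([[[0, 10.0], [0, 5.0]], [[50.0, -10.0], [5.0, 10.0]]])
# returns the string:
# 0.0000 < x <= 5.0000:
# 10.0000*x
# 5.0000 < x <= 10.0000:
# 50.0000-10.0000*x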
def PieceFunctionStringHTMLTable(piece_set,heading_str):
'''
# Returns the general piecewise function in the form of an HTML table string
# INPUT: List
# List makeup:
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# the full input list will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
output = '<table>\n<tr>\n<th>{0}</th>\n</tr>\n'.format(heading_str)
for func in piece_set:
i=0
if all(c == 0 for c in func[0]):
line = '0'
else:
line = ''
for c in func[0]:
if c == 0:
pass
elif i == 0:
line = line + '{0:0.4f}'.format(c)
elif i == 1:
if line == '':
line = line + '{0:0.4f}*x'.format(c)
elif c < 0:
line = line + ' - {0:0.4f}*x'.format(abs(c))
else:
line = line + ' + {0:0.4f}*x'.format(c)
else:
if line == '':
line = line + '{0:0.4f}*x<sup>{1}</sup>'.format(c,i)
elif c < 0:
line = line + ' - {0:0.4f}*x<sup>{1}</sup>'.format(abs(c),i)
else:
line = line + ' + {0:0.4f}*x<sup>{1}</sup>'.format(c,i)
i+=1
output = output + '<tr>\n<td><u>{0:0.4f} < x <= {1:0.4f}:</u></td>\n</tr>\n<tr>\n<td><b>{2}</b></td>\n</tr>\n'.format(func[1][0],func[1][1],line)
output = output + '</table>\n'
return output
def poly_eval(c_list,x):
i = 0
res=0
if all(c == 0 for c in c_list):
pass
else:
for c in c_list:
res = res + c*math.pow(x,i)
i+=1
return res
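# Example (hypothetical coefficients, shown for illustration only):
# poly_eval([1.0, 2.0, 3.0], 2.0) evaluates 1 + 2*x + 3*x**2 at x = 2 and returns 17.0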
class no_load:
def __init__(self, L):
self.p = 0
self.rl = 0
self.rr = 0
self.L = L
self.kind = 'NL'
self.x_graph = [0]
self.y_graph = [0]
def chart_load(self,x_scale=0, y_scale=0, arrows=0):
x = [0]
y = [0]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[0],[0,self.L]]]
eid = [[[0],[0,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
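# Returned as [RL, ML, RR, MR]; the load classes below follow the same ordering.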
RL = 0
RR = 0
ML = 0
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
return eid
def vx(self,x):
v = 0
return v
def mx(self,x):
m = 0
return m
def eisx(self,x):
eisx = 0
return eisx
def eidx(self,x):
eid = 0
return eid
class pl:
def __init__(self, p, a, L):
self.p = float(p)
self.a = float(a)
self.L = float(L)
self.b = self.L - self.a
self.kind = 'Point'
self.error = ''
if self.a > self.L:
self.error = 'Error a > l'
self.rl = (self.p*self.b)/self.L
self.rr = (self.p*self.a)/self.L
self.c4 = ((-1*self.rl * self.a ** 3) / 3) - ((self.rr * self.a ** 3) / 3) + ((self.rr * self.L * self.a ** 2) / 2)
self.c2 = (-1 / self.L) * ((self.c4) + ((self.rr * self.L ** 3) / 3))
self.c1 = ((-1*self.rr * self.a ** 2) / 2) - ((self.rl * self.a ** 2) / 2) + (self.rr * self.L * self.a) + self.c2
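# c1, c2 and c4 are integration constants for the slope (eis) and deflection
# (eid) expressions; they are set so the deflection is zero at both supports
# and the slope and deflection are continuous at the load point x = a.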
arrow_height = self.p/6.0
#30 degree arrow
arrow_plus= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus= self.a-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus,self.a,arrow_plus,self.a,self.a]
self.y_graph=[arrow_height,0,arrow_height,0,self.p]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = (self.p/6.0)
#30 degree arrow
arrow_plus= (self.a+(arrow_height*math.tan(math.radians(30))))
arrow_minus= (self.a-(arrow_height*math.tan(math.radians(30))))
x=[arrow_minus,self.a,arrow_plus,self.a,self.a]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.p]
y = [j*y_scale for j in y]
else:
x = [self.a*x_scale, self.a*x_scale]
y = [0,self.p*y_scale]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
if self.a == 0 or self.a == self.L:
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[0],[0,self.L]]]
eid = [[[0],[0,self.L]]]
else:
v = [[[self.rl],[0,self.a]],[[-1*self.rr],[self.a,self.L]]]
m = [[[0,self.rl],[0,self.a]],[[(self.rr * self.L),(-1 * self.rr)],[self.a,self.L]]]
eis = [[[self.c1,0,self.rl/2.0],[0,self.a]],[[self.c2,(self.rr * self.L),-1.0*self.rr/2.0],[self.a,self.L]]]
eid = [[[0,self.c1,0,self.rl/6.0],[0,self.a]],[[self.c4, self.c2, self.rr*self.L*0.5,-1*self.rr/6.0],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
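# These are the standard fixed-fixed (both ends fully fixed) beam results for
# a point load P applied at a distance a from the left end, with b = L - a,
# returned as [RL, ML, RR, MR] using this module's sign convention.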
RL = ((self.p*self.b*self.b) / (self.L*self.L*self.L))*((3*self.a)+self.b)
RR = ((self.p*self.a*self.a) / (self.L*self.L*self.L))*(self.a+(3*self.b))
ML = -1*(self.p*self.a*self.b*self.b) / (self.L*self.L)
MR = (self.p*self.a*self.a*self.b) / (self.L*self.L)
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
if x[i] == 0 and self.a == 0:
v[i] = 0
else:
v[i] = self.rl
else:
v[i] = -1 * self.rr
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = self.rl * x[i]
else:
m[i] = (-1 * self.rr * x[i]) + (self.rr * self.L)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = ((self.rl * x[i] ** 2) / 2) + self.c1
else:
eis[i] = ((-1.0 * self.rr * x[i] ** 2)/2.0) + (self.rr * self.L * x[i]) + self.c2
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((self.rl * x[i] ** 3) / 6) + (self.c1 * x[i])
else:
eid[i] = ((-1*self.rr * x[i] ** 3) / 6) + ((self.rr * self.L * x[i] ** 2) / 2) + (self.c2 * x[i]) + self.c4
return eid
def vx(self,x):
x = float(x)
if x <= self.a:
if x==0 and self.a==0:
v = 0
else:
v = self.rl
else:
v = -1 * self.rr
return v
def mx(self,x):
x = float(x)
if x <= self.a:
m = self.rl * x
else:
m = (-1 * self.rr * x) + (self.rr * self.L)
return m
def eisx(self,x):
x = float(x)
if x <= self.a:
eisx = ((self.rl * x ** 2) / 2) + self.c1
else:
eisx = ((-1.0 * self.rr * x ** 2)/2.0) + (self.rr * self.L * x) + self.c2
return eisx
def eidx(self,x):
x = float(x)
if x <= self.a:
eid = ((self.rl * x ** 3) / 6) + (self.c1 * x)
else:
eid = ((-1*self.rr * x ** 3) / 6) + ((self.rr * self.L * x ** 2) / 2) + (self.c2 * x) + self.c4
return eid
class point_moment:
def __init__(self, ma, a, L):
self.ma = float(ma)
self.a = float(a)
self.L = float(L)
self.kind = 'Moment'
self.error = ''
if self.a > self.L:
self.error = 'Error a > L'
self.rr = self.ma/self.L
self.rl = -1.0*self.rr
self.c2 = (-1.0/self.L) * ((self.ma*self.a**2) - (0.5*self.ma*self.a**2) + (self.rl * (self.L**3/6.0)) + (0.5*self.ma*self.L**2))
self.c1 = self.ma*self.a + self.c2
self.c3 = 0
self.c4 = ((-1.0*self.rl*self.L**3)/6.0) - (0.5*self.ma*self.L**2) - (self.c2*self.L)
r = (self.ma/2.0)
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
self.x_graph = [self.a,self.a,self.a]
self.y_graph = [r,0,-r]
x=0
y=0
for a in range(-90, 181):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x-arrow_minus)
self.y_graph.append(y+arrow_height)
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x+arrow_minus)
self.y_graph.append(y+arrow_height)
else:
self.x_graph = [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
self.y_graph = [0,0,0,arrow_height,0,arrow_height,0]
x=0
y=0
for a in range(0,271):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x=[]
y=[]
r = (self.ma/2.0)
if arrows == 1:
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
x = [self.a,self.a,self.a]
y = [r,0,-r]
xi=0
yi=0
for a in range(-90, 181):
xi = (self.a)+((r*math.cos(math.radians(a))))
yi = 0+((r*math.sin(math.radians(a))))
x.append(xi)
y.append(yi)
x.append(xi-arrow_minus)
y.append(yi+arrow_height)
x.append(xi)
y.append(yi)
x.append(xi+arrow_minus)
y.append(yi+arrow_height)
else:
x = [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
y = [0,0,0,arrow_height,0,arrow_height,0]
xi=0
yi=0
for a in range(0,271):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
else:
if self.ma <0:
x = [self.a,self.a,self.a]
y = [r,0,-r]
xi=0
yi=0
for a in range(-90, 181):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
else:
x = [self.a-r,self.a,self.a+r]
y = [0,r,0]
xi=0
yi=0
for a in range(0,271):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
x = [i*x_scale for i in x]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[self.rl],[0,self.L]]]
if self.a == 0:
m = [[[self.ma,self.rl],[0,self.L]]]
elif self.a == self.L:
m = [[[0,self.rl],[0,self.L]]]
else:
m = [[[0,self.rl],[0,self.a]],[[self.ma,self.rl],[self.a,self.L]]]
eis = [[[self.c1,0,0.5*self.rl],[0,self.a]],[[self.c2,self.ma,0.5*self.rl],[self.a,self.L]]]
eid = [[[self.c3, self.c1,0,((1/6.0)*self.rl)],[0,self.a]],[[self.c4,self.c2,0.5*self.ma,(1/6.0)*self.rl],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = ((-6.0*self.ma*self.a) / (self.L*self.L*self.L)) * (self.L-self.a)
RR = -1.0*RL
ML = ((-1.0*self.ma) / (self.L*self.L))*((self.L*self.L)-(4*self.L*self.a)+(3*self.a*self.a))
MR = -1.0*(self.ma / (self.L*self.L))*((3*self.a*self.a)-(2*self.a*self.L))
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
v[i] = self.rl
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
if x[i] == 0 and self.a == 0:
m[i] = self.ma
elif x[i] == self.L and self.a == self.L:
m[i] = -1.0*self.ma
else:
m[i] = self.rl * x[i]
else:
m[i] = (self.rl * x[i]) + self.ma
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = (0.5*self.rl*x[i]**2) + self.c1
else:
eis[i] = (0.5*self.rl*x[i]**2) + (self.ma*x[i]) + self.c2
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((1/6.0)*self.rl*x[i]**3) + (self.c1*x[i]) + self.c3
else:
eid[i] = (1/6.0)*self.rl*x[i]**3 + (0.5*self.ma*x[i]**2) + (self.c2*x[i]) + self.c4
return eid
def vx(self,x):
x = float(x)
v = self.rl
return v
def mx(self,x):
x = float(x)
if x <= self.a:
if x == 0 and self.a == 0:
m = self.ma
elif x == self.L and self.a == self.L:
m = -1.0*self.ma
else:
m = self.rl * x
else:
m = (self.rl * x) + self.ma
return m
def eisx(self,x):
x = float(x)
if x <= self.a:
eis = (0.5*self.rl*x**2) + self.c1
else:
eis = (0.5*self.rl*x**2) + (self.ma*x) + self.c2
return eis
def eidx(self,x):
x = float(x)
if x <= self.a:
eid = ((1/6.0)*self.rl*x**3) + (self.c1*x) + self.c3
else:
eid = (1/6.0)*self.rl*x**3 + (0.5*self.ma*x**2) + (self.c2*x) + self.c4
return eid
class udl:
def __init__(self, w1, a, b, L):
self.w1 = float(w1)
self.a = float(a)
self.L = float(L)
self.b = float(b)
self.c = b-a
self.kind = 'UDL'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
else:
pass
self.rl = (self.w1 * self.c) - (((self.w1 * self.c) * (self.a + (self.c / 2))) / self.L)
self.rr = (((self.w1 * self.c) * (self.a + (self.c / 2))) / self.L)
self.c1 = 0
self.c2 = ((-1 * self.w1 * self.a ** 2) / 2)
self.c3 = self.rr * self.L
self.c7 = 0
self.c8 = ((-1 * self.c1 * self.a ** 2) / 2) + ((self.c2 * self.a ** 2) / 2) + ((5 * self.w1 * self.a ** 4) / 24) + self.c7
self.c9 = ((-1 * self.rl * self.b ** 3) / 3) - ((self.rr * self.b ** 3) / 3) + ((self.w1 * self.b ** 4) / 8) - ((self.w1 * self.a * self.b ** 3) / 3) - ((self.c2 * self.b ** 2) / 2) + ((self.c3 * self.b ** 2) / 2) + self.c8
self.c6 = ((self.rr * self.L ** 2) / 6) - ((self.c3 * self.L) / 2) - (self.c9 / self.L)
self.c5 = ((-1 * self.rl * self.b ** 2) / 2) + ((self.w1 * self.b ** 3) / 6) - ((self.w1 * self.a * self.b ** 2) / 2) - ((self.rr * self.b ** 2) / 2) + (self.c3 * self.b) - (self.c2 * self.b) + self.c6
self.c4 = ((self.w1 * self.a ** 3) / 3) + (self.c2 * self.a) + self.c5 - (self.c1 * self.a)
arrow_height = self.w1/12.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x=[]
y=[]
if arrows == 1:
arrow_height = self.w1/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w1,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[self.rl],[0,self.a]],[[(self.rl+self.w1*self.a),-1.0*self.w1],[self.a,self.b]],[[-1*self.rr],[self.b,self.L]]]
m = [[[self.c1,self.rl],[0,self.a]],[[self.c2,self.rl+(self.w1*self.a),-0.5*self.w1],[self.a,self.b]],[[self.c3,-1.0*self.rr],[self.b,self.L]]]
eis = [[[self.c4,self.c1,0.5*self.rl],[0,self.a]],[[self.c5,self.c2,0.5*(self.rl+(self.w1*self.a)),(-1/6.0)*self.w1],[self.a,self.b]],[[self.c6,self.c3,-0.5*self.rr],[self.b,self.L]]]
eid = [[[self.c7,self.c4,0.5*self.c1,1/6.0*self.rl],[0,self.a]],[[self.c8, self.c5, 0.5*self.c2,(1/6.0)*(self.rl+(self.w1*self.a)),-1.0*(self.w1 / 24.0)],[self.a,self.b]],[[self.c9,self.c6,0.5*self.c3,((-1.0 * self.rr) / 6.0)],[self.b,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = self.rl
elif x[i]<=self.b:
v[i] = self.rl - (self.w1 * (x[i] - self.a))
else:
v[i] = -1 * self.rr
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = (self.rl * x[i]) + self.c1
elif x[i] <= self.b:
m[i] = (self.rl * x[i]) - ((self.w1 * x[i] ** 2) / 2) + (self.w1 * self.a * x[i]) + self.c2
else:
m[i] = (-1 * self.rr * x[i]) + self.c3
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = ((self.rl * x[i] ** 2) / 2.0) + (self.c1 * x[i]) + self.c4
elif x[i] <= self.b:
eis[i] = ((self.rl * x[i] **2) / 2.0) - ((self.w1 * x[i] ** 3) / 6.0) + ((self.w1 * self.a * x[i] **2) / 2.0) + (self.c2 * x[i]) + self.c5
else:
eis[i] = ((-1.0 * self.rr * x[i] ** 2) / 2.0) + (self.c3 * x[i]) + self.c6
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((self.rl * x[i] ** 3) / 6) + ((self.c1 * x[i] ** 2) / 2) + (self.c4 * x[i]) + self.c7
elif x[i]<=self.b:
eid[i] = ((self.rl * x[i] ** 3) / 6) - ((self.w1 * x[i] ** 4) / 24) + ((self.w1 * self.a * x[i] ** 3) / 6) + ((self.c2 * x[i] ** 2) / 2) + (self.c5 * x[i]) + self.c8
else:
eid[i] = ((-1 * self.rr * x[i] ** 3) / 6) + ((self.c3 * x[i] ** 2) / 2) + (self.c6 * x[i]) + self.c9
return eid
def vx(self,x):
x = float(x)
if x <= self.a:
v = self.rl
elif x<=self.b:
v = self.rl - (self.w1 * (x - self.a))
else:
v = -1 * self.rr
return v
def mx(self,x):
x = float(x)
if x <= self.a:
m = (self.rl * x) + self.c1
elif x <= self.b:
m = (self.rl * x) - ((self.w1 * x ** 2) / 2) + (self.w1 * self.a * x) + self.c2
else:
m = (-1 * self.rr * x) + self.c3
return m
def eisx(self,x):
x = float(x)
if x <= self.a:
eis = ((self.rl * x ** 2) / 2.0) + (self.c1 * x) + self.c4
elif x <= self.b:
eis = ((self.rl * x **2) / 2.0) - ((self.w1 * x ** 3) / 6.0) + ((self.w1 * self.a * x **2) / 2.0) + (self.c2 * x) + self.c5
else:
eis = ((-1.0 * self.rr * x ** 2) / 2.0) + (self.c3 * x) + self.c6
return eis
def eidx(self,x):
x = float(x)
if x <= self.a:
eid = ((self.rl * x ** 3) / 6) + ((self.c1 * x ** 2) / 2) + (self.c4 * x) + self.c7
elif x<=self.b:
eid = ((self.rl * x ** 3) / 6) - ((self.w1 * x ** 4) / 24) + ((self.w1 * self.a * x ** 3) / 6) + ((self.c2 * x ** 2) / 2) + (self.c5 * x) + self.c8
else:
eid = ((-1 * self.rr * x ** 3) / 6) + ((self.c3 * x ** 2) / 2) + (self.c6 * x) + self.c9
return eid
def fef(self):
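# Fixed End Forces
# No closed-form formula is used here; the end moments come from a consistent
# deformation (flexibility) solution: solve
#     [-L/3   L/6 ] [ML]   [-eis(0)]
#     [ L/6  -L/3 ] [MR] = [-eis(L)]
# so the end moments cancel the simple-span end slopes, then superimpose the
# reactions of those end moments (modeled as point_moment loads at x = 0 and
# x = L) onto the simple-span reactions rl and rr.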
eis0 = self.eisx(0)
eisL = self.eisx(self.L)
s = np.array([[-1.0*eis0],[-1.0*eisL]])
ems = np.array([[-1.0*self.L/3.0 , self.L/6.0],[self.L/6.0 , -1.0*self.L/3.0]])
fem = np.linalg.solve(ems,s)
mo = point_moment(fem[0][0],0,self.L)
ml = point_moment(fem[1][0],self.L,self.L)
RL = self.rl+mo.rl+ml.rl
RR = self.rr+mo.rr+ml.rr
ML = fem[0][0]
MR = fem[1][0]
return [RL,ML,RR,MR]
class trap:
def __init__(self, w1, w2, a, b, L):
self.w1 = float(w1)
self.w2 = float(w2)
self.a = float(a)
self.L = float(L)
self.b = float(b)
self.c = self.b-self.a
self.kind = 'TRAP'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
elif sign(self.w1) != sign(self.w2) and self.w1 !=0 and self.w2 !=0:
self.error = 'Error w1 and w2 change direction'
else:
pass
self.s = (self.w2 -self.w1)/self.c
self.xbar = (self.c * ((2 * self.w2) + self.w1)) / (3 * (self.w2 + self.w1))
self.W = self.c * ((self.w1 + self.w2) / 2)
self.rr = (self.W * (self.a + self.xbar)) / self.L
self.rl = self.W - self.rr
self.c1 = 0
self.c2 = self.c1 + ((self.a ** 3 * self.s) / 6) + ((self.a ** 2 * (self.w1 - (self.s * self.a))) / 2) + ((((self.s * self.a) - (2 * self.w1)) * self.a ** 2) / 2)
self.c3 = self.rr * self.L
self.c7 = 0
self.c8 = ((-1 * self.c1 * self.a ** 2) / 2) - ((self.a ** 5 * self.s) / 30) - ((self.a ** 4 * (self.w1 - (self.s * self.a))) / 8) - ((((self.s * self.a) - (2 * self.w1)) * self.a ** 4) / 6) + ((self.c2 * self.a ** 2) / 2) + self.c7
self.c9 = ((-1 * self.rl * self.b ** 3) / 3) + ((self.b ** 5 * self.s) / 30) + ((self.b ** 4 * (self.w1 - (self.s * self.a))) / 8) + ((((self.s * self.a) - (2 * self.w1)) * self.a * self.b ** 3) / 6) - ((self.c2 * self.b ** 2) / 2) + self.c8 - ((self.rr * self.b ** 3) / 3) + ((self.c3 * self.b ** 2) / 2)
self.c6 = (((self.rr * self.L ** 3) / 6) - ((self.c3 * self.L ** 2) / 2) - self.c9) / self.L
self.c5 = ((-1 * self.rr * self.b ** 2) / 2) + (self.c3 * self.b) + self.c6 - ((self.rl * self.b ** 2) / 2) + ((self.b ** 4 * self.s) / 24) + ((self.b ** 3 * (self.w1 - (self.s * self.a))) / 6) + ((((self.s * self.a) - (2 * self.w1)) * self.a * self.b ** 2) / 4) - (self.c2 * self.b)
self.c4 = ((-1 * self.a ** 4 * self.s) / 24) - ((self.a ** 3 * (self.w1 - (self.s * self.a))) / 6) - ((((self.s * self.a) - (2 * self.w1)) * self.a ** 3) / 4) + (self.c2 * self.a) + self.c5 - (self.c1 * self.a)
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x=[]
y=[]
if arrows == 1:
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w2,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[self.rl],[0,self.a]],[[self.rl- ((((self.s * self.a) - (2 * self.w1)) * self.a) / 2),-1.0*((self.w1 - (self.s * self.a))),-1.0*(self.s/ 2)],[self.a,self.b]],[[-1.0*self.rr],[self.b,self.L]]]
m = [[[self.c1,self.rl],[0,self.a]],[[self.c2,self.rl - ((((self.s * self.a) - (2 * self.w1)) * self.a) / 2.0),-1.0*((self.w1 - (self.s * self.a)) / 2.0),-1.0*((self.s) / 6.0)],[self.a,self.b]],[[self.c3,-1.0*self.rr],[self.b,self.L]]]
eis = [[[self.c4,self.c1,(self.rl / 2.0)],[0,self.a]],[[self.c5,self.c2,(self.rl/ 2.0) - ((((self.s * self.a) - (2 * self.w1)) * self.a) / 4.0), -1.0*((self.w1 - (self.s * self.a)) / 6.0),-1.0*(self.s / 24.0)],[self.a,self.b]],[[self.c6,self.c3,((-1.0* self.rr) / 2)],[self.b,self.L]]]
eid = [[[self.c7,self.c4,(self.c1 / 2.0),(self.rl/ 6.0)],[0,self.a]],[[self.c8,self.c5,self.c2 / 2.0,(self.rl / 6.0) - ((((self.s * self.a) - (2 * self.w1)) * self.a) / 12.0), -1.0*((self.w1 - (self.s * self.a)) / 24),-1.0*(self.s / 120.0)],[self.a,self.b]],[[self.c9,self.c6,(self.c3 / 2.0),((-1.0 * self.rr) / 6.0)],[self.b,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = self.rl
elif x[i]<=self.b:
v[i] = self.rl - ((x[i] ** 2 * self.s) / 2) - (x[i] * (self.w1 - (self.s * self.a))) - ((((self.s * self.a) - (2 * self.w1)) * self.a) / 2)
else:
v[i] = -1 * self.rr
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = (self.rl * x[i]) + self.c1
elif x[i] <= self.b:
m[i] = (self.rl * x[i]) - ((x[i] ** 3 * self.s) / 6) - ((x[i] ** 2 * (self.w1 - (self.s * self.a))) / 2) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x[i]) / 2) + self.c2
else:
m[i] = (-1 * self.rr * x[i]) + self.c3
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = ((self.rl * x[i] ** 2) / 2) + (self.c1 * x[i]) + self.c4
elif x[i] <= self.b:
eis[i] = ((self.rl * x[i] ** 2) / 2) - ((x[i] ** 4 * self.s) / 24) - ((x[i] ** 3 * (self.w1 - (self.s * self.a))) / 6) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x[i] ** 2) / 4) + (self.c2 * x[i]) + self.c5
else:
eis[i] = ((-1 * self.rr * x[i] ** 2) / 2) + (self.c3 * x[i]) + self.c6
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((self.rl * x[i] ** 3) / 6) + ((self.c1 * x[i] ** 2) / 2) + (self.c4 * x[i]) + self.c7
elif x[i]<=self.b:
eid[i] = ((self.rl * x[i] ** 3) / 6) - ((x[i] ** 5 * self.s) / 120) - ((x[i] ** 4 * (self.w1 - (self.s * self.a))) / 24) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x[i] ** 3) / 12) + ((self.c2 * x[i] ** 2) / 2) + (self.c5 * x[i]) + self.c8
else:
eid[i] = ((-1 * self.rr * x[i] ** 3) / 6) + ((self.c3 * x[i] ** 2) / 2) + (self.c6 * x[i]) + self.c9
return eid
def vx(self,x):
x = float(x)
if x <= self.a:
v = self.rl
elif x<=self.b:
v = self.rl - ((x ** 2 * self.s) / 2) - (x * (self.w1 - (self.s * self.a))) - ((((self.s * self.a) - (2 * self.w1)) * self.a) / 2)
else:
v = -1 * self.rr
return v
def mx(self,x):
x = float(x)
if x <= self.a:
m = (self.rl * x) + self.c1
elif x <= self.b:
m = (self.rl * x) - ((x ** 3 * self.s) / 6) - ((x ** 2 * (self.w1 - (self.s * self.a))) / 2) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x) / 2) + self.c2
else:
m = (-1 * self.rr * x) + self.c3
return m
def eisx(self,x):
x = float(x)
if x <= self.a:
eis = ((self.rl * x ** 2) / 2) + (self.c1 * x) + self.c4
elif x <= self.b:
eis = ((self.rl * x ** 2) / 2) - ((x ** 4 * self.s) / 24) - ((x ** 3 * (self.w1 - (self.s * self.a))) / 6) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x ** 2) / 4) + (self.c2 * x) + self.c5
else:
eis = ((-1 * self.rr * x ** 2) / 2) + (self.c3 * x) + self.c6
return eis
def eidx(self,x):
x = float(x)
if x <= self.a:
eid = ((self.rl * x ** 3) / 6) + ((self.c1 * x ** 2) / 2) + (self.c4 * x) + self.c7
elif x<=self.b:
eid = ((self.rl * x ** 3) / 6) - ((x ** 5 * self.s) / 120) - ((x ** 4 * (self.w1 - (self.s * self.a))) / 24) - ((((self.s * self.a) - (2 * self.w1)) * self.a * x ** 3) / 12) + ((self.c2 * x ** 2) / 2) + (self.c5 * x) + self.c8
else:
eid = ((-1 * self.rr * x ** 3) / 6) + ((self.c3 * x ** 2) / 2) + (self.c6 * x) + self.c9
return eid
def fef(self):
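# Fixed End Forces: same consistent deformation approach as udl.fef, i.e.
# solve for the end moments that cancel the simple-span end slopes and then
# superimpose their reactions on rl and rr.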
eis0 = self.eisx(0)
eisL = self.eisx(self.L)
s = np.array([[-1.0*eis0],[-1.0*eisL]])
ems = np.array([[-1.0*self.L/3.0 , self.L/6.0],[self.L/6.0 , -1.0*self.L/3.0]])
fem = np.linalg.solve(ems,s)
mo = point_moment(fem[0][0],0,self.L)
ml = point_moment(fem[1][0],self.L,self.L)
RL = self.rl+mo.rl+ml.rl
RR = self.rr+mo.rr+ml.rr
ML = fem[0][0]
MR = fem[1][0]
return [RL,ML,RR,MR]
class end_delta:
def __init__(self, delta_i, delta_j, L):
'''
Important note: it is assumed that delta_i and delta_j
have already been divided by E and I. If this is being used
in combination with other loads, make sure consistent
units are being used.
'''
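# Illustrative use (hypothetical numbers): end_delta(0.0, 0.5, 120.0) models a
# relative support displacement of 0.5 at the right end of a 120 long span and
# gives a straight-line displaced shape with slope (0.5 - 0.0)/120.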
self.rl = 0
self.rr = 0
self.deltai = delta_i
self.deltaj = delta_j
self.L = L
self.slope = (delta_j - delta_i)/self.L
self.kind = 'END_DELTA'
self.x_graph = [0]
self.y_graph = [0]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x=[0]
y=[0]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[self.slope],[0,self.L]]]
eid = [[[self.deltai,self.slope],[0,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def v(self,x):
iters = len(x)
v=zeros(iters)
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
eis[i] = self.slope
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
eid[i] = self.slope*x[i] + self.deltai
return eid
def vx(self,x):
v = 0
return v
def mx(self,x):
m = 0
return m
def eisx(self,x):
eisx = self.slope
return eisx
def eidx(self,x):
eid = self.slope*x + self.deltai
return eid
def fef(self):
eis0 = self.eisx(0)
eisL = self.eisx(self.L)
s = np.array([[-1.0*eis0],[-1.0*eisL]])
ems = np.array([[-1.0*self.L/3.0 , self.L/6.0],[self.L/6.0 , -1.0*self.L/3.0]])
fem = np.linalg.solve(ems,s)
mo = point_moment(fem[0][0],0,self.L)
ml = point_moment(fem[1][0],self.L,self.L)
RL = self.rl+mo.rl+ml.rl
RR = self.rr+mo.rr+ml.rr
ML = fem[0][0]
MR = fem[1][0]
return [RL,ML,RR,MR]
class cant_right_nl:
def __init__(self, slope,L):
self.slope = slope
self.L = L
self.rl = 0
self.rr = 0
self.ml = 0
self.kind = 'NL'
self.x_graph = [0]
self.y_graph = [0]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x=[0]
y=[0]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[self.slope],[0,self.L]]]
eid = [[[0, self.slope],[0,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = 0
ML = 0
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
eis[i] = self.slope
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
eid[i] = self.slope * x[i]
return eid
def vx(self,x):
v=0
return v
def mx(self,x):
m=0
return m
def eisx(self,x):
eis = self.slope
return eis
def eidx(self,x):
eid = self.slope * x
return eid
class cant_right_point:
def __init__(self, p, a, L, Lb):
self.p = float(p)
self.a = float(a)
self.L = float(L)
self.Lb = float(Lb)
self.b = self.L - self.a
self.kind = 'Point'
self.error = ''
if self.a > self.L:
self.error = 'Error a > l'
self.rl = self.p
self.rr = 0
self.ml = -1.0*self.p*self.a
# A backspan length of 0 indicates a fixed-free beam; initialize the slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c1 = 0
else:
self.backspan = point_moment(-1.0*self.ml,self.Lb,self.Lb)
self.c1 = self.backspan.eisx(self.Lb)
self.c2 = 0
self.c3 = 0.5*self.rl*self.a**2 + self.ml*self.a + self.c1
self.c4 = -1.0*self.c3*self.a + (1.0/6.0)*self.rl*self.a**3 + 0.5*self.ml*self.a**2 + self.c1*self.a + self.c2
arrow_height = self.p/6.0
#30 degree arrow
arrow_plus= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus= self.a-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus,self.a,arrow_plus,self.a,self.a]
self.y_graph=[arrow_height,0,arrow_height,0,self.p]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.p/6.0
#30 degree arrow
arrow_plus= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus= self.a-(arrow_height*math.tan(math.radians(30)))
x=[arrow_minus,self.a,arrow_plus,self.a,self.a]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.p]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a]
x = [i*x_scale for i in x]
y=[0,self.p]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
if self.a == 0:
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[0],[0,self.L]]]
eid = [[[0],[0,self.L]]]
else:
v = [[[self.p],[0,self.a]],[[0],[self.a,self.L]]]
m = [[[self.ml,self.rl],[0,self.a]],[[0],[self.a,self.L]]]
eis = [[[self.c1,self.ml,0.5*self.rl],[0,self.a]],[[self.c3],[self.a,self.L]]]
eid = [[[self.c2,self.c1,0.5*self.ml,(1.0/6.0)*self.rl],[0,self.a]],[[self.c4, self.c3],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = self.rl
RR = 0
ML = self.ml
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
if x[i] == 0 and self.a == 0:
v[i] = 0
else:
v[i] = self.p
else:
v[i] = 0
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
m[i] = self.rl*x[i] + self.ml
else:
m[i] = 0
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eis[i] = 0.5*self.rl*x[i]**2 + self.ml*x[i] + self.c1
else:
eis[i] = self.c3
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eid[i] = (1.0/6.0)*self.rl*x[i]**3 + 0.5*self.ml*x[i]**2 + self.c1*x[i] + self.c2
else:
eid[i] = self.c3*x[i] + self.c4
return eid
def vx(self,x):
if x<=self.a:
if x == 0 and self.a ==0:
v = 0
else:
v = self.p
else:
v = 0
return v
def mx(self,x):
if x<=self.a:
m = self.rl*x + self.ml
else:
m = 0
return m
def eisx(self,x):
if x<=self.a:
eis = 0.5*self.rl*x**2 + self.ml*x + self.c1
else:
eis = self.c3
return eis
def eidx(self,x):
if x<=self.a:
eid = (1.0/6.0)*self.rl*x**3 + 0.5*self.ml*x**2 + self.c1*x + self.c2
else:
eid = self.c3*x + self.c4
return eid
class cant_right_point_moment:
def __init__(self, ma, a, L, Lb):
self.ma = float(ma)
self.a = float(a)
self.L = float(L)
self.Lb = float(Lb)
self.b = self.L - self.a
self.kind = 'Moment'
self.error = ''
if self.a > self.L:
self.error = 'Error a > l'
self.rl = 0
self.rr = 0
self.ml = -1.0*self.ma
# A backspan length of 0 indicates a fixed-free beam; initialize the slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c1 = 0
else:
self.backspan = point_moment(-1.0*self.ml,self.Lb,self.Lb)
self.c1 = self.backspan.eisx(self.Lb)
self.c2 = 0
self.c3 = self.ml*self.a + self.c1
self.c4 = 0.5*self.ml*self.a**2 + self.c1 * self.a + self.c2 - self.c3 * self.a
r = (self.ma/2.0)
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
self.x_graph = [self.a,self.a,self.a]
self.y_graph = [r,0,-r]
x=0
y=0
for a in range(-90, 181):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x-arrow_minus)
self.y_graph.append(y+arrow_height)
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x+arrow_minus)
self.y_graph.append(y+arrow_height)
else:
self.x_graph = [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
self.y_graph = [0,0,0,arrow_height,0,arrow_height,0]
x=0
y=0
for a in range(0,271):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
r = (self.ma/2.0)
if arrows == 1:
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
x= [self.a,self.a,self.a]
y = [r,0,-r]
xi=0
yi=0
for a in range(-90, 181):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
x.append(xi-arrow_minus)
y.append(yi+arrow_height)
x.append(xi)
y.append(yi)
x.append(xi+arrow_minus)
y.append(yi+arrow_height)
else:
x= [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
y = [0,0,0,arrow_height,0,arrow_height,0]
xi=0
yi=0
for a in range(0,271):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
x = [i*x_scale for i in x]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[self.ml],[0,self.a]],[[0],[self.a,self.L]]]
eis = [[[self.c1,self.ml],[0,self.a]],[[self.c3],[self.a,self.L]]]
eid = [[[self.c2, self.c1,0.5*self.ml],[0,self.a]],[[self.c4,self.c3],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = self.rl
RR = 0
ML = self.ml
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
v[i] = 0
else:
v[i] = 0
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
m[i] = self.ml
else:
m[i] = 0
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eis[i] = self.ml*x[i] + self.c1
else:
eis[i] = self.c3
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eid[i] = 0.5*self.ml*x[i]**2 + self.c1*x[i] + self.c2
else:
eid[i] = self.c3*x[i] + self.c4
return eid
def vx(self,x):
if x<=self.a:
v = 0
else:
v = 0
return v
def mx(self,x):
if x<=self.a:
m = self.ml
else:
m = 0
return m
def eisx(self,x):
if x<=self.a:
eis = self.ml*x + self.c1
else:
eis = self.c3
return eis
def eidx(self,x):
if x<=self.a:
eid = 0.5*self.ml*x**2 + self.c1*x + self.c2
else:
eid = self.c3*x + self.c4
return eid
class cant_right_udl:
def __init__(self, w1, a, b, L, Lb):
self.w1 = float(w1)
self.a = float(a)
self.L = float(L)
self.b = float(b)
self.c = self.b - self.a
self.w_tot = self.w1*self.c
self.Lb = float(Lb)
self.kind = 'UDL'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
else:
pass
self.rl = self.w_tot
self.rr = 0
self.ml = -1.0*self.w_tot*(self.b-(self.c/2))
# A backspan length of 0 indicates a fixed-free beam; initialize the slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c1 = 0
else:
self.backspan = point_moment(-1.0*self.ml,self.Lb,self.Lb)
self.c1 = self.backspan.eisx(self.Lb)
self.c2 = 0
self.c3 = self.c1
self.c4 = self.c1*self.a + self.c2 - self.c3*self.a
self.c5 = 0.5*self.w_tot*self.b**2 + self.ml*self.b - (1.0/6.0)*self.w1*(self.b-self.a)**3 + self.c3
self.c6 = (1.0/6.0)*self.w_tot*self.b**3 + 0.5*self.ml*self.b**2 - (1.0/24.0)*self.w1*(self.b-self.a)**4 + self.c3*self.b + self.c4 - self.c5*self.b
arrow_height = self.w1/12.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.w1/12.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w1,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = ([
[[self.rl],[0,self.a]],
[[self.rl+(self.w1*self.a),-self.w1],[self.a,self.b]],
[[0],[self.b,self.L]]
])
m = ([
[[self.ml, self.rl],[0,self.a]],
[[self.ml-(0.5*self.a*self.a*self.w1),self.rl+(self.a*self.w1),-0.5*self.w1],[self.a,self.b]],
[[0],[self.b,self.L]]
])
eis = ([
[[self.c1,self.ml,0.5*self.rl],[0,self.a]],
[[self.c3+((1.0/6.0)*self.a*self.a*self.a*self.w1),
self.ml-(0.5*self.a*self.a*self.w1),
(0.5*self.rl)+(0.5*self.a*self.w1),
((-1.0/6.0)*self.w1)],[self.a,self.b]],
[[self.c5],[self.b,self.L]]
])
eid = ([
# Range 0 to a
[[self.c2,self.c1,0.5*self.ml,(1.0/6.0)*self.rl],[0,self.a]],
# Range a to b
[[self.c4-((1.0/24.0)*math.pow(self.a,4)*self.w1), #x^0
((1.0/6.0)*math.pow(self.a,3)*self.w1)+self.c3, #x^1
((-0.25)*math.pow(self.a,2)*self.w1)+ (0.5*self.ml), #x^2
((1.0/6.0)*self.a*self.w1)+ ((1.0/6.0)*self.rl), #x^3
((-1.0/24.0)*self.w1)],[self.a,self.b]], #x^4
# Range b to L
[[self.c6,self.c5],[self.b,self.L]]
])
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = self.rl
RR = 0
ML = self.ml
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = self.rl
elif x[i]<=self.b:
v[i] = self.rl - self.w1*(x[i]-self.a)
else:
v[i] = 0
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = self.rl*x[i] + self.ml
elif x[i] <= self.b:
m[i] = self.rl*x[i] + self.ml - (self.w1*(x[i]-self.a)*((x[i]-self.a)/2))
else:
m[i] = 0
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = 0.5*self.rl*x[i]**2 + self.ml*x[i] + self.c1
elif x[i] <= self.b:
eis[i] = 0.5*self.rl*x[i]**2 + self.ml*x[i] - ((1.0/6.0) * self.w1 * (x[i]-self.a)**3) + self.c3
else:
eis[i] = self.c5
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((1.0/6.0)*self.rl*x[i]*x[i]*x[i]+
0.5*self.ml*x[i]*x[i] +
self.c1 * x[i] +
self.c2)
elif x[i] <= self.b:
eid[i] = ((1.0/6.0)*self.rl*x[i]*x[i]*x[i] +
0.5*self.ml*x[i]*x[i] -
((1.0/24.0)*self.w1*(x[i]-self.a)**4) +
self.c3*x[i] +
self.c4)
else:
eid[i] = self.c5*x[i] + self.c6
return eid
def vx(self,x):
x = float(x)
if x <= self.a:
v = self.w_tot
elif x<=self.b:
v = self.w_tot - self.w1*(x-self.a)
else:
v = 0
return v
def mx(self,x):
x = float(x)
if x <= self.a:
m = self.rl*x + self.ml
elif x <= self.b:
m = self.rl*x + self.ml - (self.w1*(x-self.a)*((x-self.a)/2))
else:
m = 0
return m
def eisx(self,x):
if x <= self.a:
eis = 0.5*self.rl*x**2 + self.ml*x + self.c1
elif x <= self.b:
eis = 0.5*self.rl*x**2 + self.ml*x - ((1.0/6.0) * self.w1 * (x-self.a)**3) + self.c3
else:
eis = self.c5
return eis
def eidx(self,x):
if x <= self.a:
eid = (1.0/6.0)*self.rl*x**3 + 0.5*self.ml*x**2 + self.c1 * x + self.c2
elif x <= self.b:
eid = (1.0/6.0)*self.rl*x**3 + 0.5*self.ml*x**2 - (1.0/24.0)*self.w1*(x-self.a)**4 + self.c3*x + self.c4
else:
eid = self.c5*x + self.c6
return eid
class cant_right_trap:
def __init__(self, w1, w2, a, b, L, Lb):
self.w1 = float(w1)
self.w2 = float(w2)
self.a = float(a)
self.L = float(L)
self.b = float(b)
self.Lb = float(Lb)
self.c = self.b-self.a
self.kind = 'TRAP'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
elif sign(self.w1) != sign(self.w2) and self.w1 !=0 and self.w2 !=0:
self.error = 'Error w1 and w2 change direction'
else:
pass
self.w = 0.5*(self.w1+self.w2)*self.c
self.d = self.a+(((self.w1+(2*self.w2))/(3*(self.w2+self.w1)))*self.c)
self.s = (self.w1-self.w2)/self.c
self.rl = self.w
self.rr = 0
self.ml = -1*self.w*self.d
# A backspan length of 0 indicates a fixed-free beam; initialize the slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c1 = 0
else:
self.backspan = point_moment(-1.0*self.ml,self.Lb,self.Lb)
self.c1 = self.backspan.eisx(self.Lb)
self.c2 = 0
self.c3 = self.ml - (1.0/6.0)*self.s*self.a**3 + 0.5*(self.s*self.a + self.w1)*self.a**2 - 0.5*(self.s*self.a + 2*self.w1)*self.a**2
self.c4 = self.c1 - (1.0/24.0)*self.s*self.a**4 + (1.0/6.0)*((self.s*self.a)+self.w1)*self.a**3 - 0.25*((self.s*self.a)+(2*self.w1))*self.a**3 - self.c3*self.a + self.ml*self.a
self.c5 = self.c1*self.a + self.c2 - self.c4*self.a - (1.0/120.0)*self.s*self.a**5 + (1.0/24.0)*((self.s*self.a)+self.w1)*self.a**4 - (1.0/12.0)*((self.s*self.a)+(2*self.w1))*self.a**4 + 0.5*self.ml*self.a**2 - 0.5*self.c3*self.a**2
self.c6 = (0.5*self.rl*self.b**2)+self.c3*self.b + (1.0/24.0)*self.s*self.b**4 - (1.0/6.0)*((self.s*self.a)+self.w1)*self.b**3 + 0.25*((self.s*self.a)+(2*self.w1))*self.a*self.b**2 + self.c4
self.c7 = ((1.0/6.0)*self.rl*self.b**3) + 0.5*self.c3*self.b**2 + (1.0/120.0)*self.s*self.b**5 - (1.0/24.0)*((self.s*self.a)+self.w1)*self.b**4 + (1.0/12.0)*((self.s*self.a)+(2*self.w1))*self.a*self.b**3 + self.c4*self.b + self.c5 - self.c6*self.b
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w2,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
Returns the general piecewise function in the form of two lists
# list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
# where the list values will only be the cn's
# list2 will be the range over which the function piece applies
# 0 <= a would be [0,a] **note: it will be assumed that the equality is <= not <
# returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L, of the beam
'''
v = ([
# Range 0 to a
[[self.rl],[0,self.a]],
# Range a to b
[[(0.5*math.pow(self.a,2)*self.s) + (self.a*self.w1) + self.rl, #x^0
(-1.0*self.w1) - (self.a*self.s), #x^1
0.5*self.s], #x^2
[self.a,self.b]],
# Range b to L
[[0],[self.b,self.L]]
])
m = ([
# Range 0 to a
[[self.ml, self.rl],[0,self.a]],
# Range a to b
[[self.c3, #x^0
(0.5*math.pow(self.a,2)*self.s)+ (self.a*self.w1) + self.rl, #x^1
(-0.5*self.a*self.s)-(0.5*self.w1), #x^2
(1/6.0)*self.s], #x^3
[self.a,self.b]],
# Range b to L
[[0],[self.b,self.L]]
])
eis = ([
# Range 0 to a
[[self.c1,self.ml,0.5*self.rl],[0,self.a]],
# Range a to b
[[self.c4,#x^0
self.c3,#x^1
(0.25*math.pow(self.a,2)*self.s)+(0.5*self.a*self.w1)+(0.5*self.rl),#x^2
((-1/6.0)*self.a*self.s) - ((1/6.0)*self.w1),#x^3
(1/24.0)*self.s],#x^4
[self.a,self.b]],
# Range b to L
[[self.c6],[self.b,self.L]]
])
eid = ([
# Range 0 to a
[[self.c2,#x^0
self.c1,#x^1
0.5*self.ml,#x^2
((1.0/6.0)*self.rl),#x^3
],
[0,self.a]],
# Range a to b
[[self.c5,#x^0
self.c4,#x^1
0.5*self.c3,#x^2
((1/12.0)*math.pow(self.a,2)*self.s)+
((1/6.0)*self.a*self.w1) + ((1/6.0)*self.rl),#x^3
((-1/24.0)*self.a*self.s) - ((1/24.0)*self.w1),#x^4
(1/120.0)*self.s],#x^5
[self.a,self.b]],
# Range b to L
[[self.c7,self.c6],[self.b,self.L]]
])
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = self.rl
RR = 0
ML = self.ml
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = self.rl
elif x[i]<=self.b:
v[i] = self.rl + 0.5*self.s*x[i]**2 - x[i]*((self.s*self.a)+self.w1) + 0.5*self.a*((self.s*self.a)+(2*self.w1))
else:
v[i] = 0
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = self.rl*x[i] + self.ml
elif x[i] <= self.b:
m[i] = self.rl*x[i] + self.c3 + (1.0/6.0)*self.s*x[i]**3 - 0.5*((self.s*self.a)+self.w1)*x[i]**2 + 0.5*((self.s*self.a)+(2*self.w1))*self.a*x[i]
else:
m[i] = 0
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = (0.5*self.rl*x[i]**2)+self.ml*x[i]+self.c1
elif x[i] <= self.b:
eis[i] = (0.5*self.rl*x[i]**2)+self.c3*x[i] + (1.0/24.0)*self.s*x[i]**4 - (1.0/6.0)*((self.s*self.a)+self.w1)*x[i]**3 + 0.25*((self.s*self.a)+(2*self.w1))*self.a*x[i]**2 + self.c4
else:
eis[i] = self.c6
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = ((1.0/6.0)*self.rl*x[i]**3)+ 0.5*self.ml*x[i]**2 + self.c1*x[i] + self.c2
elif x[i] <= self.b:
eid[i] = ((1.0/6.0)*self.rl*x[i]**3) + 0.5*self.c3*x[i]**2 + (1.0/120.0)*self.s*x[i]**5 - (1.0/24.0)*((self.s*self.a)+self.w1)*x[i]**4 + (1.0/12.0)*((self.s*self.a)+(2*self.w1))*self.a*x[i]**3 + self.c4*x[i] + self.c5
else:
eid[i] = self.c6*x[i] + self.c7
return eid
def vx(self,x):
if x <= self.a:
v= self.rl
elif x<=self.b:
v= self.rl + 0.5*self.s*x**2 - x*((self.s*self.a)+self.w1) + 0.5*self.a*((self.s*self.a)+(2*self.w1))
else:
v =0
return v
def mx(self,x):
if x <= self.a:
m = self.rl*x + self.ml
elif x <= self.b:
m = self.rl*x + self.c3 + (1.0/6.0)*self.s*x**3 - 0.5*((self.s*self.a)+self.w1)*x**2 + 0.5*((self.s*self.a)+(2*self.w1))*self.a*x
else:
m = 0
return m
def eisx(self,x):
if x <= self.a:
eis = (0.5*self.rl*x**2)+self.ml*x+self.c1
elif x <= self.b:
eis = (0.5*self.rl*x**2)+self.c3*x + (1.0/24.0)*self.s*x**4 - (1.0/6.0)*((self.s*self.a)+self.w1)*x**3 + 0.25*((self.s*self.a)+(2*self.w1))*self.a*x**2 + self.c4
else:
eis = self.c6
return eis
def eidx(self,x):
if x <= self.a:
eid = ((1.0/6.0)*self.rl*x**3)+ 0.5*self.ml*x**2 + self.c1*x + self.c2
elif x <= self.b:
eid = ((1.0/6.0)*self.rl*x**3) + 0.5*self.c3*x**2 + (1.0/120.0)*self.s*x**5 - (1.0/24.0)*((self.s*self.a)+self.w1)*x**4 + (1.0/12.0)*((self.s*self.a)+(2*self.w1))*self.a*x**3 + self.c4*x + self.c5
else:
eid = self.c6*x + self.c7
return eid
class cant_left_nl:
def __init__(self, slope, L):
self.L = float(L)
self.slope = float(slope)
self.c1 = self.slope
self.c2 = -1.0*self.c1*self.L
self.kind = 'NL'
self.rr = 0
self.rl = 0
self.mr = 0
self.x_graph = [0]
self.y_graph = [0]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
x = [0]
y = [0]
return x,y
def piece_functions(self):
'''
        Returns the general piecewise function in the form of two lists
        # list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
        # where the list values will only be the cn's*
        # list 2 will be the range over which the function piece applies
        # 0 <= a would be [0,a] **note it will be assumed that the equality is <= not <
        # returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.L]]]
eis = [[[self.c1],[0,self.L]]]
eid = [[[self.c2, self.c1],[0,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = 0
ML = 0
MR = 0
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
eis[i] = self.c1
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
eid[i] = self.c1* x[i] + self.c2
return eid
def vx(self,x):
v=0
return v
def mx(self,x):
m=0
return m
def eisx(self,x):
eis = self.c1
return eis
def eidx(self,x):
eid = self.c1 * x + self.c2
return eid
class cant_left_point:
def __init__(self, p, a, L,Lb):
self.p = float(p)
self.a = float(a)
self.L = float(L)
self.Lb = float(Lb)
self.kind = 'Point'
self.error = ''
if self.a > self.L:
self.error = 'Error a > l'
self.rr = self.p
self.rl = 0
self.mr = -1*self.p*(self.L-self.a)
        # 0 length backspan indicates a fixed-free beam; initialize slope to 0
if self.Lb == 0:
self.backspan = no_load(0)
self.c3 = 0 + (0.5*self.p * (self.L-self.a)**2)
else:
self.backspan = point_moment(self.mr,0,self.Lb)
self.c3 = self.backspan.eisx(0) + (0.5*self.p * (self.L-self.a)**2)
self.c4 = ((1/6.0)*self.p*(self.L-self.a)**3) - (self.c3*self.L)
self.c1 = self.c3
self.c2 = (self.c3*self.a) + self.c4 - (self.c1*self.a)
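        # Editor's note (standard Euler-Bernoulli integration constants):
        # c3 and c4 set EI*slope and EI*deflection in the loaded region a < x <= L
        # so that deflection is zero at the support (x = L) and the slope there
        # matches the backspan rotation; c1 and c2 then enforce slope and
        # deflection continuity at x = a for the unloaded tip region 0 <= x <= a.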
arrow_height = self.p/6.0
#30 degree arrow
arrow_plus= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus= self.a-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus,self.a,arrow_plus,self.a,self.a]
self.y_graph=[arrow_height,0,arrow_height,0,self.p]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.p/6.0
#30 degree arrow
arrow_plus= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus= self.a-(arrow_height*math.tan(math.radians(30)))
x=[arrow_minus,self.a,arrow_plus,self.a,self.a]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.p]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a]
x = [i*x_scale for i in x]
y=[0,self.p]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
        Returns the general piecewise function in the form of two lists
        # list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
        # where the list values will only be the cn's*
        # list 2 will be the range over which the function piece applies
        # 0 <= a would be [0,a] **note it will be assumed that the equality is <= not <
        # returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L of the beam
'''
v = [[[0],[0,self.a]],[[-1.0*self.p],[self.a,self.L]]]
m = [[[0],[0,self.a]],[[self.p*self.a,-1.0*self.p],[self.a,self.L]]]
eis = [[[self.c1],[0,self.a]],[[-0.5*self.a*self.a*self.p+self.c3,self.a*self.p, -0.5*self.p],[self.a,self.L]]]
eid = [[[self.c2,self.c1],[0,self.a]],[[self.c4+((self.a*self.a*self.a*self.p)*(1/6.0)), self.c3-(0.5*self.a*self.a*self.p),0.5*self.a*self.p,(-1/6.0)*self.p],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = self.rr
ML = 0
MR = self.mr
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
v[i] = 0
else:
v[i] = -1*self.p
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
m[i] = 0
else:
m[i] = -1*self.p * (x[i] - self.a)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eis[i] = self.c1
else:
eis[i] = (-0.5*self.p * (x[i]-self.a)**2) + self.c3
return eis
def eid(self, x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eid[i] = self.c1*x[i] + self.c2
else:
eid[i] = (-1/6.0)*self.p*(x[i]-self.a)**3 + self.c3*x[i] + self.c4
return eid
def vx(self,x):
if x<=self.a:
v = 0
else:
v = -1*self.p
return v
def mx(self,x):
if x<=self.a:
m = 0
else:
m = -1*self.p * (x - self.a)
return m
def eisx(self,x):
if x<=self.a:
eis = self.c1
else:
eis = (-0.5*self.p * (x-self.a)**2) + self.c3
return eis
def eidx(self, x):
if x<=self.a:
eid = self.c1*x + self.c2
else:
eid = (-1/6.0)*self.p*(x-self.a)**3 + self.c3*x + self.c4
return eid
class cant_left_point_moment:
def __init__(self, ma, a, L,Lb):
self.ma = float(ma)
self.a = float(a)
self.L = float(L)
self.Lb = float(Lb)
self.kind = 'Moment'
self.error = ''
if self.a > self.L:
self.error = 'Error a > l'
self.rr = 0
self.rl = 0
self.mr = self.ma
        # 0 length backspan indicates a fixed-free beam; initialize slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c3 = 0 - (self.ma*self.L)
else:
self.backspan = point_moment(self.mr,0,Lb)
self.c3 = self.backspan.eisx(0) - (self.ma*self.L)
self.c4 = (-0.5*self.ma*self.L**2) - self.c3*self.L
self.c1 = (1.0*self.ma*self.a) + self.c3
self.c2 = 0.5*self.ma*self.a**2 + self.c3*self.a + self.c4 - self.c1*self.a
r = (self.ma/2.0)
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
self.x_graph = [self.a,self.a,self.a]
self.y_graph = [r,0,-r]
x=0
y=0
for a in range(-90, 181):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x-arrow_minus)
self.y_graph.append(y+arrow_height)
self.x_graph.append(x)
self.y_graph.append(y)
self.x_graph.append(x+arrow_minus)
self.y_graph.append(y+arrow_height)
else:
self.x_graph = [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
self.y_graph = [0,0,0,arrow_height,0,arrow_height,0]
x=0
y=0
for a in range(0,271):
x = self.a+(r*math.cos(math.radians(a)))
y = 0+(r*math.sin(math.radians(a)))
self.x_graph.append(x)
self.y_graph.append(y)
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
r = (self.ma/2.0)
if arrows == 1:
arrow_height = r/6.0
#30 degree arrow
arrow_minus= (arrow_height*math.tan(math.radians(30)))
if self.ma <0:
x= [self.a,self.a,self.a]
y = [r,0,-r]
xi=0
yi=0
for a in range(-90, 181):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
x.append(xi-arrow_minus)
y.append(yi+arrow_height)
x.append(xi)
y.append(yi)
x.append(xi+arrow_minus)
y.append(yi+arrow_height)
else:
x= [self.a-r,self.a,self.a+r, self.a+r-arrow_minus,self.a+r,self.a+r+arrow_minus,self.a+r]
y = [0,0,0,arrow_height,0,arrow_height,0]
xi=0
yi=0
for a in range(0,271):
xi = self.a+(r*math.cos(math.radians(a)))
yi = 0+(r*math.sin(math.radians(a)))
x.append(xi)
y.append(yi)
x = [i*x_scale for i in x]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
        Returns the general piecewise function in the form of two lists
        # list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
        # where the list values will only be the cn's*
        # list 2 will be the range over which the function piece applies
        # 0 <= a would be [0,a] **note it will be assumed that the equality is <= not <
        # returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L of the beam
'''
v = [[[0],[0,self.L]]]
m = [[[0],[0,self.a]],[[self.ma],[self.a,self.L]]]
eis = [[[self.c1],[0,self.a]],[[self.c3,self.ma],[self.a,self.L]]]
eid = [[[self.c2, self.c1],[0,self.a]],[[self.c4,self.c3,0.5*self.ma],[self.a,self.L]]]
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = self.rr
ML = 0
MR = self.mr
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
v[i] = 0
else:
v[i] = 0
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
m[i] = 0
else:
m[i] = self.ma
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eis[i] = self.c1
else:
eis[i] = (self.ma * x[i]) + self.c3
return eis
def eid(self, x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i]<=self.a:
eid[i] = self.c1*x[i] + self.c2
else:
eid[i] = (0.5)*self.ma*x[i]**2 + self.c3*x[i] + self.c4
return eid
def vx(self,x):
if x<=self.a:
v = 0
else:
v = 0
return v
def mx(self,x):
if x<=self.a:
m = 0
else:
m = self.ma
return m
def eisx(self,x):
if x<=self.a:
eis = self.c1
else:
eis = (self.ma * x) + self.c3
return eis
def eidx(self, x):
if x<=self.a:
eid = self.c1*x + self.c2
else:
eid = (0.5)*self.ma*x**2 + self.c3*x + self.c4
return eid
class cant_left_udl:
def __init__(self, w1, a, b, L, Lb):
self.w1 = float(w1)
self.a = float(a)
self.L = float(L)
self.Lb = float(Lb)
self.b = float(b)
self.c = self.b-self.a
self.w_tot = self.w1*self.c
self.kind = 'UDL'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
else:
pass
self.rr = self.w_tot
self.rl = 0
self.mr = -1.0*self.w_tot*(self.L-(a+(self.c/2.0)))
        # 0 length backspan indicates a fixed-free beam; initialize slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c5 = 0 + (0.5 * self.w_tot * (self.L - (self.a + (0.5*self.c)))**2)
else:
self.backspan = point_moment(self.mr,0,Lb)
self.c5 = self.backspan.eisx(0) + (0.5 * self.w_tot * (self.L - (self.a + (0.5*self.c)))**2)
self.c6 = ((1.0/6.0)*self.w_tot * (self.L - (self.a + (0.5*self.c)))**3) - (self.c5*self.L)
self.c3 =((-0.5)*self.w_tot * (self.b - (self.a + (0.5*self.c)))**2) + self.c5 + ((1.0/6.0)*self.w1*(b-a)**3)
self.c1 = self.c3
self.c4 = ((-1.0/6.0)*self.w_tot * (self.b - (self.a + (0.5*self.c)))**3) + (self.c5*self.b) + self.c6 + ((1.0/24.0)*self.w1*(self.b-self.a)**4) - (self.c3*self.b)
self.c2 = (self.c3*self.a) + self.c4 - (self.c1*self.a)
arrow_height = self.w1/12.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.w1/12.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w1,0,arrow_height,0,arrow_height]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w1,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
        Returns the general piecewise function in the form of two lists
        # list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
        # where the list values will only be the cn's*
        # list 2 will be the range over which the function piece applies
        # 0 <= a would be [0,a] **note it will be assumed that the equality is <= not <
        # returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L of the beam
'''
v = ([
[[0],[0,self.a]],
[[self.w1*self.a,
-1.0*self.w1],
[self.a,self.b]],
[[-1.0*self.w_tot],[self.b,self.L]]
])
m = ([
# Range 0 to a
[[0],[0,self.a]],
# Range a to b
[[-0.5*math.pow(self.a,2)*self.w1,
self.a*self.w1,
-0.5*self.w1],
[self.a,self.b]],
# Range b to L
[[self.a*self.w_tot + 0.5*self.c*self.w_tot,
-1.0*self.w_tot],
[self.b,self.L]]
])
eis = ([
# Range 0 to a
[[self.c1],[0,self.a]],
# Range a to b
[[(1/6.0)*math.pow(self.a,3)*self.w1 + self.c3,#x^0
-0.5*math.pow(self.a,2)*self.w1,#x^1
0.5*self.a*self.w1,#x^2
(-1/6.0)*self.w1],#x^3
[self.a,self.b]],
# Range b to L
[[self.c5-(0.5*math.pow(self.a,2)*self.w_tot)-
(0.5*self.a*self.c*self.w_tot) - ((1/8.0)*math.pow(self.c,2)*self.w_tot),#x^0
(self.a*self.w_tot)+(0.5*self.c*self.w_tot),#x^1
-0.5*self.w_tot],#x^2
[self.b,self.L]]
])
eid = ([
# Range 0 to a
[[self.c2,self.c1],[0,self.a]],
# Range a to b
[[self.c4-((1/24.0)*math.pow(self.a,4)*self.w1),#x^0
(1/6.0)*math.pow(self.a,3)*self.w1+self.c3,#x^1
-0.25*math.pow(self.a,2)*self.w1,#x^2
(1/6.0)*self.a*self.w1,#x^3
(-1/24.0)*self.w1],#x^4
[self.a,self.b]],
# Range b to L
[[((1/6.0)*math.pow(self.a,3)*self.w_tot)+
(0.25*math.pow(self.a,2)*self.c*self.w_tot)+
(0.125*self.a*math.pow(self.c,2)*self.w_tot)+
((1/48.0)*math.pow(self.c,3)*self.w_tot)+self.c6,#x^0
(-0.5*math.pow(self.a,2)*self.w_tot)-
(0.5*self.a*self.c*self.w_tot)-
(0.125*math.pow(self.c,2)*self.w_tot)+self.c5,#x^1
(0.5*self.a*self.w_tot) + (0.25*self.c*self.w_tot),#x^2
(-1/6.0)*self.w_tot],#x^3
[self.b,self.L]]
])
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = self.rr
ML = 0
MR = self.mr
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = 0
elif x[i]<=self.b:
v[i] = -1*self.w1*(x[i]-self.a)
else:
v[i] = -1*self.w_tot
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = 0
elif x[i] <= self.b:
m[i] = -0.5*self.w1*(x[i]-self.a)**2
else:
m[i] = -1.0 * self.w_tot * (x[i]-(self.a+(0.5*self.c)))
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = self.c1
elif x[i] <= self.b:
eis[i] = (-1.0/6.0)*self.w1*(x[i]-self.a)**3 + self.c3
else:
eis[i] = (-0.5 * self.w_tot * (x[i]-(self.a+(0.5*self.c)))**2) + self.c5
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = self.c1*x[i] + self.c2
elif x[i] <= self.b:
eid[i] = (-1.0/24.0)*self.w1*(x[i]-self.a)**4 + self.c3*x[i] + self.c4
else:
eid[i] = ((-1.0/6.0) * self.w_tot * (x[i]-(self.a+(0.5*self.c)))**3) + self.c5*x[i] + self.c6
return eid
def vx(self,x):
if x <= self.a:
v = 0
elif x<=self.b:
v = -1*self.w1*(x-self.a)
else:
v = -1*self.w_tot
return v
def mx(self,x):
if x <= self.a:
m = 0
elif x <= self.b:
m = -0.5*self.w1*(x-self.a)**2
else:
m = -1.0 * self.w_tot * (x-(self.a+(0.5*self.c)))
return m
def eisx(self,x):
if x <= self.a:
eis = self.c1
elif x <= self.b:
eis = (-1.0/6.0)*self.w1*(x-self.a)**3 + self.c3
else:
eis = (-0.5 * self.w_tot * (x-(self.a+(0.5*self.c)))**2) + self.c5
return eis
def eidx(self,x):
if x <= self.a:
eid = self.c1*x+ self.c2
elif x <= self.b:
eid = (-1.0/24.0)*self.w1*(x-self.a)**4 + self.c3*x + self.c4
else:
eid = ((-1.0/6.0) * self.w_tot * (x-(self.a+(0.5*self.c)))**3) + self.c5*x + self.c6
return eid
class cant_left_trap:
def __init__(self, w1, w2, a, b, L, Lb):
self.w1 = float(w1)
self.w2 = float(w2)
self.a = float(a)
self.L = float(L)
self.b = float(b)
self.Lb = float(Lb)
self.c = self.b-self.a
self.kind = 'TRAP'
self.error = ''
if self.a > self.b:
self.error = 'Error a > b'
elif self.a > self.L:
self.error = 'Error a > l'
elif self.b > self.L:
self.error = 'Error b > l'
elif sign(self.w1) != sign(self.w2) and self.w1 !=0 and self.w2 !=0:
self.error = 'Error w1 and w2 change direction'
else:
pass
self.w = 0.5*(self.w1+self.w2)*self.c
self.dl = self.a+(((self.w1+(2*self.w2))/(3*(self.w2+self.w1)))*self.c)
self.dr = self.L-self.dl
self.s = (self.w1-self.w2)/self.c
self.cc = (((self.w1+(2*self.w2))/(3*(self.w2+self.w1)))*self.c) + self.a
self.rr = self.w
self.rl=0
self.mr = -1*self.rr*(self.L-self.cc)
        # 0 length backspan indicates a fixed-free beam; initialize slope to 0
if Lb == 0:
self.backspan = no_load(0)
self.c6 = 0 + (0.5*self.w*(self.L-self.cc)**2)
else:
self.backspan = point_moment(self.mr,0,Lb)
self.c6 = self.backspan.eisx(0) + (0.5*self.w*(self.L-self.cc)**2)
self.c7 = ((1.0/6.0)*self.w*(self.L-self.cc)**3) - (self.c6*self.L)
self.c3 = -1.0*((1.0/6.0)*self.a*((self.a**2 * self.s) - (3*self.a*((self.a*self.s) + self.w1)) + (3*self.a*((self.a*self.s) + (2*self.w1)))))
self.c4 = (-0.5*self.w*(self.b-self.cc)**2) + self.c6 - (self.c3*self.b) - ((1.0/24.0)*self.b**2 *((self.b**2 * self.s) - (4*self.b*((self.a*self.s) + self.w1)) + (6*self.a*((self.a*self.s) + (2*self.w1)))))
self.c5 = ((-1.0/6.0)*self.w*(self.b-self.cc)**3) + (self.c6*self.b)+self.c7-(0.5*self.c3*self.b**2)-(self.c4*self.b)-((1.0/120.0)*self.b**3 *((self.b**2 * self.s) - (5*self.b*((self.a*self.s) + self.w1)) + (10*self.a*((self.a*self.s) + (2*self.w1)))))
self.c1 = ((1.0/24.0)*self.a**2 *((self.a**2 * self.s) - (4*self.a*((self.a*self.s) + self.w1)) + (6*self.a*((self.a*self.s) + (2*self.w1))))) + (self.c3*self.a) + self.c4
self.c2 = ((1.0/120.0)*self.a**3 *((self.a**2 * self.s) - (5*self.a*((self.a*self.s) + self.w1)) + (10*self.a*((self.a*self.s) + (2*self.w1))))) + (0.5*self.c3*self.a**2) + (self.c4*self.a) + self.c5 - (self.c1*self.a)
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
self.x_graph=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
self.y_graph=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
def chart_load(self, x_scale=0, y_scale=0, arrows=0):
if arrows == 1:
arrow_height = self.w1/6.0
arrow_height2 = self.w2/6.0
#30 degree arrow
arrow_plus_start= self.a+(arrow_height*math.tan(math.radians(30)))
arrow_minus_start= self.a-(arrow_height*math.tan(math.radians(30)))
arrow_plus_end= self.b+(arrow_height2*math.tan(math.radians(30)))
arrow_minus_end= self.b-(arrow_height2*math.tan(math.radians(30)))
x=[arrow_minus_start,self.a,arrow_plus_start,self.a,self.a,self.b,self.b,arrow_minus_end,self.b,arrow_plus_end]
x = [i*x_scale for i in x]
y=[arrow_height,0,arrow_height,0,self.w1,self.w2,0,arrow_height2,0,arrow_height2]
y = [j*y_scale for j in y]
else:
x=[self.a,self.a,self.b,self.b]
x = [i*x_scale for i in x]
y=[0,self.w1,self.w2,0]
y = [j*y_scale for j in y]
return x,y
def piece_functions(self):
'''
        Returns the general piecewise function in the form of two lists
        # list1 is the polynomial coefficients of order [c0,c1x,c2x^2,...,cnx^n]
        # where the list values will only be the cn's*
        # list 2 will be the range over which the function piece applies
        # 0 <= a would be [0,a] **note it will be assumed that the equality is <= not <
        # returned lists will be [[[list11],[list21]],....,[[list1n],[list2n]]]
# where n is the total number of functions to capture the range from
# 0 to the full span, L of the beam
'''
v = ([
[[0],
[0,self.a]],
[[(0.5*math.pow(self.a,2)*self.s)+(self.a*self.w1), #x^0
(-1.0*self.a*self.s) - self.w1, #x^1
0.5*self.s], #x^2
[self.a,self.b]],
[[-1.0*self.rr],
[self.b,self.L]]
])
m = ([
# Range 0 to a
[[0],
[0,self.a]],
# Range a to b
[[self.c3, #x^0
(0.5*math.pow(self.a,2)*self.s)+(self.a*self.w1), #x^1
(-0.5*self.a*self.s) - (0.5*self.w1), #x^2
(1/6.0)*self.s], #x^3
[self.a,self.b]],
# Range b to L
[[self.w*self.cc, #x^0
-1.0*self.w], #x^1
[self.b,self.L]]
])
eis = ([
# Range 0 to a
[[self.c1],
[0,self.a]],
# Range a to b
[[self.c4,#x^0
self.c3,#x^1
(0.25*math.pow(self.a,2)*self.s)+(0.5*self.a*self.w1),#x^2
((-1/6.0)*self.a*self.s)-((1/6.0)*self.w1),#x^3
(1/24.0)*self.s],#x^4
[self.a,self.b]],
# Range b to L
[[self.c6-(0.5*math.pow(self.cc,2)*self.w),#x^0
self.cc*self.w,#x^1
-0.5*self.w],#x^2
[self.b,self.L]]
])
eid = ([
# Range 0 to a
[[self.c2,self.c1],
[0,self.a]],
# Range a to b
[[self.c5,#x^0
self.c4,#x^1
0.5*self.c3,#x^2
((1/12.0)*math.pow(self.a,2)*self.s)+((1/6.0)*self.a*self.w1),#x^3
((-1/24.0)*self.a*self.s)-((1/24.0)*self.w1),#x^4
(1/120.0)*self.s],#x^5
[self.a,self.b]],
# Range b to L
[[self.c7+((1/6.0)*math.pow(self.cc,3)*self.w),#x^0
self.c6-(0.5*math.pow(self.cc,2)*self.w),#x^1
0.5*self.cc*self.w,#x^2
(-1/6.0)*self.w],#x^3
[self.b,self.L]]
])
vs = PieceFunctionString(v)
ms = PieceFunctionString(m)
eiss = PieceFunctionString(eis)
eids = PieceFunctionString(eid)
return [v,m,eis,eid],[vs,ms,eiss,eids]
def fef(self):
# Fixed End Forces
RL = 0
RR = self.rr
ML = 0
MR = self.mr
return [RL,ML,RR,MR]
def v(self,x):
iters = len(x)
v=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
v[i] = 0
elif x[i]<=self.b:
v[i] = (-0.5*((2*self.w1)-(self.s*(x[i]-self.a))))*(x[i]-self.a)
else:
v[i] = -1*self.rr
return v
def m(self,x):
iters = len(x)
m=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
m[i] = 0
elif x[i] <= self.b:
m[i] = ((1.0/6.0)*x[i]*((x[i]**2 * self.s) - (3*x[i]*((self.a*self.s) + self.w1)) + (3*self.a*((self.a*self.s) + (2*self.w1))))) + self.c3
else:
m[i] = -1*self.w*(x[i]-self.cc)
return m
def eis(self,x):
iters = len(x)
eis=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eis[i] = self.c1
elif x[i] <= self.b:
eis[i] = ((1.0/24.0)*x[i]**2 *((x[i]**2 * self.s) - (4*x[i]*((self.a*self.s) + self.w1)) + (6*self.a*((self.a*self.s) + (2*self.w1))))) + (self.c3 * x[i]) + self.c4
else:
eis[i] = (-0.5*self.w*(x[i]-self.cc)**2) + self.c6
return eis
def eid(self,x):
iters = len(x)
eid=zeros(iters)
for i in range(0,iters):
if x[i] <= self.a:
eid[i] = self.c1*x[i] + self.c2
elif x[i] <= self.b:
eid[i] = ((1.0/120.0)*x[i]**3 *((x[i]**2 * self.s) - (5*x[i]*((self.a*self.s) + self.w1)) + (10*self.a*((self.a*self.s) + (2*self.w1))))) + (0.5*self.c3 * x[i]**2) + (self.c4*x[i]) + self.c5
else:
eid[i] = ((-1.0/6.0)*self.w*(x[i]-self.cc)**3) + (self.c6*x[i]) + self.c7
return eid
def vx(self,x):
if x <= self.a:
v = 0
elif x<=self.b:
v= (-0.5*((2*self.w1)-(self.s*(x-self.a))))*(x-self.a)
else:
v = -1*self.rr
return v
def mx(self,x):
if x <= self.a:
m = 0
elif x <= self.b:
m = ((1.0/6.0)*x*((x**2 * self.s) - (3*x*((self.a*self.s) + self.w1)) + (3*self.a*((self.a*self.s) + (2*self.w1))))) + self.c3
else:
m = -1*self.w*(x-self.cc)
return m
def eisx(self,x):
if x <= self.a:
eis = self.c1
elif x <= self.b:
eis = ((1.0/24.0)*x**2 *((x**2 * self.s) - (4*x*((self.a*self.s) + self.w1)) + (6*self.a*((self.a*self.s) + (2*self.w1))))) + (self.c3 * x) + self.c4
else:
eis = (-0.5*self.w*(x-self.cc)**2) + self.c6
return eis
def eidx(self,x):
if x <= self.a:
eid = self.c1*x + self.c2
elif x <= self.b:
eid = ((1.0/120.0)*x**3 *((x**2 * self.s) - (5*x*((self.a*self.s) + self.w1)) + (10*self.a*((self.a*self.s) + (2*self.w1))))) + (0.5*self.c3 * x**2) + (self.c4*x) + self.c5
else:
eid = ((-1.0/6.0)*self.w*(x-self.cc)**3) + (self.c6*x) + self.c7
return eid
def fixed_free_left_by_stations(loads, number_of_stations):
    # Take a list of loads and an integer number of stations and return
    # lists of stations, shears, moments, E*I*Slopes, and E*I*Deflections
#
# loads should already be defined using the classes in this file
#
# Assumptions:
# - all loads coming in will have the same span length
# defined. Validation of this will be added at a later date.
#
# -Consistent unit definitions across load values and lengths
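    # Editor's usage sketch (hypothetical values, class names as defined above):
    #   loads = [cant_left_point(p=10, a=4, L=8, Lb=0),
    #            cant_left_udl(w1=1, a=0, b=8, L=8, Lb=0)]
    #   xs, r, mr, v, m, eis, eid = fixed_free_left_by_stations(loads, 20)
    # The superposition below assumes each load object exposes rr, mr and the
    # vectorized v/m/eis/eid methods defined by the classes in this file.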
L = loads[0].L
iters = int(number_of_stations)
# Review loads and add additional stations to capture load start
# and end points. For Point/Point Moments add station directly before
# and directly after load.
extra_stations = np.array([0])
for load in loads:
if load.kind == 'Point':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'Moment':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'UDL':
extra_stations = np.append(extra_stations, [load.a,load.b])
elif load.kind == 'TRAP':
extra_stations = np.append(extra_stations, [load.a,load.b])
else:
pass
extra_stations = np.unique(extra_stations)
# Generate station coordinates based on a step size of l / number of stations
    step = L / (number_of_stations * 1.00) # multiply by 1.00 to force float division
xs = zeros(iters+1)
xs[0] = 0
for i in range(1,(iters+1)):
if xs[i-1] + step > L:
xs[i] = L
else:
xs[i] = xs[i-1] + step
xs = np.append(xs, extra_stations)
xs = np.sort(xs)
xs = np.unique(xs)
i = xs.shape[0]
r = 0
mr = 0
v = zeros(i)
m = zeros(i)
eis = zeros(i)
eid = zeros(i)
for load in loads:
r = r + load.rr
mr = mr + load.mr
v = v + load.v(xs)
m = m + load.m(xs)
eis = eis + load.eis(xs)
eid = eid + load.eid(xs)
result_list = [xs,r,mr,v,m,eis,eid]
return result_list
def fixed_free_right_by_stations(loads, number_of_stations):
    # Take a list of loads and an integer number of stations and return
    # lists of stations, shears, moments, E*I*Slopes, and E*I*Deflections
#
# loads should already be defined using the classes in this file
#
# Assumptions:
# - all loads coming in will have the same span length
# defined. Validation of this will be added at a later date.
#
# -Consistent unit definitions across load values and lengths
L = loads[0].L
iters = int(number_of_stations)
# Review loads and add additional stations to capture load start
# and end points. For Point/Point Moments add station directly before
# and directly after load.
extra_stations = np.array([0])
for load in loads:
if load.kind == 'Point':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'Moment':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'UDL':
extra_stations = np.append(extra_stations, [load.a,load.b])
elif load.kind == 'TRAP':
extra_stations = np.append(extra_stations, [load.a,load.b])
else:
pass
extra_stations = np.unique(extra_stations)
# Generate station coordinates based on a step size of l / number of stations
    step = L / (number_of_stations * 1.00) # multiply by 1.00 to force float division
xs = zeros(iters+1)
xs[0] = 0
for i in range(1,(iters+1)):
if xs[i-1] + step > L:
xs[i] = L
else:
xs[i] = xs[i-1] + step
xs = np.append(xs, extra_stations)
xs = np.sort(xs)
xs = np.unique(xs)
i = xs.shape[0]
r = 0
ml = 0
v = zeros(i)
m = zeros(i)
eis = zeros(i)
eid = zeros(i)
for load in loads:
r = r + load.rl
ml = ml + load.ml
v = v + load.v(xs)
m = m + load.m(xs)
eis = eis + load.eis(xs)
eid = eid + load.eid(xs)
result_list = [xs,r,ml,v,m,eis,eid]
return result_list
def fixed_free_at_x(loads, x):
# Take a list of loads and x location in span and return
    # shear, moment, E*I*Slope, and E*I*Deflection
#
# loads should already be defined using the classes in this file
#
# Assumptions:
# - all loads coming in will have the same span length
# defined. Validation of this will be added at a later date.
#
# -Consistent unit definitions across load values and lengths
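    # Editor's note: this is plain superposition of the per-load closed-form
    # results; e.g. (hypothetical) v, m, eis, eid = fixed_free_at_x(loads, 2.5)
    # gives the combined shear, moment, EI*slope and EI*deflection at x = 2.5.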
v = 0
m = 0
eis = 0
eid = 0
for load in loads:
v = v + load.vx(x)
m = m + load.mx(x)
eis = eis + load.eisx(x)
eid = eid + load.eidx(x)
result_list = [v,m,eis,eid]
return result_list
def pin_pin_single_span_at_x(loads, x):
    # Take a list of loads and x location in span and return
    # shear, moment, E*I*Slope, and E*I*Deflection
#
# loads should already be defined using the classes in this file
#
# Assumptions:
# - all loads coming in will have the same span length
# defined. Validation of this will be added at a later date.
#
# -Consistent unit definitions across load values and lengths
v = 0
m = 0
eis = 0
eid = 0
for load in loads:
v = v + load.vx(x)
m = m + load.mx(x)
eis = eis + load.eisx(x)
eid = eid + load.eidx(x)
result_list = [v,m,eis,eid]
return result_list
def pin_pin_single_span_by_stations(loads, number_of_stations):
    # Take a list of loads and an integer number of stations and return
    # lists of stations, shears, moments, E*I*Slopes, and E*I*Deflections
#
# loads should already be defined using the classes in this file
#
# Assumptions:
# - all loads coming in will have the same span length
# defined. Validation of this will be added at a later date.
#
# -Consistent unit definitions across load values and lengths
L = loads[0].L
iters = int(number_of_stations)
# Review loads and add additional stations to capture load start
# and end points. For Point/Point Moments add station directly before
# and directly after load.
extra_stations = np.array([0])
for load in loads:
if load.kind == 'Point':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'Moment':
a = load.a
b = min(load.L,a + 0.0001)
c = max(0,a - 0.0001)
extra_stations = np.append(extra_stations, [c,a,b])
elif load.kind == 'UDL':
extra_stations = np.append(extra_stations, [load.a,load.b])
elif load.kind == 'TRAP':
extra_stations = np.append(extra_stations, [load.a,load.b])
else:
pass
extra_stations = np.unique(extra_stations)
# Generate station coordinates based on a step size of l / number of stations
    step = L / (number_of_stations * 1.00) # multiply by 1.00 to force float division
xs = zeros(iters+1)
xs[0] = 0
for i in range(1,(iters+1)):
if xs[i-1] + step > L:
xs[i] = L
else:
xs[i] = xs[i-1] + step
xs = np.append(xs, extra_stations)
xs = np.sort(xs)
xs = np.unique(xs)
i = xs.shape[0]
rl = 0
rr = 0
v = zeros(i)
m = zeros(i)
eis = zeros(i)
eid = zeros(i)
for load in loads:
rl = rl + load.rl
rr = rr + load.rr
v = v + load.v(xs)
m = m + load.m(xs)
eis = eis + load.eis(xs)
eid = eid + load.eid(xs)
result_list = [xs,rl,rr,v,m,eis,eid]
return result_list
def fixed_end_moments_from_end_slopes(eis0, eisL, fed, L):
#######################################################################################################
#
# Solve Simultaneous equation for fixed end moments knowing
# end slopes of simple beam at support points:
#
# By compatibility for fixed ends initial and final slope should be 0.
#
# Function expects consistent units for values, should produce accurate results for
# both metric and imperial units.
#
#[s0, sL] = [M0,ML]*[eis0_M0, eis0_ML
# eisL_M0, eisL_ML]
# Where:
# s0 = slope at 0 ft, or left end of beam, calculated for the single span simply supported beam
# sL = slope at L ft, or right end of beam, calculated for the single span simply supported beam
#
    # s's are to be independent of E, modulus of elasticity, and I, moment of inertia, therefore
# either need to divide by E*I or provide s in terms of E*I*s
#
# M0 = fixed end moment at 0 ft, or left end
# Ml = fixed end moment at L ft, or right end
#
# eis0_M0 = slope coefficient for M0 at 0 ft, or left end
# eis0_Ml = slope coefficient for ML at 0 ft, or left end
#
# eisL_M0 = slope coefficient for M0 at L ft, or right end
# eisL_Ml = slope coefficient for ML at L ft, or right end
#
# eis0 = E*I*Slope @ 0 ft or beam left end
# eisL = E*I*Slope @ L ft or beam right end
# fed = [1,1], where a 1 signifies the location is fixed
# L = span length
#
# Assumptions:
# 1. consistent units are used for the inputs
# 2. the slopes entered are the actual slope not
# the inverse ie not the restoring slope
#
#######################################################################################################
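    # Editor's sketch of the fixed-fixed case solved below: a unit end moment on
    # a simple span produces end slopes of L/3 and L/6 (in EI units), so the
    # compatibility system is
    #   (-L/3)*M0 + (L/6)*ML = -eis0
    #   (L/6)*M0 + (-L/3)*ML = -eisL
    # and np.linalg.solve returns [M0, ML]. With only one end fixed the single
    # equation collapses to M = 3*eis/L, as coded in the elif branches.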
if fed[0] == 1 and fed[1] == 1:
s = np.array([[-1.0*eis0],[-1.0*eisL]])
ems = np.array([[-1.0*L/3.0 , L/6.0],[L/6.0 , -1.0*L/3.0]])
fem = np.linalg.solve(ems,s)
elif fed[0] == 1 and fed[1] == 0:
fel= ((-1.0*eis0 * -3.0) / L)
fem = np.array([[fel],[0]])
elif fed[0] == 0 and fed[1] == 1:
fer = ((-1.0*eisL * -3.0) / L)
fem = np.array([[0],[fer]])
else:
fem = np.array([[0],[0]])
return fem
def single_span_solve_fixed_ends_and_redundant_interiors(delta, reaction_points, L, fem):
#######################################################################################################
#
# Solve Simultaneous equation for internal reactions and fixed end moments knowing
# deflection and end slopes of simple beam at support points:
#
# By compatibility for fixed ends initial and final slope should be 0, and deflection
# at each interior support location should be 0.
#
# Function expects consistent units for values, should produce accurate results for
# both metric and imperial units.
#
#[s0, sL, d1....di] = [M0,ML,p1....pi]*[eis0_M0, eis0_ML, eis0_p1......eis0_pi
# eisL_M0, eisL_ML, eisL_p1......eisL_pi
# eid_M0_p1, eid_ML_p1, eid_p11.....eid_pi1
# eid_M0_pi, eid_ML_pi, eid_p1i.....eid_pii]
# Where:
# s0 = slope at 0 ft, or left end of beam, calculated for the single span simply supported beam
# sL = slope at L ft, or right end of beam, calculated for the single span simply supported beam
# d1 = deflection at first interior support 1 location calculated for the single span simply supported beam
# di = deflection at ith interior support i location calculated for the single span simply supported beam
#
    # s and d are to be independent of E, modulus of elasticity, and I, moment of inertia, therefore
# either need to divide by E*I or provide s and d in terms of E*I*s and E*I*d
#
# M0 = fixed end moment at 0 ft, or left end
# Ml = fixed end moment at L ft, or right end
# p1 = reaction at first interior support
# pi = reaction at ith interior support
#
# eis0_M0 = slope coefficient for M0 at 0 ft, or left end
# eis0_Ml = slope coefficient for ML at 0 ft, or left end
# eis0_p1 = slope coefficient for first interior support at 0 ft, or left end
# eis0_pi = slope coefficient for ith interior support at 0 ft, or left end
#
# eisL_M0 = slope coefficient for M0 at L ft, or right end
# eisL_Ml = slope coefficient for ML at L ft, or right end
# eisL_p1 = slope coefficient for first interior support at L ft, or right end
# eisL_pi = slope coefficient for ith interior support at L ft, or right end
#
# eid_M0_p1 = deflection coefficient at first interior support for M0
    # eid_ML_p1 = deflection coefficient at first interior support for ML
# eid_p11 = deflection coefficient at first interior support for first interior reaction
# eid_pi1 = deflection coefficient at first interior support for ith interior reaction
#
# eid_M0_pi = deflection coefficient at ith interior support for M0
    # eid_ML_pi = deflection coefficient at ith interior support for ML
# eid_p1i = deflection coefficient at ith interior support for first interior reaction
# eid_pii = deflection coefficient at ith interior support for ith interior reaction
#
# Inputs:
# delta = [eis0, eisL, eid1,...,eidi], list of deformation results for pin-pin beam from loading
# --note: deformation results must be in the order shown--
# reaction_points = [p1,....,pi], list of locations of redundant interior supports
# L = beam span
# fem = [1,1], where a 1 signifies the location is fixed
#
# Assumptions:
# 1. consistent units are used for the inputs
# 2. the deformations entered are the actual deformations not
# the inverse ie not the restoring deformation.
#
#######################################################################################################
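    # Editor's note: the coefficient matrix assembled below is built from unit
    # redundants -- point_moment(1, ...) at each fixed end and pl(1, ...) at each
    # interior support -- so every entry is the slope or deflection caused by a
    # unit action, and solving the system recovers the actual fixed end moments
    # and interior reactions.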
#build the coefficient matrix rows and the deflection values
coeff_matrix = []
delta = [-1.0*x for x in delta]
#Start Moment Component
mo = point_moment(1,0,L)
ml = point_moment(1,L,L)
coeff_matrix.append([mo.eisx(0)*fem[0],ml.eisx(0)*fem[1]])
coeff_matrix.append([mo.eisx(L)*fem[0],ml.eisx(L)*fem[1]])
for support in reaction_points:
a = support
point_load = pl(1,a,L)
coeff_row = []
coeff_row.append(mo.eidx(a)*fem[0])
coeff_row.append(ml.eidx(a)*fem[1])
for point in reaction_points:
x = point
new_pl = pl(1,x,L)
eid_p = new_pl.eidx(a)
coeff_row.append(eid_p)
coeff_matrix[0].append(point_load.eisx(0))
coeff_matrix[1].append(point_load.eisx(L))
coeff_matrix.append(coeff_row)
d = np.array(delta)
coeff = np.array(coeff_matrix)
if fem == [0,1]:
d = np.delete(d, (0), axis=0)
coeff = np.delete(coeff, (0), axis=0)
coeff = np.delete(coeff, (0), axis=1)
reaction_points = [0] + reaction_points
elif fem == [1,0]:
d = np.delete(d, (1), axis=0)
coeff = np.delete(coeff, (1), axis=0)
coeff = np.delete(coeff, (1), axis=1)
reaction_points = [0] + reaction_points
elif fem == [0,0]:
d = np.delete(d, (0), axis=0)
coeff = np.delete(coeff, (0), axis=0)
coeff = np.delete(coeff, (0), axis=1)
d = np.delete(d, (0), axis=0)
coeff = np.delete(coeff, (0), axis=0)
coeff = np.delete(coeff, (0), axis=1)
else:
reaction_points = [0,0] + reaction_points
R = np.linalg.solve(coeff,d)
#List of reactions defined as loads from class types above
reactions_as_loads = []
i = 0
for reaction in R:
if (fem == [1,0] or fem == [1,1]) and i == 0:
m = reaction
reactions_as_loads.append(point_moment(m,0,L))
elif fem == [0,1] and i == 0:
m = reaction
reactions_as_loads.append(point_moment(m,L,L))
elif fem == [1,1] and i == 1:
m = reaction
reactions_as_loads.append(point_moment(m,L,L))
else:
p = reaction
a = reaction_points[i]
reactions_as_loads.append(pl(p,a,L))
i+=1
return R, reactions_as_loads
def center_span_piecewise_function(loads):
'''
    Build the full piecewise function set for a single span
Input: lists of loads as defined above
output: lists of piecewise functions and list of piecewise functions as text strings
It is assumed all loads have the same span length defined
'''
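    # Editor's usage sketch (hypothetical single point load on an 8 unit span):
    #   funcs, func_strings = center_span_piecewise_function([pl(10, 4, 8)])
    #   v_pieces, m_pieces, eis_pieces, eid_pieces = funcs
    # where each *_pieces entry is [[coefficients], [x_start, x_end]].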
    # Gather load start and end locations; these define how the functions will be split
ab = []
ab.append(loads[0].L)
for load in loads:
if load.kind == "Point" or load.kind == "Moment":
ab.append(load.a)
elif load.kind == "NL" or load.kind == "END_DELTA":
pass
else:
ab.append(load.a)
ab.append(load.b)
ab = list(set(ab))
ab.sort()
v_out = []
m_out = []
eis_out = []
eid_out = []
count=0
for i in ab:
if count == 0:
piece_range = [0,i]
else:
piece_range = [ab[count-1],i]
if piece_range == [0,0]:
pass
else:
v = []
m = []
eis = []
eid = []
for load in loads:
func, func_strings = load.piece_functions()
#Shear
for piece in func[0]:
if piece[1][0] < piece_range[1] and piece[1][1] >= piece_range[1]:
eq_len_delta = len(piece[0]) - len(v) # difference in number of coefficients
if eq_len_delta > 0:
v.extend([0]*eq_len_delta)
elif eq_len_delta<0:
piece[0].extend([0]*abs(eq_len_delta))
else:
pass
v = [sum(x) for x in zip(piece[0],v)]
else:
pass
#Moment
for piece in func[1]:
if piece[1][0] < piece_range[1] and piece[1][1] >= piece_range[1]:
eq_len_delta = len(piece[0]) - len(m) # difference in number of coefficients
if eq_len_delta > 0:
m.extend([0]*eq_len_delta)
elif eq_len_delta<0:
piece[0].extend([0]*abs(eq_len_delta))
else:
pass
m = [sum(x) for x in zip(piece[0],m)]
else:
pass
#EIS
for piece in func[2]:
if piece[1][0] < piece_range[1] and piece[1][1] >= piece_range[1]:
eq_len_delta = len(piece[0]) - len(eis) # difference in number of coefficients
if eq_len_delta > 0:
eis.extend([0]*eq_len_delta)
elif eq_len_delta<0:
piece[0].extend([0]*abs(eq_len_delta))
else:
pass
eis = [sum(x) for x in zip(piece[0],eis)]
else:
pass
#EID
for piece in func[3]:
if piece[1][0] < piece_range[1] and piece[1][1] >= piece_range[1]:
eq_len_delta = len(piece[0]) - len(eid) # difference in number of coefficients
if eq_len_delta > 0:
eid.extend([0]*eq_len_delta)
elif eq_len_delta<0:
piece[0].extend([0]*abs(eq_len_delta))
else:
pass
eid = [sum(x) for x in zip(piece[0],eid)]
else:
pass
v_out.append([v,piece_range])
m_out.append([m,piece_range])
eis_out.append([eis,piece_range])
eid_out.append([eid,piece_range])
count +=1
vs = PieceFunctionString(v_out)
ms = PieceFunctionString(m_out)
eiss = PieceFunctionString(eis_out)
eids = PieceFunctionString(eid_out)
return [v_out, m_out, eis_out, eid_out],[vs, ms, eiss, eids]
def eval_beam_piece_function(piece_function,x):
'''
    Given the piecewise beam functions and a location, evaluate the results
return a list of [V,M,EIS,EID]
'''
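    # Editor's note: piece_function is expected to be the first list returned by
    # center_span_piecewise_function, i.e. [v_pieces, m_pieces, eis_pieces, eid_pieces],
    # so e.g. eval_beam_piece_function(funcs, 2.5) -> [V, M, EI*slope, EI*deflection].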
res = []
for func in piece_function:
for line in func:
if line[1][0] == 0 and x ==0:
res.append(poly_eval(line[0],x))
if line[1][0] < x <= line[1][1]:
res.append(poly_eval(line[0],x))
else:
pass
return res
def points_of_zero_shear(shear_piece_function):
'''
    Given the piecewise shear function for the beam, return a list
    of the locations of zero shear or where shear jumps from + to -, i.e.
    at point loads
'''
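    # Editor's note: candidate locations come from the real roots of each
    # polynomial piece (via np.roots) plus piece boundaries where the shear
    # changes sign, which is where the bending moment peaks.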
zero_loc = []
i=0
for line in shear_piece_function:
if len(line[0]) == 1 and i==0:
pass # If function is a value then there is no chance for a sign change
else:
a = poly_eval(line[0], line[1][0]+0.0001) # value at start of bounds
b = poly_eval(line[0], line[1][1]-0.0001) # value at end of bounds
if a==0:
zero_loc.append(line[1][0])
elif b==0:
zero_loc.append(line[1][1])
else:
            # if signs are the same, a/d will result in a positive value
coeff = line[0][::-1]
c = np.roots(coeff)
c = c.real[abs(c.imag)<1e-5]
for root in c:
if line[1][0] < root <= line[1][1]:
zero_loc.append(root)
else:
pass
if i==0:
pass
else:
d = poly_eval(shear_piece_function[i-1][0], line[1][0]-0.0001) # value at end of previous bounds
if d == 0:
pass
elif a/d < 0:
zero_loc.append(line[1][0])
else:
pass
i+=1
zero_loc = sorted(set(zero_loc))
return zero_loc
| 34.701763 | 345 | 0.468782 | 21,009 | 135,788 | 2.978771 | 0.024609 | 0.061121 | 0.044582 | 0.011825 | 0.908646 | 0.888241 | 0.86734 | 0.8392 | 0.810981 | 0.78845 | 0 | 0.050776 | 0.368221 | 135,788 | 3,912 | 346 | 34.710634 | 0.678708 | 0.151994 | 0 | 0.793028 | 0 | 0.000367 | 0.011261 | 0.001744 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075229 | false | 0.010275 | 0.001835 | 0 | 0.152294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
22afba8df61c9368c7249643aaaf240ddff43b21 | 24,168 | py | Python | geokey/core/tests/logger/test_log_project.py | universityofsussex/geokey | 25e161dbc81841c57c148053dbe99facc81e84b8 | [
"Apache-2.0"
] | null | null | null | geokey/core/tests/logger/test_log_project.py | universityofsussex/geokey | 25e161dbc81841c57c148053dbe99facc81e84b8 | [
"Apache-2.0"
] | null | null | null | geokey/core/tests/logger/test_log_project.py | universityofsussex/geokey | 25e161dbc81841c57c148053dbe99facc81e84b8 | [
"Apache-2.0"
] | null | null | null | """Tests for logger: model Project."""
from django.test import TestCase
from django.contrib.gis.geos import GEOSGeometry
from geokey.core.models import LoggerHistory
from geokey.users.tests.model_factories import UserFactory
from geokey.projects.tests.model_factories import ProjectFactory, AdminsFactory
class LogProjectTest(TestCase):
"""Test model Project."""
def setUp(self):
"""Set up test."""
self.user = UserFactory.create()
self.project = ProjectFactory.create(**{
'creator': self.user})
def test_log_create(self):
"""Test when project gets created."""
log_count_init = LoggerHistory.objects.count()
project = ProjectFactory.create(**{'creator': self.user})
log_count = LoggerHistory.objects.count()
self.assertEqual(log_count, log_count_init + 2)
logs = LoggerHistory.objects.all().order_by('-pk')[:2]
# Project gets created
self.assertNotEqual(logs[1].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[1].project, {
'id': str(project.id),
'name': project.name})
self.assertEqual(logs[1].usergroup, None)
self.assertEqual(logs[1].category, None)
self.assertEqual(logs[1].field, None)
self.assertEqual(logs[1].location, None)
self.assertEqual(logs[1].observation, None)
self.assertEqual(logs[1].comment, None)
self.assertEqual(logs[1].subset, None)
self.assertEqual(logs[1].action, {
'id': 'created',
'class': 'Project'})
self.assertEqual(logs[1].historical, None)
# Project creator gets added as admin
self.assertNotEqual(logs[0].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[0].project, {
'id': str(project.id),
'name': project.name})
self.assertEqual(logs[0].usergroup, None)
self.assertEqual(logs[0].category, None)
self.assertEqual(logs[0].field, None)
self.assertEqual(logs[0].location, None)
self.assertEqual(logs[0].observation, None)
self.assertEqual(logs[0].comment, None)
self.assertEqual(logs[0].subset, None)
self.assertEqual(logs[0].action, {
'id': 'created',
'class': 'Admins',
'user_id': str(self.user.id),
'user_display_name': self.user.display_name})
self.assertEqual(logs[0].historical, None)
def test_log_delete(self):
"""Test when project gets deleted."""
project_id = self.project.id
project_name = self.project.name
log_count_init = LoggerHistory.objects.count()
self.project.delete()
log_count = LoggerHistory.objects.count()
self.assertEqual(log_count, log_count_init + 2)
logs = LoggerHistory.objects.all().order_by('-pk')[:2]
# Project creator gets removed from admins
self.assertNotEqual(logs[1].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[1].project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(logs[1].usergroup, None)
self.assertEqual(logs[1].category, None)
self.assertEqual(logs[1].field, None)
self.assertEqual(logs[1].location, None)
self.assertEqual(logs[1].observation, None)
self.assertEqual(logs[1].comment, None)
self.assertEqual(logs[1].subset, None)
self.assertEqual(logs[1].action, {
'id': 'deleted',
'class': 'Admins',
'user_id': str(self.user.id),
'user_display_name': self.user.display_name})
self.assertEqual(logs[1].historical, None)
# Project gets deleted
self.assertNotEqual(logs[0].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[0].project, {
'id': str(project_id),
'name': project_name})
self.assertEqual(logs[0].usergroup, None)
self.assertEqual(logs[0].category, None)
self.assertEqual(logs[0].field, None)
self.assertEqual(logs[0].location, None)
self.assertEqual(logs[0].observation, None)
self.assertEqual(logs[0].comment, None)
self.assertEqual(logs[0].subset, None)
self.assertEqual(logs[0].action, {
'id': 'deleted',
'class': 'Project',
'field': 'status',
'value': 'deleted'})
history = self.project.history.get(pk=logs[0].historical.get('id'))
self.assertEqual(history.id, project_id)
self.assertEqual(history.name, project_name)
def test_log_update_name(self):
"""Test when name changes."""
log_count_init = LoggerHistory.objects.count()
original_name = self.project.name
self.project.name = '%s UPDATED' % self.project.name
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'name'})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.name, original_name)
def test_log_update_status(self):
"""Test when status changes."""
log_count_init = LoggerHistory.objects.count()
original_status = self.project.status
self.project.status = 'inactive'
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'status',
'value': self.project.status})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.status, original_status)
original_status = self.project.status
self.project.status = 'active'
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'status',
'value': self.project.status})
self.assertEqual(log_count, log_count_init + 2)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.status, original_status)
def test_log_update_isprivate(self):
"""Test when privacy changes."""
log_count_init = LoggerHistory.objects.count()
original_isprivate = self.project.isprivate
self.project.isprivate = False
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'isprivate',
'value': str(self.project.isprivate)})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.isprivate, original_isprivate)
original_isprivate = self.project.isprivate
self.project.isprivate = True
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'isprivate',
'value': str(self.project.isprivate)})
self.assertEqual(log_count, log_count_init + 2)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.isprivate, original_isprivate)
def test_log_update_islocked(self):
"""Test when locker changes."""
log_count_init = LoggerHistory.objects.count()
original_islocked = self.project.islocked
self.project.islocked = True
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'islocked',
'value': str(self.project.islocked)})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.islocked, original_islocked)
original_islocked = self.project.islocked
self.project.islocked = False
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'islocked',
'value': str(self.project.islocked)})
self.assertEqual(log_count, log_count_init + 2)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.islocked, original_islocked)
def test_log_update_contributing_permissions(self):
"""Test when contributing permissions changes."""
log_count_init = LoggerHistory.objects.count()
original_everyone_contributes = self.project.everyone_contributes
self.project.everyone_contributes = 'auth'
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'everyone_contributes',
'value': self.project.everyone_contributes})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(
history.everyone_contributes,
original_everyone_contributes)
original_everyone_contributes = self.project.everyone_contributes
self.project.everyone_contributes = 'false'
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'everyone_contributes',
'value': self.project.everyone_contributes})
self.assertEqual(log_count, log_count_init + 2)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(
history.everyone_contributes,
original_everyone_contributes)
original_everyone_contributes = self.project.everyone_contributes
self.project.everyone_contributes = 'true'
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'everyone_contributes',
'value': self.project.everyone_contributes})
self.assertEqual(log_count, log_count_init + 3)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(
history.everyone_contributes,
original_everyone_contributes)
def test_log_update_geographic_extent(self):
"""Test when geographic extent changes."""
log_count_init = LoggerHistory.objects.count()
original_geographic_extent = self.project.geographic_extent
self.project.geographic_extent = GEOSGeometry(
'{"type": "Polygon","coordinates":'
'[[[-0.505,51.682],[-0.53,51.327],'
'[0.225,51.323],[0.167,51.667],[-0.505,51.682]]]}')
self.project.save()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'updated',
'class': 'Project',
'field': 'geographic_extent'})
self.assertEqual(log_count, log_count_init + 1)
history = self.project.history.get(pk=log.historical.get('id'))
self.assertEqual(history.id, self.project.id)
self.assertEqual(history.geographic_extent, original_geographic_extent)
def test_log_update_multiple_fields(self):
"""Test when multiple model fields changes."""
log_count_init = LoggerHistory.objects.count()
original_isprivate = self.project.isprivate
original_islocked = self.project.islocked
self.project.isprivate = False
self.project.islocked = True
self.project.save()
log_count = LoggerHistory.objects.count()
self.assertEqual(log_count, log_count_init + 2)
logs = LoggerHistory.objects.all().order_by('-pk')[:2]
# 1st changed field
self.assertNotEqual(logs[1].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[1].project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(logs[1].category, None)
self.assertEqual(logs[1].field, None)
self.assertEqual(logs[1].action, {
'id': 'updated',
'class': 'Project',
'field': 'isprivate',
'value': str(self.project.isprivate)})
history_2 = self.project.history.get(pk=logs[1].historical.get('id'))
self.assertEqual(history_2.id, self.project.id)
self.assertEqual(history_2.isprivate, original_isprivate)
self.assertEqual(history_2.islocked, original_islocked)
# 2nd changed field
self.assertNotEqual(logs[0].user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(logs[0].project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(logs[0].category, None)
self.assertEqual(logs[0].field, None)
self.assertEqual(logs[0].action, {
'id': 'updated',
'class': 'Project',
'field': 'islocked',
'value': str(self.project.islocked)})
history_1 = self.project.history.get(pk=logs[0].historical.get('id'))
self.assertEqual(history_1.id, self.project.id)
self.assertEqual(history_1.isprivate, original_isprivate)
self.assertEqual(history_1.islocked, original_islocked)
# History entry is only one per save
self.assertEqual(history_1, history_2)
def test_log_add_admin(self):
"""Test when admin is added."""
log_count_init = LoggerHistory.objects.count()
new_admin = UserFactory.create()
AdminsFactory.create(project=self.project, user=new_admin)
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'created',
'class': 'Admins',
'user_id': str(new_admin.id),
'user_display_name': new_admin.display_name})
self.assertEqual(log_count, log_count_init + 1)
self.assertEqual(log.historical, None)
def test_log_remove_admin(self):
"""Test when admin is removed."""
existing_admin = UserFactory.create()
admins_relation = AdminsFactory.create(
project=self.project,
user=existing_admin)
log_count_init = LoggerHistory.objects.count()
admins_relation.delete()
log = LoggerHistory.objects.last()
log_count = LoggerHistory.objects.count()
self.assertNotEqual(log.user, {
'id': str(self.user.id),
'display_name': self.user.display_name})
self.assertEqual(log.project, {
'id': str(self.project.id),
'name': self.project.name})
self.assertEqual(log.usergroup, None)
self.assertEqual(log.category, None)
self.assertEqual(log.field, None)
self.assertEqual(log.location, None)
self.assertEqual(log.observation, None)
self.assertEqual(log.comment, None)
self.assertEqual(log.subset, None)
self.assertEqual(log.action, {
'id': 'deleted',
'class': 'Admins',
'user_id': str(existing_admin.id),
'user_display_name': existing_admin.display_name})
self.assertEqual(log_count, log_count_init + 1)
self.assertEqual(log.historical, None)
| 40.146179 | 79 | 0.614904 | 2,671 | 24,168 | 5.466492 | 0.047173 | 0.218821 | 0.166427 | 0.137114 | 0.909184 | 0.88898 | 0.844326 | 0.831724 | 0.789056 | 0.788782 | 0 | 0.007265 | 0.253889 | 24,168 | 601 | 80 | 40.212978 | 0.802462 | 0.024785 | 0 | 0.860465 | 0 | 0.001938 | 0.057892 | 0.00447 | 0 | 0 | 0 | 0 | 0.449612 | 1 | 0.023256 | false | 0 | 0.00969 | 0 | 0.034884 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
22d9dd5f9193e034e11a2a92e5b390f7bc2b99ef | 18,251 | py | Python | sdk/python/pulumi_aws/wafregional/rule.py | rapzo/pulumi-aws | 390a098221315d98a54ba97d1559e750dc3053b7 | [
"ECL-2.0",
"Apache-2.0"
] | 260 | 2018-06-18T14:57:00.000Z | 2022-03-29T11:41:03.000Z | sdk/python/pulumi_aws/wafregional/rule.py | rapzo/pulumi-aws | 390a098221315d98a54ba97d1559e750dc3053b7 | [
"ECL-2.0",
"Apache-2.0"
] | 1,154 | 2018-06-19T20:38:20.000Z | 2022-03-31T19:48:16.000Z | sdk/python/pulumi_aws/wafregional/rule.py | rapzo/pulumi-aws | 390a098221315d98a54ba97d1559e750dc3053b7 | [
"ECL-2.0",
"Apache-2.0"
] | 115 | 2018-06-28T03:20:27.000Z | 2022-03-29T11:41:06.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['RuleArgs', 'Rule']
@pulumi.input_type
class RuleArgs:
def __init__(__self__, *,
metric_name: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None,
predicates: Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a Rule resource.
:param pulumi.Input[str] metric_name: The name or description for the Amazon CloudWatch metric of this rule.
:param pulumi.Input[str] name: The name or description of the rule.
:param pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]] predicates: The objects to include in a rule (documented below).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
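A minimal construction sketch (the resource and metric names below are placeholders; `RuleArgs` is assumed to be re-exported at the `wafregional` package level, as is usual for these generated modules):

```python
import pulumi_aws as aws

# Build the argument bag first, then pass it to the Rule resource.
args = aws.wafregional.RuleArgs(metric_name="tfWAFRule")
wafrule = aws.wafregional.Rule("wafrule", args)
```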
"""
pulumi.set(__self__, "metric_name", metric_name)
if name is not None:
pulumi.set(__self__, "name", name)
if predicates is not None:
pulumi.set(__self__, "predicates", predicates)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> pulumi.Input[str]:
"""
The name or description for the Amazon CloudWatch metric of this rule.
"""
return pulumi.get(self, "metric_name")
@metric_name.setter
def metric_name(self, value: pulumi.Input[str]):
pulumi.set(self, "metric_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name or description of the rule.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def predicates(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]]:
"""
The objects to include in a rule (documented below).
"""
return pulumi.get(self, "predicates")
@predicates.setter
def predicates(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]]):
pulumi.set(self, "predicates", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _RuleState:
def __init__(__self__, *,
arn: Optional[pulumi.Input[str]] = None,
metric_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
predicates: Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering Rule resources.
:param pulumi.Input[str] arn: The ARN of the WAF Regional Rule.
:param pulumi.Input[str] metric_name: The name or description for the Amazon CloudWatch metric of this rule.
:param pulumi.Input[str] name: The name or description of the rule.
:param pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]] predicates: The objects to include in a rule (documented below).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider.
"""
if arn is not None:
pulumi.set(__self__, "arn", arn)
if metric_name is not None:
pulumi.set(__self__, "metric_name", metric_name)
if name is not None:
pulumi.set(__self__, "name", name)
if predicates is not None:
pulumi.set(__self__, "predicates", predicates)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tags_all is not None:
pulumi.set(__self__, "tags_all", tags_all)
@property
@pulumi.getter
def arn(self) -> Optional[pulumi.Input[str]]:
"""
The ARN of the WAF Regional Rule.
"""
return pulumi.get(self, "arn")
@arn.setter
def arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "arn", value)
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> Optional[pulumi.Input[str]]:
"""
The name or description for the Amazon CloudWatch metric of this rule.
"""
return pulumi.get(self, "metric_name")
@metric_name.setter
def metric_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "metric_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name or description of the rule.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def predicates(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]]:
"""
The objects to include in a rule (documented below).
"""
return pulumi.get(self, "predicates")
@predicates.setter
def predicates(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['RulePredicateArgs']]]]):
pulumi.set(self, "predicates", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags assigned to the resource, including those inherited from the provider.
"""
return pulumi.get(self, "tags_all")
@tags_all.setter
def tags_all(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags_all", value)
class Rule(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
metric_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
predicates: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RulePredicateArgs']]]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
Provides a WAF Regional Rule Resource for use with Application Load Balancer.
## Example Usage
```python
import pulumi
import pulumi_aws as aws
ipset = aws.wafregional.IpSet("ipset", ip_set_descriptors=[aws.wafregional.IpSetIpSetDescriptorArgs(
type="IPV4",
value="192.0.7.0/24",
)])
wafrule = aws.wafregional.Rule("wafrule",
metric_name="tfWAFRule",
predicates=[aws.wafregional.RulePredicateArgs(
type="IPMatch",
data_id=ipset.id,
negated=False,
)])
```
## Nested Fields
### `predicate`
See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information.
#### Arguments
* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`
* `data_id` - (Required) The unique identifier of a predicate, such as the ID of a `ByteMatchSet` or `IPSet`.
* `negated` - (Required) Whether to use the settings or the negated settings that you specified in the objects.
## Import
WAF Regional Rule can be imported using the id, e.g.
```sh
$ pulumi import aws:wafregional/rule:Rule wafrule a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc
```
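As a sketch, the same import-style lookup can also be done from a program with the `Rule.get` static method defined in this module (the resource name and rule ID below are placeholders, not values taken from this example):

```python
import pulumi
import pulumi_aws as aws

# Hypothetical ID of a WAF Regional rule that already exists in the account.
existing_rule_id = "00000000-0000-0000-0000-000000000000"

# Read the existing rule's state by ID instead of creating a new resource.
imported = aws.wafregional.Rule.get("imported-wafrule", existing_rule_id)
pulumi.export("imported_rule_arn", imported.arn)
```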
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] metric_name: The name or description for the Amazon CloudWatch metric of this rule.
:param pulumi.Input[str] name: The name or description of the rule.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RulePredicateArgs']]]] predicates: The objects to include in a rule (documented below).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: RuleArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a WAF Regional Rule Resource for use with Application Load Balancer.
## Example Usage
```python
import pulumi
import pulumi_aws as aws
ipset = aws.wafregional.IpSet("ipset", ip_set_descriptors=[aws.wafregional.IpSetIpSetDescriptorArgs(
type="IPV4",
value="192.0.7.0/24",
)])
wafrule = aws.wafregional.Rule("wafrule",
metric_name="tfWAFRule",
predicates=[aws.wafregional.RulePredicateArgs(
type="IPMatch",
data_id=ipset.id,
negated=False,
)])
```
## Nested Fields
### `predicate`
See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information.
#### Arguments
* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`
* `data_id` - (Required) The unique identifier of a predicate, such as the ID of a `ByteMatchSet` or `IPSet`.
* `negated` - (Required) Whether to use the settings or the negated settings that you specified in the objects.
## Import
WAF Regional Rule can be imported using the id, e.g.
```sh
$ pulumi import aws:wafregional/rule:Rule wafrule a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc
```
:param str resource_name: The name of the resource.
:param RuleArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(RuleArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
metric_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
predicates: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RulePredicateArgs']]]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = RuleArgs.__new__(RuleArgs)
if metric_name is None and not opts.urn:
raise TypeError("Missing required property 'metric_name'")
__props__.__dict__["metric_name"] = metric_name
__props__.__dict__["name"] = name
__props__.__dict__["predicates"] = predicates
__props__.__dict__["tags"] = tags
__props__.__dict__["arn"] = None
__props__.__dict__["tags_all"] = None
super(Rule, __self__).__init__(
'aws:wafregional/rule:Rule',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
arn: Optional[pulumi.Input[str]] = None,
metric_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
predicates: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RulePredicateArgs']]]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'Rule':
"""
Get an existing Rule resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The ARN of the WAF Regional Rule.
:param pulumi.Input[str] metric_name: The name or description for the Amazon CloudWatch metric of this rule.
:param pulumi.Input[str] name: The name or description of the rule.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['RulePredicateArgs']]]] predicates: The objects to include in a rule (documented below).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _RuleState.__new__(_RuleState)
__props__.__dict__["arn"] = arn
__props__.__dict__["metric_name"] = metric_name
__props__.__dict__["name"] = name
__props__.__dict__["predicates"] = predicates
__props__.__dict__["tags"] = tags
__props__.__dict__["tags_all"] = tags_all
return Rule(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
The ARN of the WAF Regional Rule.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> pulumi.Output[str]:
"""
The name or description for the Amazon CloudWatch metric of this rule.
"""
return pulumi.get(self, "metric_name")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name or description of the rule.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def predicates(self) -> pulumi.Output[Optional[Sequence['outputs.RulePredicate']]]:
"""
The objects to include in a rule (documented below).
"""
return pulumi.get(self, "predicates")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Key-value map of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> pulumi.Output[Mapping[str, str]]:
"""
A map of tags assigned to the resource, including those inherited from the provider.
"""
return pulumi.get(self, "tags_all")
| 43.044811 | 249 | 0.63728 | 2,173 | 18,251 | 5.19052 | 0.103543 | 0.097526 | 0.065786 | 0.03706 | 0.842007 | 0.826403 | 0.805834 | 0.795372 | 0.79085 | 0.769306 | 0 | 0.004034 | 0.253027 | 18,251 | 423 | 250 | 43.146572 | 0.823296 | 0.402882 | 0 | 0.647619 | 1 | 0 | 0.080362 | 0.004727 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157143 | false | 0.004762 | 0.033333 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
22f8feff245e0f3dd1c1a57154e35ef0f6f31ef1 | 68,579 | py | Python | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_hmmer/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_hmmer/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_hmmer/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 3.77876e-06,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202692,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.27703e-05,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.783991,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.35759,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.778616,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.9202,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.774939,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.43046,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 4.3018e-06,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0284203,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.205516,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.210186,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.20552,
'Execution Unit/Register Files/Runtime Dynamic': 0.238606,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.496612,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.70355,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 5.47042,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00139254,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00139254,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00120365,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000460892,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00301933,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00700806,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0136821,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.202057,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.439982,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.686275,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.349,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0566439,
'L2/Runtime Dynamic': 0.0386797,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 8.91832,
'Load Store Unit/Data Cache/Runtime Dynamic': 4.43502,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.248505,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.295099,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 10.0966,
'Load Store Unit/Runtime Dynamic': 6.18545,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.612772,
'Load Store Unit/StoreQ/Runtime Dynamic': 1.45533,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.217475,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.259097,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0721413,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.934466,
'Memory Management Unit/Runtime Dynamic': 0.331238,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 31.0486,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.51322e-05,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0400892,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.429607,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.469712,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 13.8445,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 2.83407e-06,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202691,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.51802e-05,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.497362,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.802226,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.404937,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.70452,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.568836,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.93646,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 2.86787e-06,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0208616,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.150857,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.154284,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.15086,
'Execution Unit/Register Files/Runtime Dynamic': 0.175146,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.317814,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.0904,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 3.57813,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00103073,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00103073,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000894159,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000344173,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0022163,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00517192,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0100112,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.148317,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.322805,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.503752,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96396,
'Instruction Fetch Unit/Runtime Dynamic': 0.990058,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0415999,
'L2/Runtime Dynamic': 0.0283869,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 7.93432,
'Load Store Unit/Data Cache/Runtime Dynamic': 3.25631,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.21667,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.21667,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 8.95749,
'Load Store Unit/Runtime Dynamic': 4.54153,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.534273,
'Load Store Unit/StoreQ/Runtime Dynamic': 1.06854,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.189615,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.190237,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0529283,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.881829,
'Memory Management Unit/Runtime Dynamic': 0.243165,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 27.3708,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 7.80108e-06,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0224397,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.265736,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.288184,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 9.66945,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 2.83407e-06,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.20269,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.26502e-05,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.502645,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.810747,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.409238,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.72263,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.574878,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.94829,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 2.38989e-06,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0210832,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.152459,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.155923,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.152462,
'Execution Unit/Register Files/Runtime Dynamic': 0.177006,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.32119,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.10211,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 3.60981,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00103936,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00103936,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000901587,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00223985,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00522015,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0100971,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.149893,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.326095,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.509103,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96396,
'Instruction Fetch Unit/Runtime Dynamic': 1.00041,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.042002,
'L2/Runtime Dynamic': 0.0286884,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 8.0065,
'Load Store Unit/Data Cache/Runtime Dynamic': 3.2914,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.219006,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.219006,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 9.0407,
'Load Store Unit/Runtime Dynamic': 4.59046,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.540031,
'Load Store Unit/StoreQ/Runtime Dynamic': 1.08006,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.191659,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.192286,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0534675,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.88534,
'Memory Management Unit/Runtime Dynamic': 0.245754,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 27.4698,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 6.87238e-06,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.022678,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.268564,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.291249,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 9.76638,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 2.83407e-06,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202691,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.51802e-05,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.499716,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.806023,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.406853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.71259,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.571527,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.94173,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 2.86787e-06,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0209603,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.15157,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.155014,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.151573,
'Execution Unit/Register Files/Runtime Dynamic': 0.175975,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.319317,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.09551,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 3.59214,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.0010365,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.0010365,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000899175,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000346106,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00222679,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00519898,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0100672,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.149019,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.32438,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.506137,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96396,
'Instruction Fetch Unit/Runtime Dynamic': 0.994803,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0417883,
'L2/Runtime Dynamic': 0.0285203,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 7.96556,
'Load Store Unit/Data Cache/Runtime Dynamic': 3.27151,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.217681,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.217681,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 8.9935,
'Load Store Unit/Runtime Dynamic': 4.56272,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.536765,
'Load Store Unit/StoreQ/Runtime Dynamic': 1.07353,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.1905,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.191124,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0531866,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.883348,
'Memory Management Unit/Runtime Dynamic': 0.244311,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 27.4138,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 7.65325e-06,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0225459,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.266992,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.289545,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 9.71203,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 9.40011525636836,
'Runtime Dynamic': 9.40011525636836,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.239407,
'Runtime Dynamic': 0.255566,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 113.542,
'Peak Power': 146.655,
'Runtime Dynamic': 43.2479,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 113.303,
'Total Cores/Runtime Dynamic': 42.9924,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.239407,
'Total L3s/Runtime Dynamic': 0.255566,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.031729 | 124 | 0.68171 | 8,098 | 68,579 | 5.767226 | 0.065942 | 0.123675 | 0.113055 | 0.093527 | 0.943366 | 0.935486 | 0.924309 | 0.900006 | 0.877031 | 0.858746 | 0 | 0.130857 | 0.224427 | 68,579 | 914 | 125 | 75.031729 | 0.747217 | 0 | 0 | 0.664114 | 0 | 0 | 0.657699 | 0.048119 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe16d6092a8586e044f7efdd954c817c7659f39e | 1,766 | py | Python | tests/test_1849.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | tests/test_1849.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | tests/test_1849.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import pytest
"""
Test 1849. Splitting a String Into Descending Consecutive Values
"""
@pytest.fixture(scope="session")
def init_variables_1849():
from src.leetcode_1849_splitting_a_string_into_descending_consecutive_values import (
Solution,
)
solution = Solution()
def _init_variables_1849():
return solution
yield _init_variables_1849
class TestClass1849:
def test_solution_0(self, init_variables_1849):
assert not init_variables_1849().splitString("1234")
def test_solution_1(self, init_variables_1849):
assert init_variables_1849().splitString("050043")
def test_solution_2(self, init_variables_1849):
assert not init_variables_1849().splitString("9080701")
def test_solution_3(self, init_variables_1849):
assert init_variables_1849().splitString("10009998")
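# Reference sketch of the rule these tests encode (an illustration only; the
# implementation under test lives in src/leetcode_1849_... and is not shown
# here): the string must split into two or more numbers where each number is
# exactly one less than the previous, e.g. "050043" -> 05, 004, 3.
def _split_string_reference(s: str) -> bool:
    def descends(start: int, prev: int) -> bool:
        if start == len(s):
            return True
        for end in range(start + 1, len(s) + 1):
            cur = int(s[start:end])
            if cur > prev - 1:       # adding digits only grows cur; stop early
                break
            if cur == prev - 1 and descends(end, cur):
                return True
        return False

    # the first number must leave at least one character for the remainder
    return any(descends(end, int(s[:end])) for end in range(1, len(s)))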
| 24.191781 | 89 | 0.738392 | 216 | 1,766 | 5.666667 | 0.189815 | 0.23366 | 0.305556 | 0.137255 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.116598 | 0.174405 | 1,766 | 72 | 90 | 24.527778 | 0.722908 | 0.02265 | 0 | 0.947368 | 0 | 0 | 0.040506 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 1 | 0.315789 | false | 0 | 0.105263 | 0.052632 | 0.526316 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
a3afd2e91caefffef39f80f5046e90bb0db66e8c | 2,099 | py | Python | examples/sparse_conv/sparse_conv.py | AyanKumarBhunia/openvino_pytorch_layers | 4fd2091fab1a0c3240147c00b906a7a233bf392e | [
"Apache-2.0"
] | null | null | null | examples/sparse_conv/sparse_conv.py | AyanKumarBhunia/openvino_pytorch_layers | 4fd2091fab1a0c3240147c00b906a7a233bf392e | [
"Apache-2.0"
] | null | null | null | examples/sparse_conv/sparse_conv.py | AyanKumarBhunia/openvino_pytorch_layers | 4fd2091fab1a0c3240147c00b906a7a233bf392e | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from open3d.ml.torch.layers import SparseConv, SparseConvTranspose
class SparseConvFunc(torch.autograd.Function):
@staticmethod
def symbolic(g, cls, feat, in_pos, out_pos, voxel_size):
kernel = cls.state_dict()["kernel"]
offset = cls.state_dict()["offset"]
kernel = g.op("Constant", value_t=kernel)
offset = g.op("Constant", value_t=offset)
return g.op("org.open3d::SparseConv", feat, in_pos, out_pos, kernel, offset)
@staticmethod
def forward(self, cls, feat, in_pos, out_pos, voxel_size):
return cls.origin_forward(feat, in_pos, out_pos, voxel_size)
class SparseConvONNX(SparseConv):
"""
This is a support class which helps export network with SparseConv in ONNX format.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.origin_forward = super().forward
def forward(self, feat, in_pos, out_pos, voxel_size):
return SparseConvFunc.apply(self, feat, in_pos, out_pos, voxel_size)
class SparseConvTransposeFunc(torch.autograd.Function):
@staticmethod
def symbolic(g, cls, feat, in_pos, out_pos, voxel_size):
kernel = cls.state_dict()["kernel"]
offset = cls.state_dict()["offset"]
kernel = g.op("Constant", value_t=kernel)
offset = g.op("Constant", value_t=offset)
return g.op("org.open3d::SparseConvTranspose", feat, in_pos, out_pos, kernel, offset)
@staticmethod
def forward(self, cls, feat, in_pos, out_pos, voxel_size):
return cls.origin_forward(feat, in_pos, out_pos, voxel_size)
class SparseConvTransposeONNX(SparseConvTranspose):
"""
This is a support class which helps export network with SparseConvTranspose in ONNX format.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.origin_forward = super().forward
def forward(self, feat, in_pos, out_pos, voxel_size):
return SparseConvTransposeFunc.apply(self, feat, in_pos, out_pos, voxel_size)
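# Illustrative usage sketch (not part of the original module): shows how the
# wrapper classes above are meant to be traced into ONNX. The layer sizes,
# point counts and output file name below are made-up assumptions, and the
# constructor arguments mirror open3d's documented SparseConv signature;
# depending on the torch version, extra export flags may be needed for the
# custom "org.open3d" ops.
def _example_export_sparse_conv():
    conv = SparseConvONNX(in_channels=8, filters=16, kernel_size=[3, 3, 3])
    feat = torch.rand(32, 8)             # one feature vector per input point
    in_pos = torch.rand(32, 3) * 2.0     # input point positions
    out_pos = torch.rand(16, 3) * 2.0    # output point positions
    voxel_size = torch.tensor(0.25)
    torch.onnx.export(conv,
                      (feat, in_pos, out_pos, voxel_size),
                      "sparse_conv.onnx",
                      opset_version=11)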
| 37.482143 | 95 | 0.687947 | 279 | 2,099 | 4.953405 | 0.21147 | 0.052098 | 0.078148 | 0.104197 | 0.772069 | 0.772069 | 0.772069 | 0.772069 | 0.768452 | 0.720695 | 0 | 0.001774 | 0.194378 | 2,099 | 55 | 96 | 38.163636 | 0.815494 | 0.082897 | 0 | 0.684211 | 0 | 0 | 0.05755 | 0.027983 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.105263 | 0.105263 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
a3bdcd0f48b5d0029e5b59f3cf32519ab902606e | 27,217 | py | Python | aventura.py | matheusjo9974/jogo-de-matematica | daf45da183d1e8504af9a2de10b8ff3c5b8ed7bb | [
"MIT"
] | null | null | null | aventura.py | matheusjo9974/jogo-de-matematica | daf45da183d1e8504af9a2de10b8ff3c5b8ed7bb | [
"MIT"
] | null | null | null | aventura.py | matheusjo9974/jogo-de-matematica | daf45da183d1e8504af9a2de10b8ff3c5b8ed7bb | [
"MIT"
] | null | null | null | #importando bibliotecas
import os
from random import randint
#Função que inicia a interface do programa
def interface():
print("\n===========================================================\n"
"---------------BEM VINDO AO JOGO DE MATEMÁTICA-------------\n"
"===========================================================\n"
"-----------------------------------------------------------\n"
"O jogo será sobre perguntas ,de matemática, que ficarão mais\n"
"dificeis conforme você for acertando, caso você erre muitas\n"
"perguntas o estágio atual terá de ser reiniciado. Para co-\n"
"mpletar o jogo complete todos os estágios. BOA SORTE!!!!!!!\n"
"-----------------------------------------------------------\n"
"Digite [sair] e tecle ENTER a qualquer momento para SAIR.")
start = input("DIgite [iniciar] e tecle ENTER para INICIAR: ")
#Bloco condicional que identifica se o usuario quer iniciar o jogo ou sair.
if start.lower() == "iniciar":
print("-----------------------------------------------------------\n"
"------------------------Vamos começar----------------------\n"
"-----------------------------------------------------------")
choise_operation()
elif start.lower() == "sair":
exit()
# Caso o valor do usuario for diferente ele trata como incorreto e reinicia o programa.
else:
os.system("cls")
print("\033[31mvalor incorreto. tente novamente\033[m")
interface()
# Função que de escolha da operação que o usuario quer jogar.
def choise_operation():
print("Adição[+]----Subtração[-]----Multiplicação[x]----Divisão[/]")
operation = input("Escolha uma das operações acima e digite seu simbolo: ")
# Bloco condicional que identifica a operação que o usuario escolheu ou se deseja sair do programa.
if operation == "+":
start_stage1()
elif operation == "-":
start_stage4()
elif operation == "x":
start_stage7()
elif operation == "/":
start_stage10()
elif operation.lower() == "sair":
exit()
# Caso o valor não esteja no bloco acima ele define como invalido e chama a função choise_operation() novamente.
else:
print("\033[31mSimbolo incorreto. tente novamente.\033[m")
choise_operation()
# Função que identifica se o usuario digitou um valor numerico ou se deseja sair do programa.
def verify_number(num):
if num.lower() == "sair":
return exit()
try:
float(num)
return num
# Caso seja um valor diferente ele envia uma mensagem de erro e reinicia a função.
except:
pass
print("\033[31mSimbolo invalido. Digite apenas números\033[m")
return verify_number(input())
# Função que mostra as perguntas que o usuario errou. Caso nao tenha errado ele só continua a programa.
def view_error(box):
if len(box) != 0:
print("\033[33mVocê errou as questôes:\033[m", end=" ")
for i in box:
print("\033[33m",i ,"\033[m", end=" ")
print("\n")
else:
return
# Função que formata valores de ponto flutuante para apenas uma casa decimal após a virgula.
def truncate(f):
# Peguei na internet, mas explicarei o que ela faz de uma forma simples.
n = 1 # numero de casa decimais que eu quero.
s = '{}'.format(f) # Converte o valor float para string e armazena em s.
i, p, d = s.partition('.') #s Aqui o float ja convertido em string é particionado em 3 partes. o I recebe a parte inteira, o P recebe o ponto, eo D recebe a parte decimal.
return '.'.join([i, (d+'0'*n)[:n]])
# explicação disso |'.'.join([i, (d+'0'*n)[:n]])| em partes
# |".".join|aqui começa a contenação novamente o ponto será o termo que vai unilos
# entao voce terá o como resultado final |1°Arg.2°Arg| --- Os parâmetros são passados
# dentro do join --- |'.'.join([i, |como primeiro parametro temos o 'i' que é a parte
# inteira do número --- |(d+'0'*n)| completa com zeros caso o numero não tenha as casas
# decimais desejadas --- |[:n]]| formata para o numero de casas decimais escolhida pelo usuario.
# e no final fica |'.'.join([i, (d+'0'*n)[:n]])|
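# Illustrative checks (not part of the original game flow): truncate() keeps
# exactly one decimal digit and never rounds, which is why both sides of the
# comparisons in the division stages are passed through it.
assert truncate(7 / 3) == '2.3'   # 2.333... -> '2.3'
assert truncate(2.0) == '2.0'     # whole numbers keep a trailing '.0'
assert truncate(0.97) == '0.9'    # truncation, not rounding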
# Função com o ultimo estagio do caminho da divisão.
def start_stage12():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 03-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(100, 999)
value2 = randint(100, 999)
print(lace, "- Questão: quanto é:", value1, "/", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if truncate(answer) == truncate(value1 / value2): # Comparação depois dos valores serem formatados com a truncate().
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mPARABÉNS VOCÊ COMPLETOU ESTE JOGO. !!!!!VOCÊ É DEMAIS!!!!!!\033[m\n"
"-----------------------------------------------------------")
exit() # Finaliza o jogo caso ele tenha conseguido alcançar os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage12() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o segundo estagio do caminho da divisão.
def start_stage11():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 02-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(10, 99)
value2 = randint(10, 99)
print(lace, "- Questão: quanto é:", value1, "/", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if truncate(answer) == truncate(value1 / value2): # Comparação depois dos valores serem formatados com a truncate().
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage12() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage11() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o primeiro estagio do caminho da divisão.
def start_stage10():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 01-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(0, 9)
value2 = randint(1, 9)
print(lace, "- Questão: quanto é:", value1, "/", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if truncate(answer) == truncate(value1 / value2): # Comparação depois dos valores serem formatados com a truncate().
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage11() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage10() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o ultimo estagio do caminho da multiplicação.
def start_stage9():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 03-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(100, 999)
value2 = randint(100, 999)
print(lace, "- Questão: quanto é:", value1, "x", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 * value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mPARABÉNS VOCÊ COMPLETOU ESTE JOGO. !!!!!VOCÊ É DEMAIS!!!!!!\033[m\n"
"-----------------------------------------------------------")
exit() # Finaliza o jogo caso ele tenha conseguido alcançar os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage9() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o segundo estagio do caminho da multiplicação.
def start_stage8():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 02-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(10, 99)
value2 = randint(10, 99)
print(lace, "- Questão: quanto é:", value1, "x", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 * value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage9() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage8() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o primeiro estagio do caminho da multiplicação.
def start_stage7():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 01-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(0, 9)
value2 = randint(0, 9)
print(lace, "- Questão: quanto é:", value1, "x", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 * value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage8() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage7() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o ultimo estagio do caminho da subtração.
def start_stage6():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 03-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(100, 999)
value2 = randint(100, 999)
print(lace, "- Questão: quanto é:", value1, "-", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 - value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mPARABÉNS VOCÊ COMPLETOU ESTE JOGO. !!!!!VOCÊ É DEMAIS!!!!!!\033[m\n"
"-----------------------------------------------------------")
exit() # Finaliza o jogo caso ele tenha conseguido alcançar os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage6() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o segundo estagio do caminho da subtração.
def start_stage5():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 02-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(10, 99)
value2 = randint(10, 99)
print(lace, "- Questão: quanto é:", value1, "-", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 - value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage6() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage5() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o primeiro estagio do caminho da subtração.
def start_stage4():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 01-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(0, 9)
value2 = randint(0, 9)
print(lace, "- Questão: quanto é:", value1, "-", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 - value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage5() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage4() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o ultimo estagio do caminho da adição.
def start_stage3():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 03-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(100, 999)
value2 = randint(100, 999)
print(lace, "- Questão: quanto é:", value1, "+", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 + value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mPARABÉNS VOCÊ COMPLETOU ESTE JOGO. !!!!!VOCÊ É DEMAIS!!!!!!\033[m\n"
"-----------------------------------------------------------")
exit() # Finaliza o jogo caso ele tenha conseguido alcançar os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage3() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o segundo estagio do caminho da adição.
def start_stage2():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 02-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(10, 99)
value2 = randint(10, 99)
print(lace, "- Questão: quanto é:", value1, "+", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 + value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage3() # Chama o proximo estagio caso o usuario tenha conseguido os minimos.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("Infelismente você não conseguiu terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage2() # Reinicia a função caso o usuario nao tenha conseguido alcançar o valor minimo.
# Função com o primeiro estagio do caminho da adição.
def start_stage1():
global storage_error
hits = 0
lace = 1
storage_error = [] # Vetor apenas para armazenar as perguntas incorretas.
print("--------------------------STAGE 01-------------------------\n"
"-----------------------------------------------------------")
while lace < 6: # Laço que faz as 5 perguntas.
value1 = randint(0, 9)
value2 = randint(0, 9)
print(lace, "- Questão: quanto é:", value1, "+", value2)
answer = verify_number(input()) # Chamada a função verify_number() que verifica se o que o usuario digitou é um número.
if int(answer) == value1 + value2: # comparação entre o valor do usuario e a resposta do problema.
hits += 1
else:
storage_error.append(lace)
lace += 1
if hits >= 3:
print("-----------------------------------------------------------\n"
"você acertou", hits, "de um total de 5 questês.")
view_error(storage_error) # Chamada a função que mostra os erros passando de parâmetro o vetor.
print("\033[32mParabéns você completou esse estágio, agora seguiremos para\n"
"o próximo. Boa sorte!!!!!\033[m\n"
"-----------------------------------------------------------")
start_stage2() # Call the next stage once the user has reached the minimum score.
else:
print("-----------------------------------------------------------")
view_error(storage_error) # Call the function that shows the errors, passing the list as a parameter.
print("Infelizmente você não conseguiu, terá de recomeçar. Boa sorte\n"
"-----------------------------------------------------------")
start_stage1() # Restart this function if the user did not reach the minimum score.
# Call to the function that starts the program's interface.
interface() | 55.887064 | 176 | 0.50902 | 2,997 | 27,217 | 4.572906 | 0.100434 | 0.052536 | 0.037796 | 0.031011 | 0.826779 | 0.819482 | 0.810653 | 0.790879 | 0.788398 | 0.788398 | 0 | 0.023303 | 0.246353 | 27,217 | 487 | 177 | 55.887064 | 0.644745 | 0.313885 | 0 | 0.859122 | 0 | 0 | 0.442463 | 0.265478 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039261 | false | 0.002309 | 0.004619 | 0 | 0.055427 | 0.187067 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
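The three start_stage functions above differ only in their randint operand range and in what runs after a passing score; the following is a minimal, consolidated Python 3 sketch of that shared pattern. It is not part of the original file: an inline digit check stands in for verify_number()/view_error(), and the stage ranges shown in the wiring comment are assumptions.

```python
from random import randint

def run_stage(stage_number, low, high, on_success):
    """Ask five addition questions with operands in [low, high]; 3+ correct answers advance via on_success."""
    errors = []  # question numbers that were answered incorrectly
    hits = 0
    print("--------------------------STAGE {:02d}-------------------------".format(stage_number))
    for question in range(1, 6):  # five questions per stage
        a, b = randint(low, high), randint(low, high)
        answer = input("{} - Questão: quanto é: {} + {} ? ".format(question, a, b)).strip()
        if answer.isdigit() and int(answer) == a + b:
            hits += 1
        else:
            errors.append(question)
    print("você acertou", hits, "de um total de 5 questões.", "Erros:", errors)
    if hits >= 3:
        on_success()  # advance to the next stage, or finish the game
    else:
        run_stage(stage_number, low, high, on_success)  # retry the same stage

# Wiring that mirrors the original progression (the stage-3 range is an assumption):
# run_stage(1, 0, 9, lambda: run_stage(2, 10, 99, lambda: run_stage(3, 100, 999, exit)))
```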
a3e08db50c0fa35a47c939280d8a58b9ae928cd4 | 31,868 | py | Python | sdk/python/pulumi_oci/core/remote_peering_connection.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/core/remote_peering_connection.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/core/remote_peering_connection.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['RemotePeeringConnectionArgs', 'RemotePeeringConnection']
@pulumi.input_type
class RemotePeeringConnectionArgs:
def __init__(__self__, *,
compartment_id: pulumi.Input[str],
drg_id: pulumi.Input[str],
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
peer_id: Optional[pulumi.Input[str]] = None,
peer_region_name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a RemotePeeringConnection resource.
:param pulumi.Input[str] compartment_id: (Updatable) The OCID of the compartment to contain the RPC.
:param pulumi.Input[str] drg_id: The OCID of the DRG the RPC belongs to.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
:param pulumi.Input[str] display_name: (Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] peer_id: The OCID of the RPC you want to peer with.
:param pulumi.Input[str] peer_region_name: The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
"""
pulumi.set(__self__, "compartment_id", compartment_id)
pulumi.set(__self__, "drg_id", drg_id)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if peer_id is not None:
pulumi.set(__self__, "peer_id", peer_id)
if peer_region_name is not None:
pulumi.set(__self__, "peer_region_name", peer_region_name)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Input[str]:
"""
(Updatable) The OCID of the compartment to contain the RPC.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: pulumi.Input[str]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="drgId")
def drg_id(self) -> pulumi.Input[str]:
"""
The OCID of the DRG the RPC belongs to.
"""
return pulumi.get(self, "drg_id")
@drg_id.setter
def drg_id(self, value: pulumi.Input[str]):
pulumi.set(self, "drg_id", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter(name="peerId")
def peer_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the RPC you want to peer with.
"""
return pulumi.get(self, "peer_id")
@peer_id.setter
def peer_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peer_id", value)
@property
@pulumi.getter(name="peerRegionName")
def peer_region_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
"""
return pulumi.get(self, "peer_region_name")
@peer_region_name.setter
def peer_region_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peer_region_name", value)
@pulumi.input_type
class _RemotePeeringConnectionState:
def __init__(__self__, *,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
drg_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
is_cross_tenancy_peering: Optional[pulumi.Input[bool]] = None,
peer_id: Optional[pulumi.Input[str]] = None,
peer_region_name: Optional[pulumi.Input[str]] = None,
peer_tenancy_id: Optional[pulumi.Input[str]] = None,
peering_status: Optional[pulumi.Input[str]] = None,
state: Optional[pulumi.Input[str]] = None,
time_created: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering RemotePeeringConnection resources.
:param pulumi.Input[str] compartment_id: (Updatable) The OCID of the compartment to contain the RPC.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
:param pulumi.Input[str] display_name: (Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
:param pulumi.Input[str] drg_id: The OCID of the DRG the RPC belongs to.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[bool] is_cross_tenancy_peering: Whether the VCN at the other end of the peering is in a different tenancy. Example: `false`
:param pulumi.Input[str] peer_id: The OCID of the RPC you want to peer with.
:param pulumi.Input[str] peer_region_name: The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
:param pulumi.Input[str] peer_tenancy_id: If this RPC is peered, this value is the OCID of the other RPC's tenancy.
:param pulumi.Input[str] peering_status: Whether the RPC is peered with another RPC. `NEW` means the RPC has not yet been peered. `PENDING` means the peering is being established. `REVOKED` means the RPC at the other end of the peering has been deleted.
:param pulumi.Input[str] state: The RPC's current lifecycle state.
:param pulumi.Input[str] time_created: The date and time the RPC was created, in the format defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2016-08-25T21:10:29.600Z`
"""
if compartment_id is not None:
pulumi.set(__self__, "compartment_id", compartment_id)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if drg_id is not None:
pulumi.set(__self__, "drg_id", drg_id)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if is_cross_tenancy_peering is not None:
pulumi.set(__self__, "is_cross_tenancy_peering", is_cross_tenancy_peering)
if peer_id is not None:
pulumi.set(__self__, "peer_id", peer_id)
if peer_region_name is not None:
pulumi.set(__self__, "peer_region_name", peer_region_name)
if peer_tenancy_id is not None:
pulumi.set(__self__, "peer_tenancy_id", peer_tenancy_id)
if peering_status is not None:
pulumi.set(__self__, "peering_status", peering_status)
if state is not None:
pulumi.set(__self__, "state", state)
if time_created is not None:
pulumi.set(__self__, "time_created", time_created)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The OCID of the compartment to contain the RPC.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="drgId")
def drg_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the DRG the RPC belongs to.
"""
return pulumi.get(self, "drg_id")
@drg_id.setter
def drg_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "drg_id", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter(name="isCrossTenancyPeering")
def is_cross_tenancy_peering(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the VCN at the other end of the peering is in a different tenancy. Example: `false`
"""
return pulumi.get(self, "is_cross_tenancy_peering")
@is_cross_tenancy_peering.setter
def is_cross_tenancy_peering(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_cross_tenancy_peering", value)
@property
@pulumi.getter(name="peerId")
def peer_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the RPC you want to peer with.
"""
return pulumi.get(self, "peer_id")
@peer_id.setter
def peer_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peer_id", value)
@property
@pulumi.getter(name="peerRegionName")
def peer_region_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
"""
return pulumi.get(self, "peer_region_name")
@peer_region_name.setter
def peer_region_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peer_region_name", value)
@property
@pulumi.getter(name="peerTenancyId")
def peer_tenancy_id(self) -> Optional[pulumi.Input[str]]:
"""
If this RPC is peered, this value is the OCID of the other RPC's tenancy.
"""
return pulumi.get(self, "peer_tenancy_id")
@peer_tenancy_id.setter
def peer_tenancy_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peer_tenancy_id", value)
@property
@pulumi.getter(name="peeringStatus")
def peering_status(self) -> Optional[pulumi.Input[str]]:
"""
Whether the RPC is peered with another RPC. `NEW` means the RPC has not yet been peered. `PENDING` means the peering is being established. `REVOKED` means the RPC at the other end of the peering has been deleted.
"""
return pulumi.get(self, "peering_status")
@peering_status.setter
def peering_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "peering_status", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
The RPC's current lifecycle state.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> Optional[pulumi.Input[str]]:
"""
The date and time the RPC was created, in the format defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2016-08-25T21:10:29.600Z`
"""
return pulumi.get(self, "time_created")
@time_created.setter
def time_created(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_created", value)
class RemotePeeringConnection(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
drg_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
peer_id: Optional[pulumi.Input[str]] = None,
peer_region_name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
This resource provides the Remote Peering Connection resource in Oracle Cloud Infrastructure Core service.
Creates a new remote peering connection (RPC) for the specified DRG.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_remote_peering_connection = oci.core.RemotePeeringConnection("testRemotePeeringConnection",
compartment_id=var["compartment_id"],
drg_id=oci_core_drg["test_drg"]["id"],
defined_tags={
"Operations.CostCenter": "42",
},
display_name=var["remote_peering_connection_display_name"],
freeform_tags={
"Department": "Finance",
},
peer_id=oci_core_remote_peering_connection["test_remote_peering_connection2"]["id"],
peer_region_name=var["remote_peering_connection_peer_region_name"])
```
## Import
RemotePeeringConnections can be imported using the `id`, e.g.
```sh
$ pulumi import oci:core/remotePeeringConnection:RemotePeeringConnection test_remote_peering_connection "id"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compartment_id: (Updatable) The OCID of the compartment to contain the RPC.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
:param pulumi.Input[str] display_name: (Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
:param pulumi.Input[str] drg_id: The OCID of the DRG the RPC belongs to.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] peer_id: The OCID of the RPC you want to peer with.
:param pulumi.Input[str] peer_region_name: The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: RemotePeeringConnectionArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Remote Peering Connection resource in Oracle Cloud Infrastructure Core service.
Creates a new remote peering connection (RPC) for the specified DRG.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_remote_peering_connection = oci.core.RemotePeeringConnection("testRemotePeeringConnection",
compartment_id=var["compartment_id"],
drg_id=oci_core_drg["test_drg"]["id"],
defined_tags={
"Operations.CostCenter": "42",
},
display_name=var["remote_peering_connection_display_name"],
freeform_tags={
"Department": "Finance",
},
peer_id=oci_core_remote_peering_connection["test_remote_peering_connection2"]["id"],
peer_region_name=var["remote_peering_connection_peer_region_name"])
```
## Import
RemotePeeringConnections can be imported using the `id`, e.g.
```sh
$ pulumi import oci:core/remotePeeringConnection:RemotePeeringConnection test_remote_peering_connection "id"
```
:param str resource_name: The name of the resource.
:param RemotePeeringConnectionArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(RemotePeeringConnectionArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
drg_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
peer_id: Optional[pulumi.Input[str]] = None,
peer_region_name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = RemotePeeringConnectionArgs.__new__(RemotePeeringConnectionArgs)
if compartment_id is None and not opts.urn:
raise TypeError("Missing required property 'compartment_id'")
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
if drg_id is None and not opts.urn:
raise TypeError("Missing required property 'drg_id'")
__props__.__dict__["drg_id"] = drg_id
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["peer_id"] = peer_id
__props__.__dict__["peer_region_name"] = peer_region_name
__props__.__dict__["is_cross_tenancy_peering"] = None
__props__.__dict__["peer_tenancy_id"] = None
__props__.__dict__["peering_status"] = None
__props__.__dict__["state"] = None
__props__.__dict__["time_created"] = None
super(RemotePeeringConnection, __self__).__init__(
'oci:core/remotePeeringConnection:RemotePeeringConnection',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
drg_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
is_cross_tenancy_peering: Optional[pulumi.Input[bool]] = None,
peer_id: Optional[pulumi.Input[str]] = None,
peer_region_name: Optional[pulumi.Input[str]] = None,
peer_tenancy_id: Optional[pulumi.Input[str]] = None,
peering_status: Optional[pulumi.Input[str]] = None,
state: Optional[pulumi.Input[str]] = None,
time_created: Optional[pulumi.Input[str]] = None) -> 'RemotePeeringConnection':
"""
Get an existing RemotePeeringConnection resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compartment_id: (Updatable) The OCID of the compartment to contain the RPC.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
:param pulumi.Input[str] display_name: (Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
:param pulumi.Input[str] drg_id: The OCID of the DRG the RPC belongs to.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[bool] is_cross_tenancy_peering: Whether the VCN at the other end of the peering is in a different tenancy. Example: `false`
:param pulumi.Input[str] peer_id: The OCID of the RPC you want to peer with.
:param pulumi.Input[str] peer_region_name: The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
:param pulumi.Input[str] peer_tenancy_id: If this RPC is peered, this value is the OCID of the other RPC's tenancy.
:param pulumi.Input[str] peering_status: Whether the RPC is peered with another RPC. `NEW` means the RPC has not yet been peered. `PENDING` means the peering is being established. `REVOKED` means the RPC at the other end of the peering has been deleted.
:param pulumi.Input[str] state: The RPC's current lifecycle state.
:param pulumi.Input[str] time_created: The date and time the RPC was created, in the format defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2016-08-25T21:10:29.600Z`
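A hedged lookup sketch (the resource name and the OCID below are placeholder assumptions, not values defined by this provider):

```python
import pulumi_oci as oci

existing_rpc = oci.core.RemotePeeringConnection.get("existingRemotePeeringConnection",
    id="ocid1.remotepeeringconnection.oc1..exampleuniqueID")
```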
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _RemotePeeringConnectionState.__new__(_RemotePeeringConnectionState)
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
__props__.__dict__["drg_id"] = drg_id
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["is_cross_tenancy_peering"] = is_cross_tenancy_peering
__props__.__dict__["peer_id"] = peer_id
__props__.__dict__["peer_region_name"] = peer_region_name
__props__.__dict__["peer_tenancy_id"] = peer_tenancy_id
__props__.__dict__["peering_status"] = peering_status
__props__.__dict__["state"] = state
__props__.__dict__["time_created"] = time_created
return RemotePeeringConnection(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Output[str]:
"""
(Updatable) The OCID of the compartment to contain the RPC.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
(Updatable) A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="drgId")
def drg_id(self) -> pulumi.Output[str]:
"""
The OCID of the DRG the RPC belongs to.
"""
return pulumi.get(self, "drg_id")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter(name="isCrossTenancyPeering")
def is_cross_tenancy_peering(self) -> pulumi.Output[bool]:
"""
Whether the VCN at the other end of the peering is in a different tenancy. Example: `false`
"""
return pulumi.get(self, "is_cross_tenancy_peering")
@property
@pulumi.getter(name="peerId")
def peer_id(self) -> pulumi.Output[str]:
"""
The OCID of the RPC you want to peer with.
"""
return pulumi.get(self, "peer_id")
@property
@pulumi.getter(name="peerRegionName")
def peer_region_name(self) -> pulumi.Output[str]:
"""
The name of the region that contains the RPC you want to peer with. Example: `us-ashburn-1`
"""
return pulumi.get(self, "peer_region_name")
@property
@pulumi.getter(name="peerTenancyId")
def peer_tenancy_id(self) -> pulumi.Output[str]:
"""
If this RPC is peered, this value is the OCID of the other RPC's tenancy.
"""
return pulumi.get(self, "peer_tenancy_id")
@property
@pulumi.getter(name="peeringStatus")
def peering_status(self) -> pulumi.Output[str]:
"""
Whether the RPC is peered with another RPC. `NEW` means the RPC has not yet been peered. `PENDING` means the peering is being established. `REVOKED` means the RPC at the other end of the peering has been deleted.
"""
return pulumi.get(self, "peering_status")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
The RPC's current lifecycle state.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> pulumi.Output[str]:
"""
The date and time the RPC was created, in the format defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2016-08-25T21:10:29.600Z`
"""
return pulumi.get(self, "time_created")
| 50.424051 | 347 | 0.66355 | 4,029 | 31,868 | 5.036982 | 0.063539 | 0.067754 | 0.062777 | 0.059624 | 0.892234 | 0.869567 | 0.85513 | 0.842761 | 0.832857 | 0.798463 | 0 | 0.005203 | 0.227972 | 31,868 | 631 | 348 | 50.503962 | 0.819656 | 0.413142 | 0 | 0.682997 | 1 | 0 | 0.107089 | 0.018393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164265 | false | 0.002882 | 0.014409 | 0 | 0.279539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
431ce94aa237308572d08d202354d4e2366789d6 | 60,956 | py | Python | gr37/hf/X310/x310_dualpol_phasecorrelation_3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | gr37/hf/X310/x310_dualpol_phasecorrelation_3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | gr37/hf/X310/x310_dualpol_phasecorrelation_3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
##################################################
# GNU Radio Python Flow Graph
# Title: Dual Polarization Phase Correlation, Start Time [UTC]: 2020-09-13T04:35:21.491545Z
# GNU Radio version: 3.7.13.4
##################################################
if __name__ == '__main__':
import ctypes
import sys
if sys.platform.startswith('linux'):
try:
x11 = ctypes.cdll.LoadLibrary('libX11.so')
x11.XInitThreads()
except:
print "Warning: failed to XInitThreads()"
from PyQt4 import Qt
from datetime import datetime as dt; import string; import math; import numpy as np
from gnuradio import analog
from gnuradio import blocks
from gnuradio import eng_notation
from gnuradio import filter
from gnuradio import gr
from gnuradio import qtgui
from gnuradio import uhd
from gnuradio.eng_option import eng_option
from gnuradio.filter import firdes
from optparse import OptionParser
import sip
import sys
import time
from gnuradio import qtgui
class x310_dualpol_phasecorrelation_3(gr.top_block, Qt.QWidget):
def __init__(self):
gr.top_block.__init__(self, "Dual Polarization Phase Correlation, Start Time [UTC]: 2020-09-13T04:35:21.491545Z")
Qt.QWidget.__init__(self)
self.setWindowTitle("Dual Polarization Phase Correlation, Start Time [UTC]: 2020-09-13T04:35:21.491545Z")
qtgui.util.check_set_qss()
try:
self.setWindowIcon(Qt.QIcon.fromTheme('gnuradio-grc'))
except:
pass
self.top_scroll_layout = Qt.QVBoxLayout()
self.setLayout(self.top_scroll_layout)
self.top_scroll = Qt.QScrollArea()
self.top_scroll.setFrameStyle(Qt.QFrame.NoFrame)
self.top_scroll_layout.addWidget(self.top_scroll)
self.top_scroll.setWidgetResizable(True)
self.top_widget = Qt.QWidget()
self.top_scroll.setWidget(self.top_widget)
self.top_layout = Qt.QVBoxLayout(self.top_widget)
self.top_grid_layout = Qt.QGridLayout()
self.top_layout.addLayout(self.top_grid_layout)
self.settings = Qt.QSettings("GNU Radio", "x310_dualpol_phasecorrelation_3")
self.restoreGeometry(self.settings.value("geometry").toByteArray())
##################################################
# Variables
##################################################
self.rx_freq = rx_freq = 60.31e6 - .54e3
self.offset_tune = offset_tune = -10e3
self.usrp_tune_freq = usrp_tune_freq = rx_freq+ offset_tune
self.usrp_clk_rate = usrp_clk_rate = 200e6
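# Assumed intent of the rounding below: snap the requested tune frequency to the X310's 32-bit DDC tuning-word grid (steps of usrp_clk_rate / 2**32), so lo_freq can compensate using the exact hardware DDC frequency.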
self.usrp_ddc_freq = usrp_ddc_freq = np.round(usrp_tune_freq/usrp_clk_rate* 2**32)/2**32*usrp_clk_rate
self.coarse_tune = coarse_tune = 5.35
self.ts_str = ts_str = dt.strftime(dt.utcnow(), "%Y-%m-%dT%H:%M:%S.%fZ")
self.lo_freq = lo_freq = usrp_ddc_freq +coarse_tune - offset_tune
self.title_str = title_str = "Dual Polarization Phase Correlation, Start Time [UTC]: {:s}".format(ts_str)
self.samp_rate = samp_rate = 500e3
self.pll_lbw = pll_lbw = 200
self.pll_freq = pll_freq = 100
self.phase_delta_avg = phase_delta_avg = 100000
self.lpf_trans = lpf_trans = 100
self.lpf_cutoff = lpf_cutoff = 20
self.lo_freq_label = lo_freq_label = "{:9f}".format(lo_freq)
self.filter_taps = filter_taps = firdes.low_pass(1.0, 2.5,0.1,0.02,firdes.WIN_HAMMING)
self.decim2 = decim2 = 10
self.decim = decim = 10
self.c_ms = c_ms = 299792458
##################################################
# Blocks
##################################################
self.main_tab = Qt.QTabWidget()
self.main_tab_widget_0 = Qt.QWidget()
self.main_tab_layout_0 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_0)
self.main_tab_grid_layout_0 = Qt.QGridLayout()
self.main_tab_layout_0.addLayout(self.main_tab_grid_layout_0)
self.main_tab.addTab(self.main_tab_widget_0, 'Channel')
self.main_tab_widget_1 = Qt.QWidget()
self.main_tab_layout_1 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_1)
self.main_tab_grid_layout_1 = Qt.QGridLayout()
self.main_tab_layout_1.addLayout(self.main_tab_grid_layout_1)
self.main_tab.addTab(self.main_tab_widget_1, 'Amplitude Compare')
self.main_tab_widget_2 = Qt.QWidget()
self.main_tab_layout_2 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_2)
self.main_tab_grid_layout_2 = Qt.QGridLayout()
self.main_tab_layout_2.addLayout(self.main_tab_grid_layout_2)
self.main_tab.addTab(self.main_tab_widget_2, 'Phase Compare')
self.main_tab_widget_3 = Qt.QWidget()
self.main_tab_layout_3 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_3)
self.main_tab_grid_layout_3 = Qt.QGridLayout()
self.main_tab_layout_3.addLayout(self.main_tab_grid_layout_3)
self.main_tab.addTab(self.main_tab_widget_3, 'Phase Compare PLL')
self.main_tab_widget_4 = Qt.QWidget()
self.main_tab_layout_4 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_4)
self.main_tab_grid_layout_4 = Qt.QGridLayout()
self.main_tab_layout_4.addLayout(self.main_tab_grid_layout_4)
self.main_tab.addTab(self.main_tab_widget_4, 'Constellations')
self.main_tab_widget_5 = Qt.QWidget()
self.main_tab_layout_5 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_5)
self.main_tab_grid_layout_5 = Qt.QGridLayout()
self.main_tab_layout_5.addLayout(self.main_tab_grid_layout_5)
self.main_tab.addTab(self.main_tab_widget_5, 'Phase Compare')
self.top_grid_layout.addWidget(self.main_tab, 1, 0, 1, 1)
for r in range(1, 2):
self.top_grid_layout.setRowStretch(r, 1)
for c in range(0, 1):
self.top_grid_layout.setColumnStretch(c, 1)
self._samp_rate_tool_bar = Qt.QToolBar(self)
self._samp_rate_tool_bar.addWidget(Qt.QLabel('SAMP_RATE'+": "))
self._samp_rate_line_edit = Qt.QLineEdit(str(self.samp_rate))
self._samp_rate_tool_bar.addWidget(self._samp_rate_line_edit)
self._samp_rate_line_edit.returnPressed.connect(
lambda: self.set_samp_rate(eng_notation.str_to_num(str(self._samp_rate_line_edit.text().toAscii()))))
self.main_tab_grid_layout_0.addWidget(self._samp_rate_tool_bar, 8, 0, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self._rx_freq_tool_bar = Qt.QToolBar(self)
self._rx_freq_tool_bar.addWidget(Qt.QLabel('FREQ'+": "))
self._rx_freq_line_edit = Qt.QLineEdit(str(self.rx_freq))
self._rx_freq_tool_bar.addWidget(self._rx_freq_line_edit)
self._rx_freq_line_edit.returnPressed.connect(
lambda: self.set_rx_freq(eng_notation.str_to_num(str(self._rx_freq_line_edit.text().toAscii()))))
self.main_tab_grid_layout_0.addWidget(self._rx_freq_tool_bar, 8, 1, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self._pll_lbw_tool_bar = Qt.QToolBar(self)
self._pll_lbw_tool_bar.addWidget(Qt.QLabel("pll_lbw"+": "))
self._pll_lbw_line_edit = Qt.QLineEdit(str(self.pll_lbw))
self._pll_lbw_tool_bar.addWidget(self._pll_lbw_line_edit)
self._pll_lbw_line_edit.returnPressed.connect(
lambda: self.set_pll_lbw(eng_notation.str_to_num(str(self._pll_lbw_line_edit.text().toAscii()))))
self.main_tab_grid_layout_3.addWidget(self._pll_lbw_tool_bar, 8, 1, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self._pll_freq_tool_bar = Qt.QToolBar(self)
self._pll_freq_tool_bar.addWidget(Qt.QLabel("pll_freq"+": "))
self._pll_freq_line_edit = Qt.QLineEdit(str(self.pll_freq))
self._pll_freq_tool_bar.addWidget(self._pll_freq_line_edit)
self._pll_freq_line_edit.returnPressed.connect(
lambda: self.set_pll_freq(eng_notation.str_to_num(str(self._pll_freq_line_edit.text().toAscii()))))
self.main_tab_grid_layout_3.addWidget(self._pll_freq_tool_bar, 8, 0, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self._phase_delta_avg_tool_bar = Qt.QToolBar(self)
self._phase_delta_avg_tool_bar.addWidget(Qt.QLabel("phase_delta_avg"+": "))
self._phase_delta_avg_line_edit = Qt.QLineEdit(str(self.phase_delta_avg))
self._phase_delta_avg_tool_bar.addWidget(self._phase_delta_avg_line_edit)
self._phase_delta_avg_line_edit.returnPressed.connect(
lambda: self.set_phase_delta_avg(eng_notation.str_to_num(str(self._phase_delta_avg_line_edit.text().toAscii()))))
self.main_tab_grid_layout_2.addWidget(self._phase_delta_avg_tool_bar, 5, 1, 1, 1)
for r in range(5, 6):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self._lpf_trans_tool_bar = Qt.QToolBar(self)
self._lpf_trans_tool_bar.addWidget(Qt.QLabel('LPF Transition'+": "))
self._lpf_trans_line_edit = Qt.QLineEdit(str(self.lpf_trans))
self._lpf_trans_tool_bar.addWidget(self._lpf_trans_line_edit)
self._lpf_trans_line_edit.returnPressed.connect(
lambda: self.set_lpf_trans(eng_notation.str_to_num(str(self._lpf_trans_line_edit.text().toAscii()))))
self.main_tab_grid_layout_0.addWidget(self._lpf_trans_tool_bar, 8, 3, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(3, 4):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self._lpf_cutoff_tool_bar = Qt.QToolBar(self)
self._lpf_cutoff_tool_bar.addWidget(Qt.QLabel('LPF Cutoff'+": "))
self._lpf_cutoff_line_edit = Qt.QLineEdit(str(self.lpf_cutoff))
self._lpf_cutoff_tool_bar.addWidget(self._lpf_cutoff_line_edit)
self._lpf_cutoff_line_edit.returnPressed.connect(
lambda: self.set_lpf_cutoff(eng_notation.str_to_num(str(self._lpf_cutoff_line_edit.text().toAscii()))))
self.main_tab_grid_layout_0.addWidget(self._lpf_cutoff_tool_bar, 8, 2, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(2, 3):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.uhd_usrp_source_1 = uhd.usrp_source(
",".join(("addr=192.168.10.2", "")),
uhd.stream_args(
cpu_format="fc32",
channels=range(2),
),
)
self.uhd_usrp_source_1.set_clock_source('gpsdo', 0)
self.uhd_usrp_source_1.set_time_source('gpsdo', 0)
self.uhd_usrp_source_1.set_subdev_spec('A:AB B:AB', 0)
self.uhd_usrp_source_1.set_samp_rate(samp_rate)
self.uhd_usrp_source_1.set_time_unknown_pps(uhd.time_spec())
self.uhd_usrp_source_1.set_center_freq(uhd.tune_request(usrp_tune_freq), 0)
self.uhd_usrp_source_1.set_gain(0, 0)
self.uhd_usrp_source_1.set_antenna('A', 0)
self.uhd_usrp_source_1.set_auto_dc_offset(True, 0)
self.uhd_usrp_source_1.set_auto_iq_balance(True, 0)
self.uhd_usrp_source_1.set_center_freq(uhd.tune_request(usrp_tune_freq), 1)
self.uhd_usrp_source_1.set_gain(0, 1)
self.uhd_usrp_source_1.set_antenna('A', 1)
self.uhd_usrp_source_1.set_auto_dc_offset(True, 1)
self.uhd_usrp_source_1.set_auto_iq_balance(True, 1)
self.qtgui_waterfall_sink_x_0_0 = qtgui.waterfall_sink_c(
2048/4, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
rx_freq, #fc
samp_rate / decim / decim2, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0_0.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0_0.enable_grid(False)
self.qtgui_waterfall_sink_x_0_0.enable_axis_labels(True)
if not True:
self.qtgui_waterfall_sink_x_0_0.disable_legend()
if "complex" == "float" or "complex" == "msg_float":
self.qtgui_waterfall_sink_x_0_0.set_plot_pos_half(not True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0_0.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0_0.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0_0.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0_0.set_intensity_range(-140, -40)
self._qtgui_waterfall_sink_x_0_0_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_0_win, 4, 4, 4, 4)
for r in range(4, 8):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(4, 8):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_waterfall_sink_x_0 = qtgui.waterfall_sink_c(
2048/4, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
rx_freq, #fc
samp_rate / decim / decim2, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0.enable_grid(False)
self.qtgui_waterfall_sink_x_0.enable_axis_labels(True)
if not True:
self.qtgui_waterfall_sink_x_0.disable_legend()
if "complex" == "float" or "complex" == "msg_float":
self.qtgui_waterfall_sink_x_0.set_plot_pos_half(not True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0.set_intensity_range(-140, -40)
self._qtgui_waterfall_sink_x_0_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_win, 4, 0, 4, 4)
for r in range(4, 8):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(0, 4):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_0_0_0 = qtgui.time_sink_f(
4096, #size
samp_rate / decim / decim2, #samp_rate
"Phase", #name
3 #number of inputs
)
self.qtgui_time_sink_x_0_0_0_0.set_update_time(0.0010)
self.qtgui_time_sink_x_0_0_0_0.set_y_axis(-1, 1)
self.qtgui_time_sink_x_0_0_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_0_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, 0, "")
self.qtgui_time_sink_x_0_0_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_0_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_0_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_0_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_0_0_0.enable_stem_plot(False)
if not True:
self.qtgui_time_sink_x_0_0_0_0.disable_legend()
labels = ['N/S', 'E/W', 'Delta', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_0_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_0_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_0_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_0_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_0_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_0_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_0_0_0_win, 0, 0, 2, 4)
for r in range(0, 2):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(0, 4):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_0_0 = qtgui.time_sink_f(
4096, #size
samp_rate / decim / decim2, #samp_rate
"Phase", #name
3 #number of inputs
)
self.qtgui_time_sink_x_0_0_0.set_update_time(0.0010)
self.qtgui_time_sink_x_0_0_0.set_y_axis(-1, 1)
self.qtgui_time_sink_x_0_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, 0, "")
self.qtgui_time_sink_x_0_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_0_0.enable_stem_plot(False)
if not True:
self.qtgui_time_sink_x_0_0_0.disable_legend()
labels = ['N/S', 'E/W', 'Delta', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_2.addWidget(self._qtgui_time_sink_x_0_0_0_win, 0, 0, 2, 2)
for r in range(0, 2):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(0, 2):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_0 = qtgui.time_sink_f(
4096*2, #size
samp_rate / decim, #samp_rate
"Amplitude", #name
3 #number of inputs
)
self.qtgui_time_sink_x_0_0.set_update_time(0.0010)
self.qtgui_time_sink_x_0_0.set_y_axis(-1, 1)
self.qtgui_time_sink_x_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, 0, "")
self.qtgui_time_sink_x_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_0.enable_stem_plot(False)
if not True:
self.qtgui_time_sink_x_0_0.disable_legend()
labels = ['N/S', 'E/W', 'Delta', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_1.addWidget(self._qtgui_time_sink_x_0_0_win, 0, 0, 2, 4)
for r in range(0, 2):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(0, 4):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self.qtgui_histogram_sink_x_0_1_1 = qtgui.histogram_sink_f(
20,
360,
-360,
360,
"",
1
)
self.qtgui_histogram_sink_x_0_1_1.set_update_time(0.010)
self.qtgui_histogram_sink_x_0_1_1.enable_autoscale(True)
self.qtgui_histogram_sink_x_0_1_1.enable_accumulate(True)
self.qtgui_histogram_sink_x_0_1_1.enable_grid(False)
self.qtgui_histogram_sink_x_0_1_1.enable_axis_labels(True)
if not True:
self.qtgui_histogram_sink_x_0_1_1.disable_legend()
labels = ['Phase Delta [deg]', 'Corr Mag', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_histogram_sink_x_0_1_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_histogram_sink_x_0_1_1.set_line_label(i, labels[i])
self.qtgui_histogram_sink_x_0_1_1.set_line_width(i, widths[i])
self.qtgui_histogram_sink_x_0_1_1.set_line_color(i, colors[i])
self.qtgui_histogram_sink_x_0_1_1.set_line_style(i, styles[i])
self.qtgui_histogram_sink_x_0_1_1.set_line_marker(i, markers[i])
self.qtgui_histogram_sink_x_0_1_1.set_line_alpha(i, alphas[i])
self._qtgui_histogram_sink_x_0_1_1_win = sip.wrapinstance(self.qtgui_histogram_sink_x_0_1_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_histogram_sink_x_0_1_1_win, 2, 2, 2, 2)
for r in range(2, 4):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(2, 4):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_histogram_sink_x_0_1_0_0 = qtgui.histogram_sink_f(
200,
360,
-360,
360,
"",
2
)
self.qtgui_histogram_sink_x_0_1_0_0.set_update_time(0.010)
self.qtgui_histogram_sink_x_0_1_0_0.enable_autoscale(True)
self.qtgui_histogram_sink_x_0_1_0_0.enable_accumulate(False)
self.qtgui_histogram_sink_x_0_1_0_0.enable_grid(False)
self.qtgui_histogram_sink_x_0_1_0_0.enable_axis_labels(True)
if not True:
self.qtgui_histogram_sink_x_0_1_0_0.disable_legend()
labels = ['N/S Phase', 'E/W Phase', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_histogram_sink_x_0_1_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_histogram_sink_x_0_1_0_0.set_line_label(i, labels[i])
self.qtgui_histogram_sink_x_0_1_0_0.set_line_width(i, widths[i])
self.qtgui_histogram_sink_x_0_1_0_0.set_line_color(i, colors[i])
self.qtgui_histogram_sink_x_0_1_0_0.set_line_style(i, styles[i])
self.qtgui_histogram_sink_x_0_1_0_0.set_line_marker(i, markers[i])
self.qtgui_histogram_sink_x_0_1_0_0.set_line_alpha(i, alphas[i])
self._qtgui_histogram_sink_x_0_1_0_0_win = sip.wrapinstance(self.qtgui_histogram_sink_x_0_1_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_histogram_sink_x_0_1_0_0_win, 2, 0, 2, 2)
for r in range(2, 4):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(0, 2):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_histogram_sink_x_0_1_0 = qtgui.histogram_sink_f(
200,
2000,
-360,
360,
"",
2
)
self.qtgui_histogram_sink_x_0_1_0.set_update_time(0.010)
self.qtgui_histogram_sink_x_0_1_0.enable_autoscale(True)
self.qtgui_histogram_sink_x_0_1_0.enable_accumulate(False)
self.qtgui_histogram_sink_x_0_1_0.enable_grid(False)
self.qtgui_histogram_sink_x_0_1_0.enable_axis_labels(True)
if not True:
self.qtgui_histogram_sink_x_0_1_0.disable_legend()
labels = ['N/S Phase', 'E/W Phase', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_histogram_sink_x_0_1_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_histogram_sink_x_0_1_0.set_line_label(i, labels[i])
self.qtgui_histogram_sink_x_0_1_0.set_line_width(i, widths[i])
self.qtgui_histogram_sink_x_0_1_0.set_line_color(i, colors[i])
self.qtgui_histogram_sink_x_0_1_0.set_line_style(i, styles[i])
self.qtgui_histogram_sink_x_0_1_0.set_line_marker(i, markers[i])
self.qtgui_histogram_sink_x_0_1_0.set_line_alpha(i, alphas[i])
self._qtgui_histogram_sink_x_0_1_0_win = sip.wrapinstance(self.qtgui_histogram_sink_x_0_1_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_2.addWidget(self._qtgui_histogram_sink_x_0_1_0_win, 2, 0, 2, 2)
for r in range(2, 4):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(0, 2):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.qtgui_histogram_sink_x_0_1 = qtgui.histogram_sink_f(
20,
360,
-360,
360,
"",
1
)
self.qtgui_histogram_sink_x_0_1.set_update_time(0.010)
self.qtgui_histogram_sink_x_0_1.enable_autoscale(True)
self.qtgui_histogram_sink_x_0_1.enable_accumulate(True)
self.qtgui_histogram_sink_x_0_1.enable_grid(False)
self.qtgui_histogram_sink_x_0_1.enable_axis_labels(True)
if not True:
self.qtgui_histogram_sink_x_0_1.disable_legend()
labels = ['Phase Delta [deg]', 'Corr Mag', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_histogram_sink_x_0_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_histogram_sink_x_0_1.set_line_label(i, labels[i])
self.qtgui_histogram_sink_x_0_1.set_line_width(i, widths[i])
self.qtgui_histogram_sink_x_0_1.set_line_color(i, colors[i])
self.qtgui_histogram_sink_x_0_1.set_line_style(i, styles[i])
self.qtgui_histogram_sink_x_0_1.set_line_marker(i, markers[i])
self.qtgui_histogram_sink_x_0_1.set_line_alpha(i, alphas[i])
self._qtgui_histogram_sink_x_0_1_win = sip.wrapinstance(self.qtgui_histogram_sink_x_0_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_2.addWidget(self._qtgui_histogram_sink_x_0_1_win, 2, 2, 2, 2)
for r in range(2, 4):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(2, 4):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.qtgui_freq_sink_x_0_0 = qtgui.freq_sink_c(
2048/4, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
rx_freq*0, #fc
samp_rate / decim / decim2, #bw
"East/West", #name
1 #number of inputs
)
self.qtgui_freq_sink_x_0_0.set_update_time(0.010)
self.qtgui_freq_sink_x_0_0.set_y_axis(-140, 0)
self.qtgui_freq_sink_x_0_0.set_y_label('Relative Gain', 'dB')
self.qtgui_freq_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
self.qtgui_freq_sink_x_0_0.enable_autoscale(False)
self.qtgui_freq_sink_x_0_0.enable_grid(True)
self.qtgui_freq_sink_x_0_0.set_fft_average(0.2)
self.qtgui_freq_sink_x_0_0.enable_axis_labels(True)
self.qtgui_freq_sink_x_0_0.enable_control_panel(False)
if not False:
self.qtgui_freq_sink_x_0_0.disable_legend()
if "complex" == "float" or "complex" == "msg_float":
self.qtgui_freq_sink_x_0_0.set_plot_pos_half(not True)
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_freq_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_freq_sink_x_0_0.set_line_label(i, labels[i])
self.qtgui_freq_sink_x_0_0.set_line_width(i, widths[i])
self.qtgui_freq_sink_x_0_0.set_line_color(i, colors[i])
self.qtgui_freq_sink_x_0_0.set_line_alpha(i, alphas[i])
self._qtgui_freq_sink_x_0_0_win = sip.wrapinstance(self.qtgui_freq_sink_x_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_0_win, 0, 4, 4, 4)
for r in range(0, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(4, 8):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_freq_sink_x_0 = qtgui.freq_sink_c(
2048/4, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
rx_freq*0, #fc
samp_rate / decim / decim2, #bw
"North/South", #name
1 #number of inputs
)
self.qtgui_freq_sink_x_0.set_update_time(0.010)
self.qtgui_freq_sink_x_0.set_y_axis(-140, 0)
self.qtgui_freq_sink_x_0.set_y_label('Relative Gain', 'dB')
self.qtgui_freq_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
self.qtgui_freq_sink_x_0.enable_autoscale(False)
self.qtgui_freq_sink_x_0.enable_grid(True)
self.qtgui_freq_sink_x_0.set_fft_average(0.2)
self.qtgui_freq_sink_x_0.enable_axis_labels(True)
self.qtgui_freq_sink_x_0.enable_control_panel(False)
if not False:
self.qtgui_freq_sink_x_0.disable_legend()
if "complex" == "float" or "complex" == "msg_float":
self.qtgui_freq_sink_x_0.set_plot_pos_half(not True)
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_freq_sink_x_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_freq_sink_x_0.set_line_label(i, labels[i])
self.qtgui_freq_sink_x_0.set_line_width(i, widths[i])
self.qtgui_freq_sink_x_0.set_line_color(i, colors[i])
self.qtgui_freq_sink_x_0.set_line_alpha(i, alphas[i])
self._qtgui_freq_sink_x_0_win = sip.wrapinstance(self.qtgui_freq_sink_x_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_win, 0, 0, 4, 4)
for r in range(0, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(0, 4):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
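        # Editor note (not GRC-generated): the two frequency sinks above display the
        # East/West and North/South channels after decimation. Both are centred at
        # 0 Hz (rx_freq*0) and span samp_rate / decim / decim2; e.g. with the purely
        # illustrative values samp_rate = 2.5e6, decim = 5, decim2 = 10 the displayed
        # bandwidth would be 2.5e6 / 5 / 10 = 50 kHz.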
self.qtgui_const_sink_x_0_0 = qtgui.const_sink_c(
1024, #size
"Before PLL", #name
3 #number of inputs
)
self.qtgui_const_sink_x_0_0.set_update_time(0.010)
self.qtgui_const_sink_x_0_0.set_y_axis(-2, 2)
self.qtgui_const_sink_x_0_0.set_x_axis(-2, 2)
self.qtgui_const_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, "")
self.qtgui_const_sink_x_0_0.enable_autoscale(False)
self.qtgui_const_sink_x_0_0.enable_grid(True)
self.qtgui_const_sink_x_0_0.enable_axis_labels(True)
if not True:
self.qtgui_const_sink_x_0_0.disable_legend()
labels = ['N/S', 'E/W', 'N/S * E/W', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "magenta", "red", "red",
"red", "red", "red", "red", "red"]
styles = [2, 2, 0, 0, 0,
0, 0, 0, 0, 0]
markers = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_const_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_const_sink_x_0_0.set_line_label(i, labels[i])
self.qtgui_const_sink_x_0_0.set_line_width(i, widths[i])
self.qtgui_const_sink_x_0_0.set_line_color(i, colors[i])
self.qtgui_const_sink_x_0_0.set_line_style(i, styles[i])
self.qtgui_const_sink_x_0_0.set_line_marker(i, markers[i])
self.qtgui_const_sink_x_0_0.set_line_alpha(i, alphas[i])
self._qtgui_const_sink_x_0_0_win = sip.wrapinstance(self.qtgui_const_sink_x_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_4.addWidget(self._qtgui_const_sink_x_0_0_win, 0, 0, 2, 2)
for r in range(0, 2):
self.main_tab_grid_layout_4.setRowStretch(r, 1)
for c in range(0, 2):
self.main_tab_grid_layout_4.setColumnStretch(c, 1)
self.qtgui_const_sink_x_0 = qtgui.const_sink_c(
1024, #size
"After PLL", #name
3 #number of inputs
)
self.qtgui_const_sink_x_0.set_update_time(0.010)
self.qtgui_const_sink_x_0.set_y_axis(-2, 2)
self.qtgui_const_sink_x_0.set_x_axis(-2, 2)
self.qtgui_const_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, "")
self.qtgui_const_sink_x_0.enable_autoscale(False)
self.qtgui_const_sink_x_0.enable_grid(True)
self.qtgui_const_sink_x_0.enable_axis_labels(True)
if not True:
self.qtgui_const_sink_x_0.disable_legend()
labels = ['N/S', 'E/W', 'N/S * E/W', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "magenta", "red", "red",
"red", "red", "red", "red", "red"]
styles = [2, 2, 0, 0, 0,
0, 0, 0, 0, 0]
markers = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_const_sink_x_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_const_sink_x_0.set_line_label(i, labels[i])
self.qtgui_const_sink_x_0.set_line_width(i, widths[i])
self.qtgui_const_sink_x_0.set_line_color(i, colors[i])
self.qtgui_const_sink_x_0.set_line_style(i, styles[i])
self.qtgui_const_sink_x_0.set_line_marker(i, markers[i])
self.qtgui_const_sink_x_0.set_line_alpha(i, alphas[i])
self._qtgui_const_sink_x_0_win = sip.wrapinstance(self.qtgui_const_sink_x_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_4.addWidget(self._qtgui_const_sink_x_0_win, 0, 2, 2, 2)
for r in range(0, 2):
self.main_tab_grid_layout_4.setRowStretch(r, 1)
for c in range(2, 4):
self.main_tab_grid_layout_4.setColumnStretch(c, 1)
self.low_pass_filter_0_0 = filter.fir_filter_ccf(decim2, firdes.low_pass(
1, samp_rate / decim, lpf_cutoff, lpf_trans, firdes.WIN_HAMMING, 6.76))
self.low_pass_filter_0 = filter.fir_filter_ccf(decim2, firdes.low_pass(
1, samp_rate / decim, lpf_cutoff, lpf_trans, firdes.WIN_HAMMING, 6.76))
self._lo_freq_label_tool_bar = Qt.QToolBar(self)
if None:
self._lo_freq_label_formatter = None
else:
self._lo_freq_label_formatter = lambda x: str(x)
self._lo_freq_label_tool_bar.addWidget(Qt.QLabel('LO Freq [Hz]'+": "))
self._lo_freq_label_label = Qt.QLabel(str(self._lo_freq_label_formatter(self.lo_freq_label)))
self._lo_freq_label_tool_bar.addWidget(self._lo_freq_label_label)
self.top_grid_layout.addWidget(self._lo_freq_label_tool_bar)
self.freq_xlating_fir_filter_xxx_0_0 = filter.freq_xlating_fir_filter_ccc(decim, (filter_taps), lo_freq - usrp_ddc_freq, samp_rate)
self.freq_xlating_fir_filter_xxx_0 = filter.freq_xlating_fir_filter_ccc(decim, (filter_taps), lo_freq - usrp_ddc_freq, samp_rate)
self._coarse_tune_tool_bar = Qt.QToolBar(self)
self._coarse_tune_tool_bar.addWidget(Qt.QLabel("coarse_tune"+": "))
self._coarse_tune_line_edit = Qt.QLineEdit(str(self.coarse_tune))
self._coarse_tune_tool_bar.addWidget(self._coarse_tune_line_edit)
self._coarse_tune_line_edit.returnPressed.connect(
lambda: self.set_coarse_tune(eng_notation.str_to_num(str(self._coarse_tune_line_edit.text().toAscii()))))
self.top_grid_layout.addWidget(self._coarse_tune_tool_bar)
self.blocks_sub_xx_0_0_0 = blocks.sub_ff(1)
self.blocks_sub_xx_0_0 = blocks.sub_ff(1)
self.blocks_sub_xx_0 = blocks.sub_ff(1)
self.blocks_null_sink_2 = blocks.null_sink(gr.sizeof_float*1)
self.blocks_null_sink_1 = blocks.null_sink(gr.sizeof_float*1)
self.blocks_multiply_xx_0_0 = blocks.multiply_vcc(1)
self.blocks_multiply_xx_0 = blocks.multiply_vcc(1)
self.blocks_multiply_const_vxx_1_0_1_0_1 = blocks.multiply_const_vff((180.0/math.pi, ))
self.blocks_multiply_const_vxx_1_0_1_0_0_0 = blocks.multiply_const_vff((180.0/math.pi, ))
self.blocks_multiply_const_vxx_1_0_1_0_0 = blocks.multiply_const_vff((180.0/math.pi, ))
self.blocks_multiply_const_vxx_1_0_1_0 = blocks.multiply_const_vff((180.0/math.pi, ))
self.blocks_moving_average_xx_0_0_1_0 = blocks.moving_average_ff(int(phase_delta_avg), 1.0/phase_delta_avg, 4000, 1)
self.blocks_moving_average_xx_0_0_1 = blocks.moving_average_ff(int(phase_delta_avg), 1.0/phase_delta_avg, 4000, 1)
self.blocks_moving_average_xx_0_0_0_0 = blocks.moving_average_ff(int(phase_delta_avg), 1.0/phase_delta_avg, 4000, 1)
self.blocks_moving_average_xx_0_0_0 = blocks.moving_average_ff(int(phase_delta_avg), 1.0/phase_delta_avg, 4000, 1)
self.blocks_moving_average_xx_0 = blocks.moving_average_ff(int(phase_delta_avg), 1.0/phase_delta_avg, 4000, 1)
self.blocks_complex_to_magphase_1_0 = blocks.complex_to_magphase(1)
self.blocks_complex_to_magphase_0_1 = blocks.complex_to_magphase(1)
self.blocks_complex_to_magphase_0_0 = blocks.complex_to_magphase(1)
self.blocks_complex_to_magphase_0 = blocks.complex_to_magphase(1)
self.blocks_abs_xx_0_0_0 = blocks.abs_ff(1)
self.blocks_abs_xx_0_0 = blocks.abs_ff(1)
self.blocks_abs_xx_0 = blocks.abs_ff(1)
self.analog_pll_refout_cc_0_0 = analog.pll_refout_cc(math.pi/pll_lbw, math.pi/pll_freq, -math.pi/pll_freq)
self.analog_pll_refout_cc_0 = analog.pll_refout_cc(math.pi/pll_lbw, math.pi/pll_freq, -math.pi/pll_freq)
self.analog_agc2_xx_1 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
self.analog_agc2_xx_1.set_max_gain(65536)
self.analog_agc2_xx_0 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
self.analog_agc2_xx_0.set_max_gain(65536)
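        # Editor note (not GRC-generated): summary of the per-polarization processing
        # chain instantiated above, as wired up in the Connections section below --
        # USRP channel -> frequency-translating FIR filter (decim) -> AGC ->
        # low-pass FIR filter (decim2) -> PLL refout -> complex_to_magphase, with the
        # phase output scaled by 180/pi to degrees and smoothed by a moving average
        # of length phase_delta_avg.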
##################################################
# Connections
##################################################
self.connect((self.analog_agc2_xx_0, 0), (self.low_pass_filter_0_0, 0))
self.connect((self.analog_agc2_xx_1, 0), (self.low_pass_filter_0, 0))
self.connect((self.analog_pll_refout_cc_0, 0), (self.blocks_complex_to_magphase_0_0, 0))
self.connect((self.analog_pll_refout_cc_0, 0), (self.blocks_multiply_xx_0_0, 0))
self.connect((self.analog_pll_refout_cc_0, 0), (self.qtgui_const_sink_x_0, 0))
self.connect((self.analog_pll_refout_cc_0_0, 0), (self.blocks_complex_to_magphase_1_0, 0))
self.connect((self.analog_pll_refout_cc_0_0, 0), (self.blocks_multiply_xx_0_0, 1))
self.connect((self.analog_pll_refout_cc_0_0, 0), (self.qtgui_const_sink_x_0, 1))
self.connect((self.blocks_abs_xx_0, 0), (self.qtgui_time_sink_x_0_0, 2))
self.connect((self.blocks_abs_xx_0_0, 0), (self.qtgui_histogram_sink_x_0_1, 0))
self.connect((self.blocks_abs_xx_0_0, 0), (self.qtgui_time_sink_x_0_0_0, 2))
self.connect((self.blocks_abs_xx_0_0_0, 0), (self.qtgui_histogram_sink_x_0_1_1, 0))
self.connect((self.blocks_complex_to_magphase_0, 1), (self.blocks_multiply_const_vxx_1_0_1_0, 0))
self.connect((self.blocks_complex_to_magphase_0, 0), (self.blocks_sub_xx_0, 0))
self.connect((self.blocks_complex_to_magphase_0, 0), (self.qtgui_time_sink_x_0_0, 0))
self.connect((self.blocks_complex_to_magphase_0_0, 1), (self.blocks_multiply_const_vxx_1_0_1_0_1, 0))
self.connect((self.blocks_complex_to_magphase_0_0, 0), (self.blocks_null_sink_1, 0))
self.connect((self.blocks_complex_to_magphase_0_1, 1), (self.blocks_multiply_const_vxx_1_0_1_0_0, 0))
self.connect((self.blocks_complex_to_magphase_0_1, 0), (self.blocks_sub_xx_0, 1))
self.connect((self.blocks_complex_to_magphase_0_1, 0), (self.qtgui_time_sink_x_0_0, 1))
self.connect((self.blocks_complex_to_magphase_1_0, 1), (self.blocks_multiply_const_vxx_1_0_1_0_0_0, 0))
self.connect((self.blocks_complex_to_magphase_1_0, 0), (self.blocks_null_sink_2, 0))
self.connect((self.blocks_moving_average_xx_0, 0), (self.blocks_abs_xx_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0, 0), (self.blocks_sub_xx_0_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0, 0), (self.qtgui_histogram_sink_x_0_1_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0, 0), (self.qtgui_time_sink_x_0_0_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0_0, 0), (self.blocks_sub_xx_0_0_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0_0, 0), (self.qtgui_histogram_sink_x_0_1_0_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_0_0, 0), (self.qtgui_time_sink_x_0_0_0_0, 0))
self.connect((self.blocks_moving_average_xx_0_0_1, 0), (self.blocks_sub_xx_0_0, 1))
self.connect((self.blocks_moving_average_xx_0_0_1, 0), (self.qtgui_histogram_sink_x_0_1_0, 1))
self.connect((self.blocks_moving_average_xx_0_0_1, 0), (self.qtgui_time_sink_x_0_0_0, 1))
self.connect((self.blocks_moving_average_xx_0_0_1_0, 0), (self.blocks_sub_xx_0_0_0, 1))
self.connect((self.blocks_moving_average_xx_0_0_1_0, 0), (self.qtgui_histogram_sink_x_0_1_0_0, 1))
self.connect((self.blocks_moving_average_xx_0_0_1_0, 0), (self.qtgui_time_sink_x_0_0_0_0, 1))
self.connect((self.blocks_multiply_const_vxx_1_0_1_0, 0), (self.blocks_moving_average_xx_0_0_0, 0))
self.connect((self.blocks_multiply_const_vxx_1_0_1_0_0, 0), (self.blocks_moving_average_xx_0_0_1, 0))
self.connect((self.blocks_multiply_const_vxx_1_0_1_0_0_0, 0), (self.blocks_moving_average_xx_0_0_1_0, 0))
self.connect((self.blocks_multiply_const_vxx_1_0_1_0_1, 0), (self.blocks_moving_average_xx_0_0_0_0, 0))
self.connect((self.blocks_multiply_xx_0, 0), (self.qtgui_const_sink_x_0_0, 2))
self.connect((self.blocks_multiply_xx_0_0, 0), (self.qtgui_const_sink_x_0, 2))
self.connect((self.blocks_sub_xx_0, 0), (self.blocks_moving_average_xx_0, 0))
self.connect((self.blocks_sub_xx_0_0, 0), (self.blocks_abs_xx_0_0, 0))
self.connect((self.blocks_sub_xx_0_0_0, 0), (self.blocks_abs_xx_0_0_0, 0))
self.connect((self.blocks_sub_xx_0_0_0, 0), (self.qtgui_time_sink_x_0_0_0_0, 2))
self.connect((self.freq_xlating_fir_filter_xxx_0, 0), (self.analog_agc2_xx_0, 0))
self.connect((self.freq_xlating_fir_filter_xxx_0_0, 0), (self.analog_agc2_xx_1, 0))
self.connect((self.low_pass_filter_0, 0), (self.analog_pll_refout_cc_0_0, 0))
self.connect((self.low_pass_filter_0, 0), (self.blocks_complex_to_magphase_0_1, 0))
self.connect((self.low_pass_filter_0, 0), (self.blocks_multiply_xx_0, 1))
self.connect((self.low_pass_filter_0, 0), (self.qtgui_const_sink_x_0_0, 1))
self.connect((self.low_pass_filter_0, 0), (self.qtgui_freq_sink_x_0_0, 0))
self.connect((self.low_pass_filter_0, 0), (self.qtgui_waterfall_sink_x_0_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.analog_pll_refout_cc_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.blocks_complex_to_magphase_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.blocks_multiply_xx_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.qtgui_const_sink_x_0_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.qtgui_freq_sink_x_0, 0))
self.connect((self.low_pass_filter_0_0, 0), (self.qtgui_waterfall_sink_x_0, 0))
self.connect((self.uhd_usrp_source_1, 0), (self.freq_xlating_fir_filter_xxx_0, 0))
self.connect((self.uhd_usrp_source_1, 1), (self.freq_xlating_fir_filter_xxx_0_0, 0))
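        # Editor note (as read from the connections above, not GRC-generated): USRP
        # channel 0 feeds the North/South chain (freq_xlating_fir_filter_xxx_0) and
        # channel 1 feeds the East/West chain (freq_xlating_fir_filter_xxx_0_0). The
        # smoothed phase differences in degrees are formed by blocks_sub_xx_0_0
        # (before the PLLs) and blocks_sub_xx_0_0_0 (after the PLLs) and routed to
        # the time, histogram and constellation sinks.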
def closeEvent(self, event):
self.settings = Qt.QSettings("GNU Radio", "x310_dualpol_phasecorrelation_3")
self.settings.setValue("geometry", self.saveGeometry())
event.accept()
def get_rx_freq(self):
return self.rx_freq
def set_rx_freq(self, rx_freq):
self.rx_freq = rx_freq
self.set_usrp_tune_freq(self.rx_freq+ self.offset_tune)
Qt.QMetaObject.invokeMethod(self._rx_freq_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.rx_freq)))
self.qtgui_waterfall_sink_x_0_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_waterfall_sink_x_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
def get_offset_tune(self):
return self.offset_tune
def set_offset_tune(self, offset_tune):
self.offset_tune = offset_tune
self.set_usrp_tune_freq(self.rx_freq+ self.offset_tune)
self.set_lo_freq(self.usrp_ddc_freq +self.coarse_tune - self.offset_tune)
def get_usrp_tune_freq(self):
return self.usrp_tune_freq
def set_usrp_tune_freq(self, usrp_tune_freq):
self.usrp_tune_freq = usrp_tune_freq
self.set_usrp_ddc_freq(np.round(self.usrp_tune_freq/self.usrp_clk_rate* 2**32)/2**32*self.usrp_clk_rate)
self.uhd_usrp_source_1.set_center_freq(uhd.tune_request(self.usrp_tune_freq), 0)
self.uhd_usrp_source_1.set_center_freq(uhd.tune_request(self.usrp_tune_freq), 1)
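        # Editor note (not GRC-generated): the expression above quantizes the
        # requested tune frequency to the USRP's 32-bit DDC resolution --
        # np.round(f / clk * 2**32) / 2**32 * clk snaps f to the nearest multiple of
        # usrp_clk_rate / 2**32 -- so that lo_freq can later compensate for the exact
        # hardware DDC frequency in the frequency-translating FIR filters.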
def get_usrp_clk_rate(self):
return self.usrp_clk_rate
def set_usrp_clk_rate(self, usrp_clk_rate):
self.usrp_clk_rate = usrp_clk_rate
self.set_usrp_ddc_freq(np.round(self.usrp_tune_freq/self.usrp_clk_rate* 2**32)/2**32*self.usrp_clk_rate)
def get_usrp_ddc_freq(self):
return self.usrp_ddc_freq
def set_usrp_ddc_freq(self, usrp_ddc_freq):
self.usrp_ddc_freq = usrp_ddc_freq
self.set_lo_freq(self.usrp_ddc_freq +self.coarse_tune - self.offset_tune)
self.freq_xlating_fir_filter_xxx_0_0.set_center_freq(self.lo_freq - self.usrp_ddc_freq)
self.freq_xlating_fir_filter_xxx_0.set_center_freq(self.lo_freq - self.usrp_ddc_freq)
def get_coarse_tune(self):
return self.coarse_tune
def set_coarse_tune(self, coarse_tune):
self.coarse_tune = coarse_tune
self.set_lo_freq(self.usrp_ddc_freq +self.coarse_tune - self.offset_tune)
Qt.QMetaObject.invokeMethod(self._coarse_tune_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.coarse_tune)))
def get_ts_str(self):
return self.ts_str
def set_ts_str(self, ts_str):
self.ts_str = ts_str
self.set_title_str("Dual Polarization Phase Correlation, Start Time [UTC]: {:s}".format(self.ts_str))
def get_lo_freq(self):
return self.lo_freq
def set_lo_freq(self, lo_freq):
self.lo_freq = lo_freq
self.set_lo_freq_label(self._lo_freq_label_formatter("{:9f}".format(self.lo_freq)))
self.freq_xlating_fir_filter_xxx_0_0.set_center_freq(self.lo_freq - self.usrp_ddc_freq)
self.freq_xlating_fir_filter_xxx_0.set_center_freq(self.lo_freq - self.usrp_ddc_freq)
def get_title_str(self):
return self.title_str
def set_title_str(self, title_str):
self.title_str = title_str
def get_samp_rate(self):
return self.samp_rate
def set_samp_rate(self, samp_rate):
self.samp_rate = samp_rate
Qt.QMetaObject.invokeMethod(self._samp_rate_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.samp_rate)))
self.uhd_usrp_source_1.set_samp_rate(self.samp_rate)
self.qtgui_waterfall_sink_x_0_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_waterfall_sink_x_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0.set_samp_rate(self.samp_rate / self.decim)
self.qtgui_freq_sink_x_0_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.low_pass_filter_0_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
self.low_pass_filter_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
def get_pll_lbw(self):
return self.pll_lbw
def set_pll_lbw(self, pll_lbw):
self.pll_lbw = pll_lbw
Qt.QMetaObject.invokeMethod(self._pll_lbw_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.pll_lbw)))
self.analog_pll_refout_cc_0_0.set_loop_bandwidth(math.pi/self.pll_lbw)
self.analog_pll_refout_cc_0.set_loop_bandwidth(math.pi/self.pll_lbw)
def get_pll_freq(self):
return self.pll_freq
def set_pll_freq(self, pll_freq):
self.pll_freq = pll_freq
Qt.QMetaObject.invokeMethod(self._pll_freq_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.pll_freq)))
self.analog_pll_refout_cc_0_0.set_max_freq(math.pi/self.pll_freq)
self.analog_pll_refout_cc_0_0.set_min_freq(-math.pi/self.pll_freq)
self.analog_pll_refout_cc_0.set_max_freq(math.pi/self.pll_freq)
self.analog_pll_refout_cc_0.set_min_freq(-math.pi/self.pll_freq)
def get_phase_delta_avg(self):
return self.phase_delta_avg
def set_phase_delta_avg(self, phase_delta_avg):
self.phase_delta_avg = phase_delta_avg
Qt.QMetaObject.invokeMethod(self._phase_delta_avg_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.phase_delta_avg)))
self.blocks_moving_average_xx_0_0_1_0.set_length_and_scale(int(self.phase_delta_avg), 1.0/self.phase_delta_avg)
self.blocks_moving_average_xx_0_0_1.set_length_and_scale(int(self.phase_delta_avg), 1.0/self.phase_delta_avg)
self.blocks_moving_average_xx_0_0_0_0.set_length_and_scale(int(self.phase_delta_avg), 1.0/self.phase_delta_avg)
self.blocks_moving_average_xx_0_0_0.set_length_and_scale(int(self.phase_delta_avg), 1.0/self.phase_delta_avg)
self.blocks_moving_average_xx_0.set_length_and_scale(int(self.phase_delta_avg), 1.0/self.phase_delta_avg)
def get_lpf_trans(self):
return self.lpf_trans
def set_lpf_trans(self, lpf_trans):
self.lpf_trans = lpf_trans
Qt.QMetaObject.invokeMethod(self._lpf_trans_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.lpf_trans)))
self.low_pass_filter_0_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
self.low_pass_filter_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
def get_lpf_cutoff(self):
return self.lpf_cutoff
def set_lpf_cutoff(self, lpf_cutoff):
self.lpf_cutoff = lpf_cutoff
Qt.QMetaObject.invokeMethod(self._lpf_cutoff_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.lpf_cutoff)))
self.low_pass_filter_0_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
self.low_pass_filter_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
def get_lo_freq_label(self):
return self.lo_freq_label
def set_lo_freq_label(self, lo_freq_label):
self.lo_freq_label = lo_freq_label
Qt.QMetaObject.invokeMethod(self._lo_freq_label_label, "setText", Qt.Q_ARG("QString", self.lo_freq_label))
def get_filter_taps(self):
return self.filter_taps
def set_filter_taps(self, filter_taps):
self.filter_taps = filter_taps
self.freq_xlating_fir_filter_xxx_0_0.set_taps((self.filter_taps))
self.freq_xlating_fir_filter_xxx_0.set_taps((self.filter_taps))
def get_decim2(self):
return self.decim2
def set_decim2(self, decim2):
self.decim2 = decim2
self.qtgui_waterfall_sink_x_0_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_waterfall_sink_x_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
def get_decim(self):
return self.decim
def set_decim(self, decim):
self.decim = decim
self.qtgui_waterfall_sink_x_0_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_waterfall_sink_x_0.set_frequency_range(self.rx_freq, self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0_0.set_samp_rate(self.samp_rate / self.decim / self.decim2)
self.qtgui_time_sink_x_0_0.set_samp_rate(self.samp_rate / self.decim)
self.qtgui_freq_sink_x_0_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.qtgui_freq_sink_x_0.set_frequency_range(self.rx_freq*0, self.samp_rate / self.decim / self.decim2)
self.low_pass_filter_0_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
self.low_pass_filter_0.set_taps(firdes.low_pass(1, self.samp_rate / self.decim, self.lpf_cutoff, self.lpf_trans, firdes.WIN_HAMMING, 6.76))
def get_c_ms(self):
return self.c_ms
def set_c_ms(self, c_ms):
self.c_ms = c_ms
def main(top_block_cls=x310_dualpol_phasecorrelation_3, options=None):
from distutils.version import StrictVersion
if StrictVersion(Qt.qVersion()) >= StrictVersion("4.5.0"):
style = gr.prefs().get_string('qtgui', 'style', 'raster')
Qt.QApplication.setGraphicsSystem(style)
qapp = Qt.QApplication(sys.argv)
tb = top_block_cls()
tb.start()
tb.show()
def quitting():
tb.stop()
tb.wait()
qapp.connect(qapp, Qt.SIGNAL("aboutToQuit()"), quitting)
qapp.exec_()
if __name__ == '__main__':
main()
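# Editor note (sketch, not GRC-generated): when imported rather than run, the
# flowgraph can also be driven programmatically through the generated setters,
# e.g.
#
#     tb = x310_dualpol_phasecorrelation_3()
#     tb.start()
#     tb.set_coarse_tune(25e3)        # retunes both xlating FIR filters
#     tb.set_phase_delta_avg(4000)    # changes the moving-average length
#     tb.stop(); tb.wait()
#
# The method names exist in the class above; the numeric values are
# illustrative only.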
| 52.010239 | 149 | 0.65923 | 10,093 | 60,956 | 3.547409 | 0.039433 | 0.028935 | 0.04843 | 0.019551 | 0.912384 | 0.860798 | 0.811362 | 0.769774 | 0.734276 | 0.697157 | 0 | 0.056222 | 0.216829 | 60,956 | 1,171 | 150 | 52.054654 | 0.693772 | 0.008285 | 0 | 0.358454 | 0 | 0.001932 | 0.032666 | 0.002283 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.025121 | 0.018357 | null | null | 0.000966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4a32261e2147111fdbdba4874d44a016b5701e9b | 11,645 | py | Python | lang/python/github/com/metaprov/modelaapi/services/feature/v1/feature_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 5 | 2022-02-18T03:40:10.000Z | 2022-03-01T16:11:24.000Z | lang/python/github/com/metaprov/modelaapi/services/feature/v1/feature_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 1 | 2022-01-07T19:59:25.000Z | 2022-02-04T01:21:14.000Z | lang/python/github/com/metaprov/modelaapi/services/feature/v1/feature_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 1 | 2022-03-25T10:21:43.000Z | 2022-03-25T10:21:43.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from github.com.metaprov.modelaapi.services.feature.v1 import feature_pb2 as github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2
class FeatureServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListFeatures = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/ListFeatures',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesResponse.FromString,
)
self.CreateFeature = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/CreateFeature',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureResponse.FromString,
)
self.GetFeature = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/GetFeature',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureResponse.FromString,
)
self.UpdateFeature = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/UpdateFeature',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureResponse.FromString,
)
self.DeleteFeature = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/DeleteFeature',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureResponse.FromString,
)
class FeatureServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def ListFeatures(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateFeature(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetFeature(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateFeature(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteFeature(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_FeatureServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'ListFeatures': grpc.unary_unary_rpc_method_handler(
servicer.ListFeatures,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesResponse.SerializeToString,
),
'CreateFeature': grpc.unary_unary_rpc_method_handler(
servicer.CreateFeature,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureResponse.SerializeToString,
),
'GetFeature': grpc.unary_unary_rpc_method_handler(
servicer.GetFeature,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureResponse.SerializeToString,
),
'UpdateFeature': grpc.unary_unary_rpc_method_handler(
servicer.UpdateFeature,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureResponse.SerializeToString,
),
'DeleteFeature': grpc.unary_unary_rpc_method_handler(
servicer.DeleteFeature,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'github.com.metaprov.modelaapi.services.feature.v1.FeatureService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class FeatureService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def ListFeatures(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/ListFeatures',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CreateFeature(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/CreateFeature',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.CreateFeatureResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetFeature(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/GetFeature',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.GetFeatureResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def UpdateFeature(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/UpdateFeature',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.UpdateFeatureResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeleteFeature(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.feature.v1.FeatureService/DeleteFeature',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.DeleteFeatureResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
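# Editor sketch (not emitted by protoc): minimal client-side usage of the
# generated stub. The function name and target address are placeholders, and
# ListFeaturesRequest is constructed with default (empty) fields.
def _example_list_features(target='localhost:8080'):
    """Call FeatureService.ListFeatures once over an insecure channel."""
    with grpc.insecure_channel(target) as channel:
        stub = FeatureServiceStub(channel)
        request = github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_feature_dot_v1_dot_feature__pb2.ListFeaturesRequest()
        return stub.ListFeatures(request)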
| 58.517588 | 171 | 0.740919 | 1,233 | 11,645 | 6.497972 | 0.089213 | 0.077384 | 0.04643 | 0.058038 | 0.888667 | 0.888667 | 0.888667 | 0.859586 | 0.845731 | 0.816276 | 0 | 0.008024 | 0.197338 | 11,645 | 198 | 172 | 58.813131 | 0.849149 | 0.058909 | 0 | 0.493827 | 1 | 0 | 0.104484 | 0.077743 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.012346 | 0.030864 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
436a6b0fa381b63e49af78c76313793dbbb6483b | 15,323 | py | Python | installer/generate_update_json.py | jat-jat/openmpt | 2d939ac2d41c054cb28bee18f0ba449c3a563335 | [
"BSD-3-Clause"
] | 335 | 2017-02-25T16:39:27.000Z | 2022-03-29T17:45:42.000Z | installer/generate_update_json.py | jat-jat/openmpt | 2d939ac2d41c054cb28bee18f0ba449c3a563335 | [
"BSD-3-Clause"
] | 7 | 2018-02-05T18:22:38.000Z | 2022-02-15T19:35:24.000Z | installer/generate_update_json.py | jat-jat/openmpt | 2d939ac2d41c054cb28bee18f0ba449c3a563335 | [
"BSD-3-Clause"
] | 69 | 2017-04-10T00:48:09.000Z | 2022-03-20T10:24:45.000Z | #!/usr/bin/env python3
import datetime
import hashlib
import json
import os
from subprocess import Popen
OPENMPT_VERSION_MAJORMAJOR = os.environ['OPENMPT_VERSION_MAJORMAJOR']
OPENMPT_VERSION_MAJOR = os.environ['OPENMPT_VERSION_MAJOR']
OPENMPT_VERSION_MINOR = os.environ['OPENMPT_VERSION_MINOR']
OPENMPT_VERSION_MINORMINOR = os.environ['OPENMPT_VERSION_MINORMINOR']
SVNVERSION = os.environ['SVNVERSION']
IS_RELEASE = True if OPENMPT_VERSION_MINORMINOR == "00" else False
if IS_RELEASE:
download_base_url = "https://download.openmpt.org/archive/openmpt/"
announcement_url = "https://openmpt.org/openmpt-" + OPENMPT_VERSION_MAJORMAJOR + "-" + OPENMPT_VERSION_MAJOR + "-" + OPENMPT_VERSION_MINOR + "-" + OPENMPT_VERSION_MINORMINOR + "-released"
changelog_url = "https://openmpt.org/release_notes/History.txt"
else:
download_base_url = "https://builds.openmpt.org/builds/auto/openmpt/pkg.win/"
announcement_url = "https://builds.openmpt.org/builds/auto/openmpt/pkg.win/"
changelog_url = "https://source.openmpt.org/browse/openmpt/?op=revision&rev=" + SVNVERSION
os.chdir(os.path.dirname(os.path.abspath(__file__)))
os.chdir("..")
plainversion = OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR
version = OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR if IS_RELEASE else OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-" + SVNVERSION
def hash_file_sha512(filename):
sha512 = hashlib.sha512()
with open(filename, "rb") as f:
sha512.update(f.read())
return sha512.hexdigest()
def hash_file_sha3_512(filename):
sha3_512 = hashlib.sha3_512()
with open(filename, "rb") as f:
sha3_512.update(f.read())
return sha3_512.hexdigest()
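# Editor sketch (not used by the release pipeline): a chunked variant of the
# helpers above, useful if the artefacts ever become too large to read into
# memory in one call. The function name and chunk size are illustrative.
def hash_file_sha512_chunked(filename, chunk_size=1 << 20):
    sha512 = hashlib.sha512()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha512.update(chunk)
    return sha512.hexdigest()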
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-Setup.exe",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-Setup.exe"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-Setup.exe"),
},
"filename": "OpenMPT-" + version + "-Setup.exe",
"autoupdate_installer": {
"arguments": [ "/SP-", "/SILENT", "/NOCANCEL", "/AUTOUPDATE=yes" ]
},
"autoupdate_archive": None
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-Setup.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-x86.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-x86.zip"),
},
"filename": "OpenMPT-" + version + "-portable-x86.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-x86.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86-legacy.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-x86-legacy.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-x86-legacy.zip"),
},
"filename": "OpenMPT-" + version + "-portable-x86-legacy.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-x86-legacy.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-amd64.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-amd64.zip"),
},
"filename": "OpenMPT-" + version + "-portable-amd64.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-amd64.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64-legacy.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-amd64-legacy.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-amd64-legacy.zip"),
},
"filename": "OpenMPT-" + version + "-portable-amd64-legacy.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-amd64-legacy.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-arm.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-arm.zip"),
},
"filename": "OpenMPT-" + version + "-portable-arm.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-arm.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm64.zip",
"checksums": {
"SHA-512": hash_file_sha512("installer/OpenMPT-" + plainversion + "-portable-arm64.zip"),
"SHA3-512": hash_file_sha3_512("installer/OpenMPT-" + plainversion + "-portable-arm64.zip"),
},
"filename": "OpenMPT-" + version + "-portable-arm64.zip",
"autoupdate_installer": None,
"autoupdate_archive": {
"subfolder": "",
"restartbinary": "OpenMPT.exe"
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-arm64.update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
update = {
"OpenMPT " + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR: {
"version": version,
"date": datetime.datetime.utcnow().isoformat(),
"announcement_url": announcement_url,
"changelog_url": changelog_url,
"downloads": {
"installer": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-Setup.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-Setup.exe",
"type": "installer",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":6, "version_minor":1, "servicepack_major":1, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": { "x86":True },
"supported_architectures": { "x86":True,"amd64":True,"arm":True,"arm64":True },
"required_processor_features": { "x86":{"sse2":True}, "amd64":{"sse2":True} }
},
"portable-x86": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":10, "version_minor":0, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "x86":True },
"required_processor_features": { "x86":{"sse2":True} }
},
"portable-x86-legacy": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86-legacy.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-x86-legacy.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":6, "version_minor":1, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "x86":True },
"required_processor_features": { "x86":{"sse2":True} }
},
"portable-amd64": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":10, "version_minor":0, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "amd64":True },
"required_processor_features": { "amd64":{"sse2":True} }
},
"portable-amd64-legacy": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64-legacy.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-amd64-legacy.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":6, "version_minor":1, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "amd64":True },
"required_processor_features": { "amd64":{"sse2":True} }
},
"portable-arm": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":10, "version_minor":0, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "arm":True },
"required_processor_features": { "arm":{} }
},
"portable-arm64": {
"url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm64.update.json",
"download_url": download_base_url + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "/OpenMPT-" + version + "-portable-arm64.zip",
"type": "archive",
"can_autoupdate": True,
"autoupdate_minversion": "1.30.00.08",
"os": "windows",
"required_windows_version": { "version_major":10, "version_minor":0, "servicepack_major":0, "servicepack_minor":0, "build":0, "wine_major":1, "wine_minor":8, "wine_update":0 },
"required_architectures": {},
"supported_architectures": { "arm64":True },
"required_processor_features": { "arm64":{} }
}
}
}
}
with open("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-update.json", "wb") as f:
f.write((json.dumps(update, ensure_ascii=False, indent=1)).encode('utf-8'))
f.close()
def sign_file(filename):
p = Popen(["bin/release/vs2022-win7-static/amd64/updatesigntool.exe", "sign", "jws", "auto", filename, filename + ".jws.json"])
p.communicate()
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-Setup.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-x86.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-x86-legacy.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-amd64.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-amd64-legacy.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-arm.update.json")
sign_file("installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-portable-arm64.update.json")
pdumpkey = Popen(["bin/release/vs2022-win7-static/amd64/updatesigntool.exe", "dumpkey", "auto", "installer/" + "OpenMPT-" + OPENMPT_VERSION_MAJORMAJOR + "." + OPENMPT_VERSION_MAJOR + "." + OPENMPT_VERSION_MINOR + "." + OPENMPT_VERSION_MINORMINOR + "-update-publickey.jwk.json"])
pdumpkey.communicate()
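# Editor note (summary, not part of the build logic): the script above writes
# one <package>.update.json descriptor per installer/portable archive, a
# combined OpenMPT-<version>-update.json manifest, signs each descriptor into
# a .jws.json file with updatesigntool, and finally dumps the update signing
# public key to OpenMPT-<version>-update-publickey.jwk.json.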
| 55.518116 | 291 | 0.698036 | 1,758 | 15,323 | 5.791809 | 0.076792 | 0.221371 | 0.103712 | 0.130917 | 0.876154 | 0.864074 | 0.831664 | 0.821057 | 0.821057 | 0.800138 | 0 | 0.029852 | 0.118971 | 15,323 | 275 | 292 | 55.72 | 0.72437 | 0.00137 | 0 | 0.411067 | 0 | 0 | 0.369322 | 0.1196 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011858 | false | 0 | 0.019763 | 0 | 0.039526 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4372e9a82d1f9b6c909d5a58f4e7e9a841eed579 | 185 | py | Python | python/testData/inspections/RemoveUnicodePrefixFromGluedStringNodesWithSlash.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | null | null | null | python/testData/inspections/RemoveUnicodePrefixFromGluedStringNodesWithSlash.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | null | null | null | python/testData/inspections/RemoveUnicodePrefixFromGluedStringNodesWithSlash.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | 1 | 2020-11-27T10:36:50.000Z | 2020-11-27T10:36:50.000Z | s = <error descr="Python version 3.2 does not support a 'U' prefix">u<caret></error>"string\n" \
<error descr="Python version 3.2 does not support a 'U' prefix">u</error>"next line" | 92.5 | 96 | 0.686486 | 34 | 185 | 3.735294 | 0.529412 | 0.15748 | 0.251969 | 0.362205 | 0.755906 | 0.755906 | 0.755906 | 0.755906 | 0.755906 | 0.755906 | 0 | 0.025316 | 0.145946 | 185 | 2 | 97 | 92.5 | 0.778481 | 0 | 0 | 0 | 0 | 0 | 0.607527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
437313c3f66868b4eefe6f9b7eb2ae3c0e9de94d | 406 | py | Python | ipython/startup/import_subprocess.py | dycw/dotfiles2 | 9e23c4989e9813080da3658a8f98dbb1e03776f2 | [
"MIT"
] | null | null | null | ipython/startup/import_subprocess.py | dycw/dotfiles2 | 9e23c4989e9813080da3658a8f98dbb1e03776f2 | [
"MIT"
] | null | null | null | ipython/startup/import_subprocess.py | dycw/dotfiles2 | 9e23c4989e9813080da3658a8f98dbb1e03776f2 | [
"MIT"
] | null | null | null | import subprocess # noqa: F401, S404
from subprocess import DEVNULL # noqa: F401, S404
from subprocess import PIPE # noqa: F401, S404
from subprocess import STDOUT # noqa: F401, S404
from subprocess import CalledProcessError # noqa: F401, S404
from subprocess import check_call # noqa: F401, S404
from subprocess import check_output # noqa: F401, S404
from subprocess import run # noqa: F401, S404
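# Editor note (sketch): with this startup file in place, an interactive IPython
# session can call the re-exported names directly, e.g.
#     check_output(["echo", "hi"], stderr=STDOUT)
#     run(["ls"], stdout=PIPE, check=True)
# The commands shown are illustrative only.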
| 45.111111 | 61 | 0.768473 | 56 | 406 | 5.535714 | 0.25 | 0.206452 | 0.309677 | 0.36129 | 0.754839 | 0.754839 | 0.23871 | 0 | 0 | 0 | 0 | 0.142857 | 0.172414 | 406 | 8 | 62 | 50.75 | 0.779762 | 0.332512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
43c38fc9aabe918a0e8578368863f148707fd180 | 63,275 | py | Python | anuga/operators/tests/test_kinematic_viscosity_operator.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | anuga/operators/tests/test_kinematic_viscosity_operator.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | anuga/operators/tests/test_kinematic_viscosity_operator.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | from __future__ import division
from past.utils import old_div
import operator
from anuga import Domain
from anuga import Quantity
from anuga import Dirichlet_boundary
from anuga.operators.kinematic_viscosity_operator import Kinematic_viscosity_operator
from pprint import pprint
import numpy as num
from math import sqrt
import unittest
import os
class Test_kinematic_viscosity(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
try:
os.remove('domain.sww')
except:
pass
try:
pass
#os.remove('anuga.log')
except:
pass
#First test operator class (1 triangle)
def operator1(self):
points = num.array([[0.0,0.0],[1.0,0.0],[0.0,1.0]])
elements = num.array([[0,1,2]])
boundary_map = {}
boundary_map[(0,0)] = 'edge0'
boundary_map[(0,1)] = 'edge1'
boundary_map[(0,2)] = 'edge2'
domain = Domain(coordinates=points,vertices=elements,boundary=boundary_map)
D0 = Dirichlet_boundary([1,0,3])
D1 = Dirichlet_boundary([2,1,0])
D2 = Dirichlet_boundary([3,1,2])
domain.set_boundary({'edge0': D0, 'edge1': D1, 'edge2': D2})
domain.set_quantity('stage', lambda x,y : x+2*y )
domain.set_quantity('elevation', lambda x,y : 3*x+5*y )
#print domain.quantities['stage'].vertex_values
#print domain.quantities['stage'].edge_values
domain.update_boundary()
#print domain.quantities['stage'].boundary_values
return Kinematic_viscosity_operator(domain)
#Second test operator class (2 triangles)
def operator2(self):
points = num.array([[0.0,0.0],[1.0,0.0],[1.0,1.0],[0.0,1.0]])
elements = num.array([[0,1,3],[1,2,3]])
boundary_map = {}
boundary_map[(0,1)] = 'edge0'
boundary_map[(0,2)] = 'edge1'
boundary_map[(1,0)] = 'edge2'
boundary_map[(1,2)] = 'edge3'
domain = Domain(coordinates=points,vertices=elements,boundary=boundary_map)
D0 = Dirichlet_boundary([1,1,2])
D1 = Dirichlet_boundary([1,2,2])
D2 = Dirichlet_boundary([1,1,0])
D3 = Dirichlet_boundary([1,2,1])
domain.set_boundary({'edge0': D0, 'edge1': D1, 'edge2': D2, 'edge3': D3})
domain.update_boundary()
return Kinematic_viscosity_operator(domain)
def test_enumerate_boundary(self):
operator1 = self.operator1()
boundary_enumeration = operator1.domain.boundary_enumeration
assert boundary_enumeration[(0,0)] == 0
assert boundary_enumeration[(0,1)] == 1
assert boundary_enumeration[(0,2)] == 2
operator2 = self.operator2()
boundary_enumeration = operator2.domain.boundary_enumeration
assert boundary_enumeration[(0,1)] == 0
assert boundary_enumeration[(0,2)] == 1
assert boundary_enumeration[(1,0)] == 2
assert boundary_enumeration[(1,2)] == 3
def test_geo_structure(self):
operator1 = self.operator1()
indices = operator1.geo_structure_indices
values = operator1.geo_structure_values
assert num.allclose(indices, num.array([[1, 2, 3]]))
assert num.allclose(values, num.array([[-6.0, old_div(-6.0,sqrt(5)), old_div(-6.0,sqrt(5))]]))
operator2 = self.operator2()
indices = operator2.geo_structure_indices
values = operator2.geo_structure_values
assert num.allclose(indices, num.array([[1,2,3],[4,0,5]]))
assert num.allclose(values, num.array([[-3.0,old_div(-6.0,sqrt(5)),old_div(-6.0,sqrt(5))],[old_div(-6.0,sqrt(5)),-3.0,old_div(-6.0,sqrt(5))]]))
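    # The elliptic matrix assembled from a constant diffusivity a should scale
    # linearly with a; rows corresponding to negative values are zeroed.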
def test_elliptic_matrix_one_triangle(self):
operator = self.operator1()
domain = operator.domain
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
operator.update_elliptic_matrix(a)
A = operator.elliptic_matrix
assert num.allclose(A.todense(), num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)]))
a.set_values(10.0)
a.set_boundary_values(10.0)
operator.update_elliptic_matrix(a)
assert num.allclose(A.todense(), 10*num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)]))
def test_elliptic_matrix_two_triangles(self):
operator = self.operator2()
domain = operator.domain
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
operator.update_elliptic_matrix(a)
A = operator.elliptic_matrix
A0 = num.array([[-3.0,3.0,0.0,0.0,0.0,0.0],
[0.0,old_div(-6.0,sqrt(5.0)),0.0,0.0,6.0/sqrt(5.0),0.0]])
A1 = num.array([[old_div(-6.0,sqrt(5.0)),0.0,6.0/sqrt(5.0),0.0,0.0,0.0],\
[3.0,-3.0,0.0,0.0,0.0,0.0]])
A2 = num.array([[old_div(-6.0,sqrt(5.0)),0.0,0.0,6.0/sqrt(5.0),0.0,0.0],\
[0.0, old_div(-6.0,sqrt(5.0)), 0.0, 0.0, 0.0, 6.0/sqrt(5.0)]])
assert num.allclose(A.todense(), A0+A1+A2)
a.set_values([2.0, 1.0], location = 'centroids')
a.set_boundary_values(1.0)
operator.update_elliptic_matrix(a)
A = operator.elliptic_matrix
assert num.allclose(A.todense()[0,:], 1.5*A0[0,:]+1.5*A1[0,:]+1.5*A2[0,:])
assert num.allclose(A.todense()[1,:], A0[1,:]+1.5*A1[1,:]+A2[1,:])
        # If either centroid value is negative the corresponding matrix row is set to zero
a.set_values([-2.0, -2.0], location = 'centroids')
a.set_boundary_values(1.0)
operator.update_elliptic_matrix(a)
assert num.allclose(A.todense()[0,:], 0.0)
assert num.allclose(A.todense()[1,:], 0.0)
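    # elliptic_multiply applies the elliptic matrix to a quantity, with or without
    # the boundary contribution and the per-triangle area scaling (set_triangle_areas).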
def test_elliptic_multiply_include_boundary_one_triangle(self):
operator = self.operator1()
operator.set_triangle_areas(False)
#print operator.apply_triangle_areas
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
operator.update_elliptic_matrix()
q_in = Quantity(operator.domain)
q_in.set_values(1.0)
q_in.set_boundary_values(1.0)
n = operator.n
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in)
q_2 = operator.elliptic_multiply(q_in, quantity_out = q_in)
assert id(q_in) == id(q_2)
assert num.allclose(q_1.centroid_values,q_2.centroid_values)
assert num.allclose( num.zeros((n,), float), q_1.centroid_values )
#Now have different boundary values
q_in.set_values(1.0)
q_in.set_boundary_values(0.0)
operator.update_elliptic_matrix(a)
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in)
assert num.allclose( [-6.0-12.0/sqrt(5)], q_1.centroid_values )
def test_elliptic_multiply_exclude_boundary_one_triangle(self):
operator = self.operator1()
operator.set_triangle_areas(False)
#print operator.apply_triangle_areas
#n = operator.n
q_in = Quantity(operator.domain)
q_in.set_values(1.0)
q_in.set_boundary_values(1.0)
operator.update_elliptic_matrix()
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in, include_boundary=False)
assert num.allclose( [-6.0-12.0/sqrt(5)], q_1.centroid_values )
    def test_elliptic_multiply_include_boundary_one_triangle_with_areas(self):
operator = self.operator1()
operator.set_triangle_areas(True)
n = operator.n
q_in = Quantity(operator.domain)
q_in.set_values(1.0)
q_in.set_boundary_values(1.0)
operator.update_elliptic_matrix()
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in)
q_2 = operator.elliptic_multiply(q_in, output = q_in)
assert id(q_in) == id(q_2)
assert num.allclose(q_1.centroid_values,q_2.centroid_values)
assert num.allclose( [-12.0-24.0/sqrt(5)], q_1.centroid_values )
#Now have different boundary values
q_in.set_values(1.0)
q_in.set_boundary_values(0.0)
operator.update_elliptic_matrix()
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in)
assert num.allclose( [-12.0-24.0/sqrt(5)], q_1.centroid_values )
    def test_elliptic_multiply_exclude_boundary_one_triangle_with_areas(self):
operator = self.operator1()
operator.set_triangle_areas(True)
q_in = Quantity(operator.domain)
q_in.set_values(1.0)
q_in.set_boundary_values(1.0)
operator.update_elliptic_matrix()
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
q_1 = operator.elliptic_multiply(q_in)
assert num.allclose( [-12.0-24.0/sqrt(5)], q_1.centroid_values )
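    # Multiplication is only defined between the operator and a Quantity;
    # multiplying by a plain number should raise a TypeError.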
def test_mul_arg(self):
operator = self.operator1()
u = Quantity(operator.domain)
u.set_values(2.0)
#q boundary_values should equal 0.0
operator.update_elliptic_boundary_term(u)
r = 2.0
try:
q_out = operator * 2.0
except TypeError:
pass
else:
            raise Exception('Should have caught a TypeError')
def test_mul(self):
operator = self.operator1()
u = Quantity(operator.domain)
u.set_values(2.0)
#q boundary_values should equal 0.0
operator.update_elliptic_matrix()
operator.update_elliptic_boundary_term(u)
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
V1 = num.array([2.0]) #u=2
U1 = num.array([[2.0],[0.0],[0.0],[0.0]])
q_out = operator * u
assert num.allclose(q_out.centroid_values, 2*num.array(num.mat(A)*num.mat(U1)).reshape(1,))
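    # The elliptic solve tests assemble a right hand side from a known solution
    # and check that elliptic_solve recovers that solution.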
def test_elliptic_solve_one_triangle(self):
operator = self.operator1()
n = operator.n
U = num.array([2.0,2.0,1.0,1.0])
u_in = Quantity(operator.domain)
u_in.set_values(U[:1], location='centroids')
u_in.set_boundary_values(U[1:])
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
# Do this to get access to the matrix
# This is also called inside elliptic_solve
operator.update_elliptic_matrix(a)
V = num.array([2.0]) #h=1, u=2
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
#U = num.array([[2.0,2.0],[2.0,1.0],[1.0,2.0],[1.0,0.0]])
        # Set up rhs as b = A u
X = num.array(2*num.mat(A)*num.mat(U.reshape(4,1))).reshape(1,)
b = Quantity(operator.domain)
b.set_values(X, location='centroids')
u_in.set_values(0.0)
u_out = operator.elliptic_solve(u_in, b, a, iprint=1)
assert num.allclose(u_out.centroid_values, U[:n])
def test_elliptic_solve_two_triangle(self):
operator = self.operator2()
n = operator.n
U = num.array([2.0,3.0,1.0,1.0,4.0,3.0])
u_in = Quantity(operator.domain)
u_in.set_values(U[:2], location='centroids')
u_in.set_boundary_values(U[2:])
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
# Do this to get access to the matrix
# This is also called inside elliptic_solve
operator.update_elliptic_matrix(a)
V1 = U[:n]
V2 = U[n:]
A = num.mat(operator.elliptic_matrix.todense())
U = num.mat(U.reshape(6,1))
        # Set up rhs as b = A u
X = num.array(2*A*U).reshape(2,)
b = Quantity(operator.domain)
b.set_values(X, location='centroids')
u_in.set_values(0.0)
u_out = operator.elliptic_solve(u_in, b, a, iprint=1)
assert num.allclose(u_out.centroid_values, V1)
assert num.allclose(u_out.boundary_values, V2)
def test_elliptic_solve_rectangular_cross(self):
from anuga import rectangular_cross_domain
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
# Diffusivity
a = Quantity(domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
# Quantity to solve
u = Quantity(domain)
u.set_values(0.0)
u.set_boundary_values(1.0)
# Quantity for rhs
b = Quantity(domain)
b.set_values(0.0)
b.set_boundary_values(0.0)
operator = Kinematic_viscosity_operator(domain)
n = operator.n
tot_len = operator.tot_len
u_out = operator.elliptic_solve(u, b, a, iprint=1)
assert num.allclose(u_out.centroid_values, num.ones_like(u_out.centroid_values))
assert num.allclose(u_out.boundary_values, num.ones_like(u_out.boundary_values))
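    # The parabolic solve tests do the same for a single time step of size
    # operator.dt, building the right hand side from a known solution.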
def test_parabolic_solve_one_triangle(self):
operator = self.operator1()
n = operator.n
dt = operator.dt
U = num.array([2.0,2.0,1.0,1.0])
U_mod = num.array([10.0, 2.0, 1.0, 1.0])
u_in = Quantity(operator.domain)
u_in.set_values(U[:n], location='centroids')
u_in.set_boundary_values(U_mod[n:])
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
V = num.array([2.0])
A = num.array([-6.0-12.0/sqrt(5), 6.0, 6.0/sqrt(5), 6.0/sqrt(5)])
        # Set up rhs
X = U_mod[:n] - dt*2*num.array(num.mat(A)*num.mat(U_mod.reshape(4,1))).reshape(n,)
b = Quantity(operator.domain)
b.set_values(X, location='centroids')
u_out = operator.parabolic_solve(u_in, b, a, iprint=1)
assert num.allclose(u_out.centroid_values, U_mod[:n])
def test_parabolic_solve_two_triangles(self):
operator = self.operator2()
n = operator.n
nt = operator.tot_len
dt = operator.dt
U = num.array([2.0,3.0,1.0,1.0,4.0,3.0])
U_mod = num.array([4.0,2.0,1.0,1.0,4.0,3.0])
u_in = Quantity(operator.domain)
u_in.set_values(U[:n], location='centroids')
u_in.set_boundary_values(U_mod[n:])
a = Quantity(operator.domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
operator.update_elliptic_matrix(a)
A = num.array([[-8.36656315, 3., 2.68328157, 2.68328157, 0., 0. ],
[ 3., -8.36656315 , 0. , 0. , 2.68328157, 2.68328157]])
assert num.allclose(A,operator.elliptic_matrix.todense())
        # Set up rhs
X = U_mod[:n] - dt*2*num.array(num.mat(A)*num.mat(U_mod.reshape(nt,1))).reshape(n,)
b = Quantity(operator.domain)
b.set_values(X, location='centroids')
u_out = operator.parabolic_solve(u_in, b, a, iprint=1)
assert num.allclose(u_out.centroid_values, U_mod[:n])
def test_parabolic_solve_rectangular_cross(self):
from anuga import rectangular_cross_domain
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
# Diffusivity
a = Quantity(domain)
a.set_values(1.0)
a.set_boundary_values(1.0)
# Quantity initial condition
u_in = Quantity(domain)
#u_in.set_values( 0.0 )
u_in.set_values(lambda x,y : 16.0*x*(1-x)*y*(1-y))
u_in.set_boundary_values(0.0)
# Quantity to solve
u_mod = Quantity(domain)
u_mod.set_values(lambda x,y : 15.9*x*(1-x)*y*(1-y) )
u_mod.set_boundary_values(0.0)
# Quantity for rhs
b = Quantity(domain)
b.set_values(0.0)
b.set_boundary_values(0.0)
operator = Kinematic_viscosity_operator(domain)
dt = 0.01
operator.dt = dt
n = operator.n
nt = operator.tot_len
operator.update_elliptic_matrix(a)
A = num.mat(operator.elliptic_matrix.todense())
D = num.mat(operator.triangle_areas.todense())
U_mod = num.concatenate( (u_mod.centroid_values, u_mod.boundary_values) )
        # Set up rhs
X = U_mod[:n] - dt*num.array(D*A*num.mat(U_mod.reshape(nt,1))).reshape(n,)
b = Quantity(operator.domain)
b.set_values(X, location='centroids')
u_out = operator.parabolic_solve(u_in, b, a, iprint=1, use_dt_tol=False)
assert num.allclose(u_out.centroid_values, U_mod[:n])
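    # The velocity-based tests use a rectangular cross mesh with reflective
    # boundaries, the height quantity as diffusivity and the x/y velocities as
    # the quantities being diffused.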
def test_elliptic_solve_rectangular_cross_velocities(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*x')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
domain.update_boundary()
domain.update_centroids_of_velocities_and_height()
a = domain.quantities['height']
# Quantity to solve
u = domain.quantities['xvelocity']
u.set_boundary_values(1.0)
v = domain.quantities['yvelocity']
v.set_boundary_values(2.0)
# Quantity for rhs
b = Quantity(domain)
b.set_values(0.0)
b.set_boundary_values(0.0)
kv = Kinematic_viscosity_operator(domain)
n = kv.n
tot_len = kv.tot_len
kv.update_elliptic_matrix(a)
u_out = kv.elliptic_solve(u, b, a, update_matrix=False, iprint=1)
v_out = kv.elliptic_solve(v, b, a, update_matrix=False, iprint=1)
assert num.allclose(u_out.centroid_values, num.ones_like(u_out.centroid_values))
assert num.allclose(u_out.boundary_values, num.ones_like(u_out.boundary_values))
def test_parabolic_solve_rectangular_cross_velocities(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*x')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
domain.update_boundary()
domain.update_centroids_of_velocities_and_height()
h = domain.quantities['height']
# Quantity to solve
u = domain.quantities['xvelocity']
u.set_boundary_values(1.0)
v = domain.quantities['yvelocity']
v.set_boundary_values(2.0)
kv = Kinematic_viscosity_operator(domain)
        # Make the timestep large so that the final solution approaches
        # the solution of the elliptic problem. In this case u -> 1, v -> 2.
dt = 100.0
kv.dt = dt
n = kv.n
nt = kv.tot_len
kv.update_elliptic_matrix(h)
kv.parabolic_solve(u, u, h, u_out=u, update_matrix=False, iprint=1, use_dt_tol=False)
kv.parabolic_solve(v, v, h, u_out=v, update_matrix=False, iprint=1, use_dt_tol=False)
#print u.centroid_values
#print u.boundary_values
assert num.allclose(u.centroid_values, num.ones_like(u.centroid_values), rtol=1.0e-1)
assert num.allclose(u.boundary_values, num.ones_like(u.boundary_values))
assert num.allclose(v.centroid_values, 2.0*num.ones_like(v.centroid_values), rtol=1.0e-1)
assert num.allclose(v.boundary_values, 2.0*num.ones_like(v.boundary_values))
domain.update_centroids_of_momentum_from_velocity()
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
assert num.allclose(uh.centroid_values, u.centroid_values*h.centroid_values )
assert num.allclose(vh.centroid_values, v.centroid_values*h.centroid_values )
def test_parabolic_solve_rectangular_cross_velocities_zero_h(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 5
n1 = 5
domain = rectangular_cross_domain(m1,n1)
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.45)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
w = domain.quantities['stage']
#print w.centroid_values
#print w.boundary_values
domain.distribute_to_vertices_and_edges()
#print w.centroid_values
#print w.boundary_values
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
domain.update_boundary()
#print w.centroid_values
#print w.boundary_values
domain.update_centroids_of_velocities_and_height()
h = domain.quantities['height']
h.centroid_values[:] = num.where(h.centroid_values < 1.0e-12, 0.0, h.centroid_values)
#print 'h'
#print h.centroid_values
#print h.boundary_values
# Quantity to solve
u = domain.quantities['xvelocity']
u.set_boundary_values(1.0)
#print 'u'
#print u.centroid_values
#print u.boundary_values
v = domain.quantities['yvelocity']
v.set_boundary_values(2.0)
kv = Kinematic_viscosity_operator(domain)
        # Make the timestep large so that the final solution approaches
        # the solution of the elliptic problem. In this case u -> 1, v -> 2.
dt = 1000.0
kv.dt = dt
n = kv.n
nt = kv.tot_len
kv.update_elliptic_matrix(h)
kv.parabolic_solve(u, u, h, u_out=u, update_matrix=False, iprint=1, use_dt_tol=False)
kv.parabolic_solve(v, v, h, u_out=v, update_matrix=False, iprint=1, use_dt_tol=False)
u_expected = \
num.array([ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0.88049303, 0.85774725, 0.63198513, 0. ,
0.60127309, 0.74335638, 0.56726693, 0. , 0.5619257 ,
0.72367268, 0.56292098, 0. , 0.56846364, 0.74395284,
0.60155678, 0. , 0.63250083, 0.8583354 , 0.88103078,
0.91424291, 0.98161599, 0.9681383 , 0.92489827, 0.83150189,
0.90499771, 0.92610594, 0.88016105, 0.81330027, 0.87520116,
0.9137613 , 0.87524587, 0.83194825, 0.88028462, 0.92624037,
0.90532118, 0.91457731, 0.92521631, 0.96831727, 0.98169171,
0.98017988, 0.99638864, 0.99691038, 0.98420946, 0.95322667,
0.97923645, 0.99234935, 0.97272033, 0.94496518, 0.97116962,
0.99081552, 0.97116479, 0.95330259, 0.97272909, 0.99236019,
0.979296 , 0.9803074 , 0.98428114, 0.99692939, 0.9964171 ])
assert num.allclose(u.centroid_values, u_expected, rtol=1.0e-4)
assert num.allclose(u.boundary_values, num.ones_like(u.boundary_values))
v_expected = \
num.array([ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 1.76107875, 1.71571872, 1.26450977, 0. ,
1.20323049, 1.48723907, 1.13511687, 0. , 1.12443699,
1.44790348, 1.12678546, 0. , 1.1379396 , 1.488628 ,
1.20388888, 0. , 1.26569807, 1.71709086, 1.76232374,
1.82867872, 1.96328928, 1.93637645, 1.84998653, 1.66337269,
1.81022916, 1.85243013, 1.76063685, 1.62705824, 1.75073279,
1.82775908, 1.75083189, 1.66440859, 1.76091792, 1.85274102,
1.81097589, 1.82944675, 1.85071618, 1.93678282, 1.96345914,
1.96043069, 1.99279212, 1.99383543, 1.96848768, 1.90661323,
1.95855951, 1.98473153, 1.94554555, 1.89009256, 1.9424436 ,
1.98166495, 1.94242902, 1.90678928, 1.94556252, 1.98475631,
1.95869882, 1.96072166, 1.96865435, 1.99387965, 1.99285742])
assert num.allclose(v.centroid_values, v_expected, rtol=1.0e-4)
assert num.allclose(v.boundary_values, 2.0*num.ones_like(v.boundary_values))
domain.update_centroids_of_momentum_from_velocity()
domain.distribute_to_vertices_and_edges()
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#print 'uh'
#print uh.centroid_values
#print uh.boundary_values
assert num.allclose(uh.centroid_values, u.centroid_values*h.centroid_values )
assert num.allclose(vh.centroid_values, v.centroid_values*h.centroid_values )
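    # The remaining tests attach the operator to a full shallow water evolution
    # and compare the resulting stage centroid values against stored reference
    # arrays, for the different ways of specifying the diffusivity.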
def test_kinematic_operator_default_1_5(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
domain.set_flow_algorithm('1_5')
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.5)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
# kill off the wave with viscosity
kv = Kinematic_viscosity_operator(domain)
        # Evolve for 10 seconds, letting the viscosity operator act each step.
for t in domain.evolve(yieldstep = 1.0, finaltime = 10.0):
#domain.write_time()
#domain.print_operator_timestepping_statistics()
pass
#
w = domain.quantities['stage']
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#print 'uh'
#print uh.centroid_values
#print uh.boundary_values
#print 'w'
#print w.centroid_values
#from pprint import pprint
#pprint(w.centroid_values)
wc = num.array([
0.70714365, 0.70714416, 0.70714295, 0.70714222, 0.70714486,
0.70714507, 0.70714374, 0.70714601, 0.70714492, 0.70714425,
0.70714595, 0.70714437, 0.70714797, 0.70714691, 0.70714697,
0.70714845, 0.70714793, 0.70714793, 0.70715033, 0.70714852,
0.70715244, 0.70715018, 0.70715176, 0.70715224, 0.70715211,
0.70715265, 0.70715351, 0.7071531 , 0.70715433, 0.70715309,
0.70715351, 0.70715472, 0.70715429, 0.70715433, 0.70715487,
0.70715523, 0.7071545 , 0.70715446, 0.70715317, 0.70715564,
0.70714142, 0.70714198, 0.70714079, 0.70714299, 0.70714482,
0.70714378, 0.70714344, 0.70714377, 0.7071443 , 0.70714533,
0.70714579, 0.70714574, 0.70714906, 0.70714717, 0.70714819,
0.70714822, 0.70714976, 0.70714952, 0.70715093, 0.70715077,
0.70715217, 0.70715094, 0.70715291, 0.70715188, 0.70715352,
0.70715278, 0.707154 , 0.70715429, 0.70715376, 0.70715309,
0.70715446, 0.70715422, 0.70715366, 0.70715453, 0.70715413,
0.70715539, 0.70715385, 0.70715412, 0.70715154, 0.70715306,
0.70714038, 0.70713905, 0.7071358 , 0.70713972, 0.70714303,
0.7071419 , 0.70714066, 0.70714219, 0.7071459 , 0.70714505,
0.70714639, 0.70714648, 0.70714833, 0.70714827, 0.70715147,
0.70715013, 0.70715194, 0.70715133, 0.70715542, 0.70715345,
0.70715296, 0.70715417, 0.70715676, 0.70715521, 0.70715526,
0.7071548 , 0.70715717, 0.70715512, 0.70715381, 0.70715523,
0.70715556, 0.70715486, 0.70715482, 0.70715338, 0.70715307,
0.70715381, 0.70715132, 0.70715182, 0.70714789, 0.70715086,
0.70713443, 0.70713559, 0.70713539, 0.70713615, 0.70714057,
0.70713978, 0.70714091, 0.70714102, 0.70714618, 0.70714338,
0.70714803, 0.70714858, 0.7071519 , 0.70715029, 0.70715343,
0.70715461, 0.70715589, 0.70715519, 0.7071565 , 0.70715796,
0.70715738, 0.70715845, 0.7071601 , 0.70715829, 0.70715711,
0.70715903, 0.70716011, 0.70715714, 0.7071565 , 0.70715756,
0.70715885, 0.7071556 , 0.70715386, 0.70715406, 0.70715653,
0.70715532, 0.70714813, 0.7071515 , 0.70715242, 0.70715269,
0.70713191, 0.70712961, 0.70712505, 0.70712841, 0.70714097,
0.70713808, 0.70713862, 0.7071431 , 0.70714966, 0.7071463 ,
0.70715775, 0.70715666, 0.70715566, 0.7071554 , 0.7071632 ,
0.70716353, 0.70715928, 0.70716244, 0.70716736, 0.70716495,
0.70716301, 0.70716635, 0.70717088, 0.70716792, 0.70716369,
0.70717007, 0.7071741 , 0.70716769, 0.70716166, 0.70716991,
0.70717294, 0.70716167, 0.70715775, 0.70716057, 0.70715687,
0.70715535, 0.70715014, 0.70714766, 0.70714559, 0.70714992,
0.7071149 , 0.70708741, 0.706984 , 0.70711096, 0.70714367,
0.70714831, 0.70713519, 0.7071811 , 0.70716622, 0.70716603,
0.70714155, 0.7071748 , 0.70716885, 0.70716897, 0.70713548,
0.70716966, 0.70716924, 0.70716978, 0.70713561, 0.7071717 ,
0.70717389, 0.7071726 , 0.70713926, 0.70717593, 0.70718002,
0.70717761, 0.70714428, 0.70718053, 0.70718062, 0.70718719,
0.70715731, 0.70718271, 0.70716238, 0.7071992 , 0.70715496,
0.70716834, 0.70713531, 0.70713099, 0.70700665, 0.7071098 ,
0.70634397, 0.70524618, 0.70297607, 0.70514658, 0.70658259,
0.70506628, 0.70244401, 0.70497884, 0.70657086, 0.70498266,
0.70239779, 0.70496243, 0.7065572 , 0.7049646 , 0.70239608,
0.70496008, 0.70655538, 0.70496125, 0.70239685, 0.70496177,
0.70655883, 0.70496295, 0.70239957, 0.70496624, 0.70656625,
0.70496724, 0.70240482, 0.7049756 , 0.70658803, 0.70497608,
0.70241139, 0.70500006, 0.70660425, 0.70499778, 0.70246225,
0.70508764, 0.70636798, 0.70516922, 0.70299639, 0.70526838,
0.71780931, 0.7506157 , 0.78399529, 0.75061024, 0.71769206,
0.75059929, 0.78398287, 0.75059279, 0.71768281, 0.75059112,
0.78397863, 0.75059025, 0.71768261, 0.75058996, 0.78397777,
0.75058981, 0.71768268, 0.75058969, 0.78397749, 0.75058967,
0.7176832 , 0.75058972, 0.78397772, 0.75058986, 0.71768421,
0.7505901 , 0.78397859, 0.75059043, 0.71768534, 0.7505909 ,
0.78398028, 0.750592 , 0.71769545, 0.75059388, 0.78398545,
0.75060056, 0.71781337, 0.75061163, 0.78399848, 0.75061714,
0.81739069, 0.85076296, 0.8841241 , 0.85076174, 0.81738381,
0.85075988, 0.88412183, 0.85075808, 0.81738087, 0.85075718,
0.88412031, 0.85075635, 0.81737996, 0.85075599, 0.88411952,
0.85075563, 0.81737963, 0.85075548, 0.88411919, 0.8507555 ,
0.81738003, 0.85075569, 0.88411972, 0.85075629, 0.81738134,
0.85075692, 0.88412133, 0.85075812, 0.81738361, 0.85075914,
0.88412387, 0.85076103, 0.81738807, 0.85076269, 0.88412739,
0.85076547, 0.81739598, 0.85076786, 0.88413107, 0.85076949,
0.91748914, 0.95083916, 0.98417801, 0.95083906, 0.91748809,
0.95083882, 0.98417779, 0.95083863, 0.91748731, 0.95083843,
0.98417752, 0.9508382 , 0.91748674, 0.950838 , 0.9841771 ,
0.95083776, 0.91748646, 0.95083764, 0.98417686, 0.95083771,
0.91748702, 0.95083794, 0.98417744, 0.95083859, 0.91748864,
0.95083927, 0.98417906, 0.95084046, 0.91749107, 0.95084145,
0.98418138, 0.95084291, 0.91749397, 0.95084401, 0.98418384,
0.95084538, 0.91749653, 0.95084626, 0.98418563, 0.95084686])
#print w.centroid_values - wc
#print max(w.centroid_values - wc)
assert num.allclose(w.centroid_values, wc, rtol=2.0e-3)
def test_kinematic_operator_default(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.5)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
# kill off the wave with viscosity
kv = Kinematic_viscosity_operator(domain)
        # Evolve for 10 seconds, letting the viscosity operator act each step.
for t in domain.evolve(yieldstep = 1.0, finaltime = 10.0):
#domain.write_time()
#domain.print_operator_timestepping_statistics()
pass
#
w = domain.quantities['stage']
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#pprint(w.centroid_values)
wc = [ 0.70708499, 0.70708621, 0.70708739, 0.70708592, 0.70708113,
0.70708436, 0.70708375, 0.70708134, 0.70707501, 0.70707913,
0.70707721, 0.70707394, 0.70706646, 0.70707106, 0.7070687 ,
0.70706476, 0.70705719, 0.70706151, 0.70705891, 0.70705455,
0.70704706, 0.70705106, 0.70704916, 0.70704428, 0.70703703,
0.70704066, 0.70703903, 0.7070339 , 0.70702799, 0.70703089,
0.7070303 , 0.70702564, 0.70702151, 0.70702355, 0.70702397,
0.70702031, 0.70701839, 0.70701905, 0.70702107, 0.7070194 ,
0.70709027, 0.70709487, 0.70709941, 0.70709437, 0.70708654,
0.70709293, 0.70709501, 0.70708947, 0.70707992, 0.70708703,
0.70708836, 0.70708163, 0.70707074, 0.70707825, 0.70707864,
0.70707138, 0.70706083, 0.70706806, 0.70706835, 0.70706081,
0.70705083, 0.707057 , 0.70705712, 0.70704999, 0.70704048,
0.70704607, 0.70704625, 0.70703935, 0.70703156, 0.70703616,
0.70703718, 0.70703054, 0.70702507, 0.70702802, 0.70703008,
0.70702449, 0.70702214, 0.70702318, 0.7070272 , 0.7070235 ,
0.70710676, 0.70711548, 0.70712417, 0.70711463, 0.70710228,
0.70711276, 0.70711859, 0.7071085 , 0.70709499, 0.70710565,
0.70710958, 0.70709958, 0.70708503, 0.70709598, 0.70709863,
0.70708794, 0.70707407, 0.70708442, 0.70708659, 0.70707544,
0.70706157, 0.70707155, 0.70707373, 0.70706302, 0.70705048,
0.70705952, 0.70706168, 0.70705157, 0.70704142, 0.70704862,
0.70705162, 0.70704216, 0.70703415, 0.70703963, 0.70704439,
0.70703587, 0.70703088, 0.70703425, 0.70704138, 0.70703513,
0.70713645, 0.70714954, 0.70716232, 0.70714793, 0.70713056,
0.70714553, 0.70715518, 0.70713986, 0.70712065, 0.70713624,
0.70714367, 0.70712812, 0.70710867, 0.70712372, 0.70712926,
0.70711412, 0.70709613, 0.70710986, 0.70711457, 0.70709938,
0.70708183, 0.70709482, 0.70709942, 0.70708498, 0.70706838,
0.70708094, 0.70708603, 0.70707212, 0.7070582 , 0.7070688 ,
0.70707538, 0.70706281, 0.70705122, 0.7070608 , 0.7070685 ,
0.70705631, 0.70704792, 0.70705483, 0.70706486, 0.70705531,
0.70718 , 0.70719823, 0.70721617, 0.70719615, 0.7071729 ,
0.70719352, 0.70720804, 0.70718662, 0.70716074, 0.70718253,
0.70719468, 0.70717278, 0.70714572, 0.70716734, 0.70717659,
0.70715531, 0.7071299 , 0.70714964, 0.70715813, 0.70713672,
0.70711249, 0.70713105, 0.70713951, 0.70711991, 0.70709744,
0.70711495, 0.70712439, 0.70710586, 0.70708626, 0.70710178,
0.70711248, 0.70709512, 0.70707906, 0.70709283, 0.70710466,
0.70708817, 0.70707441, 0.70708663, 0.70710054, 0.70708614,
0.70724108, 0.70726706, 0.70729195, 0.70726351, 0.70723292,
0.70726031, 0.7072815 , 0.70725218, 0.70721887, 0.7072471 ,
0.70726445, 0.70723489, 0.70719979, 0.70722813, 0.70724321,
0.70721483, 0.70718005, 0.7072073 , 0.70721893, 0.70719146,
0.70715956, 0.7071841 , 0.70719678, 0.70717142, 0.70714207,
0.7071646 , 0.70717743, 0.70715424, 0.70712796, 0.70714878,
0.70716287, 0.70714105, 0.70711978, 0.70713804, 0.70715526,
0.70713402, 0.70711483, 0.7071321 , 0.70715154, 0.70713215,
0.70732504, 0.7073667 , 0.70742884, 0.70736225, 0.70731461,
0.70735815, 0.70741174, 0.70734535, 0.70729534, 0.7073381 ,
0.70738346, 0.70732086, 0.70727156, 0.70731057, 0.70734823,
0.7072925 , 0.70724546, 0.70728082, 0.70731384, 0.7072646 ,
0.7072214 , 0.70725274, 0.70728508, 0.7072405 , 0.70720083,
0.70722955, 0.70726458, 0.70722164, 0.70718575, 0.70721333,
0.70725167, 0.70720856, 0.70717715, 0.70720336, 0.70724394,
0.7072014 , 0.70717247, 0.70719816, 0.70724117, 0.70719918,
0.71675646, 0.75004659, 0.78337382, 0.75004739, 0.71675772,
0.75004786, 0.78337446, 0.75004804, 0.71675836, 0.75004825,
0.78337474, 0.75004834, 0.71675882, 0.75004847, 0.7833749 ,
0.75004855, 0.71675951, 0.75004875, 0.78337515, 0.75004887,
0.71676052, 0.75004917, 0.78337553, 0.75004932, 0.71676144,
0.7500496 , 0.78337589, 0.75004969, 0.71676167, 0.75004975,
0.78337597, 0.75004976, 0.71676168, 0.75004975, 0.78337596,
0.75004977, 0.71676169, 0.75004982, 0.78337604, 0.75004916,
0.81671751, 0.85002885, 0.88335646, 0.85002953, 0.81671814,
0.85002995, 0.8833571 , 0.85003018, 0.8167184 , 0.85003028,
0.88335726, 0.85003035, 0.81671855, 0.85003041, 0.88335736,
0.85003048, 0.81671876, 0.85003058, 0.8833575 , 0.8500307 ,
0.81671909, 0.85003088, 0.88335777, 0.85003104, 0.81671941,
0.85003122, 0.88335804, 0.85003131, 0.81671948, 0.85003133,
0.88335807, 0.85003133, 0.81671948, 0.85003132, 0.88335805,
0.85003132, 0.81671948, 0.85003132, 0.88335804, 0.85003083,
0.91669795, 0.95000946, 0.98333814, 0.95001028, 0.91669869,
0.95001047, 0.98333839, 0.95001059, 0.91669887, 0.95001062,
0.98333842, 0.95001065, 0.91669897, 0.95001067, 0.98333845,
0.95001071, 0.91669912, 0.95001074, 0.98333849, 0.95001081,
0.91669939, 0.95001087, 0.98333857, 0.95001098, 0.91669968,
0.95001106, 0.98333868, 0.95001112, 0.91669971, 0.95001111,
0.98333867, 0.9500111 , 0.91669969, 0.9500111 , 0.98333867,
0.9500111 , 0.91669964, 0.9500111 , 0.98333867, 0.95001047]
assert num.allclose(w.centroid_values, wc, rtol=2.0e-3)
def test_kinematic_operator_quantity(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#domain.set_flow_algorithm('2_0')
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.5)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
Q = Quantity(domain)
Q = 2.0
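        # Note that Q is rebound to the float 2.0 here, so the operator below
        # actually receives a plain number rather than the Quantity created above.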
# kill off the wave with viscosity
kv = Kinematic_viscosity_operator(domain, diffusivity = Q)
        # Evolve for 10 seconds, letting the viscosity operator act each step.
for t in domain.evolve(yieldstep = 1.0, finaltime = 10.0):
#domain.write_time()
#domain.print_operator_timestepping_statistics()
pass
#
w = domain.quantities['stage']
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#print 'uh'
#print uh.centroid_values
#print uh.boundary_values
#print 'w'
#print w.centroid_values
#from pprint import pprint
#pprint(w.centroid_values)
wc = num.array([
0.71624029, 0.71622927, 0.71621675, 0.71623888, 0.71624236,
0.71624536, 0.71625157, 0.71625028, 0.71625679, 0.71626609,
0.71630233, 0.71627457, 0.71627721, 0.71628666, 0.71633484,
0.71629002, 0.71628494, 0.716295 , 0.7163438 , 0.71629656,
0.71628493, 0.71629656, 0.71634379, 0.71629497, 0.71627716,
0.71628999, 0.71633481, 0.7162866 , 0.71625666, 0.71627448,
0.71630224, 0.71626596, 0.71624212, 0.7162501 , 0.7162514 ,
0.71624512, 0.71624 , 0.7162386 , 0.71621644, 0.71622896,
0.71619869, 0.71615658, 0.71609423, 0.71619602, 0.71627164,
0.71623926, 0.71625039, 0.71633719, 0.71638922, 0.71642539,
0.71652642, 0.71649892, 0.71646671, 0.71653525, 0.71670614,
0.71661869, 0.71649067, 0.71663318, 0.71682302, 0.71665878,
0.71649066, 0.71665876, 0.71682295, 0.71663309, 0.71646665,
0.71661859, 0.71670596, 0.71653511, 0.71638911, 0.71649877,
0.71652622, 0.71642523, 0.71627151, 0.716337 , 0.71625001,
0.71623888, 0.7161983 , 0.71619554, 0.71609371, 0.71615611,
0.71587901, 0.71555375, 0.71521927, 0.71573946, 0.71615663,
0.71586493, 0.7156413 , 0.71615004, 0.71653474, 0.71632223,
0.71618825, 0.7165586 , 0.7168124 , 0.71668994, 0.71661036,
0.7168446 , 0.71694587, 0.71689337, 0.7167922 , 0.71693225,
0.71694582, 0.71693224, 0.71679212, 0.71689325, 0.71681216,
0.71684437, 0.71661004, 0.71668963, 0.71653449, 0.71655826,
0.71618788, 0.71632191, 0.71615622, 0.71614967, 0.71564092,
0.71586446, 0.7158785 , 0.71573897, 0.71521879, 0.71555323,
0.71415117, 0.71304803, 0.71200401, 0.71333356, 0.71459491,
0.71350761, 0.71272705, 0.7140006 , 0.71526042, 0.71418365,
0.71337479, 0.7146592 , 0.71582149, 0.71478585, 0.71378284,
0.7150456 , 0.71605221, 0.71509271, 0.71396254, 0.71516103,
0.71605211, 0.71516102, 0.71396249, 0.71509256, 0.71582115,
0.7150454 , 0.71378271, 0.71478555, 0.71526005, 0.71465889,
0.71337454, 0.71418329, 0.71459453, 0.71400022, 0.71272682,
0.71350725, 0.71415077, 0.71333321, 0.71200389, 0.71304774,
0.70944126, 0.70705883, 0.70442227, 0.70714215, 0.70999341,
0.70722667, 0.70436187, 0.70745337, 0.71044978, 0.70748596,
0.70427781, 0.70768146, 0.71082549, 0.70772906, 0.70426793,
0.70786303, 0.71099495, 0.70788365, 0.70424722, 0.70791928,
0.71099502, 0.70791937, 0.70424774, 0.70788396, 0.71082556,
0.70786332, 0.70426849, 0.70772935, 0.71044982, 0.70768178,
0.7042786 , 0.70748637, 0.70999356, 0.70745385, 0.70436311,
0.70722738, 0.70944169, 0.70714295, 0.70442389, 0.70705981,
0.69895933, 0.69463188, 0.68921358, 0.693824 , 0.698153 ,
0.69349963, 0.68725093, 0.69221842, 0.69728195, 0.69180649,
0.68463972, 0.69053046, 0.69673179, 0.69018397, 0.68236173,
0.68940762, 0.69650961, 0.68925397, 0.68125059, 0.68902719,
0.69651034, 0.68902736, 0.6812516 , 0.68925556, 0.69673305,
0.6894096 , 0.6823656 , 0.69018707, 0.69728407, 0.69053386,
0.68464522, 0.69181074, 0.69815588, 0.69222279, 0.68725717,
0.69350432, 0.69896255, 0.69382873, 0.68922015, 0.69463687,
0.68375896, 0.6882601 , 0.69595562, 0.68766298, 0.68105558,
0.68673658, 0.69502847, 0.68542815, 0.67770965, 0.68435344,
0.69409778, 0.68310537, 0.67491515, 0.68222458, 0.69337943,
0.68140117, 0.67356609, 0.68097711, 0.69301997, 0.68071631,
0.67356716, 0.68071666, 0.69302027, 0.68097852, 0.6749196 ,
0.68140363, 0.69338045, 0.68222808, 0.6777162 , 0.68310954,
0.69409929, 0.68435822, 0.68106317, 0.68543327, 0.69503026,
0.68674199, 0.68376697, 0.68766854, 0.69595754, 0.68826575,
0.71760631, 0.75094294, 0.78427898, 0.75094168, 0.71760193,
0.75093986, 0.78427453, 0.75093415, 0.71758272, 0.7509278 ,
0.78426295, 0.75091754, 0.7175572 , 0.75090919, 0.78424795,
0.75089856, 0.71753518, 0.75089163, 0.78423642, 0.75088684,
0.71753524, 0.75088686, 0.78423643, 0.75089171, 0.7175573 ,
0.75089864, 0.78424798, 0.75090931, 0.71758285, 0.75091768,
0.78426303, 0.75092799, 0.7176021 , 0.75093438, 0.78427472,
0.75094013, 0.71760652, 0.75094199, 0.78427929, 0.75094328,
0.81761649, 0.85095268, 0.88428788, 0.85095192, 0.81761311,
0.8509508 , 0.88428574, 0.85094833, 0.81760513, 0.8509458 ,
0.88428131, 0.85094197, 0.81759506, 0.85093883, 0.88427596,
0.85093514, 0.81758753, 0.85093282, 0.88427212, 0.85093123,
0.81758749, 0.8509312 , 0.88427198, 0.85093269, 0.81759494,
0.85093494, 0.8842756 , 0.85093857, 0.81760502, 0.8509417 ,
0.88428088, 0.85094557, 0.81761314, 0.85094816, 0.88428543,
0.85095073, 0.81761667, 0.85095193, 0.88428775, 0.85095275,
0.91762366, 0.95095836, 0.98429205, 0.95095804, 0.91762217,
0.95095754, 0.98429102, 0.95095658, 0.91761918, 0.95095558,
0.98428903, 0.95095416, 0.91761561, 0.95095297, 0.98428667,
0.95095164, 0.91761304, 0.95095078, 0.98428497, 0.95095015,
0.91761286, 0.95095007, 0.98428475, 0.95095045, 0.9176151 ,
0.95095115, 0.98428605, 0.95095231, 0.91761853, 0.95095342,
0.9842882 , 0.9509548 , 0.91762161, 0.95095583, 0.98429026,
0.95095688, 0.91762327, 0.95095746, 0.98429146, 0.95095784])
#print max(w.centroid_values- wc)
assert num.allclose(w.centroid_values, wc, rtol=0.05)
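    # As above, but with the diffusivity passed as a plain number.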
def test_kinematic_operator_number(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#domain.set_flow_algorithm('2_0')
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.5)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
# kill off the wave with viscosity
kv = Kinematic_viscosity_operator(domain, diffusivity=2.0)
        # Evolve for 10 seconds, letting the viscosity operator act each step.
for t in domain.evolve(yieldstep = 1.0, finaltime = 10.0):
#domain.write_time()
#domain.print_operator_timestepping_statistics()
pass
#
w = domain.quantities['stage']
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#print 'uh'
#print uh.centroid_values
#print uh.boundary_values
#print 'w'
#print w.centroid_values
#from pprint import pprint
#pprint(w.centroid_values)
wc = num.array([
0.71624029, 0.71622927, 0.71621675, 0.71623888, 0.71624236,
0.71624536, 0.71625157, 0.71625028, 0.71625679, 0.71626609,
0.71630233, 0.71627457, 0.71627721, 0.71628666, 0.71633484,
0.71629002, 0.71628494, 0.716295 , 0.7163438 , 0.71629656,
0.71628493, 0.71629656, 0.71634379, 0.71629497, 0.71627716,
0.71628999, 0.71633481, 0.7162866 , 0.71625666, 0.71627448,
0.71630224, 0.71626596, 0.71624212, 0.7162501 , 0.7162514 ,
0.71624512, 0.71624 , 0.7162386 , 0.71621644, 0.71622896,
0.71619869, 0.71615658, 0.71609423, 0.71619602, 0.71627164,
0.71623926, 0.71625039, 0.71633719, 0.71638922, 0.71642539,
0.71652642, 0.71649892, 0.71646671, 0.71653525, 0.71670614,
0.71661869, 0.71649067, 0.71663318, 0.71682302, 0.71665878,
0.71649066, 0.71665876, 0.71682295, 0.71663309, 0.71646665,
0.71661859, 0.71670596, 0.71653511, 0.71638911, 0.71649877,
0.71652622, 0.71642523, 0.71627151, 0.716337 , 0.71625001,
0.71623888, 0.7161983 , 0.71619554, 0.71609371, 0.71615611,
0.71587901, 0.71555375, 0.71521927, 0.71573946, 0.71615663,
0.71586493, 0.7156413 , 0.71615004, 0.71653474, 0.71632223,
0.71618825, 0.7165586 , 0.7168124 , 0.71668994, 0.71661036,
0.7168446 , 0.71694587, 0.71689337, 0.7167922 , 0.71693225,
0.71694582, 0.71693224, 0.71679212, 0.71689325, 0.71681216,
0.71684437, 0.71661004, 0.71668963, 0.71653449, 0.71655826,
0.71618788, 0.71632191, 0.71615622, 0.71614967, 0.71564092,
0.71586446, 0.7158785 , 0.71573897, 0.71521879, 0.71555323,
0.71415117, 0.71304803, 0.71200401, 0.71333356, 0.71459491,
0.71350761, 0.71272705, 0.7140006 , 0.71526042, 0.71418365,
0.71337479, 0.7146592 , 0.71582149, 0.71478585, 0.71378284,
0.7150456 , 0.71605221, 0.71509271, 0.71396254, 0.71516103,
0.71605211, 0.71516102, 0.71396249, 0.71509256, 0.71582115,
0.7150454 , 0.71378271, 0.71478555, 0.71526005, 0.71465889,
0.71337454, 0.71418329, 0.71459453, 0.71400022, 0.71272682,
0.71350725, 0.71415077, 0.71333321, 0.71200389, 0.71304774,
0.70944126, 0.70705883, 0.70442227, 0.70714215, 0.70999341,
0.70722667, 0.70436187, 0.70745337, 0.71044978, 0.70748596,
0.70427781, 0.70768146, 0.71082549, 0.70772906, 0.70426793,
0.70786303, 0.71099495, 0.70788365, 0.70424722, 0.70791928,
0.71099502, 0.70791937, 0.70424774, 0.70788396, 0.71082556,
0.70786332, 0.70426849, 0.70772935, 0.71044982, 0.70768178,
0.7042786 , 0.70748637, 0.70999356, 0.70745385, 0.70436311,
0.70722738, 0.70944169, 0.70714295, 0.70442389, 0.70705981,
0.69895933, 0.69463188, 0.68921358, 0.693824 , 0.698153 ,
0.69349963, 0.68725093, 0.69221842, 0.69728195, 0.69180649,
0.68463972, 0.69053046, 0.69673179, 0.69018397, 0.68236173,
0.68940762, 0.69650961, 0.68925397, 0.68125059, 0.68902719,
0.69651034, 0.68902736, 0.6812516 , 0.68925556, 0.69673305,
0.6894096 , 0.6823656 , 0.69018707, 0.69728407, 0.69053386,
0.68464522, 0.69181074, 0.69815588, 0.69222279, 0.68725717,
0.69350432, 0.69896255, 0.69382873, 0.68922015, 0.69463687,
0.68375896, 0.6882601 , 0.69595562, 0.68766298, 0.68105558,
0.68673658, 0.69502847, 0.68542815, 0.67770965, 0.68435344,
0.69409778, 0.68310537, 0.67491515, 0.68222458, 0.69337943,
0.68140117, 0.67356609, 0.68097711, 0.69301997, 0.68071631,
0.67356716, 0.68071666, 0.69302027, 0.68097852, 0.6749196 ,
0.68140363, 0.69338045, 0.68222808, 0.6777162 , 0.68310954,
0.69409929, 0.68435822, 0.68106317, 0.68543327, 0.69503026,
0.68674199, 0.68376697, 0.68766854, 0.69595754, 0.68826575,
0.71760631, 0.75094294, 0.78427898, 0.75094168, 0.71760193,
0.75093986, 0.78427453, 0.75093415, 0.71758272, 0.7509278 ,
0.78426295, 0.75091754, 0.7175572 , 0.75090919, 0.78424795,
0.75089856, 0.71753518, 0.75089163, 0.78423642, 0.75088684,
0.71753524, 0.75088686, 0.78423643, 0.75089171, 0.7175573 ,
0.75089864, 0.78424798, 0.75090931, 0.71758285, 0.75091768,
0.78426303, 0.75092799, 0.7176021 , 0.75093438, 0.78427472,
0.75094013, 0.71760652, 0.75094199, 0.78427929, 0.75094328,
0.81761649, 0.85095268, 0.88428788, 0.85095192, 0.81761311,
0.8509508 , 0.88428574, 0.85094833, 0.81760513, 0.8509458 ,
0.88428131, 0.85094197, 0.81759506, 0.85093883, 0.88427596,
0.85093514, 0.81758753, 0.85093282, 0.88427212, 0.85093123,
0.81758749, 0.8509312 , 0.88427198, 0.85093269, 0.81759494,
0.85093494, 0.8842756 , 0.85093857, 0.81760502, 0.8509417 ,
0.88428088, 0.85094557, 0.81761314, 0.85094816, 0.88428543,
0.85095073, 0.81761667, 0.85095193, 0.88428775, 0.85095275,
0.91762366, 0.95095836, 0.98429205, 0.95095804, 0.91762217,
0.95095754, 0.98429102, 0.95095658, 0.91761918, 0.95095558,
0.98428903, 0.95095416, 0.91761561, 0.95095297, 0.98428667,
0.95095164, 0.91761304, 0.95095078, 0.98428497, 0.95095015,
0.91761286, 0.95095007, 0.98428475, 0.95095045, 0.9176151 ,
0.95095115, 0.98428605, 0.95095231, 0.91761853, 0.95095342,
0.9842882 , 0.9509548 , 0.91762161, 0.95095583, 0.98429026,
0.95095688, 0.91762327, 0.95095746, 0.98429146, 0.95095784])
#print w.centroid_values - wc
#print max(w.centroid_values - wc)
assert num.allclose(w.centroid_values, wc, rtol=0.05)
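    # As above, but with the diffusivity given by the name of a quantity ('height').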
def test_kinematic_operator_string(self):
from anuga import rectangular_cross_domain
from anuga import Reflective_boundary
m1 = 10
n1 = 10
domain = rectangular_cross_domain(m1,n1)
#domain.set_flow_algorithm('2_0')
#
domain.set_quantity('elevation', expression='x')
domain.set_quantity('friction', 0.03)
domain.set_quantity('stage',expression='elevation + 2*(x-0.5)')
domain.set_quantity('xmomentum', expression='2*x+3*y')
domain.set_quantity('ymomentum', expression='5*x+7*y')
B = Reflective_boundary(domain)
domain.set_boundary( {'left': B, 'right': B, 'top': B, 'bottom': B})
# kill off the wave with viscosity
kv = Kinematic_viscosity_operator(domain, diffusivity = 'height')
        # Evolve for 10 seconds, letting the viscosity operator act each step.
for t in domain.evolve(yieldstep = 1.0, finaltime = 10.0):
#domain.write_time()
#domain.print_operator_timestepping_statistics()
pass
#
w = domain.quantities['stage']
uh = domain.quantities['xmomentum']
vh = domain.quantities['ymomentum']
#print 'uh'
#print uh.centroid_values
#print uh.boundary_values
#pprint(w.centroid_values)
wc = [ 0.70708499, 0.70708621, 0.70708739, 0.70708592, 0.70708113,
0.70708436, 0.70708375, 0.70708134, 0.70707501, 0.70707913,
0.70707721, 0.70707394, 0.70706646, 0.70707106, 0.7070687 ,
0.70706476, 0.70705719, 0.70706151, 0.70705891, 0.70705455,
0.70704706, 0.70705106, 0.70704916, 0.70704428, 0.70703703,
0.70704066, 0.70703903, 0.7070339 , 0.70702799, 0.70703089,
0.7070303 , 0.70702564, 0.70702151, 0.70702355, 0.70702397,
0.70702031, 0.70701839, 0.70701905, 0.70702107, 0.7070194 ,
0.70709027, 0.70709487, 0.70709941, 0.70709437, 0.70708654,
0.70709293, 0.70709501, 0.70708947, 0.70707992, 0.70708703,
0.70708836, 0.70708163, 0.70707074, 0.70707825, 0.70707864,
0.70707138, 0.70706083, 0.70706806, 0.70706835, 0.70706081,
0.70705083, 0.707057 , 0.70705712, 0.70704999, 0.70704048,
0.70704607, 0.70704625, 0.70703935, 0.70703156, 0.70703616,
0.70703718, 0.70703054, 0.70702507, 0.70702802, 0.70703008,
0.70702449, 0.70702214, 0.70702318, 0.7070272 , 0.7070235 ,
0.70710676, 0.70711548, 0.70712417, 0.70711463, 0.70710228,
0.70711276, 0.70711859, 0.7071085 , 0.70709499, 0.70710565,
0.70710958, 0.70709958, 0.70708503, 0.70709598, 0.70709863,
0.70708794, 0.70707407, 0.70708442, 0.70708659, 0.70707544,
0.70706157, 0.70707155, 0.70707373, 0.70706302, 0.70705048,
0.70705952, 0.70706168, 0.70705157, 0.70704142, 0.70704862,
0.70705162, 0.70704216, 0.70703415, 0.70703963, 0.70704439,
0.70703587, 0.70703088, 0.70703425, 0.70704138, 0.70703513,
0.70713645, 0.70714954, 0.70716232, 0.70714793, 0.70713056,
0.70714553, 0.70715518, 0.70713986, 0.70712065, 0.70713624,
0.70714367, 0.70712812, 0.70710867, 0.70712372, 0.70712926,
0.70711412, 0.70709613, 0.70710986, 0.70711457, 0.70709938,
0.70708183, 0.70709482, 0.70709942, 0.70708498, 0.70706838,
0.70708094, 0.70708603, 0.70707212, 0.7070582 , 0.7070688 ,
0.70707538, 0.70706281, 0.70705122, 0.7070608 , 0.7070685 ,
0.70705631, 0.70704792, 0.70705483, 0.70706486, 0.70705531,
0.70718 , 0.70719823, 0.70721617, 0.70719615, 0.7071729 ,
0.70719352, 0.70720804, 0.70718662, 0.70716074, 0.70718253,
0.70719468, 0.70717278, 0.70714572, 0.70716734, 0.70717659,
0.70715531, 0.7071299 , 0.70714964, 0.70715813, 0.70713672,
0.70711249, 0.70713105, 0.70713951, 0.70711991, 0.70709744,
0.70711495, 0.70712439, 0.70710586, 0.70708626, 0.70710178,
0.70711248, 0.70709512, 0.70707906, 0.70709283, 0.70710466,
0.70708817, 0.70707441, 0.70708663, 0.70710054, 0.70708614,
0.70724108, 0.70726706, 0.70729195, 0.70726351, 0.70723292,
0.70726031, 0.7072815 , 0.70725218, 0.70721887, 0.7072471 ,
0.70726445, 0.70723489, 0.70719979, 0.70722813, 0.70724321,
0.70721483, 0.70718005, 0.7072073 , 0.70721893, 0.70719146,
0.70715956, 0.7071841 , 0.70719678, 0.70717142, 0.70714207,
0.7071646 , 0.70717743, 0.70715424, 0.70712796, 0.70714878,
0.70716287, 0.70714105, 0.70711978, 0.70713804, 0.70715526,
0.70713402, 0.70711483, 0.7071321 , 0.70715154, 0.70713215,
0.70732504, 0.7073667 , 0.70742884, 0.70736225, 0.70731461,
0.70735815, 0.70741174, 0.70734535, 0.70729534, 0.7073381 ,
0.70738346, 0.70732086, 0.70727156, 0.70731057, 0.70734823,
0.7072925 , 0.70724546, 0.70728082, 0.70731384, 0.7072646 ,
0.7072214 , 0.70725274, 0.70728508, 0.7072405 , 0.70720083,
0.70722955, 0.70726458, 0.70722164, 0.70718575, 0.70721333,
0.70725167, 0.70720856, 0.70717715, 0.70720336, 0.70724394,
0.7072014 , 0.70717247, 0.70719816, 0.70724117, 0.70719918,
0.71675646, 0.75004659, 0.78337382, 0.75004739, 0.71675772,
0.75004786, 0.78337446, 0.75004804, 0.71675836, 0.75004825,
0.78337474, 0.75004834, 0.71675882, 0.75004847, 0.7833749 ,
0.75004855, 0.71675951, 0.75004875, 0.78337515, 0.75004887,
0.71676052, 0.75004917, 0.78337553, 0.75004932, 0.71676144,
0.7500496 , 0.78337589, 0.75004969, 0.71676167, 0.75004975,
0.78337597, 0.75004976, 0.71676168, 0.75004975, 0.78337596,
0.75004977, 0.71676169, 0.75004982, 0.78337604, 0.75004916,
0.81671751, 0.85002885, 0.88335646, 0.85002953, 0.81671814,
0.85002995, 0.8833571 , 0.85003018, 0.8167184 , 0.85003028,
0.88335726, 0.85003035, 0.81671855, 0.85003041, 0.88335736,
0.85003048, 0.81671876, 0.85003058, 0.8833575 , 0.8500307 ,
0.81671909, 0.85003088, 0.88335777, 0.85003104, 0.81671941,
0.85003122, 0.88335804, 0.85003131, 0.81671948, 0.85003133,
0.88335807, 0.85003133, 0.81671948, 0.85003132, 0.88335805,
0.85003132, 0.81671948, 0.85003132, 0.88335804, 0.85003083,
0.91669795, 0.95000946, 0.98333814, 0.95001028, 0.91669869,
0.95001047, 0.98333839, 0.95001059, 0.91669887, 0.95001062,
0.98333842, 0.95001065, 0.91669897, 0.95001067, 0.98333845,
0.95001071, 0.91669912, 0.95001074, 0.98333849, 0.95001081,
0.91669939, 0.95001087, 0.98333857, 0.95001098, 0.91669968,
0.95001106, 0.98333868, 0.95001112, 0.91669971, 0.95001111,
0.98333867, 0.9500111 , 0.91669969, 0.9500111 , 0.98333867,
0.9500111 , 0.91669964, 0.9500111 , 0.98333867, 0.95001047]
assert num.allclose(w.centroid_values, wc, rtol=2.0e-3)
################################################################################
if __name__ == "__main__":
    suite = unittest.makeSuite(Test_kinematic_viscosity, 'test_')
runner = unittest.TextTestRunner()
runner.run(suite)
| 41.007777 | 151 | 0.602497 | 8,752 | 63,275 | 4.262112 | 0.175846 | 0.009651 | 0.010857 | 0.012332 | 0.813844 | 0.80111 | 0.784971 | 0.770468 | 0.758083 | 0.752346 | 0 | 0.427852 | 0.262789 | 63,275 | 1,542 | 152 | 41.034371 | 0.371814 | 0.056831 | 0 | 0.726453 | 0 | 0 | 0.020432 | 0 | 0 | 0 | 0 | 0 | 0.057114 | 1 | 0.028056 | false | 0.01002 | 0.03006 | 0 | 0.061122 | 0.013026 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
6041134e5a87bae94a5f2dd069973289ca64b33b | 310 | py | Python | benchmarks/SimResults/combinations_splash_ml_fulltrained/old/cmp_choleskybarnesradiosityocean.ncont/sim.scripts.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_splash_ml_fulltrained/old/cmp_choleskybarnesradiosityocean.ncont/sim.scripts.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_splash_ml_fulltrained/old/cmp_choleskybarnesradiosityocean.ncont/sim.scripts.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | import sys
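# Run the Sniper mytrace script on stats.out and then the fully trained
# MLScheduler script, setting sys.argv before each execfile call.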
sys.argv = [ "/scratch/nas/1/dn/sniper-6.0/scripts/mytrace.py", "stats.out" ]
execfile("/scratch/nas/1/dn/sniper-6.0/scripts/mytrace.py")
sys.argv = [ "/scratch/nas/1/dn/sniper-6.0/scripts/MLScheduler_fulltrained.py", "" ]
execfile("/scratch/nas/1/dn/sniper-6.0/scripts/MLScheduler_fulltrained.py")
| 51.666667 | 84 | 0.732258 | 52 | 310 | 4.326923 | 0.346154 | 0.177778 | 0.195556 | 0.231111 | 0.924444 | 0.924444 | 0.924444 | 0.924444 | 0.924444 | 0.853333 | 0 | 0.040816 | 0.051613 | 310 | 5 | 85 | 62 | 0.72449 | 0 | 0 | 0 | 0 | 0 | 0.73871 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
6042828fd817b022508d0bf545fc1c249cc4adbd | 180 | py | Python | s_store_api/managements.py | Saknowman/django-s-store-api | a14e000ea32cc527ad2c822f09f812194a5c8a47 | [
"MIT"
] | 1 | 2019-12-24T03:50:04.000Z | 2019-12-24T03:50:04.000Z | s_store_api/managements.py | Saknowman/django-s-store-api | a14e000ea32cc527ad2c822f09f812194a5c8a47 | [
"MIT"
] | 6 | 2020-06-05T20:16:37.000Z | 2021-09-22T18:18:13.000Z | s_store_api/managements.py | Saknowman/django-s-store-api | a14e000ea32cc527ad2c822f09f812194a5c8a47 | [
"MIT"
] | null | null | null | from s_store_api.utils.store import get_management_store_group
def set_default_groups():
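    # Look up the management store group once, ignoring any failure.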
try:
get_management_store_group()
except Exception:
pass
| 20 | 63 | 0.7 | 23 | 180 | 5.043478 | 0.73913 | 0.224138 | 0.310345 | 0.396552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 180 | 8 | 64 | 22.5 | 0.859259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0.166667 | 0.166667 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
6043f89edf44702160fb2b05ec7d174ae2d7d53d | 5,989 | py | Python | example/test_full/tests/test04_fk_fkback_multiple.py | hotline-emu/django-computedfields | 46275d7d1d18d58aa6b4a8f19f9588664072ce02 | [
"MIT"
] | null | null | null | example/test_full/tests/test04_fk_fkback_multiple.py | hotline-emu/django-computedfields | 46275d7d1d18d58aa6b4a8f19f9588664072ce02 | [
"MIT"
] | null | null | null | example/test_full/tests/test04_fk_fkback_multiple.py | hotline-emu/django-computedfields | 46275d7d1d18d58aa6b4a8f19f9588664072ce02 | [
"MIT"
] | null | null | null | from .base import GenericModelTestBase, MODELS
class MultipleDependenciesOne(GenericModelTestBase):
def setUp(self):
self.setDeps({
# fk + fk + fk_back + fk_back
'C': {'depends': [('self', ['name']), ('f_cb.f_ba.ag_f.gd_f', ['name']), ('cd_f.de_f', ['name'])],
'func': lambda self: self.name + ''.join(
MODELS['D'].objects.filter(f_dg__in=MODELS['G'].objects.filter(
f_ga=self.f_cb.f_ba)).values_list('name', flat=True).order_by('pk')) + ''.join(
MODELS['E'].objects.filter(f_ed__in=self.cd_f.all()).values_list('name', flat=True).order_by('pk')
)},
})
self.a = self.models.A(name='a')
self.a.save()
self.b = self.models.B(name='b', f_ba=self.a)
self.b.save()
self.c = self.models.C(name='c', f_cb=self.b)
self.c.save()
self.d = self.models.D(name='d', f_dc=self.c)
self.d.save()
self.e = self.models.E(name='e', f_ed=self.d)
self.e.save()
self.f = self.models.F(name='f', f_fe=self.e)
self.f.save()
self.g = self.models.G(name='g', f_gf=self.f, f_ga=self.a)
self.g.save()
self.d.f_dg = self.g
self.d.save()
def tearDown(self):
self.resetDeps()
def test_C_insert(self):
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cde')
def test_C_update(self):
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cde')
# change D
self.d.name = 'D'
self.d.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDe')
# add new D
new_d = self.models.D(name='d2', f_dg=self.g)
new_d.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2e')
# change E
self.e.name = 'E'
self.e.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2E')
# add new E
new_e = self.models.E(name="e2", f_ed=self.d)
new_e.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2Ee2')
def test_C_update_deletes(self):
# change D
self.d.name = 'D'
self.d.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDe')
# add new D
new_d = self.models.D(name='d2', f_dg=self.g)
new_d.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2e')
# change E
self.e.name = 'E'
self.e.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2E')
# add new E
new_e = self.models.E(name="e2", f_ed=self.d)
new_e.save()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDd2Ee2')
# delete new_d
new_d.delete()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'cDEe2')
# delete d - should remove D, E and e2
self.d.delete()
self.c.refresh_from_db()
self.assertEqual(self.c.comp, 'c')
class MultipleDependenciesTwo(GenericModelTestBase):
def setUp(self):
self.setDeps({
# fk_back + fk_back + fk_back + fk + fk + fk
'D': {'depends': [['self', ['name']], ['de_f.ef_f.fg_f.f_ga.f_ac.f_cb', ['name']], ['f_dc.f_cb', ['name']]],
'func': lambda self: self.name + ''.join(filter(bool, MODELS['G'].objects.filter(
f_gf__in=MODELS['F'].objects.filter(
f_fe__in=self.de_f.all())).values_list(
'f_ga__f_ac__f_cb__name', flat=True))) + self.f_dc.f_cb.name}
})
self.a = self.models.A(name='a')
self.a.save()
self.b = self.models.B(name='b', f_ba=self.a)
self.b.save()
self.c = self.models.C(name='c', f_cb=self.b)
self.c.save()
self.a.f_ac = self.c
self.a.save()
self.d = self.models.D(name='d', f_dc=self.c)
self.d.save()
self.e = self.models.E(name='e', f_ed=self.d)
self.e.save()
self.f = self.models.F(name='f', f_fe=self.e)
self.f.save()
self.g = self.models.G(name='g', f_gf=self.f, f_ga=self.a)
self.g.save()
def tearDown(self):
self.resetDeps()
def test_D_insert(self):
self.d.refresh_from_db()
self.assertEqual(self.d.comp, 'dbb')
def test_D_update(self):
self.d.refresh_from_db()
self.assertEqual(self.d.comp, 'dbb')
# change B --> should change both deps
self.b.name = 'B'
self.b.save()
self.d.refresh_from_db()
self.assertEqual(self.d.comp, 'dBB')
# add new A, B and C, change f_ga
new_b = self.models.B(name='b2')
new_b.save()
new_c = self.models.C(name='c2', f_cb=new_b)
new_c.save()
new_a = self.models.A(name='A', f_ac=new_c)
new_a.save()
self.g.f_ga = new_a
self.g.save()
self.d.refresh_from_db()
# this should only change the "first" B dep
self.assertEqual(self.d.comp, 'db2B')
def test_D_update_deletes(self):
# change B --> should change both deps
self.b.name = 'B'
self.b.save()
self.d.refresh_from_db()
self.assertEqual(self.d.comp, 'dBB')
# add new A, B and C, change f_ga
new_b = self.models.B(name='b2')
new_b.save()
new_c = self.models.C(name='c2', f_cb=new_b)
new_c.save()
new_a = self.models.A(name='A', f_ac=new_c)
new_a.save()
self.g.f_ga = new_a
self.g.save()
self.d.refresh_from_db()
# this should only change the "first" B dep
self.assertEqual(self.d.comp, 'db2B')
# delete new_b - should remove b2
new_b.delete()
self.d.refresh_from_db()
self.assertEqual(self.d.comp, 'dB')
| 35.023392 | 120 | 0.540491 | 920 | 5,989 | 3.342391 | 0.095652 | 0.050407 | 0.080325 | 0.093984 | 0.828293 | 0.803577 | 0.801951 | 0.744715 | 0.700488 | 0.700488 | 0 | 0.004973 | 0.294874 | 5,989 | 170 | 121 | 35.229412 | 0.723183 | 0.075138 | 0 | 0.798561 | 0 | 0 | 0.049465 | 0.009241 | 0 | 0 | 0 | 0 | 0.136691 | 1 | 0.071942 | false | 0 | 0.007194 | 0 | 0.093525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6083bf1470f63882392aaa735782593d99d55260 | 45 | py | Python | simulation/pedestrians/__init__.py | salinsiim/petssa-simulation | 8f0f128d462831f86664bb8d246f2c7b659a0b8d | [
"MIT"
] | null | null | null | simulation/pedestrians/__init__.py | salinsiim/petssa-simulation | 8f0f128d462831f86664bb8d246f2c7b659a0b8d | [
"MIT"
] | null | null | null | simulation/pedestrians/__init__.py | salinsiim/petssa-simulation | 8f0f128d462831f86664bb8d246f2c7b659a0b8d | [
"MIT"
] | null | null | null | from pedestrians.pedestrians import generate
| 22.5 | 44 | 0.888889 | 5 | 45 | 8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
608a8e5a1bbb75b2ab07f65fea10fd7ecee30067 | 97,279 | py | Python | website/views.py | alexjosesilva/Desafio_Estacio_2019 | 037cb8643e4c5c0e7a8c3a440aaf077ace4b763f | [
"MIT"
] | null | null | null | website/views.py | alexjosesilva/Desafio_Estacio_2019 | 037cb8643e4c5c0e7a8c3a440aaf077ace4b763f | [
"MIT"
] | null | null | null | website/views.py | alexjosesilva/Desafio_Estacio_2019 | 037cb8643e4c5c0e7a8c3a440aaf077ace4b763f | [
"MIT"
] | null | null | null | from django.urls import reverse_lazy, reverse
from django.views.generic import TemplateView, ListView, UpdateView, CreateView, DeleteView
from authweb.models import Usuario, Foo, TipoLaboratorio, Situacao, Recurso, Reserva,\
Curso
from django.contrib.auth.models import User, Permission
#FORM
from website.forms import InsereFooForm, InsereUsuarioForm, LoginUsuarioForm,\
InsereTipoLaboratorioForm,InsereSituacaoForm, InsereLaboratorioForm,InsereReservaLaboratorioForm,\
InsereCursoForm, InsereProjetorForm, InsereReservaProjetorForm,\
InsereReservaLaboratorioUsuariosForm, InsereReservaProjetorUsuariosForm
from website.forms import AuthenticationForm
#SHORTCUTS
from django.shortcuts import redirect, render
from django.shortcuts import reverse
#
from django.contrib.auth.mixins import LoginRequiredMixin,\
PermissionRequiredMixin
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.views import LogoutView, LoginView
from django.core.exceptions import ValidationError
from django.http import HttpResponseRedirect
from django.conf import settings
from datetime import datetime
from datetime import timedelta
from django.contrib import messages
from django.db.models import Q
#from django.views.generic.edit import FormView
#class LogView(FormView):
#LOGIN E LOGOUT https://github.com/django/django/blob/master/django/contrib/auth/views.py#L38
class IndexTemplateView(TemplateView):
template_name = "website/index.html"
class AgendaTemplateView(TemplateView):
template_name = "website/fullcalendar/agenda.html"
class AgendaRecursoTemplateView(ListView):
template_name = "website/fullcalendar/recurso.html"
model = Recurso
context_object_name = "recursos"
def get_context_data(self, **kwargs):
context = super(AgendaRecursoTemplateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['reservas'] = Reserva.objetos.all()
# And so on for more models
return context
class CustomLogoutView(LogoutView):
#template_name = "website/account/logout.html"
redirect_field_name = 'redirect_to'
success_url = 'website:index'
    def get(self, request):
        print("****************************************************")
        print("LOGOUT VIEW")
        print("****************************************************")
        if request.user.is_authenticated:
            print("----------------------------------------------------")
            print("User IS authenticated...")
            logout(request)
        # Redirect anonymous users as well instead of falling through with no response.
        return redirect(self.success_url)
class DashBoardView(LoginRequiredMixin, ListView):
template_name = "website/account/dashboard.html"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
"""
def get(self,request):
print("****************************************************")
print("DASHBOARD VIEW")
print("****************************************************")
if not request.user.is_authenticated:
print("----------------------------------------------------")
print("User NOT is authenticated...")
else:
print("----------------------------------------------------")
print("User IS authenticated...")
return render(request, self.template_name)
"""
    def get(self, request):
        print("****************************************************")
        print("DASHBOARD VIEW")
        print("****************************************************")
        if not request.user.is_authenticated:
            print("----------------------------------------------------")
            print("User NOT is authenticated...")
            # login_url is a URL name, so redirect to it instead of rendering it.
            return redirect(self.login_url)
        print("----------------------------------------------------")
        print("User IS authenticated...")
        return render(request, self.template_name)
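# --- Illustrative sketch (hypothetical, not part of the original code) ---
# LoginRequiredMixin already redirects anonymous users to login_url, so the
# manual is_authenticated check in DashBoardView.get() above is redundant.
# Since the dashboard only renders a template, a minimal equivalent could be:
class DashBoardViewSketch(LoginRequiredMixin, TemplateView):
    template_name = "website/account/dashboard.html"
    login_url = 'website:login'
    redirect_field_name = 'redirect_to'
# --- end sketch ---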
class CustomLoginView(LoginView):
template_name = "website/account/login.html"
authentication_form = LoginUsuarioForm
model = Usuario
success_url = 'website/account/dashboard.html'
def post(self, request):
print("****************************************************")
print("POST LOGIN VIEW")
print("****************************************************")
username = request.POST['username']
print("----------------------------------------------------")
print(username)
password = request.POST['password']
print("----------------------------------------------------")
print(password)
user = authenticate(request, username=username, password=password)
if user is not None:
print("-------------------------------------------------")
print("User Already Authenticated...")
print("-------------------------------------------------")
print("User Login...")
login(request, user)
return redirect("/account/dashboard")
else:
print("-------------------------------------------------")
print("User NOT Already Authenticated...")
user = Usuario.objetos.filter(username=username,password=password).first()
if user is not None:
print("-------------------------------------------------")
print("User Found...")
print("-------------------------------------------------")
print("ID USER =" + str(user.pk))
login(request, user)
return redirect("/account/dashboard")
#return render(request, reverse("website:login") )
#return HttpResponseRedirect(reverse(self.success_url))
#return redirect("/account/dashboard")
else:
print("User Not Found...")
return render(request, self.template_name, {'form': self.authentication_form})
return render(request, self.template_name, {'form': self.authentication_form})
# No backend authenticated the credentials
#email = form.cleaned_data['email']
#password = form.cleaned_data['password']
#username = form.cleaned_data['username']
"""user = authenticate(self.request, username=username, password=password)
if user is not None: #return redirect('/foos')
login(self.request,user)
else:
return render(self.request, self.template_name, { 'form': form })
"""
#else:
#def form_valid(self, form):
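# --- Illustrative sketch (hypothetical, not part of the original code) ---
# The fallback branch in CustomLoginView.post() compares the submitted
# password directly against the database, which only matches if passwords
# were stored unhashed. With Django's auth stack (set_password()/create_user())
# authenticate() alone is enough; a minimal sketch of the usual pattern:
def _login_user(request, username, password):
    # Returns True when the credentials check out and the session was started.
    user = authenticate(request, username=username, password=password)
    if user is None:
        return False
    login(request, user)
    return True
# --- end sketch ---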
class FirstTimeView(CreateView):
template_name = "website/account/first_time.html"
model = Usuario
form_class = InsereUsuarioForm
success_url = reverse_lazy("website:lista_foos")
def post(self, request):
print("****************************************************")
print("FIRST TIME VIEW")
print("****************************************************")
username = request.POST['username']
print("----------------------------------------------------")
print("USERNAME =" +str(username))
password = request.POST['password']
print("----------------------------------------------------")
print("PASSWORD = " + str(password))
print("-------------------------------------------------")
email = request.POST['email']
print("EMAIL = " + str(email))
print("-------------------------------------------------")
categoria = request.POST['categoria']
print("CATEGORIA = " + str(categoria))
print("-------------------------------------------------")
nome = request.POST['nome']
print("NOME = " + str(nome))
"""
form = InsereUsuarioForm(request.POST)
if form.is_valid():
user = form.save()
print(user.pk)
return redirect('/account/login')
"""
usuario = Usuario.objetos.filter(username=username, matricula=username, email=email,password=password, categoria=categoria).first()
if usuario is None:
print("-------------------------------------------------")
print("USER DOESN'T EXISTS!")
#https://stackoverflow.com/questions/20361235/django-set-user-permissions-when-user-is-automatically-created#answer-36018316
usuario = None
if categoria == "professor":
usuario = Usuario.objetos.create(username=username, matricula=username, email=email,password=password, categoria=categoria, nome=nome, first_name=nome )
permission = Permission.objects.get(codename='view_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='view_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_reserva')
usuario.user_permissions.add(permission)
pass
elif categoria == "laboratorista":
usuario = Usuario.objetos.create(username=username, matricula=username, email=email,password=password, categoria=categoria,nome = nome, first_name=nome,is_staff =1 )
permission = Permission.objects.get(codename='view_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='delete_usuario')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='view_tipolaboratorio')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_tipolaboratorio')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_tipolaboratorio')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='delete_tipolaboratorio')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='view_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='delete_reserva')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='view_recurso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_recurso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_recurso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='delete_recurso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='view_curso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='change_curso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='add_curso')
usuario.user_permissions.add(permission)
permission = Permission.objects.get(codename='delete_curso')
usuario.user_permissions.add(permission)
print("-------------------------------------------------")
print("ID USER CREATED =" + str(usuario.pk))
return redirect("/account/login")
else:
print("USER ALDEADY EXISTS! ")
return render(self.request, self.template_name, { 'form': self.form_class })
#User.objects.create(email=email,password=password)
return render(self.request, self.template_name, { 'form': self.form_class })
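# --- Illustrative sketch (hypothetical, not part of the original code) ---
# The long per-user permission lists in FirstTimeView.post() could be managed
# once per categoria through a Group and reused for every new account; a
# minimal sketch for the "professor" case, using the codenames listed above:
from django.contrib.auth.models import Group

def _grant_professor_permissions(usuario):
    group, _ = Group.objects.get_or_create(name='professor')
    codenames = ['view_usuario', 'change_usuario',
                 'view_reserva', 'add_reserva', 'change_reserva']
    group.permissions.set(Permission.objects.filter(codename__in=codenames))
    usuario.groups.add(group)
# --- end sketch ---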
class UsuarioListView(PermissionRequiredMixin,LoginRequiredMixin,ListView):
permission_required = "authweb.view_usuario"
template_name = "website/usuario/lista.html"
model = Usuario
#/context_object_name = "usuarios"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(UsuarioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['usuarios'] = Usuario.objetos.filter(is_superuser=False)
# And so on for more models
return context
class UsuarioListViewUsuario(PermissionRequiredMixin,LoginRequiredMixin,ListView):
permission_required = "authweb.view_usuario"
template_name = "website/usuario/lista2.html"
model = Usuario
#context_object_name = "usuarios"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(UsuarioListViewUsuario, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['usuarios'] = Usuario.objetos.filter(id=self.request.user.id)
# And so on for more models
return context
class UsuarioCreateView(PermissionRequiredMixin,LoginRequiredMixin, CreateView):
permission_required = "authweb.add_usuario"
template_name = "website/usuario/cria.html"
model = Usuario
form_class = InsereUsuarioForm
success_url = reverse_lazy("website:lista_usuarios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def form_valid(self, form):
print("****************************************************")
#print(form.cleaned_data['origem'])
print("****************************************************")
form.save()
print("****************************************************")
return super().form_valid(form)
class UsuarioUpdateView(PermissionRequiredMixin, LoginRequiredMixin, UpdateView):
permission_required = "authweb.change_usuario"
template_name = "website/usuario/atualiza.html"
model = Usuario
fields = '__all__'
context_object_name = 'usuario'
success_url = reverse_lazy("website:lista_usuarios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class UsuarioUpdateViewUsuario(PermissionRequiredMixin, LoginRequiredMixin, UpdateView):
permission_required = "authweb.change_usuario"
template_name = "website/usuario/atualiza2.html"
model = Usuario
fields = '__all__'
context_object_name = 'usuario'
success_url = reverse_lazy("website:lista_usuario")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class UsuarioDeleteView(PermissionRequiredMixin, LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_usuario"
template_name = "website/usuario/exclui.html"
model = Usuario
context_object_name = 'usuario'
success_url = reverse_lazy("website:lista_usuarios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
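# --- Illustrative sketch (hypothetical, not part of the original code) ---
# Every CRUD view below repeats the same PermissionRequiredMixin /
# LoginRequiredMixin / login_url / redirect_field_name combination. A small
# shared base class would remove that duplication; a minimal sketch:
class WebsiteAccessMixin(PermissionRequiredMixin, LoginRequiredMixin):
    login_url = 'website:login'
    redirect_field_name = 'redirect_to'
# e.g. `class FooListView(WebsiteAccessMixin, ListView)` with only
# permission_required, template_name and model left to declare per view.
# --- end sketch ---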
#*** FOO ***
class FooListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_foo"
template_name = "website/foo/lista.html"
model = Foo
context_object_name = "foos"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class FooCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_foo"
template_name = "website/foo/cria.html"
model = Foo
form_class = InsereFooForm
success_url = reverse_lazy("website:lista_foos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class FooUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_foo"
template_name = "website/foo/atualiza.html"
model = Foo
fields = '__all__'
context_object_name = 'foo'
success_url = reverse_lazy("website:lista_foos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class FooDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_foo"
template_name = "website/foo/exclui.html"
model = Foo
context_object_name = 'foo'
success_url = reverse_lazy("website:lista_foos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class CursoListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_curso"
template_name = "website/curso/lista.html"
model = Curso
context_object_name = "cursos"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class CursoCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_curso"
template_name = "website/curso/cria.html"
model = Curso
form_class = InsereCursoForm
success_url = reverse_lazy("website:lista_cursos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class CursoUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_curso"
template_name = "website/curso/atualiza.html"
model = Curso
fields = '__all__'
context_object_name = 'curso'
success_url = reverse_lazy("website:lista_cursos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class CursoDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_curso"
template_name = "website/curso/exclui.html"
model = Curso
context_object_name = 'curso'
success_url = reverse_lazy("website:lista_cursos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class TipoLaboratorioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_tipolaboratorio"
template_name = "website/tipo_laboratorio/lista.html"
model = TipoLaboratorio
context_object_name = "tipo_laboratorios"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class TipoLaboratorioCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_tipolaboratorio"
template_name = "website/tipo_laboratorio/cria.html"
model = TipoLaboratorio
form_class = InsereTipoLaboratorioForm
success_url = reverse_lazy("website:lista_tipo_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class TipoLaboratorioUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_tipolaboratorio"
template_name = "website/tipo_laboratorio/atualiza.html"
model = TipoLaboratorio
fields = '__all__'
context_object_name = 'tipo_laboratorio'
success_url = reverse_lazy("website:lista_tipo_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class TipoLaboratorioDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_tipolaboratorio"
template_name = "website/tipo_laboratorio/exclui.html"
model = TipoLaboratorio
context_object_name = 'tipo_laboratorio'
success_url = reverse_lazy("website:lista_tipo_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
#*** FOO ***
class SituacaoListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_situacao"
template_name = "website/situacao/lista.html"
model = Situacao
context_object_name = "situacaos"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class SituacaoCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_situacao"
template_name = "website/situacao/cria.html"
model = Situacao
form_class = InsereSituacaoForm
success_url = reverse_lazy("website:lista_situacaos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class SituacaoUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_situacao"
template_name = "website/situacao/atualiza.html"
model = Situacao
fields = '__all__'
context_object_name = 'situacao'
success_url = reverse_lazy("website:lista_situacaos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class SituacaoDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_situacao"
template_name = "website/situacao/exclui.html"
model = Situacao
context_object_name = 'situacao'
success_url = reverse_lazy("website:lista_situacaos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class LaboratorioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_recurso"
template_name = "website/laboratorio/lista.html"
model = Recurso
#context_object_name = "recursos"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(LaboratorioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="laboratorio")
# And so on for more models
return context
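# --- Illustrative sketch (hypothetical, not part of the original code) ---
# Filtering inside get_context_data() works, but ListView already has a hook
# for restricting the listed objects; a minimal equivalent using get_queryset(),
# which also fills context['recursos'] via context_object_name:
class LaboratorioListViewSketch(LoginRequiredMixin, ListView):
    template_name = "website/laboratorio/lista.html"
    model = Recurso
    context_object_name = "recursos"
    login_url = 'website:login'

    def get_queryset(self):
        # Only laboratory resources belong on this page.
        return Recurso.objetos.filter(tipo_recurso="laboratorio")
# --- end sketch ---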
class LaboratorioCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_recurso"
template_name = "website/laboratorio/cria.html"
model = Recurso
form_class = InsereLaboratorioForm
success_url = reverse_lazy("website:lista_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class LaboratorioUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_recurso"
template_name = "website/laboratorio/atualiza.html"
model = Recurso
fields = '__all__'
context_object_name = 'recurso'
success_url = reverse_lazy("website:lista_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class LaboratorioDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_recurso"
template_name = "website/laboratorio/exclui.html"
model = Recurso
context_object_name = 'recurso'
success_url = reverse_lazy("website:lista_laboratorios")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
#Projetor
class ProjetorListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_recurso"
template_name = "website/projetor/lista.html"
model = Recurso
context_object_name = "projetors"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ProjetorListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['projetors'] = Recurso.objetos.filter(tipo_recurso="projetor")
# And so on for more models
return context
class ProjetorCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_recurso"
template_name = "website/projetor/cria.html"
model = Recurso
form_class = InsereProjetorForm
success_url = reverse_lazy("website:lista_projetors")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def form_valid(self, form):
print("****************************************************")
print("FORM PROJETOR VIEW")
print("****************************************************")
numero = form.cleaned_data['numero']
print("----------------------------------------------------")
print(str(numero))
print("----------------------------------------------------")
descricao = form.cleaned_data['descricao']
print("----------------------------------------------------")
print(str(descricao))
Recurso.objetos.create(numero=numero, descricao = descricao, tipo_recurso = "projetor")
return HttpResponseRedirect(reverse('website:lista_projetors'))
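# --- Illustrative sketch (hypothetical, not part of the original code) ---
# ProjetorCreateView.form_valid() above recreates the Recurso manually and
# never uses the form's save path. Assuming InsereProjetorForm is (or becomes)
# a ModelForm for Recurso, the generic-view flow can do the work; a sketch:
class ProjetorCreateViewSketch(LoginRequiredMixin, CreateView):
    model = Recurso
    form_class = InsereProjetorForm
    success_url = reverse_lazy("website:lista_projetors")

    def form_valid(self, form):
        # The resource type is fixed for this view; the rest comes from the form.
        form.instance.tipo_recurso = "projetor"
        return super().form_valid(form)
# --- end sketch ---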
class ProjetorUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_recurso"
template_name = "website/projetor/atualiza.html"
model = Recurso
#fields = '__all__'
context_object_name = 'recurso'
form_class = InsereProjetorForm
success_url = reverse_lazy("website:lista_projetors")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class ProjetorDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_recurso"
template_name = "website/projetor/exclui.html"
model = Recurso
context_object_name = 'recurso'
success_url = reverse_lazy("website:lista_projetors")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class ReservaLaboratorioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_laboratorio/lista.html"
model = Reserva
#context_object_name = "reservas"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['reservas'] = Reserva.objetos.filter(tipo_recurso="laboratorio", data_hora_saida__gte = datetime.now())
# And so on for more models
return context
class ReservaLaboratorioUsuarioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_laboratorio/lista_usuario.html"
model = Reserva
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioUsuarioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['reservas'] = Reserva.objetos.filter(id_usuario=self.request.user.id, tipo_recurso="laboratorio", situacao =2, data_hora_saida__gte = datetime.now())
# And so on for more models
return context
class ReservaNaoConfirmadaLaboratorioUsuarioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_laboratorio/lista_nao_confirmada_usuario.html"
model = Reserva
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaNaoConfirmadaLaboratorioUsuarioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
time_threshold = datetime.now() + timedelta(hours=30)
context['reservas'] = Reserva.objetos.filter(id_usuario=self.request.user.id,confirmacao=0,data_hora_saida__gte = time_threshold, tipo_recurso="laboratorio", situacao =2)
# And so on for more models
return context
class ReservaLaboratorioCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_reserva"
template_name = "website/reserva_laboratorio/cria2.html"
model = Reserva
form_class = InsereReservaLaboratorioForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioCreateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="laboratorio")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="laboratorio", data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
justificativa = form.cleaned_data['justificativa']
print("----------------------------------------------------")
print(str(justificativa))
disciplina = form.cleaned_data['disciplina']
print("----------------------------------------------------")
print(str(disciplina))
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
        dow6 = datetime(data_uso.year, data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
        dow6 = datetime(data_liberacao.year, data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="laboratorio" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.create(id_usuario=usuario,id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, justificativa=justificativa, tipo_recurso="laboratorio", confirmacao=False, disciplina=disciplina, nome_professor= usuario.nome )
messages.success(self.request, 'A reserva criada com sucesso')
#return redirect('reserva_laboratorio/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
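# --- Illustrative sketch (hypothetical, not part of the original code) ---
# The availability check above only matches reservations whose start and end
# are exactly dt1 and dt2 (lte and gte on the same value), so most overlaps
# slip through. Two intervals overlap whenever one starts before the other
# ends and ends after it starts; a minimal helper, assuming data_hora_saida
# is the start and data_hora_chegada the end of a reservation:
def _reserva_conflitante(id_recurso, dt1, dt2, tipo_recurso="laboratorio"):
    # Returns an existing reservation that collides with [dt1, dt2), or None.
    return Reserva.objetos.filter(
        id_recurso=id_recurso,
        tipo_recurso=tipo_recurso,
        data_hora_saida__lt=dt2,
        data_hora_chegada__gt=dt1,
    ).first()
# --- end sketch ---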
class ReservaLaboratorioUsuariosCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_reserva"
template_name = "website/reserva_laboratorio/cria3.html"
model = Reserva
form_class = InsereReservaLaboratorioUsuariosForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioUsuariosCreateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="laboratorio")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="laboratorio", data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_usuario = form.cleaned_data['id_usuario']
print("----------------------------------------------------")
print(str(id_usuario))
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
justificativa = form.cleaned_data['justificativa']
print("----------------------------------------------------")
print(str(justificativa))
disciplina = form.cleaned_data['disciplina']
print("----------------------------------------------------")
print(str(disciplina))
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
        dow6 = datetime(data_uso.year, data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
        dow6 = datetime(data_liberacao.year, data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="laboratorio" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
#usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(id_usuario.id))
Reserva.objetos.create(id_usuario=id_usuario,id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, justificativa=justificativa, tipo_recurso="laboratorio", confirmacao=False, disciplina=disciplina, nome_professor= id_usuario.nome )
messages.success(self.request, 'Reserava do Laboratorio Realizada com Sucesso')
#return redirect('reserva_laboratorio/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio_usuarios'))
class ReservaLaboratorioConfirmaUpdateView(PermissionRequiredMixin, LoginRequiredMixin,DeleteView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_laboratorio/confirma.html"
model = Reserva
#fields = '__all__'
context_object_name = 'reserva'
success_url = reverse_lazy("website:lista_reserva_laboratorios_nao_confirmada_usuario")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def delete(self, request, *args, **kwargs):
print("****************************************************")
print("FORM RESERVA LABORATORIO CONFIRMA VIEW")
print("****************************************************")
print("----------------------------------------------------")
id = self.kwargs['pk']
print("ID RESERVA = " + str(id))
print("----------------------------------------------------")
print("Confirmando reserva")
Reserva.objetos.filter(id=id).update(confirmacao=True)
messages.success(self.request, 'A reserva confirmada com sucesso!')
return HttpResponseRedirect(reverse('website:lista_reserva_laboratorios_nao_confirmada_usuario'))
class ReservaLaboratorioUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_laboratorio/atualiza.html"
model = Reserva
context_object_name = 'reserva'
form_class = InsereReservaLaboratorioForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioUpdateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="laboratorio")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="laboratorio",data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
justificativa = form.cleaned_data['justificativa']
print("----------------------------------------------------")
print(str(justificativa))
disciplina = form.cleaned_data['disciplina']
print("----------------------------------------------------")
print(str(disciplina))
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
        dow6 = datetime(data_uso.year, data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
        dow6 = datetime(data_liberacao.year, data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="laboratorio" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.filter(id_usuario=usuario).update(id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, justificativa=justificativa, tipo_recurso="laboratorio", confirmacao=False, disciplina=disciplina, nome_professor= usuario.nome )
messages.success(self.request, 'A reserva autualizada com sucesso!')
#return redirect('reserva_laboratorio/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:atualiza_reserva_laboratorio'))
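# --- Illustrative sketch (hypothetical, not part of the original code) ---
# The update in form_valid() above filters by id_usuario, so it rewrites every
# reservation that user owns. In an UpdateView the record being edited is
# available as self.object; a minimal helper that touches only that row:
def _atualiza_reserva(reserva_pk, **campos):
    # Hypothetical helper: update a single reservation by primary key,
    # e.g. _atualiza_reserva(self.object.pk, id_recurso=id_recurso, situacao=situacao,
    #                        data_hora_saida=dt1, data_hora_chegada=dt2)
    return Reserva.objetos.filter(pk=reserva_pk).update(**campos)
# --- end sketch ---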
class ReservaLaboratorioDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_reserva"
template_name = "website/reserva_laboratorio/exclui.html"
model = Reserva
context_object_name = 'reserva'
success_url = reverse_lazy("website:lista_foos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
class ReservaLaboratorioUsuariosUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_laboratorio/atualiza2.html"
model = Reserva
context_object_name = 'reserva'
form_class = InsereReservaLaboratorioUsuariosForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaLaboratorioUsuariosUpdateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="laboratorio")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="laboratorio", data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
justificativa = form.cleaned_data['justificativa']
print("----------------------------------------------------")
print(str(justificativa))
disciplina = form.cleaned_data['disciplina']
print("----------------------------------------------------")
print(str(disciplina))
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
        dow6 = datetime(data_uso.year, data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
        dow6 = datetime(data_liberacao.year, data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)  # day+1 would raise ValueError on the last day of a month
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_laboratorio'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="laboratorio" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.filter(id_usuario=usuario).update(id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, justificativa=justificativa, tipo_recurso="laboratorio", confirmacao=False, disciplina=disciplina, nome_professor= usuario.nome )
messages.success(self.request, 'A reserva autualizada com sucesso!')
#return redirect('reserva_laboratorio/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:atualiza_reserva_laboratorio_usuarios'))
class ReservaProjetorListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_projetor/lista.html"
model = Reserva
#context_object_name = "reservas"
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['reservas'] = Reserva.objetos.filter(tipo_recurso="projetor",data_hora_saida__gte = datetime.now())
# And so on for more models
return context
class ReservaProjetorUsuarioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_projetor/lista_usuario.html"
model = Reserva
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorUsuarioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['reservas'] = Reserva.objetos.filter(id_usuario=self.request.user.id, tipo_recurso="projetor", data_hora_saida__gte = datetime.now(), situacao =2)
# And so on for more models
return context
class ReservaNaoConfirmadaProjetorUsuarioListView(PermissionRequiredMixin,LoginRequiredMixin, ListView):
permission_required = "authweb.view_reserva"
template_name = "website/reserva_projetor/lista_nao_confirmada_usuario.html"
model = Reserva
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaNaoConfirmadaProjetorUsuarioListView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
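# Show only this user's unconfirmed projector reservations that start at least 30 hours from now.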
time_threshold = datetime.now() + timedelta(hours=30)
context['reservas'] = Reserva.objetos.filter(id_usuario=self.request.user.id,confirmacao=0,data_hora_saida__gte = time_threshold, tipo_recurso="projetor", situacao =2)
# And so on for more models
return context
class ReservaProjetorCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_reserva"
template_name = "website/reserva_projetor/cria2.html"
model = Reserva
form_class = InsereReservaProjetorForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorCreateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="projetor")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="projetor")
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
curso = form.cleaned_data['curso']
print("----------------------------------------------------")
print(str(curso))
primeira_aula = form.cleaned_data['primeira_aula']
print("----------------------------------------------------")
print(str(primeira_aula))
segunda_aula = form.cleaned_data['segunda_aula']
print("----------------------------------------------------")
print(str(segunda_aula))
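# dow1..dow6 delimit the blocked booking windows on the usage date:
# 12:01-12:59, 17:01-17:59 and 22:01 until 06:59 of the following day.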
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
dow6 = datetime(data_uso.year,data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
dow6 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
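# Same exact-slot conflict check as in the laboratory views (see note there); only a
# reservation with identical start and end times is detected.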
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="projetor" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.create(id_usuario=usuario,id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, curso = curso, tipo_recurso="projetor", confirmacao=False, primeira_aula = primeira_aula, segunda_aula = segunda_aula)
messages.success(self.request, 'Reserva do Projetor Realizada com Sucesso')
#return redirect('reserva_projetor/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
class ReservaProjetorUsuariosCreateView(PermissionRequiredMixin, LoginRequiredMixin, CreateView):
permission_required = "authweb.add_reserva"
template_name = "website/reserva_projetor/cria3.html"
model = Reserva
form_class = InsereReservaProjetorUsuariosForm
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorUsuariosCreateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="projetor")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="projetor",data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_usuario = form.cleaned_data['id_usuario']
print("----------------------------------------------------")
print(str(id_usuario))
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
curso = form.cleaned_data['curso']
print("----------------------------------------------------")
print(str(curso))
primeira_aula = form.cleaned_data['primeira_aula']
print("----------------------------------------------------")
print(str(primeira_aula))
segunda_aula = form.cleaned_data['segunda_aula']
print("----------------------------------------------------")
print(str(segunda_aula))
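# dow1..dow6: same blocked booking windows as in ReservaProjetorCreateView
# (12:01-12:59, 17:01-17:59, 22:01-06:59 of the next day).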
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
dow6 = datetime(data_uso.year,data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
dow6 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="projetor" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
#usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(id_usuario))
Reserva.objetos.create(id_usuario=id_usuario,id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, curso = curso, tipo_recurso="projetor", confirmacao=False, primeira_aula = primeira_aula, segunda_aula = segunda_aula)
messages.success(self.request, 'Reserva do Projetor Realizada com Sucesso')
#return redirect('reserva_projetor/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor_usuarios'))
class ReservaProjetorConfirmaUpdateView(PermissionRequiredMixin, LoginRequiredMixin,DeleteView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_projetor/confirma.html"
model = Reserva
#fields = '__all__'
context_object_name = 'reserva'
success_url = reverse_lazy("website:lista_reserva_projetors_nao_confirmada_usuario")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
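# Although declared as a DeleteView, delete() is overridden below so that confirming
# a reservation only sets confirmacao=True instead of removing the record.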
def delete(self, request, *args, **kwargs):
print("****************************************************")
print("FORM RESERVA PROJETOR CONFIRMA VIEW")
print("****************************************************")
print("----------------------------------------------------")
id = self.kwargs['pk']
print("ID RESERVA = " + str(id))
print("----------------------------------------------------")
print("Confirmando reserva")
Reserva.objetos.filter(id=id).update(confirmacao=True)
messages.success(self.request, 'Reserva Confirmada com Sucesso!')
return HttpResponseRedirect(reverse('website:lista_reserva_projetors_nao_confirmada_usuario'))
class ReservaProjetorUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_projetor/atualiza.html"
model = Reserva
form_class = InsereReservaProjetorForm
context_object_name = 'reserva'
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorUpdateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="projetor")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="projetor", data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
curso = form.cleaned_data['curso']
print("----------------------------------------------------")
print(str(curso))
primeira_aula = form.cleaned_data['primeira_aula']
print("----------------------------------------------------")
print(str(primeira_aula))
segunda_aula = form.cleaned_data['segunda_aula']
print("----------------------------------------------------")
print(str(segunda_aula))
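# Blocked booking windows for the usage date, as in the create views.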
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
dow6 = datetime(data_uso.year,data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
dow6 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
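# Exact-slot conflict check, as in the create views.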
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="projetor" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.filter(id_usuario=usuario).update(id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, curso = curso, tipo_recurso="projetor", confirmacao=False, primeira_aula = primeira_aula, segunda_aula = segunda_aula)
messages.success(self.request, 'Reserva do Projetor Atualizada com Sucesso!')
#return redirect('reserva_projetor/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:atualiza_reserva_projetor'))
class ReservaProjetorUsuariosUpdateView(PermissionRequiredMixin, LoginRequiredMixin,UpdateView):
permission_required = "authweb.change_reserva"
template_name = "website/reserva_projetor/atualiza2.html"
model = Reserva
form_class = InsereReservaProjetorUsuariosForm
context_object_name = 'reserva'
login_url = 'website:login'
redirect_field_name = 'redirect_to'
def get_context_data(self, **kwargs):
context = super(ReservaProjetorUsuariosUpdateView, self).get_context_data(**kwargs)
#context['recursos'] = Recurso.objects.all()
context['recursos'] = Recurso.objetos.filter(tipo_recurso="projetor")
context['reservas'] = Reserva.objetos.filter(tipo_recurso="projetor",data_hora_saida__gte = datetime.now())
# And so on for more models
return context
def form_valid(self, form):
print("****************************************************")
print("FORM RESERVA LABORATORIO VIEW")
print("****************************************************")
id_recurso = form.cleaned_data['id_recurso']
print("----------------------------------------------------")
print(str(id_recurso))
print("----------------------------------------------------")
data_uso = form.cleaned_data['data_uso']
print("----------------------------------------------------")
print(str(data_uso))
time_uso = form.cleaned_data['time_uso']
print("----------------------------------------------------")
print(str(time_uso))
data_liberacao = form.cleaned_data['data_liberacao']
print("----------------------------------------------------")
print(str(data_liberacao))
time_liberacao = form.cleaned_data['time_liberacao']
print("----------------------------------------------------")
print(str(time_liberacao))
curso = form.cleaned_data['curso']
print("----------------------------------------------------")
print(str(curso))
primeira_aula = form.cleaned_data['primeira_aula']
print("----------------------------------------------------")
print(str(primeira_aula))
segunda_aula = form.cleaned_data['segunda_aula']
print("----------------------------------------------------")
print(str(segunda_aula))
dow1 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 1);
dow2 = datetime(data_uso.year,data_uso.month, data_uso.day, 12, 59);
dow3 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 1);
dow4 = datetime(data_uso.year,data_uso.month, data_uso.day, 17, 59);
dow5 = datetime(data_uso.year,data_uso.month, data_uso.day, 22, 1);
dow6 = datetime(data_uso.year,data_uso.month, data_uso.day, 6, 59) + timedelta(days=1)
dt1 = datetime(data_uso.year,data_uso.month, data_uso.day, time_uso.hour, time_uso.minute)
print("----------------------------------------------------")
print("DATA INICIAL = " + str(dt1))
dt2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, time_liberacao.hour, time_liberacao.minute )
print("----------------------------------------------------")
print("DATA FINAL = " + str(dt2))
if dt1 < datetime.now() or dt2 < datetime.now():
print("----------------------------------------------------")
print("data menor que o tempo atual")
messages.error(self.request, "data menor que o tempo atual")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if dt1 >= dt2 :
print("----------------------------------------------------")
print("data liberaraco e menor ou igual que a data de uso")
messages.error(self.request, "data liberaraco e menor ou igual que a data de uso")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
if ( (dow1 <= dt1 and dt1 <= dow2 ) or (dow3 <= dt1 and dt1 <= dow4 ) or (dow5 <= dt1 and dt1 <= dow6 )):
print("----------------------------------------------------")
print("Fora de funcionamento para data de inicio")
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
dow1 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 1);
dow2 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 12, 59);
print("----------------------------------------------------")
print("[" + str(dow1) + " | " + str(dt2) + " | " + str(dow2) + "]" )
dow3 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 1);
dow4 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 17, 59);
print("----------------------------------------------------")
print("[" + str(dow3) + " | " + str(dt2) + " | " + str(dow4) + "]" )
dow5 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 22, 1);
dow6 = datetime(data_liberacao.year,data_liberacao.month, data_liberacao.day, 6, 59) + timedelta(days=1)
print("----------------------------------------------------")
print("[" + str(dow5) + " | " + str(dt2) + " | " + str(dow6) + "]" )
if ( (dow1 <= dt2 and dt2 <= dow2 ) or
(dow3 <= dt2 and dt2 <= dow4 ) or
(dow5 <= dt2 and dt2 <= dow6 )
):
print("----------------------------------------------------")
print("Fora de funcionamento para data final")
messages.error(self.request, 'Fora de funcionamento para data final')
return HttpResponseRedirect(reverse('website:cadastra_reserva_projetor'))
#reserva = Reserva.objetos.filter(Q(id_recurso=id_recurso) & (Q(data_hora_saida__lte = dt1) | Q(data_hora_saida__lte = dt1))).first()
reserva = Reserva.objetos.filter(id_recurso=id_recurso, data_hora_saida__lte = dt1 , data_hora_saida__gte = dt1, data_hora_chegada__lte = dt2 , data_hora_chegada__gte = dt2, tipo_recurso="projetor" ).first()
if (reserva != None):
print("A reserva nao pode ser realizada, ja existe uma reserava para esse recurso")
print("----------------------------------------------------")
print("RESERVA_ID =" + str(reserva.id))
messages.error(self.request, 'A reserva nao pode ser realizada, ja existe uma reserava para esse recurso')
else:
print("----------------------------------------------------")
print("Cadastrando Reserva...")
situacao = Situacao.objetos.filter(nome="Reservado").first()
usuario = Usuario.objetos.filter(matricula=self.request.user.username).first()
if situacao != None:
print("----------------------------------------------------")
print("SITUACAO =" + str(situacao.nome))
print("----------------------------------------------------")
print("USURIO =" + str(usuario.id))
Reserva.objetos.filter(id_usuario=usuario).update(id_recurso=id_recurso,situacao=situacao, data_hora_saida=dt1, data_hora_chegada=dt2, curso = curso, tipo_recurso="projetor", confirmacao=False, primeira_aula = primeira_aula, segunda_aula = segunda_aula)
messages.success(self.request, 'Reserva do Projetor Atualizada com Sucesso!')
#return redirect('reserva_projetor/cadastrar')
#return render(self.request, self.template_name, { 'form': form })
return HttpResponseRedirect(reverse('website:atualiza_reserva_projetor_usuarios'))
class ReservaProjetorDeleteView(PermissionRequiredMixin,LoginRequiredMixin, DeleteView):
permission_required = "authweb.delete_reserva"
template_name = "website/reserva_projetor/exclui.html"
model = Reserva
context_object_name = 'reserva'
success_url = reverse_lazy("website:lista_foos")
login_url = 'website:login'
redirect_field_name = 'redirect_to'
| 51.307489 | 280 | 0.560861 | 9,230 | 97,279 | 5.707801 | 0.035211 | 0.038912 | 0.021715 | 0.020196 | 0.899133 | 0.884042 | 0.86582 | 0.830609 | 0.827193 | 0.819353 | 0 | 0.011274 | 0.214959 | 97,279 | 1,896 | 281 | 51.307489 | 0.678583 | 0.045498 | 0 | 0.804485 | 0 | 0 | 0.279356 | 0.178435 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024527 | false | 0.007008 | 0.012614 | 0 | 0.362299 | 0.311843 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
60f5fd71163d041e5a1e20537ca88014eac6765d | 111 | py | Python | app/models/__init__.py | victorlomi/News-Catchup | 214b4e92b0cf90c7e4906c3b2316578918645dac | [
"Unlicense"
] | null | null | null | app/models/__init__.py | victorlomi/News-Catchup | 214b4e92b0cf90c7e4906c3b2316578918645dac | [
"Unlicense"
] | null | null | null | app/models/__init__.py | victorlomi/News-Catchup | 214b4e92b0cf90c7e4906c3b2316578918645dac | [
"Unlicense"
] | null | null | null | # import modules to expose them package members.
from app.models import source
from app.models import article
| 27.75 | 48 | 0.810811 | 17 | 111 | 5.294118 | 0.705882 | 0.155556 | 0.288889 | 0.422222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153153 | 111 | 3 | 49 | 37 | 0.957447 | 0.414414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |